Tight Bounds for the VC-Dimension of Piecewise Polynomial Networks

Akito Sakurai
School of Knowledge Science, Japan Advanced Institute of Science and Technology, Nomi-gun, Ishikawa 923-1211, Japan.
CREST, Japan Science and Technology Corporation.
ASakurai@jaist.ac.jp

Abstract

O(ws(s log d + log(dqh/s))) and O(ws((h/s) log q + log(dqh/s))) are upper bounds for the VC-dimension of a set of neural networks of units with piecewise polynomial activation functions, where s is the depth of the network, h is the number of hidden units, w is the number of adjustable parameters, q is the maximum number of polynomial segments of the activation function, and d is the maximum degree of the polynomials; also Ω(ws log(dqh/s)) is a lower bound for the VC-dimension of such a network set, which is tight for the cases s = Θ(h) and s constant. For the special case q = 1, the VC-dimension is Θ(ws log d).

1 Introduction

In spite of its importance, we had been unable to obtain VC-dimension values for practical types of networks until fairly tight upper and lower bounds were obtained ([6], [8], [9], and [10]) for linear threshold element networks, in which all elements perform a threshold function on a weighted sum of inputs. Roughly, the lower bound for these networks is (1/2)w log h and the upper bound is w log h, where h is the number of hidden elements and w is the number of connection weights (for the one-hidden-layer case w ≈ nh, where n is the input dimension of the network). In many applications, though, sigmoidal functions, specifically the typical sigmoid function 1/(1 + exp(−x)), or piecewise linear functions for economy of calculation, are used instead of the threshold function. This is mainly because the differentiability of the functions is needed to perform backpropagation or other learning algorithms. Unfortunately, explicit bounds obtained so far for the VC-dimension of sigmoidal networks exhibit large gaps (O(w²h²) ([3]), Ω(w log h) for bounded depth
and Ω(wh) for unbounded depth) and are hard to improve. For the piecewise linear case, Maass obtained a result that the VC-dimension is O(w² log q), where q is the number of linear pieces of the function ([5]). Recently Koiran and Sontag ([4]) proved a lower bound Ω(w²) for the piecewise polynomial case, and claimed that this settles an open problem posed by Maass, whether there is a matching w² lower bound for this type of network. But we still have something to do, since they showed it only for the case w = Θ(h) with the number of hidden layers unbounded; also the O(w²) bound has room for improvement. In this paper we improve the bounds obtained by Maass and by Koiran and Sontag, and consequently show the role of polynomials, which cannot be played by linear functions, and the role of the constant functions that can appear in the piecewise polynomial case, which cannot be played by polynomial functions. After submission of the draft, we found that Bartlett, Maiorov, and Meir had obtained similar results prior to ours (also in these proceedings). Our advantage is that we clarify the role played by the degree and the number of segments in both bounds.

2 Terminology and Notation

log stands for the logarithm base 2 throughout the paper. The depth of a network is the length of the longest path from its external inputs to its external output, where the length is the number of units on the path. Likewise we can assign a depth to each unit in a network as the length of the longest path from the external inputs to the output of the unit. A hidden layer is a set of units at the same depth, other than the depth of the network; therefore a depth-L network has L − 1 hidden layers. In many cases w will stand for the vector composed of all the connection weights in the network (including the threshold values for the threshold units), and w is the length of w.
The number of units in the network, excluding input units, will be denoted by h; in other words, it is the number of hidden units plus one, or sometimes just the number of hidden units. A function whose range is {0, 1} (the set of 0 and 1) is called a Boolean-valued function.

3 Upper Bounds

To obtain upper bounds for the VC-dimension we use a region counting argument developed by Goldberg and Jerrum [2]. The VC-dimension of the network, that is, the VC-dimension of the function set {f_G(w; ·) | w ∈ R^w}, is upper bounded by

    max { N | 2^N ≤ max_{x_1,...,x_N} N_cc( R^w − ∪_{i=1}^N N(f_G(·; x_i)) ) }    (3.1)

where N_cc(·) is the number of connected components and N(f) is the set {w | f(w) = 0}. The following two results are convenient; refer to [11] and [7] for the first theorem. The lemma that follows it is easily proven.

Theorem 3.1. Let f_G(w; x_i) (1 ≤ i ≤ N) be real polynomials in w, each of degree d or less. The number of connected components of the set ∩_{i=1}^N {w | f_G(w; x_i) = 0} is bounded from above by 2(2d)^w, where w is the length of w.

Lemma 3.2. If m ≥ w(log C + log log C + 1), then 2^m > (mC/w)^w for C ≥ 4.

First let us consider the polynomial activation function case.

Theorem 3.3. Suppose that the activation functions are polynomials of degree at most d. O(ws log d) is an upper bound on the VC-dimension for the networks of depth s. When s = Θ(h) the bound is O(wh log d). More precisely, ws(log d + log log d + 2) is an upper bound.

Note that if we allow a polynomial as the input function, d_1 d_2 will replace d above, where d_1 is the maximum degree of the input functions and d_2 is that of the activation functions. The theorem is clear from the fact that the network function (f_G in (3.1)) is a polynomial of degree at most d^s + d^{s−1} + ... + d, together with Theorem 3.1 and Lemma 3.2. For the piecewise polynomial case, we have two types of bounds. The first one is suitable for bounded depth cases (i.e.
the depth s = o(h)) and the second one for the unbounded depth case (i.e. s = Θ(h)).

Theorem 3.4. Suppose that the activation functions are piecewise polynomials with at most q segments of polynomials of degree at most d. O(ws(s log d + log(dqh/s))) and O(ws((h/s) log q + log(dqh/s))) are upper bounds for the VC-dimension, where s is the depth of the network. More precisely, ws((s/2) log d + log(qh)) and ws((h/s) log q + log d) are asymptotic upper bounds. Note that if we allow a polynomial as the input function then d_1 d_2 will replace d above, where d_1 is the maximum degree of the input functions and d_2 is that of the activation functions.

Proof. We have two different ways to calculate the bounds. First,

    ∏_{i=1}^s ( 8eNq h_i s (d^{i−1} + ... + d + 1) d / (w_1 + ... + w_i) )^{w_1+...+w_i} ≤ ( 8eNq d^{(s+1)/2} (h/s) )^{ws}

where h_i is the number of hidden units in the i-th layer and w_i is the number of weights feeding the i-th layer. From this we get an asymptotic upper bound ws((s/2) log d + log(qh)) for the VC-dimension. Secondly, a similar calculation gives an asymptotic upper bound ws((h/s) log q + log d) for the VC-dimension. Combining these two bounds we get the result. Note that the s in log(dqh/s) is introduced to eliminate an unduly large term emerging when s = Θ(h). □

4 Lower Bounds for Polynomial Networks

Theorem 4.1. Let us consider the case where the activation functions are polynomials of degree at most d. Ω(ws log d) is a lower bound on the VC-dimension for the networks of depth s. When s = Θ(h) the bound is Ω(wh log d). More precisely, (1/16)w(s − 6) log d is an asymptotic lower bound, where d is the degree of the activation functions and is a power of two, and h is restricted to O(n²) for input dimension n.

The proof consists of several lemmas. The network we are constructing will have two parts: an encoder and a decoder. We deliberately fix the N input points.
The decoder part has a fixed underlying architecture and also fixed connection weights, whereas the encoder part has variable weights, so that for any given binary outputs for the input points the decoder can output the specified value from the codes in which the output values are encoded by the encoder. First we consider the decoder, which has two real inputs and one real output. One of the two inputs, y, holds a code of a binary sequence b_1, b_2, ..., b_m, and the other, x, holds a code of a binary sequence c_1, c_2, ..., c_m. The elements of the latter sequence are all 0's except for c_j = 1, where c_j = 1 orders the decoder to output b_j from it and consequently from the network. We show two types of networks: one has activation functions of degree at most two and VC-dimension w(s − 1); the other has activation functions of degree d, a power of two, and VC-dimension w(s − 5) log d. We use for convenience two functions: H_θ(x) = 1 if x ≥ θ and 0 otherwise, and H_{φ,ψ}(x) = 1 if x ≥ ψ, 0 if x ≤ φ, and undefined otherwise. Throughout this section we will use a simple logistic function p(x) = (16/3)x(1 − x), which has the following property.

Lemma 4.2. For any binary sequence b_1, b_2, ..., b_m, there exists an interval [x_1, x_2] such that b_i = H_{1/4,3/4}(p^i(x)) and 0 ≤ p^i(x) ≤ 1 for any x ∈ [x_1, x_2].

The next lemmas are easily proven.

Lemma 4.3. For any binary sequence c_1, c_2, ..., c_m which is all 0's except for c_j = 1, there exists x_0 such that c_i = H_{1/4,3/4}(p^i(x_0)). Specifically we will take x_0 = p_L^{−(j−1)}(1/4), where p_L^{−1}(x) is the inverse of p(x) on [0, 1/2]. Then p^{j−1}(x_0) = 1/4, p^j(x_0) = 1, p^i(x_0) = 0 for all i > j, and p^{j−i}(x_0) ≤ (1/4)^i for all positive i ≤ j.

Proof. Clear from the fact that p(x) ≥ 4x on [0, 1/4]. □

Lemma 4.4. For any binary sequence b_1, b_2, ..., b_m, take y such that b_i = H_{1/4,3/4}(p^i(y)) and 0 ≤ p^i(y) ≤ 1 for all i, and x_0 = p_L^{−(j−1)}(1/4); then H_{7/12,3/4}(Σ_{i=1}^m p^i(x_0)p^i(y)) = b_j, i.e.
H_0(Σ_{i=1}^m p^i(x_0)p^i(y) − 2/3) = b_j.

Proof. If b_j = 0, then Σ_{i=1}^m p^i(x_0)p^i(y) = Σ_{i=1}^j p^i(x_0)p^i(y) ≤ p^j(y) + Σ_{i=1}^∞ (1/4)^i < p^j(y) + 1/3 ≤ 7/12. If b_j = 1, then Σ_{i=1}^m p^i(x_0)p^i(y) ≥ p^j(x_0)p^j(y) ≥ 3/4. □

By the above lemmas, the network in Figure 1 (left) has the following function: suppose that a binary sequence b_1, ..., b_m and an integer j are given. Then we can present y, which depends only on b_1, ..., b_m, and x_0, which depends only on j, such that b_j is output from the decoder. Note that we use (x + y)² − (x − y)² = 4xy to realize a multiplication unit. For the case of degree higher than two we have to construct a somewhat more complicated network by using another simple logistic function μ(x) = (36/5)x(1 − x). We need the next lemma.

Lemma 4.5. Take x_0 = μ_L^{−(j−1)}(1/6), where μ_L^{−1}(x) is the inverse of μ(x) on [0, 1/2]. Then μ^{j−1}(x_0) = 1/6, μ^j(x_0) = 1, μ^i(x_0) = 0 for all i > j, and μ^{j−i}(x_0) ≤ (1/6)^i for all i > 0 and ≤ j.

Proof. Clear from the fact that μ(x) ≥ 6x on [0, 1/6]. □

[Figure 1: Network architecture consisting of polynomials of order two (left) and those of order a power of two (right).]

Lemma 4.6. For any binary sequence b_1, b_2, ..., b_k, b_{k+1}, b_{k+2}, ..., b_{2k}, ..., b_{(m−1)k+1}, ..., b_{mk}, take y such that b_i = H_{1/4,3/4}(p^i(y)) and 0 ≤ p^i(y) ≤ 1 for all i. Moreover, for any 1 ≤ j ≤ m and any 1 ≤ l ≤ k, take x_1 = μ_L^{−(j−1)}(1/6) and x_0 = μ_L^{−(l−1)}(1/6k). Then for z = Σ_{i=1}^m p^{ik}(y)μ^{ik}(x_1),

    H_0( Σ_{i=0}^{k−1} p^i(z)μ^i(x_0) − 1/2 ) = b_{kj+l}

holds.

Lemma 4.7. If 0 < p^i(x) < 1 for any 0 < i ≤ l, take an ε such that (16/3)^l ε < 1/4. Then p^l(x) − (16/3)^l ε < p^l(x + ε) < p^l(x) + (16/3)^l ε.

Proof. There are four cases depending on whether p^{l−1}(x + ε) is on the uphill or the downhill of p and whether x is on the uphill or the downhill of p^{l−1}. The proofs are done by induction. First suppose that the two are on the uphill.
Then p^l(x + ε) = p(p^{l−1}(x + ε)) < p(p^{l−1}(x) + (16/3)^{l−1}ε) < p^l(x) + (16/3)^l ε. Secondly, suppose that p^{l−1}(x + ε) is on the uphill but x is on the downhill. Then p^l(x + ε) = p(p^{l−1}(x + ε)) > p(p^{l−1}(x) − (16/3)^{l−1}ε) > p^l(x) − (16/3)^l ε. The other two cases are similar. □

Proof of Lemma 4.6. We will show that the difference between p^{jk+l}(y) and Σ_{i=0}^{k−1} p^i(z)μ^i(x_0) is sufficiently small. Clearly z = Σ_{i=1}^m μ^{ik}(x_1)p^{ik}(y) = Σ_{i=j}^m μ^{ik}(x_1)p^{ik}(y) ≤ p^{jk}(y) + Σ_{i=1}^∞ (1/6k)^i < p^{jk}(y) + 1/(6k − 1), and p^{jk}(y) < z. If z is on the uphill of p^l then, by using the above lemma, we get Σ_{i=0}^{k−1} p^i(z)μ^i(x_0) = Σ_{i=0}^{l−1} p^i(z)μ^i(x_0) < p^l(z) + 1/(6k − 1) < p^{jk+l}(y) + (1 + (16/3)^l)(1/(6k − 1)) < p^{jk+l}(y) + 1/4 (note that l ≤ k − 1 and k ≥ 2). If z is on the downhill of p^l then, by using the above lemma, we get Σ_{i=0}^{k−1} p^i(z)μ^i(x_0) = Σ_{i=0}^{l−1} p^i(z)μ^i(x_0) > p^l(z) > p^l(p^{jk}(y)) − (16/3)^l(1/(6k − 1)) > p^{jk+l}(y) − 1/4. □

Next we show the encoding scheme we adopted. We show only the case w = Θ(h²), since the case w = Θ(h), or more generally w = O(h²), is easily obtained from it.

Theorem 4.8. There is a network of 2n inputs and 2h hidden units with h² weights w, and there are h² sets of input values x_1, ..., x_{h²}, such that for any set of values y_1, ..., y_{h²} we can choose w to satisfy y_i = f_G(w; x_i).

Proof. We extensively utilize the fact that the monomials obtained by choosing at most m variables from n variables with repetition allowed (say x_1²x_2x_6) are all linearly independent ([1]). Note that the number of monomials thus formed is C(n+m, m). Suppose for simplicity that we have 2n inputs and 2h main hidden units (we have other hidden units too), and h = C(n+m, m). By using multiplication units (in fact each is a composite of two squaring units, and the outputs are supposed to be summed up as in Figure 1), we can form h = C(n+m, m) linearly independent monomials composed of the variables x_1, ..., x_n by using at most (m − 1)h multiplication units (or h nominal units when m = 1).
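The multiplication units mentioned above are built from two squaring units via the identity 4xy = (x + y)² − (x − y)². A quick numerical check (our own snippet, not from the paper):

```python
def mult_from_squares(x, y):
    # Two squaring "units" combined linearly:
    # 4xy = (x + y)^2 - (x - y)^2, so xy = ((x + y)^2 - (x - y)^2) / 4.
    return ((x + y) ** 2 - (x - y) ** 2) / 4.0

# The identity holds exactly for all reals.
assert mult_from_squares(3.0, 7.0) == 21.0
assert mult_from_squares(-2.5, 4.0) == -10.0
```

This is why a degree-2 activation suffices to wire products of intermediate values into the network.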
In the same way, we can form h linearly independent monomials composed of the variables x_{n+1}, ..., x_{2n}. Let us denote the monomials by u_1, ..., u_h and v_1, ..., v_h. We form a subnetwork to calculate Σ_{j=1}^h (Σ_{i=1}^h w_{i,j} u_i) v_j by using h multiplication units. Clearly the calculated result y is the weighted sum of the monomials described above, where the weights are w_{i,j} for 1 ≤ i, j ≤ h. Since y = f_G(w; x) is a linear combination of linearly independent terms, if we choose appropriately h² sets of values x_1, ..., x_{h²} for x = (x_1, ..., x_{2n}), then for any assignment of h² values y_1, ..., y_{h²} to y we have a set of weights w such that y_i = f_G(w; x_i). □

Proof of Theorem 4.1. The whole network consists of the decoder and the encoder. The input points are the Cartesian product of the above x_1, ..., x_{h²} and {x_0 defined in Lemma 4.4 for b_j = 1, 1 ≤ j ≤ s'}, where s' is the number of bits to be encoded. This means that we have h²s' points that can be shattered. Let the number of hidden layers of the decoder be s. The number of units used for the decoder is 4(s − 1) + 1 (for the degree-2 case, which can decode at most s bits) or 4(s − 3) + 4(k − 1) + 1 (for the degree-2^k case, which can decode at most (s − 2)k bits). The number of units used for the encoder is less than 4h; we do, though, have constraints on s (which dominates the depth of the network) and h (which dominates the number of units in the network): that h ≤ C(n+m, m) and m = O(s), or roughly log h = O(s), be satisfied. Let us choose m = 2 (m = log s is a better choice). As a result, by using 4h + 4(s − 1) + 1 (or 4h + 4(s − 3) + 4(k − 1) + 1) units in s + 2 layers, we can shatter h²s (or h²(s − 2) log d) points; or, asymptotically, by using h units in s layers we can shatter (1/16)w(s − 3) (or (1/16)w(s − 5) log d) points. □

5 Piecewise Polynomial Case

Theorem 5.1. Let us consider a set of networks of units with linear input functions and piecewise polynomial (with q polynomial segments) activation functions.
Ω(ws log(dqh/s)) is a lower bound on the VC-dimension, where s is the depth of the network and d is the maximum degree of the activation functions. More precisely, (1/16)w(s − 6)(log d + log(h/s) + log q) is an asymptotic lower bound.

For scarcity of space, we give just an outline of the proof. Our proof is based on that for the polynomial networks. We will use h units with activation functions of q ≥ 2 polynomial segments of degree at most d in place of each p^k unit in the decoder, which gives the ability of decoding log(dqh) bits in one layer and s log(dqh) bits in total by Θ(sh) units in total. If h designates the total number of units, the number of decodable bits is represented as log(dqh/s). In the following, for simplicity, we suppose that dqh is a power of 2.

Let p^k(x) be the k-fold composition of p(x) as usual, i.e. p^k(x) = p(p^{k−1}(x)) and p^1(x) = p(x). Let p^{log d, l}(x) = p^{log d}(λ^l(x)), where λ(x) = 4x if x ≤ 1/2 and 4 − 4x otherwise, which by the way has 2^l polynomial segments. Now the p^k unit in the polynomial case is replaced by the array p^{log d, log q, log h}(x) of h units that is defined as follows: (i) p^{log d, log q, 1}(x) is an array of two units; one is p^{log d, log q}(λ^+(x)), where λ^+(x) = 4x if x ≤ 1/2 and 0 otherwise, and the other is p^{log d, log q}(λ^−(x)), where λ^−(x) = 0 if x ≤ 1/2 and 4 − 4x otherwise. (ii) p^{log d, log q, m}(x) is the array of 2^m units, each with one of the functions p^{log d, log q}(λ^±(...(λ^±(x))...)), where λ^±(...(λ^±(x))...) is the m-fold composition of λ^+(x) or λ^−(x). Note that λ^±(...(λ^±(x))...) has at most three linear segments (one is linear and the others are constant 0) and the sum over the 2^m possible combinations of f(λ^±(...(λ^±(x))...)) is equal to f(λ^m(x)) for any function f such that f(0) = 0. Then lemmas similar to the ones in the polynomial case follow.
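The decoder's selector code from Lemma 4.3 can be checked numerically: with p(x) = (16/3)x(1 − x) and x_0 = p_L^{−(j−1)}(1/4), thresholding the iterates p^i(x_0) recovers the sequence that is 0 everywhere except at position j. A small sketch (our own code; function names are illustrative):

```python
import math

def p(x):
    # The paper's logistic map p(x) = (16/3) x (1 - x); note p(1/4) = 1 and p(1) = 0.
    return (16.0 / 3.0) * x * (1.0 - x)

def p_inv_left(y):
    # Inverse of p on [0, 1/2]: smaller root of (16/3) x (1 - x) = y.
    return (1.0 - math.sqrt(1.0 - 3.0 * y / 4.0)) / 2.0

def selector_code(j):
    # x0 = p_L^{-(j-1)}(1/4), so p^{j-1}(x0) = 1/4, p^j(x0) = 1, p^i(x0) = 0 for i > j.
    x0 = 0.25
    for _ in range(j - 1):
        x0 = p_inv_left(x0)
    return x0

j, m = 3, 6
x = selector_code(j)
iterates = []
for _ in range(m):
    x = p(x)
    iterates.append(x)
# Threshold at 3/4 (the upper edge of the H_{1/4,3/4} gap): only iterate j reads as 1.
bits = [1 if v >= 0.75 else 0 for v in iterates]
assert bits == [0, 0, 1, 0, 0, 0]
```

The same pattern, with μ(x) = (36/5)x(1 − x) and the value 1/6 in place of 1/4, underlies Lemma 4.5.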
References

[1] Anthony, M.: Classification by polynomial surfaces, NeuroCOLT Technical Report Series, NC-TR-95-011 (1995).
[2] Goldberg, P. and M. Jerrum: Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers, Proc. Sixth Annual ACM Conference on Computational Learning Theory, 361-369 (1993).
[3] Karpinski, M. and A. Macintyre: Polynomial bounds for VC dimension of sigmoidal neural networks, Proc. 27th ACM Symposium on Theory of Computing, 200-208 (1995).
[4] Koiran, P. and E. D. Sontag: Neural networks with quadratic VC dimension, Journ. Comput. Syst. Sci., 54, 190-198 (1997).
[5] Maass, W.: Bounds for the computational power and learning complexity of analog neural nets, Proc. 25th Annual Symposium on the Theory of Computing, 335-344 (1993).
[6] Maass, W.: Neural nets with superlinear VC-dimension, Neural Computation, 6, 877-884 (1994).
[7] Milnor, J.: On the Betti numbers of real varieties, Proc. of the AMS, 15, 275-280 (1964).
[8] Sakurai, A.: Tighter bounds of the VC-dimension of three-layer networks, Proc. WCNN'93, III, 540-543 (1993).
[9] Sakurai, A.: On the VC-dimension of depth four threshold circuits and the complexity of Boolean-valued functions, Proc. ALT'93 (LNAI 744), 251-264 (1993); a refined version is in Theoretical Computer Science, 137, 109-127 (1995).
[10] Sakurai, A.: On the VC-dimension of neural networks with a large number of hidden layers, Proc. NOLTA'93, IEICE, 239-242 (1993).
[11] Warren, H. E.: Lower bounds for approximation by nonlinear manifolds, Trans. AMS, 133, 167-178 (1968).
The effect of eligibility traces on finding optimal memoryless policies in partially observable Markov decision processes

John Loch
Department of Computer Science, University of Colorado, Boulder, CO 80309-0430
loch@cs.colorado.edu

Abstract

Agents acting in the real world are confronted with the problem of making good decisions with limited knowledge of the environment. Partially observable Markov decision processes (POMDPs) model decision problems in which an agent tries to maximize its reward in the face of limited sensor feedback. Recent work has shown empirically that a reinforcement learning (RL) algorithm called Sarsa(λ) can efficiently find optimal memoryless policies, which map current observations to actions, for POMDP problems (Loch and Singh 1998). The Sarsa(λ) algorithm uses a form of short-term memory called an eligibility trace, which distributes temporally delayed rewards to the observation-action pairs which lead up to the reward. This paper explores the effect of eligibility traces on the ability of the Sarsa(λ) algorithm to find optimal memoryless policies. A variant of Sarsa(λ) called k-step truncated Sarsa(λ) is applied to four test problems taken from the recent work of Littman; Littman, Cassandra, and Kaelbling; Parr and Russell; and Chrisman. The empirical results show that eligibility traces can be significantly truncated without affecting the ability of Sarsa(λ) to find optimal memoryless policies for POMDPs.

1 Introduction

Agents which operate in the real world, such as mobile robots, must use sensors which at best give only partial information about the state of the environment. Information about the robot's surroundings is necessarily incomplete due to noisy and/or imperfect sensors, occluded objects, and the inability of the robot to know precisely where it is. Such agent-environment systems can be modeled as partially observable Markov decision processes, or POMDPs (Sondik, 1978).
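A POMDP bundles states, actions, observations, transition and observation probabilities, and rewards. A minimal container for such a model (our own illustrative sketch, not code from the paper; all names are made up) might look like:

```python
import random
from dataclasses import dataclass

def _sample(dist, rng):
    # dist is a list of (probability, outcome) pairs summing to 1.
    u, acc = rng.random(), 0.0
    for prob, outcome in dist:
        acc += prob
        if u < acc:
            return outcome
    return dist[-1][1]

@dataclass
class POMDP:
    n_states: int
    n_actions: int
    n_obs: int
    P: dict  # P[(s, a)] -> list of (prob, next_state)
    R: dict  # R[(s, a)] -> expected immediate reward
    O: dict  # O[s] -> list of (prob, observation)

    def step(self, s, a, rng=random):
        # Sample a transition, then an observation of the next state.
        r = self.R[(s, a)]
        s2 = _sample(self.P[(s, a)], rng)
        x = _sample(self.O[s2], rng)
        return s2, x, r

# Tiny deterministic example with illustrative numbers.
m = POMDP(2, 1, 2,
          P={(0, 0): [(1.0, 1)], (1, 0): [(1.0, 1)]},
          R={(0, 0): 1.0, (1, 0): 0.0},
          O={0: [(1.0, 0)], 1: [(1.0, 1)]})
assert m.step(0, 0) == (1, 1, 1.0)
```

The agent never sees `s` directly; it only receives `x`, which is what makes the memoryless-policy question nontrivial.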
A variety of algorithms have been developed for solving POMDPs (Lovejoy, 1991). However, most of these techniques do not scale well to problems involving more than a few dozen states, due to the computational complexity of the solution methods (Cassandra, 1994; Littman, 1994). Therefore, finding efficient reinforcement learning methods for solving POMDPs is of great practical interest to the Artificial Intelligence and engineering fields. Recent work has shown empirically that the Sarsa(λ) algorithm can efficiently find the best deterministic memoryless policy for several POMDP problems from the recent literature (Loch and Singh 1998). The empirical results from Loch and Singh (1998) suggest that eligibility traces are necessary for finding the best or optimal memoryless policy. For this reason, a variant of Sarsa(λ) called k-step truncated Sarsa(λ) is formulated to explore the effect of eligibility traces on the ability of Sarsa(λ) to find the best memoryless policy. The main contribution of this paper is to show empirically that a variant of Sarsa(λ) using truncated eligibility traces can find the optimal memoryless policy for several POMDP problems from the literature. Specifically, we show that the k-step truncated Sarsa(λ) method can find the optimal memoryless policy for the four POMDP problems tested when k ≤ 2.

2 Sarsa(λ) and POMDPs

An environment is defined by a finite set of states S, the agent can choose from a finite set of actions A, and the agent's sensors provide it observations from a finite set X. On executing action a ∈ A in state s ∈ S, the agent receives expected reward r_sa and the environment transitions to a state s' ∈ S with probability P^a_ss'. The probability of the agent observing x ∈ X, given that the state is s, is O(x|s). A straightforward way to extend RL algorithms to POMDPs is to learn Q-value functions of observation-action pairs, i.e.
to simply treat the agent's observations as states. Below we describe the standard Sarsa(λ) algorithm applied to POMDPs. At time step t the Q-value function is denoted Q_t, the eligibility trace function is denoted η_t, and the reward received is denoted r_t. On experiencing transition <x_t, a_t, r_t, x_{t+1}> the following updates are performed in order:

    η_t(x_t, a_t) = 1;
    η_t(x, a) = γλ η_{t−1}(x, a) for all x ≠ x_t and a ≠ a_t;
    Q_{t+1}(x, a) = Q_t(x, a) + α δ_t η_t(x, a) for all x and a;

where δ_t = r_t + γ Q_t(x_{t+1}, a_{t+1}) − Q_t(x_t, a_t) and α is the step-size (learning rate). The eligibility traces are initialized to zero, and in episodic tasks they are reinitialized to zero after every episode. The greedy policy at time step t assigns to each observation x the action a = argmax_b Q_t(x, b).

2.1 Sarsa(λ) Using Truncated Eligibility Traces

Sarsa(λ) with truncated eligibility traces uses a parameter k which sets the eligibility trace for an observation-action pair to zero if that observation-action pair was not visited within the last k − 1 time steps. Thus 1-step truncated Sarsa(λ) is equivalent to Sarsa(0), and 2-step truncated Sarsa(λ) updates the Q-values of the current observation-action pair and the immediately preceding observation-action pair.

3 Empirical Results

The truncated Sarsa(λ) algorithm was applied in an identical manner to four POMDP problems taken from the recent literature. Complete descriptions of the states, actions, observations, and rewards for each problem are provided in Loch and Singh (1998). Here we describe the aspects of the empirical results common to all four problems. At each step, the agent selected a random action with a probability equal to the exploration rate parameter and selected a greedy action otherwise. An initial exploration rate of 35% was used, decreasing linearly with each action (step) until the 350,000th action; from there onward the exploration rate remained fixed at 0%. Q-values were initialized to 0. Both the step-size α and the λ value are held constant in each experiment.
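The truncated-trace rule of Section 2.1 can be sketched in a few lines. This is our own minimal tabular implementation, not the author's code; it assumes replacing traces, and the bounded-deque bookkeeping (a window of the last k visited pairs, including the current one) is our choice of mechanism:

```python
from collections import defaultdict, deque

def truncated_sarsa_update(Q, trace, recent, transition,
                           alpha=0.1, gamma=0.95, lam=1.0):
    """One update of k-step truncated Sarsa(lambda) with replacing traces.

    Q and trace are dicts keyed by (observation, action); recent is a
    deque with maxlen=k holding the pairs visited in the last k steps.
    """
    x, a, r, x2, a2 = transition
    delta = r + gamma * Q[(x2, a2)] - Q[(x, a)]
    for key in list(trace):           # decay all traces
        trace[key] *= gamma * lam
    trace[(x, a)] = 1.0               # replacing trace for the current pair
    recent.append((x, a))
    for key in list(trace):           # truncation: zero (drop) traces of pairs
        if key not in recent:         # outside the k-step window
            del trace[key]
    for key, e in trace.items():
        Q[key] += alpha * delta * e

# Two steps with k = 2: the current and the immediately preceding pair
# both receive credit for the reward seen on the second transition.
Q, trace, recent = defaultdict(float), {}, deque(maxlen=2)
truncated_sarsa_update(Q, trace, recent, (0, 0, 0.0, 1, 0))
truncated_sarsa_update(Q, trace, recent, (1, 0, 1.0, 2, 0))
assert abs(Q[(1, 0)] - 0.1) < 1e-9    # alpha * delta * 1
assert abs(Q[(0, 0)] - 0.095) < 1e-9  # alpha * delta * gamma * lam
```

With `deque(maxlen=1)` the window holds only the current pair, recovering Sarsa(0), exactly as the text describes for the 1-step case.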
A discount factor γ of 0.95 and a λ value of 1.0 were used for all four problems.

3.1 Sutton's Grid World

Sutton's grid world (Littman 1994) is an agent-environment system with 46 states, 30 observations, and 4 actions. State transitions and observations are deterministic. The 1-step truncated eligibility trace, equivalent to Sarsa(0), was able to find a policy which could only reach the goal from start states within 7 steps of the goal state, as shown in Figure 1. The optimal memoryless policy, yielding 416 total steps to the goal state, was found by the 2-step, 4-step and 8-step truncated eligibility trace methods shown in Figure 1.

[Figure 1: Sutton's Grid World (from Littman, 1994). Total steps to goal performance as a function of the number of learning steps for 1, 2, 4, and 8-step eligibility traces.]

3.2 Chrisman's Shuttle Problem

Chrisman's shuttle problem is an agent-environment system with 8 states, 5 observations, and 3 actions. State transitions and observations are stochastic. The 1-step truncated eligibility trace, equivalent to Sarsa(0), was unable to find a policy which could reach the goal state (Figure 2). The optimal memoryless policy, yielding an average reward per step of 1.02, was found by the 2-step, 4-step, and 8-step truncated eligibility trace methods shown in Figure 2.
[Figure 2: Chrisman's shuttle problem. Average reward per step performance as a function of the number of learning steps for 1, 2, 4, and 8-step eligibility traces.]

3.3 Littman, Cassandra, and Kaelbling's 89 State Office World

Littman et al.'s 89 state office world (Littman et al., 1995) is an agent-environment system with 89 states, 17 observations, and 5 actions. State transitions and observations are stochastic. The 1-step truncated eligibility trace, equivalent to Sarsa(0), was able to find a policy which could reach the goal state in only 51% of the 251 trials (Figure 3). The 2-step, 4-step and 8-step truncated eligibility trace methods converged to the best memoryless policy found by Loch and Singh (1998), yielding a 77% success rate in reaching the goal state (Figure 3).

[Figure 3: Littman et al.'s 89 state office world. Percent successful trials in reaching the goal as a function of the number of learning steps for 1, 2, 4, and 8-step eligibility traces.]

3.4 Parr and Russell's Grid World

Parr and Russell's grid world (Parr and Russell 1995) is an agent-environment system with 11 states, 6 observations, and 4 actions. State transitions are stochastic while observations are deterministic. The optimal memoryless policy, yielding an average reward per step of 0.024, was found by both the 1-step and 2-step truncated eligibility trace methods (Figure 4). Policies found by the 4-step and 8-step methods were not optimal. This result can be attributed to the sharp eligibility trace cutoff, as this effect was not observed with smoothly decaying eligibility traces.
[Figure 4: Parr and Russell's Grid World. Average reward per step performance as a function of the number of learning steps for 1, 2, 4, and 8-step eligibility traces.]

3.5 Discussion

In all the empirical results presented above, we have shown that the k-step truncated Sarsa(λ) algorithm was able to find the best or the optimal deterministic memoryless policy when k = 2. This result is surprising, since it was expected that the length of the eligibility trace required to find a good or optimal policy would vary widely depending on problem-specific factors such as landmark (unique observation) spacing and the delay between critical decisions and rewards. Several additional POMDP problems were formulated in an attempt to create a POMDP which would require a k value greater than 2 to find the optimal policy. However, for all trial POMDPs tested, the optimal memoryless policy could be found with k ≤ 2.

4 Conclusions and Future Work

The ability of the Sarsa(λ) algorithm and the k-step truncated Sarsa(λ) algorithm to find optimal deterministic memoryless policies for a class of POMDP problems is important for several reasons. For POMDPs with good memoryless policies, the Sarsa(λ) algorithm provides an efficient method for finding the best policy in that space. If the performance of the memoryless policy is unsatisfactory, the observation and action spaces of the agent can be modified so as to produce an agent with a good memoryless policy. The designer of the autonomous system or agent can modify the observation space of the agent by either adding sensors or making finer distinctions in the current sensor values. In addition, the designer can add attributes from past observations into the current observation space. The action space can be modified by adding lower-level actions and by adding new actions to the space.
Thus one method for designing a capable agent is to iterate between selecting an observation and action space for the agent, using Sarsa(λ) to find the best memoryless policy in that space, and repeating until satisfactory performance is achieved. This suggests a future line of research into how to automate the process of observation and action space selection so as to achieve an acceptable performance level. Other avenues of research include an exploration into theoretical reasons why Sarsa(λ) and k-step truncated Sarsa(λ) are able to solve POMDPs. In addition, further research needs to be conducted as to why short (k ≤ 2) eligibility traces work well over a wide class of POMDPs.

References

Cassandra, A. (1994). Optimal policies for partially observable Markov decision processes. Technical Report CS-94-14, Brown University, Department of Computer Science, Providence, RI.

Littman, M. (1994). The Witness Algorithm: Solving partially observable Markov decision processes. Technical Report CS-94-40, Brown University, Department of Computer Science, Providence, RI.

Littman, M., Cassandra, A., & Kaelbling, L. (1995). Learning policies for partially observable environments: Scaling up. In Proceedings of the Twelfth International Conference on Machine Learning, pages 362-370, San Francisco, CA, 1995. Morgan Kaufmann.

Loch, J., & Singh, S. (1998). Using eligibility traces to find the best memoryless policy in partially observable Markov decision processes. In Proceedings of the Fifteenth International Conference on Machine Learning, Madison, WI, 1998. Morgan Kaufmann. (Available from http://www.cs.colorado.edu/~baveja/papers.html)

Lovejoy, W. S. (1991). A survey of algorithmic methods for partially observable Markov decision processes. Annals of Operations Research, 28: 47-66.

Parr, R. & Russell, S. (1995). Approximating optimal policies for partially observable stochastic domains.
In Proceedings of the International Joint Conference on Artificial Intelligence.

Sondik, E. J. (1978). The optimal control of partially observable Markov decision processes over the infinite horizon: Discounted costs. Operations Research, 26(2).

Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, pages 216-224, San Mateo, CA: Morgan Kaufmann.

Littman, M. (1994). Memoryless policies: theoretical limitations and practical results. In From Animals to Animats 3: Proceedings of the Third International Conference on Simulation of Adaptive Behavior, Cambridge, MA: MIT Press.
|
1998
|
144
|
1,502
|
Exploratory Data Analysis Using Radial Basis Function Latent Variable Models

Alan D. Marrs and Andrew R. Webb, DERA, St Andrews Road, Malvern, Worcestershire, U.K. WR14 3PS, {marrs,webb}@signal.dera.gov.uk. ©British Crown Copyright 1998

Abstract

Two developments of nonlinear latent variable models based on radial basis functions are discussed: in the first, the use of priors or constraints on allowable models is considered as a means of preserving data structure in low-dimensional representations for visualisation purposes. Also, a resampling approach is introduced which makes more effective use of the latent samples in evaluating the likelihood.

1 INTRODUCTION

Radial basis functions (RBF) have been extensively used for problems in discrimination and regression. Here we consider their application for obtaining low-dimensional representations of high-dimensional data as part of the exploratory data analysis process. There has been a great deal of research over the years into linear and nonlinear techniques for dimensionality reduction. The technique most commonly used is principal components analysis (PCA) and there have been several nonlinear generalisations, each taking a particular definition of PCA and generalising it to the nonlinear situation. One approach is to find surfaces of closest fit (as a generalisation of the PCA definition due to the work of Pearson (1901) for finding lines and planes of closest fit). This has been explored by Hastie and Stuetzle (1989), Tibshirani (1992) (and further by LeBlanc and Tibshirani, 1994) and various authors using a neural network approach (for example, Kramer, 1991). Another approach is one of variance maximisation subject to constraints on the transformation (Hotelling, 1933). This has been investigated by Webb (1996), using a transformation modelled as an RBF network, and in a supervised context in Webb (1998).
An alternative strategy also using RBFs, based on metric multidimensional scaling, is described by Webb (1995) and Lowe and Tipping (1996). Here, an optimisation criterion, termed stress, is defined in the transformed space and the weights in an RBF model determined by minimising the stress. The above methods use a radial basis function to model a transformation from the high-dimensional data space to a low-dimensional representation space. A complementary approach is provided by Bishop et al (1998) in which the structure of the data is modelled as a function of hidden or latent variables. Termed generative topographic mapping (GTM), the model may be regarded as a nonlinear generalisation of factor analysis in which the mapping from latent space to data space is characterised by an RBF. Such generative models are relevant to a wide range of applications including radar target modelling, speech recognition and handwritten character recognition. However, one of the problems with GTM that limits its practical use for visualising data on manifolds in high dimensional space arises from distortions in the structure that it imposes. This is acknowledged in Bishop et al (1997) where 'magnification factors' are introduced to correct for the GTM's deficiency as a means of data visualisation. This paper considers two developments: constraints on the permissible models and resampling of the latent space. Section 2 presents the background to latent variable models; model constraints are discussed in Section 3. Section 4 describes a re-sampling approach to estimation of the posterior pdf on the latent samples. An illustration is provided in Section 5.

2 BACKGROUND

Briefly, we shall re-state the basic GTM model, retaining the notation of Bishop et al (1998). Let $\{t_i, i = 1, \ldots, N\}$, $t_i \in \mathbb{R}^D$ represent measurements on the data space variables, and let $z \in \mathbb{R}^L$ represent the latent variables.
Let $t$ be normally distributed with mean $y(z; W)$ and covariance matrix $\beta^{-1}I$; $y(z; W)$ is a nonlinear transformation that depends on a set of parameters $W$. Specifically, we shall assume a basis function model
$$y(z; W) = \sum_{i=1}^{M} w_i \phi_i(z)$$
where the vectors $w_i \in \mathbb{R}^D$ are to be determined through optimisation and $\{\phi_i, i = 1, \ldots, M\}$ is a set of basis functions defined on the latent space. The data distribution may be written
$$p(t|W, \beta) = \int p(t|z; W, \beta)\, p(z)\,dz \qquad (1)$$
where, under the assumptions of normality,
$$p(t|z; W, \beta) = \left(\frac{\beta}{2\pi}\right)^{D/2} \exp\left\{-\frac{\beta}{2}\,\|y(z; W) - t\|^2\right\}$$
Approximating the integral by a finite sum (assuming the functions $p(z)$ and $y(z)$ do not vary too greatly compared with the sample spacing), we have
$$p(t|W, \beta) = \sum_{i=1}^{K} P_i\, p(t|z_i; W, \beta) \qquad (2)$$
which may be regarded as a function of the parameters $W$ and $\beta$ that characterise $y$. Given the data set $\{t_n, n = 1, \ldots, N\}$, the log likelihood is given by
$$L(W, \beta) = \sum_{n=1}^{N} \ln[p(t_n|W, \beta)]$$
which may be maximised using a standard EM approach (Bishop et al, 1998). In this case, we have
$$P_j = \frac{1}{N} \sum_{n=1}^{N} R_{jn} \qquad (3)$$
as the re-estimate of the mixture component weights, $P_j$, at the $(m+1)$th step, where
$$R_{jn} = \frac{P_j^{(m)}\, p(t_n|z_j; W^{(m)}, \beta^{(m)})}{\sum_i P_i^{(m)}\, p(t_n|z_i; W^{(m)}, \beta^{(m)})} \qquad (4)$$
and $(\cdot)^{(m)}$ denotes values at the $m$th step. Note that Bishop et al (1998) do not re-estimate $P_j$; all values are taken to be equal. The number of $P_j$ terms to be re-estimated is $K$, the number of terms used to approximate the integral (1). We might expect that the density is smoothly varying and governed by a much smaller number of parameters (not dependent on $K$). The re-estimation equation for the $D \times M$ matrix $W = [w_1 | \cdots | w_M]$ is
$$W^{(m+1)} = T^T R^T \Phi\, [\Phi^T G \Phi]^{-1} \qquad (5)$$
where $G$ is the $K \times K$ diagonal matrix with $G_{jj} = \sum_{n=1}^{N} R_{jn}$, and $T^T = [t_1 | \cdots | t_N]$, $\Phi^T = [\phi(z_1) | \cdots | \phi(z_K)]$. The term $\beta$ is re-estimated as $1/\beta^{(m+1)} = (1/(ND)) \sum_{n=1}^{N} \sum_{j=1}^{K} R_{jn} \|t_n - W^{(m+1)} \phi(z_j)\|^2$.
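The EM re-estimation equations (3)-(5) and the β update translate directly into one pass over the responsibility matrix. The following is an illustrative sketch, not the authors' code: it stores the weights as an M × D matrix (the transpose of the paper's D × M convention, so that y_k = Φ[k] @ W), and all variable names are assumptions.

```python
import numpy as np

def gtm_em_step(T, Phi, W, beta, P):
    """One EM iteration for the GTM-style latent variable model.
    T:    (N, D) data matrix.
    Phi:  (K, M) basis functions evaluated at the K latent samples.
    W:    (M, D) weights, so the centres are Y = Phi @ W.
    beta: inverse noise variance.
    P:    (K,) mixture component weights."""
    N, D = T.shape
    Y = Phi @ W                                          # (K, D) centres y(z_k; W)
    sq = ((T[None, :, :] - Y[:, None, :]) ** 2).sum(-1)  # (K, N) ||t_n - y_k||^2
    log_r = np.log(P)[:, None] - 0.5 * beta * sq
    log_r -= log_r.max(axis=0)                           # numerical stability
    R = np.exp(log_r)
    R /= R.sum(axis=0)                                   # responsibilities R_{kn}, eq (4)
    P_new = R.sum(axis=1) / N                            # mixture weights, eq (3)
    G = np.diag(R.sum(axis=1))
    # W update, eq (5): solve (Phi^T G Phi) W = Phi^T R T
    W_new = np.linalg.solve(Phi.T @ G @ Phi, Phi.T @ R @ T)
    sq_new = ((T[None, :, :] - (Phi @ W_new)[:, None, :]) ** 2).sum(-1)
    beta_new = N * D / (R * sq_new).sum()                # 1/beta re-estimate
    return W_new, beta_new, P_new
```

Iterating this step until the log likelihood stabilises reproduces the EM scheme described above, including the optional re-estimation of the mixture weights that the original GTM omits.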
Once we have determined the parameters of the transformation, we may invert the model by asking for the distribution of $z$ given a measurement $t_i$. That is, we require
$$p(z|t_i) = \frac{p(t_i|z)\,p(z)}{\int p(t_i|z)\,p(z)\,dz} \qquad (6)$$
For example, we may plot the position of the peak of the distribution $p(z|t_i)$ for each data sample $t_i$.

3 APPLYING A CONSTRAINT

One way to retain structure is to impose a condition that ensures that a unit step in the latent space corresponds to a unit step in the data space (more or less). For a single latent variable, $x_1$, we may impose the constraint
$$\left|\frac{\partial y}{\partial x_1}\right|^2 = 1$$
which may be written, in terms of $W$, as
$$\hat{\phi}_1^T W^T W \hat{\phi}_1 = 1$$
where $\hat{\phi}_1 = \partial\phi/\partial x_1$: the derivative of the data space variable with respect to the latent variable has unit magnitude. The derivative is of course a function of $x_1$, and imposing such a condition at each sample point in latent space would not be possible owing to the smoothness of the RBF model. However, we may average over the latent space, with $\langle\cdot\rangle$ denoting the average over the latent space. In general, for $L$ latent variables we may impose the constraint
$$J^T W^T W J = I_L$$
leading to the penalty term $\mathrm{Tr}\{\Lambda(J^T W^T W J - I_L)\}$, where $J$ is an $M \times L$ matrix with $j$th column $\partial\phi/\partial x_j$ and $\Lambda$ is a symmetric $L \times L$ matrix of Lagrange multipliers. This is very similar to a regularisation term: it is a condition on the norm of $W$ that incorporates the Jacobian matrix $J$. The re-estimation solution for $W$ may be written as a modified form of (5) (equation (7)), with $\Lambda$ chosen so that the constraint $J^T W^T W J = I_L$ is satisfied. We may also use the derivatives of the transformation to define a distortion measure or magnification factor,
$$\mathcal{M}(z; W) = \|J^T W^T W J - I\|^2$$
which is a function of the latent variables and the model parameters.
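The magnification factor $\mathcal{M}(z;W) = \|J^T W^T W J - I\|^2$ is straightforward to evaluate at a latent point once the Jacobian of the basis functions is available. A small sketch, with shapes and names that are illustrative rather than taken from the paper:

```python
import numpy as np

def magnification_factor(W, J):
    """Distortion measure M(z; W) = ||J^T W^T W J - I||^2 of Section 3.
    W: (D, M) mapping weights.
    J: (M, L) Jacobian of the basis functions, column j holding
       d phi / d x_j evaluated at a latent point z."""
    L = J.shape[1]
    A = J.T @ W.T @ W @ J - np.eye(L)
    return float((A ** 2).sum())
```

A value of zero is returned exactly when the columns of $WJ$ are orthonormal, i.e. when a unit step in latent space maps to a unit step in data space.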
A value of zero shows that there is no distortion.¹ An alternative to the constraint approach above is to introduce a prior on the allowable transformations using the magnification factor; for example,
$$P(W) \propto \exp(-\lambda \mathcal{M}(z; W)) \qquad (8)$$
where $\lambda$ is a regularisation parameter. This leads to a modification of the M-step re-estimation equation for $W$, providing a maximum a posteriori estimate. Equation (8) provides a natural generalisation of PCA, since for the special case of a linear transformation ($\phi_i = x_i$, $M = L$) the solution for $W$ is the PCA space as $\lambda \to \infty$.

4 RESAMPLING THE LATENT SPACE

Having obtained a mapping from latent space to data space using the above constraint, we seek a better estimate of the posterior pdf of the latent samples. Current versions of GTM require the latent samples to be uniformly distributed in the latent space, which leads to distortions when the data of interest are projected into the latent space for visualisation. Since the responsibility matrix $R$ can be used to determine a weight for each of the latent samples, it is possible to update these samples using a resampling scheme. We propose to use a resampling scheme based upon adaptive kernel density estimation. The basic procedure places a Gaussian kernel on each latent sample. This results in a Gaussian mixture representation of the pdf of the latent samples $p(x|t)$,
$$p(x|t) = \sum_{i=1}^{K} P_i\, N(\mu_i, \Sigma_i), \qquad (9)$$
where each mixture component is weighted according to the latent sample weight $P_i$.

¹Note that this differs from the measure in the paper by Bishop et al, where a ratio-of-areas criterion is used, a factor which is unity for zero distortion, but may also be unity for some distortions.
Initially, the Ei'S are all equal, taking their value from the standard formula of Silverman (1986), Ei = hLy, (10) where matrix Y is an estimate of the covariance of p( x )and, (11) If the kernels are centered exactly on the latent samples, this model artificially inflates the variance of the latent samples. Following West (1993) we perform kernel shrinkage by making the lLi take the values (12) where jL is the mean of the latent samples. This ensures that there is no artificial inflation of the variance. To reduce the redundancy in our initially large number of mixture components, we propose a kernel reduction scheme in a similar manner to West. However, the scheme used here differs from that of West and follows a scheme proposed by Salmond (1990). Essentially, we chose the component with the smallest weight and its nearest neighbour, denoting these with subscripts 1 and 2 respectively. These components are then combined into a single component denoted with subscript c as follows, Pc = Pl + P2 PllLl + P21L2 IL = --=---= c Pc (13) (14) Ec = Pl[El + (lLc -lLl)(lLc -lLl)T] + P2[E2 + (lLc -1L2)(lLc -1L2)T]. (15) Pc This procedure is repeated until some stopping criterion is met. The stopping criterion could be a simple limit upon the number of mixture components ie; smaller than K but sufficiently large to model the data structure. Alternatively, the average kernel covariance and between kernel covariance can be monitored and the reduction stopped before some multiple (eg. 10) of the average kernel covariance exceeds the between kernel covariance. Once a final mixture density estimate is obtained, a new set of equally weighted latent samples can be drawn from it. The new latent samples represent a better estimate of the posterior pdf of the latent samples and can be used, along with the existing RBF mapping, to calculate a new responsibility matrix R. 
This procedure can be repeated to obtain a further improved estimate of the posterior pdf; after only a couple of iterations it can lead to good estimates of the posterior pdf which further iterations fail to improve upon.

5 RESULTS

A latent variable model based on a spherically-symmetric Gaussian RBF has been implemented. The weights and the centres of the RBF were initialised so that the solution best approximated the zero-distortion principal components solution for two-dimensional projection. For our example we chose to construct a simulated data set with easily identifiable structure. Four hundred points lying on the letters "NIPS" were sampled and projected onto a sphere of radius 50 such that the points lay between 25° and 175° longitude and 75° and 125° latitude, with Gaussian noise of variance 4.0 on the radius of each point. The resulting data are shown in figure 1.

Figure 1: Simulated data.

Figure 2: Results for standard GTM model.

Figure 3: Results for regularised/resampled model.

Figure 2 shows results for the standard GTM (uniform grid of latent samples) projection of the data to two dimensions. The central figure shows the projection onto the latent space, exhibiting significant distortion. The left figure shows the projection of the regular grid of latent samples (red points) into the data space. Distortion of this grid can be easily seen. The right figure is a plot of the magnification factor as defined in section 3, with mean value of 4.577. For this data set most stretching occurs at the edges of the latent variable space. Figure 3 shows results for the regularised/resampled version of the latent variable model for $\lambda = 1.0$. Again the central figure shows the projection onto the latent space after 2 iterations of the resampling procedure.
The left-hand figure shows the projection of the initial regular grid of latent samples into the data space. The effect of regularisation is evident from the lack of severe distortions. Finally, the magnification factors can be seen in the right-hand figure to be lower, with a mean value of 0.976.

6 DISCUSSION

We have considered two developments of the GTM latent variable model: the incorporation of priors on the allowable model and a resampling approach to the maximum likelihood parameter estimation. Results have been presented for this regularised/resampling approach, and magnification factors lower than the standard model were achieved using the same RBF model. Further reduction in magnification factor is possible with different RBF models, but the example illustrates that resampling offers a more robust approach. Current work is aimed at assessing the approach on realistic data sets.

References

Bishop, C.M., Svensen, M. and Williams, C.K.I. (1997). Magnification factors for the GTM algorithm. IEE International Conference on Artificial Neural Networks, 465-471.

Bishop, C.M., Svensen, M. and Williams, C.K.I. (1998). GTM: the generative topographic mapping. Neural Computation, 10, 215-234.

Hastie, T. and Stuetzle, W. (1989). Principal curves. Journal of the American Statistical Association, 84, 502-516.

Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24, 417-441, 498-520.

Kramer, M.A. (1991). Nonlinear principal component analysis using autoassociative neural networks. American Institute of Chemical Engineers Journal, 37(2), 233-243.

LeBlanc, M. and Tibshirani, R. (1994). Adaptive principal surfaces. Journal of the American Statistical Association, 89(425), 53-64.

Lowe, D. and Tipping, M. (1996). Feed-forward neural networks and topographic mappings for exploratory data analysis.
Neural Computing and Applications, 4, 83-95.

Pearson, K. (1901). On lines and planes of closest fit. Philosophical Magazine, 6, 559-572.

Salmond, D.J. (1990). Mixture reduction algorithms for target tracking in clutter. Signal & Data Processing of Small Targets, edited by O. Drummond, SPIE, 1305.

Silverman, B.W. (1986). Density Estimation for Statistics and Data Analysis. Chapman & Hall, 1986.

Tibshirani, R. (1992). Principal curves revisited. Statistics and Computing, 2(4), 183-190.

Webb, A.R. (1995). Multidimensional scaling by iterative majorisation using radial basis functions. Pattern Recognition, 28(5), 753-759.

Webb, A.R. (1996). An approach to nonlinear principal components analysis using radially-symmetric kernel functions. Statistics and Computing, 6, 159-168.

Webb, A.R. (1997). Radial basis functions for exploratory data analysis: an iterative majorisation approach for Minkowski distances based on multidimensional scaling. Journal of Classification, 14(2), 249-267.

Webb, A.R. (1998). Supervised nonlinear principal components analysis. (Submitted for publication.)

West, M. (1993). Approximating posterior distributions by mixtures. J. R. Statist. Soc. B, 55(2), 409-422.
|
1998
|
145
|
1,503
|
A V1 model of pop out and asymmetry in visual search

Zhaoping Li, University College London, z.li@ucl.ac.uk

Abstract

Visual search is the task of finding a target in an image against a background of distractors. Unique features of targets enable them to pop out against the background, while targets defined by lacks of features or conjunctions of features are more difficult to spot. It is known that the ease of target detection can change when the roles of figure and ground are switched. The mechanisms underlying the ease of pop out and asymmetry in visual search have been elusive. This paper shows that a model of segmentation in V1 based on intracortical interactions can explain many of the qualitative aspects of visual search.

1 Introduction

Visual search is closely related to visual segmentation, and therefore can be used to diagnose the mechanisms of visual segmentation. For instance, a red dot can pop out against a background of green distractor dots instantaneously, suggesting that only pre-attentive mechanisms are necessary (Treisman et al, 1990). On the other hand, it is much more difficult to search for a red 'X' among green 'X's and red 'O's: the time it takes to detect the target's presence increases with the number of background distractors, suggesting some form of attentive serial search. Sometimes the search times change when the roles of the figure (target) and ground (distractors) are switched, giving asymmetry in visual search. For instance, it is easier to find a longer bar in a background of shorter bars than vice versa. It has been unclear which visual areas or neural mechanisms are responsible for the pop out and asymmetry in visual search. There are, however, psychophysical theories (Treisman et al 1990, Treisman and Gormican 1988) which argue that visual inputs are coded in a number of primitive or basic feature dimensions: orientation, color, brightness, motion direction, disparity, line ends, line intersections, and closure.
A target can pop out preattentively if it has a feature in one of these dimensions, such as a particular color or orientation, which is absent in the distractors. Hence, a red dot pops out among green ones. However, a red 'X' is difficult to spot among green 'X's and red 'O's because neither being red nor being 'X' is unique to the target, and therefore serial search is required. While a vertical line pops out of horizontal ones and vice versa without any search asymmetry, search asymmetry will arise when a single feature in which target and distractors differ is present in one of the two and absent or reduced in the other. Hence, a long line is more easily spotted among short lines than the reverse. This theory has been very helpful in understanding search phenomena. However, it has to make assumptions about what the primitive feature dimensions are, as well as what constitutes larger or smaller values along a given dimension. For instance, to explain that a curved line is more easily spotted among straight lines than the reverse, the theory has to define straightness as the default or standard, and curvedness as the deviation from this standard and thus an added feature. Empirically, other pairs of standard and deviant properties include vertical versus tilted, parallel versus convergent, short versus long lines, circle versus ellipse, and complete versus incomplete circles. The basis behind these assumptions is not completely clear. Other related theories have similar problems. For instance, Julesz's texton theory (Julesz 1981) for visual segmentation or pop out starts off by assuming a complete set of special features that constitute textons. This paper proposes and demonstrates in a model that pre-attentive mechanisms in V1 can qualitatively explain many of the phenomena of visual search. It is assumed that the ease of search is determined by the relative saliencies of the target and distractors.
Intracortical interactions in V1 alter the saliencies of targets and distractors according to their own image features as well as those of the distractor or target images that form the context. Hence, the relative saliency depends on the particular target-distractor pair involved. In particular, asymmetry is a natural consequence of contextual influences.

2 The V1 model

We use a V1 model of pre-attentive visual segmentation which has been shown to be able to detect and highlight smooth contours in noisy backgrounds and find boundaries between texture regions in images (Li 1998a, 1998b). Its behavior agrees with physiological observations (Knierim and van Essen 1992, Kapadia et al 1995). Without loss of generality, the model ignores color, motion, and stereo dimensions, includes mainly layer 2-3 orientation selective cells, and ignores the intra-hypercolumnar mechanism by which their receptive fields are formed. Inputs to the model are images filtered by the edge- or bar-like local receptive fields (RFs) of V1 cells.¹ The cells influence each other contextually via horizontal intra-cortical connections (Rockland and Lund 1983, Gilbert, 1992), transforming patterns of inputs to patterns of cell responses. Fig. 1 shows the elements of the model and their interactions. At each location $i$ there is a model V1 hypercolumn composed of $K$ neuron pairs. Each pair $(i, \theta)$ has RF center $i$ and preferred orientation $\theta = k\pi/K$ for $k = 1, 2, \ldots, K$, and is called (the neural representation of) an edge segment. Based on experimental data (White, 1989, Douglas and Martin 1990), each edge segment consists of an excitatory and an inhibitory neuron that are interconnected, and each model cell represents a collection of local cells of similar types. The excitatory cell receives the visual input; its output is used as a measure of the response or salience of the edge segment and projects to higher visual areas. The inhibitory cells are treated as interneurons.
Based on observations by Gilbert, Lund and their colleagues (Rockland and Lund, 1983, Gilbert 1992), horizontal connections $J_{i\theta,j\theta'}$

¹The terms 'edge' and 'bar' will be used interchangeably.

Figure 1: A: Visual inputs are sampled in a discrete grid of edge/bar detectors. Each grid point $i$ has $K$ neuron pairs (see C), one per bar segment, tuned to different orientations $\theta$ spanning 180°. Two segments at different grid points can interact with each other via monosynaptic excitation $J$ (the solid arrow from one thick bar to another) or disynaptic inhibition $W$ (the dashed arrow to a thick dashed bar). See also C. B: A schematic of the neural connection pattern from the center (thick solid) bar to neighboring bars within a few sampling unit distances. $J$'s contacts are shown by thin solid bars; $W$'s are shown by thin dashed bars. The connection pattern is translation and rotation invariant. C: An input bar segment is directly processed by an interconnected pair of excitatory and inhibitory cells; each cell models abstractly a local group of cells of the same type. The excitatory cell receives visual input and sends output $g_x(x_{i\theta})$ to higher centers. The inhibitory cell is an interneuron. Visual space is taken as having periodic boundary conditions.
(respectively $W_{i\theta,j\theta'}$) mediate contextual influences via monosynaptic excitation (respectively disynaptic inhibition) from $j\theta'$ to $i\theta$, which have nearby but different RF centers, $i \neq j$, and similar orientation preferences, $\theta \sim \theta'$. The membrane potentials follow the equations:
$$\dot{x}_{i\theta} = -\alpha_x x_{i\theta} - \sum_{\Delta\theta} \psi(\Delta\theta)\, g_y(y_{i,\theta+\Delta\theta}) + J_0\, g_x(x_{i\theta}) + \sum_{j \neq i, \theta'} J_{i\theta,j\theta'}\, g_x(x_{j\theta'}) + I_{i\theta} + I_0$$
$$\dot{y}_{i\theta} = -\alpha_y y_{i\theta} + g_x(x_{i\theta}) + \sum_{j \neq i, \theta'} W_{i\theta,j\theta'}\, g_x(x_{j\theta'}) + I_c$$
where $\alpha_x x_{i\theta}$ and $\alpha_y y_{i\theta}$ model the decay to resting potential, $g_x(x)$ and $g_y(y)$ are sigmoid-like functions modeling cells' firing rates in response to membrane potentials $x$ and $y$, respectively, $\psi(\Delta\theta)$ is the spread of inhibition within a hypercolumn, $J_0\, g_x(x_{i\theta})$ is self excitation, and $I_c$ and $I_0$ are background inputs, including noise and inputs modeling the general and local normalization of activities (see Li (1998b) for more details). Visual input $I_{i\theta}$ persists after onset, and initializes the activity levels $g_x(x_{i\theta})$. The activities are then modified by the contextual influences. Depending on the visual input, the system often settles into an oscillatory state (Gray and Singer, 1989; see the details in Li 1998b). Temporal averages of $g_x(x_{i\theta})$ over several oscillation cycles are used as the model's output. The nature of the computation performed by the model is determined largely by the horizontal connections $J$ and $W$, which are local (spanning only a few hypercolumns), and translation and rotation invariant (Fig. 1B).
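The two membrane-potential equations can be integrated with a simple forward-Euler loop. The sketch below uses toy piecewise-linear gain functions and illustrative parameter values rather than the paper's calibrated ones; the connection matrices J and W are supplied by the caller, and all names are assumptions.

```python
import numpy as np

def simulate_v1(I, J, W, psi, steps=500, dt=0.05,
                ax=1.0, ay=1.0, J0=0.8, I0=0.0, Ic=1.0):
    """Euler integration of the excitatory/inhibitory pair dynamics.
    I:    (n, K) visual input at n locations x K orientations.
    J, W: (n*K, n*K) excitatory / inhibitory connection matrices.
    psi:  (K, K) within-hypercolumn inhibition spread psi(d_theta)."""
    n, K = I.shape
    x = I.copy().ravel()                  # activity initialised by the input
    y = np.zeros(n * K)
    gx = lambda u: np.clip(u, 0.0, 1.0)   # toy piecewise-linear firing rates
    gy = lambda u: np.clip(u, 0.0, 2.0)
    for _ in range(steps):
        gxv, gyv = gx(x), gy(y)
        # within-hypercolumn inhibition: sum over d_theta of psi * g_y
        inh = (gyv.reshape(n, K) @ psi.T).ravel()
        dx = -ax * x - inh + J0 * gxv + J @ gxv + I.ravel() + I0
        dy = -ay * y + gxv + W @ gxv + Ic
        x += dt * dx
        y += dt * dy
    return gx(x).reshape(n, K)            # model output per (location, orientation)
```

In the full model the temporal average of the (possibly oscillating) output over several cycles, rather than the final state, would be read out as the saliency map.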
Figure 2: Visual search examples plotted by the model inputs and outputs. A: Pop out. A single distinctive feature, the horizontal bar in the target, enables pop out; this target is the most salient spot in the image (measured as the saliency of the horizontal bar in the target); (r, z) = (2.5, 3.3). B: No pop out. The target does not pop out since neither of its features, a horizontal and a 45° bar, is unique in the image; the target is less salient than average in the image; (r, z) = (0.38, -0.9). C and D demonstrate the asymmetry in a target-distractor pair. C: Cross among bars. The cross is the most salient spot in the image (measured by the saliency of the horizontal bar); (r, z) = (2.4, 7.1); the pop out strength is stronger than in A. D: Bar among crosses. The target bar does not pop out; (r, z) = (1.5, 0.8).

The model was applied to a variety of input patterns, as shown in the examples in the figures. The input values $I_{i\theta}$ are the same for all visible bars in each example. The differences in the outputs are caused by intracortical interactions. They become significant about one membrane time constant after the initial neural response (Li, 1998b). The widths of the bars in the figures are proportional to input and output strengths. The plotted region in each picture is often a small region of an extended image. The same model parameters (e.g.
the dependence of the synaptic weights on distances and orientations, the thresholds and gains in the functions $g_x(\cdot)$ and $g_y(\cdot)$, and the level of input noise in $I_0$) are used for all the simulation examples. We define the net saliency $S_i$ at each grid point $i$ as that of the most activated bar. Define $\bar{S}$ and $\sigma_S$ to be the mean and standard deviation of the saliencies of all grid points with visible stimuli. Let $r_i \equiv S_i/\bar{S}$ and $z_i \equiv (S_i - \bar{S})/\sigma_S$. A highly salient point $i$ should have large values of $(r_i, z_i)$; in particular, both $r_i$ and $z_i$ should be larger than 1. For larger targets that occupy more than one grid point, the relative saliency measure of the target is that of the most salient grid point on the target. Fig. 2A,B compare the state of the same target, a conjunction of a horizontal '-' and an oblique '/', in two different contexts. Against a texture of '/' distractors it is highly salient because of its unique horizontal bar. Against distractors containing both '/' and '-' it is much less salient, because only the conjunction of '-' and '/' distinguishes it. Fig. 2C,D exhibit search asymmetry. The horizontal bar in the target is unique in the images of Fig. 2A,C, which leads to pop out, and each target sits at the most salient location in the respective image. On the other hand, no feature in the targets of Fig. 2B,D is unique. These examples are consistent with the psychophysical

Figure 3 panel scores (r, z), for the figure/ground assignments of the top and bottom rows respectively: A: closed vs open, (1.02, 0.4) vs (1.1, 9.7); B: parallel vs convergent, (0.89, -1.4) vs (1.17, 1.9); C: short vs. long, (0.99, -0.06) vs (1.06, 1.07); D: straight vs. curved, (1.02, 0.3) vs (1.09, 1.12).
E: circle vs ellipse, (r, z) = (1.05, 0.7) vs (1.13, 2.8). Figure 3: Five typical examples, one column each, of visual search asymmetry as simulated in the model. The input stimuli are plotted, and the target saliency (r, z) scores are indicated below each of them. All input bars are of the same intermediate input contrast. The role of figure and ground is switched from the top to the bottom rows.

theories mentioned in the introduction. Further, we note that because intracortical interactions link mostly neurons preferring similar orientations, two very different orientations can be viewed as independent features. The pop out is stronger in Fig. 2C than Fig. 2A since horizontal differs more from vertical (90°) than from 45°. The V1 orientation selective RFs and orientation-specific horizontal connections provide the neural basis for orientation as one of the primitive feature dimensions. In fact, the contextual influences between image features imply that saliency values depend on detailed geometrical relationships between features within and between a target or distractor and its nearby targets or distractors (see Fig. 2B). The relative ease in searches varies continuously from extreme pop out to slow serial searches depending on the specific stimuli, as suggested by Duncan and Humphreys (1989). Further interesting examples of search asymmetry include cases for which neither target nor distractors have a primitive feature (such as color or orientation) that is absent in the other. Asymmetry is much weaker but still present. Figure 3 shows some typical examples. Although the saliencies of the more salient targets are only fractionally higher than the average feature saliency in the rest of the image, this fraction is significant when the standard deviation $\sigma_S$ of the saliencies is small or when $z$ is large enough, thus making the search task easier.
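The $(r, z)$ saliency scores used throughout Figures 2 and 3 amount to a max over orientations followed by normalisation against the mean and standard deviation over grid points with visible stimuli. A minimal sketch (array shapes and names are assumptions, not from the paper):

```python
import numpy as np

def relative_saliency(S_grid, target_idx):
    """Compute the (r, z) scores of Section 2.
    S_grid:     (n_points, K) responses of the K orientation-tuned units
                at each grid point carrying a visible stimulus.
    target_idx: indices of the grid points occupied by the target."""
    S = S_grid.max(axis=1)               # net saliency: most activated bar
    S_bar, sigma = S.mean(), S.std()
    r = S / S_bar                        # r_i = S_i / S_bar
    z = (S - S_bar) / sigma              # z_i = (S_i - S_bar) / sigma_S
    # for a multi-point target, take its most salient grid point
    best = target_idx[int(np.argmax(S[target_idx]))]
    return r[best], z[best]
```

A pop-out target, on this definition, is simply one whose grid point yields both r > 1 and z > 1 relative to the rest of the visible stimuli.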
3 Summary and Discussion

Early psychophysical studies (Treisman et al 1990) suggested that most aspects of visual search involve mechanisms of early vision. However, it has never been clear which visual areas or neural mechanisms might be responsible. To the best of my knowledge, this model is the first non-phenomenological model to understand the neural bases of visual search phenomena (see Rubenstein and Sagi (1990) for a model of asymmetry using variances of the local image filter responses).

Figure 4: Four examples of model performance under various inputs. Each plots the visual input image at the top and the most activated bars in V1 cell outputs (using a threshold) at the bottom. Every visible bar in a given input image has the same input strength. A, B, and C demonstrate that the texture region boundaries have the highest output saliencies. D shows that the smooth contours are detected as the most salient against a background of noise.

This paper has shown that intra-cortical interactions in V1 can account for the qualitative phenomena of pop out and asymmetry in visual search, assuming that the ease of detection is directly determined by the saliencies of targets.
Of course, the task of search requires decision making and often visual attention, especially when the target does not spontaneously pop out. The quantitative search times can only be modeled on the basis of an assumption of specific mechanisms for attention and decision making. Our model suggests, nevertheless, that pre-attentive V1 mechanisms play a significant and controlling role in such tasks. Furthermore, it suggests that some otherwise intractable phenomena can be understood without resorting to additional concepts such as textons (Julesz 1981) or defining certain image properties (such as closure and straightness) as having standard or reference values. Our current implementation of V1 is still very simplistic. We have not yet included color, motion, or stereo inputs, nor multiscale sampling. Further, our input sampling density is very low. Consequently, the model cannot simulate many of the more complex input stimuli used in psychophysical experiments (Treisman and Gormican, 1988). An extended implementation is needed to test whether V1 mechanisms alone can qualitatively account for all or most types of search pop-out and asymmetries. Physiological evidence (Gilbert 1992) suggests that intracortical connections tend to link neurons with similar selectivities in other dimensions, such as color and stereo, in addition to orientation. This supports the idea that color, motion, and disparity are also primitive visual coding dimensions like orientation. We believe that the example in Fig. 2A,B demonstrating pop-out versus serial search would be more convincing if color were included to simulate, for instance, a red 'X' among green 'X's with and without red 'O's in the background. Our current model does not explain why a slightly tilted line pops out more readily from vertical line distractors than the reverse. This is because our V1 model idealistically assumes rotational symmetry, and so vertical is not distinguished from other orientations.
Neither our visual environment nor our visual system is in fact rotationally invariant. The V1 model was originally proposed to account for pre-attentive contour enhancement and visual segmentation (Li 1998a, 1998b). The contextual influences mediated by the intracortical interactions enable each V1 neuron to process inputs from a local image area larger than its classical receptive field. This enables cortical neurons to detect image locations where translation invariance in the input image breaks down, and to highlight these image locations with higher neural activities, making them conspicuous. These highlights mark candidate locations for image region (or object surface) boundaries, smooth contours, and small figures against backgrounds, serving the purpose of pre-attentive segmentation. Fig. 4 demonstrates the performance of the model for pre-attentive segmentation. In each example, the visual inputs and the most salient outputs are shown. All examples are simulated using exactly the same model parameters as those used in the examples of visual search. It is not too surprising that a model of pre-attentive segmentation in V1 can explain visual search phenomena. Indeed, pop-out has been commonly understood as a sign of pre-attentive segmentation. Our model further suggests that asymmetry in visual search is partly a side effect of pre-attentive segmentation. Our V1 model can in turn be improved using visual search as a diagnostic tool.

References
[1] R. J. Douglas and K. A. Martin (1990) "Neocortex" in Synaptic Organization of the Brain, ed. G. M. Shepherd (Oxford University Press), 3rd edition, pp. 389-438.
[2] J. Duncan and G. Humphreys (1989) Psychological Review 96, 1-26.
[3] C. D. Gilbert (1992) Neuron 9(1), 1-13.
[4] C. M. Gray and W. Singer (1989) Proc. Natl. Acad. Sci. USA 86, 1698-1702.
[5] B. Julesz (1981) Nature 290, 91-97.
[6] M. K. Kapadia, M. Ito, C. D. Gilbert, and G. Westheimer (1995) Neuron 15(4), 843-856.
[7] J. J. Knierim and D. C. van Essen (1992) J. Neurophysiol. 67, 961-980.
[8] Z. Li (1998a) in Theoretical Aspects of Neural Computation, Eds. K. Y. M. Wong, I. King, and D.-Y. Yeung, Springer-Verlag, 1998.
[9] Z. Li (1998b) Neural Computation 10(4), 903-940.
[10] K. S. Rockland and J. S. Lund (1983) J. Comp. Neurol. 216, 303-318.
[11] B. Rubenstein and D. Sagi (1990) J. Opt. Soc. Am. A 9, 1632-1643.
[12] A. Treisman, P. Cavanagh, B. Fischer, V. S. Ramachandran, and R. von der Heydt (1990) in Visual Perception: The Neurophysiological Foundations, Eds. L. Spillmann and J. S. Werner, Academic Press.
[13] A. Treisman and S. Gormican (1988) Psychological Rev. 95, 15-48.
[14] E. L. White (1989) Cortical Circuits (Birkhäuser).
1998
A High Performance k-NN Classifier Using a Binary Correlation Matrix Memory
Ping Zhou zhoup@cs.york.ac.uk
Jim Austin austin@cs.york.ac.uk
John Kennedy johnk@cs.york.ac.uk
Advanced Computer Architecture Group, Department of Computer Science, University of York, York YOW 500, UK

Abstract
This paper presents a novel and fast k-NN classifier that is based on a binary CMM (Correlation Matrix Memory) neural network. A robust encoding method is developed to meet CMM input requirements. A hardware implementation of the CMM is described, which gives over 200 times the speed of a current mid-range workstation, and is scaleable to very large problems. When tested on several benchmarks and compared with a simple k-NN method, the CMM classifier gave less than 1% lower accuracy and over 4 and 12 times speed-up in software and hardware respectively.

1 INTRODUCTION
Pattern classification is one of the most fundamental and important tasks, and the k-NN rule is applicable to a wide range of classification problems. As this method is too slow for many applications with large amounts of data, a great deal of effort has been put into speeding it up via complex pre-processing of training data, such as reducing the training data (Dasarathy 1994) and improving computational efficiency (Grother & Candela 1997). This work investigates a novel k-NN classification method that uses a binary correlation matrix memory (CMM) neural network as a pattern store and match engine. Whereas most neural networks need a long iterative training time, a CMM is simple and quick to train. It requires only a one-shot storage mechanism and simple binary operations (Willshaw & Buneman 1969), and it has highly flexible and fast pattern search ability. Therefore, the combination of CMM and k-NN techniques is likely to result in a generic and fast classifier.
For most classification problems, patterns are in the form of multi-dimensional real numbers, and appropriate quantisation and encoding are needed to convert them into binary inputs to a CMM. A robust quantisation and encoding method is developed to meet the requirements for CMM input codes, and to overcome the common problem of identical data points in many applications, e.g. the background of images or normal features in a diagnostic problem. Many research projects have applied the CMM successfully to commercial problems, e.g. symbolic reasoning in the AURA (Advanced Uncertain Reasoning Architecture) approach (Austin 1996), chemical structure matching and post code matching. The execution of the CMM has been identified as the bottleneck. Motivated by the needs of these applications for further high speed processing, the CMM has been implemented in dedicated hardware, i.e. the PRESENCE architecture. The primary aim is to improve the execution speed over conventional workstations in a cost-effective way. The following sections discuss the CMM for pattern classification, describe the PRESENCE architecture (the hardware implementation of the CMM), and present experimental results on several benchmarks.

2 BINARY CMM k-NN CLASSIFIER
The key idea (Figure 1) is to use a CMM to pre-select a small sub-set of training patterns from a large number of training data, and then to apply the k-NN rule to the sub-set. The CMM is fast but produces spurious errors as a side effect (Turner & Austin 1997); these are removed through the application of the k-NN rule. The architecture of the CMM classifier (Figure 1) includes an encoder (detailed in 2.2) for quantising numerical inputs and generating binary codes, a CMM pattern store and match engine, and a conventional k-NN module, as detailed below.
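The two-stage scheme just described, a fast approximate pre-selection followed by the exact k-NN rule on the survivors, can be sketched in a few lines. This is an illustrative simplification, not the AURA code: binary codes are given as plain lists, and a simple overlap count stands in for the CMM match.

```python
from collections import Counter

def preselect_then_knn(train_x, train_y, train_codes, test_x, test_code, k=3):
    """Pre-select training patterns whose binary codes best match the test
    code (a cheap CMM-style match), then apply the k-NN rule on the survivors
    using exact distances in the original input space."""
    # Stage 1: approximate match -- overlap of '1' bits with the test code.
    overlaps = [sum(a & b for a, b in zip(code, test_code)) for code in train_codes]
    best = max(overlaps)                       # exact or best partial match
    candidates = [i for i, o in enumerate(overlaps) if o == best]
    # Stage 2: exact k-NN among the pre-selected candidates only.
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    nearest = sorted(candidates, key=lambda i: dist(train_x[i], test_x))[:k]
    # Majority vote of the k nearest pre-selected neighbours.
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```

Because distances are only computed for the pre-selected candidates, the stage-2 cost scales with the match-list size rather than with the full training set, which is where the paper's speed-up comes from.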
Figure 1: Architecture of the binary CMM k-NN classifier (training patterns stored in the CMM; patterns pre-selected by the CMM; k-NN classification applied to the pre-selected patterns)

2.1 PATTERN MATCH AND CLASSIFICATION WITH CMM
A correlation matrix memory is basically a single layer network with binary weights M. In the training process a unique binary vector or separator s_i is generated to label an unseen input binary vector p_i; the CMM learns their association by performing the following logical ORing operation:

M = V_i s_i^T p_i    (1)

In a recall process, for a given test input vector p_k, the CMM performs:

v_k = M p_k^T = (V_i s_i^T p_i) p_k^T    (2)

followed by thresholding v_k and recovering individual separators. For speed, it is appropriate to use a fixed thresholding method, and the threshold is set to a level proportional to the number of '1' bits in the input pattern to allow an exact or partial match. To understand the recall properties of the CMM, consider the case where a known pattern p_k is presented; then Equation 2 can be written as follows when different patterns are orthogonal to each other:

v_k = n_p s_k^T    (3)

where n_p is a scalar, i.e. the number of '1' bits in p_k, and p_i p_k^T = 0 for i ≠ k. Hence a perfect recall of s_k can be obtained by thresholding v_k at the level n_p. In practice 'partially orthogonal' codes may be used to increase the storage capacity of the CMM, and the recall noise can be removed via appropriately thresholding v_k (as p_i p_k^T ≤ n_p for i ≠ k) and post-processing (e.g. applying the k-NN rule). Sparse codes are usually used, i.e. only a few bits in s_i and p_i being set to '1', as this maximises the number of codes and minimises the computation time (Turner & Austin 1997). These requirements for input codes are often met by an encoder as detailed below. The CMM exhibits an interesting 'partial match' property when the data dimensionality d is larger than one and the input vector p_i consists of d concatenated components.
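Equations 1-3 can be exercised on a toy example. The sketch below is an illustration using small dense Python lists (the real system uses large sparse binary matrices and hardware accumulators): training ORs together the outer products of (separator, pattern) pairs, and recall thresholds M p_k^T at n_p, the number of '1' bits in the input.

```python
def train_cmm(pairs, n_sep, n_pat):
    """Build M = OR_i s_i^T p_i from (separator, pattern) pairs (Equation 1)."""
    M = [[0] * n_pat for _ in range(n_sep)]
    for s, p in pairs:
        for r in range(n_sep):
            for c in range(n_pat):
                M[r][c] |= s[r] & p[c]   # logical OR of the outer product
    return M

def recall_cmm(M, p):
    """Compute v = M p^T (Equation 2) and threshold at n_p (Equation 3)."""
    n_p = sum(p)                          # number of '1' bits in the input
    v = [sum(row[c] & p[c] for c in range(len(p))) for row in M]
    return [1 if x >= n_p else 0 for x in v]
```

With orthogonal patterns the thresholded output is exactly the stored separator, as Equation 3 predicts; with partially orthogonal codes some spurious '1' bits can appear, which is the recall noise the k-NN stage removes.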
If two different patterns have some common components, v_k also contains separators for partially matched patterns, which can be obtained at lower threshold levels. This partial or near match property is useful for pattern classification as it allows the retrieval of stored patterns that are close to the test pattern in Hamming distance. From those training patterns matched by the CMM engine, a test pattern is classified using the k-NN rule. Distances are computed in the original input space to minimise the information loss due to quantisation and noise in the above match process. As the number of matches returned by the CMM is much smaller than the number of training data, the distance computation and comparison are dramatically reduced compared with the simple k-NN method. Therefore, the speed of the classifier benefits from the fast training and matching of the CMM, and the accuracy gains from the application of the k-NN rule for reducing information loss and noise in the encoding and match processes.

2.2 ROBUST UNIFORM ENCODING
Figure 2 shows the three stages of the encoding process: d-dimensional real numbers x_i are quantised as y_i; sparse and orthogonal binary vectors c_i are generated and concatenated to form a CMM input vector.

Figure 2: Quantisation, code generation and concatenation

CMM input codes should be distributed as uniformly as possible in order to avoid some parts of the CMM being used heavily while others are rarely used. The code uniformity is met at the quantisation stage. For a given set of N training samples in some dimension (or axis), it is required to divide the axis into N_b small intervals, called bins, such that they contain uniform numbers of data points. As the data often have a non-uniform distribution, the sizes of these bins should be different. It is also quite common for real world problems that many data points are identical. For instance, there are 11%-99.9% identical data in the benchmarks used in this work.
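A bin-boundary computation that aims at equal occupancy while never splitting a run of identical values across bins can be sketched as follows. This is a simplified variant of the robust quantisation described next, with names of our own choosing: it targets roughly equal total counts per bin and omits the iterative refinement pass for the last bin.

```python
def robust_bins(data, n_bins):
    """Return right-edge bin boundaries so that bins hold roughly equal
    numbers of data points, keeping runs of identical values in one bin."""
    xs = sorted(data)
    n = len(xs)
    # Collapse the sorted data into (value, multiplicity) runs.
    runs = []
    for x in xs:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1
        else:
            runs.append([x, 1])
    target = max(1, n // n_bins)          # aimed-for points per bin
    boundaries, count = [], 0
    for value, mult in runs[:-1]:         # last run always ends the last bin
        count += mult
        if count >= target and len(boundaries) < n_bins - 1:
            boundaries.append(value)      # close the current bin here
            count = 0
    return boundaries
```

Because a whole run is assigned to one side of each boundary, heavily duplicated values (the 11%-99.9% identical points mentioned above) cannot straddle two bins.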
Our robust quantisation method described below is designed to cope with the above problems and to achieve maximal uniformity. In our method data points are first sorted in ascending order, the N_i identical points are then identified, and the number of non-identical data points in each bin is estimated as N_p = (N - N_i)/N_b. Bin boundaries or partitions are determined as follows. The right boundary of a bin is initially set to the next N_p-th data point in the ordered data sequence; the number of identical points on both sides of the boundary is identified; these are either included in the current or the next bin. If the number of non-identical data points in the last bin is N_l and N_l ≥ (N_p + N_b), N_p may be increased by (N_l - N_p)/N_b and the above partition process may be repeated to increase the uniformity. The boundaries of the bins obtained become parameters of the encoder in Figure 2. In general it is appropriate to choose N_b such that each bin contains a number of samples that is larger than the k nearest neighbours for the optimal classification.

3 THE PRESENCE ARCHITECTURE
The pattern match and store engine of the CMM k-NN classifier has been implemented using a novel hardware based CMM architecture, i.e. PRESENCE.
3.1 ARCHITECTURE DESIGN
Important design decisions include the use of cheap memory, and not embedding both the weight storage and the training and testing in hardware (VLSI). This arises because the applications commonly use CMMs with over 100 Mb of weight memory, which would be difficult and expensive to implement in custom silicon. VME and PCI are chosen to host on industry standard buses and to allow widespread application. The PRESENCE architecture implements the control logic and accumulators, i.e. the core of the CMM. As shown in Figure 3a, a binary input selects rows from the CMM that are added, thresholded using L-max (Austin & Stonham 1987) or fixed global thresholding, and then returned to the host for further processing. The PRESENCE architecture shown in Figure 3b consists of a bus interface, a buffer memory which allows interleaving of memory transfer and operation of the PRESENCE system, and a SATCON and SATSUM combination that accumulates and thresholds the weights. The data bus connects to a pair of memory spaces, each of which contains a control block, an input block and an output block. Thus the PRESENCE card is a memory mapped device that uses interrupts to confirm the completion of each operation. For efficiency, two memory input/output areas are provided to be acted on from the external bus and used by the card. The control memory input block feeds to the control unit, which is an FPGA device. The input data are fed to the weights, and the memory area read is then passed to a block of accumulators. In our current implementation the data width of each FPGA device is 32 bits, which allows us to add a 32 bit row from the weights memory in one cycle per device.

Figure 3: (a) correlation matrix memory (a sparse input code selects rows of the weights, which are summed into v and thresholded to give the separator output s), and (b) overall architecture of PRESENCE

Currently we have 16 Mb of 25 ns static memory implemented on the VME card, and 128 Mb of dynamic (60 ns) memory on the PCI card. The accumulators are implemented along with the thresholding logic on another FPGA device (SATSUM). To enable the SATSUM processors to operate faster, a 5 stage pipeline architecture was used, and the data accumulation time is reduced from 175 ns to 50 ns. All PRESENCE operations are supported by a C++ library that is used in all AURA applications. The design of the SATCON allows many SATSUM devices to be used in parallel in a SIMD configuration. The VME implementation uses 4 devices per board, giving a 128 bit wide data path.
In addition the PCI version allows daisy chaining of cards, allowing a 4 card set for a 512 bit wide data path. The complete VME card assembly is shown in Figure 4. The SATCON and SATSUM devices are mounted on a daughter board for simple upgrading and alteration. The weights memory, buffer memory and VME interface are held on the mother board.

Figure 4: The VME based PRESENCE card: (a) motherboard, and (b) daughterboard

3.2 PERFORMANCE
By an analysis of the state machines used in the SATCON device the time complexity of the approach can be calculated. Equation 4 is used to calculate the processing time T, in seconds, to recall the data with N index values, a separator size of S, R 32 bit SATSUM devices, and a clock period of C:

T = C[23 + ((S - 1)/32R + 1)(N + 3S + 2R)]    (4)

A comparison with a Silicon Graphics 133 MHz R4600SC Indy in Table 1 shows the speed-up of the matrix operation (Equation 2) for our VME implementation (128 bits wide) using a fixed threshold. The values for processing rate are given in millions of binary weight additions per second (MW/s). The system cycle time needed to sum a row of weights into the counters (i.e. the time to accumulate one line) is 50 ns for the VME version and 100 ns for the PCI version. In the PCI form, we will use 4 closely coupled cards, which result in a speed-up of 432. The build cost of the VME card was half the cost of the baseline SGI Indy machine, when using 4 Mb of 20 ns static RAM. In the PCI version the cost is greatly reduced through the use of dynamic RAM devices, allowing a 128 Mb memory to be used for the same cost, giving an only 2x slower system with 32x as much memory per card (note that the 4 cards used in Table 1 hold 512 Mb of memory).
Table 1: Relative speed-up of the PRESENCE architecture

Platform                          Processing Rate   Relative Speed
Workstation                       11.8 MW/s         1
1 card VME implementation         2,557 MW/s        216
Four card PCI system (estimate)   17,114 MW/s       432

The training and recognition speeds of the system are approximately equal. This is particularly useful in on-line applications, where the system must learn to solve the problem incrementally as it is presented. In particular, the use of the system for high speed reasoning allows the rules in the system to be altered without the long training times of other systems. Furthermore our use of the system for a k-NN classifier also allows high speed operation compared with a conventional implementation of the classifier, while still allowing very fast training times.

4 RESULTS ON BENCHMARKS
The performance of the robust quantisation method and the CMM classifier has been evaluated on four benchmarks consisting of large sets of real world problems from the Statlog project (Michie & Spiegelhalter 1994), including a satellite image database, a letter image recognition database, a shuttle data set and an image segmentation data set. To visualise the result of quantisation, Figure 5a shows the distribution of the numbers of data points of the 8th feature of the image segment data for equal-size bins. The distribution represents the inherent characteristics of the data. Figure 5b shows that our robust quantisation (RQ) has resulted in the desired uniform distribution.

Figure 5: Distributions of the image segment data for (a) equal bins, (b) RQ bins

We compared the CMM classifier with the simple k-NN method, multi-layer perceptron (MLP) and radial basis function (RBF) networks (Zhou and Austin 1997).
In the evaluation we used the CMM software libraries developed in the AURA project at the University of York. Between 1 and 3 '1' bits are set in input vectors and separators. Experiments were conducted to study the influence of a CMM's size on the classification rate (c-rate) on the test data sets and on the speed-up measured against the k-NN method (as shown in Figure 6). The speed-up of the CMM classifier includes the encoding, training and test time. The effects of the number of bins N_b on the performance were also studied.

Figure 6: Effects of the CMM size on (a) c-rate and (b) speed-up on the satellite image data

Choices of the CMM size and the number of bins may be application dependent, for instance, in favour of speed or accuracy. In the experiments it was required that the speed-up be at least 4 times that of the k-NN method and the c-rate no more than 1% lower. Table 2 contains the speed-up of the MLP and RBF networks and the CMM on the four benchmarks. It is interesting to note that the k-NN method needed no training. The recall of the MLP and RBF networks was very fast, but their training was much slower than that of the CMM classifier. The recall speed-up of the CMM was 6-23 times, and the overall speed-up (including training and recall time) was 4-15 times. When using PRESENCE, i.e. the dedicated CMM hardware, the speed of the CMM was further increased over 3 times. This is much less than the speed-up of 216 given in Table 1 because the recovery of separators and the k-NN classification are performed in software.
Table 2: Speed-up of MLP, RBF and CMM relative to the simple k-NN method

              Image segment    Satellite image   Letter            Shuttle
Method        training  test   training  test    training  test    training  test
MLPN          0.04      18     0.2       28.4    0.2       96.5    4.2       587.2
RBFN          0.09      9      0.07      20.3    0.3       66.4    1.8       469.7
simple k-NN   -         1      -         1       -         1       -         1
CMM           18        9      15.8      5.7     24.6      6.8     43        23

The classification rates of the four methods are given in Table 3, which shows that the CMM classifier performed only 0-1% less accurately than the k-NN method.

Table 3: Classification rates of four methods on four benchmarks

              Image segment   Satellite image   Letter   Shuttle
MLPN          0.950           0.914             0.923    0.998
RBFN          0.939           0.914             0.941    0.997
simple k-NN   0.956           0.906             0.954    0.999
CMM           0.948           0.901             0.945    0.999

5 CONCLUSIONS
A novel classifier has been presented, which uses a binary CMM for storing and matching a large number of patterns efficiently, and the k-NN rule for classification. The RU encoder converts numerical inputs into binary ones with the maximally achievable uniformity to meet the requirements of the CMM. Experimental results on the four benchmarks show that the CMM classifier, compared with the simple k-NN method, gave slightly lower classification accuracy (less than 1% lower) and over 4 times the speed in software and 12 times the speed in hardware. Therefore our method has resulted in a generic and fast classifier. This paper has also described a hardware implementation of an FPGA based chip set and a processor card that will support the execution of binary CMMs. It has shown the viability of using a simple binary neural network to achieve high processing rates. The approach allows both recognition and training to be achieved at speeds well above two orders of magnitude faster than conventional workstations at a much lower cost than the workstation. The system is scaleable to very large problems with very large weight arrays.
Current research is aimed at showing that the system is scaleable, evaluating methods for the acceleration of the pre- and post-processing tasks, and considering greater integration of the elements of the processor through VLSI. For more details of the AURA project and the hardware described in this paper see http://www.cs.york.ac.uk/arch/nn/aura.html.

Acknowledgements
We acknowledge British Aerospace and the Engineering and Physical Sciences Research Council (grant nos. GR/K 41090 and GR/L 74651) for sponsoring the research. Our thanks are given to R. Pack, A. Moulds, Z. Ulanowski, R. Jennison and K. Lees for their support.

References
Willshaw, D.J., Buneman, O.P. & Longuet-Higgins, H.C. (1969) Non-holographic associative memory. Nature; 222:960-962.
Austin, J. (1996) AURA, a distributed associative memory for high speed symbolic reasoning. In: Ron Sun (ed), Connectionist Symbolic Integration. Kluwer.
Turner, M. & Austin, J. (1997) Matching performance of binary correlation matrix memories. Neural Networks; 10:1637-1648.
Dasarathy, B.V. (1994) Minimal consistent set (MCS) identification for optimal nearest neighbor decision system design. IEEE Trans. Systems Man Cybernet; 24:511-517.
Grother, P.J., Candela, G.T. & Blue, J.L. (1997) Fast implementations of nearest neighbor classifiers. Pattern Recognition; 30:459-465.
Austin, J. & Stonham, T.J. (1987) An associative memory for use in image recognition and occlusion analysis. Image and Vision Computing; 5:251-261.
Michie, D., Spiegelhalter, D.J. & Taylor, C.C. (1994) Machine Learning, Neural and Statistical Classification (Chapter 9). New York, Ellis Horwood.
Zhou, P. & Austin, J. (1998) Learning criteria for training neural network classifiers. Neural Computing and Applications Forum; 7:334-342.

PART VI SPEECH, HANDWRITING AND SIGNAL PROCESSING
1998
Exploring Unknown Environments with Real-Time Search or Reinforcement Learning
Sven Koenig
College of Computing, Georgia Institute of Technology
skoenig@cc.gatech.edu

Abstract
Learning Real-Time A* (LRTA*) is a popular control method that interleaves planning and plan execution and has been shown to solve search problems in known environments efficiently. In this paper, we apply LRTA* to the problem of getting to a given goal location in an initially unknown environment. Uninformed LRTA* with maximal lookahead always moves on a shortest path to the closest unvisited state, that is, to the closest potential goal state. This was believed to be a good exploration heuristic, but we show that it does not minimize the worst-case plan-execution time compared to other uninformed exploration methods. This result is also of interest to reinforcement-learning researchers since many reinforcement learning methods use asynchronous dynamic programming, interleave planning and plan execution, and exhibit optimism in the face of uncertainty, just like LRTA*.

1 Introduction
Real-time (heuristic) search methods are domain-independent control methods that interleave planning and plan execution. They are based on agent-centered search [Dasgupta et al., 1994; Koenig, 1996], which restricts the search to a small part of the environment that can be reached from the current state of the agent with a small number of action executions. This is the part of the environment that is immediately relevant for the agent in its current situation. The most popular real-time search method is probably the Learning Real-Time A* (LRTA*) method [Korf, 1990]. It has a solid theoretical foundation and the following advantageous properties: First, it allows for fine-grained control over how much planning to do between plan executions and thus is an any-time contract algorithm [Russell and Zilberstein, 1991].
Second, it can use heuristic knowledge to guide planning, which reduces planning time without sacrificing solution quality. Third, it can be interrupted at any state and resume execution at a different state. Fourth, it amortizes learning over several search episodes, which allows it to find plans with suboptimal plan-execution time fast and then to improve the plan-execution time as it solves similar planning tasks, until its plan-execution time is optimal. Thus, LRTA* always has a small sum of planning and plan-execution time, and it minimizes the plan-execution time in the long run in case similar planning tasks unexpectedly repeat. This is important since no search method that executes actions before it has solved a planning task completely can guarantee to minimize the plan-execution time right away.

Figure 1: Uninformed LRTA*
Initially, u(s) = 0 for all s ∈ S.
1. s_current := s_start.
2. If s_current ∈ G, then stop successfully.
3. Generate a local search space S_lss ⊆ S with s_current ∈ S_lss and S_lss ∩ G = ∅.
4. Update u(s) for all s ∈ S_lss (Figure 2).
5. a := one-of argmin_{a ∈ A(s_current)} u(succ(s_current, a)).
6. Execute action a.
7. s_current := succ(s_current, a).
8. If s_current ∈ S_lss, then go to 5.
9. Go to 2.

Figure 2: Value-Update Step
1. For all s ∈ S_lss: u(s) := ∞.
2. If u(s) < ∞ for all s ∈ S_lss, then return.
3. s' := one-of argmin_{s ∈ S_lss: u(s) = ∞} min_{a ∈ A(s)} u(succ(s, a)).
4. If min_{a ∈ A(s')} u(succ(s', a)) = ∞, then return.
5. u(s') := 1 + min_{a ∈ A(s')} u(succ(s', a)).
6. Go to 2.

Real-time search methods have been shown to be efficient alternatives to traditional search methods in known environments. In this paper, we investigate real-time search methods in unknown environments. In such environments, real-time search methods allow agents to gather information early. This information can then be used to resolve some of the uncertainty and thus reduce the amount of planning done for unencountered situations.
We study robot-exploration tasks without actuator and sensor uncertainty, where the sensors on-board the robot can uniquely identify its location and the neighboring locations. The robot does not know the map in advance, and thus has to explore its environment sufficiently to find the goal and a path to it. A variety of methods can solve these tasks, including LRTA*. The proceedings of the AAAI-97 Workshop on On-Line Search [Koenig et al., 1997] give a good overview of some of these techniques. In this paper, we study whether uninformed LRTA* is able to minimize the worst-case plan-execution time over all state spaces with the same number of states provided that its lookahead is sufficiently large. Uninformed LRTA* with maximal lookahead always moves on a shortest path to the closest unvisited state, that is, to the closest potential goal state; it exhibits optimism in the face of uncertainty [Moore and Atkeson, 1993]. We show that this exploration heuristic is not as good as it was believed to be. This solves the central problem left open in [Pemberton and Korf, 1992] and improves our understanding of LRTA*. Our results also apply to learning control for tasks other than robot exploration, for example the control tasks studied in [Davies et al., 1998]. They are also of interest to reinforcement-learning researchers since many reinforcement learning methods use asynchronous dynamic programming, interleave planning and plan execution, and exhibit optimism in the face of uncertainty, just like LRTA* [Barto et al., 1995; Kearns and Singh, 1998].

2 LRTA*
We use the following notation to describe LRTA*: S denotes the finite set of states of the environment, s_start ∈ S the start state, and ∅ ≠ G ⊆ S the set of goal states. The number of states is n := |S|. A(s) ≠ ∅ is the finite, nonempty set of actions that can be executed in state s ∈ S. succ(s, a) denotes the successor state that results from the execution of action a ∈ A(s) in state s ∈ S.
We also use two operators with the following semantics: Given a set X, the expression "one-of X" returns an element of X according to an arbitrary rule. A subsequent invocation of "one-of X" can return the same or a different element. The expression "argmin_{x ∈ X} f(x)" returns the elements x ∈ X that minimize f(x), that is, the set {x ∈ X | f(x) = min_{x' ∈ X} f(x')}. We model environments (topological maps) as state spaces that correspond to undirected graphs, and assume that it is indeed possible to reach a goal state from the start state. We measure distances and thus plan-execution time in action executions, which is reasonable if every action can be executed in about the same amount of time. The graph is initially unknown. The robot can always observe whether its current state is a goal state, how many actions can be executed in it, and which successor states they lead to, but not whether the successor states are goal states. Furthermore, the robot can identify the successor states when it observes them again at a later point in time. This assumption is realistic, for example, if the states look sufficiently different or the robot has a global positioning system (GPS) available. LRTA* learns a map of the environment and thus needs memory proportional to the number of states and actions observed. It associates a small amount of information with the states in its map. In particular, it associates a u-value u(s) with each state s ∈ S. The u-values approximate the goal distances of the states. They are updated as the search progresses and used to determine which actions to execute. Figure 1 describes LRTA*: LRTA* first checks whether it has already reached a goal state and thus can terminate successfully (Line 2). If not, it generates the local search space S_lss ⊆ S (Line 3).
While we require only that the current state is part of the local search space and the goal states are not [Barto et al., 1995], in practice LRTA* constructs S_lss by searching forward from the current state. LRTA* then updates the u-values of all states in the local search space (Line 4), as shown in Figure 2. The value-update step assigns each state its goal distance under the assumption that the u-values of all states outside of the local search space correspond to their correct goal distances. Formally, if u(s) ∈ [0, ∞] denotes the u-values before the value-update step and û(s) ∈ [0, ∞] denotes the u-values afterwards, then û(s) = 1 + min_{a∈A(s)} u(succ(s, a)) for all s ∈ S_lss, and û(s) = u(s) otherwise. Based on these u-values, LRTA* decides which action to execute next (Line 5). It greedily chooses the action that minimizes the u-value of the successor state (ties are broken arbitrarily), because the u-values approximate the goal distances and LRTA* attempts to decrease its goal distance as much as possible. Finally, LRTA* executes the selected action (Line 6) and updates its current state (Line 7). Then, if the new state is still part of the local search space used previously, LRTA* selects another action for execution based on the current u-values (Line 8). Otherwise, it iterates (Line 9). (The behavior of LRTA* with either minimal or maximal lookahead does not change if Line 8 is deleted.)

3 Plan-Execution Time of LRTA* for Exploration

In this section, we study the behavior of LRTA* with minimal and maximal lookaheads in unknown environments. We assume that no a-priori heuristic knowledge is available and, thus, that LRTA* is uninformed. In this case, the u-values of all unvisited states are zero and do not need to be maintained explicitly. Minimal Lookahead: The lookahead of LRTA* is minimal if the local search space contains only the current state. LRTA* with minimal lookahead performs almost no planning between plan executions.
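To make the value-update and action-selection steps concrete, here is a minimal Python sketch of uninformed LRTA* with minimal lookahead (the local search space is just the current state). The graph interface (succ, is_goal) and all names are our own illustration, not code from the paper.

```python
def lrta_star(succ, is_goal, start, max_steps=10_000):
    """Uninformed LRTA* with minimal lookahead (sketch).

    succ(s) returns the successor states of s; is_goal(s) tests whether
    s is a goal state. u-values of unseen states default to 0, matching
    the uninformed case described in the text.
    """
    u = {}                      # u-values approximate goal distances
    s = start
    path = [s]
    for _ in range(max_steps):
        if is_goal(s):
            return path
        # Value-update step: u(s) = 1 + min_a u(succ(s, a))
        u[s] = 1 + min(u.get(t, 0) for t in succ(s))
        # Action-selection step: greedily minimize the successor u-value
        s = min(succ(s), key=lambda t: u.get(t, 0))
        path.append(s)
    raise RuntimeError("step limit exceeded")
```

On a small line graph this reaches the goal within the sum-of-goal-distances bound discussed below.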
Its behavior in initially known and unknown environments is identical. Figure 3 shows an example. Let gd(s) denote the goal distance of state s. Then, according to one of our previous results, uninformed LRTA* with any lookahead reaches a goal state after at most Σ_{s∈S} gd(s) action executions [Koenig and Simmons, 1995]. Since Σ_{s∈S} gd(s) ≤ Σ_{i=0}^{n−1} i = n²/2 − n/2, uninformed LRTA* with any lookahead reaches a goal state after O(n²) action executions.

1006 S. Koenig

[Figure 3: Example, showing LRTA* with minimal lookahead and LRTA* with maximal lookahead. Legend: visited vertex (known not to be a goal vertex); unvisited (but known) vertex (unknown whether it is a goal vertex); current vertex of the robot; u-value of the vertex; edge traversed in at least one direction; untraversed edge; local search space.]
[Figure 4: A Planar Undirected Graph; all edge lengths are one.]

This upper bound on the plan-execution time is tight in the worst case for uninformed LRTA* with minimal lookahead, even if the number of actions that can be executed in any state is bounded from above by a small constant (here: three). Figure 4, for example, shows a rectangular grid-world for which uninformed LRTA* with minimal lookahead reaches a goal state in the worst case only after Θ(n²) action executions. In particular, LRTA* can traverse the state sequence that is printed by the following program in pseudo code. The scope of the for-statements is shown by indentation.

for i := n-3 downto n/2 step 2
    for j := 1 to i step 2
        print j
    for j := i+1 downto 2 step 2
        print j
for i := 1 to n-1 step 2
    print i

In this case, LRTA* executes 3n²/16 − 3/4 actions before it reaches the goal state (for n ≥ 2 with n mod 4 = 2). For example, for n = 10, it traverses the state sequence s_1, s_3, s_5, s_7, s_8, s_6, s_4, s_2, s_1, s_3, s_5, s_6, s_4, s_2, s_1, s_3, s_5, s_7, and s_9.
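The pseudo code above can be transcribed into runnable Python, which lets one check the stated action count and the n = 10 sequence. The function name is ours.

```python
def worst_case_sequence(n):
    """Transcription of the pseudo code above: the sequence of state
    indices that uninformed LRTA* with minimal lookahead can traverse
    in the grid-world of Figure 4 (for n >= 2 with n mod 4 == 2)."""
    seq = []
    for i in range(n - 3, n // 2 - 1, -2):  # for i := n-3 downto n/2 step 2
        seq.extend(range(1, i + 1, 2))      # for j := 1 to i step 2
        seq.extend(range(i + 1, 1, -2))     # for j := i+1 downto 2 step 2
    seq.extend(range(1, n, 2))              # for i := 1 to n-1 step 2
    return seq
```

For n = 10 this yields the 19-state sequence given in the text, i.e. 18 = 3·10²/16 − 3/4 action executions.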
Maximal Lookahead: As we increase the lookahead of LRTA*, we expect that its plan-execution time tends to decrease, because LRTA* uses more information to decide which action to execute next. This makes it interesting to study LRTA* with maximal lookahead.

[Figure 5: Another Planar Undirected Graph (m = 3), showing branches of length 3, the start and goal vertices, the current location of LRTA*, and the order in which the remaining unvisited vertices are visited. Legend: visited vertex; unvisited vertex; edge traversed in at least one direction; untraversed edge.]

The lookahead of LRTA* is maximal in known environments if the local search space contains all non-goal states. In this case, LRTA* performs a complete search without interleaving planning and plan execution, and follows a shortest path from the start state to a closest goal state. Thus, it needs gd(s_start) action executions. No other method can do better than that. The maximal lookahead of LRTA* is necessarily smaller in initially unknown environments than in known environments, because its value-update step can only search the known part of the environment. Therefore, the lookahead of LRTA* is maximal in unknown environments if the local search space contains all visited non-goal states. Figure 3 shows an example. Uninformed LRTA* with maximal lookahead always moves on a shortest path to the closest unvisited state, that is, to the closest potential goal state. This appears to be a good exploration heuristic. [Pemberton and Korf, 1992] call this behavior "incremental best-first search," but were not able to prove or disprove whether this locally optimal search strategy is also globally optimal. Since this exploration heuristic has been used on real mobile robots [Thrun et al., 1998], we study how well its plan-execution time compares to the plan-execution time of other uninformed exploration methods.
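The exploration behavior just described, repeatedly moving on a shortest path in the known graph to the closest unvisited state, can be sketched as follows. This is our illustrative reading of the heuristic, not code from the paper; note that the breadth-first search only expands visited states, since the robot does not know the successors of states it has not visited.

```python
from collections import deque

def explore_closest_unvisited(succ, is_goal, start):
    """Sketch of the exploration heuristic of uninformed LRTA* with
    maximal lookahead: repeatedly move along a shortest known path to
    the closest unvisited state until a goal is discovered."""
    visited = {start}
    route = [start]
    while not is_goal(route[-1]):
        # BFS in the known part of the graph for the closest unvisited state
        parent = {route[-1]: None}
        queue = deque([route[-1]])
        target = None
        while queue:
            s = queue.popleft()
            if s not in visited:
                target = s          # closest unvisited state found
                break
            for t in succ(s):       # successors of visited states are known
                if t not in parent:
                    parent[t] = s
                    queue.append(t)
        if target is None:
            raise RuntimeError("no unvisited state reachable")
        # walk the BFS tree back to reconstruct the path and follow it
        path = []
        while target is not None:
            path.append(target)
            target = parent[target]
        route.extend(reversed(path[:-1]))
        visited.add(route[-1])
    return route
```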
We show that the worst-case plan-execution time of uninformed LRTA* with maximal lookahead in unknown environments is Ω((log n / log log n) · n) action executions and thus grows faster than linearly in the number of states n. It follows that the plan-execution time of LRTA* is not optimal in the worst case, since depth-first search needs a number of action executions in the worst case that grows only linearly in the number of states. Consider the graph shown in Figure 5, which is a variation of a graph in [Koenig and Smirnov, 1996]. It consists of a stem with several branches. Each branch consists of two parallel paths of the same length that connect the stem to a single edge. The length of the branch is the length of each of the two paths. The stem has length m^m for some integer m ≥ 3 and consists of the vertices v_0, v_1, ..., v_{m^m}. For each integer i with 1 ≤ i ≤ m there are m^{m−i} branches of length Σ_{j=0}^{i−2} m^j each (including branches of length zero). These branches attach to the stem at the vertices v_{j·m^i} for integers j; if i is even, then 0 ≤ j ≤ m^{m−i} − 1, otherwise 1 ≤ j ≤ m^{m−i}. There is one additional single edge that attaches to vertex v_0. The vertex v_{m^m} is the starting vertex. The vertex at the end of the single edge of the longest branch is the goal vertex. Notice that the graph is planar. This is a desirable property, since non-planar graphs are, in general, rather unrealistic models of maps. Uninformed LRTA* with maximal lookahead can traverse the stem repeatedly forward and backward, and the resulting plan-execution time is large compared to the number of vertices that are necessary to mislead LRTA* into this behavior. In particular, LRTA* can behave as follows: It starts at vertex v_{m^m} and traverses the whole stem and all branches, excluding the single edges at their end, and finally traverses the additional edge attached to vertex v_0, as shown in Figure 5. At this point, LRTA* knows all vertices.
It then traverses the whole stem, visiting the vertices at the ends of the single edges of the branches of length 0. It then switches directions and travels along the whole stem in the opposite direction, this time visiting the vertices at the end of the single edges of the branches of length m, and so forth, switching directions repeatedly. It succeeds when it finally uses the longest branch and discovers the goal vertex. To summarize, the vertices at the ends of the branches are tried out in the order indicated in Figure 5. The total number of edge traversals is Ω(m^{m+1}), since the stem of length m^m is traversed m + 1 times. To be precise, the total number of edge traversals is (m^{m+3} + 3m^{m+2} − 8m^{m+1} + 2m² − m + 3)/(m² − 2m + 1). It holds that n = Θ(m^m), since n = (3m^{m+2} − 5m^{m+1} − m^m + m^{m−1} + 2m² − 2m + 2)/(m² − 2m + 1). This implies that m = Ω(log n / log log n), since with log_k n on the order of m log_k m and log_k log_k n on the order of log_k m + log_k log_k m, it holds for k > 1 and all sufficiently large m (to be precise: m with m ≥ k) that

m log_k m / (log_k m + log_k log_k m) = m / (1 + log_k log_k m / log_k m) ≤ m.

Put together, it follows that the total number of edge traversals is Ω(m^{m+1}) = Ω(m n) = Ω((log n / log log n) · n). (We also performed a simulation that confirmed our theoretical results.) The graph from Figure 5 can be modified to cause LRTA* to behave similarly even if the assumptions about the capabilities of the robot or the environment vary from our assumptions here, including the case where the robot can observe only the actions that lead to unvisited states but not the states themselves.

4 Future Work

Our example provided a lower bound on the plan-execution time of uninformed LRTA* with maximal lookahead in unknown environments. The lower bound is barely super-linear in the number of states. A tight bound is currently unknown, although upper bounds are known. A trivial upper bound, for example, is O(n²), since LRTA* executes at most n − 1 actions before it visits another state that it has not visited before, and there are only n states to visit.
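As a quick consistency check, the two closed-form expressions above can be evaluated numerically (function names are ours); the ratio of edge traversals to states indeed grows with m, confirming the super-linear behavior.

```python
def traversals(m):
    """Total number of edge traversals, as quoted in the text."""
    num = m**(m + 3) + 3 * m**(m + 2) - 8 * m**(m + 1) + 2 * m * m - m + 3
    return num // (m * m - 2 * m + 1)   # denominator m^2 - 2m + 1 = (m-1)^2

def num_states(m):
    """Number of states n of the graph, as quoted in the text."""
    num = (3 * m**(m + 2) - 5 * m**(m + 1) - m**m + m**(m - 1)
           + 2 * m * m - 2 * m + 2)
    return num // (m * m - 2 * m + 1)
```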
A tighter upper bound follows directly from [Koenig and Smirnov, 1996]. It was surprisingly difficult to construct our example. It is currently unknown, and therefore a topic of future research, for which classes of graphs the worst-case plan-execution time of LRTA* is optimal up to a constant factor, and whether these classes of graphs correspond to interesting and realistic environments. It is also currently unknown how the bounds change as LRTA* becomes more informed about where the goal states are.

5 Conclusions

Our work provides a first analysis of uninformed LRTA* in unknown environments. We studied versions of LRTA* with minimal and maximal lookaheads and showed that their worst-case plan-execution time is not optimal, not even up to a constant factor. The worst-case plan-execution time of depth-first search, for example, is smaller than that of LRTA* with either minimal or maximal lookahead. This is not to say that one should always prefer depth-first search over LRTA*, since, for example, LRTA* can use heuristic knowledge to direct its search towards the goal states. LRTA* can also be interrupted at any location and get restarted at a different location. If the batteries of the robot need to get recharged during exploration, for instance, LRTA* can be interrupted and later get restarted at the charging station. While depth-first search could be modified to have these properties as well, it would lose some of its simplicity.

Acknowledgments

Thanks to Yury Smirnov for our collaboration on previous work which this paper extends. Thanks also to the reviewers for their suggestions for improvements and future research directions. Unfortunately, space limitations prevented us from implementing all of their suggestions in this paper.

References

(Barto et al., 1995) Barto, A.; Bradtke, S.; and Singh, S. 1995. Learning to act using real-time dynamic programming. Artificial Intelligence 73(1):81-138.
(Dasgupta et al., 1994) Dasgupta, P.; Chakrabarti, P.; and DeSarkar, S. 1994. Agent searching in a tree and the optimality of iterative deepening. Artificial Intelligence 71:195-208.
(Davies et al., 1998) Davies, S.; Ng, A.; and Moore, A. 1998. Applying online search techniques to reinforcement learning. In Proceedings of the National Conference on Artificial Intelligence. 753-760.
(Kearns and Singh, 1998) Kearns, M. and Singh, S. 1998. Near-optimal reinforcement learning in polynomial time. In Proceedings of the International Conference on Machine Learning. 260-268.
(Koenig and Simmons, 1995) Koenig, S. and Simmons, R.G. 1995. Real-time search in non-deterministic domains. In Proceedings of the International Joint Conference on Artificial Intelligence. 1660-1667.
(Koenig and Smirnov, 1996) Koenig, S. and Smirnov, Y. 1996. Graph learning with a nearest neighbor approach. In Proceedings of the Conference on Computational Learning Theory. 19-28.
(Koenig et al., 1997) Koenig, S.; Blum, A.; Ishida, T.; and Korf, R., editors 1997. Proceedings of the AAAI-97 Workshop on On-Line Search. AAAI Press.
(Koenig, 1996) Koenig, S. 1996. Agent-centered search: Situated search with small look-ahead. In Proceedings of the National Conference on Artificial Intelligence. 1365.
(Korf, 1990) Korf, R. 1990. Real-time heuristic search. Artificial Intelligence 42(2-3):189-211.
(Moore and Atkeson, 1993) Moore, A. and Atkeson, C. 1993. Prioritized sweeping: Reinforcement learning with less data and less time. Machine Learning 13:103-130.
(Pemberton and Korf, 1992) Pemberton, J. and Korf, R. 1992. Incremental path planning on graphs with cycles. In Proceedings of the International Conference on Artificial Intelligence Planning Systems. 179-188.
(Russell and Zilberstein, 1991) Russell, S. and Zilberstein, S. 1991. Composing real-time systems. In Proceedings of the International Joint Conference on Artificial Intelligence. 212-217.
(Thrun et al., 1998) Thrun, S.; Bücken, A.; Burgard, W.; Fox, D.; Fröhlinghaus, T.; Hennig, D.; Hofmann, T.; Krell, M.; and Schmidt, T. 1998. Map learning and high-speed navigation in RHINO. In Kortenkamp, D.; Bonasso, R.; and Murphy, R., editors, Artificial Intelligence Based Mobile Robotics: Case Studies of Successful Robot Systems. MIT Press. 21-52.
Inference in Multilayer Networks via Large Deviation Bounds

Michael Kearns and Lawrence Saul
AT&T Labs Research, Shannon Laboratory
180 Park Avenue A-235, Florham Park, NJ 07932
{mkearns,lsaul}@research.att.com

Abstract

We study probabilistic inference in large, layered Bayesian networks represented as directed acyclic graphs. We show that the intractability of exact inference in such networks does not preclude their effective use. We give algorithms for approximate probabilistic inference that exploit averaging phenomena occurring at nodes with large numbers of parents. We show that these algorithms compute rigorous lower and upper bounds on marginal probabilities of interest, prove that these bounds become exact in the limit of large networks, and provide rates of convergence.

1 Introduction

The promise of neural computation lies in exploiting the information processing abilities of simple computing elements organized into large networks. Arguably one of the most important types of information processing is the capacity for probabilistic reasoning. The properties of undirected probabilistic models represented as symmetric networks have been studied extensively using methods from statistical mechanics (Hertz et al., 1991). Detailed analyses of these models are possible by exploiting averaging phenomena that occur in the thermodynamic limit of large networks. In this paper, we analyze the limit of large, multilayer networks for probabilistic models represented as directed acyclic graphs. These models are known as Bayesian networks (Pearl, 1988; Neal, 1992), and they have different probabilistic semantics than symmetric neural networks (such as Hopfield models or Boltzmann machines). We show that the intractability of exact inference in multilayer Bayesian networks does not preclude their effective use. Our work builds on earlier studies of variational methods (Jordan et al., 1997).
We give algorithms for approximate probabilistic inference that exploit averaging phenomena occurring at nodes with N ≫ 1 parents. We show that these algorithms compute rigorous lower and upper bounds on marginal probabilities of interest, prove that these bounds become exact in the limit N → ∞, and provide rates of convergence.

2 Definitions and Preliminaries

A Bayesian network is a directed graphical probabilistic model, in which the nodes represent random variables, and the links represent causal dependencies. The joint distribution of this model is obtained by composing the local conditional probability distributions (or tables), Pr[child|parents], specified at each node in the network. For networks of binary random variables, so-called transfer functions provide a convenient way to parameterize conditional probability tables (CPTs). A transfer function is a mapping f : [−∞, ∞] → [0, 1] that is everywhere differentiable and satisfies f′(x) ≥ 0 for all x (thus, f is nondecreasing). If f′(x) ≤ α for all x, we say that f has slope α. Common examples of transfer functions of bounded slope include the sigmoid f(x) = 1/(1 + e^{−x}), the cumulative Gaussian f(x) = ∫_{−∞}^{x} dt e^{−t²}/√π, and the noisy-OR f(x) = 1 − e^{−x}. Because the value of a transfer function f is bounded between 0 and 1, it can be interpreted as the conditional probability that a binary random variable takes on a particular value. One use of transfer functions is to endow multilayer networks of soft-thresholding computing elements with probabilistic semantics. This motivates the following definition:

Definition 1 For a transfer function f, a layered probabilistic f-network has:
• Nodes representing binary variables {X_i^ℓ}, ℓ = 1, ..., L and i = 1, ..., N. Thus, L is the number of layers, and each layer contains N nodes.
• For every pair of nodes X_j^{ℓ−1} and X_i^ℓ in adjacent layers, a real-valued weight θ_{ij}^{ℓ−1} from X_j^{ℓ−1} to X_i^ℓ.
• For every node X_i^1 in the first layer, a bias p_i.
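For concreteness, the three transfer functions can be implemented directly. This is our sketch; note that the noisy-OR form is usually applied to nonnegative inputs.

```python
import math

def sigmoid(x):
    """f(x) = 1 / (1 + e^{-x}); slope bounded by alpha = 1/4."""
    return 1.0 / (1.0 + math.exp(-x))

def cumulative_gaussian(x):
    """Integral of e^{-t^2}/sqrt(pi) from -infinity to x, via erf."""
    return 0.5 * (1.0 + math.erf(x))

def noisy_or(x):
    """f(x) = 1 - e^{-x} (meaningful for x >= 0)."""
    return 1.0 - math.exp(-x)
```

All three are nondecreasing and map into [0, 1] on their respective domains.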
We will sometimes refer to nodes in layer 1 as inputs, and to nodes in layer L as outputs. A layered probabilistic f-network defines a joint probability distribution over all of the variables {X_i^ℓ} as follows: each input node X_i^1 is independently set to 1 with probability p_i, and to 0 with probability 1 − p_i. Inductively, given binary values X_j^{ℓ−1} = x_j^{ℓ−1} ∈ {0, 1} for all of the nodes in layer ℓ − 1, the node X_i^ℓ is set to 1 with probability f(Σ_{j=1}^{N} θ_{ij}^{ℓ−1} x_j^{ℓ−1}). Among other uses, multilayer networks of this form have been studied as hierarchical generative models of sensory data (Hinton et al., 1995). In such applications, the fundamental computational problem (known as inference) is that of estimating the marginal probability of evidence at some number of output nodes, say the first K ≤ N. (The computation of conditional probabilities, such as diagnostic queries, can be reduced to marginals via Bayes' rule.) More precisely, one wishes to estimate Pr[X_1^L = x_1, ..., X_K^L = x_K] (where x_i ∈ {0, 1}), a quantity whose exact computation involves an exponential sum over all the possible settings of the uninstantiated nodes in layers 1 through L − 1, and is known to be computationally intractable (Cooper, 1990).

3 Large Deviation and Union Bounds

One of our main weapons will be the theory of large deviations. As a first illustration of this theory, consider the input nodes {X_j^1} (which are independently set to 0 or 1 according to their biases p_j) and the weighted sum Σ_{j=1}^{N} θ_{ij}^1 X_j^1 that feeds into the i-th node X_i^2 in the second layer. A typical large deviation bound (Kearns & Saul, 1997) states that for all ε > 0, Pr[|Σ_{j=1}^{N} θ_{ij}^1 (X_j^1 − p_j)| > ε] ≤ 2e^{−2ε²/(NΘ²)}, where Θ is the largest weight in the network. If we make the scaling assumption that each weight θ_{ij}^1 is bounded by τ/N for some constant τ (thus, Θ ≤ τ/N), then we see that the probability of large (order 1) deviations of this weighted sum from its mean decays exponentially with N.
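The bound can be illustrated empirically. The following sketch is our own, with arbitrary parameter choices; it compares a Monte-Carlo estimate of the deviation probability with the quantity 2 exp(−2ε²/(NΘ²)).

```python
import math
import random

def deviation_prob(weights, probs, eps, trials=20000, seed=0):
    """Monte-Carlo estimate of Pr[|sum_j w_j (X_j - p_j)| > eps] for
    independent Bernoulli(p_j) variables X_j."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        dev = sum(w * (int(rng.random() < p) - p)
                  for w, p in zip(weights, probs))
        if abs(dev) > eps:
            hits += 1
    return hits / trials

def large_deviation_bound(weights, eps):
    """The bound 2 exp(-2 eps^2 / (N Theta^2)) quoted above, with Theta
    the largest weight magnitude and N the number of inputs."""
    n = len(weights)
    theta = max(abs(w) for w in weights)
    return 2.0 * math.exp(-2.0 * eps**2 / (n * theta**2))
```

With weights bounded by τ/N, doubling N visibly shrinks the bound, matching the exponential decay described in the text.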
(Our methods can also provide results under the weaker assumption that all weights are bounded by O(N^{−α}) for α > 1/2.) How can we apply this observation to the problem of inference? Suppose we are interested in the marginal probability Pr[X_i^2 = 1]. Then the large deviation bound tells us that with probability at least 1 − δ (where we define δ = 2e^{−2Nε²/τ²}), the weighted sum at node X_i^2 will be within ε of its mean value μ_i = Σ_{j=1}^{N} θ_{ij}^1 p_j. Thus, with probability at least 1 − δ, we are assured that Pr[X_i^2 = 1] is at least f(μ_i − ε) and at most f(μ_i + ε). Of course, the flip side of the large deviation bound is that with probability at most δ, the weighted sum may fall more than ε away from μ_i. In this case we can make no guarantees on Pr[X_i^2 = 1] aside from the trivial lower and upper bounds of 0 and 1. Combining both eventualities, however, we obtain the overall bounds:

(1 − δ) f(μ_i − ε) ≤ Pr[X_i^2 = 1] ≤ (1 − δ) f(μ_i + ε) + δ.    (1)

Equation (1) is based on a simple two-point approximation to the distribution over the weighted sum of inputs, Σ_{j=1}^{N} θ_{ij}^1 X_j^1. This approximation places one point, with weight 1 − δ, at either ε above or below the mean μ_i (depending on whether we are deriving the upper or lower bound); and the other point, with weight δ, at either −∞ or +∞. The value of δ depends on the choice of ε: in particular, as ε becomes smaller, we give more weight to the ±∞ point, with the trade-off governed by the large deviation bound. We regard the weight given to the ±∞ point as a throw-away probability, since with this weight we resort to the trivial bounds of 0 or 1 on the marginal probability Pr[X_i^2 = 1]. Note that the very simple bounds in Equation (1) already exhibit an interesting trade-off, governed by the choice of the parameter ε: namely, as ε becomes smaller, the throw-away probability δ becomes larger, while the terms f(μ_i ± ε) converge to the same value.
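The trade-off in Equation (1) is easy to explore numerically. This sketch assumes a sigmoid transfer function and the τ/N weight scaling; the parameter values are ours.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def marginal_bounds(mu, eps, n, tau):
    """Bounds of Equation (1) for a sigmoid transfer function:
    (1 - delta) f(mu - eps) <= Pr[X = 1] <= (1 - delta) f(mu + eps) + delta,
    with throw-away probability delta = 2 exp(-2 n eps^2 / tau^2)
    (capped at 1, where the bounds become trivial)."""
    delta = min(1.0, 2.0 * math.exp(-2.0 * n * eps**2 / tau**2))
    lower = (1.0 - delta) * sigmoid(mu - eps)
    upper = (1.0 - delta) * sigmoid(mu + eps) + delta
    return lower, upper
```

Very small ε makes δ dominate (trivial bounds), very large ε loosens f(μ ± ε); an intermediate ε minimizes the gap.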
Since the overall bounds involve products of f(μ_i ± ε) and 1 − δ, the optimal value of ε is the one that balances this competition between probable explanations of the evidence and improbable deviations from the mean. This trade-off is reminiscent of that encountered between energy and entropy in mean-field approximations for symmetric networks (Hertz et al., 1991). So far we have considered the marginal probability involving a single node in the second layer. We can also compute bounds on the marginal probabilities involving K > 1 nodes in this layer (which without loss of generality we take to be the nodes X_1^2 through X_K^2). This is done by considering the probability that one or more of the weighted sums entering these K nodes in the second layer deviate by more than ε from their means. We can upper bound this probability by Kδ by appealing to the so-called union bound, which simply states that the probability of a union of events is bounded by the sum of their individual probabilities. The union bound allows us to bound marginal probabilities involving multiple variables. For example, consider the marginal probability Pr[X_1^2 = 1, ..., X_K^2 = 1]. Combining the large deviation and union bounds, we find:

(1 − Kδ) ∏_{i=1}^{K} f(μ_i − ε) ≤ Pr[X_1^2 = 1, ..., X_K^2 = 1] ≤ (1 − Kδ) ∏_{i=1}^{K} f(μ_i + ε) + Kδ.    (2)

A number of observations are in order here. First, Equation (2) directly leads to efficient algorithms for computing the upper and lower bounds. Second, although for simplicity we have considered ε-deviations of the same size at each node in the second layer, the same methods apply to different choices of ε_i (and therefore δ_i) at each node. Indeed, variations in ε_i can lead to significantly tighter bounds, and thus we exploit the freedom to choose different ε_i in the rest of the paper. This results, for example, in bounds of the form:

(1 − Σ_{i=1}^{K} δ_i) ∏_{i=1}^{K} f(μ_i − ε_i) ≤ Pr[X_1^2 = 1, ..., X_K^2 = 1],  where δ_i = 2e^{−2Nε_i²/τ²}.    (3)

The reader is invited to study the small but important differences between this lower bound and the one in Equation (2). Third, the arguments leading to bounds on the marginal probability Pr[X_1^2 = 1, ..., X_K^2 = 1] generalize in a straightforward manner to other patterns of evidence besides all 1's. For instance, again just considering the lower bound, we have:

(1 − Σ_{i=1}^{K} δ_i) ∏_{i: x_i = 0} [1 − f(μ_i + ε_i)] ∏_{i: x_i = 1} f(μ_i − ε_i) ≤ Pr[X_1^2 = x_1, ..., X_K^2 = x_K],    (4)

where x_i ∈ {0, 1} are arbitrary binary values. Thus, together, the large deviation and union bounds provide the means to compute upper and lower bounds on the marginal probabilities over nodes in the second layer. Further details and consequences of these bounds for the special case of two-layer networks are given in a companion paper (Kearns & Saul, 1997); our interest here, however, is in the more challenging generalization to multilayer networks.

4 Multilayer Networks: Inference via Induction

In extending the ideas of the previous section to multilayer networks, we face the problem that the nodes in the second layer, unlike those in the first, are not independent. But we can still adopt an inductive strategy to derive bounds on marginal probabilities. The crucial observation is that, conditioned on the values of the incoming weighted sums at the nodes in the second layer, the variables {X_i^2} do become independent. More generally, conditioned on these weighted sums all falling "near" their means, an event whose probability we quantified in the last section, the nodes {X_i^2} become "almost" independent. It is exactly this near-independence that we now formalize and exploit inductively to compute bounds for multilayer networks. The first tool we require is an appropriate generalization of the large deviation bound, which does not rely on precise knowledge of the means of the random variables being summed.
Theorem 1 For all 1 ≤ j ≤ N, let X_j ∈ {0, 1} denote independent binary random variables, and let |τ_j| ≤ τ. Suppose that the means are bounded by |E[X_j] − p_j| ≤ Δ_j, where 0 < Δ_j ≤ p_j ≤ 1 − Δ_j. Then for all ε > (1/N) Σ_{j=1}^{N} |τ_j| Δ_j,

Pr[ |(1/N) Σ_{j=1}^{N} τ_j (X_j − p_j)| > ε ] ≤ 2 e^{−(2N/τ²)(ε − (1/N) Σ_{j=1}^{N} |τ_j| Δ_j)²}.    (5)

The proof of this result is omitted due to space considerations. Now, for the induction, consider the nodes in the ℓ-th layer of the network. Suppose we are told that for every i, the weighted sum Σ_{j=1}^{N} θ_{ij}^{ℓ−1} X_j^{ℓ−1} entering into the node X_i^ℓ lies in the interval [μ_i^ℓ − ε_i^ℓ, μ_i^ℓ + ε_i^ℓ], for some choice of the μ_i^ℓ and the ε_i^ℓ. Then the mean of node X_i^ℓ is constrained to lie in the interval [p_i^ℓ − Δ_i^ℓ, p_i^ℓ + Δ_i^ℓ], where

p_i^ℓ = ½ [f(μ_i^ℓ − ε_i^ℓ) + f(μ_i^ℓ + ε_i^ℓ)]    (6)
Δ_i^ℓ = ½ [f(μ_i^ℓ + ε_i^ℓ) − f(μ_i^ℓ − ε_i^ℓ)].    (7)

Here we have simply run the leftmost and rightmost allowed values for the incoming weighted sums through the transfer function, and defined the interval around the mean of unit X_i^ℓ to be centered around p_i^ℓ. Thus we have translated uncertainties on the incoming weighted sums to layer ℓ into conditional uncertainties on the means of the nodes X_i^ℓ in layer ℓ. To complete the cycle, we now translate these into conditional uncertainties on the incoming weighted sums to layer ℓ + 1. In particular, conditioned on the original intervals [μ_i^ℓ − ε_i^ℓ, μ_i^ℓ + ε_i^ℓ], what is the probability that for each i, Σ_{j=1}^{N} θ_{ij}^ℓ X_j^ℓ lies inside some new interval [μ_i^{ℓ+1} − ε_i^{ℓ+1}, μ_i^{ℓ+1} + ε_i^{ℓ+1}]? In order to make some guarantee on this probability, we set μ_i^{ℓ+1} = Σ_{j=1}^{N} θ_{ij}^ℓ p_j^ℓ and assume that ε_i^{ℓ+1} > Σ_{j=1}^{N} |θ_{ij}^ℓ| Δ_j^ℓ. These conditions suffice to ensure that the new intervals contain the (conditional) expected values of the weighted sums Σ_{j=1}^{N} θ_{ij}^ℓ X_j^ℓ, and that the new intervals are large enough to encompass the incoming uncertainties.
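Equations (6) and (7), together with the conditions on μ^{ℓ+1} and ε^{ℓ+1}, translate into a simple one-layer propagation step. The following sketch uses our own naming and a sigmoid transfer function.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def propagate_layer(mu, eps, weights, f=sigmoid):
    """One interval-propagation step (sketch of Equations (6)-(7) and the
    validity conditions). mu[i] +/- eps[i] bounds the weighted sum entering
    node i of layer l; weights[i][j] is the weight from node j of layer l
    to node i of layer l+1."""
    p = [0.5 * (f(m - e) + f(m + e)) for m, e in zip(mu, eps)]      # Eq. (6)
    delta = [0.5 * (f(m + e) - f(m - e)) for m, e in zip(mu, eps)]  # Eq. (7)
    # centres of the next layer's intervals, and the floor that any valid
    # eps-interval at layer l+1 must exceed
    mu_next = [sum(w * pj for w, pj in zip(row, p)) for row in weights]
    eps_floor = [sum(abs(w) * dj for w, dj in zip(row, delta))
                 for row in weights]
    return p, delta, mu_next, eps_floor
```

Shrinking the incoming ε-interval shrinks the outgoing uncertainty Δ, which is what makes the induction over layers work.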
Because these conditions are a minimal requirement for establishing any probabilistic guarantees, we shall say that the [μ_i^ℓ − ε_i^ℓ, μ_i^ℓ + ε_i^ℓ] define a valid set of ε-intervals if they meet these conditions for all 1 ≤ i ≤ N. Given a valid set of ε-intervals at the (ℓ+1)-th layer, it follows from Theorem 1 and the union bound that the weighted sums entering nodes in layer ℓ + 1 obey

Pr[ |Σ_{j=1}^{N} θ_{ij}^ℓ X_j^ℓ − μ_i^{ℓ+1}| > ε_i^{ℓ+1} for some 1 ≤ i ≤ N ] ≤ Σ_{i=1}^{N} δ_i^{ℓ+1},    (8)

where

δ_i^{ℓ+1} = 2 e^{−(2N/τ²)(ε_i^{ℓ+1} − Σ_{j=1}^{N} |θ_{ij}^ℓ| Δ_j^ℓ)²}.    (9)

In what follows, we shall frequently make use of the fact that the weighted sums Σ_{j=1}^{N} θ_{ij}^ℓ X_j^ℓ are bounded by the intervals [μ_i^{ℓ+1} − ε_i^{ℓ+1}, μ_i^{ℓ+1} + ε_i^{ℓ+1}]. This motivates the following definitions.

Definition 2 Given a valid set of ε-intervals and binary values {X_i^ℓ = x_i^ℓ} for the nodes in the ℓ-th layer, we say that the (ℓ+1)-st layer of the network satisfies its ε-intervals if |Σ_{j=1}^{N} θ_{ij}^ℓ x_j^ℓ − μ_i^{ℓ+1}| < ε_i^{ℓ+1} for all 1 ≤ i ≤ N. Otherwise, we say that the (ℓ+1)-st layer violates its ε-intervals.

Suppose that we are given a valid set of ε-intervals and that we sample from the joint distribution defined by the probabilistic f-network. The right-hand side of Equation (8) provides an upper bound on the conditional probability that the (ℓ+1)-st layer violates its ε-intervals, given that the ℓ-th layer did not. This upper bound may be vacuous (that is, larger than 1), so let us denote by δ^{ℓ+1} whichever is smaller: the right-hand side of Equation (8), or 1; in other words, δ^{ℓ+1} = min{Σ_{i=1}^{N} δ_i^{ℓ+1}, 1}. Since at the ℓ-th layer the probability of violating the ε-intervals is at most δ^ℓ, we are guaranteed that with probability at least ∏_{ℓ>1} [1 − δ^ℓ], all the layers satisfy their ε-intervals. Conversely, we are guaranteed that the probability that any layer violates its ε-intervals is at most 1 − ∏_{ℓ>1} [1 − δ^ℓ].
Treating this as a throw-away probability, we can now compute upper and lower bounds on marginal probabilities involving nodes at the L-th layer exactly as in the case of nodes at the second layer. This yields the following theorem.

Theorem 2 For any subset {X_1^L, ..., X_K^L} of the outputs of a probabilistic f-network, for any setting x_1, ..., x_K, and for any valid set of ε-intervals, the marginal probability of partial evidence in the output layer obeys:

∏_{ℓ>1} [1 − δ^ℓ] ∏_{i: x_i = 1} f(μ_i^L − ε_i^L) ∏_{i: x_i = 0} [1 − f(μ_i^L + ε_i^L)] ≤ Pr[X_1^L = x_1, ..., X_K^L = x_K]    (10)

Pr[X_1^L = x_1, ..., X_K^L = x_K] ≤ ∏_{ℓ>1} [1 − δ^ℓ] ∏_{i: x_i = 1} f(μ_i^L + ε_i^L) ∏_{i: x_i = 0} [1 − f(μ_i^L − ε_i^L)] + (1 − ∏_{ℓ>1} [1 − δ^ℓ]).    (11)

Theorem 2 generalizes our earlier results for marginal probabilities over nodes in the second layer; for example, compare Equations (10) and (11) to Equation (4). Again, the upper and lower bounds can be efficiently computed for all common transfer functions.

5 Rates of Convergence

To demonstrate the power of Theorem 2, we consider how the gap (or additive difference) between these upper and lower bounds on Pr[X_1^L = x_1, ..., X_K^L = x_K] behaves for some crude (but informed) choices of the {ε_i^ℓ}. Our goal is to derive the rate at which these upper and lower bounds converge to the same value as we examine larger and larger networks. Suppose we choose the ε-intervals inductively by defining Δ_j^1 = 0 and setting

ε_i^{ℓ+1} = Σ_{j=1}^{N} |θ_{ij}^ℓ| Δ_j^ℓ + √(γ τ² ln N / N)    (12)

for some γ > 1. From Equations (8) and (9), this choice gives δ^{ℓ+1} ≤ 2N^{1−2γ} as an upper bound on the probability that the (ℓ+1)-th layer violates its ε-intervals. Moreover, denoting the gap between the upper and lower bounds in Theorem 2 by G, it can be shown that:

(13)  [equation unrecoverable from the source; its first term scales as √(ln N / N) and its second term is 2L/N^{2γ−1}]

Let us briefly recall the definitions of the parameters on the right-hand side of this equation: α is the maximal slope of the transfer function f, N is the number of nodes in each layer, K is the number of nodes with evidence, τ = NΘ is N times the largest weight in the network, L is the number of layers, and γ > 1 is a parameter at our disposal.
The first term of this bound essentially has a 1/√N dependence on N, but is multiplied by a damping factor that we might typically expect to decay exponentially with the number K of outputs examined. To see this, simply notice that each of the factors f(μ_j + ε_j) and [1 − f(μ_j − ε_j)] is bounded by 1; furthermore, since all the means μ_j are bounded, if N is sufficiently large then the ε_j are small, and each of these factors is in fact bounded by some value β < 1. Thus the first term in Equation (13) is bounded by a constant times β^{K−1} K √(ln(N)/N). Since it is natural to expect the marginal probability of interest itself to decrease exponentially with K, this is desirable and natural behavior. Of course, in the case of large K, the behavior of the resulting overall bound can be dominated by the second term 2L/N^{2γ−1} of Equation (13). In such situations, however, we can consider larger values of γ, possibly even of order K; indeed, for sufficiently large γ, the first term (which scales like √γ) must necessarily overtake the second one. Thus there is a clear trade-off between the two terms, as well as an optimal value of γ that sets them to be (roughly) the same magnitude. Generally speaking, for fixed K and large N, we observe that the difference between our upper and lower bounds on Pr[X_1^L = x_1, ..., X_K^L = x_K] vanishes as O(√(ln(N)/N)).

6 An Algorithm for Fixed Multilayer Networks

We conclude by noting that the specific choices made for the parameters {ε_i^ℓ} in Section 5 to derive rates of convergence may be far from the optimal choices for a fixed network of interest. However, Theorem 2 directly suggests a natural algorithm for approximate probabilistic inference. In particular, regarding the upper and lower bounds on Pr[X_1^L = x_1, ..., X_K^L = x_K] as functions of {ε_i^ℓ}, we can optimize these bounds by standard numerical methods.
For the upper bound, we may perform gradient descent in the {ε_i^ℓ} to find a local minimum, while for the lower bound, we may perform gradient ascent to find a local maximum. The components of these gradients in both cases are easily computable for all the commonly studied transfer functions. Moreover, the constraint of maintaining valid ε-intervals can be enforced by maintaining a floor on the ε-intervals in one layer in terms of those at the previous one. The practical application of this algorithm to interesting Bayesian networks will be studied in future work.

References

Cooper, G. (1990). Computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence 42:393-405.
Hertz, J., Krogh, A., & Palmer, R. (1991). Introduction to the theory of neural computation. Addison-Wesley, Redwood City, CA.
Hinton, G., Dayan, P., Frey, B., & Neal, R. (1995). The wake-sleep algorithm for unsupervised neural networks. Science 268:1158-1161.
Jordan, M., Ghahramani, Z., Jaakkola, T., & Saul, L. (1997). An introduction to variational methods for graphical models. In M. Jordan, ed., Learning in Graphical Models. Kluwer Academic.
Kearns, M., & Saul, L. (1998). Large deviation methods for approximate probabilistic inference. In Proceedings of the 14th Annual Conference on Uncertainty in Artificial Intelligence.
Neal, R. (1992). Connectionist learning of belief networks. Artificial Intelligence 56:71-113.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.
1998
Fisher Scoring and a Mixture of Modes Approach for Approximate Inference and Learning in Nonlinear State Space Models

Thomas Briegel and Volker Tresp
Siemens AG, Corporate Technology
Dept. Information and Communications
Otto-Hahn-Ring 6, 81730 Munich, Germany
{Thomas.Briegel, Volker.Tresp}@mchp.siemens.de

Abstract

We present Monte-Carlo generalized EM equations for learning in nonlinear state space models. The difficulties lie in the Monte-Carlo E-step which consists of sampling from the posterior distribution of the hidden variables given the observations. The new idea presented in this paper is to generate samples from a Gaussian approximation to the true posterior from which it is easy to obtain independent samples. The parameters of the Gaussian approximation are either derived from the extended Kalman filter or the Fisher scoring algorithm. In case the posterior density is multimodal we propose to approximate the posterior by a sum of Gaussians (mixture of modes approach). We show that sampling from the approximate posterior densities obtained by the above algorithms leads to better models than using point estimates for the hidden states. In our experiment, the Fisher scoring algorithm obtained a better approximation of the posterior mode than the EKF. For a multimodal distribution, the mixture of modes approach gave superior results.

1 INTRODUCTION

Nonlinear state space models (NSSM) are a general framework for representing nonlinear time series. In particular, any NARMAX model (nonlinear auto-regressive moving average model with external inputs) can be translated into an equivalent NSSM. Mathematically, a NSSM is described by the system equation

x_t = f_w(x_{t-1}, u_t) + ζ_t   (1)

where x_t denotes a hidden state variable, ζ_t denotes zero-mean uncorrelated Gaussian noise with covariance Q_t and u_t is an exogenous (deterministic) input vector.
The time-series measurements y_t are related to the unobserved hidden states x_t through the observation equation

y_t = g_v(x_t, u_t) + v_t   (2)

where v_t is uncorrelated Gaussian noise with covariance V_t. In the following we assume that the nonlinear mappings f_w(·) and g_v(·) are neural networks with weight vectors w and v, respectively. The initial state x_0 is assumed to be Gaussian distributed with mean a_0 and covariance Q_0. All variables are in general multidimensional. The two challenges

404 T. Briegel and V. Tresp

in NSSMs are the interrelated tasks of inference and learning. In inference we try to estimate the states of unknown variables x_s given some measurements y_1, ..., y_t (typically the states of past (s < t), present (s = t) or future (s > t) values of x_t) and in learning we want to adapt some unknown parameters in the model (i.e. neural network weight vectors w and v) given a set of measurements.¹ In the special case of linear state space models with Gaussian noise, efficient algorithms for inference and maximum likelihood learning exist. The latter can be implemented using EM update equations in which the E-step is implemented using forward-backward Kalman filtering (Shumway & Stoffer, 1982). If the system is nonlinear, however, the problem of inference and learning leads to complex integrals which are usually considered intractable (Anderson & Moore, 1979). A useful approximation is presented in section 3 where we show how the learning equations for NSSMs can be implemented using two steps which are repeated until convergence. First, in the (Monte-Carlo) E-step, random samples are generated from the unknown variables (e.g. the hidden variables x_t) given the measurements. In the second step (a generalized M-step) those samples are treated as real data and are used to adapt f_w(·) and g_v(·) using some version of the backpropagation algorithm. The problem lies in the first step, since it is difficult to generate independent samples from a general multidimensional distribution.
Since it is difficult to generate samples from the proper distribution, the next best thing might be to generate samples using an approximation to the proper distribution, which is the idea pursued in this paper. The first thing which might come to mind is to approximate the posterior distribution of the hidden variables by a multidimensional Gaussian distribution, since generating samples from such a distribution is simple. In the first approach we use the extended Kalman filter and smoother to obtain mode and covariance of this Gaussian.² Alternatively, we estimate the mode and the covariance of the posterior distribution using an efficient implementation of Fisher scoring derived by Fahrmeir and Kaufmann (1991) and use those as parameters of the Gaussian. In some cases the approximation of the posterior mode by a single Gaussian might be considered too crude. Therefore, as a third solution, we approximate the posterior distribution by a sum of Gaussians (mixture of modes approach). Modes and covariances of those Gaussians are obtained using the Fisher scoring algorithm. The weights of the Gaussians are derived from the likelihood of the observed data given the individual Gaussian. In the following section we derive the gradient of the log-likelihood with respect to the weights in f_w(·) and g_v(·). In section 3, we show that the network weights can be updated using a Monte-Carlo E-step and a generalized M-step. Furthermore, we derive the different Gaussian approximations to the posterior distribution and introduce the mixture of modes approach. In section 4 we validate our algorithms using a standard nonlinear stochastic time-series model. In section 5 we present conclusions.

2 THE GRADIENTS FOR NONLINEAR STATE SPACE MODELS

Given our assumptions we can write the joint probability of the complete data for t = 1, ..., T as³

p(X_T, Y_T, U_T) = p(U_T) p(x_0) ∏_{t=1}^T p(x_t | x_{t-1}, u_t) ∏_{t=1}^T p(y_t | x_t, u_t)   (3)

¹ In this paper we focus on the case s ≤ t (smoothing and offline learning, respectively).
² Independently from our work, a single Gaussian approximation to the E-step using the EKFS has been proposed by Ghahramani & Roweis (1998) for the special case of an RBF network. They show that one obtains a closed form M-step when just adapting the linear parameters by holding the nonlinear parameters fixed. Although avoiding sampling, the computational load of their M-step seems to be significant.
³ In the following, each probability density is conditioned on the current model. For notational convenience, we do not indicate this fact explicitly.

where U_T = {u_1, ..., u_T} is a set of known inputs, which means that p(U_T) is irrelevant in the following. Since only Y_T = {y_1, ..., y_T} and U_T are observed, the log-likelihood of the model is

log L = log ∫ p(X_T, Y_T | U_T) p(U_T) dX_T ∝ log ∫ p(X_T, Y_T | U_T) dX_T   (4)

with X_T = {x_0, ..., x_T}. By inserting the Gaussian noise assumptions we obtain the gradients of the log-likelihood with respect to the neural network weight vectors w and v, respectively (Tresp & Hofmann, 1995):

∂log L/∂w ∝ Σ_{t=1}^T ∫ (∂f_w(x_{t-1}, u_t)/∂w) (x_t - f_w(x_{t-1}, u_t)) p(x_t, x_{t-1} | Y_T, U_T) dx_{t-1} dx_t

∂log L/∂v ∝ Σ_{t=1}^T ∫ (∂g_v(x_t, u_t)/∂v) (y_t - g_v(x_t, u_t)) p(x_t | Y_T, U_T) dx_t   (5)

3 APPROXIMATIONS TO THE E-STEP

3.1 Monte-Carlo Generalized EM Learning

The integrals in the previous equations can be solved using Monte-Carlo integration, which leads to the following learning algorithm.

1. Generate S samples {x̂_0^s, ..., x̂_T^s}, s = 1, ..., S, from p(X_T | Y_T, U_T) assuming the current model is correct (Monte-Carlo E-step).
2. Treat those samples as real data and update w_new = w_old + η ∂log L/∂w
and v_new = v_old + η ∂log L/∂v with stepsize η and

∂log L/∂w ∝ (1/S) Σ_{t=1}^T Σ_{s=1}^S (∂f_w(x_{t-1}, u_t)/∂w)|_{x_{t-1}=x̂_{t-1}^s} (x̂_t^s - f_w(x̂_{t-1}^s, u_t))   (6)

∂log L/∂v ∝ (1/S) Σ_{t=1}^T Σ_{s=1}^S (∂g_v(x_t, u_t)/∂v)|_{x_t=x̂_t^s} (y_t - g_v(x̂_t^s, u_t))   (7)

(generalized M-step). Go back to step one. The second step is simply a stochastic gradient step. The computational difficulties lie in the first step. Methods which produce samples from multivariate distributions such as Gibbs sampling and other Markov chain Monte-Carlo methods have (at least) two problems. First, the sampling process has to "forget" its initial condition, which means that the first samples have to be discarded, and there are no simple analytical tools available to determine how many samples must be discarded. Secondly, subsequent samples are highly correlated, which means that many samples have to be generated before a sufficient amount of independent samples is available. Since it is so difficult to sample from the correct posterior distribution p(X_T | Y_T, U_T), the idea in this paper is to generate samples from an approximate distribution from which it is easy to draw samples. In the next sections we present approximations using a multivariate Gaussian and a mixture of Gaussians.

3.2 Approximate Mode Estimation Using the Extended Kalman Filter

Whereas the Kalman filter is an optimal state estimator for linear state space models, the extended Kalman filter is a suboptimal state estimator for NSSMs based on local linearizations of the nonlinearities.⁴ The extended Kalman filter and smoother (EKFS) algorithm is a forward-backward algorithm and can be derived as an approximation to posterior mode estimation for Gaussian error sequences (Sage & Melsa, 1971).

⁴ Note that we do not include the parameters in the NSSM as additional states to be estimated, as done by other authors, e.g. Puskorius & Feldkamp (1994).
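The generalized M-step of subsection 3.1 can be illustrated on a toy problem. In this sketch the "posterior samples" are simply simulated from the true model (in the real algorithm they come from the approximate posterior of section 3), and f_w is a hypothetical one-parameter linear map f_w(x) = w x rather than a neural network; the update follows the form of Equation (6).

```python
import random

random.seed(0)

# Hypothetical linear stand-in for the network f_w: f_w(x) = w * x.
w_true, Q, T, S = 0.8, 1.0, 50, 20

# Pretend these are the E-step samples {x_t^s}; here simulated from the true model.
samples = []
for s in range(S):
    x, traj = 0.0, [0.0]
    for t in range(T):
        x = w_true * x + random.gauss(0.0, Q ** 0.5)
        traj.append(x)
    samples.append(traj)

# Generalized M-step: gradient steps following Equation (6),
# d logL / dw  proportional to  (1/S) sum_t sum_s x_{t-1}^s (x_t^s - w x_{t-1}^s).
w, lr = 0.0, 0.005
for it in range(100):
    grad = sum(tr[t - 1] * (tr[t] - w * tr[t - 1])
               for tr in samples for t in range(1, T + 1)) / S
    w += lr * grad

assert abs(w - w_true) < 0.2   # converges near the generating parameter
```

With enough samples the Monte-Carlo gradient drives w toward the least-squares solution, which is close to the generating value.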
Its application to our framework amounts to approximating x_t^mode ≈ x_t^EKFS, where x_t^EKFS is the smoothed estimate of x_t obtained from forward-backward extended Kalman filtering over the set of measurements Y_T and x_t^mode is the mode of the posterior distribution p(x_t | Y_T, U_T). We use x_t^EKFS as the center of the approximating Gaussian. The EKFS also provides an estimate of the error covariance of the state vector at each time step t, which can be used to form the covariance matrix of the approximating Gaussian. The EKFS equations can be found in Anderson & Moore (1979). To generate samples we recursively apply the following algorithm. Given that x̂_{t-1}^s is a sample from the Gaussian approximation of p(x_{t-1} | Y_T, U_T) at time t-1, draw a sample x̂_t^s from p(x_t | x_{t-1} = x̂_{t-1}^s, Y_T, U_T). The last conditional density is Gaussian with mean and covariance calculated from the EKFS approximation and the lag-one error covariances derived in Shumway & Stoffer (1982), respectively.

3.3 Exact Mode Estimation Using the Fisher Scoring Algorithm

If the system is highly nonlinear, however, the EKFS can perform badly in finding the posterior mode due to the fact that it uses a first order Taylor series expansion of the nonlinearities f_w(·) and g_v(·) (for an illustration, see Figure 1). A useful (and computationally tractable) alternative to the EKFS is to compute the "exact" posterior mode by maximizing log p(X_T | Y_T, U_T) with respect to X_T. A suitable way to determine a stationary point of the log posterior, or equivalently, of p(X_T, Y_T | U_T) (derived from (3) by dropping p(U_T)), is to apply Fisher scoring.
With the current estimate X_T^{FS,old}, we get a better estimate X_T^{FS,new} = X_T^{FS,old} + η δ for the unknown state sequence X_T, where δ is the solution of

S(X_T) δ = s(X_T)   (8)

with the score function s(X_T) = ∂log p(X_T, Y_T | U_T)/∂X_T and the expected information matrix S(X_T) = E[-∂²log p(X_T, Y_T | U_T)/(∂X_T ∂X_T^T)].⁵ By extending the arguments given in Fahrmeir & Kaufmann (1991) to nonlinear state space models, it turns out that solving equation (8) (e.g. to compute the inverse of the expected information matrix) can be performed by Cholesky decomposition in one forward and backward pass.⁶ The forward-backward steps can be implemented as a fast EKFS-like algorithm which has to be iterated to obtain the maximum posterior estimates x_t^mode = x_t^FS (see Appendix). Figure 1 shows the estimate obtained by the Fisher scoring procedure for a bimodal posterior density. Fisher scoring is successful in finding the "exact" mode; the EKFS algorithm is not. Samples of the approximating Gaussian are generated in the same way as in the last section.

3.4 The Mixture of Modes Approach

The previous two approaches to posterior mode smoothing can be viewed as single Gaussian approximations of the mode of p(X_T | Y_T, U_T). In some cases the approximation of the posterior density by a single Gaussian might be considered too crude, in particular if the posterior distribution is multimodal. In this section we approximate the posterior by a weighted sum of m Gaussians, p(X_T | Y_T, U_T) ≈ Σ_{k=1}^m α^k p(X_T | k), where p(X_T | k) is the k-th Gaussian. If the individual Gaussians model the different modes, we are able to model multimodal posterior distributions accurately. The approximations of the individual modes are local maxima of the Fisher scoring algorithm which are found by starting the algorithm using different initial conditions.
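The Fisher scoring step of Equation (8) can be illustrated on a scalar toy problem rather than the full block-tridiagonal state-sequence system: find the posterior mode of a single x under y = g(x) + v, v ~ N(0, V), with prior x ~ N(a_0, Q_0) and g(x) = x²/20 as in the experiments of section 4. The expected information replaces the true Hessian because the (y - g(x)) g''(x) term vanishes in expectation; the damping on the stepsize η is an added safeguard, not part of the paper's algorithm.

```python
# Scalar analogue of the Fisher scoring update (8).
a0, Q0, V, y = 0.0, 5.0, 1.0, 3.0
g  = lambda x: x * x / 20.0
dg = lambda x: x / 10.0

def log_post(x):                      # log posterior up to a constant
    return -(x - a0) ** 2 / (2 * Q0) - (y - g(x)) ** 2 / (2 * V)

def score(x):                          # s(x) = d log p / dx
    return -(x - a0) / Q0 + (y - g(x)) * dg(x) / V

def info(x):                           # expected information S(x):
    return 1.0 / Q0 + dg(x) ** 2 / V   # the (y - g) g'' term drops in expectation

x = 1.0
for it in range(50):
    step = score(x) / info(x)          # Fisher scoring direction
    eta = 1.0
    while log_post(x + eta * step) < log_post(x) and eta > 1e-6:
        eta *= 0.5                     # damp the stepsize if the posterior decreases
    x += eta * step

assert abs(score(x)) < 1e-6            # converged to a stationary point
```

For these values the iteration climbs to the mode at x = √20 ≈ 4.47 (the stationary point x = 0 is a local minimum of the posterior).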
Given the different Gaussians, the optimal weighting factors are α^k = p(Y_T | k) p(k) / p(Y_T), where p(Y_T | k) = ∫ p(Y_T | X_T) p(X_T | k) dX_T is the likelihood of the data given mode k. If we approximate that integral by inserting the Fisher scoring solutions x_t^{FS,k} for each time step t and linearize the nonlinearity g_v(·) about the Fisher scoring solutions, we obtain a closed form solution for computing the α^k (see Appendix). The resulting estimator is a weighted sum of the m single Fisher scoring estimates, x_t^MM = Σ_{k=1}^m α^k x_t^{FS,k}. The mixture of modes algorithm can be found in the Appendix. For the learning task, samples of the mixture of Gaussians are based on samples of each of the m single Gaussians, which are obtained the same way as in subsection 3.2.

⁵ Note that the difference between the Fisher scoring and the Gauss-Newton update is that in the former we take the expectation of the information matrix.
⁶ The expected information matrix is a positive definite block-tridiagonal matrix.

4 EXPERIMENTAL RESULTS

In the first experiment we want to test how well the different approaches can approximate the posterior distribution of a nonlinear time series (inference). As a time-series model we chose

f(x_{t-1}, u_t) = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}²) + 8 cos(1.2(t-1)),   g(x_t) = x_t² / 20,   (9)

the covariances Q_t = 10, V_t = 1 and initial conditions a_0 = 0 and Q_0 = 5, which is considered a hard inference problem (Kitagawa, 1987). At each time step we calculate the expected value of the hidden variables x_t, t = 1, ..., 400, based on a set of measurements Y_400 = {y_1, ..., y_400} (which is the optimal estimator in the mean squared sense) and based on the different approximations presented in the last section. Note that for the single mode approximation, x_t^mode is the best estimate of x_t based on the approximating Gaussian.
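The benchmark model (9) is easy to simulate directly, which is how the ground-truth states used for the MSE comparison below are obtained. A minimal sketch, using the stated covariances Q_t = 10, V_t = 1 and initial conditions a_0 = 0, Q_0 = 5:

```python
import math
import random

random.seed(1)

# Simulate the benchmark model (9): a standard hard nonlinear SSM (Kitagawa, 1987).
T = 400
x = random.gauss(0.0, math.sqrt(5.0))      # x_0 ~ N(a_0 = 0, Q_0 = 5)
xs, ys = [], []
for t in range(1, T + 1):
    f = 0.5 * x + 25.0 * x / (1.0 + x * x) + 8.0 * math.cos(1.2 * (t - 1))
    x = f + random.gauss(0.0, math.sqrt(10.0))   # system noise, Q_t = 10
    xs.append(x)
    ys.append(x * x / 20.0 + random.gauss(0.0, 1.0))  # observation noise, V_t = 1

assert len(xs) == 400 and len(ys) == 400
```

Because the observation g(x) = x²/20 is symmetric in x, the sign of the state is only weakly identified, which is exactly what makes the posterior multimodal.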
For the mixture of modes approach, the best estimate is Σ_{k=1}^m α^k x_t^{FS,k}, where x_t^{FS,k} is the mode of the k-th Gaussian in the dimension of x_t. Figure 2 (left) shows the mean squared error (MSE) of the smoothed estimates using the different approaches. The Fisher scoring (FS) algorithm is significantly better than the EKFS approach. In this experiment, the mixture of modes (MM) approach is significantly better than both the EKFS and Fisher scoring. The reason is that the posterior probability is multimodal, as shown in Figure 1. In the second experiment we used the same time-series model and trained a neural network to approximate f_w(·), where all covariances were assumed to be fixed and known. For adaptation we used the learning rules of section 3 using the various approximations to the posterior distribution of X_T. Figure 2 (right) shows the results. The experiments show that truly sampling from the approximating Gaussians gives significantly better results than using the expected value as a point estimate. Furthermore, using the mixture of modes approach in conjunction with sampling gave significantly better results than the approximations using a single Gaussian. When used for inference, the network trained using the mixture of modes approach was not significantly worse than the true model (5% significance level, based on 20 experiments).

5 CONCLUSIONS

In our paper we presented novel approaches for inference and learning in NSSMs. The application of Fisher scoring and the mixture of modes approach to nonlinear models as presented in our paper is new. Also the idea of sampling from an approximation to the posterior distribution of the hidden variables is presented here for the first time. Our results indicate that the Fisher scoring algorithm gives better estimates of the expected value of the hidden variable than the EKFS based approximations.
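The weighted combination Σ_k α^k x_t^{FS,k} used above reduces, once the per-mode likelihoods p(Y_T | k) are in hand, to a standard log-sum-exp weighting. A sketch with made-up mode locations and made-up log-likelihoods (hypothetical numbers, not from the paper's experiments):

```python
import math

# Combine m hypothetical mode estimates x_t^{FS,k} with weights alpha^k
# proportional to p(Y_T | k), computed stably in log space.
modes  = [4.4, -4.5, 0.3]          # x_t^{FS,k} for one time step t (hypothetical)
loglik = [-120.0, -121.5, -140.0]  # log p(Y_T | k) (hypothetical)

m = max(loglik)
w = [math.exp(l - m) for l in loglik]  # subtract the max to avoid underflow
z = sum(w)
alpha = [wi / z for wi in w]           # alpha^k, summing to one

x_mm = sum(a * mu for a, mu in zip(alpha, modes))  # x_t^MM = sum_k alpha^k x_t^{FS,k}

assert abs(sum(alpha) - 1.0) < 1e-12
assert min(modes) <= x_mm <= max(modes)
```

The third mode contributes almost nothing here because its log-likelihood is 20 nats below the best one; the estimate sits between the two dominant modes.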
Note that the Fisher scoring algorithm is more complex in requiring typically 5 forward-backward passes instead of only one forward-backward pass for the EKFS approach. Our experiments also showed that if the posterior distribution is multimodal, the mixture of modes approach gives significantly better estimates compared to the approaches based on a single Gaussian approximation. Our learning experiments show that it is important to sample from the approximate distributions and that it is not sufficient to simply substitute point estimates. Based on the sampling approach it is also possible to estimate hyperparameters (e.g. the covariance matrices), which was not done in this paper. The approaches can also be extended towards online learning and estimation in various ways (e.g. missing data problems).

Figure 1: Approximations to the posterior distribution p(x_t | Y_400, U_400) for t = 294 and t = 295. The continuous line shows the posterior distribution based on Gibbs sampling using 1000 samples and can be considered a close approximation to the true posterior. The EKFS approximation (dotted) does not converge to a mode. The Fisher scoring solution (dash-dotted) finds the largest mode. The mixture of modes approach with 50 modes (dashed) correctly finds the two modes.

Appendix: Mixture of Modes Algorithm

The mixture of modes estimate x_t^MM is derived as a weighted sum of k = 1, ..., m individual Fisher scoring (mode) estimates x_t^{FS,k}. For m = 1 we obtain the Fisher scoring algorithm of subsection 3.3. First, one performs the set of forward recursions (t = 1, ..., T) for each single mode estimator k.
Σ_{t|t-1}^k = F_t(x_{t-1}^{FS,k}) Σ_{t-1|t-1}^k F_t^T(x_{t-1}^{FS,k}) + Q_t   (10)

B_t^k = Σ_{t-1|t-1}^k F_t^T(x_{t-1}^{FS,k}) (Σ_{t|t-1}^k)^{-1}   (11)

Σ_{t|t}^k = ((Σ_{t|t-1}^k)^{-1} + G_t^T(x_t^{FS,k}) V_t^{-1} G_t(x_t^{FS,k}))^{-1}   (12)

γ_t^k = s_t(x_t^{FS,k}) + B_t^{kT} γ_{t-1}^k   (13)

with the initialization Σ_{0|0}^k = Q_0, γ_0^k = s_0(x_0^{FS,k}). Then, one performs the set of backward smoothing recursions (t = T, ..., 1)

(D_{t-1}^k)^{-1} = Σ_{t-1|t-1}^k - B_t^k Σ_{t|t-1}^k B_t^{kT}   (14)

Σ_{t-1}^k = (D_{t-1}^k)^{-1} + B_t^k Σ_t^k B_t^{kT}   (15)

δ_{t-1}^k = (D_{t-1}^k)^{-1} γ_{t-1}^k + B_t^k δ_t^k   (16)

with F_t(z) = ∂f_w(x_{t-1}, u_t)/∂x_{t-1}|_{x_{t-1}=z}, G_t(z) = ∂g_v(x_t, u_t)/∂x_t|_{x_t=z}, s_t(z) = ∂log p(X_T, Y_T | U_T)/∂x_t|_{x_t=z} and initialization δ_T^k = Σ_T^k γ_T^k. The k individual mode estimates X_T^{FS,k} are obtained by iterative application of the update rule X_T^{FS,k} := η δ^k + X_T^{FS,k} with stepsize η, where X_T^{FS,k} = {x_0^{FS,k}, ..., x_T^{FS,k}} and δ^k = {δ_0^k, ..., δ_T^k}. After convergence we obtain the mixture of modes estimate as the weighted sum x_t^MM = Σ_{k=1}^m α^k x_t^{FS,k} with weighting coefficients α^k := α_0^k, where the α_t^k (t = T-1, ..., 0) are computed recursively starting with a uniform prior α_T^k = 1/m (N(x | μ, Σ) stands for a Gaussian with center μ and covariance Σ evaluated at x):

α_t^k = α_{t+1}^k N(y_t | g_v(x_t^{FS,k}, u_t), R_t^k)   (17)

normalized so that Σ_k α_t^k = 1.   (18)

Figure 2: Left (inference): The heights of the bars indicate the mean squared error between the true x_t (which we know since we simulated the system) and the estimates using the various approximations. The error bars show the standard deviation derived from 20 repetitions of the experiment. Based on the paired t-test, Fisher scoring is significantly better than the EKFS and all mixture of modes approaches are significantly better than both EKFS and Fisher scoring based on a 1% rejection region. The mixture of modes approximation with 50 modes (MM 50) is significantly better than the approximation using 20 modes.
The improvement of the approximation using 20 modes (MM 20) is not significantly better than the approximation with 10 modes (MM 10) using a 5% rejection region. Right (learning): The heights of the bars indicate the mean squared error between the true f_w(·) (which is known) and the approximations using a multi-layer perceptron with 3 hidden units and T = 200. Shown are results using the EKFS approximation (left), the Fisher scoring approximation (center) and the mixture of modes approximation (right). There are two bars for each experiment: The left bars show results where the expected value of x_t calculated using the approximating Gaussians is used as a (single) sample for the generalized M-step, in other words, we use a point estimate for x_t. Using the point estimates, the results of all three approximations are not significantly different based on a 5% significance level. The right bars show the result where S = 50 samples are generated for approximating the gradient using the Gaussian approximations. The results using sampling are all significantly better than the results using point estimates (1% significance level). The sampling approach using the mixture of modes approximation is significantly better than the other two sampling-based approaches (1% significance level). If compared to the inference results of the experiments shown on the left, we achieved a mean squared error of 6.02 for the mixture of modes approach with 10 modes, which is not significantly worse than the result of 5.87 obtained with the true model (5% significance level).

References

Anderson, B. and Moore, J. (1979) Optimal Filtering, Prentice-Hall, New Jersey.
Fahrmeir, L. and Kaufmann, H. (1991) On Kalman Filtering, Posterior Mode Estimation and Fisher Scoring in Dynamic Exponential Family Regression, Metrika, 38, pp. 37-60.
Ghahramani, Z. and Roweis, S.
(1999) Learning Nonlinear Stochastic Dynamics using the Generalized EM Algorithm, Advances in Neural Information Processing Systems 11, eds. M. Kearns, S. Solla, D. Cohn, MIT Press, Cambridge, MA.
Kitagawa, G. (1987) Non-Gaussian State Space Modeling of Nonstationary Time Series (with Comments), JASA 82, pp. 1032-1063.
Puskorius, G. and Feldkamp, L. (1994) Neurocontrol of Nonlinear Dynamical Systems with Kalman Filter Trained Recurrent Networks, IEEE Transactions on Neural Networks, 5:2, pp. 279-297.
Sage, A. and Melsa, J. (1971) Estimation Theory with Applications to Communications and Control, McGraw-Hill, New York.
Shumway, R. and Stoffer, D. (1982) Time Series Smoothing and Forecasting Using the EM Algorithm, Technical Report No. 27, Division of Statistics, UC Davis.
Tresp, V. and Hofmann, R. (1995) Missing and Noisy Data in Nonlinear Time-Series Prediction, Neural Networks for Signal Processing 5, IEEE Sig. Proc. Soc., pp. 1-10.
Basis Selection For Wavelet Regression Kevin R. Wheeler Caelum Research Corporation NASA Ames Research Center Mail Stop 269-1 Moffett Field, CA 94035 kwheeler@mail.arc.nasa.gov Abstract Atam P. Dhawan College of Engineering University of Toledo 2801 W. Bancroft Street Toledo, OH 43606 adhawan@eng.utoledo.edu A wavelet basis selection procedure is presented for wavelet regression. Both the basis and threshold are selected using crossvalidation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated using widely published sampled functions. The results of the method are contrasted with other basis function based methods. 1 INTRODUCTION Wavelet regression is a technique which attempts to reduce noise in a sampled function corrupted with noise. This is done by thresholding the small wavelet decomposition coefficients which represent mostly noise. Most of the papers published on wavelet regression have concentrated on the threshold selection process. This paper focuses on the effect that different wavelet bases have on cross-validation based threshold selection, and the error in the final result. This paper also suggests how prior information may be incorporated into the basis selection process, and the effects of choosing a wrong prior. Both orthogonal and biorthogonal wavelet bases were explored. Wavelet regression is performed in three steps. The first step is to apply a discrete wavelet transform to the sampled data to produce decomposition coefficients. Next a threshold is applied to the coefficients. Then an inverse discrete wavelet transform is applied to these modified coefficients. 628 K. R. Wheeler and A. P Dhawan The basis selection procedure is demonstrated to perform better than other wavelet regression methods even when the wrong prior on the space of the basis selections is specified. 
This paper is broken into the following sections. The background section gives a brief summary of the mathematical requirements of the discrete wavelet transform. This section is followed by a methodology section which outlines the basis selection algorithms, and the process for obtaining the presented results. This is followed by a results section and then a conclusion.

2 BACKGROUND

2.1 DISCRETE WAVELET TRANSFORM

The Discrete Wavelet Transform (DWT) [Daubechies, 92] is implemented as a series of projections onto scaling functions in L²(R). The initial assumption is that the original data samples lie in the finest space V_0, which is spanned by the scaling function φ ∈ V_0 such that the collection {φ(x - t) | t ∈ Z} is a Riesz basis of V_0. The first level of the dyadic decomposition then consists of projecting the data samples onto scaling functions which have been dilated to be twice as wide as the original φ. These span the coarser space V_{-1}: {φ(2^{-1}x - t) | t ∈ Z}. The information that is lost going from the finer to the coarser scale is retained in what are known as wavelet coefficients. Instead of taking the difference, the wavelet coefficients can be obtained via a projection operation onto the wavelet basis functions ψ which span a space known as W_0. The projections are typically implemented using Quadrature Mirror Filters (QMF), which are implemented as Finite Impulse Response (FIR) filters. The next level of decomposition is obtained by again doubling the scaling functions and projecting the first scaling decomposition coefficients onto these functions. The difference in information between this level and the last one is contained in the wavelet coefficients for this level. In general, the scaling functions for level j and translation m may be represented by: φ_j^m(t) = 2^{-j/2} φ(2^{-j}t - m), where t ∈ [0, 2^k - 1], k ≥ 1, 1 ≤ j ≤ k, 0 ≤ m ≤ 2^{k-j} - 1.
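The projection cascade above is concrete for the simplest orthogonal QMF pair, the two-tap Haar filters (the paper's experiments use longer Daubechies and Symlet filters, but the pyramid structure is the same). A minimal sketch:

```python
import math

# Haar DWT pyramid: each level splits the current approximation into a
# coarser approximation (scaling projection) and detail (wavelet) coefficients.
def haar_analysis(x):
    """Return (coarsest_approximation, [details_level1, details_level2, ...])."""
    details = []
    a = list(x)
    while len(a) > 1:
        s = [(a[2 * i] + a[2 * i + 1]) / math.sqrt(2) for i in range(len(a) // 2)]
        d = [(a[2 * i] - a[2 * i + 1]) / math.sqrt(2) for i in range(len(a) // 2)]
        details.append(d)
        a = s
    return a, details

def haar_synthesis(a, details):
    """Invert haar_analysis exactly (orthogonality gives perfect reconstruction)."""
    a = list(a)
    for d in reversed(details):
        nxt = []
        for s, w in zip(a, d):
            nxt += [(s + w) / math.sqrt(2), (s - w) / math.sqrt(2)]
        a = nxt
    return a

x = [4.0, 2.0, 5.0, 5.0, 1.0, 0.0, 3.0, 7.0]   # length must be a power of two
a, ds = haar_analysis(x)
xr = haar_synthesis(a, ds)
assert max(abs(u - v) for u, v in zip(x, xr)) < 1e-12
```

The orthogonality of the Haar pair is what makes the inverse a simple transpose of the analysis steps.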
2.1.1 Orthogonal

An orthogonal wavelet decomposition is defined such that the difference space W_j is the orthogonal complement of V_j in V_{j+1}: W_0 ⊥ V_0, which means that the projection of the wavelet functions onto the scaling functions on a level is zero: ⟨ψ, φ(· - t)⟩ = 0, t ∈ Z. This results in the wavelet spaces W_j with j ∈ Z being all mutually orthogonal. The refinement relations for an orthogonal decomposition may be written as: φ(x) = 2 Σ_k h_k φ(2x - k) and ψ(x) = 2 Σ_k g_k φ(2x - k).

2.1.2 Biorthogonal

Symmetry is an important property when the scaling functions are used as interpolatory functions. Most commonly used interpolatory functions are symmetric. It is well known in the subband filtering community that symmetry and exact reconstruction are incompatible if the same FIR filters are used for reconstruction and decomposition (except for the Haar filter) [Daubechies, 92]. If we are willing to use different filters for the analysis and synthesis banks, then symmetry and exact reconstruction are possible using biorthogonal wavelets. Biorthogonal wavelets have dual scaling φ̃ and dual wavelet ψ̃ functions. These generate a dual multiresolution analysis with subspaces Ṽ_j and W̃_j so that Ṽ_j ⊥ W_j and V_j ⊥ W̃_j, and the orthogonality conditions can now be written as:

⟨φ̃, ψ(· - l)⟩ = ⟨ψ̃, φ(· - l)⟩ = 0
⟨φ̃_{j,l}, φ_{k,m}⟩ = δ_{j-k} δ_{l-m} for l, m, j, k ∈ Z
⟨ψ̃_{j,l}, ψ_{k,m}⟩ = δ_{j-k} δ_{l-m} for l, m, j, k ∈ Z

where δ_{j-k} = 1 when j = k, and zero otherwise. The refinement relations for biorthogonal wavelets can be written:

φ(x) = 2 Σ_k h_k φ(2x - k) and ψ(x) = 2 Σ_k g_k φ(2x - k)
φ̃(x) = 2 Σ_k h̃_k φ̃(2x - k) and ψ̃(x) = 2 Σ_k g̃_k φ̃(2x - k)

Basically, this means that the scaling functions at one level are composed of linear combinations of scaling functions at the next finer level. The wavelet functions at one level are also composed of linear combinations of the scaling functions at the next finer level.
2.2 LIFTING AND SECOND GENERATION WAVELETS

Sweldens' lifting scheme [Sweldens, 95a] is a way to transform a biorthogonal wavelet decomposition obtained from low order filters into one that could be obtained from higher order filters (more FIR filter coefficients), without applying the longer filters and thus saving computations. This method can be used to increase the number of vanishing moments of the wavelet, or change the shape of the wavelet. This means that several different filters (i.e. sets of basis functions) may be applied, with properties relevant to the problem domain, in a manner more efficient than directly applying the filters individually. This is beneficial to performing a search over the space of admissible basis functions meeting the problem domain requirements. Sweldens' Second Generation Wavelets [Sweldens, 95b] are a result of applying lifting to simple interpolating biorthogonal wavelets, and redefining the refinement relation of the dual wavelet to be:

ψ̃(x) = φ̃(2x - 1) - Σ_k a_k φ̃(x - k)

where the a_k are the lifting parameters. The lifting parameters may be selected to achieve desired properties in the basis functions relevant to the problem domain. Prior information for a particular application domain may now be incorporated into the basis selection for wavelet regression. For example, if a particular application requires that there be a certain degree of smoothness (or a certain number of vanishing moments in the basis), then only those lifting parameters which result in a number of vanishing moments within this range are used. Another way to think about this is to form a probability distribution over the space of lifting parameters. The most likely lifting parameters will be those which most closely match one's intuition for the given problem domain.

2.3 THRESHOLD SELECTION

Since the wavelet transform is a linear operator, the decomposition coefficients will have the same form of noise as the sampled data.
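The mechanics of lifting are easiest to see in its split/predict/update form. The sketch below is a minimal example, not the paper's lifted filters: with predict coefficient 1 and update coefficient a = 1/2 it reproduces an unnormalized Haar transform, and changing the lifting coefficient a alters the dual wavelet's shape, in the spirit of the a_k above. Every lifting step is trivially invertible by running it backwards.

```python
# One lifting level: split into even/odd, predict odds from evens, update evens.
def lift_forward(x, a=0.5):
    even, odd = x[0::2], x[1::2]
    d = [o - e for e, o in zip(even, odd)]       # predict: detail coefficients
    s = [e + a * di for e, di in zip(even, d)]   # update: with a=0.5, s is the pair average
    return s, d

def lift_inverse(s, d, a=0.5):
    even = [si - a * di for si, di in zip(s, d)]  # undo update
    odd = [di + e for e, di in zip(even, d)]      # undo predict
    x = []
    for e, o in zip(even, odd):
        x += [e, o]                               # merge back into one signal
    return x

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
s, d = lift_forward(x)
assert lift_inverse(s, d) == x   # lifting steps invert exactly, step by step
```

Because each step is undone in reverse order with the same coefficient, perfect reconstruction holds for any choice of a, which is what makes searching over lifting parameters cheap.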
The idea behind wavelet regression is that the decomposition coefficients that have a small magnitude are substantially representative of the noise component of the sampled data. A threshold is selected and then all coefficients which are below the threshold in magnitude are either set to zero (a hard threshold) or moved towards zero (a soft threshold). The soft threshold η_t(Y) = sgn(Y)(|Y| − t)_+ is used in this study. There are two basic methods of threshold selection: 1. Donoho's [Donoho, 95] analytic method, which relies on knowledge of the noise distribution (such as a Gaussian noise source with a certain variance); 2. a cross-validation approach (many of which are reviewed in [Nason, 96]). It is beyond the scope of this paper to review these methods. Leave-one-out cross-validation with padding was used in this study.

3 METHODOLOGY

The test functions used in this study are the four functions published by Donoho and Johnstone [Donoho and Johnstone, 94]. These functions have been adopted by the wavelet regression community to aid in comparison of algorithms across publications. Each function was uniformly sampled to contain 2048 points. Gaussian white noise was added so that the signal to noise ratio (SNR) was 7.0. Fifty replicates of each noisy function were created, of which four instantiations are depicted in Figure 1. The noise removal process involved three steps. The first step was to perform a discrete wavelet transform using a particular basis. A threshold was selected for the resulting decomposition coefficients using leave-one-out cross-validation with padding. The soft threshold was then applied to the decomposition. Next, the inverse wavelet transform was applied to obtain a cleaner version of the original signal. These steps were repeated for each basis set or for each set of lifting parameters.
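The soft-thresholding step of the denoising procedure above, η_t(Y) = sgn(Y)(|Y| − t)_+, together with its hard counterpart, can be written compactly:

```python
def soft_threshold(y, t):
    """Soft threshold: shrink every coefficient toward zero by t,
    zeroing anything whose magnitude is below t."""
    mag = abs(y) - t
    if mag <= 0:
        return 0.0
    return mag if y > 0 else -mag

def hard_threshold(y, t):
    """Hard threshold: keep coefficients at or above t unchanged, zero the rest."""
    return y if abs(y) >= t else 0.0

assert soft_threshold(2.5, 1.0) == 1.5
assert soft_threshold(-2.5, 1.0) == -1.5
assert soft_threshold(0.5, 1.0) == 0.0
assert hard_threshold(2.5, 1.0) == 2.5
```

Applied elementwise to the decomposition coefficients, the soft rule is continuous in the data, which is one reason it is preferred for regression over the discontinuous hard rule.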
3.1 WAVELET BASIS SELECTION

To demonstrate the effect of basis selection on the threshold found and the error in the resulting recovered signal, the following experiments were conducted. In the first trial, two well studied orthogonal wavelet families were used: Daubechies most compactly supported (DMCS) and Symlets (S) [Daubechies, 92]. For the DMCS family, filters of order 1 (which corresponds to the Haar wavelet) through 7 were used. For the Symlets, filters of order 2 through 8 were used. For each filter, leave-one-out cross-validation was used to find a threshold which minimized the mean square error for each of the 50 replicates of the four test functions. The median threshold found was then applied to the decomposition of each of the replicates for each test function. The resulting reconstructed signals are compared to the ideal function (the original before noise was added) and the Normalized Root Mean Square Error (NRMSE) is presented.

3.2 INCORPORATING PRIOR INFORMATION: LIFTING PARAMETERS

If the function that we are sampling is known to have certain smoothness properties, then a distribution of the admissible lifting coefficients representing a similar smoothness characteristic can be formed. However, it is not necessary to cautiously pick a prior. The performance of this method with a piecewise linear prior (the (2,2) biorthogonal wavelet of Cohen-Daubechies-Feauveau [Cohen, 92]) has been applied to the non-linear smooth test functions Bumps, Doppler, and Heavysin, and compared with several standard techniques [Wheeler, 96]: the Smoothing Spline method (SS) [Wahba, 90], Donoho's SureShrink method [Donoho, 95], and an optimized Radial Basis Function Neural Network (RBFNN).

4 RESULTS

In the first experiment, the procedure was only allowed to select between two well known bases (Daubechies most compactly supported and Symlet wavelets) with the desired filter order.
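The NRMSE reported in the tables scores each reconstruction against the noise-free function. The paper does not state its normalization; the sketch below assumes a common choice, dividing the RMSE by the range of the true signal.

```python
import math

def nrmse(estimate, truth):
    """Root mean square error normalized by the range of the true signal.

    The normalization by max(truth) - min(truth) is an assumption; the
    paper does not specify which normalization it uses.
    """
    mse = sum((e - t) ** 2 for e, t in zip(estimate, truth)) / len(truth)
    return math.sqrt(mse) / (max(truth) - min(truth))

truth = [0.0, 1.0, 2.0, 3.0, 4.0]
assert nrmse(truth, truth) == 0.0
assert abs(nrmse([t + 0.4 for t in truth], truth) - 0.1) < 1e-12
```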
Table 1 shows the filter order resulting in the lowest cross-validation error for each filter and function. The NRMSE is presented with respect to the original noise-free functions for comparison. As expected, the best basis for the noisy Blocks function was the piecewise linear basis (Daubechies, order 1). The Doppler function, which had very high frequency components, required the highest filter order. Figure 2 presents typical denoised versions of the functions recovered by the filters listed in bold in the table. The method selected the basis having similar properties to the underlying function without knowing the original function. When higher order filters were applied to the noisy Blocks data, the resulting NRMSE was higher. The basis selection procedure (labelled CV-Wavelets in Table 2) was compared with Donoho's SureShrink, Wahba's Smoothing Splines (SS), and an optimized RBFNN [Wheeler, 96]. The prior information was specified incorrectly, telling the procedure to prefer bases near piecewise linear. The remarkable observation is that the method did better than the others as measured by Mean Square Error.

5 CONCLUSION

A basis selection procedure for wavelet regression was presented. The method was shown to select bases appropriate to the characteristics of the underlying functions. The shape of the basis was determined with cross-validation, selecting from either a pre-set library of filters or from previously calculated lifting coefficients. The lifting coefficients were calculated to be appropriate for the particular problem domain. The method was compared for various bases and against other popular methods. Even with the wrong lifting parameters, the method was able to reduce error better than other standard algorithms.
Figure 1: Noisy test functions (noisy Blocks, Bumps, Heavysin, and Doppler functions).

Figure 2: Recovered functions (recovered Blocks, Bumps, Heavysin, and Doppler functions).

Table 1: Effects of Basis Selection

Function  | Filter Order | Family     | Median Thr. (MT) | NRMSE using MT | Median True Thr. (MTT) | NRMSE using MTT
Blocks    | 1            | Daubechies | 1.33             | 0.038          | 1.61                   | 0.036
Blocks    | 2            | Symmlets   | 1.245            | 0.045          | 1.40                   | 0.045
Bumps     | 4            | Daubechies | 1.11             | 0.059          | 1.47                   | 0.056
Bumps     | 5            | Symmlets   | 1.13             | 0.058          | 1.48                   | 0.055
Doppler   | 8            | Daubechies | 1.27             | 0.058          | 1.65                   | 0.054
Doppler   | 8            | Symmlets   | 1.36             | 0.054          | 1.74                   | 0.050
Heavysin  | 2            | Daubechies | 1.97             | 0.039          | 2.17                   | 0.038
Heavysin  | 5            | Symmlets   | 1.985            | 0.039          | 2.16                   | 0.038

Table 2: Methods Comparison Table of MSE

Function  | SS    | SureShrink | RBFNN | CV-Wavelets
Blocks    | 0.546 | 0.398      | 1.281 | 0.362
Heavysin  | 0.075 | 0.062      | 0.113 | 0.051
Doppler   | 0.205 | 0.145      | 0.287 | 0.116

References

A. Cohen, I. Daubechies, and J. C. Feauveau (1992), "Biorthogonal bases of compactly supported wavelets," Communications on Pure and Applied Mathematics, vol. 45, no. 5, pp. 485-560, June.
I. Daubechies (1992), Ten Lectures on Wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 61, SIAM, Philadelphia, PA.
D. L. Donoho (1995), "De-noising by soft-thresholding," IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613-627, May.
D. L. Donoho and I. M. Johnstone (1994), "Ideal spatial adaptation by wavelet shrinkage," Biometrika, vol. 81, no. 3, pp. 425-455, September.
G. P. Nason (1996), "Wavelet shrinkage using cross-validation," Journal of the Royal Statistical Society, Series B, vol. 58, pp. 463-479.
W. Sweldens (1995), "The lifting scheme: a custom-design construction of biorthogonal wavelets," Technical Report no. IMI 1994:7, Dept. of Mathematics, University of South Carolina.
W. Sweldens (1995), "The lifting scheme: a construction of second generation wavelets," Technical Report no. IMI 1995:6, Dept. of Mathematics, University of South Carolina.
G.
Wahba (1990), Spline Models for Observational Data, SIAM, Philadelphia, PA.
K. Wheeler (1996), Smoothing Non-uniform Data Samples With Wavelets, Ph.D. Thesis, University of Cincinnati, Dept. of Electrical and Computer Engineering, Cincinnati, OH.
|
1998
|
150
|
1,509
|
Where does the population vector of motor cortical cells point during reaching movements? Pierre Baraduc* pbaraduc@snv.jussieu.fr Emmanuel Guigon guigon@ccr.jussieu.fr Yves Burnod ybteam@ccr.jussieu.fr INSERM U483, Universite Pierre et Marie Curie, 9 quai St Bernard, 75252 Paris cedex 05, France

Abstract

Visually-guided arm reaching movements are produced by distributed neural networks within parietal and frontal regions of the cerebral cortex. Experimental data indicate that (1) single neurons in these regions are broadly tuned to parameters of movement; (2) appropriate commands are elaborated by populations of neurons; (3) the coordinated action of neurons can be visualized using a neuronal population vector (NPV). However, the NPV provides only a rough estimate of movement parameters (direction, velocity) and may even fail to reflect the parameters of movement when arm posture is changed. We designed a model of the cortical motor command to investigate the relation between the desired direction of the movement, the actual direction of movement, and the direction of the NPV in motor cortex. The model is a two-layer self-organizing neural network which combines broadly-tuned (muscular) proprioceptive and (cartesian) visual information to calculate (angular) motor commands for the initial part of the movement of a two-link arm. The network was trained by motor babbling in 5 positions. Simulations showed that (1) the network produced appropriate movement direction over a large part of the workspace; (2) small deviations of the actual trajectory from the desired trajectory existed at the extremities of the workspace; (3) these deviations were accompanied by large deviations of the NPV from both trajectories. These results suggest the NPV does not give a faithful image of cortical processing during arm reaching movements.

* to whom correspondence should be addressed

84 P. Baraduc, E. Guigon and Y.
Burnod

1 INTRODUCTION

When reaching to an object, our brain transforms a visual stimulus on the retina into a finely coordinated motor act. This complex process is subserved in part by distributed neuronal populations within parietal and frontal regions of the cerebral cortex (Kalaska and Crammond 1992). Neurons in these areas contribute to coordinate transformations by encoding target position and kinematic parameters of reaching movements in multiple frames of reference, and to the elaboration of motor commands by sending directional and positional signals to the spinal cord (Georgopoulos 1996). A ubiquitous feature of cortical populations is that most neurons are broadly tuned to a preferred attribute (e.g. direction) and that tuning curves are uniformly (or regularly) distributed in the attribute space (Georgopoulos 1996). Accordingly, a powerful tool to analyse cortical populations is the NPV, which describes the behavior of a whole population by a single vector (Georgopoulos 1996). Georgopoulos et al. (1986) have shown that the NPV calculated on a set of directionally tuned neurons in motor cortex points approximately (error ≈ 15°) in the direction of movement. However, the NPV may fail to indicate the correct direction of movement when the arm is in a particular posture (Scott and Kalaska 1995). These data raise two important questions: (1) How do populations of broadly tuned neurons learn to compute a correct sensorimotor transformation? Previous models (Burnod et al. 1992; Bullock et al. 1993; Salinas and Abbott 1995) provided partial solutions to this problem, but we still lack a model which closely matches physiological and psychophysical data on reaching movements. (2) Are cortical processes involved in the visual guidance of arm movements readable with the NPV tool? This article provides answers to these questions through a physiologically inspired model of sensorimotor transformations.
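The NPV computation itself is simple: each cell votes for its preferred direction, weighted by its firing rate. The sketch below (our illustration, not the paper's code) uses cosine-tuned units with rectified firing rates (rectification is our assumption, since rates cannot be negative) and shows that the NPV recovers the movement direction when preferred directions are uniformly distributed, but is biased when they are not, which is exactly the failure mode discussed here.

```python
import math

def npv(preferred_angles, movement_angle):
    """Neuronal population vector: sum of preferred-direction unit vectors
    weighted by each cell's (rectified, cosine-tuned) firing rate."""
    vx = vy = 0.0
    for a in preferred_angles:
        rate = max(0.0, math.cos(movement_angle - a))  # broad cosine tuning
        vx += rate * math.cos(a)
        vy += rate * math.sin(a)
    return math.atan2(vy, vx)

# Uniformly distributed preferred directions: the NPV recovers the movement.
uniform = [2 * math.pi * k / 50 for k in range(50)]
move = 0.7
assert abs(npv(uniform, move) - move) < 0.02

# Anisotropic preferred directions (all in the upper half plane): the NPV
# is pulled toward the region where the preferred directions concentrate.
skewed = [a / 2 for a in uniform]
bias = abs(npv(skewed, move) - move)
assert 0.05 < bias < 0.6
```

With the uniform population the vote imbalances cancel by symmetry; with the skewed one they do not, so the NPV misreports the movement even though the population activity is unchanged in form.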
2 MODEL OF THE VISUAL-TO-MOTOR TRANSFORMATION

2.1 ARM GEOMETRY

The arm model has voluntarily been chosen simple. It is a planar, two-link arm, with limited (160 degrees) joint excursion at shoulder and elbow. An agonist/antagonist pair is attached at each joint.

2.2 INPUT AND OUTPUT CODINGS

No cell is finely tuned to a specific input or output value, to mimic the broad tunings or monotonic firing characteristics found in cortical visuomotor areas.

2.2.1 Arm position

By analogy with the role of muscle spindles, proprioceptive sensors are assumed to code muscle length. Arm position is thus represented by the population activity of N_T = 20 neurons coding for the length of each agonist or antagonist. The activity of a sensor neuron k is defined by: r_k = σ_k(L_{n(k)}), where L_{n(k)} is the length of muscle number n(k) and σ_k is a piecewise linear sigmoid:

σ_k(L) = 0 for L ≤ λ_k,  (L − λ_k)/(Λ_k − λ_k) for λ_k < L < Λ_k,  1 for L ≥ Λ_k

Sensitivity thresholds λ_k are uniformly distributed in [L_min, L_max], and the dynamic range Λ_k − λ_k is taken constant, equal to L_max − L_min.

Population Coding of Reaching Movements 85

2.2.2 Desired direction

The direction V of the desired movement in visual space is coded by a population of N_x = 50 neurons with cosine tuning in cartesian space. Each visual neuron j thus fires as: x_j = V · V_j, V_j being the preferred direction of the cell. These 50 preferred directions are chosen uniformly distributed in 2-D space.

2.2.3 Motor Command

In an attempt to model the existence of muscular synergies (Lemon 1988), we identified the motor command with joint movement rather than with muscle contraction. A motor neuron i among N_t = 50 contributes to the effective movement M by its action on a synergy (direction in joint space) M_i. This collective effect is formally expressed by: M = Σ_i t_i M_i, where t_i is the activity of motor neuron i. The 50 directions of action M_i are supposed uniformly distributed in joint space.
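A minimal sketch of the sensor activation described in 2.2.1; the linear middle segment between the two thresholds is our reading of the garbled original (the text states only that σ_k is a piecewise linear sigmoid saturating at its thresholds).

```python
def sensor_activity(L, lam, Lam):
    """Piecewise linear sigmoid for a muscle-length sensor:
    0 below the lower threshold lam, a linear ramp between lam and Lam,
    saturating at 1 above the upper threshold Lam.
    The linear ramp is an assumption recovered from the text."""
    if L <= lam:
        return 0.0
    if L >= Lam:
        return 1.0
    return (L - lam) / (Lam - lam)

assert sensor_activity(0.2, 0.5, 1.5) == 0.0   # below threshold
assert sensor_activity(1.0, 0.5, 1.5) == 0.5   # mid-range, half activation
assert sensor_activity(2.0, 0.5, 1.5) == 1.0   # saturated
```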
3 NETWORK STRUCTURE AND LEARNING

3.1 STRUCTURE OF THE NETWORK

Figure 1: Network architecture (proprioceptive input to a somatic layer, combined with the desired (visual) direction by sigma-pi output units projecting to motor synergies).

Information concerning the position of the arm and the desired direction in cartesian space is combined asymmetrically (Fig. 1). First, an intermediate (somatic) layer of neurons forms an internal representation of the arm position by a combination of the input from the N_T muscle sensors and the lateral interactions inside the population. Activity in this layer is expressed by:

s_ij = Σ_k w_ijk r_k + Σ_p l_jp s_ip    (1)

where the lateral connections are: l_jp = cos(2π(j − p)/N_T). Equation 1 is self-referent, so the calculation is done in two steps. The feed-forward input first arrives at time zero, when there is no activity in the layer; the iterated action of the lateral connections comes into play when this feed-forward input vanishes. The activity in the somatic layer is then combined with the visual directional information by the output sigma-pi neurons as follows:

t_i = Σ_j x_j s_ij

3.2 WEIGHTS AND LEARNING

The only adjustable weights are the w_ijk linking the proprioceptive layer to the somatic layer. Connectivity is random and not complete: only 15% of the somatic neurons receive information on arm position. The visuomotor mapping is learnt by modifying the internal representation of the arm. Motor commands issued by the network are correlated with the visual effect of the movement ("motor babbling"). More precisely, the learning algorithm is a repetition of the following cycle: 1. choice of an arm position among 5 positions (stars on Fig. 2); 2. random emission of a motor command (t_i); 3. corresponding visual reafference (x_j); 4. weight modification according to a variant of the delta rule:

Δw_ijk ∝ (t_i x_j − s_ij) r_k

The random commands are gaussian distributions of activity over the output layer.
5000 learning epochs are sufficient to obtain a stabilized performance. It must be noted that the error between the ideal response of the network and the actual performance never decreases completely to zero, as the constraints of the visuomotor transformation vary over the workspace.

4 RESULTS

4.1 NETWORK PERFORMANCE

Correct learning of the mapping was tested at 21 positions in the workspace in a pointing task toward 16 uniformly distributed directions in cartesian space. Movement directions generated by the network are shown in Fig. 2 (the desired direction of 0 degrees is shown bold). The norm of the movement vectors depends on the global activity in the network, which varies with arm position and movement direction. Performance of the network is maximal near the learning positions. However, a good generalization is obtained (directional error 0.3°, SD 12.1°); a bias toward the shoulder can be observed in extreme right or left positions. A similar effect was observed in psychophysical experiments (Ghilardi et al. 1995).

Figure 2: Performance in a pointing task.

4.2 PREFERRED DIRECTIONS AND POPULATION VECTOR

4.2.1 Behavior of the population vector

Preferred directions (PD) of output units were computed using a multilinear regression; a perfect cosine tuning was found, which is a consequence of the exact multiplication in sigma-pi neurons. Then the population vector, the effective movement vector, and the desired movement were compared (Fig. 3) for two different arm configurations A and B marked on Fig. 2. The movement generated by the network (dashed arrow) is close to the desired one (dotted rays) for both arm configurations. However, the population vector (solid arrow) is not always aligned with the movement.

Figure 3: Actual movement and population vector in two arm positions (desired direction, contribution of one neuron, population vector, actual movement).
The discrepancy between movement and population vector depends both on the direction and the position of the arm: it is maximal for positions near the borders of the workspace, such as position B. Fig. 3 (position B) shows that the deviations of the population vector are due to the anisotropic distribution of the PDs in cartesian space at given positions.

4.2.2 Difference between direction of action and preferred direction

Marked anisotropy in the distribution of PDs is compatible with accurate performance. To see why, let us call "direction of action" (DA) the motor cell's contribution to the movement. The distribution of DAs presents an anisotropy due to the geometry of the arm. This anisotropy is canceled by the distribution of PDs. Mathematically, if U is an N × 2 matrix of uniformly distributed 2D vectors, the PD matrix is U J^{-1} whereas the DA matrix is U J^T, J being the Jacobian of the angular-to-cartesian mapping. The difference between DA and PD has been plotted with concentric arcs for four representative neurons at 21 arm positions in Fig. 4. The sign and magnitude of the difference vary continuously over the workspace and often exceed 45 degrees. It can also be noted that preferred directions rotate with the arm, as was experimentally noted by Caminiti et al. (1991).

Figure 4: Difference between direction of action and preferred direction for four units (clockwise vs. counterclockwise arcs).

5 DISCUSSION

We first asked how a network of broadly tuned neurons could produce visually guided arm movements. The model proposed here produces a correct behavior over the entire workspace. Biases were observed at the extreme right and left which closely resemble experimental data in humans (Ghilardi et al. 1995).
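The PD/DA distinction can be checked numerically. The sketch below (our construction; the link lengths and joint angles are arbitrary choices) builds the Jacobian J of a planar two-link arm and compares, for one vector u, the PD direction u J^{-1} with the DA direction u J^T; away from special configurations the two disagree.

```python
import math

def jacobian(theta1, theta2, l1=1.0, l2=1.0):
    """Jacobian of the angular-to-cartesian map for a planar two-link arm
    (hand at x = l1 cos t1 + l2 cos(t1+t2), y = l1 sin t1 + l2 sin(t1+t2))."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s12, c12 = math.sin(theta1 + theta2), math.cos(theta1 + theta2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [l1 * c1 + l2 * c12, l2 * c12]]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

def transpose(M):
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

def row_times(u, M):
    """Row vector u times 2x2 matrix M."""
    return (u[0] * M[0][0] + u[1] * M[1][0], u[0] * M[0][1] + u[1] * M[1][1])

def angle(v):
    return math.atan2(v[1], v[0])

J = jacobian(0.4, 1.1)                  # an arbitrary arm configuration
u = (math.cos(0.3), math.sin(0.3))      # one uniformly chosen 2D vector
pd = row_times(u, inv2(J))              # preferred direction, from u J^{-1}
da = row_times(u, transpose(J))         # direction of action, from u J^T
diff = abs(angle(pd) - angle(da))
assert diff > 0.1                       # PD and DA generically disagree
```

Since PD and DA differ by the (configuration-dependent) factor (J J^T)^{-1}, the gap between them rotates and grows as the arm moves, matching the Fig. 4 observation that it varies continuously over the workspace.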
Single cells in the output layer behave as motor cortical cells do, and the NPV of these cells correctly indicated the direction of movement for hand positions in the central region of the workspace (see Caminiti et al. 1991). Models of sensorimotor transformations have already been proposed. However, they either considered motor synergies in cartesian coordinates (Burnod et al. 1992), or used sharply tuned units (Bullock et al. 1993), or motor effects independent of arm position (Salinas and Abbott 1995). Next, the use of the NPV to describe cortical activity was questioned. A fundamental assumption in the calculation of the NPV is that the PD of a neuron is the direction in which the arm would move if the neuron were stimulated. The model shows that the two directions DA and PD do not necessarily coincide, which is probably the case in motor cortex (Scott and Kalaska 1995). It follows that the NPV often points neither in the actual movement direction nor in the desired movement direction (target direction), especially for unusual arm configurations. A maximum-likelihood estimator does not have these flaws; it would however accurately predict the desired movement out of the output unit activities, even for a wrong actual movement. In conclusion: (1) the NPV does not provide a faithful image of cortical visuomotor processes; (2) a correct NPV should be based on the DAs, which cannot easily be determined experimentally; (3) planning of trajectories in space cannot be realized by the successive recruitment of motor neurons whose PDs sequentially describe the movement.

References

Bullock, D., S. Grossberg, and F. Guenther (1993). A self-organizing neural model of motor equivalent reaching and tool use by a multijoint arm. J Cogn Neurosci 5(4), 408-435.

Burnod, Y., P. Grandguillaume, I. Otto, S. Ferraina, P. Johnson, and R. Caminiti (1992).
Visuomotor transformations underlying arm movements toward visual targets: a neural network model of cerebral cortical operations. J Neurosci 12(4), 1435-53.

Caminiti, R., P. Johnson, C. Galli, S. Ferraina, and Y. Burnod (1991). Making arm movements within different parts of space: the premotor and motor cortical representation of a coordinate system for reaching to visual targets. J Neurosci 11(5), 1182-97.

Georgopoulos, A. (1996). On the translation of directional motor cortical commands to activation of muscles via spinal interneuronal systems. Brain Res Cogn Brain Res 3(2), 151-5.

Georgopoulos, A., A. Schwartz, and R. Kettner (1986). Neuronal population coding of movement direction. Science 233(4771), 1416-9.

Ghilardi, M., J. Gordon, and C. Ghez (1995). Learning a visuomotor transformation in a local area of work space produces directional biases in other areas. J Neurophysiol 73(6), 2535-9.

Kalaska, J. and D. Crammond (1992). Cerebral cortical mechanisms of reaching movements. Science 255(5051), 1517-23.

Lemon, R. (1988). The output map of the primate motor cortex. Trends Neurosci 11(11), 501-6.

Salinas, E. and L. Abbott (1995). Transfer of coded information from sensory to motor networks. J Neurosci 15(10), 6461-74.

Scott, S. and J. Kalaska (1995). Changes in motor cortex activity during reaching movements with similar hand paths but different arm postures. J Neurophysiol 73(6), 2563-7.
|
1998
|
151
|
1,510
|
Probabilistic Modeling for Face Orientation Discrimination: Learning from Labeled and Unlabeled Data Shumeet Baluja baluja@cs.cmu.edu Justsystem Pittsburgh Research Center & School of Computer Science, Carnegie Mellon University

Abstract

This paper presents probabilistic modeling methods to solve the problem of discriminating between five facial orientations with very little labeled data. Three models are explored. The first model maintains no inter-pixel dependencies, the second model is capable of modeling a set of arbitrary pair-wise dependencies, and the last model allows dependencies only between neighboring pixels. We show that for all three of these models, the accuracy of the learned models can be greatly improved by augmenting a small number of labeled training images with a large set of unlabeled images using Expectation-Maximization. This is important because it is often difficult to obtain image labels, while many unlabeled images are readily available. Through a large set of empirical tests, we examine the benefits of unlabeled data for each of the models. By using only two randomly selected labeled examples per class, we can discriminate between the five facial orientations with an accuracy of 94%; with six labeled examples, we achieve an accuracy of 98%.

1 Introduction

This paper examines probabilistic modeling techniques for discriminating between five face orientations: left profile, left semi-profile, frontal, right semi-profile, and right profile. Three models are explored: the first model represents no inter-pixel dependencies, the second model is capable of modeling a set of arbitrary pair-wise dependencies, and the last model allows dependencies only between neighboring pixels. Models which capture inter-pixel dependencies can provide better classification performance than those that do not capture dependencies.
The difficulty in using the more complex models, however, is that as more dependencies are modeled, more parameters must be estimated, which requires more training data. We show that by using Expectation-Maximization, the accuracy of what is learned can be greatly improved by augmenting a small number of labeled training images with unlabeled images, which are much easier to obtain. The remainder of this section describes the problem of face orientation discrimination in detail. Section 2 provides a brief description of the probabilistic models explored. Section 3 presents results with these models with varying amounts of training data. Also shown is how Expectation-Maximization can be used to augment the limited labeled training data with unlabeled training data. Section 4 briefly discusses related work. Finally, Section 5 closes the paper with conclusions and suggestions for future work.

Probabilistic Modeling for Face Orientation Discrimination 855

1.1 Detailed Problem Description

The interest in face orientation discrimination arises from two areas. First, the rapid increase in the availability of inexpensive cameras makes it practical to create systems which automatically monitor a person while using a computer. By using motion, color, and size cues, it is possible to quickly find and segment a person's face when he/she is sitting in front of a computer monitor. By determining whether the person is looking directly at the computer or is staring away from the computer, we can provide feedback to any user interface that could benefit from knowing whether a user is paying attention or is distracted (such as computer-based tutoring systems for children, computer games, or even car-mounted cameras that monitor drivers). Second, to perform accurate face detection for use in video-indexing or content-based image retrieval systems, one approach is to design detectors specific to each face orientation, such as [Rowley et al., 1998, Sung 1996].
Rather than applying all detectors to every location, a face-orientation system can be applied to each candidate face location to "route" the candidate to the appropriate detector, thereby reducing the potential for false positives, and also reducing the computational cost of applying each detector. This approach was taken in [Rowley et al., 1998]. For the experiments in this paper, each image to be classified is 20x20 pixels. The face is centered in the image, and comprises most of the image. Sample faces are shown in Figure 1. Empirically, our experiments show that accurate pose discrimination is possible from binary versions of the images. First, the images were histogram-equalized to values between 0 and 255. This is a standard non-linear transformation that maps an approximately equal number of pixels to each value within the 0-255 range. It is used to improve the contrast in images. Second, to "binarize" the images, pixels with intensity above 128 were mapped to a value of 255; otherwise, the pixels were mapped to a value of 0.

Figure 1: 4 images of each of the 5 classes to be discriminated (frontal, right half profile, right profile, left half profile, left profile). Note the variability in the images. Left: Original images. Right: Images after histogram equalization and binary quantization.

2 Methods Explored

This section provides a description of the probabilistic models explored: Naive-Bayes, Dependency Trees (as proposed by [Chow and Liu, 1968]), and a dependence network which models dependencies only between neighboring pixels. For more details on using Bayesian "multinets" (independent networks trained to model each class) for classification in a manner very similar to that used in this paper, see [Friedman et al., 1997].
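The preprocessing just described (histogram equalization to 0-255 followed by binarization at 128) can be sketched as follows; the rank-based equalization formula is a standard choice and may differ in detail from the one the authors used.

```python
def histogram_equalize(pixels, levels=256):
    """Map pixel values so that roughly equal numbers of pixels fall at
    each output level, using the empirical cumulative distribution."""
    n = len(pixels)
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    lut, running = {}, 0
    for value in sorted(counts):
        running += counts[value]
        lut[value] = round((levels - 1) * running / n)
    return [lut[p] for p in pixels]

def binarize(pixels, cut=128):
    """Pixels with equalized intensity above the cut become 255, others 0."""
    return [255 if p > cut else 0 for p in pixels]

img = [10, 10, 12, 200, 201, 202, 203, 255]
binary = binarize(histogram_equalize(img))
assert set(binary) <= {0, 255}   # the output is a binary image
```

In the paper this reduction to one bit per pixel is what makes the pixelwise probability tables in the next section binomial rather than multinomial.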
2.1 The Naive-Bayes Model

The first, and simplest, model assumes that each pixel is independent of every other pixel. Although this assumption is clearly violated in real images, the model often yields good results with limited training data since it requires the estimation of the fewest parameters.

856 S. Baluja

Assuming that each image belongs exclusively to one of the five face classes to be discriminated, the probability of the image belonging to a particular class is given as follows:

P(Class_c | Image) = P(Image | Class_c) × P(Class_c) / P(Image)

P(Image | Class_c) = Π_{i=1}^{400} P(Pixel_i | Class_c)

P(Pixel_i | Class_c) is estimated directly from the training data by:

P(Pixel_i | Class_c) = (k + Σ_{TrainingImages} Pixel_i × P(Class_c | Image)) / (2k + Σ_{TrainingImages} P(Class_c | Image))

Since we are only counting examples from the training images, P(Class_c | Image) is known. The notation P(Class_c | Image) is used to represent image labels because it is convenient for describing the counting process with both labeled and unlabeled data (this will be described in detail in Section 3). With the labeled data, P(Class_c | Image) ∈ {0, 1}. Later, P(Class_c | Image) may not be binary; instead, the probability mass may be divided between classes. Pixel_i ∈ {0, 1} since the images are binary. k is a smoothing constant, set to 0.001. When used for classification, we compute the posterior probabilities and take the maximum, c_predicted, where:

c_predicted = argmax_c P(Class_c | Image) ∝ P(Image | Class_c)

For simplicity, P(Class_c) is assumed equal for all c; P(Image) is a normalization constant which can be ignored since we are only interested in finding the maximum posterior probability.

2.2 Optimal Pair-Wise Dependency Trees

We wish to model a probability distribution P(X_1, ..., X_400 | Class_c), where each X corresponds to a pixel in the image.
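Before turning to the tree models, the smoothed counting and argmax rule of the Naive-Bayes model (Section 2.1) can be sketched directly; k = 0.001 as in the paper, while the tiny two-class, four-pixel dataset is invented purely for illustration.

```python
def train_naive_bayes(images, labels, classes, k=0.001):
    """Estimate P(Pixel_i = 1 | class) with the smoothed counts
    (k + sum of pixel values) / (2k + number of class examples),
    assuming hard labels (the labeled-data case)."""
    model = {}
    for c in classes:
        members = [img for img, lab in zip(images, labels) if lab == c]
        n = len(members)
        model[c] = [(k + sum(img[i] for img in members)) / (2 * k + n)
                    for i in range(len(images[0]))]
    return model

def classify(model, image):
    """argmax_c of prod_i P(Pixel_i | class), with equal class priors."""
    def likelihood(c):
        p = 1.0
        for pix, theta in zip(image, model[c]):
            p *= theta if pix == 1 else (1.0 - theta)
        return p
    return max(model, key=likelihood)

images = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
labels = ["left", "left", "right", "right"]
model = train_naive_bayes(images, labels, ["left", "right"])
assert classify(model, [1, 1, 0, 0]) == "left"
assert classify(model, [0, 0, 1, 1]) == "right"
```

With soft labels, the hard membership test would be replaced by weighting each image's pixel counts by P(Class_c | Image), which is the form the update takes inside EM in Section 3.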
Instead of assuming pixel independence, we restrict our model to the following form:

P(X_1 ... X_n | Class_c) = Π_{i=1}^{n} P(X_i | Π_{X_i}, Class_c)

where Π_{X_i} is X_i's single "parent" variable. We require that there be no cycles in these "parent-of" relationships: formally, there must exist some permutation m = (m_1, ..., m_n) of (1, ..., n) such that (Π_{X_i} = X_j) implies m(j) < m(i) for all i. In other words, we restrict P' to factorizations representable by Bayesian networks in which each node (except the root) has one parent, i.e., tree-shaped graphs. A method for finding the optimal model within these restrictions is presented in [Chow and Liu, 1968]. A complete weighted graph G is created in which each variable X_i is represented by a corresponding vertex V_i, and in which the weight w_ij for the edge between vertices V_i and V_j is set to the mutual information I(X_i, X_j) between X_i and X_j. The edges in the maximum spanning tree of G determine an optimal set of (n−1) conditional probabilities with which to construct a tree-based model of the original probability distribution. We calculate the probabilities P(X_i) and P(X_i, X_j) directly from the dataset. From these, we calculate the mutual information I(X_i, X_j) between all pairs of variables X_i and X_j:

I(X_i, X_j) = Σ_{a,b} P(X_i = a, X_j = b) · log [ P(X_i = a, X_j = b) / (P(X_i = a) · P(X_j = b)) ]

The maximum spanning tree minimizes the Kullback-Leibler divergence D(P || P') between the true and estimated distributions:

D(P || P') = Σ_X P(X) log (P(X) / P'(X))

as shown in [Chow & Liu, 1968]. Among all distributions of the same form, this distribution maximizes the likelihood of the data when the data is a set of empirical observations drawn from any unknown distribution.
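The Chow-Liu construction above can be sketched compactly for binary pixels: estimate pairwise mutual information from data, then grow a maximum spanning tree over those edge weights (Prim's algorithm here, which is our choice; any maximum-spanning-tree algorithm works).

```python
import math
from itertools import combinations

def mutual_information(data, i, j):
    """I(X_i, X_j) from empirical counts over binary samples."""
    n = len(data)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pab = sum(1 for row in data if row[i] == a and row[j] == b) / n
            pa = sum(1 for row in data if row[i] == a) / n
            pb = sum(1 for row in data if row[j] == b) / n
            if pab > 0:
                mi += pab * math.log(pab / (pa * pb))
    return mi

def chow_liu_tree(data):
    """Maximum spanning tree over mutual-information edge weights.

    Returns (parent, child) edges, grown outward from variable 0 by
    repeatedly attaching the highest-MI edge leaving the current tree."""
    n_vars = len(data[0])
    weight = {frozenset(e): mutual_information(data, *e)
              for e in combinations(range(n_vars), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:
        best = max(((u, v) for u in in_tree
                    for v in range(n_vars) if v not in in_tree),
                   key=lambda e: weight[frozenset(e)])
        edges.append(best)
        in_tree.add(best[1])
    return edges

# X1 copies X0 exactly, X2 is unrelated: the tree should link 0 and 1.
data = [(0, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1), (1, 1, 0), (0, 0, 1)]
edges = chow_liu_tree(data)
assert (0, 1) in edges
```

For the 400-pixel images this means estimating all 400·399/2 pairwise tables per class, which is why the paper notes the model is the most data-hungry and the most prone to spurious dependencies when labels are scarce.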
The most obvious dependencies to model are each pixel's eight neighbors. The dependencies are shown graphically in Figure 2 (left). The difficulty with this representation is that two pixels may be dependent upon each other (if the above model were represented as a Bayesian network, it would contain cycles). Therefore, to avoid problems with circular dependencies, we use the following model instead. Each pixel is still connected to each of its eight neighbors; however, the arcs are directed such that the dependencies are acyclic. In this local dependence network, each pixel is only dependent on four of its neighbors: the three neighbors to the right and the one immediately below. The dependencies which are modeled are shown graphically in Figure 2 (right). The dependencies are:

P(Image | Class_c) = \prod_{i=1}^{400} P(Pixel_i | \Pi_{Pixel_i}, Class_c)

Figure 2: Diagram of the dependencies maintained. Each square represents a pixel in the image. Dependencies are shown only for two pixels. (Left) Model with 8 dependencies - note that because this model has circular dependencies, we do not use it. Instead, we use the model shown on the right. (Right) Model used has 4 dependencies per pixel. By imposing an ordering on the pixels, circular dependencies are avoided.

3 Performance with Labeled and Unlabeled Data

In this section, we compare the results of the three probabilistic models with varying amounts of labeled training data. The training set consists of between 1 and 500 labeled training examples, and the testing set contains 5500 examples. Each experiment is repeated at least 20 times with random train/test splits of the data.

3.1 Using only Labeled Data

In this section, experiments are conducted with only labeled data. Figure 3 (left) shows each model's accuracy in classifying the images in the test set into the five classes.
As expected, as more training data is used, the performance improves for all models. Note that the model with no dependencies performs the best when there is little data. However, as the amount of data increases, the relative performance of this model, compared to the other models which account for dependencies, decreases. It is interesting to note that when there is little data, the Dependency Trees perform poorly. Since these trees can select dependencies between any two pixels, they are the most susceptible to finding spurious dependencies. However, as the amount of data increases, the performance of this model rapidly improves. By using all of the labeled data (500 examples total), the Dependency Tree and the Local-Dependence network perform approximately the same, achieving a correct classification rate of approximately 99%.

Figure 3: Performance of the three models. X Axis: Amount of labeled training data used. Y Axis: Percent correct on an independent test set. In the left graph, only labeled data was used. In the right graph, unlabeled and labeled data was used (the total number of examples was 500, with varying amounts of labeled data).

3.2 Augmenting the Models with Unlabeled Data

We can augment what is learned from only using the labeled examples by incorporating unlabeled examples through the use of the Expectation-Maximization (EM) algorithm. Although the details of EM are beyond the scope of this paper, the resulting algorithm is easily described (for a description of EM and applications to filling in missing values, see [Dempster et al., 1977] and [Ghahramani & Jordan, 1994]): 1.
Build the models using only the labeled data (as in Section 2). 2. Use the models to probabilistically label the unlabeled images. 3. Using the images with the probabilistically assigned labels, and the images with the given labels, recalculate the models' parameters. As mentioned in Section 2, for the images labeled by this process, P(Class_c | Image) is not restricted to {0, 1}; the probability mass for an image may be spread to multiple classes. 4. If a pre-specified termination condition is not met, go to step 2. This process is used for each classifier. The termination condition was five iterations; after five iterations, there was little change in the models' parameters. The performance of the three classifiers with unlabeled data is shown in Figure 3 (right). Note that with small amounts of data, the performance of all of the classifiers improved dramatically when the unlabeled data is used. Figure 4 shows the percent improvement by using the unlabeled data to augment the labeled data. Note that the error is reduced by almost 90% with the use of unlabeled data (see the case with Dependency Trees with only 4 labeled examples, in which the accuracy rates increase from 44% to 92.5%). With only 50 labeled examples, a classification accuracy of 99% was obtained. This accuracy was obtained with almost an order of magnitude fewer labeled examples than required with classifiers which used only labeled examples. In almost every case examined, the addition of unlabeled data helped performance. However, unlabeled data actually hurt the no-dependency model when a large amount of labeled data already existed. With large amounts of labeled data, the parameters of the model were estimated well. Incorporating unlabeled data may have hurt performance because the underlying generative process modeled did not match the real generative process.
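Steps 1-4 can be sketched for the naive-Bayes model as follows. This is an illustrative sketch under the paper's stated assumptions (binary pixels, smoothing constant k, a fixed five EM iterations); the function name and array layout are mine, not the author's:

```python
import numpy as np

def em_naive_bayes(x_lab, y_lab, x_unlab, n_classes, n_iters=5, k=0.001):
    """EM for a naive-Bayes pixel model with labeled + unlabeled data.

    x_lab, x_unlab: binary arrays (n, n_pixels); y_lab: integer labels.
    """
    x_all = np.vstack([x_lab, x_unlab])
    # P(class | image): one-hot for labeled rows, uniform to start for
    # unlabeled rows (step 1 uses only the labeled mass meaningfully).
    resp = np.vstack([np.eye(n_classes)[y_lab],
                      np.full((len(x_unlab), n_classes), 1.0 / n_classes)])
    for _ in range(n_iters):
        # Steps 1/3: (re)estimate parameters from the current soft labels.
        p = (k + resp.T @ x_all) / (2 * k + resp.sum(axis=0))[:, None]
        # Step 2: probabilistically relabel only the unlabeled images.
        log_lik = (x_unlab @ np.log(p).T
                   + (1 - x_unlab) @ np.log(1 - p).T)
        post = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
        resp[len(x_lab):] = post / post.sum(axis=1, keepdims=True)
    return p, resp
```

The labeled rows of `resp` stay fixed at {0, 1}, while the unlabeled rows carry the divided probability mass described above.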
Therefore, the additional data provided may not have been labeled with the accuracy required to improve the model's classification performance. It is interesting to note that with the more complex models, such as the dependency trees or local dependence networks, even with the same amount of labeled data, unlabeled data improved performance. [Nigam et al., 1998] have reported similar performance degradation when using a large number of labeled examples and EM with a naive-Bayesian model to classify text documents. They describe two methods for overcoming this problem. First, they adjust the relative weight of the labeled and unlabeled data in the M-step by using cross-validation. Second, they provide multiple centroids per class, which improves the data/model fit. Although not presented here due to space limitations, the first method was attempted - it improved the performance on the face orientation discrimination task.

Figure 4: Improvement for each model by using unlabeled data to augment the labeled data. Left: with only 1 labeled example, Middle: 4 labeled, Right: 50 labeled. The bars in light gray represent the performance with only labeled data, the dark bars indicate the performance with the unlabeled data. The number in parentheses indicates the absolute (in contrast to relative) percentage change in classification performance with the use of unlabeled data.

4 Related Work

There is a large amount of work which attempts to discover attributes of faces, including (but not limited to) face detection, face expression discrimination, face recognition, and face orientation discrimination (for example [Rowley et al., 1998][Sung, 1996][Bartlett & Sejnowski, 1997][Cottrell & Metcalfe, 1991][Turk & Pentland, 1991]).
The work presented in this paper demonstrates the effective incorporation of unlabeled data into image classification procedures; it should be possible to use unlabeled data in any of these tasks. The closest related work is presented in [Nigam et al., 1998]. They used naive-Bayes methods to classify text documents into a pre-specified number of groups. By using unlabeled data, they achieve significant classification performance improvement over using labeled documents alone. Other work which has employed EM for learning from labeled and unlabeled data includes [Miller and Uyar, 1997], who used a mixture of experts classifier, and [Shahshahani & Landgrebe, 1994], who used a mixture of Gaussians. However, the dimensionality of their input was at least an order of magnitude smaller than used here. There is a wealth of other related work, such as [Ghahramani & Jordan, 1994], who have used EM to fill in missing values in the training examples. In their work, class labels can be regarded as another feature value to fill in. Other approaches to reducing the need for large amounts of labeled data take the form of active learning, in which the learner can ask for the labels of particular examples. [Cohn et al., 1996] and [McCallum & Nigam, 1998] provide good overviews of active learning.

5 Conclusions & Future Work

This paper has made two contributions. The first contribution is to solve the problem of discriminating between five face orientations with very little data. With only two labeled example images per class, we were able to obtain classification accuracies of 94% on separate test sets (with the local dependence networks with 4 parents). With only a few more examples, this was increased to greater than 98% accuracy. This task has a range of applications in the design of user-interfaces and user monitoring. We also explored the use of multiple probabilistic models with unlabeled data.
The models varied in their complexity, ranging from modeling no dependencies between pixels, to modeling four dependencies per pixel. While the no-dependency model performs well with very little labeled data, when given a large amount of labeled data, it is unable to match the performance of the other models presented. The Dependency-Tree models perform the worst when given small amounts of data because they are most susceptible to finding spurious dependencies in the data. The local dependency models performed the best overall, both by working well with little data, and by being able to exploit more data, whether labeled or unlabeled. By using EM to incorporate unlabeled data into the training of the classifiers, we improved the performance of the classifiers by up to approximately 90% when little labeled data was available. The use of unlabeled data is vital in this domain. It is time-consuming to hand label many images, but many unlabeled images are often readily available. Because many similar tasks, such as face recognition and facial expression discrimination, suffer from the same problem of limited labeled data, we hope to apply the methods described in this paper to these applications. Preliminary results on related recognition tasks have been promising.

Acknowledgments

Scott Davies helped tremendously with discussions about modeling dependencies. I would also like to acknowledge the help of Andrew McCallum for discussions of EM, unlabeled data and the related work. Many thanks are given to Henry Rowley who graciously provided the data set. Finally, thanks are given to Kaari Flagstad for comments on drafts of this paper.

References

Bartlett, M. & Sejnowski, T. (1997) "Viewpoint Invariant Face Recognition using ICA and Attractor Networks", in Adv. in Neural Information Processing Systems (NIPS) 9.

Chow, C. & Liu, C. (1968) "Approximating Discrete Probability Distributions with Dependence Trees", IEEE Transactions on Information Theory, 14: 462-467.
Cohn, D. A., Ghahramani, Z. & Jordan, M. (1996) "Active Learning with Statistical Models", Journal of Artificial Intelligence Research 4: 129-145.

Cottrell, G. & Metcalfe, J. (1991) "Face, Gender and Emotion Recognition using Holons", NIPS 3.

Dempster, A. P., Laird, N. M. & Rubin, D. B. (1977) "Maximum Likelihood from Incomplete Data via the EM Algorithm", J. Royal Statistical Society Series B, 39: 1-38.

Friedman, N., Geiger, D. & Goldszmidt, M. (1997) "Bayesian Network Classifiers", Machine Learning 29.

Ghahramani, Z. & Jordan, M. (1994) "Supervised Learning from Incomplete Data Via an EM Approach", NIPS 6.

McCallum, A. & Nigam, K. (1998) "Employing EM in Pool-Based Active Learning", in ICML-98.

Miller, D. & Uyar, H. (1997) "A Mixture of Experts Classifier with Learning based on both Labeled and Unlabeled data", in Adv. in Neural Information Processing Systems 9.

Nigam, K., McCallum, A., Thrun, S. & Mitchell, T. (1998) "Learning to Classify Text from Labeled and Unlabeled Examples", to appear in AAAI-98.

Rowley, H., Baluja, S. & Kanade, T. (1998) "Neural Network-Based Face Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 20, No. 1, January 1998.

Shahshahani, B. & Landgrebe, D. (1994) "The Effect of Unlabeled Samples in Reducing the Small Sample Size Problem and Mitigating the Hughes Phenomenon", IEEE Trans. on Geosc. and Remote Sensing 32.

Sung, K. K. (1996) Learning and Example Selection for Object and Pattern Detection, Ph.D. Thesis, MIT AI Lab - AI Memo 1572.

Turk, M. & Pentland, A. (1991) "Eigenfaces for Recognition", J. Cog. Neurosci. 3 (1).
|
1998
|
16
|
1,511
|
Direct Optimization of Margins Improves Generalization in Combined Classifiers

Llew Mason, Peter Bartlett, Jonathan Baxter
Department of Systems Engineering
Australian National University, Canberra, ACT 0200, Australia
{lmason, bartlett, jon}@syseng.anu.edu.au

Abstract

Cumulative training margin distributions for AdaBoost versus our "Direct Optimization Of Margins" (DOOM) algorithm. The dark curve is AdaBoost, the light curve is DOOM. DOOM sacrifices significant training error for improved test error (horizontal marks on the margin = 0 line).

1 Introduction

Many learning algorithms for pattern classification minimize some cost function of the training data, with the aim of minimizing error (the probability of misclassifying an example). One example of such a cost function is simply the classifier's error on the training data. Recent results have examined alternative cost functions that provide better error estimates in some cases. For example, results in [Bar98] show that the error of a sigmoid network classifier f(\cdot) is no more than the sample average of the cost function sgn(\theta - yf(x)) (which takes value 1 when yf(x) is no more than \theta and 0 otherwise) plus a complexity penalty term that scales as ||w||_1/\theta, where (x, y) \in X \times {\pm 1} is a labelled training example, and ||w||_1 is the sum of the magnitudes of the output node weights. The quantity yf(x) is the margin of the real-valued function f, and reflects the extent to which f(x) agrees with the label y \in {\pm 1}. By minimizing squared error, neural network learning algorithms implicitly maximize margins, which may explain their good generalization performance. More recently, Schapire et al [SFBL98] have shown a similar result for convex combinations of classifiers, such as those produced by boosting algorithms.
They show that, with high probability over m random examples, every convex combination of classifiers from some finite class H has error satisfying

Pr[yf(x) \le 0] \le E_S[sgn(\theta - yf(x))] + O( (1/\sqrt{m}) ( (\log m \log|H|)/\theta^2 + \log(1/\delta) )^{1/2} )    (1)

for all \theta > 0, where E_S denotes the average over the sample S. One way to think of these results is as a technique for adjusting the effective complexity of the function class by adjusting \theta. Large values of \theta correspond to low complexity and small values to high complexity. If the learning algorithm were to optimize the parametrized cost function E_S sgn(\theta - yf(x)) for large values of \theta, it would not be able to make fine distinctions between different functions in the class, and so the effective complexity of the class would be reduced. The second term in the error bounds (the regularization term involving the complexity parameter \theta and the size of the base hypothesis class H) would be correspondingly reduced. In both the neural network and boosting settings, the learning algorithms do not directly minimize these cost functions; we use different values of the complexity parameter in the cost functions only in explaining their generalization performance. In this paper, we address the question: what are suitable cost functions for convex combinations of classifiers? In the next section, we give general conditions on parametrized families of cost functions that ensure that they can be used to give error bounds for convex combinations of classifiers. In the remainder of the paper, we investigate learning algorithms that choose the convex coefficients of a combined classifier by minimizing a suitable family of piecewise linear cost functions using gradient descent.
Even when the base hypotheses are chosen by the AdaBoost algorithm, and we only use the new cost functions to adjust the convex coefficients, we obtained an improvement on the test error of AdaBoost in all but one of the UC Irvine data sets we used. Margin distribution plots show that in many cases the algorithm achieves these lower errors by sacrificing training error, in the interests of reducing the new cost function.

2 Theory

In this section, we derive an error bound that generalizes the result for convex combinations of classifiers described in the previous section. The result involves a family of margin cost functions (functions mapping from the interval [-1, 1] to \mathbb{R}^+), indexed by an integer-valued complexity parameter N, which measures the resolution at which we examine the margins. The following definition gives conditions on the margin cost functions that relate the complexity N to the amount by which the margin cost function is larger than the function sgn(-yf(x)). The particular form of this definition is not important. In particular, the functions \Psi_N are only used in the analysis in this section, and will not concern us later in the paper.

Definition 1 A family {C_N : N \in \mathbb{N}} of margin cost functions is B-admissible for B \ge 0 if for all N \in \mathbb{N} there is an interval Y \subset \mathbb{R} of length no more than B and a function \Psi_N : [-1, 1] \to Y that satisfies

sgn(-\alpha) \le E_{Z \sim Q_{N,\alpha}}(\Psi_N(Z)) \le C_N(\alpha)

for all \alpha \in [-1, 1], where E_{Z \sim Q_{N,\alpha}}(\cdot) denotes the expectation when Z is chosen randomly as Z = (1/N) \sum_{i=1}^{N} Z_i with Z_i \in {-1, 1} and Pr(Z_i = 1) = (1 + \alpha)/2.

As an example, let C_N(\alpha) = sgn(\theta - \alpha) + c, for \theta = 1/\sqrt{N} and some constant c. This is a B-admissible family of margin cost functions, for suitably large B. (This is exhibited by the functions \Psi_N(\alpha) = sgn(\theta/2 - \alpha) + c/2; the proof involves Chernoff bounds.) Clearly, for larger values of N, the cost functions C_N are closer to the threshold function sgn(-\alpha).
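The example family can also be checked numerically: since Z = (1/N) \sum Z_i is a rescaled binomial, E_{Z \sim Q_{N,\alpha}}[\Psi_N(Z)] is a finite sum, so the admissibility inequality sgn(-\alpha) \le E[\Psi_N(Z)] \le C_N(\alpha) can be verified on a grid of \alpha. A sketch; the choice c = 1 is mine (the text only asks for "some constant c"), and sgn here is the paper's 0/1 step function:

```python
import math

def sgn(x):
    """The paper's step function: 1 when x >= 0, else 0."""
    return 1.0 if x >= 0 else 0.0

def expected_psi(alpha, n, theta, c):
    """E[Psi_N(Z)] for Z = (1/N) sum Z_i, Z_i in {-1, 1},
    Pr(Z_i = 1) = (1 + alpha)/2, Psi_N(a) = sgn(theta/2 - a) + c/2."""
    p = (1.0 + alpha) / 2.0
    total = 0.0
    for b in range(n + 1):                 # b = number of +1 draws
        z = (2.0 * b - n) / n
        pmf = math.comb(n, b) * p**b * (1 - p)**(n - b)
        total += pmf * (sgn(theta / 2 - z) + c / 2)
    return total
```

Sweeping alpha over [-1, 1] with N = 100 and theta = 1/sqrt(N) confirms the sandwich sgn(-alpha) <= E[Psi_N(Z)] <= C_N(alpha) for c = 1.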
Inequality (1) is implied by the following theorem. In this theorem, co(H) is the set of convex combinations of functions from H. A similar proof gives the same result with VCdim(H) \ln m replacing \ln|H|.

Theorem 2 For any B-admissible family {C_N : N \in \mathbb{N}} of margin cost functions, any finite hypothesis class H and any distribution P on X \times {-1, 1}, with probability at least 1 - \delta over a random sample S of m labelled examples chosen according to P, every N and every f in co(H) satisfies

Pr[yf(x) \le 0] < E_S[C_N(yf(x))] + \sqrt{ (B^2 / 2m) ( N \ln|H| + \ln(N(N+1)/\delta) ) }.

Proof. Fix N and f \in co(H), and suppose that f = \sum_i a_i h_i for h_i \in H. Define co_N(H) = { (1/N) \sum_{j=1}^{N} h_j : h_j \in H }, and notice that |co_N(H)| \le |H|^N. As in the proof of (1) in [SFBL98], we show using the probabilistic method that there is a function g in co_N(H) that closely approximates f. Let Q be the distribution on co_N(H) corresponding to the average of N independent draws from {h_i} according to the distribution {a_i}, and let Q_{N,\alpha} be the distribution given in Definition 1. Then for any fixed pair x, y, when g is chosen according to Q the distribution of yg(x) is Q_{N,yf(x)}. Now, fix the function \Psi_N implied by the B-admissibility condition. By the definition of B-admissibility,

E_{g \sim Q} E_P[\Psi_N(yg(x))] = E_P E_{Z \sim Q_{N,yf(x)}}[\Psi_N(Z)] \ge E_P sgn(-yf(x)) = P[yf(x) \le 0].

Similarly, E_S[C_N(yf(x))] \ge E_{g \sim Q} E_S[\Psi_N(yg(x))]. Hence, if Pr[yf(x) \le 0] - E_S[C_N(yf(x))] \ge \epsilon_N, then

E_{g \sim Q}[ E_P[\Psi_N(yg(x))] - E_S[\Psi_N(yg(x))] ] \ge \epsilon_N.

Thus,

Pr[ \exists f \in co(H) : Pr[yf(x) \le 0] \ge E_S[C_N(yf(x))] + \epsilon_N ]
\le Pr[ \exists g \in co_N(H) : E_P[\Psi_N(yg(x))] \ge E_S[\Psi_N(yg(x))] + \epsilon_N ]
\le |H|^N \exp(-2m \epsilon_N^2 / B^2),

where the last inequality follows from the union bound and Hoeffding's inequality. Setting this probability to \delta_N = \delta/(N(N+1)), solving for \epsilon_N, and summing over values of N completes the proof, since \sum_{N \in \mathbb{N}} \delta_N = \delta. \square

For the best bounds, we want \Psi_N to satisfy E_{Z \sim Q_{N,\alpha}}[\Psi_N(Z)] \ge sgn(-\alpha), but with the difference E_{Z \sim Q_{N,\alpha}}[\Psi_N(Z) - sgn(-\alpha)] as small as possible for \alpha \in [-1, 1].
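The "setting this probability to \delta_N ... solving for \epsilon_N" step at the end of the proof works out as follows (my expansion, consistent with the bound in the theorem statement):

```latex
% Set |H|^N \exp(-2m\epsilon_N^2/B^2) = \delta_N = \delta/(N(N+1)):
-\frac{2m\epsilon_N^2}{B^2} = \ln\delta_N - N\ln|H|
\quad\Longrightarrow\quad
\epsilon_N = \sqrt{\frac{B^2}{2m}\left(N\ln|H| + \ln\frac{N(N+1)}{\delta}\right)}.
% The failure probabilities then telescope to \delta:
\sum_{N=1}^{\infty} \frac{\delta}{N(N+1)}
  = \delta\sum_{N=1}^{\infty}\left(\frac{1}{N}-\frac{1}{N+1}\right) = \delta.
```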
One approach would be to minimize the expectation of this difference, for \alpha chosen uniformly in [-1, 1]. However, this yields a non-monotone solution for C_N(\alpha). Figure 1a illustrates an example of a monotone B-admissible family; it shows the cost functions C_N(\alpha) = E_{Z \sim Q_{N,\alpha}} \Psi_N(Z), for N = 20, 50 and 200, where \Psi_N(\alpha) = sgn(\sqrt{2 \log N / N} - \alpha) + 1/N.

3 Algorithm

We now consider how to select convex coefficients w_1, ..., w_T for a sequence of {-1, 1} classifiers h_1, ..., h_T so that the combined classifier f(x) = \sum_{t=1}^{T} w_t h_t(x) has small error. In the experiments we used the hypotheses provided by AdaBoost. (The aim was to investigate how useful are the error estimates provided by the cost functions of the previous section.) If we take Theorem 2 at face value and ignore log terms, the best error bound is obtained if the weights w_1, ..., w_T and the complexity N are chosen to minimize

(1/m) \sum_{i=1}^{m} C_N(y_i f(x_i)) + K \sqrt{N/m},

where K is a constant and {C_N} is a family of B-admissible cost functions.

Figure 1: (a) The margin cost functions C_N(\alpha), for N = 20, 50 and 200, compared to the function sgn(-\alpha). Larger values of N correspond to closer approximations to sgn(-\alpha). (b) Piecewise linear upper bounds on the functions C_N(\alpha), and the function sgn(-\alpha).

Although Theorem 2 provides an expression for the constant K, in practical problems this will almost certainly be an overestimate and so our penalty for even moderately complex models will be too great. To solve this problem, instead of optimizing the average cost of the margins plus a penalty term over all values of the parameter \theta, we estimated the optimal value of \theta using a cross-validation set. That is, for fixed values of \theta in a discrete but fairly dense set we selected weights optimizing the average cost
(1/m) \sum_{i=1}^{m} C_\theta(y_i f(x_i)) and then chose the solution with smallest error on an independent cross-validation set. We considered the use of the cost functions plotted in Figure 1a, but the existence of flat regions caused difficulties for gradient descent approaches. Instead we adopted a piecewise linear family of cost functions C_\theta that are linear in the intervals [-1, 0], [0, \theta], and [\theta, 1], and pass through the points (-1, 1.2), (0, 0.1), (\theta, 0.1), and (1, 0), for \theta \in (0, 1). The numbers were chosen to ensure the C_\theta are upper bounds on the cost functions of Figure 1a (see Figure 1b). Note that \theta plays the role of a complexity parameter, except that in this case smaller values of \theta correspond to higher complexity classes. Even with the restriction to piecewise linear cost functions, the problem of optimizing (1/m) \sum_{i=1}^{m} C_\theta(y_i f(x_i)) is still hard. Fortunately, the nature of this cost function makes it possible to find successful heuristics (which is why we chose it). The algorithm we have devised to optimize the C_\theta family of cost functions is called Direct Optimization Of Margins (DOOM). (The pseudo-code of the algorithm is given in the full version [MBB98].) DOOM is basically a form of gradient descent, with two complications: it takes account of the fact that the cost function is not differentiable at 0 and \theta, and it ensures that the weight vector lies on the unit ball in \ell_1. In order to avoid problems with local minima we actually allow the weight vector to lie within the \ell_1-ball throughout optimization rather than on the \ell_1-ball. If the weight vector reaches the surface of the \ell_1-ball and the update direction points out of the \ell_1-ball, it is projected back to the surface of the \ell_1-ball. Observe that the gradient of (1/m) \sum_{i=1}^{m} C_\theta(y_i f(x_i)) is a constant function of the weights w = (w_1, ..., w_T) provided no example (x_i, y_i) "crosses" one of the discontinuities at 0 or \theta (i.e. provided the margin y_i f(x_i) does not cross 0 or \theta).
Hence, the central operation of DOOM is to step in the negative gradient direction until an example's margin hits one of the discontinuities (projecting where necessary to ensure the weight vector lies within the \ell_1-ball). At this point the gradient vector becomes multi-valued (generally two-valued but it can be more). Each of the possible gradient directions is then tested by taking a small step in that direction (a random subset of the gradient directions is chosen if there are too many of them). If none of the directions lead to a decrease in the cost, the examples whose margins lie on discontinuities of the cost function are added to a constraint set E. In subsequent iterations the same stepping procedure above is followed except that the step direction is modified to ensure that the examples in E do not move (i.e. they remain on the discontinuity points of C_\theta). That is, the weight vector moves within the subspace defined by the examples in E. If no progress is made in any iteration, the constraint set E is reset to empty. If still no progress is made the procedure terminates.

4 Experiments

We used the following two-class problems from the UC Irvine database [CBM98]: Cleveland Heart Disease, Credit Application, German, Glass, Ionosphere, King Rook vs King Pawn, Pima Indians Diabetes, Sonar, Tic-Tac-Toe, and Wisconsin Breast Cancer. For the sake of simplicity we did not consider multi-class problems. Each data set was randomly separated into train, test and validation sets, with the test and validation sets being equal in size. This was repeated 10 times independently and the results were averaged.

Each experiment consisted of the following steps. First, AdaBoost was run on the training data to produce a sequence of base classifiers and their corresponding weights.
In all of the experiments the base classifiers were axis-orthogonal hyperplanes (also known as decision stumps); this choice ensured that the complexity of the class of base classifiers was constant. Boosting was halted when adding a new classifier failed to decrease the error on the validation set. DOOM was then run on the classifiers produced by AdaBoost for a large range of \theta values and 1000 random initial weight vectors for each value of \theta. The weight vector (and \theta value) with minimum misclassification on the validation set was chosen as the final solution.

Figure 2: Relative improvement of DOOM over AdaBoost for all examined datasets.

In some cases the training sets were reduced in size to make overfitting more likely, so that complexity regularization with DOOM could have an effect. (The details are given in the full version [MBB98].) In three of the datasets (Credit Application, Wisconsin Breast Cancer and Pima Indians Diabetes), AdaBoost gained no advantage from using more than a single classifier. In these datasets, the number of classifiers was chosen so that the validation error was reasonably stable. A comparison between the test errors generated by AdaBoost and DOOM is shown in Figure 2. In only one data set did DOOM produce a classifier which performed worse than AdaBoost in terms of test error; for most data sets DOOM's test error was a significant improvement over AdaBoost's. Figure 3 shows cumulative training margin distribution graphs for four of the datasets for both AdaBoost and DOOM (with optimal \theta chosen by cross-validation). For a given margin the value on the curve corresponds to the proportion of training examples with margin no more than this value. The test errors for both algorithms are also shown for comparison, as short horizontal lines on the vertical axis. The margin distributions show that the value of the minimum training margin has no real impact on generalization performance. (See also [Bre97] and [GS98].)
[Figure 3 panels: Wisconsin Breast Cancer, Credit Application, Ionosphere, Sonar; x-axis: Margin.]

Figure 3: Cumulative training margin distributions for four datasets. The dark curve is AdaBoost, the light curve is DOOM with \theta selected by cross-validation. The test errors for both algorithms are marked on the vertical axis at margin 0.

As can be seen in Figure 3 (Credit Application and Sonar data sets), the generalization performance of the combined classifier produced by DOOM can be as good as or better than that of the classifier produced by AdaBoost, despite having a dramatically worse minimum training margin. Conversely, Figure 3 (Ionosphere data set) shows that improved generalization performance can be associated with an improved minimum margin. The margin distributions also show that there is a balance to be found between training error and complexity (as measured by \theta). DOOM is willing to sacrifice training error in order to reduce complexity and thereby obtain a better margin distribution. For instance, in Figure 3 (Sonar data set), DOOM's training error is over 20% while AdaBoost's is 0%, but DOOM's test error is 5% less than that of AdaBoost's. The reason for this success can be seen in Figure 4, which illustrates the changes in the cost function, training error, and test error as a function of \theta. The optimal complexity for this data set is low (corresponding to a large optimal \theta).
In this case, a reduction in complexity is more important to generalization error than a reduction in training error.

5 Conclusion

In this paper we have addressed the question: what are suitable cost functions for convex combinations of base hypotheses? For general families of cost functions that are functions of the margin of a sample, we proved (Theorem 2) that the error of a convex combination is no more than the sample average of the cost function plus a regularization term involving the complexity of the cost function and the size of the base hypothesis class. We constructed a piecewise linear family of cost functions satisfying the conditions of Theorem 2 and presented a heuristic algorithm (DOOM) for optimizing the sample average of the cost.

Figure 4: Sonar data set. Left: Plot of cost ((1/m) \sum_{i=1}^{m} C_\theta(y_i f(x_i))) against \theta for AdaBoost and DOOM. Right: Plot of training and test error against \theta.

We ran experiments on several of the datasets in the UC Irvine database, in which AdaBoost was used to generate a set of base classifiers and then DOOM was used to find the optimal convex combination of those classifiers. In all but one case the convex combination generated by DOOM had lower test error than AdaBoost's combination. Margin distribution plots show that in many cases DOOM achieves these lower test errors by sacrificing training error, in the interests of reducing the new cost function. The margin plots also show that the size of the minimum margin is not relevant to generalization performance.
Acknowledgments
Thanks to Yoav Freund, Wee Sun Lee and Rob Schapire for helpful comments and suggestions. This research was supported in part by a grant from the Australian Research Council. Jonathan Baxter was supported by an Australian Research Council Fellowship and Llew Mason was supported by an Australian Postgraduate Award.
|
1998
|
17
|
1,512
|
A Neuromorphic Monaural Sound Localizer
John G. Harris, Chiang-Jung Pu, and Jose C. Principe
Department of Electrical & Computer Engineering, University of Florida, Gainesville, FL 32611
Abstract
We describe the first single microphone sound localization system and its inspiration from theories of human monaural sound localization. Reflections and diffractions caused by the external ear (pinna) allow humans to estimate sound source elevations using only one ear. Our single microphone localization model relies on a specially shaped reflecting structure that serves the role of the pinna. Specially designed analog VLSI circuitry uses echo-time processing to localize the sound. A CMOS integrated circuit has been designed, fabricated, and successfully demonstrated on actual sounds.
1 Introduction
The principal cues for human sound localization arise from time and intensity differences between the signals received at the two ears. For low-frequency components of sounds (below 1500 Hz for humans), the phase-derived interaural time difference (ITD) can be used to localize the sound source. For these frequencies, the sound wavelength is at least several times larger than the head, and the amount of shadowing (which depends on the wavelength of the sound compared with the dimensions of the head) is negligible. ITD localization is a well-studied system in biology (see e.g., [5]) and has even been mapped to neuromorphic analog VLSI circuits with limited success on actual sound signals [6] [2]. Above 3000 Hz, interaural phase differences become ambiguous by multiples of 360° and are no longer viable localization cues. For these high frequencies, the wavelength of the sound is small enough that the sound amplitude is attenuated by the head. The intensity difference of the log magnitudes at the ears provides a unique interaural intensity difference (IID) that can be used to localize.
Many studies have shown that when one ear is completely blocked, humans can still localize sounds in space, albeit at a worse resolution in the horizontal direction.
Figure 1: (a) The proposed localization model, inspired by the biological model. (b) The special reflection surface (reflectors S1 and S2 around the microphone) that serves the role of the pinna.
Monaural localization requires that information is somehow extracted from the direction-dependent effects of the reflections and diffractions of sound off of the external ear (pinna), head, shoulder, and torso. The so-called "Head Related Transfer Function" (HRTF) is the effective direction-dependent transfer function that is applied to the incoming sound to produce the sound in the middle ear. Section 2 of this paper introduces our monaural sound localization model and Section 3 discusses the simulation and measurement results.
2 Monaural Sound Localization Model
Batteau [1] was one of the first to emphasize that the external ear, specifically the pinna, could be a source of spatial cues that account for vertical localization. He concluded that the physical structure of the external ear introduced two significant echoes in addition to the original sound. One echo varies with the azimuthal position of the sound source, having a latency in the 0 to 80 µs range, while the other varies with elevation in the 100 µs to 300 µs range. The output y(t) at the inner ear is related to the original sound source x(t) as

y(t) = x(t) + a1 x(t − τa) + a2 x(t − τv)   (1)

where τa and τv refer to the azimuth and elevation echoes respectively; a1 and a2 are two reflection constants. Other researchers subsequently verified these results [11] [4].
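Equation 1 is straightforward to simulate in discrete time. The sketch below (not from the paper; the delay and amplitude values are illustrative) adds two scaled, delayed copies of a signal. At a 44.1 kHz sampling rate, a 200 µs elevation echo corresponds to roughly 9 samples.

```python
def pinna_echo(x, a1, d1, a2, d2):
    """Discrete-time version of Batteau's two-echo model (Equation 1):
    y[n] = x[n] + a1*x[n - d1] + a2*x[n - d2], delays d1, d2 in samples.
    The input x is treated as zero outside its support."""
    y = []
    for n in range(len(x) + max(d1, d2)):
        v = x[n] if n < len(x) else 0.0
        if 0 <= n - d1 < len(x):
            v += a1 * x[n - d1]
        if 0 <= n - d2 < len(x):
            v += a2 * x[n - d2]
        y.append(v)
    return y
```

Feeding in a unit impulse makes the structure visible: the output is the direct click followed by two attenuated echoes at the two delays.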
Our localizer system (shown in Figure 1(a)) is composed of a special reflection surface that encodes the sound source's direction, a silicon cochlea that functions as a band-pass filter bank, onset detecting circuitry that detects and amplifies the energy change at each frequency tap, pulse generating circuitry that transforms analog sound signals into pulse signals based on adaptively thresholding the onset signal, and delay time computation circuitry that computes the echo's time delay and then decodes the sound source's direction. Since our recorded signal is composed of a direct sound and an echo, the sound is a simplified version of actual HRTF recordings, which are composed of the direct sound and its reflections from the external ear, head, shoulder, and torso.
Figure 2: (a) A sound signal's onset is detected by taking the difference of two low-pass filters with different time constants. (b) Pulse generating circuit.
To achieve localization in a 1-D plane, we may use any shape of reflection surface as long as the reflection echo caused by the surface provides a one-to-one mapping between the echo's delay time and the source's direction. Thus, we propose two flat surfaces to compose the reflection structure in our proposed model, depicted in Figure 1(b). A microphone is placed at distances a1 and a2 from two flat surfaces (S1 and S2); d is the distance between the microphone and the sound source moving line (the dotted line in Figure 1(b)). As shown in Figure 1(b), a sound source sits at a given position on this line. If the source is far enough from the reflection surface, a ray diagram is valid to analyze the sound's behavior. We skip the complete derivation, but the echo's delay time can be expressed as

τ = (r1 + r2 − d1) / c   (2)

where d1 is the length of the direct path, r1 + r2 is the reflected path length, and c is the speed of sound.
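Equation 2 can be evaluated with the image-source trick: the reflected path length r1 + r2 equals the straight-line distance from the source to the mirror image of the microphone across the reflecting surface. The sketch below assumes a simplified geometry (reflector in a plane, source moving on a parallel line) standing in for Figure 1(b); the coordinate choices are assumptions, not taken from the paper.

```python
import math

def echo_delay(a1, d, s, c=343.0):
    """Echo delay tau = (r1 + r2 - d1) / c  (Equation 2).
    Assumed geometry: reflector in the plane x = 0, microphone at
    (a1, 0), source at (a1 + d, s) on a line parallel to the reflector.
    The reflected path equals the distance from the source to the
    mirror image of the microphone at (-a1, 0).  Units: metres, m/s."""
    direct = math.hypot(d, s)              # d1
    reflected = math.hypot(d + 2 * a1, s)  # r1 + r2, via image source
    return (reflected - direct) / c
```

With a1 = 0.33 m and d = 0.24 m (the paper's 33 cm and 24 cm), the delay decreases monotonically as the source moves along the line, which is what gives the one-to-one mapping between delay and direction.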
The path distances are easily solved in terms of the source direction and the geometry of the setup (see [9] for complete details). The echo's delay time τ decreases as the source position moves from 0 to 90 degrees. A similar analysis can be made if the source moves in the opposite direction and the reflection is caused by the other reflection surface S2. Since the reflection path is longer for reflection surface S2 than for reflection surface S1, the echo's delay time can be segmented into two ranges. Therefore, the echo's delay time encodes the source's directions in a one-to-one mapping relation. In the setup, an Earthworks M30 microphone and Lab1 amplifier were used to record and amplify the sound signals [3]. For this preliminary study of monaural localization, we have chosen to localize simple impulse sounds generated through speakers and therefore can drop the silicon cochlea from our model. In the future, more complicated signals, such as speech, will require a silicon cochlea implementation. Inspired by ideas from visual processing, onset detection is used to segment sounds [10]. The detection of an onset is produced by first taking the difference of two first-order low-pass filters, given by [10]

O(t, k, r) = ∫₀ᵗ h(t − x, k) s(x) dx − ∫₀ᵗ h(t − x, k/r) s(x) dx   (3)

where r > 1, k is a time constant, s(x) is the input sound signal, and h(x, k) = k exp(−kx). A hardware implementation of the above equation is depicted in Figure 2a. In our model, sound signals from the special reflection surface microphone are fed into two low-pass filters which have different time constants determined by two bias voltages V_onb1 and V_onb2. The bias voltage V_onb3 determines the amplification of the difference.
Figure 3: Adaptive threshold circuit used to remove unwanted reflections.
Figure 4: Neural signal processing model: the input A(t) feeds a delay chain A(t − τ), A(t − 2τ), ..., A(t − mτ), driving coincidence units C and accumulators D1 ... Dm.
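The onset detector of Equation 3 amounts to subtracting the outputs of two first-order low-pass filters with time constants k and k/r. A software sketch using one-pole IIR approximations of those filters (parameter values are illustrative, not taken from the circuit):

```python
import math

def onset(signal, k, r, dt=1.0):
    """Onset detector of Equation 3: the difference of two first-order
    low-pass filters with kernels h(x, k) = k*exp(-k*x) at time
    constants k and k/r (r > 1), applied to the input signal.
    Each filter is a unit-DC-gain one-pole IIR; dt is the sample period."""
    a_fast = math.exp(-k * dt)
    a_slow = math.exp(-(k / r) * dt)
    y_fast = y_slow = 0.0
    out = []
    for s in signal:
        y_fast = a_fast * y_fast + (1 - a_fast) * s
        y_slow = a_slow * y_slow + (1 - a_slow) * s
        out.append(y_fast - y_slow)  # large when energy rises quickly
    return out
```

On a step input the fast filter responds before the slow one, so the difference spikes at the onset and then decays toward zero, segmenting the sound event; the adaptive-threshold and pulse-generation stages are omitted here.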
The output of the onset detecting circuit is V_onout. The onset detection circuit determines significant increases in the signal energy and therefore segments sound events. By computing the delay time between two sound events (the direct sound and its echo caused by the reflection surface), the system is able to decode the source's direction. Each sound event is then transformed into a fixed-width pulse so that the delay time can be computed with binary autocorrelators. The fixed-width pulse generating circuit is depicted in Figure 2b. The pulse generating circuit includes a self-resetting neuron circuit [8] that controls the pulse duration based on the bias voltage V_neubs. As discussed above, an appropriate threshold is required to discriminate sound events from noise. One input of the pulse generating circuit is the output of the onset detecting signal, V_onout. V_thresh is set properly in the pulse generating circuit in order to generate a fixed-width pulse when V_onout exceeds V_thresh. Unfortunately, the system may be confused by unwanted sound events due to extraneous reflections from desks and walls. However, since we know the expected range of echo delays, we can inhibit many of the environmental echoes that fall outside this range using an adaptive threshold circuit. In order to cancel unwanted signals, we need to design an inhibition mechanism which suppresses signals arriving at our system outside of the expected time range. This inhibition is implemented in Figure 3. As the pulse generating circuit detects the first sound event (which is the direct sound signal), the threshold becomes high for a certain period of time to suppress the detection of the unwanted reflections (not from our reflection surfaces). The input of the adaptive threshold circuit is V_neuout, which is the output of the pulse generating circuit. The output of the threshold circuit is V_thresh, which is the input of the pulse generating circuit.
When the pulse generating circuit detects a sound event, V_neuout becomes high, which increases V_thresh from V_ref2 to V_ref1 as shown in Figure 3. The higher V_thresh suppresses the detection. The suppression time is determined by the other self-resetting neuron circuit.
Figure 5: (a) The input sound signal: an impulse signal recorded in a typical office environment. (b) HSPICE simulation of the output of the onset detecting circuit (label 61), the output of the pulse generating circuit (label 12), and the adaptive threshold circuit response (label 11).
The nervous system likely uses a running autocorrelation analysis to measure the time delay between signals. The basic neural connections are shown in Figure 4 [7]. A(t) is the input neuron; A(t − τ), A(t − 2τ), ..., A(t − mτ) is a delay chain. The original signal and the delayed signal are multiplied when A(t) and A(t − kτ) feed C_k. Assume the state of neuron A is N_A(t). If each synaptic delay in the chain is τ, the chain gives us N_A(t) under various delays. C_k fires when both A(t) and A(t − kτ) fire simultaneously. Neuron C_k connects to neuron D_k. Excitation is built up at D_k by the charge and discharge of C_k. The excitation at D_k is therefore

E_k(t) = ∫₀ᵗ N_A(x) N_A(x − kτ) dx   (4)

Viewing the arrangement of Figure 4 as a neuron autocorrelator, the time-varying excitation at D_1, D_2, ..., D_k provides a spatial representation of the autocorrelation function. The localization resolution of this system depends on the delay time τ and the number of correlators. As τ decreases, the localization resolution is improved, provided there are enough correlators.
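The autocorrelator of Figure 4 can be sketched in software: for each tap k, a correlator counts coincidences between the pulse train and a copy delayed by kτ, and the winning tap gives the echo delay. This digital sketch is an assumption about the behavior the chip implements, not its circuit-level realization.

```python
def estimate_delay(pulses, num_taps, tau=1):
    """Running-coincidence estimate of the echo delay (Figure 4 model):
    correlator C_k accumulates coincidences between A(t) and A(t - k*tau);
    the tap with the most coincidences decodes the delay.
    pulses is a 0/1 sequence; returns the winning k in 1..num_taps."""
    scores = []
    for k in range(1, num_taps + 1):
        d = k * tau
        scores.append(sum(pulses[n] * pulses[n - d]
                          for n in range(d, len(pulses))))
    return 1 + max(range(len(scores)), key=scores.__getitem__)
```

A direct pulse followed by an echo five samples later makes tap k = 5 the unique winner; finer τ and more correlators improve the resolution, as the text notes.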
In this paper, 30 unit delay taps and 10 correlators have been implemented on chip. The outputs of the 10 correlators display the time difference between two sound events. The delay time decodes the source's direction. Therefore, the 10 correlators provide a unit encoding of the source location in the 1-D plane.
3 Simulation and Measurement Results
The complete system has been successfully simulated in HSPICE using a database we have recorded. Figure 5(a) shows the input sound signal, which is an impulse signal recorded in our lab (a typical student office environment). Figure 5(b) shows the output of the onset detector (labeled 61), the pulse generating output (labeled 12), and the adaptive threshold (labeled 11). When the onset output exceeds the threshold, the output of the pulse generating circuit becomes high. Simultaneously, the high value of the generated pulse turns on the adaptive threshold circuit to increase the threshold voltage. The adaptive threshold voltage suppresses the unwanted reflection, which can be seen right after the direct signal (we believe the unwanted reflection is caused by the table). Further simulation results are discussed in [9].
Figure 6: Block diagram of the test setup.
The single microphone sound localizer circuit has been fabricated through the MOSIS 2 µm N-well CMOS process. Impulse signals are played through speakers to test the fabricated localizer chip. Figure 6 depicts the block diagram of the test setup. The M30 microphone picks up the direct impulse signal and echoes from the reflection surface. Since the reflection surface in our test is just a single flat surface, localization is only tested in one half of the 1-D plane. The composite signals are fed into the input of the sound localizer after amplification.
Our sound localizer chip receives the composite signal, computes the echo time delay, and sends out the localization result to a display circuit. The display circuit is composed of 4 LEDs, with each LED representing a specific sound source location. The sound localizer sends the computational result to turn on a specific LED signifying the echo time delay. In the test, the M30 microphone and the reflection surface are placed at fixed locations. The speaker is moved along the dotted line shown in Figure 6. The M30 microphone is d1 (33 cm) from the reflection surface and a1 (24 cm) from the speaker moving line. The speaker's location is defined as d2, as depicted in Figure 6. Figure 7(a) shows the theoretical echo delay at various speaker locations. Figure 7(b) is the measurement of the setup depicted in Figure 6. The y-axis indicates LED 1 through LED 4. The x-axis represents the distance of the speaker's location (d2 in Figure 6) from the localizer chip. The solid horizontal line in Figure 7(b) represents the theoretical results for which LED should respond at each displacement. The results show that localization is accurate within each region, with possibilities of two LEDs responding in the overlap regions.
4 Conclusion
We have developed the first monaural sound localization system. This system provides a real-time model for human sound localization and has potential use in such applications as low-cost teleconferencing. More work is needed to further develop the system. We need to characterize the accuracy of our system and to test more interesting sound signals, such as speech. Our flat reflection surface is straightforward and simple, but it lacks sufficient flexibility to encode the source's direction in more than a 1-D plane. We plan to replace the flat surfaces with a more complicated surface to provide more reflections to encode a richer set of source directions.
Figure 7: Sound localizer chip test result: theoretical echo delay (top) and responding LED (bottom) versus the sound source's distance from the localizer chip (cm).
Acknowledgments
This work was supported by an ONR contract #N00014-94-1-0858 and an NSF CAREER award #MIP-9502307. We gratefully acknowledge MOSIS chip fabrication and Earthworks Inc. for loaning the M30 microphone and amplifier.
References
[1] D. W. Batteau. The role of the pinna in human localization. Proc. R. Soc. London, Ser. B, 168:158-180, 1967.
[2] Neal A. Bhadkamkar. Binaural source localizer chip using subthreshold analog CMOS. In Proceedings of ICNN, pages 1866-1870, 1994.
[3] Earthworks, Inc., P.O. Box 517, Wilton, NH 03086. M30 Microphone.
[4] Y. Hiranaka and H. Yamasaki. Envelope representations of pinna impulse responses relating to three-dimensional localization of sound sources. J. Acoust. Soc. Am., 73:29, 1983.
[5] E. Knudsen, G. Blasdel, and M. Konishi. Mechanisms of sound localization in the barn owl (Tyto alba). J. Comp. Physiol., 133:13-21, 1979.
[6] J. Lazzaro and C. A. Mead. A silicon model of auditory localization. Neural Computation, 1:47-57, 1989.
[7] J. C. Licklider. A duplex theory of pitch perception. Experientia, 7:128-133, 1951.
[8] C. Mead. Analog VLSI and Neural Systems. Addison-Wesley, 1989.
[9] Chiang-Jung Pu. A neuromorphic microphone for sound localization. PhD thesis, University of Florida, Gainesville, FL, May 1998.
[10] L. S. Smith. Sound segmentation using onsets and offsets. J. of New Music Research, 23, 1994.
[11] A. J. Watkins. Psychoacoustical aspects of synthesized vertical locale cues. J. Acoust. Soc. Am., 63:1152-1165, 1978.
|
1998
|
18
|
1,513
|
Boxlets: a Fast Convolution Algorithm for Signal Processing and Neural Networks
Patrice Y. Simard*, Léon Bottou, Patrick Haffner and Yann LeCun
AT&T Labs-Research, 100 Schultz Drive, Red Bank, NJ 07701-7033
patrice@microsoft.com, {leonb,haffner,yann}@research.att.com
Abstract
Signal processing and pattern recognition algorithms make extensive use of convolution. In many cases, computational accuracy is not as important as computational speed. In feature extraction, for instance, the features of interest in a signal are usually quite distorted. This form of noise justifies some level of quantization in order to achieve faster feature extraction. Our approach consists of approximating regions of the signal with low-degree polynomials, and then differentiating the resulting signals in order to obtain impulse functions (or derivatives of impulse functions). With this representation, convolution becomes extremely simple and can be implemented quite effectively. The true convolution can be recovered by integrating the result of the convolution. This method yields substantial speed-up in feature extraction and is applicable to convolutional neural networks.
1 Introduction
In pattern recognition, convolution is an important tool because of its translation invariance properties. Feature extraction is a typical example: the distance between a small pattern (i.e. a feature) and a larger one is computed at all positions (i.e. translations) inside the larger one. The resulting "distance image" is typically obtained by convolving the feature template with the larger pattern. In the remainder of this paper we will use the terms image and pattern interchangeably (because of the topology implied by translation invariance). There are many ways to convolve images efficiently. For instance, a multiplication of images of the same size in the Fourier domain corresponds to a convolution of the two images in the original space.
Of course this requires K N log N operations (where N is the number of pixels of the image and K is a constant) just to go in and out of the Fourier domain. These methods are usually not appropriate for feature extraction because the feature to be extracted is small with respect to the image. For instance, if the image and the feature have respectively 32 × 32 and 5 × 5 pixels, the full convolution can be done in 25 × 1024 multiply-adds. In contrast, it would require 2 × K × 1024 × 10 operations to go in and out of the Fourier domain. Fortunately, in most pattern recognition applications, the interesting features are already quite distorted when they appear in real images. Because of this inherent noise, the feature extraction process can usually be approximated (to a certain degree) without affecting the performance. For example, the result of the convolution is often quantized or thresholded to yield the presence and location of distinctive features [1]. Because precision is typically not critical at this stage (features are rarely optimal, thresholding is a crude operation), it is often possible to quantize the signals before the convolution with negligible degradation of performance. The subtlety lies in choosing a quantization scheme which can speed up the convolution while maintaining the same level of performance. We now introduce the convolution algorithm, from which we will deduce the constraints it imposes on quantization. The main algorithm introduced in this paper is based on a fundamental property of convolutions. Assuming that f and g have finite support and that fⁿ denotes the n-th integral of f (or the n-th derivative if n is negative), we can write the following convolution identity:

(f ∗ g)ⁿ = fⁿ ∗ g = f ∗ gⁿ   (1)

where ∗ denotes the convolution operator. Note that f or g are not necessarily differentiable.
(* Now with Microsoft, One Microsoft Way, Redmond, WA 98052.)
For instance, the impulse function (also called the Dirac delta function), denoted δ, verifies the identity:

δ_aⁿ ∗ δ_bᵐ = δ_{a+b}ⁿ⁺ᵐ   (2)

where δ_aⁿ denotes the n-th integral of the delta function translated by a (δ_a(x) = δ(x − a)). Equations 1 and 2 are not new to signal processing. Heckbert has developed an effective filtering algorithm [2] where the filter g is a simple combination of polynomials of degree n − 1. Convolution between a signal f and the filter g can be written as

f ∗ g = fⁿ ∗ g⁻ⁿ   (3)

where fⁿ is the n-th integral of the signal, and the n-th derivative of the filter g can be written exclusively with delta functions (resulting from differentiating degree-(n − 1) polynomials n times). Since convolving with an impulse function is a trivial operation, the computation of Equation 3 can be carried out effectively. Unfortunately, Heckbert's algorithm is limited to simple polynomial filters and is only interesting when the filter is wide and when the Fourier transform is unavailable (such as with variable-length filters). In contrast, in feature extraction, we are interested in small and arbitrary filters (the features). Under these conditions, the key to fast convolution is to quantize the images to combinations of low-degree polynomials, which are differentiated, convolved and then integrated. The algorithm is summarized by the equation:

f ∗ g ≈ F ∗ G = (F⁻ⁿ ∗ G⁻ᵐ)ᵐ⁺ⁿ   (4)

where F and G are polynomial approximations of f and g, such that F⁻ⁿ and G⁻ᵐ can be written as sums of impulse functions and their derivatives. Since the convolution F⁻ⁿ ∗ G⁻ᵐ only involves applying Equation 2, it can be computed quite effectively. The computation of the convolution is illustrated in Figure 1. Let f and g be two arbitrary 1-dimensional signals (top of the figure). Let's assume that f and g can both be approximated by partitions of polynomials, F and G. On the figure, the polynomials are of degree 0 (they are constant), and are depicted in the second line.
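In discrete time, Equation 4 has a direct analogue: differencing a finitely supported sequence and cumulative summing are inverse operations, so f ∗ g can be computed by differencing both signals, convolving the difference sequences (which are sparse for piecewise-constant signals), and integrating twice. A minimal Python sketch (not the paper's code; the middle step uses a dense convolution only to keep it short):

```python
def direct_conv(f, g):
    """Ordinary full convolution, for reference."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def diff(f):
    """First difference of a finitely supported signal, including the
    boundary terms so that cumulative summing exactly inverts it."""
    return [f[0]] + [f[i] - f[i - 1] for i in range(1, len(f))] + [-f[-1]]

def cumsum(f):
    out, s = [], 0.0
    for v in f:
        s += v
        out.append(s)
    return out

def boxlet_conv(f, g):
    """f * g = (f' * g')^2 in discrete form: difference both signals,
    convolve the sparse differences, then integrate twice."""
    h = cumsum(cumsum(direct_conv(diff(f), diff(g))))
    return h[: len(f) + len(g) - 1]
```

For piecewise-constant f and g the differenced sequences have one nonzero entry per breakpoint, so the middle convolution costs a handful of multiply-adds, exactly the saving Figure 1 depicts.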
The details on how to compute F and G will be explained in the next section. In the next step, F and G are differentiated once, yielding successions of impulse functions (third line in the figure). The impulse representation has the advantage of having a finite support and of being easy to convolve. Indeed, two impulse functions can be convolved using Equation 2 (4 × 3 = 12 multiply-adds on the figure). Finally, the result of the convolution must be integrated twice to yield

F ∗ G = (F⁻¹ ∗ G⁻¹)²   (5)

Figure 1: Example of convolution between 1-dimensional functions f and g, where the approximations of f and g are piecewise constant.
2 Quantization: from Images to Boxlets
The goal of this section is to suggest efficient ways to approximate an image f by a cover of polynomials of degree d suited for convolution. Let S be the space on which f is defined, and let C = {c_i} be a partition of S (c_i ∩ c_j = ∅ for i ≠ j, and ∪_i c_i = S). For each c_i, let p_i be a polynomial of degree d which minimizes the equation:

e_i = Σ_{(x,y) ∈ c_i} (f(x, y) − p_i(x, y))²   (6)

The uniqueness of p_i is guaranteed if c_i is convex. The problem is to find a cover C which minimizes both the number of c_i and Σ_i e_i. Many different compromises are possible, but since the computational cost of the convolution is proportional to the number of regions, it seemed reasonable to choose the largest regions with a maximum error bounded by a threshold K. Since each region will be differentiated and integrated along the directions of the axes, the boundaries of the c_i's are restricted to be parallel to the axes, hence the appellation boxlet. There are still many ways to compute valid partitions of boxlets and polynomials. We have investigated two very different approaches which both yield a polynomial cover of the image in reasonable time. The first algorithm is greedy.
It uses a procedure which, starting from a top left corner, finds the biggest boxlet c_i which satisfies e_i < K without overlapping another boxlet. The algorithm starts with the top left corner of the image, and keeps a list of all possible starting points (uncovered top left corners) sorted by X and Y positions. When the list is exhausted, the algorithm terminates. Surprisingly, this algorithm can run in O(d(N + P log N)), where N is the number of pixels, P is the number of boxlets and d is the order of the polynomials p_i. Another much simpler algorithm consists of recursively splitting boxlets, starting from a boxlet which encompasses the whole image, until e_i < K for all the leaves of the tree. This algorithm runs in O(dN), is much easier to implement, and is faster (better time constant). Furthermore, even though the first algorithm yields a polynomial coverage with fewer boxlets, the second algorithm yields fewer impulse functions after differentiation because more impulse functions can be combined (see next section). Both algorithms rely on the fact that Equation 6 can be computed in constant time. This computation requires the following quantities

Σ f(x, y), Σ f(x, y)²  (degree 0),  Σ f(x, y)x, Σ f(x, y)y, Σ f(x, y)xy, ...  (degree 1)   (7)

to be pre-computed over the whole image, for the greedy algorithm, or over recursively embedded regions, for the recursive algorithm. In the case of the recursive algorithm these quantities are computed bottom-up and very efficiently. To prevent the sums from becoming too large, a limit can be imposed on the maximum size of c_i.
Figure 2: Effects of boxletization: original (top left), greedy (bottom left) with a threshold of 10,000, and recursive (top and bottom right) with a threshold of 10,000.
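The recursive algorithm with degree-0 polynomials can be sketched with summed-area tables: precomputing the Equation 7 sums makes the region sums of f and f² O(1) per query, so the best-constant fitting error e_i = Σf² − (Σf)²/n is cheap to test at every split. This quadtree-style sketch is an illustration of the idea under those assumptions, not the authors' implementation.

```python
def integral_image(img):
    """Summed-area tables of f and f^2 (Equation 7, degree-0 case),
    with a one-pixel zero border so any box sum is 4 lookups."""
    h, w = len(img), len(img[0])
    S = [[0.0] * (w + 1) for _ in range(h + 1)]
    S2 = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            v = img[y][x]
            S[y + 1][x + 1] = v + S[y][x + 1] + S[y + 1][x] - S[y][x]
            S2[y + 1][x + 1] = v * v + S2[y][x + 1] + S2[y + 1][x] - S2[y][x]
    return S, S2

def boxletize(img, K):
    """Recursively split boxes until the squared error e_i of the best
    constant fit is within the threshold K; returns (box, mean) pairs."""
    S, S2 = integral_image(img)
    boxes = []
    def rect(T, y0, x0, y1, x1):
        return T[y1][x1] - T[y0][x1] - T[y1][x0] + T[y0][x0]
    def split(y0, x0, y1, x1):
        n = (y1 - y0) * (x1 - x0)
        s, s2 = rect(S, y0, x0, y1, x1), rect(S2, y0, x0, y1, x1)
        err = s2 - s * s / n            # SSE of the best constant, s/n
        if err <= K or n == 1:
            boxes.append(((y0, x0, y1, x1), s / n))
        else:
            ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
            for (a, b, c, d) in ((y0, x0, ym, xm), (y0, xm, ym, x1),
                                 (ym, x0, y1, xm), (ym, xm, y1, x1)):
                if a < c and b < d:
                    split(a, b, c, d)
    split(0, 0, len(img), len(img[0]))
    return boxes
```

Each box's sums are available in O(1), so the whole cover is produced in time linear in the number of pixels, matching the O(dN) bound for d = 0.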
The coefficients of the polynomials are quickly evaluated by solving a small linear system, using the first two sums for polynomials of degree 0 (constants), the first 5 sums for polynomials of degree 1, and so on. Figure 2 illustrates the results of the quantization algorithms. The top left corner is a fraction of the original image. The bottom left image illustrates the boxletization of the greedy algorithm, with polynomials of degree 1 and e_i ≤ 10,000 (13,000 boxlets, 62,000 impulse functions and their derivatives). The top right image illustrates the boxletization of the recursive algorithm, with polynomials of degree 0 and e_i ≤ 10,000 (47,000 boxlets, 58,000 impulse functions). The bottom right is the same as the top right without displaying the boxlet boundaries. In this case the pixel to impulse function ratio is 5.8.
3 Differentiation: from Boxlets to Impulse Functions
If p_i is a polynomial of degree d, its (d + 1)-th derivative can be written as a sum of impulse functions' derivatives, which are zero everywhere but at the corners of c_i. These impulse functions summarize the boundary conditions and completely characterize p_i. They can be represented by four (d + 1)-dimensional vectors associated with the 4 corners of c_i. Figure 3 (top) illustrates the impulse functions at the 4 corners when the polynomial is a constant (degree zero). Note that the polynomial must be differentiated d + 1 times (in this example the polynomial is a constant, so d = 0) with respect to each dimension of the input space. This is illustrated at the top of Figure 3.
Figure 3: Differentiation of a constant polynomial in 2D (top). Combining the derivatives of adjacent polynomials (bottom).
The cover C being a partition, boundary conditions between adjacent squares do simplify; that is, the same derivatives of impulse functions at the same location can be combined by adding their coefficients. It is very advantageous to do so because it reduces the computation of the convolution in the next step. This is illustrated in Figure 3 (bottom). This combining of impulse functions is one of the reasons why the recursive algorithm for the quantization is preferred to the greedy algorithm. In the recursive algorithm, the boundaries of boxlets are often aligned, so that the impulse functions of adjacent boxlets can be combined. Typically, after simplification, there are only 20% more impulse functions than there are boxlets. In contrast, the greedy algorithm generates up to 60% more impulse functions than boxlets, due to the fact that there are no alignment constraints. For the same threshold, the recursive algorithm generates 20% to 30% fewer impulse functions than the greedy algorithm. Finding which impulse functions can be combined is a difficult task because the recursive representation returned by the recursive algorithm does not provide any means for matching the bottoms of squares on one line with the tops of squares from below that line. Sorting takes O(P log P) computational steps (where P is the number of impulse functions) and is therefore too expensive. A better algorithm is to visit the recursive tree and accumulate all the top corners into sorted (horizontal) lists. A similar procedure sorts all the bottom corners (also into horizontal lists). The horizontal lists corresponding to the same vertical positions can then be merged in O(P) operations. The complete algorithm, which quantizes an image of N pixels and returns sorted lists of impulse functions, runs in O(dN) (where d is the degree of the polynomials).
4 Results
The convolution speed of the algorithm was tested with feature extraction on the image shown on the top left of Figure 2.
The image is quantized, but the feature is not. The feature is tabulated in kernels of sizes 5 x 5, 10 x 10, 15 x 15 and 20 x 20. If the kernel is decomposable, the algorithm can be modified to do two 1D convolutions instead of the present 2D convolution. The quantization of the image is done with constant polynomials, and with thresholds varying from 1,000 to 40,000. This corresponds to varying the pixel to impulse function ratio from 2.3 to 13.7. Since the feature is not quantized, these ratios correspond exactly to the ratios of numbers of multiply-adds for the standard convolution versus the boxlet convolution (excluding quantization and integration). [Table 1: Convolution speed-up factors.] [Figure 4: Run-length X convolution.] The actual speed-up factors are summarized in Table 1. The four last columns indicate the measured time ratios between the standard convolution and the boxlet convolution. For each threshold value, the top line indicates the time ratio of standard convolution versus quantization, convolution and integration time for the boxlet convolution. The bottom line does not take into account the quantization time. The feature size was varied from 5 x 5 to 20 x 20. Thus with a threshold of 10,000 and a 5 x 5 kernel, the quantization ratio is 5.8, and the speed-up factor is 2.8. The loss in image quality can be seen by comparing the top left and the bottom right images. If several features are extracted, the quantization time of the image is shared amongst the features and the speed-up factor is closer to 4.7. It should be noted that these speed-up factors depend on the quantization level, which depends on the data and affects the accuracy of the result.
The good news is that for each application the optimal threshold (the maximum level of quantization which has a negligible effect on the result) can be evaluated quickly. Once the optimal threshold has been determined, one can enjoy the speed-up factor. It is remarkable that with a quantization factor as low as 2.3, the speed-up ratio can range from 1.5 to 2.3, depending on the number of features. We believe that this method is directly applicable to forward propagation in convolutional neural nets (although no results are available at this time). The next application shows a case where quantization has no adverse effect on the accuracy of the convolution, and yet large speed-ups are obtained.

5 Binary images and run-length encoding

The quantization steps described in Sections 2 and 3 become particularly simple when the image is binary. If the threshold is set to zero, and if only the X derivative is considered, the impulse representation is equivalent to run-length encoding. Indeed, the position of each positive impulse function codes the beginning of a run, while the position of each negative impulse function codes the end of a run. The horizontal convolution can be computed efficiently using the boxlet convolution algorithm. This is illustrated in Figure 4. In (a), the distance between two binary images must be evaluated for every horizontal position (horizontal translation invariant distance). The result is obtained by convolving each horizontal line and by computing the sum of each of the convolution functions. The convolution of two runs is depicted in (b), while the summation of all the convolutions of two runs is depicted in (c). If an impulse representation is used for the runs (a first derivative), each summation of a convolution between two runs requires only 4 additions of impulse functions, as depicted in (d). The result must be integrated twice, according to Equation 5.
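A minimal 1D discrete analogue of this scheme can be written down directly (illustrative code, not the paper's implementation): differentiate both signals so that binary rows become sparse run-boundary impulses, convolve the sparse impulses, and integrate twice to recover the ordinary convolution.

```python
def diff(f):
    # discrete derivative: nonzero only at run boundaries (sparse for binary rows)
    return [f[0]] + [f[i] - f[i - 1] for i in range(1, len(f))] + [-f[-1]]

def cumsum(v):
    out, s = [], 0
    for x in v:
        s += x
        out.append(s)
    return out

def conv(f, g):
    # direct convolution; cost is dominated by the nonzero entries of f and g
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if b:
                    out[i + j] += a * b
    return out

def impulse_conv(f, g):
    # convolve the sparse derivatives, then integrate (cumulative-sum) twice
    h = cumsum(cumsum(conv(diff(f), diff(g))))
    return h[: len(f) + len(g) - 1]
```

Since diff is convolution with [1, -1] and cumsum inverts it, impulse_conv(f, g) equals conv(f, g) exactly, while the inner multiply-adds only touch run boundaries.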
The speed-up factors can be considerable depending on the width of the images (an order of magnitude if the width is 40 pixels), and there is no accuracy penalty. [Figure 5: Binary image (left) and compact impulse function encoding (right).] This speed-up also generalizes to 2-dimensional encoding of binary images. The gain comes from the frequent cancellations of impulse functions of adjacent boxlets. The number of impulse functions is proportional to the contour length of the binary shapes. In this case, the boxlet computation is mostly an efficient algorithm for 2-dimensional run-length encoding. This is illustrated in Figure 5. As with run-length encoding, a considerable speed-up is obtained for convolution, at no accuracy cost.

6 Conclusion

When convolutions are used for feature extraction, precision can often be sacrificed for speed with negligible degradation of performance. The boxlet convolution method combines quantization and convolution to offer a continuously adjustable trade-off between accuracy and speed. In some cases (such as in relatively simple binary images) large speed-ups can come with no adverse effects. The algorithm is directly applicable to the forward propagation in convolutional neural networks and to pattern matching when translation invariance results from the use of convolution.
|
1998
|
19
|
1,514
|
A Randomized Algorithm for Pairwise Clustering

Yoram Gdalyahu, Daphna Weinshall, Michael Werman
Institute of Computer Science, The Hebrew University, 91904 Jerusalem, Israel
{yoram,daphna,werman}@cs.huji.ac.il

Abstract

We present a stochastic clustering algorithm based on pairwise similarity of datapoints. Our method extends existing deterministic methods, including agglomerative algorithms, min-cut graph algorithms, and connected components; thus it provides a common framework for all these methods. Our graph-based method differs from existing stochastic methods, which are based on analogy to physical systems. The stochastic nature of our method makes it more robust against noise, including accidental edges and small spurious clusters. We demonstrate the superiority of our algorithm using an example with 3 spiraling bands and a lot of noise.

1 Introduction

Clustering algorithms can be divided into two categories: those that require a vectorial representation of the data, and those which use only a pairwise representation. In the former case, every data item must be represented as a vector in a real normed space, while in the latter case only pairwise relations of similarity or dissimilarity are used. The pairwise information can be represented by a weighted graph G(V, E): the nodes V represent data items, and the positive weight w_ij of an edge (i, j) represents the amount of similarity or dissimilarity between items i and j. The graph G need not be a complete graph. In the rest of this paper w_ij represents a similarity value. A vectorial representation is very convenient when one has either an explicit or an implicit parametric model for the data. An implicit model means that the data distribution function is not known, but it is assumed, e.g., that every cluster is symmetrically distributed around some center. An explicit model specifically describes the shape of the distribution (e.g., Gaussian).
In these cases, if a vectorial representation is available, the clustering procedure may rely on iterative estimation of means (e.g., [2, 8]). In the absence of a vectorial representation, one can either try to embed the graph of distances in a vector space, or use a direct pairwise clustering method. The embedding problem is difficult, since it is desirable to use a representation that is both low dimensional and has a low distortion of distances [6, 7, 3]. Moreover, even if such an embedding is achieved, it can help to cluster the data only if at least an implicit parametric model is valid. Hence, direct methods for pairwise clustering are of great value. One strategy of pairwise clustering is to use a similarity threshold θ, remove edges with weight less than θ, and identify the connected components that remain as clusters. A transformation of weights may precede the thresholding.^1 The physically motivated transformation in [1] uses a granular magnet model and replaces weights by "spin correlations". Our algorithm is similar to this model; see Section 2.4. A second pairwise clustering strategy is used by agglomerative algorithms [2], which start with the trivial partition of N points into N clusters of size one, and continue by subsequently merging pairs of clusters. At every step the two clusters which are most similar are merged together, until the similarity of the closest clusters is lower than some threshold. Different similarity measures between clusters distinguish between different agglomerative algorithms. In particular, the single linkage algorithm defines the similarity between clusters as the maximal similarity between two of their members, and the complete linkage algorithm uses the minimal value. A third strategy of pairwise clustering uses the notion of cuts in a graph. A cut (A, B) in a graph G(V, E) is a partition of V into two disjoint sets A and B.
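The first strategy is a one-liner given a union-find structure. The sketch below (illustrative names, nodes labelled 0..n-1) thresholds the edges and returns connected-component labels.

```python
def threshold_clusters(n, edges, theta):
    """Cluster by deleting edges with similarity below theta and returning
    connected components. edges: list of (i, j, w_ij) tuples."""
    parent = list(range(n))

    def find(x):
        # path-halving union-find lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j, w in edges:
        if w >= theta:          # keep only sufficiently similar pairs
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]   # canonical label per node
```

With edges (0,1,0.9), (1,2,0.2), (2,3,0.8) and θ = 0.5, nodes 0-1 and 2-3 form two separate clusters because the weak 1-2 edge is removed.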
The capacity of the cut is the sum of the weights of all edges that cross the cut, namely: c(A, B) = Σ_{i∈A, j∈B} w_ij. Among all the cuts that separate two marked vertices, the minimal cut is the one which has minimal capacity. The minimal cut clustering algorithm [11] divides the graph into components using a cascade of minimal cuts.^2 The normalized cut algorithm [9] uses the association of A (the sum of weights incident on A) and the association of B to normalize the capacity c(A, B). In contrast with the easy min-cut problem, the problem of finding a minimal normalized cut (Ncut) is NP-hard, but with certain approximations it reduces to a generalized eigenvalue problem [9]. Other pairwise clustering methods include techniques of nonparametric density estimation [4] and pairwise deterministic annealing [3]. However, the three categories of methods above are of special importance to us, since our current work provides a common framework for all of them. Specifically, our new algorithm may be viewed as a randomized version of an agglomerative clustering procedure, and at the same time it generalizes the minimal cut algorithm. It is also strongly related to the physically motivated granular magnet model algorithm. By showing the connection between these methods, which may seem very different at first glance, we provide a better understanding of pairwise clustering. Our method is unique in its stochastic nature while provably maintaining low complexity. Thus our method performs as well as the aforementioned methods in "easy" cases, while keeping good performance in "difficult" cases.
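To make the capacity definition concrete, here is a brute-force sketch of c(A, B) and the minimal 2-way cut (illustrative only: enumeration is exponential in the number of nodes, so it is viable just for tiny graphs).

```python
from itertools import combinations

def cut_capacity(w, A, B):
    # c(A, B) = sum of the weights of edges crossing the cut;
    # w maps sorted node pairs (i, j) to weights
    return sum(w.get((min(i, j), max(i, j)), 0) for i in A for j in B)

def min_cut_brute_force(nodes, w):
    """Exhaustive minimal 2-way cut over all bipartitions of the nodes."""
    nodes = list(nodes)
    best = (float("inf"), None)
    for k in range(1, len(nodes)):
        # fix nodes[0] in B to enumerate each bipartition once
        for A in combinations(nodes[1:], k):
            A = set(A)
            B = set(nodes) - A
            best = min(best, (cut_capacity(w, A, B), (A, B)), key=lambda t: t[0])
    return best
```

On two unit-weight triangles {0,1,2} and {3,4,5} joined by a bridge edge (2,3) of weight 0.1, the minimal cut severs only the bridge, with capacity 0.1.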
In particular, it is more robust against noise and pathological configurations: (i) A minimal cut algorithm is intuitively reasonable since it optimizes so that as much of the similarity weight as possible remains within the parts of the clusters, and as little as possible is "wasted" between the clusters. However, it tends to fail when there is no clean separation into 2 parts, or when there are many small spurious parts due, e.g., to noise. Our stochastic approach avoids these problems and behaves more robustly. (ii) The single linkage algorithm deals well with chained data, where items in a cluster are connected by transitive relations. Unfortunately, the deterministic construction of chains can be harmful in the presence of noise, where a few points can make a "bridge" between two large clusters and merge them together. Our algorithm inherits the ability to cluster chained data; at the same time it is robust against such noisy bridges as long as the probability to select all the edges in the bridge remains small.

^1 For example, the mutual neighborhood clustering algorithm [10] substitutes the edge weight w_ij with a new weight w'_ij = m + n, where i is the m-th nearest neighbor of j and j is the n-th nearest neighbor of i.
^2 The reader who is familiar with flow theory may notice that this algorithm also belongs to the first category of methods, as it is equivalent to a weight transformation followed by thresholding. The weight transformation replaces w_ij by the maximal flow between i and j.

2 Stochastic pairwise clustering

Our randomized clustering algorithm is constructed of two main steps:

1. Stochastic partition of the similarity graph into r parts (by randomized agglomeration). For each partition index r (r = N ...
1):
(a) for every pair of points, the probability that they remain in the same part is computed;
(b) the weight of the edge between the two points is replaced by this probability;
(c) clusters are formed using connected components and a threshold of 0.5.
This is described in Sections 2.1 and 2.2.

2. Selection of proper r values, which reflect "interesting" structure in our problem. This is described in Section 2.3.

2.1 The similarity transformation

At each level r, our algorithm performs a similarity transformation followed by thresholding. In introducing this process, our starting point is a generalization of the minimal cut algorithm; then we show how this generalization is obtained by the randomization of a single linkage algorithm. First, instead of considering only the minimal cuts, let us induce a probability distribution on the set of all cuts. We assign to each cut a probability which decreases with increasing capacity. Hence the minimal cut is the most probable cut in the graph, but it does not determine the graph partition on its own. As a second generalization of the min-cut algorithm we consider multi-way cuts. An r-way cut is a partition of G into r connected components. The capacity of an r-way cut is the sum of the weights of all edges that connect different components. In the rest of this paper we may refer to r-way cuts simply as "cuts". Using the distribution induced on r-way cuts, we apply the following family of weight transformations: the weight w_ij is replaced by the probability p^r_ij that nodes i and j are on the same side of a random r-way cut, i.e., w_ij → p^r_ij. This transformation is defined for every integer r between 1 and N. Since the number of cuts in a graph is exponentially large, one must ask whether p^r_ij is computable. Here the decay rate of the cut probability plays an essential role. The induced probability is found to decay fast enough with the capacity, hence p^r_ij is dominated by the low capacity cuts.
Thus, since there exists a polynomial bound on the number of low capacity cuts in any graph [5], the problem becomes computable. This strong property suggests a sampling scheme to estimate the pairing probabilities. Assume that a sampling tool is available which generates cuts according to their probability. Under this condition, a sample of polynomial size is sufficient to estimate the p^r_ij's. The sampling tool that we use is called the "contraction algorithm" [5]. Its discovery led to an efficient probabilistic algorithm for the minimal cut problem. It was shown that for a given r, the probability that the contraction algorithm returns the minimal r-way cut of any graph is at least N^(-2(r-1)), and it decays with increasing capacity.^3 For a graph which is really made of clusters this is a rough underestimate. The contraction algorithm can be implemented in several ways. We describe here its simplest form, which is constructed from N-1 edge contraction steps. Each edge contraction follows the procedure below:

• Select edge (i, j) with probability proportional to w_ij.
• Replace nodes i and j by a single node {ij}.
• Let the set of edges incident on {ij} be the union of the sets of edges incident on i and j, but remove self loops formed by edges originally connecting i to j.

It is shown in [5] that each step of edge contraction can be implemented in O(N) time, hence this simple form of the contraction algorithm has complexity O(N^2). For sparse graphs an O(N log N) implementation can be shown. The contraction algorithm as described above is a randomized version of the agglomerative single linkage procedure. If the probabilistic selection rule is replaced by a greedy selection of the maximal weight edge, the single linkage algorithm is obtained.
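The procedure above, and the Monte-Carlo estimate of the pairing probabilities built on top of it, can be sketched compactly. This is illustrative code: a simple rescan of the edge list replaces the O(N)-per-step bookkeeping of [5], and a union-find structure stands in for explicit node merging.

```python
import random

def contract(n, edges, r, rng):
    """One run of the contraction algorithm: repeatedly pick an edge with
    probability proportional to its weight and merge its endpoints, until
    r components remain. edges: list of (i, j, w). Returns node labels."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    comps = n
    while comps > r:
        # surviving edges: those whose endpoints are still in different components
        live = [(i, j, w) for i, j, w in edges if find(i) != find(j)]
        i, j, w = rng.choices(live, weights=[w for _, _, w in live])[0]
        parent[find(i)] = find(j)   # contract the selected edge
        comps -= 1
    return [find(i) for i in range(n)]

def pairing_prob(n, edges, r, pairs, M=200, seed=0):
    """Monte-Carlo estimate of p^r_ij: the fraction of M contraction runs in
    which i and j end up in the same one of the r components."""
    rng = random.Random(seed)
    counts = {p: 0 for p in pairs}
    for _ in range(M):
        label = contract(n, edges, r, rng)
        for i, j in pairs:
            counts[(i, j)] += label[i] == label[j]
    return {p: c / M for p, c in counts.items()}
```

On two strongly connected triangles joined by a weak bridge, p^2_ij is close to 1 within a triangle and close to 0 across the bridge, because selecting the bridge edge before either triangle is fully contracted is very unlikely.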
In terms of similarity transformations, a single linkage algorithm which halts with r clusters may be associated with the transformation w_ij → {0, 1} (1 if i and j are returned in the same cluster, 0 otherwise). Our similarity transformation (p^r_ij) uses the expected value (or the average) of this binary assignment under the probabilistic relaxation of the selection rule. We could estimate p^r_ij by repeating the contraction algorithm M times and averaging these binary indicators (a better way is described below). Using the Chernoff inequality it can be shown^4 that if M ≥ (2 ln 2 + 4 ln N - 2 ln δ)/ε^2, then each p^r_ij is estimated, with probability ≥ 1 - δ, to within ε of its true value.

2.2 Construction of partitions

To compute a partition at every r level, it is sufficient to know, for every i-j pair, which r satisfies p^r_ij = 0.5. This is found by repeating the contraction algorithm M times. In each iteration there exists a single r at which the edge between points i and j is marked and the points are merged. Denote by r_m the level r which joins i and j at the m-th iteration (m = 1 ... M). The median r' of the sequence {r_1, r_2, ..., r_M} is the sample estimate for the level r that satisfies p^r_ij = 0.5. We use an on-line technique (not described here) to estimate the median r' using constant and small memory. Having computed the matrix r', where the entry r'_ij is the estimator for the r that satisfies p^r_ij = 0.5, we find the connected components at a given r value after disconnecting every edge (i, j) for which r'_ij > r. This gives the r level partition.

^3 The exact decay rate is not known, but it was found experimentally to be adequate; otherwise we would ignore cuts generated with high capacity.
^4 Thanks to Ido Bergman for pointing this out.

2.3 Hierarchical clustering

We now address the problem of choosing "good" r values. The transformed weight p^r_ij has the advantage of reflecting transitive relations between data items i and j.
For a selected value of r (which defines a specification level) the partition of data items into clusters is obtained by eliminating edges whose weight (p^r_ij) is less than a fixed threshold (0.5). That is: nodes are assigned to the same cluster if at level r their probability to be on the same side of a random r-way cut is larger than half. Partitions which correspond to subsequent r values might be very similar to each other, or even identical, in the sense that only a few nodes (if any) change the component to which they belong. Events which are of interest, therefore, are when the variation between subsequent partitions is of the order of the size of a cluster. This typically happens when two clusters combine to form one cluster which corresponds to a higher scale (less resolution). Accordingly, using the hierarchical partition obtained in Section 2.2, we measure the variation between subsequent partitions by Σ_{k=1}^{K} |ΔN_k|, where K is a small constant (of the order of the number of clusters) and N_k is the size of the k-th largest component of the partition.

2.4 The granular magnet model

Our algorithm is closely related to the successful granular magnet model recently proposed in [1]. However, the two methods draw the random cuts effectively from different distributions. In our case the distribution is data driven, imposed by the contraction algorithm. The physical model imposes the Boltzmann distribution, where a cut of capacity E is assigned a probability proportional to exp(-E/T), and T is a temperature parameter. The probability p^T_ij measures whether nodes i and j are on the same side of a cut at temperature T (originally called the "spin-spin correlation function"). The magnetic model uses the similarity transformation w_ij → p^T_ij and a threshold (0.5) to break the graph into components. However, even if identical distributions were used, p^T_ij would be inherently different from p^r_ij, since at a fixed temperature the random cuts may have different numbers of components.
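One plausible reading of this variation measure can be sketched as follows. The pairing of components between the two partitions is not spelled out in the text, so matching components by rank of size is our assumption, as are the function and argument names.

```python
def partition_variation(sizes_prev, sizes_next, K=10):
    """Variation between subsequent partitions: sum over the K largest
    components of the absolute change in component size, with components
    matched by their rank when sorted by size (an assumption)."""
    a = sorted(sizes_prev, reverse=True)[:K]
    b = sorted(sizes_next, reverse=True)[:K]
    a += [0] * (K - len(a))   # pad so both lists have exactly K entries
    b += [0] * (K - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))
```

Under this reading, merging two clusters of 100 points (sizes [100, 100, 5] → [200, 5]) scores 100 + 95 + 5 = 200, of the order of a cluster size, while an unchanged partition scores 0.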
Superficially, the parameter T plays in the magnetic model a similar role to our parameter r, but the two parameterizations are quite different. First, r is a discrete parameter while T is a continuous one. Moreover, in order to find the pairing probabilities p^T_ij for different temperatures, the stochastic process should be employed for every T value separately. On the other hand, our algorithm estimates p^r_ij for every 1 ≤ r ≤ N at once. For hard clustering (vs. soft clustering) it was shown above that even this is not necessary, since we can get a direct estimate of the r which satisfies p^r_ij = 0.5.

3 Example

Pairwise clustering has the advantage that a vectorial representation of the data is not needed. However, graphs of distances are hard to visualize, and we therefore demonstrate our algorithm using vectorial data. In spite of having a vectorial representation, the information which is made available to the clustering algorithm includes only the matrix of pairwise Euclidean distances^5 d_ij. Since our algorithm works with similarity values and not with distances, it is necessary to invert the distances using w_ij = f(d_ij). We choose f to be similar to the function used in [1]: w_ij = exp(-d_ij^2/σ^2), where σ is the average distance to the n-th nearest neighbor (we used n = 10, but the results remain the same as long as a reasonable value is selected). [Figure 1: The 2000 data points (left), and the three most pronounced hierarchical levels of clustering (right). At r = 353 the three spirals form one cluster (figure a). This cluster splits at r = 354 into two (figures b1, b2), and into three parts at r = 368 (figures c1, c2, c3). The background points form isolated clusters, usually of size 1 (not shown).] Figure 1 shows 2000 data points in the Euclidean plane.
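The distance-to-similarity transformation can be sketched directly from the formula (illustrative code; σ is estimated from the distance matrix as described in the text, with a small n for the toy example below).

```python
import math

def similarity_from_distances(d, n_neighbor):
    """w_ij = exp(-d_ij^2 / sigma^2), with sigma the average distance to
    each point's n-th nearest neighbor. d: full symmetric distance matrix."""
    N = len(d)
    # distance from each point to its n-th nearest neighbor (self excluded)
    nth = [sorted(d[i][j] for j in range(N) if j != i)[n_neighbor - 1]
           for i in range(N)]
    sigma = sum(nth) / N
    return [[math.exp(-(d[i][j] ** 2) / sigma ** 2) for j in range(N)]
            for i in range(N)]
```

For three collinear points at 0, 1 and 3, the nearest-neighbor distances are 1, 1 and 2, so σ = 4/3; close pairs get similarities near 1 and distant pairs decay toward 0.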
In the stochastic stage of the algorithm we used only 200 iterations of graph contraction, during which we estimated for every pair i-j the value of r which satisfies p^r_ij = 0.5 (see Section 2.2). As expected, subsequent partitions are typically identical or differ only slightly from each other (Figure 2). The variation between subsequent partitions was measured using the 10 largest parts (K = 10, see Section 2.3). The results did not depend on the exact value of K, since the sum was dominated by its first terms. At low r values (partition into a small number of components) a typical partition is composed of one giant component and a few tiny components that capture isolated noise points. The incorporation of these tiny components into the giant one produces negligible variations between subsequent partitions. At high r values all the components are small, and therefore the variation between subsequent partitions must decay. At intermediate r values a small number of sharp peaks appear. The two highest peaks in Figure 2 are at r = 354 and r = 368; they mark meaningful hierarchies for the data clustering, as shown in Figure 1. We compare our results with two other methods in Figures 3 and 4.

^5 The vectorial representation of the data points would not be useful even if it were available, since the parametric model is not known (see Section 1).

[Figure 2: The variation between subsequent partitions (see text) as a function of the number of components (r). The variation is computed for every integer r (the spacing between peaks is not due to sparse sampling). Outside the displayed range the variation vanishes.]

[Figure 3: The best bi-partition according to the normalized cut algorithm [9]. Since the first partition breaks one of the spirals, a satisfactory solution cannot be achieved in any of the later stages.]

[Figure 4: A three (macroscopic) cluster partition by the deterministic single linkage algorithm. The probabilistic scheme avoids the "bridging effect" thanks to the small probability of selecting the particular chain of edges.]

References

[1] Blatt M., Wiseman S. and Domany E., "Data clustering using a model granular magnet", Neural Computation 9, 1805-1842, 1997.
[2] Duda R. O. and Hart P. E., "Pattern Classification and Scene Analysis", Wiley-Interscience, New York, 1973.
[3] Hofmann T. and Buhmann J., "Pairwise data clustering by deterministic annealing", PAMI 19, 1-14, 1997.
[4] Jain A. and Dubes R., "Algorithms for Clustering Data", Prentice Hall, NJ, 1988.
[5] Karger D., "A new approach to the minimum cut problem", Journal of the ACM, 43(4), 1996.
[6] Klock H. and Buhmann J., "Data visualization by multidimensional scaling: a deterministic annealing approach", Technical Report IAI-TR-96-8, Institut fur Informatik III, University of Bonn, October 1996.
[7] Linial N., London E. and Rabinovich Y., "The geometry of graphs and some of its algorithmic applications", Combinatorica 15, 215-245, 1995.
[8] Rose K., Gurewitz E. and Fox G., "Constrained clustering as an optimization method", PAMI 15, 785-794, 1993.
[9] Shi J. and Malik J., "Normalized cuts and image segmentation", Proc. CVPR, 731-737, 1997.
[10] Smith S., "Threshold validity for mutual neighborhood clustering", PAMI 15, 89-92, 1993.
[11] Wu Z. and Leahy R., "An optimal graph theoretic approach to data clustering: theory and its application to image segmentation", PAMI 15, 1101-1113, 1993.
|
1998
|
2
|
1,515
|
Experimental Results on Learning Stochastic Memoryless Policies for Partially Observable Markov Decision Processes

John K. Williams
Department of Mathematics, University of Colorado, Boulder, CO 80309-0395
jkwillia@euclid.colorado.edu

Satinder Singh
AT&T Labs-Research, 180 Park Avenue, Florham Park, NJ 07932
baveja@research.att.com

Abstract

Partially Observable Markov Decision Processes (POMDPs) constitute an important class of reinforcement learning problems which present unique theoretical and computational difficulties. In the absence of the Markov property, popular reinforcement learning algorithms such as Q-learning may no longer be effective, and memory-based methods which remove partial observability via state estimation are notoriously expensive. An alternative approach is to seek a stochastic memoryless policy which for each observation of the environment prescribes a probability distribution over available actions that maximizes the average reward per timestep. A reinforcement learning algorithm which learns a locally optimal stochastic memoryless policy has been proposed by Jaakkola, Singh and Jordan, but not empirically verified. We present a variation of this algorithm, discuss its implementation, and demonstrate its viability using four test problems.

1 INTRODUCTION

Reinforcement learning techniques have proven quite effective in solving Markov Decision Processes (MDPs), control problems in which the exact state of the environment is available to the learner and the expected result of an action depends only on the present state [10]. Algorithms such as Q-learning learn optimal deterministic policies for MDPs: rules which for every state prescribe an action that maximizes the expected future reward. In many important problems, however, the exact state of the environment is either inherently unknowable or prohibitively expensive to obtain, and only a limited, possibly stochastic observation of the environment is available. Such Partially Observable Markov Decision Processes (POMDPs) [3, 6] are often much more difficult than MDPs to solve [4]. Distinct sequences of observations and actions preceding a given observation in a POMDP may lead to different probabilities of occupying the underlying exact states of the MDP. If the efficacy of an action depends on the hidden exact state of the environment, an optimal choice may require knowing the past history as well as the current observation, and the problem is no longer Markov. In light of this difficulty, one approach to solving POMDPs is to explore the environment while building up a memory of past observations, actions and rewards which allows estimation of the current hidden state [1]. Such methods produce deterministic policies, but they are computationally expensive and may not scale well with problem size. Furthermore, policies that require state estimation using memory may be complicated to implement. Memoryless policies are particularly appropriate for problems in which the state is expensive to obtain or inherently difficult to estimate, and they have the advantage of being extremely simple to act upon. For a POMDP, the optimal memoryless policy is generally a stochastic policy: one which for each observation of the environment prescribes a probability distribution over the available actions. In fact, examples of POMDPs can be constructed for which a stochastic policy is arbitrarily better than the optimal deterministic policy [9]. An algorithm proposed by Jaakkola, Singh and Jordan (JSJ) [2], which we investigate here, learns memoryless stochastic policies for POMDPs.

2 POMDPs AND DIFFERENTIAL-REWARD Q-VALUES

We assume that the environment has discrete states S = {s_1, s_2, ..., s_N}, and the learner chooses actions from a set A.
State transitions depend only on the current state s and the action a taken (the Markov property); they occur with probabilities P^a(s, s') and result in expected rewards R^a(s, s'). In a POMDP, the learner cannot sense exactly the state s of the environment, but rather perceives only an observation, or "message", from a set M = {m_1, m_2, ..., m_M} according to a conditional probability distribution P(m|s). The learner will in general not know the size of the underlying state space, its transition probabilities, its reward function, or the conditional distributions of the messages. In MDPs, there always exists a policy which simultaneously maximizes the expected future reward for all states, but this is not the case for POMDPs [9]. An appropriate alternative measure of the merit of a stochastic POMDP policy π(a|m) is the asymptotic average reward per timestep, R^π, that it achieves. In seeking an optimal stochastic policy, the JSJ algorithm makes use of Q-values determined by the infinite-horizon differential reward for each observation-action pair (m, a). In particular, if r_t denotes the reward obtained at time t, we may define the differential-reward Q-values by

Q^π(s, a) = Σ_{t=1}^{∞} E^π[r_t - R^π | s_1 = s, a_1 = a];   Q^π(m, a) = E_s[Q^π(s, a) | M(s) = m]   (1)

where M is the observation operator. Note that E[r_t] → R^π as t → ∞, so the summand converges to zero. The value functions V^π(s) and V^π(m) may be defined similarly.

3 POLICY IMPROVEMENT

The JSJ algorithm consists of a method for evaluating Q^π and V^π and a mechanism for using them to improve the current policy. Roughly speaking, if Q^π(m, a) > V^π(m), then action a realized a higher differential reward than the average for observation m, and assigning it a slightly greater probability will increase the average reward per timestep, R^π. We interpret the quantities Δ_m(a) = Q^π(m, a) - V^π(m) as comprising a "gradient" of R^π in policy space. Their projections onto the probability simplexes may then be written as δ_m = Δ_m - (⟨Δ_m, 1⟩/|A|) 1, where 1 is the one-vector (1, 1, ..., 1), ⟨·,·⟩ is the inner product, and |A| is the number of actions, or

δ_m(a) = Δ_m(a) - (1/|A|) Σ_{a'∈A} Δ_m(a') = Q^π(m, a) - (1/|A|) Σ_{a'∈A} Q^π(m, a').   (2)

For sufficiently small ε_m, an improved policy π'(a|m) may be obtained by the increments

π'(a|m) = π(a|m) + ε_m δ_m(a).   (3)

In practice, we also enforce π'(a|m) ≥ P_min for all a and m to guarantee continued exploration. The original JSJ algorithm prescribed using Δ_m(a) in place of δ_m(a) in equation (3), followed by renormalization [2]. Our method has the advantage that a given value of Δ yields the same increment regardless of the current value of the policy, and it ensures that the step is in the correct direction. We also do not require the differential-reward value estimate, V^π.

4 Q-EVALUATION

As the POMDP is simulated under a fixed stochastic policy π, every occurrence of an observation-action pair (m, a) begins a sequence of rewards which can be used to estimate Q^π(m, a). Exploiting the fact that the Q^π(m, a) are defined as sums, the JSJ Q-evaluation method recursively averages the estimates from all such sequences using a so-called "every-visit" Monte-Carlo method. In order to reduce the bias and variance caused by the dependence of the evaluation sequences, a factor β is used to discount their shared "tails". Specifically, at time t the learner makes observation m_t, takes action a_t, and obtains reward r_t. The number of visits K(m_t, a_t) is incremented, the tail discount rate γ(m, a) = 1 - K(m, a)^(-1/4), and the following updates are performed (the indicator function χ_t(m, a) is 1 if (m, a) = (m_t, a_t) and 0 otherwise).
β(m,a) = [1 - χ_t(m,a)/K(m,a)] γ(m,a) β(m,a) + χ_t(m,a)/K(m,a)   (tail discount factor)   (4)

Q(m,a) = [1 - χ_t(m,a)/K(m,a)] Q(m,a) + β(m,a)[r_t - R]   (5)

C(m,a) = [1 - χ_t(m,a)/K(m,a)] C(m,a) + β(m,a)   (cumulative discount effect)   (6)

R = (1 - 1/t) R + (1/t) r_t   (R^π-estimate)   (7)

Q(m,a) = Q(m,a) - C(m,a)[R - R_old];   R_old = R   (Q^π-estimate correction)   (8)

Other schedules for γ(m,a) are possible (see [2]), and the correction provided by (8) need not be performed at every step, but can be delayed until the Q^π-estimate is needed. This evaluation method can be used as given for a policy-iteration type algorithm in which independent T-step evaluations of Q^π are interspersed with policy improvements as prescribed in section 3. However, an online version of the algorithm which performs policy improvement after every step requires that old experience be gradually "forgotten" so that the Q^π-estimate can respond to more recent experience. To achieve this, we multiply the previous estimates of β, Q, and C at each timestep by a "decay" factor α, 0 < α < 1, before they are updated via equations (4)-(6), and replace equation (7) by

R = α(1 - 1/t) R + [1 - α(1 - 1/t)] r_t.   (9)

An alternative method, which also works reasonably well, is to multiply K and t by α at each timestep instead.

1076 J. K. Williams and S. Singh

Figure 1: (a) Schematic of the confounded two-state POMDP, (b) evolution of the R^π-estimate, and (c) evolution of π(A) (solid) and π(B) (dashed) for ε = 0.0002, α = 0.9995.

5 EMPIRICAL RESULTS We present only results from single runs of our online algorithm, including the modified JSJ policy improvement and Q-evaluation procedures described above.
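The online evaluation procedure can be sketched as follows. This class reflects one reading of the update rules (4)-(6), (8) and (9), with the decay applied as described in the text; it is an illustrative sketch rather than the authors' code:

```python
from collections import defaultdict

class OnlineQEval:
    """Every-visit Monte-Carlo evaluation of differential-reward Q-values,
    following one reading of the updates (4)-(6), (8) and (9)."""

    def __init__(self, decay=0.9995):
        self.alpha = decay               # online "forgetting" factor
        self.K = defaultdict(int)        # visit counts K(m, a)
        self.beta = {}                   # tail discount factors
        self.Q = {}                      # Q^pi estimates
        self.C = {}                      # cumulative discount effects
        self.R = 0.0                     # average-reward estimate
        self.R_old = 0.0
        self.t = 0

    def step(self, m, a, r):
        self.t += 1
        self.K[(m, a)] += 1
        for est in (self.beta, self.Q, self.C):
            est.setdefault((m, a), 0.0)
            for k in est:                # decay old experience (online version)
                est[k] *= self.alpha
        for k in self.beta:              # updates (4)-(6), for every pair
            chi = 1.0 if k == (m, a) else 0.0
            K = self.K[k]
            gamma = 1.0 - K ** -0.25     # tail discount rate
            w = 1.0 - chi / K
            self.beta[k] = w * gamma * self.beta[k] + chi / K          # (4)
            self.Q[k] = w * self.Q[k] + self.beta[k] * (r - self.R)    # (5)
            self.C[k] = w * self.C[k] + self.beta[k]                   # (6)
        # (9): decayed running estimate of the average reward R^pi.
        g = self.alpha * (1.0 - 1.0 / self.t)
        self.R = g * self.R + (1.0 - g) * r
        for k in self.Q:                 # (8): correct for the drift in R
            self.Q[k] -= self.C[k] * (self.R - self.R_old)
        self.R_old = self.R
```

Note that with a constant reward stream the average-reward estimate converges to that reward and the differential Q-values correctly go to zero.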
Results from the policy iteration version are qualitatively similar, and statistics performed on multiple runs verify that those shown are representative of the algorithm's behavior. To simplify the presentation, we fix a constant learning rate, ε, and decay factor, α, for each problem, and we use P_min = 0.02 throughout. Note, however, that appropriate schedules or online heuristics for decreasing ε and P_min while increasing α would improve performance and are necessary to ensure convergence. Except for the first problem, we choose the initial policy π to be uniform. In the last two problems, values of π(a|m) < 0.03 are rounded down to zero, with renormalization, before the learned policy is evaluated. 5.1 CONFOUNDED TWO-STATE PROBLEM The two-state MDP diagrammed in Figure 1(a) becomes a POMDP when the two states are confounded into a single observation. The learner may take action A or B, and receives a reward of either +1 or -1; the state transition is deterministic, as indicated in the diagram. Note that either stationary deterministic policy results in R^π = -1, whereas the optimal stochastic policy assigns each action the probability 1/2, resulting in R^π = 0. The evolution of the R^π-estimate and policy, starting from the initial policy π(A) = 0.1 and π(B) = 0.9, is shown in Figure 1. Clearly the learned policy approaches the optimal stochastic policy π = (1/2, 1/2). 5.2 MATRIX GAME: SCISSORS-PAPER-STONE-GLASS-WATER Scissors-Paper-Stone-Glass-Water (SPSGW), an extension of the well-known Scissors-Paper-Stone, is a symmetric zero-sum matrix game in which the learner selects a row i, the opponent selects a column j, and the learner's payoff is determined by the matrix entry M(i,j). A game-theoretic solution is a stochastic (or "mixed") policy which guarantees the learner an expected payoff of at least zero.
It can be shown using linear programming that the unique optimal strategy for SPSGW, yielding R^π = 0, is to play stone and water with probability 1/3, and to play scissors, paper, and glass with probability 1/9 [7]. Any stationary deterministic policy results in R^π = -1, since the opponent eventually learns to anticipate the learner's choice and exploit it.

M = [  0  -1   1   1  -1
       1   0   1  -1  -1
      -1  -1   0  -1   1
      -1   1   1   0  -1
       1   1  -1   1   0 ]

Figure 2: (a) Diagram of Scissors-Paper-Stone-Glass-Water, (b) the payoff matrix M (above), (c) evolution of the R^π-estimate, and (d) evolution of π(stone) and π(water) (solid) and π(scissors), π(paper), and π(glass) (dashed) for ε = 0.00005, α = 0.9995.

In formulating SPSGW as a POMDP, it is necessary to include in the state sufficient information to allow the opponent to exploit any sub-optimal strategy. We thus choose as states the learner's past action frequencies, multiplied at each timestep by the decay factor, α. There is only one observation, and the learner acts by selecting the "row" scissors, paper, stone, glass or water, producing a deterministic state transition. The simulated opponent plays the column which maximizes its expected payoff against the estimate of the learner's strategy obtained from the state. The learner's reward is then obtained from the appropriate entry of the payoff matrix. The policy π = (0.1124, 0.1033, 0.3350, 0.1117, 0.3376) learned after 50,000 iterations (see Figure 2) is very close to the optimal policy (1/9, 1/9, 1/3, 1/9, 1/3).
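The claimed solution can be checked directly from the payoff matrix above. Taking rows and columns in the order scissors, paper, stone, glass, water (the order in which the text lists the actions), the stated mixed strategy earns an expected payoff of exactly zero against every opposing column. A quick sketch:

```python
from fractions import Fraction as F

# Payoff matrix M(i, j): the learner plays row i, the opponent column j.
# Assumed row/column order: scissors, paper, stone, glass, water.
M = [[ 0, -1,  1,  1, -1],
     [ 1,  0,  1, -1, -1],
     [-1, -1,  0, -1,  1],
     [-1,  1,  1,  0, -1],
     [ 1,  1, -1,  1,  0]]

# Optimal mixed strategy: stone and water with probability 1/3,
# scissors, paper and glass with probability 1/9.
p = [F(1, 9), F(1, 9), F(1, 3), F(1, 9), F(1, 3)]

# Expected payoff of p against each pure column strategy.
payoffs = [sum(p[i] * M[i][j] for i in range(5)) for j in range(5)]
# Every entry equals 0, so p guarantees an expected payoff of at least 0.
```

The matrix is antisymmetric (M(i,j) = -M(j,i)), which is exactly the symmetric zero-sum structure the text describes, and implies the value of the game is zero.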
5.3 PARR AND RUSSELL'S GRID WORLD Parr and Russell's grid world [8] consists of 11 states in a 4x3 grid with a single obstacle, as shown in Figure 3(a). The learner senses only walls to its immediate east or west and whether it is in the goal state (upper right corner) or penalty state (directly below the goal), resulting in the 6 possible observations (0-3, G and P) indicated in the diagram. The available actions are to move N, E, S, or W, but there is a probability 0.1 of slipping to either side and only 0.8 of moving in the desired direction; a movement into a wall results in bouncing back to the original state. The learner receives a reward of +1 for a transition into the goal state, -1 for a transition into the penalty state, and -0.04 for all other transitions. The goal and penalty states are connected to a cost-free absorbing state; when the learner reaches either of them it is teleported immediately to a new start state chosen with uniform probability. The results are shown in Figure 3. A separate 10^6-step evaluation of the final learned policy resulted in R^π = 0.047. In contrast, the optimal deterministic policy indicated by arrows in Figure 3(a) yields R^π = 0.024 [5], while Parr and Russell's memory-based SPOVA-RL algorithm achieved R^π = 0.12 after learning for 400,000 iterations [8]. 5.4 MULTI-SERVER QUEUE At each timestep, an arriving job having type 1, 2, or 3 with probability 1/2, 1/3 or 1/6, respectively, must be assigned to server A, B or C; see Figure 4(a). Each server is optimized for a particular job type which it can complete in an expected time of 2.6
Figure 3: (a) Parr and Russell's grid world, with observations shown in lower right corners and the optimal deterministic memoryless policy represented by arrows, (b) evolution of the R^π-estimate, and (c) the resulting learned policy π(a|m) (observations 0-3 across columns, actions N, E, S, W down rows) for ε = 0.02, α = 0.9999.

timesteps, while the other job types require 50% longer. All jobs in a server's queue are handled in parallel, up to a capacity of 10 for each server; they finish with probability 1/f at each timestep, where f is the product of the expected time for the job and the number of jobs in the server's queue. The states for this POMDP are all combinations of waiting jobs and server occupancies of the three job types, but the learner's observation is restricted to the type of the waiting job. The state transition is obtained by removing all jobs which have finished and adding the waiting job to the chosen server if it has space available. The reward is +1 if the job is successfully placed, or 0 if it is dropped. The results are shown in Figure 4. A separate 10^6-step evaluation of the learned policy obtained R^π = 0.95, corresponding to 95% success in placing jobs. In contrast, the optimal deterministic policy, which assigns each job to the server optimized for it, attained only 87% success. Thus the learned policy more than halves the drop rate! 6 CONCLUSION Our online version of an algorithm proposed by Jaakkola, Singh and Jordan efficiently learns a stochastic memoryless policy which is either provably optimal or at least superior to any deterministic memoryless policy for each of four test problems.
Many enhancements are possible, including appropriate learning schedules to improve performance and ensure convergence, estimation of the time between observation-action visits to obtain better discount rates γ and thereby enhance Q^π-estimate bias and variance reduction (see [2]), and multiple starts or simulated annealing to avoid local minima. In addition, observations could be extended to include some past history when appropriate. Most POMDP algorithms use memory and attempt to learn an optimal deterministic policy based on belief states. The stochastic memoryless policies learned by the JSJ algorithm may not always be as good, but they are simpler to act upon and can adapt smoothly in non-stationary environments. Moreover, because it searches the space of stochastic policies, the JSJ algorithm has the potential to find the optimal memoryless policy. These considerations, along with the success of our simple implementation, suggest that this algorithm may be a viable candidate for solving real-world POMDPs, including distributed control or network admission and routing problems in which the numbers of states are enormous and complete state information may be difficult to obtain or estimate in a timely manner.

Figure 4: (a) Schematic of the multi-server queue, with job arrivals of type 1, 2, or 3 and expected completion times T_A = (2.6, 3.9, 3.9), T_B = (3.9, 2.6, 3.9) and T_C = (3.9, 3.9, 2.6) for servers A, B and C, (b) evolution of the R^π-estimate, and (c) the resulting learned policy

π(a|m) = [ 0.73  0.02  0.02
           0.02  0.96  0.09
           0.25  0.02  0.89 ]

(observations 1, 2, 3 across columns, actions A, B, C down rows) for ε = 0.005, α = 0.9999.

Acknowledgements We would like to thank Mike Mozer and Tim Brown for helpful discussions. Satinder Singh was funded by NSF grant IIS-9711753. References [1] Chrisman, L.
(1992). Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In Proceedings of the Tenth National Conference on Artificial Intelligence.
[2] Jaakkola, T., Singh, S. P., and Jordan, M. I. (1995). Reinforcement learning algorithm for partially observable Markov decision problems. In Advances in Neural Information Processing Systems 7.
[3] Littman, M., Cassandra, A., and Kaelbling, L. (1995). Learning policies for partially observable environments: Scaling up. In Proceedings of the Twelfth International Conference on Machine Learning.
[4] Littman, M. L. (1994). Memoryless policies: Theoretical limitations and practical results. In Proceedings of the Third International Conference on Simulation of Adaptive Behavior: From Animals to Animats.
[5] Loch, J., and Singh, S. P. (1998). Using eligibility traces to find the best memoryless policy in partially observable Markov decision processes. In Machine Learning: Proceedings of the Fifteenth International Conference.
[6] Lovejoy, W. S. (1991). A survey of algorithmic methods for partially observable Markov decision processes. In Annals of Operations Research, 28.
[7] Morris, P. (1994). Introduction to Game Theory. Springer-Verlag, New York.
[8] Parr, R. and Russell, S. (1995). Approximating optimal policies for partially observable stochastic domains. In Proceedings of the International Joint Conference on Artificial Intelligence.
[9] Singh, S. P., Jaakkola, T., and Jordan, M. I. (1994). Learning without state estimation in partially observable Markovian decision processes. In Machine Learning: Proceedings of the Eleventh International Conference.
[10] Sutton, R. S. and Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
|
1998
|
20
|
1,516
|
Attentional Modulation of Human Pattern Discrimination Psychophysics Reproduced by a Quantitative Model Laurent Itti, Jochen Braun, Dale K. Lee and Christof Koch {itti, achim, jjwen, koch}@klab.caltech.edu Computation & Neural Systems, MSC 139-74 California Institute of Technology, Pasadena, CA 91125, U.S.A. Abstract We previously proposed a quantitative model of early visual processing in primates, based on non-linearly interacting visual filters and statistically efficient decision. We now use this model to interpret the observed modulation of a range of human psychophysical thresholds with and without focal visual attention. Our model, calibrated by an automatic fitting procedure, simultaneously reproduces thresholds for four classical pattern discrimination tasks, performed while attention was engaged by another concurrent task. Our model then predicts that the seemingly complex improvements of certain thresholds, which we observed when attention was fully available for the discrimination tasks, can best be explained by a strengthening of competition among early visual filters. 1 INTRODUCTION What happens when we voluntarily focus our attention on a restricted part of our visual field? Focal attention is often thought of as a gating mechanism, which selectively allows a certain spatial location and certain types of visual features to reach higher visual processes. We here investigate the possibility that attention might have a specific computational modulatory effect on early visual processing.
We and others have observed that focal visual attention can modulate human psychophysical thresholds for simple pattern discrimination tasks [7, 8, 5]. When attention is drawn away from a task, for example by "cueing" [12] to another location of the display, or by a second, concurrent task [1, 7, 8], an apparently complex pattern of performance degradation is observed: For some tasks, attention has little or no effect on performance (e.g., detection of luminance increments), while for other tasks, attention dramatically improves performance (e.g., discrimination of orientation).

790 L. Itti, J. Braun, D. K. Lee and C. Koch

Our specific findings with dual-task psychophysics are detailed below. These observations have been paralleled by electrophysiological studies of attention. In the awake macaque, neuronal responses to attended stimuli can be 20% to 100% higher than to otherwise identical unattended stimuli. This has been demonstrated in visual cortical areas V1, V2, and V4 [16, 11, 10, 9] when the animal discriminates stimulus orientation, and in areas MT and MST when the animal discriminates the speed of stimulus motion [17]. Even spontaneous firing rates are 40% larger when attention is directed at a neuron's receptive field [9]. Whether neuronal responses to attended stimuli are merely enhanced [17] or whether they are also more sharply tuned for certain stimulus dimensions [16] remains controversial. Very recently, fMRI studies have shown similar enhancement (as measured with BOLD contrast) in area V1 of humans, specifically at the retinotopic location where subjects had been instructed to focus their attention [2, 14]. All of these observations directly address the issue of the "top-down" computational effect of attentional focusing onto early visual processing stages.
This issue should be distinguished from that of the "bottom-up" control of visual attention [6], which studies which visual features are likely to attract the attention focusing mechanism (e.g., pop-out phenomena and studies of visual search). Top-down attentional modulation happens after attention has been focused on a location of the visual field, and most probably involves the massive feedback circuits which anatomically project from higher cortical areas back to early visual processing areas. In the present study, we quantify the modulatory effect of attention observed in human psychophysics using a model of early visual processing. The model is based on non-linearly interacting visual filters and statistically efficient decision [4, 5]. Although attention could modulate virtually any visual processing stage (e.g., the decision stage, which compares internal responses from different stimuli), our basic hypothesis here, supported by electrophysiology and fMRI [16, 11, 10, 17, 9, 2, 14], is that this modulation might happen very early in the visual processing hierarchy. Given this basic hypothesis, we investigate how attention should affect early visual processing in order to quantitatively reproduce the psychophysical results. 2 PSYCHOPHYSICAL EXPERIMENTS We measured attentional modulation of spatial vision thresholds using a dual-task paradigm [15, 7]: At the center of the visual field, a letter discrimination task is presented, while a pattern discrimination task is simultaneously presented at a random peripheral location (4° eccentricity). The central task consists of discriminating between five letters "T" or four "T" and one "L". It has been shown to efficiently engage attention [7]. The peripheral task is chosen from a battery of classical pattern discrimination tasks, and is the task of interest for this study.
Psychophysical thresholds are measured for two distinct conditions: In the "fully attended" condition, observers are asked to devote their entire attention to the peripheral task, and to ignore the central task (while still fixating the center of the screen).

Quantitative Modeling of Attentional Modulation 791

In the "poorly attended" condition, observers are asked to pay full attention to the central task (and the blocks of trials for which performance for the central task falls below a certain cut-off are discarded). Four classical pattern discrimination tasks were investigated, each with two volunteer subjects (average shown in Figure 1), similarly to our previous experiments [7, 8]. Screen luminance resolution was 0.2%. Screen luminance varied from 1 to 90 cd/m^2 (mean 45 cd/m^2), room illumination was 5 cd/m^2 and viewing distance 80 cm. The Yes/No (present/absent) paradigm was used (one stimulus presentation per trial). Threshold (75% correct performance) was reached using a staircase procedure, and computed through a maximum-likelihood fit of a Weibull function with two degrees of freedom to the psychometric curves.

Exp. 2: Orientation discrimination (panel; axes: mask contrast, contrast)
Figure 1: Psychophysical data and model fits using the parameters from Table 1 (P = poorly and F = fully attended). Gray curves: Model predictions for fully attended data, using the poorly attended parameters, except for γ = 2.9 and δ = 2.1 (see Results).

Exp. 1 measured increment contrast discrimination threshold: The observer discriminates between a 4 cpd (cycles per degree) stochastic oriented mask [7] at fixed contrast, and the same mask plus a low-contrast sixth-derivative-of-Gaussian (D6G) bar; threshold is measured for bar contrast [8].
Exp. 2 measured orientation discrimination thresholds: The observer discriminates between a vertical and a tilted grating at 4 cpd; the threshold for the angle difference is measured. In addition, two contrast masking tasks were investigated for their sensitivity to non-linearities in visual processing. A 4 cpd stochastic mask (50% contrast) was always present, and threshold was measured for the contrast of a vertical superimposed D6G bar. In Exp. 3, the orientation of the masker was varied and its spatial frequency fixed (4 cpd), while in Exp. 4 the spatial period of the masker was varied and its orientation vertical. Our aim was to investigate very dissimilar tasks, in particular with respect to the decision strategy used by the observer. Using the dual-task paradigm, we found mixed attentional effects on psychophysical thresholds, including the appearance of a more pronounced contrast discrimination "dipper" in Exp. 1, substantial improvement of orientation thresholds in Exp. 2, and reduced contrast elevations due to masking in Exps. 3-4 (also see [7, 8]). 3 MODEL The model consists of three successive stages [4, 5]. In the first stage, a bank of Gabor-like linear filters analyzes a fixed location of the visual scene. Here, a single-scale model composed of 12 pairs of filters in quadrature phase, tuned for orientations θ ∈ Θ evenly spanning 180°, was sufficient to account for the data (although a multiscale model may account for a wider range of psychophysical thresholds). The linear filters take values between 0.0 and 100.0, which are then multiplied by a gain factor A (one of the ten free parameters of the model), and to which a small background activity ε is added.
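The linear first stage can be sketched as follows. The Gabor parameters and the quadrature-norm energy readout are illustrative assumptions (the text specifies only 12 quadrature pairs spanning 180°, the gain A and the background ε; the gain and background defaults below are taken from the fitted values reported in Table 1):

```python
import numpy as np

def gabor_pair(half, period_px, theta, sigma):
    """Even/odd (quadrature-phase) Gabor filters at orientation theta,
    on a (2*half+1) x (2*half+1) pixel grid."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)    # axis across the stripes
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return (env * np.cos(2 * np.pi * u / period_px),
            env * np.sin(2 * np.pi * u / period_px))

def linear_responses(patch, n_orient=12, period_px=8, sigma=4.0,
                     gain=1.7, eps=1.13):
    """E_theta for a bank of n_orient quadrature pairs evenly spanning 180
    degrees: phase-invariant energy via the quadrature norm, then the gain
    A and the additive background epsilon of the model's first stage."""
    half = patch.shape[0] // 2
    thetas = np.pi * np.arange(n_orient) / n_orient
    E = []
    for th in thetas:
        even, odd = gabor_pair(half, period_px, th, sigma)
        E.append(np.hypot((patch * even).sum(), (patch * odd).sum()))
    return gain * np.array(E) + eps
```

A grating patch whose orientation matches one of the channels then produces the largest response in that channel, with the background ε keeping all channels weakly active.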
In the second stage, filters non-linearly interact as follows: (1) Each unit receives non-linear self-excitation, and (2) each unit receives non-linear divisive inhibition from a pool of similarly-tuned units: With E_θ being the linear response from a unit tuned for orientation θ, the pooled response R_θ is given by:

R_θ = (E_θ)^γ / (S^δ + Σ_{θ'∈Θ} W_θ(θ') (E_θ')^δ) + η

where W_θ(θ') = exp(-(θ' - θ)^2 / (2 Σ_θ^2)) is a Gaussian weighting function centered around θ, and η is a positive constant to account for background activity in the pooling stage. This stage is inspired by Heeger's model of gain control in cat V1 [3, 4]. Our formulation, in which none of the parameters is given a particular value, however allows for multiple outcomes, to be determined by fitting the model to our psychophysical data: A sigmoidal (S > 0, γ > δ) as well as a simple power-law (S = 0) or even linear (γ = 1, δ = 0) contrast response characteristic could emerge, the responses could be saturating (γ = δ) or not (γ ≠ δ), and the inhibitory pool size (Σ_θ) could be broad or narrow. Because striate neurons are noisy, physiological noise is assumed in the model at the outputs of the second stage. The noise level is chosen close to what is typically observed in cortical pyramidal cells, and modeled by Gaussian noise with variance equal to the mean taken to some power α determined by fitting. Because the decision stage, which quantitatively relates activity in the population of pooled noisy units to behavioral discrimination performance, is not fully characterized in humans, we are not in a position to model it in any detail. Instead, we trained our subjects (for 2-3 hours on each task), and assume that they perform close to an "optimal detector". Such an optimal detector may be characterized in a formal manner, using Statistical Estimation Theory [4, 5]. We assume that a brain mechanism exists, which, for a given stimulus presentation, builds an internal estimate of some stimulus attribute ζ (e.g., contrast, orientation, period). The central assumption of our decision stage is that this brain mechanism will perform close to an unbiased efficient statistic T, which is the best possible estimator of ζ
The central assumption of our decision stage is that this brain mechanism will perform close to an unbiased efficient statistic T, which is the best possible estimator of ( Quantitative Modeling of Attentional Modulation 793 given the noisy population response from the second stage. The accuracy (variance) with which T estimates ( can be computed formally, and is the inverse of the Fisher Information with respect to ( [13, 4]. Simply put, this means that, from the first two stages of the model alone, we have a means of computing the best possible estimation performance for (, and consequently, the best possible discrimination performance between two stimuli with parameters (1 and (2 [4, 5]. Such statistically efficient decision stage is implementable as a neural network [13]. This decision stage provides a unified framework for optimal discrimination in any behavioral situation, and eliminates the need for task-dependent assumptions about the strategy used by the observers to perform the task in a near optimal manner. Our model allows for a quantitative prediction of human psychophysical thresholds, based on a crude simulation of the physiology of primary visual cortex (area VI). 4 RESULTS All parameters in the model were automatically adjusted in order to best fit the psychophysical data from all experiments. A multidimensional downhill simplex with simulated annealing overhead was used to minimize the root-mean-square distance between the quantitative predictions of the model and the human data [4]. The best-fit parameters obtained independently for the "fully attended" and "poorly attended" conditions are reported in Table 1. The model's simultaneous fits to our entire dataset are plotted in Figure 1 for both conditions. 
After convergence of the fitting procedure, a measure of how well constrained each parameter was by the data was computed as follows: Each parameter was systematically varied around its best-fit value, in 0.5% steps, and the fitting error was recomputed; the amplitude by which each parameter could be varied before the fitting error increased by more than 10% of its optimum is noted as a standard deviation in Table 1. A lower deviation indicates that the parameter is more strongly constrained by the dataset.

Table 1. Model parameters for both attentional conditions.

Name                                  Symbol   fully attended     poorly attended
Linear gain†                          A        1.7 ± 0.2          8.2 ± 0.9
Activity-independent inhibition†      S        14.1 ± 2.3         101.5 ± 16.6
Excitatory exponent                   γ        3.36 ± 0.02        2.09 ± 0.01
Inhibitory exponent                   δ        2.48 ± 0.02        1.51 ± 0.02
Noise exponent                        α        1.34 ± 0.07        1.39 ± 0.08
Background activity, linear stage     ε        1.13 ± 0.35        1.25 ± 0.60
Background activity, pooling stage    η        0.18 ± 0.05        0.77 ± 0.11
Spatial period tuning width‡          σ_λ      0.85 ± 0.06 oct.   0.85 ± 0.09 oct.
Orientation tuning width‡             σ_θ      26° ± 2.4°         38° ± 5.5°
Orientation pooling width‡            Σ_θ      48° ± 25°          50° ± 26°

† Dynamic range of the linear filters is [ε ... 100.0 × A + ε].
‡ For clarity, FWHM values are given rather than σ values (FWHM = 2σ√(2 ln 2)).

Although no human bias was introduced during the fitting procedure, interestingly, all of the model's internal parameters reached physiologically plausible best-fit values, such as, for example, a slightly supra-Poisson noise level (α ≈ 1.35), ~30° orientation tuning FWHM (full-width at half-maximum), and ~0.85 octave spatial period tuning FWHM. Some of the internal characteristics of the model which more closely relate to the putative underlying physiological mechanisms are shown in Figure 2.

Figure 2: Internals of the model. (a) Transducer function: The response function of individual units to contrast was sigmoidal under full (F) and almost linear under poor (P) attention. (b) Orientation tuning: Native linear orientation tuning was broader under poor (NP) than full (NF) attention, but it was sharpened in both cases by pooling (PP = pooled poor, and PF = pooled full attention). (c) Orientation pooling: There was no difference in orientation pooling width under poor (P) or full (F) attention. Using the poorly attended parameters, except for γ = 2.9 and δ = 2.1 (grey curves), yielded a steep non-linear contrast response, and intermediary tuning (same width as NF).

In Table 1, attention had the following significant effects on the model's parameters: 1) Both pooling exponents (γ, δ) were higher; 2) the tuning width (σ_θ) was narrower; 3) the linear gain (A) and associated activity-independent inhibition (S) were lower; and 4) the background activity of the pooling stage was lower. This yielded increased competition between filters: The network behaved more like a winner-take-all under full attention, and more like a linear network of independent units under poor attention. While the attentional modulation of γ, δ and σ_θ is easy to interpret, its effect on A, S and η is more difficult to understand. Consequently, we conducted a further automatic fit, which, starting from the "poorly attended" parameters, was only allowed to alter γ and δ to fit the "fully attended" data. The motivation for not varying σ_θ was that we observed significant sharpening of the tuning induced by higher exponents γ and δ (Figure 2). Also, slight changes in the difference γ - δ can easily produce large changes in the overall gain of the system, hence compensating for changes in A, S and η.
(We however do not imply here that σ_θ, A, S and η are redundant parameters; there is only a small range around the best-fit point over which γ and δ can compensate for variations in the other parameters without dramatically impairing the quality of fit.) Although the new fit was not as accurate as that obtained with all parameters allowed to vary, it appeared that a simple modification of the pooling exponents well captured the effect of attention (Figure 1). Hence, the "poorly attended" parameters of Table 1 well described the "poorly attended" data, and the same parameters except for γ = 2.9 and δ = 2.1 well described the "fully attended" data. A variety of other simple parameter modifications were also tested, but none except for the pooling exponents (γ, δ) could fully account for the attentional modulation. These modifications include: Changes in gain (obtained by modifying A only, γ only, or δ only), in tuning (σ_θ), in the extent of the inhibitory pool (Σ_θ), and in the noise level (α). A more systematic study, in which all possible parameter subsets are successively examined, is currently in progress in our laboratory. 5 DISCUSSION and CONCLUSION At the basis of our results is the hypothesis that attention might modulate the earlier rather than the later stages of visual processing. We found that a very simple, prototypical, task-independent enhancement of the amount of competition between early visual filters accounts well for the human data. This enhancement resulted from increases in the parameters γ and δ in the model, and was paralleled by an increase in contrast gain and a sharpening in orientation tuning. Although it is not possible from our data to rule out any attentional modulation at later stages, our hypothesis has recently received experimental support that attention indeed modulates early visual processing in humans [2, 14].
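The increased competition can be illustrated with a divisive-normalization pool of the general form described in Section 3. The exact expression used below, R_θ = E_θ^γ / (S^δ + Σ_θ' W_θ(θ') E_θ'^δ), and the input profile are assumptions for illustration rather than the calibrated model, but the two exponent pairs are taken from Table 1:

```python
import numpy as np

def pooled(E, gamma, delta, S=10.0, pool_sigma=20.0):
    """Divisive normalization across a 180-deg orientation bank
    (assumed form: self-excitation over a Gaussian-weighted pool)."""
    th = np.linspace(0.0, 180.0, len(E), endpoint=False)
    d = (th[:, None] - th[None, :] + 90.0) % 180.0 - 90.0
    W = np.exp(-d**2 / (2.0 * pool_sigma**2))   # Gaussian pooling weights
    return E**gamma / (S**delta + W @ E**delta)

# A broad input profile across 12 orientation channels, peaked at channel 4.
E = 30.0 * np.exp(-((np.arange(12) - 4.0) ** 2) / 8.0) + 1.0
low = pooled(E, gamma=2.09, delta=1.51)    # "poorly attended" exponents
high = pooled(E, gamma=3.36, delta=2.48)   # "fully attended" exponents
# Higher exponents suppress off-peak channels more strongly, pushing the
# pool toward winner-take-all behavior.
```

Comparing the ratio of the winning channel to a far-flank channel for the two exponent pairs shows the stronger competitive suppression under the "fully attended" exponents.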
More psychophysical experiments are needed to investigate attentional modulation at later processing stages. For example, it might be possible to study the effect of attention on the decision stage by manipulating attention during experiments involving decision uncertainty. In the absence of such results, we have attempted in our experiments to minimize the possible impact of attention on later stages, by using only simple stimulus patterns devoid of conceptual or emotional meaning, such as to involve as little as possible the more cognitive stages of visual processing. Our finding that attention may increase the amount of competition between early visual filters is accompanied by an enhancement of the gain and sensitivity of the filters, and by a sharpening of their tuning properties. The existence of two such processing states (one, more sensitive and selective inside the focus of attention, and the other, more broadly-tuned and non-specific outside) can be justified by at least two observations: First, the higher level of activity in attended neurons consumes more energy, which may not be desirable over the entire extent of visual cortices. Second, although less efficient for fine discriminations, the broadly-tuned and non-specific state may have greater ability at catching unexpected, non-specific visual events. In this perspective, this state would be desirable as an input to bottom-up visual alerting mechanisms, which monitor the rest of our visual world while we are focusing on a specific task requiring high focal accuracy. Acknowledgements This research was supported by ONR and NSF (Caltech ERC). References
[1] Bonnel AM, Stein JF, Bertucci P. Q J Exp Psychol (A) 1992;44(4):601-26
[2] Gandhi SP, Heeger DJ, Boynton GM. Inv Opht Vis Sci (Proc ARVO'98) 1998;39(4):5194
[3] Heeger DJ. Vis Neurosci 1992;9:181-97
[4] Itti L, Braun J, Lee DK, Koch C. Proc NIPS*97 (in press)
[5] Itti L, Koch C, Braun J.
Inv Opht Vis Sci (Proc ARVO'98) 1998;39(4):2934 [6] Koch C, Ullman S. Hum NeurobioI1985;4:219-27 [7] Lee DK, Koch C, Braun J. Vis Res 1997:37(17):2409-18 [8] Lee DK, Koch C, Itti L, Braun J. Inv Opht Vis Sci (Proc ARVO'98) 1998;39(4):2938 [9] Luck SJ, Chelazzi L, Hillyard SA, Desimone R. J Neurophysio/1997;77{I):24-42 [10] Maunsell JH. Science 1995;270(5237)764-9 [11] Motter BC. J NeurophysioiI993;70(3):909-19 [12] Nakayama K, Mackeben M. Vis Res 1989;29(11):1631-47 [13] Pouget A, Zhang K, Deneve S, Latham PE. Neur Comp 1998;10:373-401 [14] Somers DC, et al. lnv Opht Vis Sci (Proc ARVO'98) 1998;39(4):5192 [15] Sperling G, Melchner MJ. Science 1978;202:315-8 [16] Spitzer H, Desimone R, Moran J. Science 1988;240(4850):338-40 [17] Treue S, Maunsell JH. Nature 1996;382(6591):539-41
Signal Detection in Noisy Weakly-Active Dendrites

Amit Manwani and Christof Koch
{quixote,koch}@klab.caltech.edu
Computation and Neural Systems Program
California Institute of Technology, Pasadena, CA 91125

Abstract

Here we derive measures quantifying the information loss of a synaptic signal due to the presence of neuronal noise sources, as it electrotonically propagates along a weakly-active dendrite. We model the dendrite as an infinite linear cable, with noise sources distributed along its length. The noise sources we consider are thermal noise, channel noise arising from the stochastic nature of voltage-dependent ionic channels (K+ and Na+), and synaptic noise due to spontaneous background activity. We assess the efficacy of information transfer using a signal detection paradigm where the objective is to detect the presence/absence of a presynaptic spike from the post-synaptic membrane voltage. This allows us to analytically assess the role of each of these noise sources in information transfer. For our choice of parameters, we find that synaptic noise is the dominant noise source which limits the maximum length over which information can be reliably transmitted.

1 Introduction

This is a continuation of our efforts (Manwani and Koch, 1998) to understand the information capacity of a neuronal link (in terms of the specific nature of neural "hardware") by a systematic study of information processing at different biophysical stages in a model of a single neuron. Here we investigate how the presence of neuronal noise sources influences the information transmission capabilities of a simplified model of a weakly-active dendrite. The noise sources we include are thermal noise, channel noise arising from the stochastic nature of voltage-dependent channels (K+ and Na+), and synaptic noise due to spontaneous background activity.
We characterize the noise sources using analytical expressions of their current power spectral densities and compare their magnitudes for dendritic parameters reported in the literature (Mainen and Sejnowski, 1998). To assess the role of these noise sources in dendritic integration, we consider a simplified scenario and model the dendrite as a linear, infinite, one-dimensional cable with distributed current noises.

Figure 1: Schematic diagram of a simplified dendritic channel. The dendrite is modeled as a weakly-active 1-D cable with noise sources distributed along its length. Loss of signal fidelity as the signal propagates from a synaptic (input) location y to a measurement (output) location x is studied using a signal detection task. The objective is to optimally detect the presence of the synaptic input I(y, t) (in the form of a unitary synaptic event) on the basis of the noisy voltage waveform V_m(x, t), filtered by the cable's Green's function and corrupted by the noise sources along the cable. The probability of error, P_e, is used to quantify task performance.

When the noise sources are weak, so that the corresponding voltage fluctuations are small, the membrane voltage satisfies a linear stochastic differential equation. Using linear cable theory, we express the power spectral density of the voltage noise in terms of the Green's function of an infinite cable and the current noise spectra. We use these results to quantify the efficacy of information transfer under a "signal detection" paradigm¹, where the objective is to detect the presence/absence of a presynaptic spike (in the form of an epsc) from the post-synaptic membrane voltage along the dendrite. The formalism used in this paper is summarized in Figure 1.
2 Neuronal Noise Sources

In this section we consider some current noise sources present in nerve membranes which distort a synaptic signal as it propagates along a dendrite. An excellent treatment of membrane noise is given in DeFelice (1981) and we refer the reader to it for details. For a linear one-dimensional cable, it is convenient to express quantities in specific length units. Thus, we express all conductances in units of S/μm and current power spectra in units of A²/Hz·μm.

A. Thermal Noise

Thermal noise arises due to the random thermal agitation of electrical charges in a conductor and represents a fundamental lower limit of noise in a system. A conductor of resistance R is equivalent to a noiseless resistor R in series with a voltage noise source V_th(t) of spectral density S_Vth(f) = 2kTR (V²/Hz), or a noiseless resistor R in parallel with a current noise source I_th(t) of spectral density S_Ith(f) = 2kT/R (A²/Hz), where k is the Boltzmann constant and T is the absolute temperature of the conductor². The transverse resistance r_m (units of Ω·μm) of a nerve membrane is due to the combined resistance of the lipid bilayer and the resting conductances of various voltage-gated, ligand-gated and leak channels embedded in the lipid matrix. Thus, the current noise due to r_m has power spectral density

S_Ith(f) = 2kT / r_m.    (1)

¹ For the sake of brevity, we do not discuss the corresponding signal estimation paradigm as in Manwani and Koch (1998).
² Since the power spectra of real signals are even functions of frequency, we choose the double-sided convention for all power spectral densities.

B. Channel Noise

Neuronal membranes contain microscopic voltage-gated and ligand-gated channels which open and close randomly. These random fluctuations in the number of open channels are another source of membrane noise.
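As a rough numerical sketch of the thermal-noise level implied by eq. (1), the snippet below converts the specific membrane resistance and dendritic diameter quoted later in the Figure 3 caption (R_m = 40 kΩcm², d = 0.75 μm) into a per-unit-length resistance and evaluates 2kT/r_m. The helper name and the unit bookkeeping are our own, not from the paper.

```python
import math

def thermal_current_psd(R_m_ohm_cm2, diam_um, T=300.0):
    """Double-sided thermal current noise PSD, S_Ith(f) = 2kT/r_m,
    in A^2/Hz per micron of cable (frequency-independent)."""
    k_B = 1.380649e-23                       # Boltzmann constant (J/K)
    # specific resistance (Ohm cm^2) -> per-unit-length resistance (Ohm um):
    # divide by the membrane circumference pi*d; 1 cm^2 = 1e8 um^2.
    r_m = R_m_ohm_cm2 * 1e8 / (math.pi * diam_um)
    return 2.0 * k_B * T / r_m

# Parameters from the paper's Figure 3 caption.
S_th = thermal_current_psd(40e3, 0.75)
```

The result is of order 10^-33 A²/Hz·μm, consistent with thermal noise being the smallest of the four sources compared in Figure 3.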
We restrict ourselves to voltage-gated K+ and Na+ channels, although the following can be used to characterize noise due to other types of ionic channels as well. In the classical Hodgkin-Huxley formalism (Koch, 1998), a K+ channel consists of four identical two-state sub-units (denoted by n) which can either be open or closed. The K+ channel conducts only when all the sub-units are in their open states. Since the sub-units are identical, the channel can be in one of five states, from the state in which all the sub-units are closed to the open state in which all sub-units are open. Fluctuations in the number of open channels cause a random K+ current I_K of power spectral density (DeFelice, 1981)

S_IK(f) = η_K γ_K² (V_m - E_K)² n_∞⁴ Σ_{i=1}^{4} (4 choose i) (1 - n_∞)^i n_∞^{4-i} · (2θ_n/i) / (1 + 4π²f²(θ_n/i)²),    (2)

where η_K, γ_K and E_K denote the K+ channel density (per unit length), the K+ single-channel conductance and the K+ reversal potential respectively. Here we assume that the membrane voltage has been clamped to a value V_m. n_∞ and θ_n are the steady-state open probability and relaxation time constant of a single K+ sub-unit respectively, and are in general non-linear functions of V_m (Koch, 1998). When V_m is close to the resting potential V_rest (usually between -70 and -65 mV), n_∞ ≪ 1 and one can simplify S_IK(f) as

S_IK(f) ≈ η_K γ_K² (V_rest - E_K)² n_∞⁴ (1 - n_∞)⁴ · (2θ_n/4) / (1 + 4π²f²(θ_n/4)²).    (3)

Similarly, the Hodgkin-Huxley Na+ channel is characterized by three identical activation sub-units (denoted by m) and an inactivation sub-unit (denoted by h). The Na+ channel conducts only when all the m sub-units are open and the h sub-unit is not inactivated. Thus, the Na+ channel can be in one of eight states, from the state corresponding to all m sub-units closed and the h sub-unit inactivated to the open state with all m sub-units open and the h sub-unit not inactivated. m_∞ (resp. h_∞) and θ_m (resp.
θ_h) are the corresponding steady-state open probability and relaxation time constant of a single Na+ m (resp. h) sub-unit. For V_m ≈ V_rest, m_∞ ≪ 1, h_∞ ≈ 1 and

S_INa(f) ≈ η_Na γ_Na² (V_rest - E_Na)² m_∞³ (1 - m_∞)³ h_∞² · (2θ_m/3) / (1 + 4π²f²(θ_m/3)²),    (4)

where η_Na, γ_Na and E_Na denote the Na+ channel density, the Na+ single-channel conductance and the sodium reversal potential respectively.

C. Synaptic Noise

In addition to voltage-gated ionic channels, dendrites are also awash in ligand-gated synaptic receptors. We restrict our attention to fast voltage-independent (AMPA-like) synapses. A commonly used function to represent the postsynaptic conductance change in response to a presynaptic spike is the alpha function (Koch, 1998)

g_α(t) = g_peak e (t/t_peak) e^{-t/t_peak},  0 ≤ t < ∞,    (5)

where g_peak denotes the peak conductance change and t_peak the time-to-peak of the conductance change. We shall assume that for a spike train s(t) = Σ_j δ(t - t_j), the postsynaptic conductance is given by g_syn(t) = Σ_j g_α(t - t_j). This ignores inter-spike interaction and synaptic saturation. The synaptic current is given by i_syn(t) = g_syn(t)(V_m - E_syn), where E_syn is the synaptic reversal potential.

Figure 2: Schematic diagram of the equivalent electrical circuit of a linear dendritic cable. The dendrite is modeled as an infinite ladder network. r_i (units of Ω/μm) denotes the longitudinal cytoplasmic resistance; c_m (units of F/μm) and g_L (units of S/μm) denote the transverse membrane capacitance and conductance (due to leak channels with reversal potential E_L) respectively. The membrane also contains active channels (K+, Na+) with conductances and reversal potentials denoted by (g_K, g_Na) and (E_K, E_Na) respectively, and fast voltage-independent (AMPA-like) synapses with conductance g_syn and reversal potential E_syn.
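The sum-of-Lorentzians structure of the K+ channel spectrum (eq. 2) and its low-n_∞ approximation (eq. 3) can be sketched in a few lines; the function names and the parameter values used in testing are ours and only illustrative.

```python
import math

def S_IK(f, eta_K, gamma_K, Vm, E_K, n_inf, theta_n):
    """K+ channel current noise PSD (eq. 2): a sum of four Lorentzians
    with relaxation times theta_n / i, i = 1..4."""
    pref = eta_K * gamma_K**2 * (Vm - E_K)**2 * n_inf**4
    s = 0.0
    for i in range(1, 5):
        lorentzian = (2 * theta_n / i) / (1 + 4 * math.pi**2 * f**2 * (theta_n / i)**2)
        s += math.comb(4, i) * (1 - n_inf)**i * n_inf**(4 - i) * lorentzian
    return pref * s

def S_IK_approx(f, eta_K, gamma_K, Vm, E_K, n_inf, theta_n):
    """Low-n_inf approximation (eq. 3): only the i = 4 term survives."""
    pref = eta_K * gamma_K**2 * (Vm - E_K)**2 * n_inf**4 * (1 - n_inf)**4
    return pref * (2 * theta_n / 4) / (1 + 4 * math.pi**2 * f**2 * (theta_n / 4)**2)
```

For n_∞ ≪ 1 the i = 4 term dominates the sum, which is why eq. (3) retains only that single Lorentzian.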
If the spike train can be modeled as a homogeneous Poisson process with mean firing rate λ_n, the power spectrum of i_syn(t) can be computed using Campbell's theorem (Papoulis, 1991),

S_Isyn(f) = η_syn λ_n (V_m - E_syn)² |G_α(f)|²,    (6)

where η_syn denotes the synaptic density and G_α(f) = ∫_0^∞ g_α(t) exp(-j2πft) dt is the Fourier transform of g_α(t). Substituting for g_α(t) gives

S_Isyn(f) = η_syn λ_n (e g_peak t_peak (V_m - E_syn))² / (1 + 4π²f²t_peak²)².    (7)

3 Noise in Linear Cables

The linear infinite cable corresponding to a dendrite is modeled by the ladder network shown in Figure 2. The membrane voltage V_m(x, t) satisfies the differential equation (Tuckwell, 1988)

(1/r_i) ∂²V_m/∂x² = c_m ∂V_m/∂t + g_K(V_m - E_K) + g_Na(V_m - E_Na) + g_syn(V_m - E_syn) + g_L(V_m - E_L).    (8)

Since the ionic conductances are random and nonlinearly related to V_m, eq. 8 is a nonlinear stochastic differential equation. If the voltage fluctuations (denoted by V) around the resting potential V_rest are small, one can express the conductances as small deviations (denoted by g̃) from their corresponding resting values and transform eq. 8 to

-λ² ∂²V(x, t)/∂x² + τ ∂V(x, t)/∂t + (1 + δ) V(x, t) = I_n/G,    (9)

where λ² = 1/(r_i G) and τ = c_m/G denote the length and time constants of the membrane respectively. G is the passive membrane conductance and is given by the sum of the resting values of all the conductances. δ = (g̃_K + g̃_Na + g̃_syn)/G represents the random change in the membrane conductance due to synaptic and channel stochasticity; I_n = g̃_K(E_K - V_rest) + g̃_Na(E_Na - V_rest) + g̃_syn(E_syn - V_rest) + I_th denotes the total effective current noise due to the different noise sources. In order to derive analytical closed-form solutions to eq. 9, we further assume that δ ≪ 1³, which reduces it to the familiar one-dimensional cable equation with noisy current input (Tuckwell, 1988). For resting initial conditions (no charge stored on the membrane at t = 0), V is linearly related to I_n and can be obtained by convolving I_n with the Green's function g(x, y, t) of the cable for the appropriate boundary conditions. It has been shown that V(x, t) is an asymptotically wide-sense stationary process (Tuckwell and Walsh, 1983) and its power spectrum S_V(x, f) can be expressed in terms of the power spectrum of I_n, S_n(f), as

S_V(x, f) = (S_n(f)/G²) ∫_{-∞}^{∞} |Q(x, x′, f)|² dx′,    (10)

where Q(x, x′, f) is the Fourier transform of g(x, x′, t). For an infinite cable,

g(X, X′, T) = (e^{-T}/√(4πT)) e^{-(X-X′)²/(4T)},  -∞ < X, X′ < ∞,  0 ≤ T < ∞,    (11)

where X = x/λ, X′ = x′/λ and T = t/τ are the corresponding dimensionless variables. Substituting for g(x, x′, t), we obtain

S_V(f) = S_n(f) sin((1/2) tan⁻¹(2πfτ)) / (2λG² · 2πfτ (1 + (2πfτ)²)^{1/4}).    (12)

Since the noise sources are independent, S_n(f) = S_Ith(f) + S_IK(f) + S_INa(f) + S_Isyn(f). Thus, eq. 12 allows us to compute the relative contribution of each of the noise sources to the voltage noise. The current and voltage noise spectra for biophysically relevant parameter values (Mainen and Sejnowski, 1998) are shown in Figure 3.

Figure 3: (a) Comparison of the current spectra S_I(f) of the four noise sources we consider. Synaptic noise is the dominant source of noise and thermal noise the smallest. (b) Voltage noise spectrum of a 1-D infinite cable due to the current noise sources. S_Vth(f) is also shown for comparison. Summary of the parameters used (adopted from Mainen and Sejnowski, 1998): R_m = 40 kΩcm², c_m = 0.75 μF/cm², r_i = 200 Ωcm, d (dend. dia.) = 0.75 μm, η_K = 2.3 μm⁻¹, η_Na = 3 μm⁻¹, η_syn = 0.1 μm⁻¹, E_K = -95 mV, E_Na = 50 mV, E_syn = 0 mV, E_L = V_rest = -70 mV, γ_K = γ_Na = 20 pS.

³ Using self-consistency, we find the assumption to be satisfied in our case.
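The alpha-function synapse (eq. 5), the Campbell's-theorem spectrum (eq. 7), and the cable voltage spectrum (eq. 12) can be sketched numerically. All function names and the test parameters are ours; in particular, the f → 0 limit of eq. 12 used below is our own small-argument expansion, sin(tan⁻¹(w)/2) ≈ w/2.

```python
import math

def g_alpha(t, g_peak, t_peak):
    """Alpha-function conductance (eq. 5); peaks at g_peak when t = t_peak."""
    return g_peak * math.e * (t / t_peak) * math.exp(-t / t_peak) if t >= 0 else 0.0

def S_Isyn(f, eta_syn, lam_n, Vm, E_syn, g_peak, t_peak):
    """Synaptic current noise PSD via Campbell's theorem (eq. 7)."""
    num = (math.e * g_peak * t_peak * (Vm - E_syn))**2
    den = (1 + 4 * math.pi**2 * f**2 * t_peak**2)**2
    return eta_syn * lam_n * num / den

def S_V(f, S_n, lam, G, tau):
    """Voltage noise PSD of an infinite 1-D cable (eq. 12); the f -> 0
    limit S_n / (4 lam G^2) follows from sin(atan(w)/2) ~ w/2."""
    w = 2 * math.pi * f * tau
    if w == 0.0:
        return S_n / (4 * lam * G**2)
    return S_n * math.sin(0.5 * math.atan(w)) / (2 * lam * G**2 * w * (1 + w**2) ** 0.25)
```

Note that S_V falls off more slowly than the input Lorentzians: the cable acts as a spatial low-pass filter whose effective bandwidth shrinks with distance.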
In general, it needs to be verified on a case-by-case basis.

4 Signal Detection

The framework and notation used here are identical to those in Manwani and Koch (1998), and so we refer the reader to it for details. The goal in the signal detection task is to optimally decide between the two hypotheses

H_0 : y(t) = n(t),  0 ≤ t ≤ T   (Noise)
H_1 : y(t) = g(t) ∗ s(t) + n(t),  0 ≤ t ≤ T   (Signal + Noise)    (13)

where n(t), g(t) and s(t) denote the dendritic voltage noise, the Green's function of the cable (a function of the distance between the input and measurement locations) and the epsc waveform (due to a presynaptic spike) respectively. The decision strategy which minimizes the probability of error P_e = P_0 P_f + P_1 P_m, where P_0 and P_1 = 1 - P_0 are the prior probabilities of H_0 and H_1 respectively, is the likelihood ratio test

Λ(y) ≷ Λ_0,    (14)

where Λ(y) = P[y|H_1]/P[y|H_0] and Λ_0 = P_0/(1 - P_0). P_f and P_m denote the false alarm and miss probabilities respectively. Since n(t) arises due to the effect of several independent noise sources, by invoking the Central Limit theorem we can assume that n(t) is Gaussian, for which eq. 14 reduces to r ≷ η, where r = ∫_0^∞ y(t) h_d(-t) dt is a correlation between y(t) and the matched filter h_d(t), given in the Fourier domain as H_d(f) = e^{-j2πfT} G*(f) S*(f)/S_n(f). G(f) and S(f) are the Fourier transforms of g(t) and s(t) respectively and S_n(f) is the noise power spectrum. The conditional means and variances of the Gaussian variable r under H_0 and H_1 are μ_0 = 0, μ_1 = ∫_{-∞}^{∞} |G(f)S(f)|²/S_n(f) df and σ_0² = σ_1² = σ² = μ_1 respectively. The error probabilities are given by P_f = ∫_η^∞ P[r|H_0] dr and P_m = ∫_{-∞}^η P[r|H_1] dr. The optimal value of the threshold η depends on σ and the prior probability P_0. For equiprobable hypotheses (P_0 = P_1 = 0.5), the optimal η = (μ_0 + μ_1)/2 = σ²/2 and P_e = 0.5 Erfc[σ/(2√2)]. One can also regard the overall decision system as an effective binary channel.
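For equiprobable hypotheses the detector's performance collapses to the single number σ, and the resulting binary channel can be scored in bits (as done in the next paragraph). A minimal sketch, with helper names and toy spectra of our own:

```python
import math

def detection_sigma(GS_mag2, S_n, df):
    """sigma^2 = integral |G(f) S(f)|^2 / S_n(f) df, here approximated as
    a Riemann sum over sampled double-sided spectra."""
    return math.sqrt(sum(g2 / sn for g2, sn in zip(GS_mag2, S_n)) * df)

def error_probability(sigma):
    """P_e = 0.5 * Erfc[sigma / (2 sqrt 2)] for equiprobable hypotheses."""
    return 0.5 * math.erfc(sigma / (2.0 * math.sqrt(2.0)))

def H2(p):
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def mutual_information(P0, Pf, Pm):
    """I(M;D) of the effective binary channel (P0: prior of the signal
    hypothesis; Pf, Pm: false-alarm and miss probabilities). For equal
    priors and Pf = Pm = Pe this equals 1 - H2(Pe)."""
    return (H2(P0 * (1.0 - Pm) + (1.0 - P0) * Pf)
            - P0 * H2(Pm) - (1.0 - P0) * H2(Pf))
```

As the cable's Green's function attenuates |G(f)S(f)|² with distance, σ shrinks, P_e climbs toward chance (0.5), and I(M;D) drops toward zero, which is the behavior plotted in Figure 4.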
Let M and D be binary variables which take values in the set {H_0, H_1} and denote the input and output of the dendritic channel respectively. Thus, the system performance can equivalently be assessed by computing the mutual information between M and D,

I(M; D) = H(P_0(1 - P_m) + (1 - P_0)P_f) - P_0 H(P_m) - (1 - P_0) H(P_f)

(Cover and Thomas, 1991), where H(x) is the binary entropy function. For equiprobable hypotheses, I(M; D) = 1 - H(P_e) bits. It is clear from the plots of P_e and I(M; D) (Figure 4) as a function of the distance between the synaptic (input) and the measurement (output) locations that an epsc can be detected with almost certainty at short distances, after which there is a rapid decrease in detectability with distance. Thus, we find that membrane noise may limit the maximum length of a dendrite over which information can be transmitted reliably.

5 Conclusions

In this study we have investigated how neuronal noise sources might influence and limit the ability of one-dimensional cable structures to propagate information. When extended to realistic dendritic geometries, this approach can help address such questions as: is the length of the apical dendrite in a neocortical pyramidal cell limited by considerations of signal-to-noise, which synaptic locations on a dendritic tree (if any) are better at transmitting information, what is the functional significance of active dendrites (Yuste and Tank, 1996), and so on. Given the recent interest in dendritic properties, it seems timely to apply an information-theoretic approach to study dendritic integration. In an attempt to experimentally verify
the validity of our results, we are currently engaged in a quantitative comparison using neocortical pyramidal cells (Manwani et al., 1998).

Figure 4: Information loss in signal detection. (a) Probability of error (P_e) and (b) mutual information (I(M;D)) for an infinite cable as a function of distance from the synaptic input location. Almost perfect detection occurs for small distances, but performance degrades steeply over larger distances as the signal-to-noise ratio drops below some threshold. This suggests that dendritic lengths may ultimately be limited by signal-to-noise considerations. Epsc parameters: g_peak = 0.1 nS, t_peak = 1.5 msec and E_syn = 0 mV. N_syn is the number of synchronous synapses which activate in response to a pre-synaptic action potential.

Acknowledgements

This research was supported by NSF, NIMH and the Sloan Center for Theoretical Neuroscience. We thank Idan Segev, Elad Schneidman, M. London, Yosef Yarom and Fabrizio Gabbiani for illuminating discussions.

References

DeFelice, L.J. (1981) Membrane Noise. New York: Plenum Press.
Cover, T.M. and Thomas, J.A. (1991) Elements of Information Theory. New York: Wiley.
Koch, C. (1998) Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press.
Mainen, Z.F. and Sejnowski, T.J. (1998) "Modeling active dendritic processes in pyramidal neurons," In: Methods in Neuronal Modeling: From Ions to Networks, Koch, C. and Segev, I., eds., Cambridge: MIT Press.
Manwani, A. and Koch, C. (1998) "Synaptic transmission: An information-theoretic perspective," In: Kearns, M., Jordan, M. and Solla, S., eds., Advances in Neural Information Processing Systems, Cambridge: MIT Press.
Manwani, A., Segev, I., Yarom, Y. and Koch, C. (1998) "Neuronal noise sources in membrane patches and linear cables," In: Soc. Neurosci. Abstr.
Papoulis, A. (1991) Probability, Random Variables and Stochastic Processes. New York: McGraw-Hill.
Tuckwell, H.C.
(1988) Introduction to Theoretical Neurobiology: I. New York: Cambridge University Press.
Tuckwell, H.C. and Walsh, J.B. (1983) "Random currents through nerve membranes I. Uniform Poisson or white noise current in one-dimensional cables," Biol. Cybern. 49:99-110.
Yuste, R. and Tank, D.W. (1996) "Dendritic integration in mammalian neurons, a century after Cajal,"
Kernel PCA and De-Noising in Feature Spaces

Sebastian Mika, Bernhard Schölkopf, Alex Smola, Klaus-Robert Müller, Matthias Scholz, Gunnar Rätsch
GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany
{mika, bs, smola, klaus, scholz, raetsch}@first.gmd.de

Abstract

Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high-dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real-world data.

1 PCA and Feature Spaces

Principal Component Analysis (PCA) (e.g. [3]) is an orthogonal basis transformation. The new basis is found by diagonalizing the centered covariance matrix of a data set {x_k ∈ R^N | k = 1, ..., ℓ}, defined by C = ⟨(x_i - ⟨x_k⟩)(x_i - ⟨x_k⟩)^T⟩. The coordinates in the Eigenvector basis are called principal components. The size of an Eigenvalue λ corresponding to an Eigenvector v of C equals the amount of variance in the direction of v. Furthermore, the directions of the first n Eigenvectors corresponding to the biggest n Eigenvalues cover as much variance as possible by n orthogonal directions. In many applications they contain the most interesting information: for instance, in data compression, where we project onto the directions with biggest variance to retain as much information as possible, or in de-noising, where we deliberately drop directions with small variance.
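The diagonalization just described can be sketched in a few lines of NumPy (the function name and the toy data in the test are ours):

```python
import numpy as np

def pca(X, n_components):
    """Linear PCA: diagonalize the centered covariance matrix and project
    onto the n leading Eigenvectors (the largest-variance directions)."""
    Xc = X - X.mean(axis=0)                    # center the data
    C = (Xc.T @ Xc) / len(X)                   # covariance matrix
    eigval, eigvec = np.linalg.eigh(C)         # eigh returns ascending eigenvalues
    order = np.argsort(eigval)[::-1][:n_components]
    return Xc @ eigvec[:, order], eigval[order]
```

Dropping the trailing columns of the projection is exactly the de-noising operation discussed above: directions with small eigenvalue (small variance) are discarded.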
Clearly, one cannot assert that linear PCA will always detect all structure in a given data set. By the use of suitable nonlinear features, one can extract more information. Kernel PCA is very well suited to extract interesting nonlinear structures in the data [9]. The purpose of this work is therefore (i) to consider nonlinear de-noising based on kernel PCA and (ii) to clarify the connection between feature space expansions and meaningful patterns in input space. Kernel PCA first maps the data into some feature space F via a (usually nonlinear) function Φ and then performs linear PCA on the mapped data. As the feature space F might be very high-dimensional (e.g. when mapping into the space of all possible d-th order monomials of input space), kernel PCA employs Mercer kernels instead of carrying out the mapping Φ explicitly. A Mercer kernel is a function k(x, y) which for all data sets {x_i} gives rise to a positive matrix K_ij = k(x_i, x_j) [6]. One can show that using k instead of a dot product in input space corresponds to mapping the data with some Φ to a feature space F [1], i.e. k(x, y) = (Φ(x) · Φ(y)). Kernels that have proven useful include Gaussian kernels k(x, y) = exp(-‖x - y‖²/c) and polynomial kernels k(x, y) = (x·y)^d. Clearly, all algorithms that can be formulated in terms of dot products, e.g. Support Vector Machines [1], can be carried out in some feature space F without mapping the data explicitly. All these algorithms construct their solutions as expansions in the potentially infinite-dimensional feature space. The paper is organized as follows: in the next section, we briefly describe the kernel PCA algorithm. In section 3, we present an algorithm for finding approximate pre-images of expansions in feature space. Experimental results on toy and real-world data are given in section 4, followed by a discussion of our findings (section 5).
2 Kernel PCA and Reconstruction

To perform PCA in feature space, we need to find Eigenvalues λ > 0 and Eigenvectors V ∈ F\{0} satisfying λV = CV with C = ⟨Φ(x_k)Φ(x_k)^T⟩.¹ Substituting C into the Eigenvector equation, we note that all solutions V must lie in the span of Φ-images of the training data. This implies that we can consider the equivalent system

λ(Φ(x_k) · V) = (Φ(x_k) · CV)  for all k = 1, ..., ℓ    (1)

and that there exist coefficients α_1, ..., α_ℓ such that

V = Σ_{i=1}^{ℓ} α_i Φ(x_i).    (2)

Substituting C and (2) into (1), and defining an ℓ × ℓ matrix K by K_ij := (Φ(x_i) · Φ(x_j)) = k(x_i, x_j), we arrive at a problem which is cast in terms of dot products: solve

ℓλα = Kα,    (3)

where α = (α_1, ..., α_ℓ)^T (for details see [7]). Normalizing the solutions V^k, i.e. (V^k · V^k) = 1, translates into λ_k(α^k · α^k) = 1. To extract nonlinear principal components for the Φ-image of a test point x, we compute the projection onto the k-th component by β_k := (V^k · Φ(x)) = Σ_{i=1}^{ℓ} α_i^k k(x, x_i). For feature extraction, we thus have to evaluate ℓ kernel functions instead of a dot product in F, which is expensive if F is high-dimensional (or, as for Gaussian kernels, infinite-dimensional). To reconstruct the Φ-image of a vector x from its projections β_k onto the first n principal components in F (assuming that the Eigenvectors are ordered by decreasing Eigenvalue size), we define a projection operator P_n by

P_n Φ(x) = Σ_{k=1}^{n} β_k V^k.    (4)

If n is large enough to take into account all directions belonging to Eigenvectors with nonzero Eigenvalue, we have P_n Φ(x_i) = Φ(x_i). Otherwise, (kernel) PCA still satisfies (i) that the overall squared reconstruction error Σ_i ‖P_n Φ(x_i) - Φ(x_i)‖² is minimal and (ii) the retained variance is maximal among all projections onto orthogonal directions in F. In common applications, however, we are interested in a reconstruction in input space rather than in F. The present work attempts to achieve this by computing a vector z satisfying Φ(z) = P_n Φ(x).
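Eqs. (1)-(4) translate into a short NumPy sketch. The function names are our own, and for simplicity the data are assumed centered in feature space (as in footnote 1); each unit eigenvector of K is rescaled so that the corresponding V^k has unit length in F, i.e. α^T K α = 1.

```python
import numpy as np

def kernel_pca(X, kernel, n_components):
    """Solve eq. (3), K alpha = (l lambda) alpha, for the leading components
    and rescale the eigenvectors so that (V^k . V^k) = alpha^T K alpha = 1."""
    K = np.array([[kernel(x, y) for y in X] for x in X])
    eigval, eigvec = np.linalg.eigh(K)                 # ascending order
    order = np.argsort(eigval)[::-1][:n_components]
    alphas = eigvec[:, order] / np.sqrt(eigval[order])
    return K, alphas

def project(x, X, kernel, alphas):
    """Nonlinear principal components beta_k = sum_i alpha_i^k k(x, x_i)."""
    kx = np.array([kernel(x, xi) for xi in X])
    return kx @ alphas
```

With the linear kernel k(x, y) = x·y and input-space-centered data this reduces to ordinary PCA, which makes it easy to check that the extracted components are mutually orthogonal and ordered by variance.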
The hope is that for the kernel used, such a z will be a good approximation of x in input space. However, (i) such a z will not always exist and (ii) if it exists, it need not be unique.² As an example for (i), we consider a possible representation of F. One can show [7] that Φ can be thought of as a map Φ(x) = k(x, ·) into a Hilbert space H_k of functions Σ_i a_i k(x_i, ·) with a dot product satisfying (k(x, ·) · k(y, ·)) = k(x, y). Then H_k is called a reproducing kernel Hilbert space (e.g. [6]). Now, for a Gaussian kernel, H_k contains all linear superpositions of Gaussian bumps on R^N (plus limit points), whereas by definition of Φ only single bumps k(x, ·) have pre-images under Φ. When the vector P_n Φ(x) has no pre-image z, we try to approximate it by minimizing

ρ(z) = ‖Φ(z) - P_n Φ(x)‖².    (5)

This is a special case of the reduced set method [2]. Replacing terms independent of z by Ω, we obtain

ρ(z) = ‖Φ(z)‖² - 2(Φ(z) · P_n Φ(x)) + Ω.    (6)

Substituting (4) and (2) into (6), we arrive at an expression which is written in terms of dot products. Consequently, we can introduce a kernel to obtain a formula for ρ (and thus ∇_z ρ) which does not rely on carrying out Φ explicitly:

ρ(z) = k(z, z) - 2 Σ_{k=1}^{n} β_k Σ_{i=1}^{ℓ} α_i^k k(z, x_i) + Ω.    (7)

3 Pre-Images for Gaussian Kernels

To optimize (7) we employed standard gradient descent methods. If we restrict our attention to kernels of the form k(x, y) = k(‖x - y‖²) (and thus satisfying k(x, x) ≡ const. for all x), an optimal z can be determined as follows (cf. [8]): we deduce from (6) that we have to maximize

ρ(z) = (Φ(z) · P_n Φ(x)) + Ω′ = Σ_{i=1}^{ℓ} γ_i k(z, x_i) + Ω′,    (8)

where we set γ_i = Σ_{k=1}^{n} β_k α_i^k (for some Ω′ independent of z). For an extremum, the gradient with respect to z has to vanish: ∇_z ρ(z) = Σ_{i=1}^{ℓ} γ_i k′(‖z - x_i‖²)(z - x_i) = 0.

¹ For simplicity, we assume that the mapped data are centered in F. Otherwise, we have to go through the same algebra using Φ̃(x) := Φ(x) - ⟨Φ(x_i)⟩.
This leads to a necessary condition for the extremum: z = Σ_i δ_i x_i / Σ_j δ_j, with δ_i = γ_i k′(‖z - x_i‖²). For a Gaussian kernel k(x, y) = exp(-‖x - y‖²/c) we get

z = Σ_{i=1}^{ℓ} γ_i exp(-‖z - x_i‖²/c) x_i / Σ_{i=1}^{ℓ} γ_i exp(-‖z - x_i‖²/c).    (9)

We note that the denominator equals (Φ(z) · P_n Φ(x)) (cf. (8)). Making the assumption that P_n Φ(x) ≠ 0, we have (Φ(x) · P_n Φ(x)) = (P_n Φ(x) · P_n Φ(x)) > 0. As k is smooth, we conclude that there exists a neighborhood of the extremum of (8) in which the denominator of (9) is ≠ 0. Thus we can devise an iteration scheme for z by

z_{t+1} = Σ_{i=1}^{ℓ} γ_i exp(-‖z_t - x_i‖²/c) x_i / Σ_{i=1}^{ℓ} γ_i exp(-‖z_t - x_i‖²/c).    (10)

Numerical instabilities related to (Φ(z) · P_n Φ(x)) being small can be dealt with by restarting the iteration with a different starting value. Furthermore, we note that any fixed point of (10) will be a linear combination of the kernel PCA training data x_i. If we regard (10) in the context of clustering, we see that it resembles an iteration step for the estimation of

² If the kernel allows reconstruction of the dot product in input space, and under the assumption that a pre-image exists, it is possible to construct it explicitly (cf. [7]). But clearly, these conditions do not hold true in general.
They were performed using (10) and Gaussian kernels of the form k(x, y) = exp( -(llx - YI12)/(nc)) where n equals the dimension of input space. We mainly focused on the application of de-noising, which differs from reconstruction by the fact that we are allowed to make use of the original test data as starting points in the iteration. Toy examples: In the first experiment (table 1), we generated a data set from eleven Gaussians in RIO with zero mean and variance u2 in each component, by selecting from each source 100 points as a training set and 33 points for a test set (centers of the Gaussians randomly chosen in [-1, 1]10). Then we applied kernel peA to the training set and computed the projections 13k of the points in the test set. With these, we carried out de-noising, yielding an approximate pre-image in RIO for each test point. This procedure was repeated for different numbers of components in reconstruction, and for different values of u. For the kernel, we used c = 2u2 • We compared the results provided by our algorithm to those of linear peA via the mean squared distance of an de-noised test points to their corresponding center. Table 1 shows the ratio of these values; here and below, ratios larger than one indicate that kernel peA performed better than linear peA. For almost every choice of nand u, kernel PeA did better. Note that using alllO components, linear peA is just a basis transformation and hence cannot de-noise. The extreme superiority of kernel peA for small u is due to the fact that all test points are in this case located close to the eleven spots in input space, and linear PeA has to cover them with less than ten directions. Kernel PeA moves each point to the correct source even when using only a sman number of components. 
    sigma \ n      1        2        3       4       5       6       7       8       9
    0.05       2058.42  1238.36  846.14  565.41  309.64  170.36  125.97  104.40   92.23
    0.1          10.22    31.32    21.51   29.24   27.66   23.53   29.64   40.07   63.41
    0.2           0.99     1.12     1.18    1.50    2.11    2.73    3.72    5.09    6.32
    0.4           1.07     1.26     1.44    1.64    1.91    2.08    2.22    2.34    2.47
    0.8           1.23     1.39     1.54    1.70    1.80    1.96    2.10    2.25    2.39

Table 1: De-noising Gaussians in $R^{10}$ (see text). Performance ratios larger than one indicate how much better kernel PCA did, compared to linear PCA, for different choices of the Gaussians' standard deviation $\sigma$, and different numbers of components used in reconstruction.

To get some intuitive understanding in a low-dimensional case, Figure 1 depicts the results of de-noising a half circle and a square in the plane, using kernel PCA, a nonlinear autoencoder, principal curves, and linear PCA. The principal curves algorithm [4] iteratively estimates a curve capturing the structure of the data. The data are projected to the closest point on a curve which the algorithm tries to construct such that each point is the average of all data points projecting onto it. It can be shown that the only straight lines satisfying the latter are principal components, so principal curves are a generalization of the latter. The algorithm uses a smoothing parameter which is annealed during the iteration. In the nonlinear autoencoder algorithm, a 'bottleneck' 5-layer network is trained to reproduce the input values as outputs (i.e. it is used in autoassociative mode). The hidden unit activations in the third layer form a lower-dimensional representation of the data, closely related to PCA (see for instance [3]).

540 S. Mika et al.

Training is done by conjugate gradient descent. In all algorithms, parameter values were selected such that the best possible de-noising result was obtained. The figure shows that on the closed square problem, kernel PCA does (subjectively) best, followed by principal curves and the nonlinear autoencoder; linear PCA fails completely.
However, note that all algorithms except for kernel PCA actually provide an explicit one-dimensional parameterization of the data, whereas kernel PCA only provides us with a means of mapping points to their de-noised versions (in this case, we used four kernel PCA features, and hence obtain a four-dimensional parameterization).

Figure 1: De-noising in 2-d (see text); panels: kernel PCA, nonlinear autoencoder, principal curves, linear PCA. Depicted are the data set (small points) and its de-noised version (big points, joining up to solid lines). For linear PCA, we used one component for reconstruction, as using two components, reconstruction is perfect and thus does not de-noise. Note that all algorithms except for our approach have problems in capturing the circular structure in the bottom example.

USPS example: To test our approach on real-world data, we also applied the algorithm to the USPS database of 256-dimensional handwritten digits. For each of the ten digits, we randomly chose 300 examples from the training set and 50 examples from the test set. We used (10) and Gaussian kernels with $c = 0.50$, equaling twice the average of the data's variance in each dimension. In Figure 2, we give two possible depictions of the eigenvectors found by kernel PCA, compared to those found by linear PCA for the USPS set.

Figure 2: Visualization of eigenvectors (see text). Depicted are the $2^0, \ldots, 2^8$-th eigenvectors (from left to right). First row: linear PCA; second and third rows: different visualizations for kernel PCA.

The second row shows the approximate pre-images of the eigenvectors $V_k$, $k = 2^0, \ldots, 2^8$, found by our algorithm. In the third row each image is computed as follows: pixel $i$ is the projection of the $\Phi$-image of the $i$-th canonical basis vector in input space onto the corresponding eigenvector in feature space (upper left $\Phi(e_1) \cdot V_k$, lower right $\Phi(e_{256}) \cdot V_k$).
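The third-row visualization is easy to compute once the eigenvector expansion coefficients are available: writing $V_k = \sum_j \alpha_{jk}\Phi(x_j)$ (as in [9]), pixel $i$ equals $\sum_j \alpha_{jk}\, k(e_i, x_j)$. A sketch under our naming:

```python
import numpy as np

def gaussian_kernel(A, B, c=0.5):
    """k(a, b) = exp(-||a - b||^2 / c) for all pairs of rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / c)

def eigenvector_image(alpha_k, X_train, kernel):
    """Pixel i of the visualization: Phi(e_i) . V_k = sum_j alpha_jk k(e_i, x_j),
    where the e_i are the canonical basis vectors of input space."""
    E = np.eye(X_train.shape[1])
    return kernel(E, X_train) @ alpha_k
```

With a linear kernel $k(a, b) = a \cdot b$ this reduces to the corresponding linear PCA eigenvector, which is the sense in which these images generalize the first row.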
In the linear case, both methods would simply yield the eigenvectors of linear PCA depicted in the first row; in this sense, they may be considered as generalized eigenvectors in input space. We see that the first eigenvectors are almost identical (except for signs). But we also see that eigenvectors in linear PCA start to concentrate on high-frequency structures already at smaller eigenvalue size. To understand this, note that in linear PCA we only have a maximum number of 256 eigenvectors, contrary to kernel PCA, which gives us the number of training examples (here 3000) possible eigenvectors.

Figure 3: Reconstruction of USPS data. Depicted are the reconstructions of the first digit in the test set (original in last column) from the first $n = 1, \ldots, 20$ components for linear PCA (first row) and kernel PCA (second row). The numbers in between denote the fraction of squared distance measured towards the original example. For a small number of components both algorithms do nearly the same. For more components, we see that linear PCA yields a result resembling the original digit, whereas kernel PCA gives a result resembling a more prototypical 'three'.

This also explains some of the results we found when working with the USPS set. In these experiments, linear and kernel PCA were trained with the original data. Then we added (i) additive Gaussian noise with zero mean and standard deviation $\sigma = 0.5$, or (ii) 'speckle' noise with probability $p = 0.4$ (i.e. each pixel flips to black or white with probability $p/2$) to the test set. For the noisy test sets we computed the projections onto the first $n$ linear and nonlinear components, and carried out reconstruction for each case.
The results were compared by taking the mean squared distance of each reconstructed digit of the noisy test set to its original counterpart. As a third experiment we did the same for the original test set (hence doing reconstruction, not de-noising). In the latter case, where the task is to reconstruct a given example as exactly as possible, linear PCA did better, at least when using more than about 10 components (Figure 3). This is due to the fact that linear PCA starts earlier to account for fine structures, but at the same time it starts to reconstruct noise, as we will see in Figure 4. Kernel PCA, on the other hand, yields recognizable results even for a small number of components, representing a prototype of the desired example. This is one reason why our approach did better than linear PCA in the de-noising example (Figure 4). Taking the mean squared distance measured over the whole test set for the optimal number of components in linear and kernel PCA, our approach did better by a factor of 1.6 for the Gaussian noise, and 1.2 times better for the 'speckle' noise (the optimal numbers of components were 32 in linear PCA, and 512 and 256 in kernel PCA, respectively). Taking identical numbers of components in both algorithms, kernel PCA becomes up to 8 (!) times better than linear PCA. However, note that kernel PCA comes with a higher computational complexity.

5 Discussion

We have studied the problem of finding approximate pre-images of vectors in feature space, and proposed an algorithm to solve it. The algorithm can be applied to both reconstruction and de-noising. In the former case, results were comparable to linear PCA, while in the latter case, we obtained significantly better results. Our interpretation of this finding is as follows. Linear PCA can extract at most $N$ components, where $N$ is the dimensionality of the data. Being a basis transform, all $N$ components together fully describe the data.
If the data are noisy, this implies that a certain fraction of the components will be devoted to the extraction of noise. Kernel PCA, on the other hand, allows the extraction of up to $\ell$ features, where $\ell$ is the number of training examples. Accordingly, kernel PCA can provide a larger number of features carrying information about the structure in the data (in our experiments, we had $\ell > N$). In addition, if the structure to be extracted is nonlinear, then linear PCA must necessarily fail, as we have illustrated with toy examples. These methods, along with depictions of pre-images of vectors in feature space, provide some understanding of kernel methods which have recently attracted increasing attention. Open questions include (i) what kind of results kernels other than Gaussians will provide, (ii) whether there is a more efficient way to solve either (6) or (8), and (iii) the comparison (and connection) to alternative nonlinear de-noising methods (cf. [5]).

Figure 4: De-noising of USPS data (see text). The left half shows: top: the first occurrence of each digit in the test set; second row: the upper digit with additive Gaussian noise ($\sigma = 0.5$); following five rows: the reconstruction for linear PCA using $n = 1, 4, 16, 64, 256$ components; and, last five rows: the results of our approach using the same numbers of components. The right half shows the same but for 'speckle' noise with probability $p = 0.4$.

References

[1] B. Boser, I. Guyon, and V.N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proc. COLT, pages 144-152, Pittsburgh, 1992. ACM Press.
[2] C.J.C. Burges. Simplified support vector decision rules. In L. Saitta, editor, Proceedings, 13th ICML, pages 71-77, San Mateo, CA, 1996.
[3] K.I. Diamantaras and S.Y. Kung. Principal Component Neural Networks. Wiley, New York, 1996.
[4] T. Hastie and W. Stuetzle. Principal curves. JASA, 84:502-516, 1989.
[5] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397-3415, December 1993.
[6] S. Saitoh. Theory of Reproducing Kernels and its Applications. Longman Scientific & Technical, Harlow, England, 1988.
[7] B. Schölkopf. Support Vector Learning. Oldenbourg Verlag, Munich, 1997.
[8] B. Schölkopf, P. Knirsch, A. Smola, and C. Burges. Fast approximation of support vector kernel expansions, and an interpretation of clustering as approximation in feature spaces. In P. Levi et al., editors, DAGM'98, pages 124-132, Berlin, 1998. Springer.
[9] B. Schölkopf, A.J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
Multiple Paired Forward-Inverse Models for Human Motor Learning and Control

Masahiko Haruno*  mharuno@hip.atr.co.jp
Daniel M. Wolpert†  wolpert@hera.ucl.ac.uk
Mitsuo Kawato*°  kawato@hip.atr.co.jp

* ATR Human Information Processing Research Laboratories, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan.
† Sobell Department of Neurophysiology, Institute of Neurology, Queen Square, London WC1N 3BG, United Kingdom.
° Dynamic Brain Project, ERATO, JST, Kyoto, Japan.

Abstract

Humans demonstrate a remarkable ability to generate accurate and appropriate motor behavior under many different and often uncertain environmental conditions. This paper describes a new modular approach to human motor learning and control, based on multiple pairs of inverse (controller) and forward (predictor) models. This architecture simultaneously learns the multiple inverse models necessary for control as well as how to select the inverse models appropriate for a given environment. Simulations of object manipulation demonstrate the ability to learn multiple objects, appropriate generalization to novel objects, and the inappropriate activation of motor programs based on visual cues, followed by on-line correction, seen in the "size-weight illusion".

1 Introduction

Given the multitude of contexts within which we must act, there are two qualitatively distinct strategies for motor control and learning. The first is to use a single controller, which would need to be highly complex to allow for all possible scenarios. If this controller were unable to encapsulate all the contexts, it would need to adapt every time the context of the movement changed before it could produce appropriate motor commands; this would produce transient and possibly large performance errors. Alternatively, a modular approach can be used in which multiple controllers co-exist, with each controller suitable for one or a small set of contexts.
Such a modular strategy has been introduced in the "mixture of experts" architecture for supervised learning [6]. This architecture comprises a set of expert networks and a gating network which performs classification by combining each expert's output. These networks are trained simultaneously so that the gating network splits the input space into regions in which particular experts can specialize. To apply such a modular strategy to motor control, two problems must be solved. First, how is the set of inverse models (controllers) learned to cover the contexts which might be experienced (the module learning problem)? Second, given a set of inverse models (controllers), how is the correct subset selected for the current context (the module selection problem)? From human psychophysical data we know that such a selection process must be driven by two distinct processes: feedforward switching based on sensory signals such as the perceived size of an object, and switching based on feedback of the outcome of a movement. For example, on picking up an object which appears heavy, feedforward switching may activate controllers responsible for generating a large motor impulse. However, feedback processes, based on contact with the object, can indicate that it is in fact light, thereby switching control to inverse models appropriate for a light object. In the context of motor control and learning, Gomi and Kawato [4] combined the feedback-error-learning [7] approach and the mixture of experts architecture to learn multiple inverse models for different manipulated objects. They used both the visual shapes of the manipulated objects and intrinsic signals, such as somatosensory feedback and efference copy of the motor command, as the inputs to the gating network. Using this architecture it was quite difficult to acquire multiple inverse models.
This difficulty arose because a single gating network needed to divide up, based solely on control error, the large input space into complex regions. Furthermore, Gomi and Kawato's model could not demonstrate feedforward controller selection prior to movement execution. Here we describe a model of human motor control which addresses these problems and can solve the module learning and selection problems in a computationally coherent manner. The basic idea of the model is that the brain contains multiple pairs (modules) of forward (predictor) and inverse (controller) models (MPFIM) [10]. Within each module, the forward and inverse models are tightly coupled both during their acquisition and use, in which the forward models determine the contribution (responsibility) of each inverse model's output to the final motor command. This architecture can simultaneously learn the multiple inverse models necessary for control as well as how to select the inverse models appropriate for a given environment in both a feedforward and a feedback manner.

2 Multiple paired forward-inverse models

Figure 1: A schematic diagram showing how the MPFIM architecture is used to control arm movement while manipulating different objects. (The diagram's signals include the contextual signal, the efference copy of the motor command, the desired and actual arm trajectories, the feedback motor command, and the feedback controller.) Parenthesized numbers in the figure relate to the equations in the text.

2.1 Motor learning and feedback selection

Figure 1 illustrates how the MPFIM architecture can be used to learn and control arm movements when the hand manipulates different objects. Central to the multiple paired forward-inverse model is the notion of dividing up experience using predictive forward models.
We consider $n$ undifferentiated forward models which each receive the current state, $x_t$, and motor command, $u_t$, as input. The output of the $i$th forward model is $\hat{x}_{t+1}^i$, the prediction of the next state at time $t$:

$$\hat{x}_{t+1}^i = \phi(w_t^i, x_t, u_t) \qquad (1)$$

where $w_t^i$ are the parameters of a function approximator $\phi$ (e.g. neural network weights) used to model the forward dynamics. These predicted next states are compared to the actual next state to provide the responsibility signal, which represents the extent to which each forward model presently accounts for the behavior of the system. Based on the prediction errors of the forward models, the responsibility signal $\lambda_t^i$ for the $i$th forward-inverse model pair (module) is calculated by the soft-max function

$$\lambda_t^i = \frac{e^{-\|x_t - \hat{x}_t^i\|^2/2\sigma^2}}{\sum_{j=1}^{n} e^{-\|x_t - \hat{x}_t^j\|^2/2\sigma^2}} \qquad (2)$$

where $x_t$ is the true state of the system and $\sigma$ is a scaling constant. The soft-max transforms the errors using the exponential function and then normalizes these values across the modules, so that the responsibilities lie between 0 and 1 and sum to 1 over the modules. Those forward models which capture the current behavior, and therefore produce small prediction errors, will have high responsibilities.¹ The responsibilities are then used to control the learning of the forward models in a competitive manner, with those models with high responsibilities receiving proportionally more of their error signal than modules with low responsibility. The competitive learning among forward models is similar in spirit to the "annealed competition of experts" architecture [9]:

$$\Delta w_t^i = \epsilon\, \lambda_t^i\, \frac{d\phi_i}{dw^i}\,(x_t - \hat{x}_t^i) \qquad (3)$$

For each forward model there is a paired inverse model whose inputs are the desired next state $x_{t+1}^*$ and the current state $x_t$. The $i$th inverse model produces a motor command $u_t^i$ as output:

$$u_t^i = \psi(\alpha_t^i, x_{t+1}^*, x_t) \qquad (4)$$

where $\alpha_t^i$ are the parameters of some function approximator $\psi$. The total motor command is the summation of the outputs from these inverse models, using the responsibilities
$\lambda_t^i$ to weight the contributions:

$$u_t = \sum_{i=1}^{n} \lambda_t^i u_t^i = \sum_{i=1}^{n} \lambda_t^i\, \psi(\alpha_t^i, x_{t+1}^*, x_t) \qquad (5)$$

Once again, the responsibilities are used to weight the learning of each inverse model. This ensures that inverse models learn only when their paired forward models make accurate predictions. Although for supervised learning the desired control command $u_t^*$ is needed (but is generally not available), we can approximate $(u_t^* - u_t)$ with the feedback motor command signal $u_{fb}$ [7]:

$$\Delta\alpha_t^i = \epsilon\, \lambda_t^i\, \frac{d\psi_i}{d\alpha^i}\, u_{fb} \qquad (6)$$

In summary, the responsibility signals are used in three ways: first to gate the learning of the forward models (Equation 3), second to gate the learning of the inverse models (Equation 6), and third to gate the contribution of the inverse models to the final motor command (Equation 5).

2.2 Multiple responsibility predictors: Feedforward selection

While the system described so far can learn multiple controllers and switch between them based on prediction errors, it cannot provide switching before a motor command has been generated and the consequences of this action evaluated. To allow the system to switch controllers based on contextual information, we introduce a new component, the responsibility predictor (RP). The input to this module, $y_t$, contains contextual sensory information (Figure 1), and each RP produces a prediction $\hat{\lambda}_t^i$ of its own module's responsibility (7). These estimated responsibilities can then be compared to the actual responsibilities $\lambda_t^i$ generated from the responsibility estimator. These error signals are used to update the weights of the RP by supervised learning. Finally, a mechanism is required to combine the responsibility estimates derived from the feedforward RP and from the forward models' prediction errors derived from feedback.

¹Because selecting modules can be regarded as a hidden state estimation problem, an alternative way to determine appropriate forward models is to use the EM algorithm [3].
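The module arithmetic of Section 2 is compact enough to sketch directly; the function approximators are left abstract and all names below are ours. The first function is the soft-max of Equation (2), the second the blending of Equation (5), and the third the prior-times-likelihood combination of the feedforward and feedback responsibility estimates:

```python
import numpy as np

def responsibilities(x_true, x_pred, sigma):
    """Eq. (2): soft-max of (negative, scaled) squared prediction errors.
    x_pred has one row per module; the output sums to 1 over modules."""
    err = np.sum((x_pred - x_true) ** 2, axis=1)
    logits = -err / (2.0 * sigma ** 2)
    logits -= logits.max()                 # guard against underflow
    lam = np.exp(logits)
    return lam / lam.sum()

def blend_commands(lam, u_modules):
    """Eq. (5): total motor command as the responsibility-weighted sum
    of the inverse models' outputs (one row per module)."""
    return (lam[:, None] * u_modules).sum(axis=0)

def combine_prior_likelihood(prior, x_true, x_pred, sigma):
    """Final responsibilities: feedforward priors (from the RPs) times the
    Gaussian likelihoods of the forward models' errors, renormalized."""
    err = np.sum((x_pred - x_true) ** 2, axis=1)
    like = np.exp(-err / (2.0 * sigma ** 2))
    post = prior * like
    return post / post.sum()
```

A module whose forward model predicts well dominates both the blending weights and, after movement onset, the posterior responsibilities, regardless of its prior.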
We determine the final value of responsibility by using Bayes rule: multiplying the transformed feedback errors $e^{-\|x_t - \hat{x}_t^i\|^2/2\sigma^2}$ by the feedforward responsibility $\hat{\lambda}_t^i$ and then normalizing across the modules within the responsibility estimator:

$$\lambda_t^i = \frac{\hat{\lambda}_t^i\, e^{-\|x_t - \hat{x}_t^i\|^2/2\sigma^2}}{\sum_{j=1}^{n} \hat{\lambda}_t^j\, e^{-\|x_t - \hat{x}_t^j\|^2/2\sigma^2}}$$

The estimates of the responsibilities produced by the RP can be considered as prior probabilities because they are computed before the movement execution, based only on extrinsic signals, and do not rely on knowing the consequences of the action. Once an action takes place, the forward models' errors can be calculated, and this can be thought of as the likelihood after the movement execution, based on knowledge of the result of the movement. The final responsibility, which is the product of the prior and likelihood, normalized across the modules, represents the posterior probability. Adaptation of the RP ensures that the prior probability becomes closer to the posterior probability.

3 Simulation of arm tracking while manipulating objects

3.1 Learning and control of different objects

Figure 2: Schematic illustration of the simulation experiment in which the arm makes reaching movements while grasping different objects with mass M, damping B and spring K. The object properties are shown in the table:

    object    M     B     K
    α        5.0   8.0   2.0
    β        7.0   3.0  10.0
    γ        4.0   1.0   1.0
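The objects of Figure 2 are second-order linear loads. Assuming the standard form $M\ddot{x} + B\dot{x} + Kx = u$ (the text gives only the parameters, not the equation), a one-line Euler integrator makes the forward models' prediction targets concrete:

```python
def object_step(x, v, u, M, B, K, dt=0.01):
    """One Euler step of the assumed object dynamics M*a + B*v + K*x = u;
    returns the next position and velocity of the hand-held object."""
    a = (u - B * v - K * x) / M
    return x + dt * v, v + dt * a
```

Each forward model must learn to predict the state produced by these dynamics for its own object, e.g. an object with (M, B, K) = (5.0, 8.0, 2.0).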
In the first simulation, three forward-inverse model pairs (modules) were used: the same number of modules as the number of objects. We assumed the existence of a perfect inverse dynamics model of the arm for the control of reaching movements. In each module, both forward ($\phi$ in (1)) and inverse ($\psi$ in (4)) models were implemented as a linear neural network.² The use of linear networks allowed M, B and K to be estimated from the forward and inverse model weights. Let $\hat{M}_j^f, \hat{B}_j^f, \hat{K}_j^f$ be the estimates from the $j$th forward model and $\hat{M}_j^i, \hat{B}_j^i, \hat{K}_j^i$ be the estimates from the $j$th inverse model. Figure 3(a) shows the evolution of the forward model estimates $\hat{M}_j^f, \hat{B}_j^f, \hat{K}_j^f$ for the three modules during learning. During learning, the desired trajectory (Fig. 3(b)) was repeated 200 times. The three modules started from randomly selected initial conditions (open arrows) and converged to very good approximations of the three objects (filled arrows), as shown in Table 1. The three modules converged to the α, β and γ objects, respectively. It is interesting to note that all the estimates of the forward models are superior to those of the inverse models. This is because the inverse model learning depends on how modules are switched by the forward models.

Figure 3: (a) Learning acquisition of three pairs of forward and inverse models corresponding to three objects. (b) Responsibility signals from the three modules (top 3) and tracking performance (bottom) at the beginning (left) and at the end (right) of learning.

Table 1: Learned object characteristics. Forward-model estimates $(\hat{M}, \hat{B})$: (5.0071, 8.0029), (7.0040, 3.0010), (4.0000, 0.9999); inverse-model estimates $(\hat{M}, \hat{B})$: (5.0102, 7.8675), (6.9554, 3.0467), (4.0089, 0.9527).

Figure 3(b) shows the performance of the model at the beginning (left) and end (right) of learning. The top 3 panels show the responsibility signals of the α, β and γ modules in this order, and the bottom panel shows the hand's actual and desired trajectories. At the start of learning, the three modules were equally poor and thus generated almost equal responsibilities (1/3) and were involved in control almost equally. As a result, the overall control performance was poor, with large trajectory errors. However, at the end of learning, the three modules switched almost perfectly (only three noisy spikes were observed in the top 3 panels on the right), and no trajectory error was visible at this resolution in the bottom panel. If we compare these results with Figure 7 of Gomi and Kawato [4] for the same task, the superiority of the MPFIM compared to the gating-expert architecture is apparent. Note that the number of free parameters (synaptic weights) is smaller in the current architecture than in the other. The difference in performance comes from two features of the basic architecture. First, in the gating architecture a single gating network tries to divide the space, while many forward models split the space in MPFIM. Second, in the gating architecture only a single control error is used to divide the space, but multiple prediction errors are simultaneously utilized in MPFIM.

²Any kind of architecture can be adopted instead of linear networks.

3.2 Generalization to a novel object

A natural question regarding the MPFIM architecture is how many modules need to be used. In other words, what happens if the number of objects exceeds the number of modules, or an already trained MPFIM is presented with an unfamiliar object? To examine this, the MPFIM trained on 4 objects α, β, γ and δ was presented with a novel object η (its (M, B, K) is (2.02, 3.23, 4.47)).
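Since each module is characterized by its (M, B, K) estimate, decomposing a novel object over the four learned modules reduces to solving for barycentric weights: three coordinate equations plus a sum-to-one constraint. A sketch (function and variable names are ours):

```python
import numpy as np

def decomposition_weights(vertices, target):
    """Express `target` (a novel object's (M, B, K)) as a combination of
    four known modules' (M, B, K) `vertices`: solve A w = b, where the
    rows of A are the three coordinates plus a row of ones (weights sum to 1)."""
    A = np.vstack([np.asarray(vertices, float).T, np.ones(len(vertices))])
    b = np.append(np.asarray(target, float), 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

Weights computed this way are a convex combination exactly when the target lies inside the tetrahedron spanned by the four vertices; a negative entry would indicate a point outside it.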
Because the object dynamics can be represented in a 3-dimensional parameter space, and the 4 modules already acquired define 4 vertices of a tetrahedron within that 3-D space, arbitrary object dynamics contained within the tetrahedron can be decomposed into a weighted average of the existing 4 forward modules (an internal division point of the 4 vertices). The theoretically calculated weights for η were (0.15, 0.20, 0.35, 0.30). Interestingly, each module's responsibility signal averaged over the trajectory was (0.14, 0.24, 0.37, 0.26). Although the responsibility was computed in the space of acceleration predictions by the soft-max, and had no direct relation to the space of (M, B, K), the two vectors had very similar values. This demonstrates the flexibility of the MPFIM architecture, which originates from its probabilistic soft-switching mechanism. This is in sharp contrast to the hard switching of Narendra [8], for which only one controller can be selected at a time.

3.3 Feedforward selection and the size-weight illusion

Figure 4: Responsibility predictions based on contextual information of 2-D object shapes (top 3 traces) and the corresponding acceleration error of control induced by the illusion (bottom trace).

In this section, we simulated prior selection of inverse models by responsibility predictors based on contextual information, and reproduce the size-weight illusion. Each object was associated with a 2-D shape represented as a 3x3 binary matrix, which was randomly placed at one of four possible locations on a 4x4 retinal matrix (see Gomi and Kawato for more details). The retinal matrix was used as the contextual input to the RP (a 3-layer sigmoidal feedforward network). During the course of learning, the combinations of manipulated objects and visual cues were fixed as A-α, B-β and C-γ. After 200 iterations of the trajectory, the combination A-γ was presented for the first time.
Figure 4 plots the responsibility signals of the three modules (top 3 traces) and the corresponding acceleration error of the control induced by the illusion (bottom trace). The result replicates the size-weight illusion [1, 5], seen in the erroneous responsibility prediction of the α responsibility predictor based on the contextual signal A and its correction by the responsibility signal calculated by the forward models. Until the onset of movement (time 0), A was always associated with the light α, and C was always associated with the heavy γ. Prior to movement, when A was associated with γ, the α module was switched on by the visual contextual information, but soon after the movement was initiated, the responsibility signal from the forward models' prediction dominated, and the γ module was properly selected. Furthermore, after a while, the responsibility predictors of the modules were re-learned to capture this new association between the objects' visual shapes and their dynamics. In conclusion, the MPFIM model of human motor learning and control, like the human motor system, can learn multiple tasks, shows generalization to new tasks and an ability to switch between tasks appropriately.

Acknowledgments

We thank Zoubin Ghahramani for helpful discussions on the Bayesian formulation of this model. Partially supported by Special Coordination Funds for promoting Science and Technology at the Science and Technology Agency of the Japanese government, and by an HFSP grant.

References

[1] E. Brenner, B. Jeroen, and J. Smeets. Size illusion influences how we lift but not how we grasp an object. Exp Brain Res, 111:473-476, 1996.
[2] J.R. Flanagan and A. Wing. The role of internal models in motion planning and control: Evidence from grip force adjustments during movements of hand-held loads. J Neurosci, 17(4):1519-1528, 1997.
[3] A.M. Fraser and A. Dimitriadis. Forecasting probability densities by using hidden Markov models with mixed states. In A.S. Weigend and N.A. Gershenfeld, editors, Time Series Prediction: Forecasting the Future and Understanding the Past, pages 265-282. Addison-Wesley, 1993.
[4] H. Gomi and M. Kawato. Recognition of manipulated objects by motor learning with modular architecture networks. Neural Networks, 6:485-497, 1993.
[5] A. Gordon, H. Forssberg, R. Johansson, and G. Westling. Visual size cues in the programming of manipulative forces during precision grip. Exp Brain Res, 83:477-482, 1991.
[6] R. Jacobs, M. Jordan, S. Nowlan, and G. Hinton. Adaptive mixture of local experts. Neural Computation, 3:79-87, 1991.
[7] M. Kawato. Feedback-error-learning neural network for supervised learning. In R. Eckmiller, editor, Advanced Neural Computers, pages 365-372. North-Holland, 1990.
[8] K. Narendra and J. Balakrishnan. Adaptive control using multiple models. IEEE Transactions on Automatic Control, 42(2):171-187, 1997.
[9] K. Pawelzik, J. Kohlmorgen, and K. Müller. Annealed competition of experts for a segmentation and classification of switching dynamics. Neural Computation, 8:340-356, 1996.
[10] D.M. Wolpert and M. Kawato. Multiple paired forward and inverse models for motor control. Neural Networks, 11:1317-1329, 1998.
SMEM Algorithm for Mixture Models

Naonori Ueda  Ryohei Nakano
{ueda,nakano}@cslab.kecl.ntt.co.jp
NTT Communication Science Laboratories
Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0237 Japan

Zoubin Ghahramani  Geoffrey E. Hinton
zoubin@gatsby.ucl.ac.uk  g.hinton@ucl.ac.uk
Gatsby Computational Neuroscience Unit, University College London
17 Queen Square, London WC1N 3AR, UK

Abstract

We present a split and merge EM (SMEM) algorithm to overcome the local maximum problem in parameter estimation of finite mixture models. In the case of mixture models, non-global maxima often involve having too many components of a mixture model in one part of the space and too few in another, widely separated part of the space. To escape from such configurations we repeatedly perform simultaneous split and merge operations using a new criterion for efficiently selecting the split and merge candidates. We apply the proposed algorithm to the training of Gaussian mixtures and mixtures of factor analyzers using synthetic and real data, and show the effectiveness of using the split and merge operations to improve the likelihood of both the training data and of held-out test data.

1 INTRODUCTION

Mixture density models, in particular normal mixtures, have been extensively used in the field of statistical pattern recognition [1]. Recently, more sophisticated mixture density models such as mixtures of latent variable models (e.g., probabilistic PCA or factor analysis) have been proposed to approximate the underlying data manifold [2]-[4]. The parameters of these mixture models can be estimated using the EM algorithm [5] based on the maximum likelihood framework [3] [4]. A common and serious problem associated with these EM algorithms is the local maxima problem. Although this problem has been pointed out by many researchers, the best way to solve it in practice is still an open question.
Two of the authors have proposed the deterministic annealing EM (DAEM) algorithm [6], where a modified posterior probability parameterized by temperature is derived to avoid local maxima. However, in the case of mixture density models, local maxima arise when there are too many components of a mixture model in one part of the space and too few in another. It is not possible to move a component from the overpopulated region to the underpopulated region without passing through positions that give lower likelihood. We therefore introduce a discrete move that simultaneously merges two components in an overpopulated region and splits a component in an underpopulated region. The idea of split and merge operations has been successfully applied to clustering or vector quantization (e.g., [7]). To our knowledge, this is the first time that simultaneous split and merge operations have been applied to improve mixture density estimation. New criteria presented in this paper can efficiently select the split and merge candidates. Although the proposed method, unlike the DAEM algorithm, is limited to mixture models, we have experimentally confirmed that our split and merge EM algorithm obtains better solutions than the DAEM algorithm.

2 Split and Merge EM (SMEM) Algorithm

The probability density function (pdf) of a mixture of M density models is given by

p(x; Θ) = Σ_{m=1}^{M} α_m p(x|ω_m; θ_m), where α_m ≥ 0 and Σ_{m=1}^{M} α_m = 1.   (1)

Here p(x|ω_m; θ_m) is a d-dimensional density model corresponding to the component ω_m. The EM algorithm, as is well known, iteratively estimates the parameters Θ = {(α_m, θ_m), m = 1, ..., M} using two steps. The E-step computes the expectation of the complete data log-likelihood:

Q(Θ|Θ^(t)) = Σ_x Σ_m P(ω_m|x; Θ^(t)) log α_m p(x|ω_m; θ_m),   (2)

where P(ω_m|x; Θ^(t)) is the posterior probability, which can be computed by

P(ω_m|x; Θ^(t)) = α_m^(t) p(x|ω_m; θ_m^(t)) / Σ_{m'=1}^{M} α_{m'}^(t) p(x|ω_{m'}; θ_{m'}^(t)).   (3)

Next, the M-step maximizes this Q function with respect to Θ to estimate the new parameter values Θ^(t+1). Looking at (2) carefully, one can see that the Q function can be represented in the form of a direct sum; i.e., Q(Θ|Θ^(t)) = Σ_{m=1}^{M} q_m(Θ|Θ^(t)), where q_m(Θ|Θ^(t)) = Σ_{x∈X} P(ω_m|x; Θ^(t)) log α_m p(x|ω_m; θ_m) depends only on α_m and θ_m. Let Θ* denote the parameter values estimated by the usual EM algorithm. Then, after the EM algorithm has converged, the Q function can be rewritten as

Q* = q*_i + q*_j + q*_k + Σ_{m, m≠i,j,k} q*_m.   (4)

We then try to increase the first three terms of the right-hand side of (4) by merging two components ω_i and ω_j to produce a component ω_{i'}, and splitting the component ω_k into two components ω_{j'} and ω_{k'}. To reestimate the parameters of these new components, we have to initialize the parameters corresponding to them using Θ*. The initial parameter values for the merged component ω_{i'} can be set as a linear combination of the original ones before the merge:

α_{i'} = α_i + α_j  and  θ_{i'} = (θ_i Σ_x P(ω_i|x; Θ*) + θ_j Σ_x P(ω_j|x; Θ*)) / (Σ_x P(ω_i|x; Θ*) + Σ_x P(ω_j|x; Θ*)).   (5)

On the other hand, for the two components ω_{j'} and ω_{k'}, we set

α_{j'} = α_{k'} = α*_k / 2,  θ_{j'} = θ*_k + ε,  θ_{k'} = θ*_k + ε',   (6)

where ε and ε' are some small random perturbation vectors or matrices (i.e., ||ε|| << ||θ*_k||)¹. The parameter reestimation for m = i', j' and k' can be done by using EM steps, but note that the posterior probability (3) should be replaced with

P(ω_m|x; Θ^(t)) = [α_m^(t) p(x|ω_m; θ_m^(t)) / Σ_{m'=i',j',k'} α_{m'}^(t) p(x|ω_{m'}; θ_{m'}^(t))] · Σ_{m'=i,j,k} P(ω_{m'}|x; Θ*),  m = i', j', k',   (7)

so that this reestimation does not affect the other components. Clearly, Σ_{m'=i',j',k'} P(ω_{m'}|x; Θ^(t)) = Σ_{m=i,j,k} P(ω_m|x; Θ*) always holds during the reestimation process. For convenience, we call this EM procedure the partial EM procedure. After this partial EM procedure, the usual EM steps, called the full EM procedure, are performed as a post-processing step.
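As a concrete illustration, the posterior computation in (3) can be sketched in a few lines of NumPy. This is not the authors' code; the isotropic Gaussian components, the data, and all variable names here are invented for the example:

```python
import numpy as np

def gaussian_pdf(X, mu, var):
    # Isotropic d-dimensional Gaussian density p(x | omega_m; theta_m)
    d = X.shape[-1]
    diff = X - mu
    return np.exp(-0.5 * np.sum(diff ** 2, axis=-1) / var) / (2 * np.pi * var) ** (d / 2)

def e_step(X, alphas, mus, variances):
    # Posterior responsibilities P(omega_m | x; Theta) of Eq. (3):
    # one row per data point, one column per mixture component
    num = np.stack([a * gaussian_pdf(X, mu, v)
                    for a, mu, v in zip(alphas, mus, variances)], axis=1)
    return num / num.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [5.0, 5.0]])
post = e_step(X, [0.5, 0.5], [np.zeros(2), 5.0 * np.ones(2)], [1.0, 1.0])
# Each row of `post` sums to one; each point goes to its nearby component.
```

The same routine, restricted to the components {i', j', k'} and rescaled by the frozen total responsibility of {i, j, k}, gives the modified posterior (7) used in the partial EM procedure.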
After these procedures, if Q is improved, then we accept the new estimate, set the new parameters to Θ*, and repeat the above. Otherwise we reject, go back to Θ*, and try another candidate. We summarize these procedures as follows:

[SMEM Algorithm]
1. Perform the usual EM updates. Let Θ* and Q* denote the estimated parameters and corresponding Q function value, respectively.
2. Sort the split and merge candidates by computing split and merge criteria (described in the next section) based on Θ*. Let {i, j, k}_c denote the c-th candidate.
3. For c = 1, ..., C_max, perform the following: after initial parameter settings based on Θ*, perform the partial EM procedure for {i, j, k}_c and then perform the full EM procedure. Let Θ** be the obtained parameters and Q** be the corresponding Q function value. If Q** > Q*, then set Q* ← Q**, Θ* ← Θ** and go to Step 2.
4. Halt with Θ* as the final parameters.

Note that when a certain split and merge candidate which improves the Q function value is found at Step 3, the other successive candidates are ignored. There is therefore no guarantee that the split and merge candidates that are chosen will give the largest possible improvement in Q. This is not a major problem, however, because the split and merge operations are performed repeatedly. Strictly speaking, C_max = M(M-1)(M-2)/2, but experimentally we have confirmed that C_max ~ 5 may be enough.

3 Split and Merge Criteria

Each of the split and merge candidates can be evaluated by its Q function value after Step 3 of the SMEM algorithm mentioned in Sec. 2. However, since there are so many candidates, some reasonable criteria for ordering the split and merge candidates should be utilized to accelerate the SMEM algorithm.
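The accept/reject loop of Steps 1-4 above can be sketched generically. The helpers full_em, partial_em and rank_candidates are hypothetical stand-ins, supplied by the caller, for the procedures described in the text:

```python
def smem(theta0, full_em, partial_em, rank_candidates, c_max=5):
    """Generic sketch of the SMEM outer loop (Steps 1-4).

    full_em(theta)             -> (theta*, Q*)     : usual EM to convergence
    partial_em(theta, i, j, k) -> theta            : re-estimate i', j', k' only
    rank_candidates(theta)     -> [(i, j, k), ...] : sorted by the criteria
    """
    theta, Q = full_em(theta0)                      # Step 1
    improved = True
    while improved:
        improved = False
        for (i, j, k) in rank_candidates(theta)[:c_max]:   # Steps 2-3
            cand = partial_em(theta, i, j, k)
            cand, Q_cand = full_em(cand)
            if Q_cand > Q:                          # accept first improving move
                theta, Q = cand, Q_cand
                improved = True
                break
    return theta                                    # Step 4
```

With toy stub procedures on a one-parameter problem, the loop keeps accepting candidate moves until no candidate improves Q, mirroring the repeated application of split and merge described above.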
In general, when there are many data points each of which has almost equal posterior probabilities for any two components, it can be thought that these two components might be merged. To numerically evaluate this, we define the following merge criterion:

J_merge(i, j; Θ*) = P_i(Θ*)^T P_j(Θ*),   (8)

where P_i(Θ*) = (P(ω_i|x_1; Θ*), ..., P(ω_i|x_N; Θ*))^T ∈ ℝ^N is the N-dimensional vector consisting of the posterior probabilities for the component ω_i. Clearly, two components ω_i and ω_j with larger J_merge(i, j; Θ*) should be merged. As a split criterion (J_split), we define the local Kullback divergence as

J_split(k; Θ*) = ∫ p_k(x; Θ*) log [p_k(x; Θ*) / p(x|ω_k; θ*_k)] dx,   (9)

which is the distance between two distributions: the local data density p_k(x) around the component ω_k, and the density of the component ω_k specified by the current parameter estimates μ_k and Σ_k. The local data density is defined as

p_k(x; Θ*) = Σ_{n=1}^{N} δ(x - x_n) P(ω_k|x_n; Θ*) / Σ_{n=1}^{N} P(ω_k|x_n; Θ*).   (10)

This is a modified empirical distribution weighted by the posterior probability so that the data around the component ω_k are focused. Note that when the weights are equal, i.e., P(ω_k|x; Θ*) = 1/M, (10) is the usual empirical distribution, i.e., p_k(x; Θ*) = (1/N) Σ_{n=1}^{N} δ(x - x_n). Since it can be thought that the component with the largest J_split(k; Θ*) has the worst estimate of the local density, we should try to split it. Using J_merge and J_split, we sort the split and merge candidates as follows. First, merge candidates are sorted based on J_merge. Then, for each sorted merge candidate {i, j}_c, split candidates excluding {i, j}_c are sorted as {k}_c. By combining these results and renumbering them, we obtain {i, j, k}_c.

¹ In the case of Gaussian mixtures, the covariance matrices Σ_{j'} and Σ_{k'} should be positive definite. In this case, we can initialize them as Σ_{j'} = Σ_{k'} = det(Σ_k)^{1/d} I_d instead of (6).

4 Experiments

4.1 Gaussian mixtures

First, we show the results on two-dimensional synthetic data in Fig.
1 to visually demonstrate the usefulness of the split and merge operations. The initial mean vectors and covariance matrices were set near the mean of all the data and to the unit matrix, respectively. The usual EM algorithm converged to the local maximum solution shown in Fig. 1(b), whereas the SMEM algorithm converged to the superior solution shown in Fig. 1(d), very close to the true one. The split of the 1st Gaussian shown in Fig. 1(c) seems to be redundant, but as shown in Fig. 1(d) they are successfully merged and the original two Gaussians were improved. This indicates that the split and merge operations not only appropriately assign the number of Gaussians in a local data space, but can also improve the Gaussian parameters themselves. Next, we tested the proposed algorithm using 20-dimensional real data (facial images) where the local maxima make the optimization difficult. The data size was 103 for training and 103 for test. We ran three algorithms (EM, DAEM, and SMEM) with ten different initializations using the K-means algorithm. We set M = 5 and used a diagonal covariance matrix for each Gaussian. As shown in Table 1, the worst solution found by the SMEM algorithm was better than the best solutions found by the other algorithms on both training and test data.

Figure 1: Gaussian mixture estimation results. (a) True Gaussians and generated data; (b) result by EM (t=72); (c) example of split and merge (t=141); (d) final result by SMEM (t=212).

Table 1: Log-likelihood / data point

                  initial value   EM       DAEM     SMEM
Training   mean   -159.1          -148.2   -147.9   -145.1
data       std    1.n             0.24     0.04     0.08
           max    -157.3          -147.7   -147.8   -145.0
           min    -163.2          -148.6   -147.9   -145.2
Test       mean   -168.2          -159.8   -159.7   -155.9
data       std    2.80            1.00     0.37     0.09
           max    -165.5          -158.0   -159.6   -155.9
           min    -174.2          -160.8   -159.8   -156.0

Table 2: No. of iterations

        EM    DAEM   SMEM
mean    47    147    155
std     16    39     44
max     65    189    219
min     37    103    109

Figure 2: Trajectories of log-likelihood. The upper (lower) curve corresponds to training (test) data.

Figure 2 shows the log-likelihood value trajectories accepted at Step 3 of the SMEM algorithm during the estimation process². Comparing the convergence points at Step 3 marked by the 'o' symbol in Fig. 2, one can see that the successive split and merge operations improved the log-likelihood for both the training and test data, as we expected. Table 2 compares the number of iterations executed by the three algorithms. Note that in the SMEM algorithm, the EM steps corresponding to rejected split and merge operations are not counted. The average rank of the accepted split and merge candidates was 1.8 (STD=0.9), which indicates that the proposed split and merge criteria work very well. The SMEM algorithm was therefore about 155 × 1.8 / 47 ≈ 6 times slower than the original EM algorithm.

4.2 Mixtures of factor analyzers

A mixture of factor analyzers (MFA) can be thought of as a reduced-dimension mixture of Gaussians [4]. That is, it can extract a locally linear low-dimensional manifold underlying given high-dimensional data. A single FA model assumes that an observed D-dimensional variable x is generated as a linear transformation of some lower K-dimensional latent variable z ~ N(0, I) plus additive Gaussian noise v ~ N(0, Ψ), where Ψ is diagonal. That is, the generative model can be written as

² Dotted lines in Fig. 2 denote the starting points of Step 2. Note that it is due to the initialization at Step 3 that the log-likelihood decreases just after a split and merge.
Figure 3: Extraction of a 1-D manifold by using a mixture of factor analyzers. (a) Initial values; (b) result by EM; (c) result by SMEM.

x = Az + v + μ. Here μ is a mean vector. Then, from a simple calculation, we can see that x ~ N(μ, AA^T + Ψ). Therefore, in the case of a mixture of M FAs, x ~ Σ_{m=1}^{M} α_m N(μ_m, A_m A_m^T + Ψ_m). See [4] for the details. In this case the Q function is also decomposable into M components, and therefore the SMEM algorithm is straightforwardly applicable to the parameter estimation of MFA models. Figure 3 shows the results of extracting a one-dimensional manifold from three-dimensional data (a noisy shrinking spiral) using the EM and the SMEM algorithms³. Although the EM algorithm converged to a poor local maximum, the SMEM algorithm successfully extracted the data manifold. Table 3 compares the average log-likelihood per data point over ten different initializations. The log-likelihood values were drastically improved on both training and test data by the SMEM algorithm. The MFA model is applicable to pattern recognition tasks [2][3] since, once an MFA model is fitted to each class, we can compute the posterior probabilities for each data point. We tried a digit recognition task (10 digits (classes))⁴ using the MFA model. The computed log-likelihood averaged over the ten classes and the recognition accuracy for test data are given in Table 4. Clearly, the SMEM algorithm consistently improved on the EM algorithm in both log-likelihood and recognition accuracy. Note that the recognition accuracy of the 3-nearest-neighbor (3NN) classifier was 88.3%. It is interesting that the MFA approach with both the EM and SMEM algorithms could outperform the nearest-neighbor approach when K = 3 and M = 5. This suggests that the intrinsic dimensionality of the data would be three or so.

³ In this case, each factor loading matrix A_m becomes a three-dimensional column vector corresponding to each thick line in Fig. 3.
More correctly, the center position and the direction of each thick line are μ_m and A_m, respectively, and the length of each thick line is 2||A_m||.

⁴ The data were created using the degenerate Glucksman's features (16-dimensional data) by NTT Labs [8]. The data size was 200/class for training and 200/class for test.

Table 3: Log-likelihood / data point

            EM              SMEM
Training    -7.68 (0.151)   -7.26 (0.017)
Test        -7.75 (0.171)   -7.33 (0.032)
( ): STD

Table 4: Digit recognition results

                  Log-likelihood / data point    Recognition rate (%)
                  EM       SMEM                  EM      SMEM
K=3    M=5        -3.18    -3.15                 89.0    91.3
       M=10       -3.09    -3.05                 87.5    88.7
K=8    M=5        -3.14    -3.11                 85.3    87.3
       M=10       -3.04    -3.01                 82.5    85.1

5 Conclusion

We have shown how simultaneous split and merge operations can be used to move components of a mixture model from regions of the space in which there are too many components to regions in which there are too few. Such moves cannot be accomplished by methods that continuously move components through intermediate locations, because the likelihood is lower at these locations. A simultaneous split and merge can be viewed as a way of tunneling through low-likelihood barriers, thereby eliminating many non-global optima. In this respect, it has some similarities with simulated annealing, but the moves that are considered are long-range and are very specific to the particular problems that arise when fitting a mixture model. Note that the SMEM algorithm is applicable to a wide variety of mixture models, as long as the decomposition (4) holds. To make the split and merge method efficient we have introduced criteria for deciding which splits and merges to consider, and have shown that these criteria work well for low-dimensional synthetic datasets and for higher-dimensional real datasets. Our SMEM algorithm consistently outperforms standard EM and therefore it would be very useful in practice.

References

[1] McLachlan, G.
and Basford, K., "Mixture Models: Inference and Applications to Clustering," Marcel Dekker, 1988.
[2] Hinton, G. E., Dayan, P., and Revow, M., "Modeling the manifolds of images of handwritten digits," IEEE Trans. PAMI, vol. 8, no. 1, pp. 65-74, 1997.
[3] Tipping, M. E. and Bishop, C. M., "Mixtures of probabilistic principal component analysers," Tech. Rep. NCRG-97-3, Aston Univ., Birmingham, UK, 1997.
[4] Ghahramani, Z. and Hinton, G. E., "The EM algorithm for mixtures of factor analyzers," Tech. Rep. CRG-TR-96-1, Univ. of Toronto, 1997.
[5] Dempster, A. P., Laird, N. M., and Rubin, D. B., "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society B, vol. 39, pp. 1-38, 1977.
[6] Ueda, N. and Nakano, R., "Deterministic annealing EM algorithm," Neural Networks, vol. 11, no. 2, pp. 271-282, 1998.
[7] Ueda, N. and Nakano, R., "A new competitive learning approach based on an equidistortion principle for designing optimal vector quantizers," Neural Networks, vol. 7, no. 8, pp. 1211-1227, 1994.
[8] Ishii, K., "Design of a recognition dictionary using artificially distorted characters," Systems and Computers in Japan, vol. 21, no. 9, pp. 669-677, 1989.
Learning Lie Groups for Invariant Visual Perception*

Rajesh P. N. Rao and Daniel L. Ruderman
Sloan Center for Theoretical Neurobiology
The Salk Institute
La Jolla, CA 92037
{rao,ruderman}@salk.edu

Abstract

One of the most important problems in visual perception is that of visual invariance: how are objects perceived to be the same despite undergoing transformations such as translations, rotations or scaling? In this paper, we describe a Bayesian method for learning invariances based on Lie group theory. We show that previous approaches based on first-order Taylor series expansions of inputs can be regarded as special cases of the Lie group approach, the latter being capable of handling in principle arbitrarily large transformations. Using a matrix-exponential based generative model of images, we derive an unsupervised algorithm for learning Lie group operators from input data containing infinitesimal transformations. The on-line unsupervised learning algorithm maximizes the posterior probability of generating the training data. We provide experimental results suggesting that the proposed method can learn Lie group operators for handling reasonably large 1-D translations and 2-D rotations.

1 INTRODUCTION

A fundamental problem faced by both biological and machine vision systems is the recognition of familiar objects and patterns in the presence of transformations such as translations, rotations and scaling. The importance of this problem was recognized early by visual scientists such as J. J. Gibson, who hypothesized that "constant perception depends on the ability of the individual to detect the invariants" [6]. Among computational neuroscientists, Pitts and McCulloch were perhaps the first to propose a method for perceptual invariance ("knowing universals") [12]. A number of other approaches have since been proposed [5, 7, 10], some relying on temporal sequences of input patterns undergoing transformations (e.g.
[4]) and others relying on modifications to the distance metric for comparing input images to stored templates (e.g. [15]). In this paper, we describe a Bayesian method for learning invariances based on the notion of continuous transformations and Lie group theory. We show that previous approaches based on first-order Taylor series expansions of images [1, 14] can be regarded as special cases of the Lie group approach. Approaches based on first-order models can account only for small transformations due to their assumption of a linear generative model for the transformed images. The Lie approach, on the other hand, utilizes a matrix-exponential based generative model which can in principle handle arbitrarily large transformations once the correct transformation operators have been learned. Using Bayesian principles, we derive an on-line unsupervised algorithm for learning Lie group operators from input data containing infinitesimal transformations. Although Lie groups have previously been used in visual perception [2], computer vision [16] and image processing [9], the question of whether it is possible to learn these groups directly from input data has remained open. Our preliminary experimental results suggest that in the two examined cases of 1-D translations and 2-D rotations, the proposed method can learn the corresponding Lie group operators with a reasonably high degree of accuracy, allowing the use of these learned operators in transformation-invariant vision.

*This research was supported by the Alfred P. Sloan Foundation.

2 CONTINUOUS TRANSFORMATIONS AND LIE GROUPS

Suppose we have a point (in general, a vector) I₀ which is an element of a space F. Let T I₀ denote a transformation of the point I₀ to another point, say I₁. The transformation operator T is completely specified by its actions on all points in the space F. Suppose T belongs to a family of operators 𝒯. We will be interested in the cases where 𝒯 is a group, i.e.
there exists a mapping f : 𝒯 × 𝒯 → 𝒯 from pairs of transformations to another transformation such that (a) f is associative, (b) there exists a unique identity transformation, and (c) for every T ∈ 𝒯, there exists a unique inverse transformation of T. These properties seem reasonable to expect in general for transformations on images. Continuous transformations are those which can be made infinitesimally small. Due to their favorable properties as described below, we will be especially concerned with continuous transformation groups, or Lie groups. Continuity is associated with both the transformation operators T and the group 𝒯. Each T ∈ 𝒯 is assumed to implement a continuous mapping from F → F. To be concrete, suppose T is parameterized by a single real number x. Then the group 𝒯 is continuous if the function T(x) : ℝ → 𝒯 is continuous, i.e. any T ∈ 𝒯 is the image of some x ∈ ℝ and any continuous variation of x results in a continuous variation of T. Let T(0) be equivalent to the identity transformation. Then, as x → 0, the transformation T(x) gets arbitrarily close to the identity. Its effect on I₀ can be written as (to first order in x): T(x)I₀ ≈ (1 + xG)I₀ for some matrix G, which is known as the generator of the transformation group. A macroscopic transformation I₁ = I(x) = T(x)I₀ can be produced by chaining together a number of these infinitesimal transformations. For example, by dividing the parameter x into N equal parts and performing each transformation in turn, we obtain:

I(x) = (1 + (x/N)G)^N I₀   (1)

In the limit N → ∞, this expression reduces to the matrix exponential equation:

I(x) = e^{xG} I₀   (2)

where I₀ is the initial or "reference" input. Thus, each of the elements of our one-parameter Lie group can be written as T(x) = e^{xG}. The generator G of the Lie group is related to the derivative of T(x) with respect to x: dT/dx = GT. This suggests an alternate way of deriving Equation 2.
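Equation 1 also gives a direct way to compute the action of e^{xG} without a matrix-exponential routine: chain N small steps. A minimal sketch; the 2-D rotation generator used to exercise it is our illustrative example, not a learned operator:

```python
import numpy as np

def apply_transform(I0, G, x, N=512):
    # I(x) = e^{xG} I0, approximated by (1 + (x/N) G)^N I0  (Eq. 1)
    step = np.eye(len(I0)) + (x / N) * G
    return np.linalg.matrix_power(step, N) @ I0

# Generator of 2-D rotations: e^{xG} is rotation by angle x
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])
v = apply_transform(np.array([1.0, 0.0]), G, np.pi / 2)
# v is close to [0, 1], a 90-degree rotation of [1, 0]
```

The approximation error shrinks as N grows, consistent with the limit that yields Equation 2.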
Consider the Taylor series expansion of a transformed input I(x) in terms of a previous input I(0):

I(x) = I(0) + (dI(0)/dx) x + (d²I(0)/dx²) x²/2 + ...   (3)

where x denotes the relative transformation between I(x) and I(0). Defining dI/dx = GI for some operator matrix G, we can rewrite Equation 3 as I(x) = e^{xG} I₀, which is the same as Equation 2 with I₀ = I(0). Thus, some previous approaches based on first-order Taylor series expansions [1, 14] can be viewed as special cases of the Lie group model.

3 LEARNING LIE TRANSFORMATION GROUPS

Our goal is to learn the generators G of particular Lie transformation groups directly from input data containing examples of infinitesimal transformations. Note that learning the generator of a transformation effectively allows us to remain invariant to that transformation (see below). We assume that during natural temporal sequences of images containing transformations, there are "small" image changes corresponding to deterministic sets of pixel changes that are independent of what the actual pixels are. The rearrangements themselves are universal, as in for example image translations.

Figure 1: Network Architecture and Interpolation Function. (a) An implementation of the proposed approach to invariant vision involving two cooperating recurrent networks, one estimating transformations and the other estimating object features. The latter supplies the reference image I(0) to the transformation network. (b) A locally recurrent elaboration of the transformation network for implementing Equation 9. The network computes e^{xG} I(0) = I(0) + Σ_k (x^k G^k / k!) I(0). (c) The interpolation function Q used to generate training data (assuming periodic, band-limited signals).
The question we address is: can we learn the Lie group operator G given simply a series of "before" and "after" images? Let the n × 1 vector I(0) be the "before" image and I(x) the "after" image containing the infinitesimal transformation. Then, using results from the previous section, we can write the following stochastic generative model for images:

I(x) = e^{xG} I(0) + n   (4)

where n is assumed to be a zero-mean Gaussian white noise process with variance σ². Since learning using this full exponential generative model is difficult due to multiple local minima, we restrict ourselves to transformations that are infinitesimal. The higher-order terms then become negligible and we can rewrite the above equation in a more tractable form:

ΔI = xG I(0) + n   (5)

where ΔI = I(x) - I(0) is the difference image. Note that although this model is linear, the generator G learned using infinitesimal transformations is the same matrix that is used in the exponential model. Thus, once learned, this matrix can be used to handle larger transformations as well (see experimental results). Suppose we are given M image pairs as data. We wish to find the n × n matrix G and the transformations x which generated the data set. To do so, we take a Bayesian maximum a posteriori approach using Gaussian priors on x and G. The negative log of the posterior probability of generating the data is given by:

E = -log P[G, x | I(x), I(0)] = (1/2σ²)(ΔI - xGI(0))^T (ΔI - xGI(0)) + (1/2σ_x²) x² + (1/2) g^T C⁻¹ g   (6)

where σ_x² is the variance of the zero-mean Gaussian prior on x, g is the n² × 1 vector form of G, and C is the covariance matrix associated with the Gaussian prior on G. Extending this equation to multiple image data is accomplished straightforwardly by summing the data-driven term over the image pairs (we assume G is fixed for all images although the transformation x may vary).
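The objective in (6) can be written out directly for a single image pair. A small sketch, with a scalar prior C⁻¹ = c_inv · I for simplicity; all names and the test values are ours:

```python
import numpy as np

def neg_log_posterior(G, x, I0, Ix, sigma2=1.0, sigma_x2=1.0, c_inv=0.0):
    # E of Eq. (6) for one image pair, with a scalar prior C^{-1} = c_inv * I
    r = (Ix - I0) - x * (G @ I0)          # residual  dI - x G I(0)
    g = G.ravel()                          # n^2-vector form of G
    return (r @ r) / (2 * sigma2) + x ** 2 / (2 * sigma_x2) + c_inv * (g @ g) / 2
```

When the data were generated by the infinitesimal model (5) with a known generator, E is smaller at the true transformation value than at a wrong one, which is what the gradient rules of the next paragraph exploit.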
For the experiments, σ, σ_x and C were chosen to be fixed scalar values, but it may be possible to speed up learning and improve accuracy by choosing C based on some knowledge of what we expect for infinitesimal image transformations (for example, we may define each entry in C to be a function only of the distance between the pixels associated with the entry and exploit the fact that C needs to be symmetric; the efficacy of this choice is currently under investigation). The n × n generator matrix G can be learned in an unsupervised manner by performing gradient descent on E, thereby maximizing the posterior probability of generating the data:

Ġ = -α ∂E/∂G = α(ΔI - xGI(0))(xI(0))^T - α c(G)   (7)

where α is a positive constant that governs the learning rate and c(G) is the n × n matrix form of the n² × 1 vector C⁻¹g. The learning rule for G above requires the value of x for the current image pair to be known. We can estimate x by performing gradient descent on E with respect to x (using a fixed previously learned value for G):

ẋ = -β ∂E/∂x = β(GI(0))^T (ΔI - xGI(0)) - β x/σ_x²   (8)

The learning process thus involves alternating between the fast estimation of x for the given image pair and the slower adaptation of the generator matrix G using this x. Figure 1(a) depicts a possible network implementation of the proposed approach to invariant vision. The implementation, which is reminiscent of the division of labor between the dorsal and ventral streams in primate visual cortex [3], uses two parallel but cooperating networks, one estimating object identity and the other estimating object transformations. The object network is based on a standard linear generative model of the form I(0) = Ur + n, where U is a matrix of learned object "features" and r is the feature vector for the object in I(0) (see, for example, [11, 13]).
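The alternating estimation of Eqs. (7) and (8) above can be sketched as iterated gradient steps. For brevity the priors are dropped (C⁻¹ → 0, σ_x → ∞) and all names are ours; this is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def estimate_x(I0, dI, G, beta=0.05, steps=200):
    # Fast inner loop: gradient descent on E w.r.t. x (Eq. 8), flat prior on x
    x = 0.0
    g = G @ I0
    for _ in range(steps):
        x += beta * (g @ (dI - x * g))
    return x

def update_G(I0, dI, x, G, alpha=0.01):
    # Slow outer step: one gradient step on G (Eq. 7), flat prior on G
    return G + alpha * np.outer(dI - x * (G @ I0), x * I0)

# With the true rotation generator, the inner loop recovers the
# transformation amount that produced the difference image.
G_true = np.array([[0.0, -1.0], [1.0, 0.0]])
I0 = np.array([1.0, 0.0])
dI = 0.3 * (G_true @ I0)          # noise-free infinitesimal model, Eq. 5
x_hat = estimate_x(I0, dI, G_true)
```

In a full run, each image pair would first pass through estimate_x and then contribute one update_G step, mirroring the fast/slow alternation described in the text.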
Perceptual constancy is achieved due to the fact that the estimate of object identity remains stable in the first network as the second network attempts to account for any transformations being induced in the image, appropriately conveying the type of transformation being induced in its estimate for x (see [14] for more details). The estimation rule for x given above is based on a first-order model (Equation 5) and is therefore useful only for estimating small (infinitesimal) transformations. A more general rule for estimating larger transformations is obtained by performing gradient descent on the optimization function given by the matrix-exponential generative model (Equation 4):

ẋ = γ(e^{xG} G I(0))^T (I(x) - e^{xG} I(0)) - γ x/σ_x²   (9)

Figure 1(b) shows a locally recurrent network implementation of the matrix-exponential computation required by the above equation.

4 EXPERIMENTAL RESULTS

Training Data and Interpolation Function. For the purpose of evaluating the algorithm, we generated synthetic training data by subjecting a randomly generated image (containing uniformly random pixel intensities) to a known transformation. Consider a given 1-D image I(0) with image pixels given by I(j), j = 1, ..., N. To be able to continuously transform I(0), sampled at discrete pixel locations, by infinitesimal (sub-pixel) amounts, we need to employ an interpolation function. We make use of the Shannon-Whittaker theorem [8], stating that any band-limited signal I(j), with j being any real number, is uniquely specified by its sufficiently close equally spaced discrete samples. Assuming that our signal is periodic, i.e. I(j + N) = I(j) for all j, the Shannon-Whittaker theorem in one dimension can be written as

I(j) = Σ_{m=0}^{N-1} I(m) Σ_{r=-∞}^{∞} sinc[π(j - m - Nr)]

where sinc[x] = sin(x)/x. After some algebraic manipulation and simplification, this can be reduced to I(j) = Σ_{m=0}^{N-1} I(m) Q(j - m), where the interpolation function Q is given by

Q(x) = (1/N)[1 + 2 Σ_{p=1}^{N/2-1} cos(2πpx/N)].

Figure 1(c) shows this interpolation function. To translate I(0) by an infinitesimal amount x ∈ ℝ, we use I(j + x) = Σ_{m=0}^{N-1} I(m) Q(j + x - m). Similarly, to rotate or translate 2-D images, we use the 2-D analog of the above. In addition to being able to generate images with known transformations, the interpolation function also allows one to derive an analytical expression for the Lie operator matrix directly from the derivative of Q. This allows us to evaluate the results of learning. Figure 2(a) shows the analytically derived G matrix for 1-D infinitesimal translations of 20-pixel images (bright pixels = positive values, dark = negative). Also shown alongside is one of the rows of G (row 10), representing the Lie operator centered on pixel 10.

Figure 2: Learned Lie Operators for 1-D Translations. (a) Analytically derived 20 × 20 Lie operator matrix G, the operator for the 10th pixel (10th row of G), and a plot of the real and imaginary parts of the eigenvalues of G. (b) Learned G matrix, 10th operator, and plot of the eigenvalues of the learned matrix.

Learning 1-D Translations. Figure 2(b) shows the results of using Equation 7 and 50,000 training image pairs for learning the generator matrix for 1-D translations in 20-pixel images. The randomly generated first image of a training pair was translated left or right by 0.5 pixels (C⁻¹ = 0.0001, and the learning rate α = 0.4 was decayed by a factor of 1.0001 after each training pair). Note that, as expected for translations, the rows of the learned G matrix are identical except for a shift: the same differential operator (shown in Figure 2(b)) is applied at each image location.
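The periodic interpolation kernel and sub-pixel translation can be sketched as follows. Note one adaptation: the printed sum limit N/2 - 1 is the even-N form; here we use an odd N, for which the analogous sum runs to (N-1)/2 and Q interpolates exactly (Q(0) = 1 and Q(k) = 0 at other integers). This is our sketch, not the authors' code:

```python
import numpy as np

def Q(x, N):
    # Periodic band-limited interpolation kernel, odd-N form:
    # Q(x) = (1/N) [1 + 2 * sum_{p=1}^{(N-1)/2} cos(2 pi p x / N)]
    p = np.arange(1, (N - 1) // 2 + 1)
    return (1.0 + 2.0 * np.cos(2.0 * np.pi * p * x / N).sum()) / N

def translate(I, x):
    # Sub-pixel circular translation: I(j + x) = sum_m I(m) Q(j + x - m)
    N = len(I)
    return np.array([sum(I[m] * Q(j + x - m, N) for m in range(N))
                     for j in range(N)])

I = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
shifted = translate(I, 1.0)   # an integer shift reproduces a circular shift
```

Differentiating Q at the integer offsets gives the rows of the analytical operator matrix G used for comparison in Figure 2(a).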
A comparison of the eigenvalues of the learned matrix with those of the analytical matrix (Figure 2) suggests that the learning algorithm was able to learn a reasonably good approximation of the true generator matrix (to within an arbitrary multiplicative scaling factor). To further evaluate the learned matrix G, we ascertained whether G could be used to generate arbitrary translations of a given reference image using Equation 2. The results are encouraging, as shown in Figure 3(a), although we have noticed a tendency for the appearance of some artifacts in translated images if there is significant high-frequency content in the reference image.

Estimating Large Transformations. The learned generator matrix can be used to estimate large translations in images using Equation 9. Unfortunately, the optimization function can contain local minima (Figure 3(b)). The local minima however tend to be shallow and of approximately the same value, with a unique well-defined global minimum. We therefore searched for the global minimum by performing gradient descent with several equally spaced starting values and picked the minimum of the estimated values after convergence. Figure 3(c) shows the results of this estimation process.

Learning 2-D Rotations. We have also tested the learning algorithm on 2-D images using image-plane rotations. Training image pairs were generated by infinitesimally rotating images with random pixel intensities 0.2 radians clockwise or counterclockwise. The learned operator matrix (for three different spatial scales) is shown in Figure 4(a). The accuracy of these matrices was tested

(Estimated vs. actual translations from Figure 3(c): 8.9787 (9), -7.9805 (-8), 15.9775 (16), 19.9780 (20), 2.9805 (3), 26.9776 (27), -1.9780 (-2), -18.9805 (-19), 4.9774 (5).)

Figure 3: Generating and Estimating Large Transformations.
(a) An original reference image I(0) was translated to varying degrees by using the learned generator matrix G and varying x in Equation 2. (b) The negative log likelihood optimization function for the matrix-exponential generative model (Equation 4), which was used for estimating large translations. The globally minimum value for x was found by using gradient descent with multiple starting points. (c) Comparison of estimated translation values with actual values (in parentheses) for different pairs of reference (I(0)) and translated (I(x)) images, shown in the form of a table. by using them in Equation 2 for various rotations x. As shown in Figure 4 (b) for the 5 x 5 case, the learned matrix appears to be able to rotate a given reference image between -180° and +180° about an initial position (for the larger rotations, some minor artifacts appear near the edges). 5 CONCLUSIONS Our results suggest that it is possible for an unsupervised network to learn visual invariances by learning operators (or generators) for the corresponding Lie transformation groups. An important issue is how local minima can be avoided during the estimation of large transformations. Apart from performing multiple searches, one possibility is to use coarse-to-fine techniques, where transformation estimates obtained at a coarse scale are used as starting points for estimating transformations at finer scales (see, for example, [1]). A second possibility is to use stochastic techniques that exploit the specialized structure of the optimization function (Figure 1 (c)). Besides these directions of research, we are also investigating the use of structured priors on the generator matrix G to improve learning accuracy and speed. A concurrent effort involves testing the approach on more realistic natural image sequences containing a richer variety of transformations.¹ References [1] M. J. Black and A. D. Jepson.
Eigentracking: Robust matching and tracking of articulated objects using a view-based representation. In Proc. of the Fourth European Conference on Computer Vision (ECCV), pages 329-342, 1996. [2] P. C. Dodwell. The Lie transformation group model of visual perception. Perception and Psychophysics, 34(1):1-16, 1983. [3] D. J. Felleman and D. C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1:1-47, 1991. ¹The generative model in the case of multiple transformations is given by: I(x) = e^{Σ_i x_i G_i} I(0) + n, where G_i is the generator for the ith type of transformation and x_i is the value of that transformation in the input image. R. P. N. Rao and D. L. Ruderman. Figure 4: Learned Lie Operators for 2-D Rotations. (a) The initial and converged values of the Lie operator matrix for 2-D rotations at three different scales (3 x 3, 5 x 5 and 9 x 9). (b) Examples of arbitrary rotations of a 5 x 5 reference image I(0) generated by using the learned Lie operator matrix (although only results for integer-valued x between -4 and 4 are shown, rotations can be generated for any real-valued x). [4] P. Foldiak. Learning invariance from transformation sequences. Neural Computation, 3(2):194-200, 1991. [5] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36:193-202, 1980. [6] J. J. Gibson. The Senses Considered as Perceptual Systems. Houghton-Mifflin, Boston, 1966. [7] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989. [8] R. J. Marks II. Introduction to Shannon Sampling and Interpolation Theory. New York: Springer-Verlag, 1991. [9] K. Nordberg. Signal representation and processing using operator groups.
Technical Report Linkoping Studies in Science and Technology, Dissertations No. 366, Department of Electrical Engineering, Linkoping University, 1994. [10] B. A. Olshausen, C. H. Anderson, and D. C. Van Essen. A multiscale dynamic routing circuit for forming size- and position-invariant object representations. Journal of Computational Neuroscience, 2:45-62, 1995. [11] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996. [12] W. Pitts and W. S. McCulloch. How we know universals: the perception of auditory and visual forms. Bulletin of Mathematical Biophysics, 9:127-147, 1947. [13] R. P. N. Rao and D. H. Ballard. Dynamic model of visual recognition predicts neural response properties in the visual cortex. Neural Computation, 9(4):721-763, 1997. [14] R. P. N. Rao and D. H. Ballard. Development of localized oriented receptive fields by learning a translation-invariant code for natural images. Network: Computation in Neural Systems, 9(2):219-234, 1998. [15] P. Simard, Y. LeCun, and J. Denker. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems V, pages 50-58, San Mateo, CA, 1993. Morgan Kaufmann Publishers. [16] L. Van Gool, T. Moons, E. Pauwels, and A. Oosterlinck. Vision and Lie's approach to invariance. Image and Vision Computing, 13(4):259-277, 1995.
|
1998
|
26
|
1,522
|
Tractable Variational Structures for Approximating Graphical Models. David Barber, Wim Wiegerinck ({davidb,wimw}@mbfys.kun.nl), RWCP* Theoretical Foundation SNN†, University of Nijmegen, 6525 EZ Nijmegen, The Netherlands. Abstract: Graphical models provide a broad probabilistic framework with applications in speech recognition (Hidden Markov Models), medical diagnosis (Belief networks) and artificial intelligence (Boltzmann Machines). However, the computing time is typically exponential in the number of nodes in the graph. Within the variational framework for approximating these models, we present two classes of distributions, decimatable Boltzmann Machines and Tractable Belief Networks, that go beyond the standard factorized approach. We give generalised mean-field equations for both these directed and undirected approximations. Simulation results on a small benchmark problem suggest that using these richer approximations compares favorably against others previously reported in the literature. 1 Introduction Graphical models provide a powerful framework for probabilistic inference [1] but suffer intractability when applied to large-scale problems. Recently, variational approximations have been popular [2, 3, 4, 5], and have the advantage of providing rigorous bounds on quantities of interest, such as the data likelihood, in contrast to other approximate procedures such as Monte Carlo methods [1]. One of the original models in the neural networks community, the Boltzmann machine (BM), belongs to the class of undirected graphical models. The lack of a suitable algorithm has hindered its application to larger problems. The deterministic BM algorithm [6], a variational procedure using a factorized approximating distribution, speeds up the learning of BMs, although the simplicity of this approximation can lead to undesirable effects [7]. Factorized approximations have also been successfully applied to sigmoid belief networks [4].
One approach to producing a more accurate approximation is to go beyond the class of factorized approximating models by using, for example, mixtures of factorized models. However, it may be that very many mixture components are needed to obtain a significant improvement beyond the factorized approximation [5]. In this paper, after describing the variational learning framework (*Real World Computing Partnership; †Foundation for Neural Networks), we introduce two further classes of non-factorized approximations, one undirected (decimatable BMs, in section (3)) and the other directed (Tractable Belief Networks, in section (4)). To demonstrate the potential benefits of these methods, we include results on a toy benchmark problem in section (5) and discuss their relation to other methods in section (6). 2 Variational Learning. We assume the existence of a graphical model P with known qualitative structure but for which the quantitative parameters of the structure remain to be learned from data. Given that the variables can be considered as either visible (V) or hidden (H), one approach to learning is to carry out maximum likelihood on the visible variables for each example in the dataset. Considering the KL divergence between the true distribution P(H|V) and a distribution Q(H),

KL(Q(H), P(H|V)) = Σ_H Q(H) ln [Q(H)/P(H|V)] ≥ 0,

and using P(H|V) = P(H, V)/P(V) gives the bound

ln P(V) ≥ − Σ_H Q(H) ln Q(H) + Σ_H Q(H) ln P(H, V)   (1)

Betraying the connection to statistical physics, the first term is termed the "entropy" and the second the "energy". One typically chooses a variational distribution Q so that the entropic term is "tractable". We assume that the energy E(Q) is similarly computable, perhaps with recourse to some extra variational bound (as in section (5)).
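Bound (1) can be verified numerically on a tiny example. The sketch below (a hypothetical three-binary-variable joint, not from the paper) checks that any normalised Q(H) lower-bounds ln P(V), with equality at Q(H) = P(H|V):

```python
import math, random
from itertools import product

random.seed(0)
# Hypothetical toy joint P(H, V): two hidden bits, one visible bit
raw = {s: random.random() for s in product((0, 1), repeat=3)}
Z = sum(raw.values())
P = {s: r / Z for s, r in raw.items()}

v = 1
PV = sum(P[(h1, h2, v)] for h1, h2 in product((0, 1), repeat=2))

def lower_bound(Q):
    # Eq. (1): ln P(V) >= -sum_H Q(H) ln Q(H) + sum_H Q(H) ln P(H, V)
    return sum(-q * math.log(q) + q * math.log(P[h + (v,)]) for h, q in Q.items())

Q = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}   # any normalised Q works
assert lower_bound(Q) <= math.log(PV) + 1e-9
Qopt = {h: P[h + (v,)] / PV for h in product((0, 1), repeat=2)}  # Q(H) = P(H|V)
assert abs(lower_bound(Qopt) - math.log(PV)) < 1e-9             # bound is tight
```

The gap between the bound and ln P(V) is exactly KL(Q(H), P(H|V)), which is why the optimal Q is the true posterior.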
By tractable, we mean that all necessary marginals and desired quantities are computationally feasible, regardless of the issue of the scaling of the computational effort with the graph size. Learning consists of two iterating steps: first optimize the bound (1) with respect to the parameters of Q, and then with respect to the parameters of P(H, V). We concentrate here on the first step. For clarity, we present our approach for the case of binary variables s_i ∈ {0, 1}, i = 1..N. We now consider two classes of approximating distributions Q. 3 Undirected Q: Decimatable Boltzmann Machines. Boltzmann machines describe probability distributions parameterized by a symmetric weight matrix J,

Q(s) = (1/Z) exp φ,   φ ≡ Σ_{ij} J_{ij} s_i s_j = s·Js   (2)

where the normalization constant, or "partition function", is Z = Σ_s exp φ. For convenience we term the diagonals of J the "biases", h_i = J_{ii}. Since ln Z(J, h) is a generating function for the first and second order statistics of the variables s, the entropy is tractable provided that Z is tractable. For general connection structures J, computing Z is intractable as it involves a sum over 2^N states; however, not all Boltzmann machines are intractable. A class of tractable structures is described by a set of so-called decimation rules in which nodes from the graph can be removed one by one, fig(1). Provided that appropriate local changes are made to the BM parameters, the partition function of the reduced graph remains unaltered (see e.g. [2]). For example, node c in fig(1) can be removed, provided that the weight matrix J and bias h are transformed, J → J', h → h', with J'_{ac} = J'_{bc} = h'_c = 0 and

J'_{ab} = J_{ab} + (1/2) ln [ (1 + e^{h_c})(1 + e^{h_c + 2(J_{ac} + J_{bc})}) / ((1 + e^{h_c + 2J_{ac}})(1 + e^{h_c + 2J_{bc}})) ],
h'_{a/b} = h_{a/b} + ln [ (1 + e^{h_c + 2J_{a/b,c}}) / (1 + e^{h_c}) ]   (3)

Figure 1: A decimation rule for BMs.
We can remove the upper node on the left so that the partition function of the reduced graph is the same. This requires a simple change in the parameters J, h coupling the two nodes on the right (see text). By repeatedly applying such rules, Z is calculable in time linear in N. 3.1 Fixed point (Mean Field) Equations. Using (2) in (1), the bound we wish to optimize with respect to the parameters θ = (J, h) of Q has the form (⟨...⟩ denotes averages with respect to Q)

B(θ) = −⟨φ⟩ + ln Z + E(θ)   (4)

where E(θ) is the energy. Differentiating (4) with respect to J_{ij} (i ≠ j) gives

∂B/∂J_{ij} = − Σ_{kl} F_{ij,kl} J_{kl} + ∂E/∂J_{ij}   (5)

where F_{ij,kl} = ⟨s_i s_j s_k s_l⟩ − ⟨s_i s_j⟩⟨s_k s_l⟩ is the Fisher information matrix. A similar expression holds for the bias parameters h, so that we can form a linear fixed point equation in the total parameter set θ where the derivatives of the bound vanish. This suggests the iterative solution θ_new = F^{-1}∇E, where the right hand side is evaluated at the current parameter values θ_old. 4 Directed Q: Tractable Belief Networks. Belief networks are products of conditional probability distributions,

Q(H) = Π_{i∈H} Q(H_i | π_i)   (6)

in which π_i denotes the parents of node i (see, for example, [1]). The efficiency of computation depends on the underlying graphical structure of the model and is exponential in the maximal clique size (of the moralized triangulated graph [1]). We now assume that our model class consists of belief networks with a fixed, tractable graphical structure. The entropy can then be computed efficiently, since it decouples into a sum of averaged entropies per site i (with Q(π_i) ≡ 1 if π_i = ∅),

−Σ_H Q(H) ln Q(H) = −Σ_{i∈H} Σ_{π_i} Q(π_i) Σ_{H_i} Q(H_i|π_i) ln Q(H_i|π_i)   (7)

Note that the conditional entropy at each site i is trivial to compute since the values required can be read off directly from the definition of Q (6). By assumption, the marginals Q(π_i) are tractable, and can be found by standard methods, for example using the Junction Tree Algorithm [1].
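The decimation rule (3) can be checked by brute force on the smallest nontrivial case: a three-node BM in which node c is summed out. In this sketch (all parameter values hypothetical), removing c multiplies the partition function by the constant factor (1 + e^{h_c}), which the local parameter updates leave implicit:

```python
import math
from itertools import product

def Z3(Jab, Jac, Jbc, ha, hb, hc):
    # Brute-force partition function of a 3-node {0,1} BM:
    # phi = 2*sum_{i<j} J_ij s_i s_j + sum_i h_i s_i  (Eq. (2), diagonals as biases)
    return sum(math.exp(2 * (Jab*a*b + Jac*a*c + Jbc*b*c) + ha*a + hb*b + hc*c)
               for a, b, c in product((0, 1), repeat=3))

def decimate_c(Jab, Jac, Jbc, ha, hb, hc):
    # Remove node c; return the updated (Jab', ha', hb') of rule (3)
    f = lambda sa, sb: 1 + math.exp(hc + 2 * (Jac*sa + Jbc*sb))
    Jab2 = Jab + 0.5 * math.log(f(0, 0) * f(1, 1) / (f(1, 0) * f(0, 1)))
    ha2 = ha + math.log(f(1, 0) / f(0, 0))
    hb2 = hb + math.log(f(0, 1) / f(0, 0))
    return Jab2, ha2, hb2

Jab, Jac, Jbc, ha, hb, hc = 0.3, -0.7, 1.1, 0.2, -0.4, 0.5   # hypothetical values
Jab2, ha2, hb2 = decimate_c(Jab, Jac, Jbc, ha, hb, hc)
Z2 = sum(math.exp(2*Jab2*a*b + ha2*a + hb2*b) for a, b in product((0, 1), repeat=2))
# full partition function = reduced partition function times (1 + e^{h_c})
assert abs(Z3(Jab, Jac, Jbc, ha, hb, hc) - (1 + math.exp(hc)) * Z2) < 1e-9
```

Repeatedly applying such exact single-node eliminations is what makes Z of a decimatable structure computable in time linear in N.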
To optimize the bound (1), we parameterize Q via its conditional probabilities, q_i(π_i) ≡ Q(H_i = 1|π_i). The remaining probability Q(H_i = 0|π_i) follows from normalization. We therefore have a set {q_i(π_i) | π_i = (0...0), ..., (1...1)} of variational parameters for each node in the graph. Setting the gradient of the bound with respect to the q_i(π_i)'s equal to zero yields the equations (8) with (9), where σ(z) = 1/(1 + e^{−z}). The gradient ∇_{iπ_i} is with respect to q_i(π_i). The explicit evaluation of the gradients can be performed efficiently, since all that need to be differentiated are at most scalar functions of quantities that depend again only linearly on the parameters q_i(π_i). To optimize the bound, we iterate (8) till convergence, analogous to using factorized models [4]. However, the more powerful class of approximating distributions described by belief networks should enable a much tighter bound on the likelihood of the visible units. 5 Application to Sigmoid Belief Networks. We now describe an application of these non-factorized approximations to a particular class of directed graphical models, sigmoid belief networks [8], for which the conditional distributions have the form P(S_i = 1|π_i) = σ(z_i), with W_{ij} = 0 if j ∉ π_i   (10). The joint distribution then has the form

P(H, V) = Π_i exp[z_i s_i − ln(1 + e^{z_i})]   (11)

where z_i = Σ_j W_{ij} s_j + k_i. In (11) it is to be understood that the visible units are set to their observed values. In the lower bound (1), unfortunately, the average of ln P(H, V) is not tractable, since ⟨ln[1 + e^{z}]⟩ does not decouple into a polynomial number of single-site averages. Following [4], we therefore use the bound

⟨ln(1 + e^{z_i})⟩ ≤ ξ_i⟨z_i⟩ + ln⟨e^{−ξ_i z_i} + e^{(1−ξ_i) z_i}⟩   (12)

where ξ_i is a variational parameter in [0, 1]. We can then define the energy function

E(Q, ξ) = Σ_{ij} W_{ij}⟨s_i s_j⟩ + Σ_i k̃_i⟨s_i⟩ − Σ_i k_i ξ_i − Σ_i ln⟨e^{−ξ_i z_i} + e^{(1−ξ_i) z_i}⟩   (13)

where k̃_i = k_i − Σ_j ξ_j W_{ji}. Except for the final term, the energy is a function of first or second order statistics of the variables.
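Bound (12) follows from the identity ln(1 + e^z) = ξz + ln(e^{−ξz} + e^{(1−ξ)z}) together with Jensen's inequality ⟨ln X⟩ ≤ ln⟨X⟩. A numerical spot-check on a hypothetical discrete distribution over z (toy values, not from the paper):

```python
import math

# A toy discrete distribution over z
zs = [-2.0, -0.5, 1.0, 3.0]
ps = [0.1, 0.4, 0.3, 0.2]

def avg(f):
    # expectation under the toy distribution
    return sum(p * f(z) for z, p in zip(zs, ps))

lhs = avg(lambda z: math.log(1 + math.exp(z)))
for xi in [0.0, 0.25, 0.5, 0.75, 1.0]:
    rhs = xi * avg(lambda z: z) + math.log(
        avg(lambda z: math.exp(-xi * z) + math.exp((1 - xi) * z)))
    assert lhs <= rhs + 1e-12   # Eq. (12) holds for every xi in [0, 1]
```

In the paper the expectation is over the variational distribution Q and z_i is linear in the hidden variables, which is why the right-hand side stays tractable while the left-hand side is not.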
For using a BM as the variational distribution, the final terms of (13), ⟨e^{−ξ_i z_i}⟩ = Σ_H e^{φ − ξ_i z_i}/Z, are simply the ratio of two partition functions, with the one in the numerator having a shifted bias. This is therefore tractable, provided that we use a tractable BM Q. Similarly, if we are using a Belief Network as the variational distribution, all but the last term in (13) is trivially tractable, provided that Q is tractable. We write the terms ⟨e^{−ξ_i z_i}⟩ = e^{−ξ_i k_i} Σ_H R(H), where R(H) = Π_j R(H_j|π_j) and R(H_j|π_j) ≡ Q(H_j|π_j) exp(−ξ_i W_{ij} H_j). R and Q have the same graphical structure, and we can therefore use message propagation techniques again to compute ⟨e^{−ξ_i z_i}⟩. [Figure 2: (a) Sigmoid Belief Network (directed graph toy problem; hidden units are black) for which we approximate ln P(V). (b) Decimatable BM approximation: 25 parameters, mean 0.0020. (c,d,e,f) Structures of the directed approximations on H: (c) disconnected ('standard mean field'), 16 parameters, mean 0.01571, max. clique size 1; (d) chain, 19 parameters, mean 0.01529, max. clique size 2; (e) trees, 20 parameters, mean 0.0089, max. clique size 2; (f) network, 28 parameters, mean 0.00183, max. clique size 3. For each structure, histograms of the relative error between the true log likelihood and the lower bound are plotted; the horizontal scale is fixed to [0, 0.05] in all plots. The maximum clique size refers to the complexity of computation for each approximation, which is exponential in this quantity. The number of parameters includes the vector ξ.] To test our methods numerically, we generated 500 networks with parameters {W_{ij}, k_j} drawn randomly from the uniform distribution over [−1, 1].
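The entropy decomposition (7) that makes these directed approximations tractable can itself be sanity-checked numerically. A sketch with a hypothetical three-node binary chain Q(h1)Q(h2|h1)Q(h3|h2):

```python
import math, random
from itertools import product

random.seed(1)
# Random conditional tables for a 3-node binary chain (hypothetical values)
q1 = random.random()                       # Q(h1=1)
q2 = [random.random(), random.random()]    # Q(h2=1 | h1)
q3 = [random.random(), random.random()]    # Q(h3=1 | h2)

def Q(h1, h2, h3):
    b = lambda q, s: q if s else 1 - q
    return b(q1, h1) * b(q2[h1], h2) * b(q3[h2], h3)

H2 = lambda q: -q * math.log(q) - (1 - q) * math.log(1 - q)   # binary entropy

# Joint entropy by brute-force enumeration ...
joint = -sum(Q(*h) * math.log(Q(*h)) for h in product((0, 1), repeat=3))
# ... equals the parent-marginal-weighted sum of conditional entropies (Eq. (7))
m1 = q1                                    # Q(h1=1)
m2 = (1 - q1) * q2[0] + q1 * q2[1]         # Q(h2=1)
decomposed = (H2(q1) + (1 - m1) * H2(q2[0]) + m1 * H2(q2[1])
              + (1 - m2) * H2(q3[0]) + m2 * H2(q3[1]))
assert abs(joint - decomposed) < 1e-9
```

The enumeration costs 2^N, but the decomposed form needs only the parent marginals, which a junction-tree pass supplies in time linear in the chain length.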
The lower bounds F_V for several approximating structures are compared with the true log likelihood, using the relative error ε = F_V / ln P(V) − 1, fig. 2. These show that considerable improvements can be obtained when non-factorized variational distributions are used. Note that a 5 component mixture model (≈ 80 variational parameters) yields ε = 0.01139 on this problem [5]². These results suggest, therefore, that exploiting knowledge of the graphical structure of the model is useful. For instance, the chain (fig. 2(b)) with no graphical overlap with the original graph shows hardly any improvement over the standard mean field approximation. On the other hand, the tree model (fig. 2(c)), which has about the same number of parameters but a larger overlap with the original graph, does improve considerably over the mean field approximation (and even over the 5 component mixture model). By increasing the overlap, as in fig. 2(d), the improvement gained is even greater. 6 Discussion. In this section, we briefly explain the relationship of the introduced methods to other "non-factorized" methods in the literature, namely node-elimination [9] and substructure variation [10]. 6.1 Graph Partitioning and Node Elimination. A further class of approximating distributions Q that could be considered are those in which the nodes can be partitioned into clusters, with independencies between the clusters. For expositional clarity, consider two partitions, s = (s_1, s_2), and define Q to be factorized over these partitions², Q = Q_1(s_1)Q_2(s_2). Using this Q in (1), we obtain (with obvious notational simplifications)

ln P(V) ≥ −⟨ln Q_1⟩_1 − ⟨ln Q_2⟩_2 + ⟨ln P⟩_{1,2}   (14)

A functional derivative with respect to Q_1 and Q_2 gives the optimal forms:

Q_2 = exp⟨ln P⟩_1 / Z_2   (15)

If we substitute this form for Q_2 in (14) and use Z_2 = Σ_2 exp⟨ln P⟩_1, we obtain

ln P(V) ≥ −⟨ln Q_1⟩_1 + ln Σ_2 exp⟨ln P⟩_1   (16)

In general, the final term may not have a simple form.
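Bounds (14) and (16) are easy to verify on a toy joint distribution. The sketch below (hypothetical numbers; the two "partitions" are one binary variable each) also checks that (16), which eliminates Q_2 analytically via (15), is never looser than (14) with an arbitrary Q_2:

```python
import math, random
from itertools import product

random.seed(2)
# Toy joint P(h1, h2, v) over three binary variables (hypothetical values)
raw = {s: random.random() for s in product((0, 1), repeat=3)}
Z = sum(raw.values())
P = {s: r / Z for s, r in raw.items()}

v = 0
lnPV = math.log(sum(P[(h1, h2, v)] for h1, h2 in product((0, 1), repeat=2)))

Q1 = {0: 0.3, 1: 0.7}                     # arbitrary distribution on partition 1
avg1 = lambda h2: sum(Q1[h1] * math.log(P[(h1, h2, v)]) for h1 in (0, 1))
entropy1 = -sum(q * math.log(q) for q in Q1.values())

# Eq. (16): the bound with the optimal Q2 eliminated analytically
bound16 = entropy1 + math.log(sum(math.exp(avg1(h2)) for h2 in (0, 1)))
assert bound16 <= lnPV + 1e-9

# Eq. (14) with an arbitrary (sub-optimal) Q2 can only be looser
Q2 = {0: 0.5, 1: 0.5}
bound14 = (entropy1 - sum(q * math.log(q) for q in Q2.values())
           + sum(Q1[h1] * Q2[h2] * math.log(P[(h1, h2, v)])
                 for h1, h2 in product((0, 1), repeat=2)))
assert bound14 <= bound16 + 1e-9
```

This mirrors the node-elimination construction: once Q_2 is eliminated, only the structure over partition 2 needs to be tractable.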
In the case of approximating a BM P, ln P = s_1·J_{11}s_1 + 2 s_1·J_{12}s_2 + s_2·J_{22}s_2 − ln Z_P. Used in (16), we get:

ln P(V) ≥ −⟨ln Q_1⟩_1 − ln Z_P + ⟨s_1·J_{11}s_1⟩_1 + ln Σ_2 exp(s_2·J_{22}s_2 + 2 s_2·J_{21}⟨s_1⟩_1)   (17)

so that the final term of (17) is the normalizing constant of a BM with connection matrix J_{22} and whose diagonals are shifted by J_{21}⟨s_1⟩_1. One can therefore identify a set of nodes s_1 which, when eliminated, reveal a tractable structure on the nodes s_2. The nodes that were removed are compensated for by using a variational distribution Q_1(s_1). If P is a BM, then the optimal Q_1 has its weights fixed to those of P restricted to the variables s_1, but with variable biases shifted by J_{12}⟨s_2⟩_2. Restricting Q_1 to factorized models, we recover the node elimination bound [9], which can readily be improved by considering non-factorized distributions Q_1 (for example those introduced in this paper), see fig(3). Note, however, that there is no a priori guarantee that using such partitioned approximations will lead to a better approximation than that obtained from a tractable variational distribution defined on the whole graph, but which does not have such a product form. Using a product of conditional distributions over clusters of nodes is developed more fully in [11]. 6.2 Substructure Variation. The process of using a Q defined on the whole graph but for which only a subset of the connections are adaptive is termed substructure variation [10]. In the context of BMs, Saul et al [2] identified weights in the original intractable distribution P that, if set to zero, would lead to a tractable graph Q(s) = P(s | h, J, J_intractable = 0). To compensate for these removed weights they allowed the biases in Q to vary such that the KL divergence between Q and P is minimized. In general, this is a weaker method than one in which potentially all the parameters in the approximating network are adaptive, such as using a decimatable BM.
²In the case of fully connected BMs, for computing with a Q which is the product of K partitions (each of which is fully connected, say), the computing time reduces from 2^N for the "intractable" P to K·2^{N/K} for Q, which can be a considerable reduction. [Figure 3: (a) A non-decimatable 5 node BM (Intractable Model). (b) The standard factorized ("naive" mean field) approximation. (c) Node Elimination. (d) Partitioning, where a richer distribution is considered on the eliminated nodes. A solid line denotes a weight fixed to those in the original graph. A solid node is fixed, and an open node represents a variable bias.] 7 Conclusion. Finding accurate, controllable approximations of graphical models is crucial if their application to large scale problems is to be realised. We have elucidated two general classes of tractable approximations, both based on the Kullback-Leibler divergence. Future interesting directions include extending the class of distributions to higher order Boltzmann Machines (for which the class of decimation rules is greater), and to mixtures of these approaches. Higher order perturbative approaches are considered in [12]. These techniques therefore facilitate the approximating power of tractable models, which can lead to a considerable improvement in performance. [1] E. Castillo, J. M. Gutierrez, and A. S. Hadi. Expert Systems and Probabilistic Network Models. Springer, 1997. [2] L. K. Saul and M. I. Jordan. Boltzmann Chains and Hidden Markov Models. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems, pages 435-442. MIT Press, 1995. NIPS 7. [3] T. Jaakkola. Variational Methods for Inference and Estimation in Graphical Models. PhD thesis, Massachusetts Institute of Technology, 1997. [4] L. K. Saul, T. Jaakkola, and M. I. Jordan.
Mean Field Theory for Sigmoid Belief Networks. Journal of Artificial Intelligence Research, 4:61-76, 1996. [5] C. M. Bishop, N. Lawrence, T. Jaakkola, and M. I. Jordan. Approximating Posterior Distributions in Belief Networks using Mixtures. MIT Press, 1998. NIPS 10. [6] C. Peterson and J. R. Anderson. A Mean Field Theory Learning Algorithm for Neural Networks. Complex Systems, 1:995-1019, 1987. [7] Conrad C. Galland. The limitations of deterministic Boltzmann machine learning. Network: Computation in Neural Systems, 4:355-379, 1993. [8] R. Neal. Connectionist learning of Belief Networks. Artificial Intelligence, 56:71-113, 1992. [9] T. S. Jaakkola and M. I. Jordan. Recursive Algorithms for Approximating Probabilities in Graphical Models. MIT Press, 1996. NIPS 9. [10] L. K. Saul and M. I. Jordan. Exploiting Tractable Substructures in Intractable Networks. MIT Press, 1996. NIPS 8. [11] W. Wiegerinck and D. Barber. Mean Field Theory based on Belief Networks for Approximate Inference. 1998. ICANN 98. [12] D. Barber and P. van de Laar. Variational Cumulant Expansions for Intractable Distributions. Journal of Artificial Intelligence Research, 1998. Accepted.
|
1998
|
27
|
1,523
|
A Micropower CMOS Adaptive Amplitude and Shift Invariant Vector Quantiser. Richard J. Coggins, Raymond J. W. Wang and Marwan A. Jabri, Computer Engineering Laboratory, School of Electrical and Information Engineering, J03, University of Sydney, 2006, Australia. {richardc, jwwang, marwan}@sedal.usyd.edu.au Abstract: In this paper we describe the architecture, implementation and experimental results for an Intracardiac Electrogram (ICEG) classification and compression chip. The chip processes and vector-quantises 30 dimensional analogue vectors while consuming a maximum of 2.5 µW power for a heart rate of 60 beats per minute (1 vector per second) from a 3.3 V supply. This represents a significant advance on previous work, which achieved ultra low power supervised morphology classification, since the template matching scheme used in this chip enables unsupervised blind classification of abnormal rhythms and the computational support for low bit rate data compression. The adaptive template matching scheme used is tolerant to amplitude variations, and inter- and intra-sample time shifts. 1 INTRODUCTION Implantable cardioverter defibrillators (ICDs) are devices used to monitor the electrical activity of the heart muscle and to apply appropriate levels of electrical stimulation if abnormal conditions are detected. Despite the considerable success of ICDs, they suffer from a number of limitations, including an inability to detect and treat some abnormal heart rhythms and limited data recording capabilities. We have previously shown that micropower analogue Multi-Layer Perceptron (MLP) neural networks can be trained to separate such arrhythmia [4]. However, MLPs are best suited to learning the boundary between classes, whereas a vector quantisation scheme allows a measure of the probability density of the morphological types to be estimated. Many analogue vector quantiser (VQ) chips have been reported in the literature.
For example, a 16x256 500 kHz 50 mW 2 µm CMOS vector A/D converter [10] and a 16 x 16 300 kHz 0.7 mW 2 µm CMOS analogue VQ [1]. These correspond to an energy per match per dimension of 24 pJ and 9 pJ respectively. The integrated circuit (IC) described in this paper is distinguished from these approaches in that it is specifically targeted for the low power, low bandwidth application of ICEG classification and compression. Our chip achieves vector matching (without the winner take all function) to 7 bit 30 dimensional vectors with three coefficient linear prediction, at an energy consumption of 15 pJ per template per dimension using a 1.2 µm CMOS process. Although this figure is greater than that for [1], it should be noted that in [1] the mean absolute error metric is used rather than the squared Euclidean distance, and no provision is provided for linear transformation of the incoming analogue vector. 2 ADAPTIVE DATA COMPRESSION Recording of ICEGs in ICDs is currently very limited due to the amount of memory available and the power/area cost of implementing all but the simplest compression techniques. Micropower template matching, however, enables large amounts of the signal to be encoded as template indices plus amplitude parameters. Effective compression of the ICEG requires adaptation to the short term non-stationary behaviour of the ICEG [2]. In particular, short term amplitude variations, lag variation, phase variation and ectopic beats (which originate from the ventricles of the heart and have differing morphology) reduce the achievable compression. The impact of ectopic beats can be reduced by increasing the number of templates. This can often be achieved without increasing the codebook search complexity by using associated timing features.
The amplitude and shift variations require short term adaptation of the template matching in order to minimise the residual error and hence raise the compression ratio at fixed distortion. 2.1 Amplitude and Shift Invariant Matching. In order to facilitate analogue implementation, a backward prediction procedure is used rather than the usual forward prediction [8]. This approach allows the incoming analogue template to be manipulated in the analogue domain for amplitude and shift invariance purposes. Consider the long term backward prediction problem described by

r_b(n) = x̄(n) − b_0 x(n + a) − b_1 {x(n + a + 1) − x(n + a − 1)}/2   (1)

where r_b(n) denotes the backward residuals, x̄ is a template which is a function of previous beats, x(a) is the sampled ICEG signal, a the time index, n the template index, and b_0 and b_1 the amplitude and phase coefficients respectively. b_0 scales the current beat to match the template and hence is an amplitude term. b_1 scales the central difference of the current beat and is a function of the amplitude and phase corrections required to minimise the residuals. To see why this is a phase term, consider the Taylor expansion of A x(t + φ) to the first derivative term around t,

A x(t + φ) = A x(t) + A φ x′(t)   (2)

where φ is a small phase shift of x(t) and A is the amplitude factor. When φ is due to sampling jitter then −T/2 ≤ φ ≤ T/2, where T is the sampling period. Provided that x(t) is sampled according to the Nyquist criterion, φ is sufficiently small for the first derivative term to adequately account for the sampling jitter. Hence, b_1 accounts for the residual error remaining after optimisation of the integer a. a is approximately determined by the beat detector of the ICD, which attempts to detect the fiducial point of heart beats using filters and comparators.
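A software sketch of this backward-prediction fit: given a coarse-aligned beat x and a template x̄ (synthetic sinusoids here; the exact indexing of Eq. (1) is reconstructed from OCR, so treat the details as an assumption), b_0 and b_1 follow from 2x2 normal equations on the beat and its central difference:

```python
import math

def fit_b0_b1(xbar, x):
    # Least-squares fit of xbar(n) ~ b0*x(n) + b1*(x(n+1) - x(n-1))/2
    N = len(xbar)
    u = [x[n] for n in range(1, N - 1)]               # the beat itself
    v = [(x[n+1] - x[n-1]) / 2 for n in range(1, N-1)] # its central difference
    t = xbar[1:N-1]
    Suu = sum(a * a for a in u); Svv = sum(a * a for a in v)
    Suv = sum(a * b for a, b in zip(u, v))
    Sut = sum(a * b for a, b in zip(u, t)); Svt = sum(a * b for a, b in zip(v, t))
    det = Suu * Svv - Suv * Suv
    b0 = (Sut * Svv - Svt * Suv) / det
    b1 = (Svt * Suu - Sut * Suv) / det
    return b0, b1

# Synthetic beat: the template is an amplitude-scaled, slightly phase-shifted sine
N = 64
x    = [math.sin(2 * math.pi * n / N) for n in range(N)]
xbar = [1.8 * math.sin(2 * math.pi * (n + 0.3) / N) for n in range(N)]
b0, b1 = fit_b0_b1(xbar, x)
assert abs(b0 - 1.8) < 0.05    # recovers the amplitude factor
assert b1 > 0                  # positive intra-sample lead
```

On-chip, the same minimisation is done iteratively (Section 3) rather than via a closed-form solve, since divisions are expensive in micropower analogue hardware.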
b_0 and b_1 can be determined by minimising the squared error between the current signal window and the previously recorded template, which in this case has a closed form solution in terms of correlation coefficients. However, in Section 3 we present an alternative iterative procedure suited to low-power analogue implementations. 3 SYSTEM ARCHITECTURE & IMPLEMENTATION [Figure 1: Left: Block diagram of the adaptive linear transform VQ chip. Middle: Floorplan of the chip. Right: Photomicrograph of the chip.] The ICEG is first high pass filtered to remove the DC and then is bandpass filtered to prevent aliasing and enhance the high frequency component for beat detection. (This is the filtering approach already existing in an ICD and therefore not implemented by us.) This then feeds the discrete time analogue delay line, which is continuously sampling the signal at 250 Hz. The analogue samples are then transformed by a two layer network. The first layer implements the linear prediction by adjusting the amplitude b_0 and the phase of the analogue vector. Note that the phase consists of two components: the coarse part a, corresponding to sample lags, and the fine part b_1, corresponding to intra-sample lags. The second layer calculates the distance between the linearly predicted vector and the template w(n) to be matched. A comparator is provided so that a match to within a given threshold may be detected. 3.1 Chip Architecture. Input to the IC is via a single analogue channel which is sampled by a bucket brigade device of length 30. The resultant 30 dimensional analogue vector is adaptively linearly transformed to facilitate a shift and scale invariant match to a digital (7 bit per dimension) template.
The IC generates digital representations of the square of the Euclidean distance between the transformed analogue vector and the digital template. A block diagram of the IC appears in Figure 1. The IC has been fabricated. Performance figures in this paper are based on measurements of the chip fabricated in a 1.2 µm CMOS MOSIS process. The block diagram shows the input signal being sampled by the bucket brigade device (BBD) [4]. The signal is sampled at a rate of 250 Hz. Existing circuitry in the defibrillator detects the peak of the heart beat and hence indicates a coarse alignment (due to detection jitter) to the template stored in the template DACs (TDACs). The BBD continues to sample until the coarse alignment is attained, at which point the IC is biased up. The BBD now contains a segment of the ICEG corresponding to one heart beat. The digital error output is then monitored, with the linear transform blocks configured to 1:1 mappings, until an error minimum is detected, indicating optimal sampling alignment. The three linear transform coefficient DACs (CDACs), which are common to the 30 linear transform blocks, may then be adapted to further reduce the matching error. The transformation can be represented by y(n) = a_0 x(n−1) + a_1 x(n) + a_2 x(n+1), where a_0 corresponds to CDAC0, etc. This constitutes a general linear long term prediction [8]. Constraining CDAC0 and CDAC2 to be of equal magnitude and opposite sign results in a minimisation of errors due to phase and amplitude variation and a simpler adaptation procedure. The matching error is computed via the squarer blocks and the summing node. The matching error consists of both a magnitude and an exponent, thereby increasing the dynamic range of the error representation.
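The effect of the a_0 = −a_2 constraint can be illustrated in software (hypothetical coefficient values, periodic boundary for simplicity): the transform then leaves a sinusoid's frequency untouched and only adjusts its amplitude and phase, which is exactly the correction the matcher needs:

```python
import math

def linear_transform(x, a0, a1, a2):
    # y(n) = a0*x(n-1) + a1*x(n) + a2*x(n+1), periodic indexing
    N = len(x)
    return [a0 * x[(n - 1) % N] + a1 * x[n] + a2 * x[(n + 1) % N] for n in range(N)]

N = 32
w = 2 * math.pi / N
x = [math.sin(w * n) for n in range(N)]
y = linear_transform(x, -0.25, 1.0, 0.25)   # a0 = -a2: pure amplitude/phase correction

# sin(wn) + c*cos(wn) = sqrt(1+c^2) * sin(wn + atan(c)) with c = 2*a2*sin(w)
c = 2 * 0.25 * math.sin(w)
A, phi = math.hypot(1, c), math.atan2(c, 1)
assert all(abs(y[n] - A * math.sin(w * n + phi)) < 1e-9 for n in range(N))
```

With a_0 ≠ −a_2 the transform would also mix in an unconstrained filtering component, which is why the constraint both reduces the error terms to amplitude/phase and simplifies the adaptation.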
A reference DAC and precision current comparator provide the means of successive-approximation A/D conversion of the matching error current I_ERR. Using this scheme, heart beat morphology can be classified by loading different templates (TDAC values). A stream of beats may be compressed by identifying matches with continuously updated representations of previous beats. Close matches are encoded by an index and an amplitude coefficient, while poor matches are encoded by quantised residuals which have been minimised by the linear prediction.

3.2 Adaptation and Learning

The first step in the learning process is to determine a, the coarse phase lag. This can be achieved by shifting the delay line and evaluating the error until a minimum is reached. Once the coarse phase lag a has been determined, the error function to be minimised to compensate for amplitude and phase variations is given by E = Σ_{i=1}^N (b0·x_i + b1·Δx_i - w_i)², where the subscript i implicitly incorporates the coarse phase a. This is a quadratic in b0 and b1. b0 and b1 can be optimised separately provided the cross terms in E are negligible. Here the cross terms are given by Σ_{i=1}^N 2·b0·b1·x_i·Δx_i = b0·b1·(x_{N+1}·x_N - x_1·x_0). Thus, if the end points of the N-point window have approximately the same value (as is usually the case for ICEG beats), then the cross terms in E are negligible and b0 and b1 can be optimised separately. So the only remaining issue is how to optimise a single parameter. A simple linear search takes at most 2^b evaluations of E, where b is the number of bits. A search based on bisection takes b + 2 evaluations. Techniques involving gradient descent and conjugate gradient lead to more complex learning logic with only minor reductions in the number of evaluations. Therefore, bisection is the best compromise between the number of evaluations and the complexity of the learning state machine.
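The single-parameter optimisation can be sketched as a bracketing (bisection-style) search over the DAC codes. The on-chip learning state machine is not described at this level of detail, so the code below only illustrates why bisection needs far fewer error evaluations than a linear sweep over all 2^b codes.

```python
def bisection_minimise(err, bits=7):
    """Minimise a unimodal error function over DAC codes 0 .. 2**bits - 1.

    A linear sweep costs 2**bits evaluations of err; this bracketing search
    costs a small multiple of `bits` evaluations instead."""
    lo, hi = 0, 2 ** bits - 1
    evals = 0
    while hi - lo > 2:
        m1 = (lo + hi) // 2
        m2 = m1 + 1
        evals += 2
        if err(m1) < err(m2):   # minimum lies at or to the left of m2
            hi = m2
        else:                   # minimum lies at or to the right of m1
            lo = m1
    best = min(range(lo, hi + 1), key=err)
    return best, evals

# Quadratic matching error with its minimum at code 37 (illustrative only).
best, evals = bisection_minimise(lambda c: (c - 37) ** 2)
```

Because E is quadratic in each of b0 and b1 once the cross terms are negligible, the error is unimodal in each coefficient, which is what makes a bisection-style search valid.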
Once the best template match has been achieved, learning may also be applied to the template itself, depending on the application and context. For example, in the case of adaptive classification, a weight perturbation algorithm [6] could be used to adapt the template for morphological drift based on heart rate information. Similarly, for a data compression application, if the template match exceeds a fidelity criterion, the template may be adapted and the template changes logged in the compression record.

3.3 Building Blocks

In order to implement the template matcher, sub-threshold analogue VLSI building blocks were designed. All transistors in the building blocks operate exclusively in weak inversion. We do not have the space to describe all of the building blocks, so we will focus here on the linear transform and squarer cells.

3.3.1 Linear Transform Cell

The linear transform (LT) cell consists of three linearised differential pairs [7] with their biases controlled by the coefficient DACs (CDACs) (see Figure 2(a)). The nature of the linearisation is controlled by the ratio of the aspect ratios of M3 to M5 and M4 to M6. Methods for choosing this ratio are discussed in [5]. Denoting the aspect ratio of a transistor by S, we chose S3/S5 = S4/S6 = 4. This introduces some ripple in the transconductance while increasing the asymptotic saturation voltage to 4nU_T, compared to nU_T for the ordinary differential pair. Signed coefficients are achieved by switches at the outputs of the differential pairs. The template DACs (TDACs) have differential outputs to form the difference y(n) - w(n), where w(n) is the nth template value.

3.3.2 Squaring Cell

The squaring function must meet the following design constraints. It should have current inputs and outputs in order to avoid linear current-to-voltage conversion at low currents.
The squared current must be normalised to the original linear range to avoid excessive power consumption. The squaring function should avoid the MOS square-law approach in order to conserve space and power, and the available voltage range should be 3.3 V rail to rail.

Figure 2: (a) Circuit diagram of one of the three linearised differential pairs in the LT cell. (b) Circuit diagram of the squarer (SQ cell) and the summing node.

The choices available are then restricted to weak inversion circuits. The circuit used (see Figure 2(b)) relies on the translinear principle [9]. Here, loops of MOS gate-source diode structures operating in weak inversion are used to form a normalised squared current which is summed to form the final normalised output. The translinear loops are implemented with P-type transistors in separate N-wells to avoid the body effect. Positive and negative inputs are squared separately using the RCLK signals and then added at the output.

3.4 Circuit Performance

Table 1: Summary of electrical specifications of the chip.

  Item                        Conditions                            Value
  Template dimension                                                30
  Adaptation coefficients                                           3
  DAC precision               Excludes squarer error gain control   7 bits
  Max. error per dimension^a  CDACx=64, DC BBD, w.r.t. TDACs        2 bits
  LSB bias                    Weighted lateral PNP                  2 nA
  Power consumption           TDACs=CDAC1=64, duty cycle^b = 3.2%   2.5 µW

  ^a Excludes error at the first CDAC0 stage.
  ^b For 1 bpm, chip biased up 8/250 of the time.

We provide three measures of the performance of the chip, along with a summary of its basic electrical characteristics, which is shown in Table 1. The first measure characterises the accuracy of the template matching function relative to the available precision of the template. This is summarised by the maximum error per dimension in Table 1, which was produced by inputting a zero-offset DC signal into the BBD and setting each CDAC in turn to one half of its maximum value.
The TDACs were then adjusted so as to minimise the output of the squarer. The resulting TDAC values therefore indicate the accumulated effects of transistor mismatches through each path to the squarer output. The curves generated are averages over 80 trials to remove noise influences (whereas the classification performance shown in Table 3 includes such influences). The curves showed that, except for the input stage corresponding to CDAC0 (stage 30), the accumulated mismatches influence the two least significant bits of the TDACs. A larger error of 4 bits for the first stage feeding CDAC0 was due to a design oversight of not providing a dummy capacitive load at the input end of the BBD (stage 30 of CDAC0 derives its input from the input BBD cell, which does not have the full capacitive loading of three linearised differential pairs as on the rest of the cells).

Table 2: Relative impact on the error output of the chip of the adaptation steps of alignment, amplitude and phase correction for patient No. 2's ST rhythm. The errors are normalised to the non-aligned error. A numerical simulation is provided for comparison with the chip performance.

  Adaptation step   Chip error   Std. dev.   Simulation error   Std. dev.
  No align          1.0          0.04        1.0                0.28
  Align             0.31         0.07        0.41               0.35
  Amplitude         0.16         0.05        0.37               0.22
  Phase             0.07         0.01        0.32               0.16

The second performance measure uses real heart patients' ICEG Sinus Tachycardia (ST) data. Table 2 shows the normalised output error of the chip averaged over 107 heart beats, each compared to the 10th beat in the series. The normalised error was measured from a mirrored version of the current at the output of the chip. The adaptation steps shown in the table are as follows. "No align" means that the error for the template match is determined only by the approximate alignment provided by a numerical simulation of the beat detector of the ICD.
"Align" corresponds to coarse alignment where the matching error is calculated up to two samples either side to determine the best positioning of the input in the BBD. "Amplitude" corresponds to adaptation of the amplitude coefficient by adjustment ofCDAC1. "Phase" corresponds to adaptation of the difference between CDAC2 and CDACO. Each of the adaptations reduces the error of the match with the coarse alignment being most significant. An idealised limited precision numerical simulation of the error calculation is also provided in the table for comparison. It can be seen that the amplitude and phase adaptation steps lower the relative error more for the chip than in the simulation. This is most likely due to the adaptation on the chip also compensating for the analogue noise and imprecision as well as the variability of the original data. The third performance measure illustrates the ability of the chip to solve a blind classification problem and is summarised in Table 3. The safe rhythm of the patient is Sinus Tachycardia (ST). For each patient one beat is chosen at random as the template and is loaded into the TDACs of the chip. The 20 beats subsequent to the chosen template are then used to determine the average error between templates after adaptation. Twice this error is then used as the classifier threshold for "safe" versus "unknown". The ST and VT data sets for the patient are then passed through the chip and classified giving the column "% Correct chip". For comparison the expected best performance for the data set are also reproduced in the table from previous work by the authors [3]. The results indicate that a very simple blind classification algorithm when combined with the adaptive template matching capabilities of the chip shows good performance for 4 out of 5 patients. 4 CONCLUSION We have presented a micropower learning vector quantization system that can provide hardware support for both signal classification and compression of ICEG signals. 
The analogue block can be used to implement several different classification and compression algorithms, depending on how the template matching capability is utilised.

Table 3: Performance of the chip on a blind classification task for 5 patients with Ventricular Tachycardia (VT) with 1:1 retrograde conduction, compared to classification bounds. ^a The R point search interval was increased to 4 for this patient.

By providing significant compression capability in an ICD, a larger database of natural-onset cardiac arrhythmia should become available, leading to improved designs of ICD-based adaptive classification and compression systems.

5 ACKNOWLEDGEMENTS

The work in this paper was funded by the Australian Research Council and Telectronics Pacing Systems Ltd, Sydney, Australia.

References

[1] G. Cauwenberghs and V. Pedroni. A Charge-Based CMOS Parallel Analog Vector Quantiser. In NIPS, volume 7, pages 779-786. MIT Press, 1995.
[2] R.J. Coggins. Low Power Signal Compression and Classification for Implantable Defibrillators. PhD thesis, University of Sydney, Sydney, Australia, 1996.
[3] R.J. Coggins and M.A. Jabri. Classification and Compression of ICEGs using Gaussian Mixture Models. In J. Principe, L. Giles, N. Morgan, and E. Wilson, editors, Neural Networks for Signal Processing, volume 7, pages 226-235. IEEE, 1997.
[4] R.J. Coggins, M.A. Jabri, B.G. Flower, and S.J. Pickard. A Hybrid Analog and Digital VLSI Neural Network for Intracardiac Morphology Classification. IEEE Journal of Solid-State Circuits, 30(5):542-550, May 1995.
[5] M. Furth and A. Andreou. Linearised Differential Transconductors in Subthreshold CMOS. Electronics Letters, 31(7):545-547, 1995.
[6] M.A. Jabri and B.G. Flower. Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks. IEEE Transactions on Neural Networks, 3(1):154-157, January 1992.
[7] F.
Krummenacher and N. Joehl. A 4 MHz CMOS Continuous-Time Filter with On-Chip Automatic Tuning. IEEE Journal of Solid-State Circuits, 23(3):750-758, June 1988.
[8] G. Nave and A. Cohen. ECG Compression Using Long-Term Prediction. IEEE Trans. Biomed. Eng., 40(9):877-885, 1993.
[9] E. Seevinck. Analysis and Synthesis of Translinear Integrated Circuits. Elsevier, 1988.
[10] G.T. Tyson, S. Fallahi, and A.A. Abidi. An 8b CMOS Vector A/D Converter. In Proceedings of the International Solid-State Circuits Conference, pages 38-39, 1993.
| 1998 | 28 | 1,524 |
Convergence Rates of Algorithms for Visual Search: Detecting Visual Contours

A.L. Yuille
Smith-Kettlewell Inst.
San Francisco, CA 94115

James M. Coughlan
Smith-Kettlewell Inst.
San Francisco, CA 94115

Abstract

This paper formulates the problem of visual search as Bayesian inference and defines a Bayesian ensemble of problem instances. In particular, we address the problem of the detection of visual contours in noise/clutter by optimizing a global criterion which combines local intensity and geometry information. We analyze the convergence rates of A* search algorithms using results from information theory to bound the probability of rare events within the Bayesian ensemble. This analysis determines characteristics of the domain, which we call order parameters, that determine the convergence rates. In particular, we present a specific admissible A* algorithm with pruning which converges, with high probability, in expected time O(N) in the size of the problem. In addition, we briefly summarize extensions of this work which address fundamental limits of target contour detectability (i.e. algorithm-independent results) and the use of non-admissible heuristics.

1 Introduction

Many problems in vision, such as the detection of edges and object boundaries in noise/clutter, see figure (1), require the use of search algorithms. Though many algorithms have been proposed, see Yuille and Coughlan (1997) for a review, none of them is clearly optimal and it is difficult to judge their relative effectiveness. One approach has been to compare the results of algorithms on a representative dataset of images. This is clearly highly desirable, though determining a representative dataset is often rather subjective. In this paper we are specifically interested in the convergence rates of A* algorithms (Pearl 1984). It can be shown (Yuille and Coughlan 1997) that many algorithms proposed to detect visual contours are special cases of A*.
We would like to understand what characteristics of the problem domain determine the convergence rates.

Figure 1: The difficulty of detecting the target path in clutter depends, by our theory (Yuille and Coughlan 1998), on the order parameter K: the larger K, the less computation is required. Left: an easy detection task with K = 3.1. Middle: a hard detection task with K = 1.6. Right: an impossible task with K = -0.7.

We formulate the problem of detecting object curves in images as one of statistical estimation. This assumes statistical knowledge of the images and the curves, see section (2). Such statistical knowledge has often been used in computer vision for determining optimization criteria to be minimized. We want to go one step further and use this statistical knowledge to determine good search strategies by defining a Bayesian ensemble of problem instances. For this ensemble, we can prove that certain curve and boundary detection algorithms achieve, with high probability, expected convergence in time linear in the size of the problem. Our analysis helps determine important characteristics of the problem, which we call order parameters, which quantify the difficulty of the problem. The next section (2) of this paper describes the basic statistical assumptions we make about the domain and describes the mathematical tools used in the remaining sections. In section (3) we specify our search algorithm and establish convergence rates. We conclude by placing this work in a larger context and summarizing recent extensions.

2 Statistical Background

Our approach assumes that both the intensity properties and the geometrical shapes of the target path (i.e. the edge contour) can be determined statistically. This path can be considered to be a set of elementary path segments joined together. We first consider the intensity properties along the edge and then the geometric properties.
The set of all possible paths can be represented by a tree structure, see figure (2). The image properties at segments lying on the path are assumed to differ, in a statistical sense, from those off the path. More precisely, we can design a filter φ(.) with output {Y_x = φ(I(x))} for a segment at point x so that:

P(Y_x) = Pon(Y_x), if x lies on the true path,
P(Y_x) = Poff(Y_x), if x lies off the true path.   (1)

For example, we can think of the {Y_x} as being values of the edge strength at point x, and Pon, Poff as being the probability distributions of the response of φ(.) on and off an edge. The set of possible values of the random variable Y_x is the alphabet, with alphabet size M (i.e. Y_x can take any of M possible values). See (Geman and Jedynak 1996) for examples of distributions for Pon, Poff used in computer vision applications. We now consider the geometry of the target contour. We require the path to be made up of connected segments x_1, x_2, ..., x_N. There will be a Markov probability distribution Pg(x_{i+1}|x_i) which specifies prior probabilistic knowledge of the target. It is convenient, in terms of the graph search algorithms we will use, to consider that each point x has a set of Q neighbours. Following terminology from graph theory, we refer to Q as the branching factor. We will assume that the distribution Pg depends only on the relative positions of x_{i+1} and x_i. In other words, Pg(x_{i+1}|x_i) = PΔg(x_{i+1} - x_i). An important special case is when the probability distribution is uniform for all branches (i.e. PΔg(Δx) = U(Δx) = 1/Q, for all Δx). The joint distribution P(X, Y) of the road geometry X and filter responses Y determines the Bayesian ensemble. By standard Bayesian analysis, the optimal path X* = {x*_1, ..., x*_N} maximizes the log posterior:

E(X) = Σ_i { log [Pon(Y(x_i)) / Poff(Y(x_i))] + log [PΔg(x_{i+1} - x_i) / U(x_{i+1} - x_i)] },   (2)

where the sum over i is taken over all points on the target.
U(x_{i+1} - x_i) is the uniform distribution and its presence merely changes the log posterior E(X) by a constant value. It is included to make the form of the intensity and geometric terms similar, which simplifies our later analysis. We will refer to E(X) as the reward of the path X, which is the sum of the intensity rewards log [Pon(Y(x_i))/Poff(Y(x_i))] and the geometric rewards log [PΔg(x_{i+1} - x_i)/U(x_{i+1} - x_i)]. It is important to emphasize that our results can be extended to higher-order Markov chain models (provided they are shift-invariant). We can, for example, define the x variable to represent the spatial orientation and position of a small edge segment. This allows our theory to apply to models, such as snakes, used in recent successful vision applications (Geman and Jedynak 1996). (It is straightforward to transform the standard energy function formulation of snakes into a Markov chain by discretizing and replacing the derivatives by differences. The smoothness constraints, such as membrane and thin-plate terms, transform into first- and second-order Markov chain connections respectively.) Recent work by Zhu (1998) shows that Markov chain models of this type can be learnt using Minimax Entropy Learning theory from a representative set of examples. Indeed, Zhu goes further by demonstrating that other Gestalt grouping laws can be expressed in this framework and learnt from representative data. Most Bayesian vision theories have stopped at this point. The statistics of the problem domain are used only to determine the optimization criterion to be minimized and are not exploited to analyze the complexity of algorithms for performing the optimization. In this paper, we go a stage further. We use the statistics of the problem domain to define a Bayesian ensemble and hence to determine the effectiveness of algorithms for optimizing criteria such as (2).
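The reward E(X), combining the intensity log-likelihood ratios with the geometric rewards relative to the uniform prior U = 1/Q, can be sketched with small discrete distributions standing in for the quantized filter-response and geometry models (the particular numbers are illustrative, not from the paper):

```python
import math

def path_reward(y_vals, moves, p_on, p_off, p_geom, Q):
    """E(X) of eq. (2): sum over segments of log2[Pon/Poff] plus
    log2[P_geom/U] with U = 1/Q (base-2 logs to match the later bounds)."""
    reward = sum(math.log2(p_on[y] / p_off[y]) for y in y_vals)
    reward += sum(math.log2(p_geom[m] * Q) for m in moves)
    return reward

p_on = {0: 0.2, 1: 0.8}                    # edge-strength distribution on the path
p_off = {0: 0.8, 1: 0.2}                   # ... and off the path
p_geom = {-1: 1 / 3, 0: 1 / 3, 1: 1 / 3}   # uniform prior: geometric term vanishes
r = path_reward([1, 1], [0, -1], p_on, p_off, p_geom, Q=3)
```

With the uniform geometric prior the second sum is identically zero, which makes concrete the remark that U only shifts E(X) by a constant.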
To do this requires the use of Sanov's theorem for calculating the probability of rare events (Cover and Thomas 1991). For the road tracking problem this can be re-expressed as the following theorem, derived in (Yuille and Coughlan 1998):

Theorem 1. The probabilities that the spatially averaged log-likelihoods on, and off, the true curve are above, or below, threshold T are bounded above as follows:

Pr{ (1/n) Σ_{i=1}^n {log [Pon(Y(x_i))/Poff(Y(x_i))]}_on < T } ≤ (n+1)^M 2^{-n D(PT||Pon)},   (3)
Pr{ (1/n) Σ_{i=1}^n {log [Pon(Y(x_i))/Poff(Y(x_i))]}_off > T } ≤ (n+1)^M 2^{-n D(PT||Poff)},   (4)

where the subscripts on and off mean that the data is generated by Pon or Poff respectively, and PT(y) = Pon^{λ(T)}(y) Poff^{1-λ(T)}(y) / Z(T), where 0 ≤ λ(T) ≤ 1 is a scalar which depends on the threshold T and Z(T) is a normalization factor. The value of λ(T) is determined by the constraint Σ_y PT(y) log [Pon(y)/Poff(y)] = T.

In the next section, we will use Theorem 1 to determine a criterion for pruning the search based on comparing the intensity reward to a threshold T (pruning will also be done using the geometric reward). The choice of T involves a trade-off. If T is large (i.e. close to D(Pon||Poff)) then we will rapidly reject false paths, but we might also prune out the target (true) path. Conversely, if T is small (close to -D(Poff||Pon)) then it is unlikely we will prune out the target path, but we may waste a lot of time exploring false paths. In this paper we choose T large and write the fall-off factors (i.e. the exponents in the bounds of equations (3,4)) as D(PT||Pon) = ε1(T), D(PT||Poff) = D(Pon||Poff) - ε2(T), where ε1(T), ε2(T) are positive and (ε1(T), ε2(T)) → (0,0) as T → D(Pon||Poff). We perform a similar analysis for the geometric rewards by substituting PΔg, U for Pon, Poff. We choose a threshold T̂ satisfying -D(U||PΔg) < T̂ < D(PΔg||U). The results of Theorem 1 apply with the obvious substitutions. In particular, the alphabet factor becomes Q (the branching factor).
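The tilted distribution PT and the constraint fixing λ(T) in Theorem 1 can be computed directly; since the tilted mean is monotone in λ for this exponential family, a simple bisection recovers λ(T) (toy binary alphabet, illustrative numbers):

```python
import math

def tilted(p_on, p_off, lam):
    """PT(y) = p_on(y)**lam * p_off(y)**(1-lam) / Z, together with the
    induced threshold T = sum_y PT(y) * log2(p_on(y)/p_off(y))."""
    w = {y: p_on[y] ** lam * p_off[y] ** (1.0 - lam) for y in p_on}
    Z = sum(w.values())
    pT = {y: wy / Z for y, wy in w.items()}
    T = sum(pT[y] * math.log2(p_on[y] / p_off[y]) for y in pT)
    return pT, T

def solve_lambda(p_on, p_off, T_target, iters=60):
    """Bisect lam in [0, 1] until the tilted mean matches T_target."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if tilted(p_on, p_off, mid)[1] < T_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_on = {0: 0.2, 1: 0.8}
p_off = {0: 0.8, 1: 0.2}
```

At λ = 1 the induced threshold equals D(Pon||Poff), and at λ = 0 it equals -D(Poff||Pon), matching the admissible range of T.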
Once again, in this paper, we choose T̂ to be large and obtain fall-off factors D(PT̂||PΔg) = ε̂1(T̂), D(PT̂||U) = D(PΔg||U) - ε̂2(T̂).

3 Tree Search: A*, heuristics, and block pruning

We now consider a specific example, motivated by Geman and Jedynak (1996), of searching for a path through a search tree. In Geman and Jedynak the path corresponds to a road in an aerial image, and they assume that they are given an initial point and direction on the target path. They have a branching factor Q = 3 and, in their first version, the prior probability of branching is taken to be the uniform distribution (later they consider more sophisticated priors). They assume that no path segments overlap, which means that the search space is a tree of size Q^N, where N is the size of the problem (i.e. the longest path length). The size of the problem requires an algorithm that converges in O(N) time, and they demonstrate an algorithm which empirically performs at this speed. But no proof of convergence rates is given in their paper. It can be shown, see (Yuille and Coughlan 1997), that the Geman and Jedynak algorithm is a close approximation to A* with pruning. (Observe that Geman and Jedynak's tree representation is a simplifying assumption of the Bayesian model which assumes that once a path diverges from the true path it can never recover, although we stress that the algorithm is able to recover from false starts; for more details see Coughlan and Yuille 1998.) We consider an algorithm which uses an admissible A* heuristic and a pruning mechanism. The idea is to examine the paths chosen by the A* heuristic. As the length of a candidate path reaches an integer multiple of N0, we prune it based on its intensity reward and its geometric reward evaluated on the previous N0 segments, which we call a segment block. The reasoning is that few false paths will survive this pruning for long, but the target path will survive with high probability.
We prune on the intensity by eliminating all paths whose intensity reward, averaged over the last N0 segments, is below a threshold T (recall that -D(Poff||Pon) < T < D(Pon||Poff), and we will usually select T to take values close to D(Pon||Poff)). In addition, we prune on the geometry by eliminating all paths whose geometric reward, averaged over the last N0 segments, is below T̂ (where -D(U||PΔg) < T̂ < D(PΔg||U), with T̂ typically being close to D(PΔg||U)). More precisely, we discard a path provided (for any integer z ≥ 0):

(1/N0) Σ_{i=zN0+1}^{(z+1)N0} log [Pon(y_i)/Poff(y_i)] < T,   or   (1/N0) Σ_{i=zN0+1}^{(z+1)N0} log [PΔg(Δx_i)/U(Δx_i)] < T̂.   (5)

There are two important issues to address: (i) with what probability will the algorithm converge? (ii) how long will we expect it to take to converge? The next two subsections put bounds on these issues.

3.1 Probability of Convergence

Because of the pruning, there is a chance that no paths will survive the pruning. To put a bound on this, we calculate the probability that the target (true) path survives the pruning. This gives a lower bound on the probability of convergence (because there could be false paths which survive even if the target path is mistakenly pruned out). The pruning rules remove path segments for which the intensity reward r_I or the geometric reward r_g fails the pruning test. The probability of failure by removing a block segment of the true path, with rewards r_I, r_g, is

Pr(r_I < T or r_g < T̂) ≤ Pr(r_I < T) + Pr(r_g < T̂) ≤ (N0+1)^M 2^{-N0 ε1(T)} + (N0+1)^Q 2^{-N0 ε̂1(T̂)},

where we have used Theorem 1 to put bounds on the probabilities. The probability of pruning out any N0 segments of the true path can therefore be made arbitrarily small by choosing N0, T, T̂ so as to make N0 ε1(T) and N0 ε̂1(T̂) large. It should be emphasized that the algorithm will not necessarily converge to the exact target path.
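The block pruning test of equation (5) is simple to state in code; as before, the discrete distributions are illustrative stand-ins for the quantized intensity and geometry models:

```python
import math

def survives_block(y_block, move_block, p_on, p_off, p_geom, Q, T, T_hat):
    """A candidate path survives a block check only if BOTH block-averaged
    rewards stay at or above their thresholds (the negation of eq. (5))."""
    n0 = len(y_block)
    r_int = sum(math.log2(p_on[y] / p_off[y]) for y in y_block) / n0
    r_geo = sum(math.log2(p_geom[m] * Q) for m in move_block) / n0
    return r_int >= T and r_geo >= T_hat

p_on = {0: 0.2, 1: 0.8}
p_off = {0: 0.8, 1: 0.2}
p_geom = {-1: 1 / 3, 0: 1 / 3, 1: 1 / 3}
on_like = survives_block([1, 1], [0, 0], p_on, p_off, p_geom, 3, T=1.0, T_hat=-0.1)
off_like = survives_block([0, 0], [0, 0], p_on, p_off, p_geom, 3, T=1.0, T_hat=-0.1)
```

A block of "on-path-like" measurements passes while a block of "off-path-like" measurements is discarded, which is exactly the trade-off the thresholds T and T̂ control.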
The admissible nature of the heuristic means that the algorithm will converge to the path with highest reward which has survived the pruning. It is highly probable that this path is close to the target path. Our recent results (Coughlan and Yuille 1998, Yuille and Coughlan 1998) enable us to quantify this claim.

3.2 Bounding the Number of False Paths

Suppose we face a Q-nary tree. We can order the false paths by the stage at which they diverge from the target (true) path, see figure (2). For example, at the first branch point the target path lies on only one of the Q branches and there are Q - 1 false branches, which generate the first set of false paths F1. Now consider all the Q - 1 false branches at the second target branch; these generate set F2. As we follow along the true path we keep generating these false sets Fi. The set of all paths is therefore the target path plus the union of the Fi (i = 1, ..., N). To determine convergence rates we must bound the amount of time we spend searching the Fi. If the expected time to search each Fi is constant, then searching for the target path will take at most constant · N steps. Consider the set Fi of false paths which leave the true path at stage i. We will apply our analysis to block segments of Fi which are completely off the true path. If (i - 1) is an integer multiple of N0, then all block segments of Fi will satisfy this condition. Otherwise, we will start our analysis at the next block and make the worst-case assumption that all path segments up to this next block will be searched. Since the distance to the next block is at most N0 - 1, this gives a maximum number of Q^{N0-1} starting blocks for any branch of Fi. Each Fi also has Q - 1 branches, and so this gives a generous upper bound of (Q-1) Q^{N0-1} starting blocks for each Fi.

Figure 2: The target path is shown as the heavy line. The false path sets are labelled F1, F2, etc.,
with the numbering depending on how soon they leave the target path. The branching factor is Q = 3.

For each starting block, we wish to compute (or bound) the expected number of blocks that are explored thereafter. This requires computing the fertility of a block: the average number of paths in the block that survive pruning. Provided the fertility is smaller than one, we can then apply results from the theory of branching processes to determine the expected number of blocks searched in Fi. The fertility q is the number of paths that survive the geometric pruning times the probability that each survives the intensity pruning. This can be bounded (using Theorem 1) by q ≤ q̄, where:

q̄ = Q^{N0} (N0+1)^Q 2^{-N0 {D(PΔg||U) - ε̂2(T̂)}} (N0+1)^M 2^{-N0 {D(Pon||Poff) - ε2(T)}} = (N0+1)^{Q+M} 2^{-N0 {D(Pon||Poff) - H(PΔg) - ε2(T) - ε̂2(T̂)}},   (6)

where we have used the fact that D(PΔg||U) = log Q - H(PΔg). Observe that the condition q̄ < 1 can be satisfied provided D(Pon||Poff) - H(PΔg) > 0. This condition is intuitive: it requires that the edge detector information, quantified by D(Pon||Poff), must be greater than the uncertainty in the geometry, measured by H(PΔg). In other words, the better the edge detector and the more predictable the path geometry, the smaller q̄ will be. We now apply the theory of branching processes to determine the expected number of blocks explored from a starting block in Fi: Σ_{z=0}^∞ q̄^z = 1/(1-q̄). The number of branches of Fi is (Q-1), the total number of segments explored per block is at most Q^{N0}, and we explore at most Q^{N0-1} segments before reaching the first block. The total number of Fi is N. Therefore the total number of segments wastefully explored is at most N(Q-1) Q^{2N0-1}/(1-q̄). We summarize this result in a theorem:

Theorem 2. Provided q̄ = (N0+1)^{Q+M} 2^{-N0 K} < 1, where the order parameter K = D(Pon||Poff) - H(PΔg) - ε2(T) - ε̂2(T̂), the expected number of false segments explored is at most N(Q-1) Q^{2N0-1}/(1-q̄).
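Theorem 2's condition is easy to explore numerically. The sketch below evaluates the fertility bound q̄ = (N0+1)^{Q+M} 2^{-N0 K} and the resulting expected number of blocks per starting block, 1/(1-q̄); the parameter values are illustrative, not taken from the paper.

```python
def fertility_bound(N0, Q, M, K):
    """q_bar from eq. (6) / Theorem 2, with K the order parameter
    D(Pon||Poff) - H(P_geom) - eps2 - eps2_hat."""
    return (N0 + 1) ** (Q + M) * 2.0 ** (-N0 * K)

def expected_blocks(N0, Q, M, K):
    """Mean of the geometric series sum_z q_bar**z = 1/(1 - q_bar),
    valid only in the subcritical regime q_bar < 1."""
    q_bar = fertility_bound(N0, Q, M, K)
    if q_bar >= 1.0:
        raise ValueError("supercritical: K too small for this block size")
    return 1.0 / (1.0 - q_bar)
```

Note that for fixed alphabet sizes the polynomial factor (N0+1)^{Q+M} is eventually dominated by 2^{-N0 K} whenever K > 0, so a large enough block size always makes the process subcritical.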
Comment. The requirement that q̄ < 1 is chiefly determined by the order parameter K = D(Pon||Poff) - H(PΔg) - ε2(T) - ε̂2(T̂). Our convergence proof requires that K > 0 and will break down if K < 0. Is this a limitation of our proof? Or does it correspond to a fundamental difficulty in solving this tracking problem? In more recent work (Yuille and Coughlan 1998) we extend the concept of order parameters and show that they characterize the difficulty of the visual search problem independently of the algorithm. In other words, as K → 0 the problem becomes impossible to solve by any algorithm. There will be too many false paths which have better rewards than the target path. As K → 0 there is a phase transition in the ease of solving the problem.

4 Conclusion

Our analysis shows it is possible to detect certain types of image contours in linear expected time (with given starting points). We have shown how the convergence rates depend on order parameters which characterize the problem domain. In particular, the entropy of the geometric prior and the Kullback-Leibler distance between Pon and Poff allow us to quantify intuitions about the power of geometrical assumptions and edge detectors to solve these tasks. Our more recent work (Yuille and Coughlan 1998) has extended this work by showing that the order parameters can be used to specify the intrinsic (algorithm-independent) difficulty of the search problem, and that phase transitions occur when these order parameters take critical values. In addition, we have proved convergence rates for A* algorithms which use inadmissible heuristics or combinations of heuristics and pruning (Coughlan and Yuille 1998). As shown in (Yuille and Coughlan 1997), many of the search algorithms proposed to solve vision search problems, such as (Geman and Jedynak 1996), are special cases of A* (or close approximations).
We therefore hope that the results of this paper will throw light on the success of these algorithms and may suggest practical improvements and speed-ups.

Acknowledgements

We want to acknowledge funding from NSF with award number IRI-9700446, from the Center for Imaging Sciences funded by ARO DAAH049510494, and from an ASOSRF contract 49620-98-1-0197 to ALY. We would like to thank L. Xu, D. Snow, S. Konishi, D. Geiger, J. Malik, and D. Forsyth for helpful discussions.

References

[1] J.M. Coughlan and A.L. Yuille. "Bayesian A* Tree Search with Expected O(N) Convergence Rates for Road Tracking." Submitted to Artificial Intelligence, 1998.
[2] T.M. Cover and J.A. Thomas. Elements of Information Theory. Wiley Interscience Press, New York, 1991.
[3] D. Geman and B. Jedynak. "An active testing model for tracking roads in satellite images". IEEE Trans. Patt. Anal. and Machine Intel., Vol. 18, No. 1, pp. 1-14, January 1996.
[4] J. Pearl. Heuristics. Addison-Wesley, 1984.
[5] A.L. Yuille and J. Coughlan. "Twenty Questions, Focus of Attention, and A*". In Energy Minimization Methods in Computer Vision and Pattern Recognition, Ed. M. Pelillo and E. Hancock. Springer-Verlag (Lecture Notes in Computer Science 1223), 1997.
[6] A.L. Yuille and J.M. Coughlan. "Visual Search: Fundamental Bounds, Order Parameters, Phase Transitions, and Convergence Rates." Submitted to Pattern Analysis and Machine Intelligence, 1998.
[7] S.C. Zhu. "Embedding Gestalt Laws in Markov Random Fields". Submitted to IEEE Computer Society Workshop on Perceptual Organization in Computer Vision.
Learning a Continuous Hidden Variable Model for Binary Data Daniel D. Lee Bell Laboratories Lucent Technologies Murray Hill, NJ 07974 ddlee@bell-labs.com Haim Sompolinsky Racah Institute of Physics and Center for Neural Computation Hebrew University Jerusalem, 91904, Israel haim@fiz.huji.ac.il Abstract A directed generative model for binary data using a small number of hidden continuous units is investigated. A clipping nonlinearity distinguishes the model from conventional principal components analysis. The relationships between the correlations of the underlying continuous Gaussian variables and the binary output variables are utilized to learn the appropriate weights of the network. The advantages of this approach are illustrated on a translationally invariant binary distribution and on handwritten digit images. Introduction Principal Components Analysis (PCA) is a widely used statistical technique for representing data with a large number of variables [1]. It is based upon the assumption that although the data is embedded in a high dimensional vector space, most of the variability in the data is captured by a much lower dimensional manifold. In particular for PCA, this manifold is described by a linear hyperplane whose characteristic directions are given by the eigenvectors of the correlation matrix with the largest eigenvalues. The success of PCA and closely related techniques such as Factor Analysis (FA) and PCA mixtures clearly indicate that much real world data exhibit the low dimensional manifold structure assumed by these models [2, 3]. However, the linear manifold structure of PCA is not appropriate for data with binary valued variables. Binary values commonly occur in data such as computer bit streams, black-and-white images, on-off outputs of feature detectors, and electrophysiological spike train data [4].
The Boltzmann machine is a neural network model that incorporates hidden binary spin variables, and in principle, it should be able to model binary data with arbitrary spin correlations [5]. Unfortunately, the computational time needed for training a Boltzmann machine renders it impractical for most applications. Figure 1: Generative model for N-dimensional binary data using a small number P of continuous hidden variables. In these proceedings, we present a model that uses a small number of continuous hidden variables rather than hidden binary variables to capture the variability of binary valued visible data. The generative model differs from conventional PCA because it incorporates a clipping nonlinearity. The resulting spin configurations have an entropy related to the number of hidden variables used, and the resulting states are connected by small numbers of spin flips. The learning algorithm is particularly simple, and is related to PCA by a scalar transformation of the correlation matrix. Generative Model Figure 1 shows a schematic diagram of the generative process. As in PCA, the model assumes that the data is generated by a small number P of continuous hidden variables y_i. Each of the hidden variables is assumed to be drawn independently from a normal distribution with unit variance: P(y_i) = exp(−y_i²/2)/√(2π). (1) The continuous hidden variables are combined using the feedforward weights W_ij, and the N binary output units are then calculated using the sign of the feedforward activations: x_i = Σ_{j=1}^P W_ij y_j, (2) s_i = sgn(x_i). (3) Since binary data is commonly obtained by thresholding, it seems reasonable that a proper generative model should incorporate such a clipping nonlinearity. The generative process is similar to that of a sigmoidal belief network with continuous hidden units at zero temperature.
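As a concrete illustration of Eqs. (1)-(3), the generative process can be sampled in a few lines; the sizes, seed, and weight values below are arbitrary placeholders, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 8, 2                        # visible binary units, hidden continuous units
W = rng.standard_normal((N, P))    # feedforward weights (arbitrary example values)

def sample_binary(W, n_samples, rng):
    """Draw s = sgn(W y) with y ~ N(0, I_P), i.e. Eqs. (1)-(3)."""
    y = rng.standard_normal((n_samples, W.shape[1]))  # hidden variables, Eq. (1)
    x = y @ W.T                                       # Gaussian activations, Eq. (2)
    return np.sign(x).astype(int)                     # clipping nonlinearity, Eq. (3)

S = sample_binary(W, 1000, rng)    # 1000 spin configurations in {-1, +1}^N
```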
The nonlinearity will alter the relationship between the correlations of the binary variables and the weight matrix W as described below. The real-valued Gaussian variables x_i are exactly analogous to the visible variables of conventional PCA. They lie on a linear hyperplane determined by the span of the matrix W, and their correlation matrix is given by: C^xx = ⟨x x^T⟩ = W W^T. (4) Figure 2: Binary spin configurations s_i in the vector space of continuous hidden variables y_j with P = 2 and N = 3. By construction, the correlation matrix C^xx has rank P, which is much smaller than the number of components N. Now consider the binary output variables s_i = sgn(x_i). Their correlations can be calculated from the probability distribution of the Gaussian variables x_i: (C^ss)_ij = ⟨s_i s_j⟩ = ∫ Π_k dy_k P(y_k) sgn(x_i) sgn(x_j). (5) The integrals in Equation 5 can be done analytically, and yield the surprisingly simple result: (C^ss)_ij = (2/π) sin⁻¹[ C^xx_ij / √(C^xx_ii C^xx_jj) ]. (7) Thus, the correlations of the clipped binary variables C^ss are related to the correlations of the corresponding Gaussian variables C^xx through the nonlinear arcsine function. The normalization in the denominator of the arcsine argument reflects the fact that the sign function is unchanged by a scale change in the Gaussian variables. Although the correlation matrix C^ss and the generating correlation matrix C^xx are easily related through Equation 7, they have qualitatively very different properties. In general, the correlation matrix C^ss will no longer have the low rank structure of C^xx.
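Equation 7 is easy to verify numerically; the sketch below (with an arbitrary random W) compares the Monte Carlo estimate of ⟨s_i s_j⟩ against the (2/π) arcsin prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 5, 2
W = rng.standard_normal((N, P))
Cxx = W @ W.T                                  # Gaussian correlations, Eq. (4)

# Predicted binary correlations from the arcsine relation, Eq. (7)
d = np.sqrt(np.diag(Cxx))
Css_pred = (2 / np.pi) * np.arcsin(Cxx / np.outer(d, d))

# Empirical binary correlations from sampled spin configurations
y = rng.standard_normal((200_000, P))
S = np.sign(y @ W.T)
Css_emp = (S.T @ S) / len(S)
```

With 200,000 samples the two matrices agree to within Monte Carlo error of a few thousandths per entry.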
As illustrated by the translationally invariant example in the next section, the spectrum of C^ss may contain a whole continuum of eigenvalues even though C^xx has only a few nonzero eigenvalues. PCA is typically used for dimensionality reduction of real variables; can this model be used for compressing the binary outputs s_i? Although the output correlations C^ss no longer display the low rank structure of the generating C^xx, a more appropriate measure of data compression is the entropy of the binary output states. Consider how many of the 2^N possible binary states will be generated by the clipping process. The equation x_i = Σ_j W_ij y_j = 0 defines a P−1 dimensional hyperplane in the P-dimensional state space of hidden variables y_j, which are shown as dashed lines in Figure 2. These hyperplanes partition the half-space where s_i = +1 from the region where s_i = −1. Each of the N spin variables will have such a dividing hyperplane in this P-dimensional state space, and all of these hyperplanes will generically be unique. Thus, the total number of spin configurations s_i is determined by the number of cells bounded by N dividing hyperplanes in P dimensions. The number of such cells is approximately N^P for N ≫ P, a well-known result from perceptrons [6]. To leading order for large N, the entropy of the binary states generated by this process is then given by S = P log N. Thus, the entropy of the spin configurations generated by this model is directly proportional to the number of hidden variables P. Figure 3: Translationally invariant binary spin distribution with N = 256 units. Representative samples from the distribution are illustrated on the left, while the eigenvalue spectra of C^ss and C^xx are plotted on the right.
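The cell-counting argument can be checked by brute force for small P: sample many hidden vectors y and count how many distinct sign patterns appear. For P = 2, N generic hyperplanes through the origin cut the plane into exactly 2N cells (Cover's result [6] gives 2 Σ_{k<P} C(N−1, k) in general), so the observed count must stay at or below that bound; the sizes and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 20, 2
W = rng.standard_normal((N, P))    # N dividing hyperplanes in P dimensions

y = rng.standard_normal((100_000, P))              # dense sampling of hidden space
patterns = {tuple(row) for row in np.sign(y @ W.T).astype(int)}
n_patterns = len(patterns)         # distinct spin configurations actually reached
```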
How is the topology of the binary spin configurations s_i related to the PCA manifold structure of the continuous variables x_i? Each of the generated spin states is represented by a polytope cell in the P dimensional vector space of hidden variables. Each polytope has at least P + 1 neighboring polytopes which are related to it by a single or small number of spin flips. Therefore, although the state space of binary spin configurations is discrete, the continuous manifold structure of the underlying Gaussian variables in this model is manifested as binary output configurations with low entropy that are connected with small Hamming distances. Translationally Invariant Example In principle, the weights W could be learned by applying maximum likelihood to this generative model; however, the resulting learning algorithm involves analytically intractable multi-dimensional integrals. Alternatively, approximations based upon mean field theory or importance sampling could be used to learn the appropriate parameters [7]. However, Equation 7 suggests a simple learning rule that is also approximate, but is much more computationally efficient [8]. First, the binary correlation matrix C^ss is computed from the data. Then the empirical C^ss is mapped into the appropriate Gaussian correlation matrix using the nonlinear transformation: C^xx = sin(π C^ss / 2). This results in a Gaussian correlation matrix where the variances of the individual x_i are fixed at unity. The weights W are then calculated using the conventional PCA algorithm. The correlation matrix C^xx is diagonalized, and the eigenvectors with the largest eigenvalues are used to form the columns of W to yield the best low rank approximation C^xx ≈ W W^T. Scaling the variables x_i will result in a correlation matrix C^xx with slightly different eigenvalues but with the same rank. The utility of this transformation is illustrated by the following simple example.
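The learning rule just described fits in a few lines. The sketch below generates data from a known W (rows normalized so the x_i have unit variance, matching the convention above), applies C^xx = sin(π C^ss / 2), and recovers the weights by diagonalization; the sizes and seed are arbitrary.

```python
import numpy as np

def learn_weights(S, P):
    """Map binary correlations to Gaussian ones via C^xx = sin(pi C^ss / 2),
    then keep the top-P eigenmodes so that C^xx ~ W W^T."""
    Css = (S.T @ S) / len(S)           # empirical binary correlation matrix
    Cxx = np.sin(np.pi * Css / 2)      # inverse of the arcsine relation, Eq. (7)
    evals, evecs = np.linalg.eigh(Cxx)
    top = np.argsort(evals)[::-1][:P]  # P largest eigenvalues
    return evecs[:, top] * np.sqrt(np.maximum(evals[top], 0))

rng = np.random.default_rng(3)
N, P = 6, 2
W_true = rng.standard_normal((N, P))
W_true /= np.linalg.norm(W_true, axis=1, keepdims=True)    # unit-variance x_i

S = np.sign(rng.standard_normal((200_000, P)) @ W_true.T)  # sample the model
W_hat = learn_weights(S, P)
```

Only W W^T is identifiable (W is determined up to a rotation of the hidden space), so the recovered weights should be compared through the correlation matrix they generate.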
Consider the distribution of N = 256 binary spins shown in Figure 3. Half of the spins are chosen to be positive, and the location of the positive bump is arbitrary under the periodic boundary conditions. Since the distribution is translationally invariant, the correlations (C^ss)_ij depend only on the relative distance between spins |i − j|. The eigenvectors are the Fourier modes, and their eigenvalues correspond to their overlap with a triangle wave. The eigenvalue spectrum of C^ss is plotted in Figure 3 as sorted by rank. In this particular case, the correlation matrix C^ss has N/2 positive eigenvalues with a corresponding range of values. Now consider the matrix C^xx = sin(π C^ss / 2). The eigenvalues of C^xx are also shown in Figure 3. In contrast to the many different eigenvalues of C^ss, the spectrum of the Gaussian correlation matrix C^xx has only two positive eigenvalues, with all the rest exactly equal to zero. The corresponding eigenvectors are a cosine and sine function. The generative process can thus be understood as a linear combination of the two eigenmodes to yield a sine function with arbitrary phase. This function is then clipped to yield the positive bump seen in the original binary distribution. In comparison with the eigenvalues of C^ss, the eigenvalue spectrum of C^xx makes obvious the low rank structure of the generative process. In this case, the original binary distribution can be constructed using only P = 2 hidden variables, whereas it is not clear from the eigenvalues of C^ss what the appropriate number of modes is. This illustrates the utility of determining the principal components from the calculated Gaussian correlation matrix C^xx rather than working directly with the observable binary correlation matrix C^ss. Handwritten Digits Example This model was also applied to a more complex data set. A large set of 16 × 16 black and white images of handwritten twos were taken from the US Post Office digit database [9].
The pixel means and pixel correlations were directly computed from the images. The generative model needs to be slightly modified to account for the non-zero means in the binary outputs. This is accomplished by adding fixed biases θ_i to the Gaussian variables x_i before clipping: s_i = sgn(θ_i + x_i). (8) The biases θ_i can be related to the means of the binary outputs through the expression: θ_i = √(2 C^xx_ii) erf⁻¹(⟨s_i⟩). (9) This allows the biases to be directly computed from the observed means of the binary variables. Unfortunately, with non-zero biases, the relationship between the Gaussian correlations C^xx and binary correlations C^ss is no longer the simple expression found in Equation 7. Instead, the correlations are related by the following integral equation: (C^ss)_ij = ∫ dx_i dx_j P(x_i, x_j) sgn(θ_i + x_i) sgn(θ_j + x_j). (10) Given the empirical pixel correlations C^ss for the handwritten digits, the integral in Equation 10 is numerically solved for each pair of indices to yield the appropriate Gaussian correlation matrix C^xx. The correlation matrices are diagonalized and the resulting eigenvalue spectra are shown in Figure 4. The eigenvalues for C^xx again exhibit a characteristic drop that is steeper than the falloff in the spectrum of the binary correlations C^ss. The corresponding eigenvectors of C^xx with the 16 largest positive eigenvalues are depicted in the inset of Figure 4. Figure 4: Eigenvalue spectrum of C^ss and C^xx for handwritten images of twos. The inset shows the P = 16 most significant eigenvectors for C^xx arranged by rows. The right side of the figure shows a nonlinear morph between two different instances of a handwritten two using these eigenvectors.
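Equation 9 can be checked in isolation for a single unit with unit-variance x_i, where it reduces to θ = √2 erf⁻¹(⟨s⟩). The sketch below inverts math.erf by bisection to stay dependency-free; the bias value is an arbitrary test value, not one from the paper.

```python
import math
import numpy as np

def erfinv(y, lo=-6.0, hi=6.0):
    """Invert math.erf by bisection (adequate precision for this sketch)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(4)
theta_true = 0.7                                           # arbitrary bias
s = np.sign(theta_true + rng.standard_normal(500_000))     # clipping, Eq. (8)
theta_est = math.sqrt(2.0) * erfinv(float(s.mean()))       # Eq. (9) with C^xx_ii = 1
```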
These eigenmodes represent common image distortions such as rotations and stretching and appear qualitatively similar to those found by the standard PCA algorithm. A generative model with weights W corresponding to the P = 16 eigenvectors shown in Figure 4 is used to fit the handwritten twos, and the utility of this nonlinear generative model is illustrated in the right side of Figure 4. The top and bottom images in the figure are two different examples of a handwritten two from the data set, and the generative model is used to morph between the two examples. The hidden values y_i for the original images are first determined for the different examples, and the intermediate images in the morph are constructed by linearly interpolating in the vector space of the hidden units. Because of the clipping nonlinearity, this induces a nonlinear mapping in the outputs with binary units being flipped in a particular order as determined by the generative model. In contrast, morphing using conventional PCA would result in a simple linear interpolation between the two images, and the intermediate images would not look anything like the original binary distribution [10]. The correlation matrix C^xx also happens to contain some small negative eigenvalues. Even though the binary correlation matrix C^ss is positive definite, the transformation in Equation 10 does not guarantee that the resulting matrix C^xx will also be positive definite. The presence of these negative eigenvalues indicates a shortcoming of the generative process for modelling this data. In particular, the clipped Gaussian model is unable to capture correlations induced by global constraints in the data. As a simple illustration of this shortcoming in the generative model, consider the binary distribution defined by the probability density: P({s}) ∝ lim_{β→∞} exp(−β Σ_ij s_i s_j).
The states in this distribution are defined by the constraint that the sum of the binary variables is exactly zero: Σ_i s_i = 0. Now, for N ≥ 4, it can be shown that it is impossible to find a Gaussian distribution whose visible binary variables match the negative correlations induced by this sum constraint. These examples illustrate the value of using the clipped generative model to learn the correlation matrix of the underlying Gaussian variables rather than using the correlations of the outputs directly. The clipping nonlinearity is convenient because the relationship between the hidden variables and the output variables is particularly easy to understand. The learning algorithm differs from other nonlinear PCA models and autoencoders because the inverse mapping function need not be explicitly learned [11, 12]. Instead, the correlation matrix is directly transformed from the observable variables to the underlying Gaussian variables. The correlation matrix is then diagonalized to determine the appropriate feedforward weights. This results in an extremely efficient training procedure that is directly analogous to PCA for continuous variables. We acknowledge the support of Bell Laboratories, Lucent Technologies, and the US-Israel Binational Science Foundation. We also thank H. S. Seung for helpful discussions. References [1] Jolliffe, IT (1986). Principal Component Analysis. New York: Springer-Verlag. [2] Bartholomew, DJ (1987). Latent variable models and factor analysis. London: Charles Griffin & Co. Ltd. [3] Hinton, GE, Dayan, P & Revow, M (1996). Modeling the manifolds of images of handwritten digits. IEEE Transactions on Neural Networks 8, 65-74. [4] Van Vreeswijk, C, Sompolinsky, H, & Abeles, M (1999). Nonlinear statistics of spike trains. In preparation. [5] Ackley, DH, Hinton, GE, & Sejnowski, TJ (1985). A learning algorithm for Boltzmann machines. Cognitive Science 9, 147-169. [6] Cover, TM (1965).
Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Electronic Comput. 14, 326-334. [7] Tipping, ME (1999). Probabilistic visualisation of high-dimensional binary data. Advances in Neural Information Processing Systems 11. [8] Christoffersson, A (1975). Factor analysis of dichotomized variables. Psychometrika 40, 5-32. [9] LeCun, Y et al. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation 1, 541-551. [10] Bregler, C, & Omohundro, SM (1995). Nonlinear image interpolation using manifold learning. Advances in Neural Information Processing Systems 7, 973-980. [11] Hastie, T and Stuetzle, W (1989). Principal curves. Journal of the American Statistical Association 84, 502-516. [12] Demers, D, & Cottrell, G (1993). Nonlinear dimensionality reduction. Advances in Neural Information Processing Systems 5, 580-587. Risk Sensitive Reinforcement Learning Ralph Neuneier Siemens AG, Corporate Technology D-81730 München, Germany Ralph.Neuneier@mchp.siemens.de Oliver Mihatsch Siemens AG, Corporate Technology D-81730 München, Germany Oliver.Mihatsch@mchp.siemens.de Abstract As already known, the expected return of a policy in Markov Decision Problems is not always the most suitable optimality criterion. For many applications control strategies have to meet various constraints like avoiding very bad states (risk-avoiding) or generating high profit within a short time (risk-seeking) although this might probably cause significant costs. We propose a modified Q-learning algorithm which uses a single continuous parameter κ ∈ [−1, 1] to determine in which sense the resulting policy is optimal. For κ = 0, the policy is optimal with respect to the usual expected return criterion, while κ → 1 generates a solution which is optimal in the worst case. Analogously, the closer κ is to −1 the more risk seeking the policy becomes.
In contrast to other related approaches in the field of MDPs we do not have to transform the cost model or to increase the state space in order to take risk into account. Our new approach is evaluated by computing optimal investment strategies for an artificial stock market. 1 WHY IT SOMETIMES PAYS TO ACT CAUTIOUSLY Reinforcement learning (RL) deals with the computation of favorable control policies in sequential decision tasks. Its theoretical framework of Markov Decision Problems (MDPs) evaluates and compares policies by their expected (sometimes discounted or averaged) sum of the immediate returns or costs per time step (Bertsekas & Tsitsiklis, 1996). But there are numerous applications which require a more sophisticated control scheme: e.g. a policy should take into account that bad outcomes or states may be possible even if they are very rare, because they are so disastrous that they should be certainly avoided. An obvious example is the field of finance where the main question is how to invest resources among various opportunities (e.g. assets like stocks, bonds, etc.) to achieve remarkable returns while simultaneously controlling the risk exposure of the investments due to changing markets or economic conditions. Many traders try to achieve this by a Markowitz-like portfolio management which distributes capital according to return and risk estimates of the assets. A new approach using reinforcement learning techniques which additionally integrates trading costs and other market imperfections has been proposed in Neuneier, 1998. Here, these algorithms are naturally extended such that an explicit risk control is now possible. The investor can decide how much risk she/he is willing to accept and then compute an optimal risk-averse investment strategy. Similar trade-off scenarios can be formulated in robotics, traffic control and further application areas.
The fact that the popular expected value criterion is not always suitable has already been known in the field of AI (Koenig & Simmons, 1994), control theory and reinforcement learning (Heger, 1994 and Szepesvari, 1997). Several techniques have been proposed to handle this problem. The most obvious way is to transform the sum of returns Σ_t r_t using an appropriate utility function U which reflects the desired properties of the solution. Unfortunately, interesting nonlinear utility functions incorporating the variance of the return, such as U(Σ_t r_t) = Σ_t r_t − λ(Σ_t r_t − E(Σ_t r_t))², lead to non-Markovian decision problems. The popular class of exponential utility functions U(Σ_t r_t) = exp(λ Σ_t r_t) preserves the Markov property but requires time dependent policies even for discounted infinite horizon MDPs. Furthermore, it is not possible to formulate a corresponding model-free learning algorithm. A further alternative changes the state space model by including past returns as an additional state element, at the cost of a higher dimensionality of the MDP. Furthermore, it is not always clear in which way the states should be augmented. One may also transform the cost model, i.e. by punishing large losses more strongly than minor costs. While requiring a significant amount of prior knowledge, this also increases the complexity of the MDP. In contrast to these approaches we modify the popular Q-learning algorithm by introducing a control parameter which determines in which sense the resulting policy is optimal. Intuitively and loosely speaking, our algorithm simulates the learning behavior of an optimistic (pessimistic) person by overweighting (underweighting) experiences which are more positive (negative) than expected. This main idea will be made more precise in section 2 and mathematically thoroughly analyzed in section 3. Using artificial data, we demonstrate some properties of the new algorithm by constructing an optimal risk-avoiding investment strategy (section 4).
2 RISK SENSITIVE Q-LEARNING For brevity we restrict ourselves to the subclass of infinite horizon discounted Markov decision problems (MDPs). Furthermore, we assume the immediate rewards being deterministic functions of the current state and control action. Let S = {1, ..., n} be the finite state space and U be the finite action space. Transition probabilities and immediate rewards are denoted by p_ij(u) and g_i(u), respectively. γ denotes the discount factor. Let Π be the set of all deterministic policies mapping states to control actions. A commonly used objective is to learn a policy π that maximizes Q^π(i, u) := g_i(u) + E{ Σ_{t=1}^∞ γ^t g_{i_t}(π(i_t)) }, (1) quantifying the expected reward if one executes control action u in state i and follows the policy π thereafter. It is a well-known result that the optimal Q-values Q*(i, u) := max_{π∈Π} Q^π(i, u) satisfy the following optimality equation: Q*(i, u) = g_i(u) + γ Σ_{j∈S} p_ij(u) max_{u'∈U} Q*(j, u') ∀ i ∈ S, u ∈ U. (2) Any policy π̄ with π̄(i) = argmax_{u∈U} Q*(i, u) is optimal with respect to the expected reward criterion. The Q-function Q^π averages over the outcome of all possible trajectories (series of states) of the Markov process generated by following the policy π. However, the outcome of a specific realization of the Markov process may deviate significantly from this mean value. The expected reward criterion does not consider any risk, although the cases where the discounted reward falls considerably below the mean value are of vital interest for many applications. Therefore, depending on the application at hand the expected reward approach is not always appropriate. Alternatively, Heger (1994) and Littman & Szepesvari (1996) present a performance criterion that exclusively focuses on risk avoiding policies: maximize Q̂^π(i, u) := g_i(u) + inf_{(i_1, i_2, ...): p(i_1, i_2, ...) > 0} { Σ_{t=1}^∞ γ^t g_{i_t}(π(i_t)) }. (3)
The Q-function Q̂^π(i, u) denotes the worst possible outcome if one executes control action u in state i and follows the policy π thereafter. The corresponding optimality equation for Q̂*(i, u) := max_{π∈Π} Q̂^π(i, u) is given by Q̂*(i, u) = g_i(u) + γ min_{j∈S: p_ij(u)>0} max_{u'∈U} Q̂*(j, u'). (4) Any policy π̂ satisfying π̂(i) = argmax_{u∈U} Q̂*(i, u) is optimal with respect to this minimal reward criterion. In most real world applications this approach is too restrictive because it takes very rare events (that in practice never happen) fully into account. This usually leads to policies with a lower average performance than the application requires. An investment manager, for instance, who acts with respect to this very pessimistic objective function will not invest at all. To handle the trade-off between a sufficient average performance and a risk avoiding (risk seeking) behavior, we propose a family of new optimality equations parameterized by a meta-parameter κ (−1 < κ < 1): 0 = Σ_{j∈S} p_ij(u) χ_κ( g_i(u) + γ max_{u'∈U} Q_κ(j, u') − Q_κ(i, u) ) ∀ i ∈ S, u ∈ U, (5) where χ_κ(x) := (1 − κ sign(x)) x. (In the next section we will show that a unique solution Q_κ of the above equation (5) exists.) Obviously, for κ = 0 we recover equation (2), the optimality equation for the expected reward criterion. If we choose κ to be positive (0 < κ < 1) then we overweight negative temporal differences g_i(u) + γ max_{u'∈U} Q_κ(j, u') − Q_κ(i, u) < 0 (6) with respect to positive ones. Loosely speaking, we overweight transitions to states where the future return is lower than the average one. On the other hand, we underweight transitions to states that promise a higher return than the average. Thus, an agent that behaves according to the policy π_κ(i) := argmax_{u∈U} Q_κ(i, u) is risk avoiding if κ > 0. In the limit κ → 1 the policy π_κ approaches the optimal worst-case policy π̂, as we will show in the following section.
(To get an intuition about this, the reader may easily check that the optimal worst-case Q-value Q̂* fulfills the modified optimality equation (5) for κ = 1.) Similarly, the policy π_κ becomes risk seeking if we choose κ to be negative. It is straightforward to formulate a risk sensitive Q-learning algorithm that is based on the modified optimality equation (5). Let Q_κ(i, u; w) be a parametric approximation of the Q-function Q_κ(i, u). The states and actions encountered at time step k during simulation are denoted by i_k and u_k respectively. At each time step apply the following update rule: d^(k) = g_{i_k}(u_k) + γ max_{u'∈U} Q_κ(i_{k+1}, u'; w^(k)) − Q_κ(i_k, u_k; w^(k)), w^(k+1) = w^(k) + α^(k) χ_κ(d^(k)) ∇_w Q_κ(i_k, u_k; w^(k)), (7) where α^(k) denotes a stepsize sequence. The following section analyzes the properties of the new optimality equations and the corresponding Q-learning algorithm. 3 PROPERTIES OF THE RISK SENSITIVE Q-FUNCTION Due to space limitations we are not able to give detailed proofs of our results. Instead, we focus on interpreting their practical consequences. The proofs will be published elsewhere. Before formulating the mathematical results, we introduce some notation to make the exposition more concise. Using an arbitrary stepsize 0 < α < 1, we define the value iteration operator corresponding to our modified optimality equation (5) as T_{α,κ}[Q](i, u) := Q(i, u) + α Σ_{j∈S} p_ij(u) χ_κ( g_i(u) + γ max_{u'∈U} Q(j, u') − Q(i, u) ). (8) The operator T_{α,κ} acts on the space of Q-functions. For every Q-function Q and every state-action pair (i, u) we define N_κ[Q](i, u) to be the set of all successor states j for which max_{u'∈U} Q(j, u') attains its minimum: N_κ[Q](i, u) := { j ∈ S | p_ij(u) > 0 and max_{u'∈U} Q(j, u') = min_{j'∈S: p_ij'(u)>0} max_{u'∈U} Q(j', u') }. (9) Let p_κ[Q](i, u) := Σ_{j∈N_κ[Q](i,u)} p_ij(u) be the probability of transitions to such successor states. We have the following lemma ensuring the contraction property of T_{α,κ}.
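For a one-step problem (γ = 0, tabular Q reduced to a scalar per action) the update rule (7) can be simulated directly. The two reward distributions below are invented for illustration: a safe action paying 0.5 deterministically, and a risky action with a higher mean but occasional large losses; with κ = 0 the learned value tracks the mean, while κ close to 1 drives the risky action's value far below it.

```python
import numpy as np

def chi(kappa, x):
    """chi_kappa(x) = (1 - kappa * sign(x)) * x, the weighting from Eq. (5)."""
    return (1.0 - kappa * np.sign(x)) * x

def fit_value(sample_reward, kappa, alpha=0.005, steps=300_000, seed=5):
    """Scalar instance of update rule (7): q <- q + alpha * chi_kappa(r - q)."""
    rng = np.random.default_rng(seed)
    q = 0.0
    for _ in range(steps):
        q += alpha * chi(kappa, sample_reward(rng) - q)   # kappa-weighted TD step
    return q

safe = lambda rng: 0.5                                    # deterministic payoff
risky = lambda rng: 1.0 if rng.random() < 0.9 else -2.0   # mean 0.7, rare large loss

q_neutral = fit_value(risky, kappa=0.0)   # settles near the mean 0.7
q_averse = fit_value(risky, kappa=0.9)    # pushed well below the safe value
```

With these numbers the risk-neutral agent prefers the risky action (0.7 > 0.5) while the risk-averse one prefers the safe action, matching the intended behavior of the κ-weighted update.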
Lemma 1 (Contraction Property) Let ‖Q‖ = max_{i∈S, u∈U} |Q(i, u)| and 0 < α < 1, 0 < γ < 1. Then ‖T_{α,κ}[Q_1] − T_{α,κ}[Q_2]‖ ≤ (1 − α(1 − |κ|)(1 − γ)) ‖Q_1 − Q_2‖ ∀ Q_1, Q_2. (10) The operator T_{α,κ} is contracting, because 0 < (1 − α(1 − |κ|)(1 − γ)) < 1. The lemma has several important consequences. 1. The risk sensitive optimality equation (5), i.e. T_{α,κ}[Q] = Q, has a unique solution Q_κ for all −1 < κ < 1. 2. The value iteration procedure Q_new := T_{α,κ}[Q] converges towards Q_κ. 3. The existing convergence results for traditional Q-learning (Bertsekas & Tsitsiklis 1997, Tsitsiklis & Van Roy 1997) remain also valid in the risk sensitive case κ ≠ 0. Particularly, risk sensitive Q-learning (7) converges with probability one in the case of lookup table representations as well as in the case of optimal stopping problems combined with linear representations. 4. The speed of convergence for both risk sensitive value iteration and Q-learning becomes worse as |κ| → 1. We can remedy this to some extent if we increase the stepsize α appropriately. Let π_κ be a greedy policy with respect to the unique solution Q_κ of our modified optimality equation; that is, π_κ(i) = argmax_{u∈U} Q_κ(i, u). The following theorem examines the performance of π_κ for the risk avoiding case κ ≥ 0. It gives us a feeling about the expected outcome Q^{π_κ} and the worst possible outcome Q̂^{π_κ} of the policy π_κ for different values of κ. The theorem clarifies the limiting behavior of π_κ as κ → 1.
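Consequence 2 above (convergence of value iteration under T_{α,κ}) is easy to demonstrate on a small random MDP; the MDP below is made up for the demonstration. Iterating Q ← T_{α,κ}[Q] drives the residual of the optimality equation (5) to zero at the geometric rate guaranteed by Lemma 1.

```python
import numpy as np

def chi(kappa, x):
    return (1.0 - kappa * np.sign(x)) * x

def residual(Q, p, g, kappa, gamma):
    """Left-hand side of optimality equation (5); zero exactly at Q_kappa."""
    V = Q.max(axis=1)                      # max_{u'} Q(j, u')
    delta = g[:, :, None] + gamma * V[None, None, :] - Q[:, :, None]
    return (p * chi(kappa, delta)).sum(axis=2)

def value_iteration(p, g, kappa, alpha=0.5, gamma=0.9, iters=2000):
    """Iterate Q <- T_{alpha,kappa}[Q] from Eq. (8)."""
    Q = np.zeros_like(g)
    for _ in range(iters):
        Q = Q + alpha * residual(Q, p, g, kappa, gamma)
    return Q

rng = np.random.default_rng(6)
n_states, n_actions = 4, 2
p = rng.random((n_states, n_actions, n_states))
p /= p.sum(axis=2, keepdims=True)          # transition probabilities p_ij(u)
g = rng.random((n_states, n_actions))      # immediate rewards g_i(u)
Q_k = value_iteration(p, g, kappa=0.5)
```

Here the contraction factor of Lemma 1 is 1 − 0.5·(1 − 0.5)·(1 − 0.9) = 0.975 per sweep, so 2000 sweeps reduce the residual to numerical noise.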
Besides the value of /\', this quantity essentially influences the difference between the performance of the policy 7r K and the optimal performance with respect to both, the expected reward and the worst case criterion. The second inequality (12) states that the performance of policy 7r K in the worst case sense tends to the optimal worst case performance if /\, -+ 1. The "speed of convergence" is influenced by the quantity PK [Q K], i. e. the probability that a worst case transition really occurs. (Note that PK [Q KJ is bounded from below.) A higher probability PK [Q KJ of worst case transitions implies a stronger risk avoiding attitude of the policy 7r K. 4 EXPERIMENTS: RISK-AVERSE INVESTMENT DECISIONS Our algorithm is now tested on the task of constructing an optimal investment policy for an artificial stock price analogous to the empirical analysis in Neuneier, 1998. The task, illustrated as a MDP in fig. 1, is to decide at each time step (e. g. each day or after each mayor event on the market) whether to buy the stock and therefore speculating on increasing stock prices or to keep the capital in cash which avoids potential losses due to decreasing stock prices. disturbancies financial market investments return investor rates, prices Figure 1. The Markov Decision Problem: Xt = ($t, Kt)' at = J-L(xt} p(xt+llxt} r(xt,at,$t+d state: market $t and portfolio K t policy J-L, actions transition probabilities return function 2.-----~----__ ----__ ----__ ----__ --__. ' .9 1 . B .~: : : i '.5 1. , Figure 2. A realization of the artificial stock price for 300 time steps. It is obvious that the price follows an increasing trend but with higher values a sudden drop to low values becomes more and more probable. It is assumed, that the investor is not able to influence the market by the investment decisions. 
This leads to a MDP with some of the state elements being uncontrollable and results in two computationally important implications: first, one can simulate the investments by historical data without investing (and potentially losing) real money. Second, one can formulate very efficient (memory saving) and more robust Q-learning algorithms. Due to space restrictions we skip a detailed description of these algorithms and refer the interested reader to Neuneier, 1998.

1036 R. Neuneier and O. Mihatsch

The artificial stock price is in the range of [1, 2]. The transition probabilities are chosen such that the stock market simulates a situation where the price follows an increasing trend, but with higher values a drop to very low values becomes more and more probable (fig. 2). The state vector consists of the current stock price and the current investment, i.e. the amount of money invested in stocks or cash. Changing the investment from cash to stocks results in some transaction costs consisting of variable and fixed terms. These costs are essential to define the investment problem as a MDP because they couple the actions made at different time steps. Otherwise we could solve the problem by a pure prediction of the next stock price. The function which quantifies the immediate return for each time step is defined as follows: if the capital is invested in cash, then there is nothing to earn even if the stock price increases; if the investor has bought stocks, the return equals the relative change of the stock price weighted by the invested amount of capital, minus the transaction costs which apply if one changed from cash to stocks.

Figure 3. Left: Risk neutral policy, κ = 0. Right: A small bias of κ = 0.3 against risk changes the policy if one is not invested (transaction costs apply in this case).

Figure 4.
Left: κ = 0.5 yields a stronger risk averse attitude. Right: With κ = 0.8 the policy also becomes more cautious if already invested in stocks.

Figure 5. Left: κ = 0.9 leads to a policy which invests in stocks in only 5 cases. Right: The worst case solution never invests because there is always a positive probability for decreasing stock prices.

As a reinforcement learning method, Q-learning has to interact with the environment (here the stock market) to learn optimal investment behavior. Thus, a training set of 2000 data points is generated. The training phase is divided into epochs which consist of as many trials as there are data in the training set. At every trial the algorithm selects randomly a stock price from the data set, chooses a random investment state and updates the tabulated Q-values according to the procedure given in Neuneier, 1998. The only difference of our new risk averse Q-learning is that negative experiences, i.e. smaller returns than the mean, are overweighted in comparison to positive experiences using the κ-factor of eq. (7). Using different κ values from 0 (recovering the original Q-learning procedure) to 1 (leading to worst case Q-learning) we plot the resulting policies as mappings from the state space to control actions in figures 3 to 5. Obviously, with increasing κ the investor acts more and more cautiously because there are fewer states associated with an investment decision for stocks. In the extreme case of κ = 1, there is no stock investment at all in order to avoid any loss. The policy is not useful in practice. This supports our introductory comments that worst case Q-learning is not appropriate in many tasks.
Figure 6. The quantiles of the distributions of the discounted sum of returns for κ = 0.2 (o) and κ = 0.4 (+) are plotted against the quantiles for the classical risk neutral approach κ = 0. The distributions only differ significantly for negative accumulated returns (left tail of the distributions).

For further analysis, we specify a risky start state i₀ for which a sudden drop of the stock price in the near future is very probable. Starting at i₀ we compute the cumulated discounted rewards of 10000 different trajectories following the policies π_0, π_{0.2} and π_{0.4}, which have been generated using κ = 0 (risk neutral), κ = 0.2 and κ = 0.4. The resulting three data sets are compared using a quantile-quantile plot whose purpose is to determine whether the samples come from the same distribution type. If they do so, the plot will be linear. Fig. 6 clearly shows that for higher κ-values the left tail of the distribution (negative returns) bends up, indicating a smaller number of losses. On the other hand there is no significant difference for positive quantiles. In contrast to naive utility functions which penalize high variance in general, our risk sensitive Q-learning asymmetrically reduces the probability for losses, which may be more suitable for many applications.

5 CONCLUSION

We have formulated a new Q-learning algorithm which can be continuously tuned towards risk seeking or risk avoiding policies. Thus, it is possible to construct control strategies which are more suitable for the problem at hand by only small modifications of the Q-learning algorithm. The advantage of our approach in comparison to already known solutions is that we have to change neither the cost nor the state model. We can prove that our algorithm converges under the usual assumptions.
Future work will focus on the connections between our approach and the utility theoretic point of view.

References

D. P. Bertsekas, J. N. Tsitsiklis (1996) Neuro-Dynamic Programming. Athena Scientific.
M. Heger (1994) Consideration of Risk and Reinforcement Learning, in Machine Learning, Proceedings of the 11th International Conference, Morgan Kaufmann Publishers.
S. Koenig, R. G. Simmons (1994) Risk-Sensitive Planning with Probabilistic Decision Graphs. Proc. of the Fourth Int. Conf. on Principles of Knowledge Representation and Reasoning (KR).
M. L. Littman, Cs. Szepesvari (1996) A generalized reinforcement-learning model: Convergence and applications. In International Conference on Machine Learning '96, Bari.
R. Neuneier (1998) Enhancing Q-learning for Optimal Asset Allocation, in Advances in Neural Information Processing Systems 10, Cambridge, MA: MIT Press.
M. L. Puterman (1994) Markov Decision Processes, John Wiley & Sons.
Cs. Szepesvari (1997) Non-Markovian Policies in Sequential Decision Problems, Acta Cybernetica.
J. N. Tsitsiklis, B. Van Roy (1997) Approximate Solutions to Optimal Stopping Problems, in Advances in Neural Information Processing Systems 9, Cambridge, MA: MIT Press.
|
1998
|
3
|
1,526
|
Learning to Find Pictures of People

Sergey Ioffe, Computer Science Division, U.C. Berkeley, Berkeley CA 94720, ioffe@cs.berkeley.edu
David Forsyth, Computer Science Division, U.C. Berkeley, Berkeley CA 94720, daf@cs.berkeley.edu

Abstract

Finding articulated objects, like people, in pictures presents a particularly difficult object recognition problem. We show how to find people by finding putative body segments, and then constructing assemblies of those segments that are consistent with the constraints on the appearance of a person that result from kinematic properties. Since a reasonable model of a person requires at least nine segments, it is not possible to present every group to a classifier. Instead, the search can be pruned by using projected versions of a classifier that accepts groups corresponding to people. We describe an efficient projection algorithm for one popular classifier, and demonstrate that our approach can be used to determine whether images of real scenes contain people.

1 Introduction

Several typical collections containing over ten million images are listed in [2]. There is an extensive literature on obtaining images from large collections using features computed from the whole image, including colour histograms, texture measures and shape measures; a partial review appears in [5]. However, in the most comprehensive field study of usage practices (a paper by Enser [2] surveying the use of the Hulton Deutsch collection), there is a clear user preference for searching these collections on image semantics. An ideal search tool would be a quite general object recognition system that could be adapted quickly and easily to the types of objects sought by a user. An important special case is finding people and determining what they are doing. This is hard, because people have many internal degrees of freedom. We follow the approach of [3], and represent people as collections of cylinders, each representing a body segment.
Regions that could be the projections of cylinders are easily found using techniques similar to those of [1]. Once these regions are found, they must be assembled

Learning to Find Pictures of People 783

into collections that are consistent with the appearance of images of real people, which are constrained by the kinematics of human joints; consistency is tested with a classifier. Since there are many candidate segments, a brute force search is impossible. We show how this search can be pruned using projections of the classifier.

2 Learning to Build Segment Configurations

Suppose that N segments have been found in an image, and there are m body parts. We will define a labeling as a set L = {(l₁, s₁), (l₂, s₂), ..., (l_k, s_k)} of pairs where each segment s_i ∈ {1 ... N} is labeled with the label l_i ∈ {1 ... m}. A labeling is complete if it represents a full m-segment configuration (Fig. 2(a,b)). Assume we have a classifier C that for any complete labeling L outputs C(L) > 0 if L corresponds to a person-like configuration, and C(L) < 0 otherwise. Finding all the possible body configurations in an image is equivalent to finding all the complete labelings L for which C(L) > 0. This cannot be done with brute-force search through the entire set. The search can be pruned if, for an (incomplete) labeling L', there is no complete L ⊇ L' such that C(L) > 0. For instance, if two segments cannot represent the upper and lower left arm, as in Figure 1a, then we do not consider any complete labelings where they are labeled as such. Projected classifiers make the search for body configurations efficient by pruning labelings using the properties of smaller sub-labelings (as in [7], who use manually determined bounds and do not learn the tests). Given a classifier C which is a function of a set of features whose values depend on segments with labels l₁ ...
l_m, the projected classifier C_{l₁...l_k} is a function of all those features that depend only on the segments with labels l₁ ... l_k. In particular, C_{l₁...l_k}(L') > 0 if there is some extension L of L' such that C(L) > 0 (see figure 1). The converse need not be true: the feature values required to bring a projected point inside the positive volume of C may not be realized with any labeling of the current set of segments 1, ..., N. For a projected classifier to be useful, it must be easy to compute the projection, and it must be effective in rejecting labelings at an early stage. These are strong requirements which are not satisfied by most good classifiers; for example, in our experience a support vector machine with a positive definite quadratic kernel projects easily but typically yields unrestrictive projected classifiers.

2.1 Building Labelings Incrementally

Assume we have a classifier C that accepts assemblies corresponding to people and that we can construct projected classifiers as we need them. We will now show how to use them to construct labelings, using a pyramid of classifiers. A pyramid of classifiers (Fig. 1(c)), determined by the classifier C and a permutation of labels (l₁ ... l_m), consists of nodes N_{l_i...l_j} corresponding to each of the projected classifiers C_{l_i...l_j}, i ≤ j. Each of the bottom-level nodes N_{l_i} receives the set of all segments in the image as the input. The top node N_{l₁...l_m} outputs the set of all complete labelings L = {(l₁, s₁) ... (l_m, s_m)} such that C(L) > 0, i.e. the set of all assemblies in the image classified as people. Further, each node N_{l_i...l_j} outputs the set of all sub-labelings L = {(l_i, s_i) ... (l_j, s_j)} such that C_{l_i...l_j}(L) > 0. The nodes N_{l_i} at the bottom level work by selecting all segments s_i in the image for which C_{l_i}({(l_i, s_i)}) > 0. Each of the remaining nodes has two parts: merging and filtering. The merging stage of node N_{l_i ...
l_j} merges the outputs of its children by computing the set of all labelings {(l_i, s_i) ... (l_j, s_j)} where {(l_i, s_i) ... (l_{j−1}, s_{j−1})}

784 S. Ioffe and D. Forsyth

Figure 1: (a) Two segments that cannot correspond to the left upper and lower arm. Any configuration where they do can be rejected using a projected classifier regardless of the other segments that might appear in the configuration. (b) Projecting a classifier C_{(l₁,s₁),(l₂,s₂)}. The shaded area is the volume classified as positive, for the feature set {x(s₁), y(s₁, s₂)}. Finding the projection C_{l₁} amounts to projecting off the features that cannot be computed from s₁ only, i.e., y(s₁, s₂). (c) A pyramid of classifiers. Each node outputs sub-assemblies accepted by the corresponding projected classifier. Each node except those in the bottom row works by forming labelings from the outputs of its two children, and filtering the result using the corresponding projected classifier. The top node outputs the set of all complete labelings that correspond to body configurations.

and {(l_{i+1}, s_{i+1}) ... (l_j, s_j)} are in the outputs of N_{l_i...l_{j−1}} and N_{l_{i+1}...l_j}, respectively. The filtering stage then selects, from the resulting set of labelings, those for which C_{l_i...l_j}(·) > 0, and the resulting set is the output of N_{l_i...l_j}. It is clear, from the definition of projected classifiers, that the output of the pyramid is, in fact, the set of all complete L for which C(L) > 0 (note that C_{l₁...l_m} = C). The only constraint on the order in which the outputs of nodes are computed is that children nodes have to be applied before parents. In our implementation, we use nodes N_{l_i...l_j} where j changes from 1 to m and, for each j, i changes from j down to 1. This is equivalent to computing sets of labelings of the form {(l₁, s₁) ...
(l_j, s_j)} in order, where getting (j+1)-segment labelings from j-segment ones is itself an incremental process, whereby we check labels against l_{j+1} in the order l_j, l_{j−1}, ..., l₁. In practice, we choose the latter order on the fly for each increment step using a greedy algorithm, to minimize the size of the labeling sets that are constructed (note that in this case the classifiers no longer form a pyramid). The order (l₁ ... l_m) in which labels are added to an assembly needs to be fixed. We determine this order with a greedy algorithm by running a large segment set through the labeling builder and choosing the next label to add so as to minimize the number of labelings that result.

2.2 Classifiers that Project

In our problem, each segment from the set {1 ... N} is a rectangle in some position and orientation. Given a complete labeling L = {(1, s₁), ..., (m, s_m)}, we want to have C(L) > 0 iff the segment arrangement produced by L looks like a person.

Figure 2: (a) All segments extracted for an image. (b) A labeled segment configuration corresponding to a person, where T=torso, LUA=left upper arm, etc. The head is not marked because we are not looking for it with our method. The single left leg segment in (a) has been broken in (b) to generate the upper and lower leg segments. (c) (top) A combination of a bounding box (the dashed line) and a boosted classifier, for two features x and y. Each plane in the boosted classifier is a thick line with the positive half-space indicated by an arrow; the associated weight β is shown next to the arrow. The shaded area is the positive volume of the classifier, i.e. the points P where Σ_f w_f(P(f)) > 1/2.
The weights w_x(·) and w_y(·) are shown along the x- and y-axes, respectively, and the total weight w_x(P(x)) + w_y(P(y)) is shown for each region of the bounding box. (bottom) The projected classifier, given by w_x(P(x)) > 1/2 − δ = 0.1, where δ = max_{P(y)} w_y(P(y)) = max{0.25, 0.4, 0.15} = 0.4.

Each feature will depend on a few segments (1 to 3 in our experiments). Our kinematic features are invariant to translation, uniform scaling or rotation of the segment set, and include angles between segments and ratios of lengths, widths and distances. We expect the features that correspond to human configurations to lie within small fractions of their possible value ranges. This suggests using an axis-aligned bounding box, with bounds learned from a collection of positive labelings, for a good first separation, and then using a boosted version of a weak classifier that splits the feature space on a single feature value (as in [6]). This classifier projects particularly well, using a simple algorithm described in section 2.3. Each weak classifier (Fig. 2(c)) is defined by the feature f_j on which the split is made, the position p_j of the splitting hyperplane, and the direction d_j ∈ {1, −1} that determines which half-space is positive. A point P is classified as positive iff d_j(P(f_j) − p_j) > 0, where P(f_j) is the value of feature f_j. The boosting algorithm will associate a weight β_j with each plane (so that Σ_j β_j = 1), and the resulting classifier will classify a point as positive iff Σ_{j: d_j(P(f_j)−p_j)>0} β_j > 1/2, that is, iff the total weight of the weak classifiers that classify the point as positive is at least a half of the total weight of the classifiers. The set {f_j} may have repeating features (which may have different p_j, d_j and β_j values), and does not need to span the entire feature set. By grouping together the weights corresponding to planes splitting on the same feature, we finally rewrite the classifier as Σ_f w_f(P(f)) > 1/2, where w_f(P(f)) =
Σ_{j: f_j=f, d_j(P(f)−p_j)>0} β_j is the weight associated with the particular value of feature f; it is a piecewise constant function and depends on which of the intervals given by {p_j | f_j = f} this value falls into.

2.3 Projecting a Boosted Classifier

Given a classifier constructed as above, we need to construct classifiers that depend on some identified subset of the features. The geometry of our classifiers, whose positive regions consist of unions of axis-aligned bounding boxes, makes this easy to do. Let g be the feature to be projected away, perhaps because its value depends on a label that is not available. The projection of the classifier should classify a point P' in the (lower-dimensional) feature space as positive iff max_P Σ_f w_f(P(f)) > 1/2, where P is a point which projects into P' but can have any value for P(g). We can rewrite this expression as Σ_{f≠g} w_f(P'(f)) + max_{P(g)} w_g(P(g)) > 1/2. The value of δ = max w_g(P(g)) is readily available and independent of P'. We can see that, with the feature projected away, we obtain Σ_{f≠g} w_f(P'(f)) > 1/2 − δ. Any number of features can be projected away in a sequence in this fashion. An example of the projected classifier is shown in Figure 2(c). The classifier C we are using allows for efficient building of labelings, in that the features do not need to be recomputed when we move from C_{l₁...l_k} to C_{l₁...l_{k+1}}. We achieve this efficiency by carrying along with a labeling L = {(l₁, s₁) ... (l_k, s_k)} the sum σ(L) = Σ_{f∈F(l₁...l_k)} w_f(P(f)), where F(l₁...l_k) is the set of all features computable from the segments labeled as l₁, ..., l_k, and {P(f)} are the values of these features. When we add another segment to get L' = {(l₁, s₁) ... (l_{k+1}, s_{k+1})}, we can compute σ(L') = σ(L) + Σ_{f∈F(l₁...l_{k+1})\F(l₁...l_k)} w_f(P'(f)). In other words, when we add a label l_{k+1}, we need to compute only those features that require s_{k+1} for their computation.
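The projection rule Σ_{f≠g} w_f(P'(f)) > 1/2 − δ can be implemented directly once each w_f is stored as a piecewise constant function. The encoding below (sorted split positions with per-interval weights) is one possible representation, not the authors' code; the w_y values reproduce the example of Fig. 2(c), where δ = max{0.25, 0.4, 0.15} = 0.4 gives the projected threshold 1/2 − 0.4 = 0.1, while the w_x function is made up for illustration.

```python
import bisect

class PiecewiseWeight:
    """w_f as a piecewise constant function: weights[k] applies on the
    interval between splits[k-1] and splits[k] (open-ended at both sides)."""
    def __init__(self, splits, weights):
        assert len(weights) == len(splits) + 1
        self.splits, self.weights = splits, weights
    def __call__(self, v):
        return self.weights[bisect.bisect_right(self.splits, v)]
    def max_weight(self):
        return max(self.weights)

def classify(weight_fns, threshold, point):
    # boosted classifier rewritten as sum_f w_f(P(f)) > threshold
    return sum(w(point[f]) for f, w in weight_fns.items()) > threshold

def project(weight_fns, threshold, gone):
    """Project feature `gone` away: drop its weight function and lower the
    threshold by delta = the maximum weight it could have contributed."""
    delta = weight_fns[gone].max_weight()
    kept = {f: w for f, w in weight_fns.items() if f != gone}
    return kept, threshold - delta

wx = PiecewiseWeight([0.3, 0.7], [0.1, 0.45, 0.2])   # illustrative w_x
wy = PiecewiseWeight([0.4, 0.6], [0.25, 0.4, 0.15])  # w_y values of Fig. 2(c)
fns = {"x": wx, "y": wy}
kept, thr = project(fns, 0.5, "y")   # thr becomes 0.5 - 0.4 = 0.1
```

By construction the projection is conservative: any point accepted by the full classifier is also accepted by the projected one, which is exactly the soundness property the pruning relies on.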
3 Experimental Results

We report results for a system that automatically identifies potential body segments (using the techniques described in [4]) and then applies the assembly process described above. Images for which assemblies kinematically consistent with a person are found are reported as having people in them. The segment finder may find either 1 or 2 segments for each limb, depending on whether it is bent or straight; because the pruning is so effective, we can allow segments to be broken into two equal halves lengthwise (like the left leg in Fig. 2(b)), both of which are tested.

3.1 Training

The training set included 79 images without people, selected randomly from the COREL database, and 274 images each with a single person on a uniform background. The images with people have been scanned from books of human models [10]. All segments in the test images were reported; in the control images, only segments whose interior corresponded to human skin in colour and texture were reported. Control images, both for the training and for the test set, were chosen so that all had at least 30% of their pixels similar to human skin in colour and texture. This gives a more realistic test of the system performance by excluding regions that are obviously not human, and reduces the number of segments in the control images to the same order of magnitude as those in the test images.

Table 1: (a) Number of images of people (test) and without people (control) processed by the classifiers with 367 and 567 features. (b) False negative (images with a person where no body configuration was found) and false positive (images with no people where a person was detected) rates.

(a)
Features | Test | Control
367      | 120  | 28
567      | 120  | 86

(b)
Features | False Neg. | False Pos.
367      | 37         | 1~~
567      | 49         | 0
The models are all wearing either swim suits or no clothes, otherwise segment finding fails; it is an open problem to segment people wearing loose clothing. There is a wide variation in the poses of the training examples, although all body segments are visible. The sets of segments corresponding to people were then hand-labeled. Of the 274 images with people, segments for each body part were found in 193 images. The remaining 81 resulted in incomplete configurations, which could still be used for computing the bounding box used to obtain a first separation. Since we assume that if a configuration looks like a person then its mirror image would too, we double the number of body configurations by flipping each one about a vertical axis. The bounding box is then computed from the resulting 548 points in the feature space, without looking at the images without people. The boosted classifier was trained to separate two classes: the 193 × 2 = 386 points corresponding to body configurations, and 60727 points that did not correspond to people but lay in the bounding box, obtained by using the bounding box classifier to incrementally build labelings for the images with no people. We added 1178 synthetic positive configurations obtained by randomly selecting each limb and the torso from one of the 386 real images of body configurations (which were rotated and scaled so the torso positions were the same in all of them) to give an effect of joining limbs and torsos from different images, rather like children's flip-books. Remarkably, the boosted classifier classified each of the real data points correctly but misclassified 976 out of the 1178 synthetic configurations as negative; the synthetic examples were unexpectedly more similar to the negative examples than the real positive examples were.
3.2 Results

The test dataset was separate from the training set and included 120 images with a person on a uniform background, and varying numbers of control images, reported in Table 1. We report results for two classifiers, one using 567 features and the other using a subset of 367 of those features. Table 1b shows the false positive and false negative rates achieved for each of the two classifiers. By marking 51% of test images and only 10% of control images, the classifier using 567 features compares extremely favorably with that of [3], which marked 54% of test images and 38% of control images using hand-tuned tests to form groups of four segments. In 55 of the 59 images where there was a false negative, a segment corresponding to a body part was missed by the segment finder, meaning that the overall system performance significantly understates the classifier performance. There are few signs of overfitting, probably because the features are highly redundant. Using the larger set of features makes labeling faster (by a factor of about five), because more configurations are rejected earlier.

4 Conclusions and Future Work

Groups of segments that satisfy kinematic constraints, learned from images of real people, quite reliably correspond to people and can be used to identify them. Our trick of projecting classifiers is effective at pruning an otherwise completely unmanageable correspondence search. Future issues include: fusing responses from face finders (such as those of [11, 9]); exploiting patterns of shading on human limbs to get better selectivity (as in [8]); determining the configuration of the person, which might tell what they are doing; and exploiting the kinematic similarities between humans and many animals to build systems that can find many different types of animal without searching the classes one by one.

References

[1] J. M. Brady and H. Asada. Smoothed local symmetries and their implementation.
International Journal of Robotics Research, 3(3), 1984.
[2] P. G. B. Enser. Query analysis in a visual information retrieval context. J. Document and Text Management, 1(1):25-52, 1993.
[3] M. M. Fleck, D. A. Forsyth, and C. Bregler. Finding naked people. In European Conference on Computer Vision 1996, Vol. II, pages 592-602, 1996.
[4] D. A. Forsyth and M. M. Fleck. Body plans. In IEEE Conf. on Computer Vision and Pattern Recognition, 1997.
[5] D. A. Forsyth, J. Malik, M. M. Fleck, H. Greenspan, T. Leung, S. Belongie, C. Carson, and C. Bregler. Finding pictures of objects in large collections of images. In Proc. 2nd International Workshop on Object Representation in Computer Vision, 1996.
[6] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In Machine Learning 13, 1996.
[7] W. E. L. Grimson and T. Lozano-Perez. Localizing overlapping parts by searching the interpretation tree. IEEE Trans. Patt. Anal. Mach. Intell., 9(4):469-482, 1987.
[8] J. Haddon and D. A. Forsyth. Shading primitives. In Int. Conf. on Computer Vision, 1997, to appear.
[9] H. A. Rowley, S. Baluja, and T. Kanade. Human face detection in visual scenes. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing 8, pages 875-881, 1996.
[10] Elte Shuppan. Pose file, volume 1-7. Books Nippan, 1993-1996. A collection of photographs of human models, annotated in Japanese.
[11] K.-K. Sung and T. Poggio. Example based learning for view based face detection. AI memo 1521, MIT, 1994.
|
1998
|
30
|
1,527
|
Visualizing Group Structure*

Marcus Held, Jan Puzicha, and Joachim M. Buhmann
Institut für Informatik III, Römerstraße 164, D-53117 Bonn, Germany
email: {held,jan,jb}@cs.uni-bonn.de, WWW: http://www-dbv.cs.uni-bonn.de

Abstract

Cluster analysis is a fundamental principle in exploratory data analysis, providing the user with a description of the group structure of given data. A key problem in this context is the interpretation and visualization of clustering solutions in high-dimensional or abstract data spaces. In particular, probabilistic descriptions of the group structure, essential to capture inter-cluster relationships, are hardly assessable by simple inspection of the probabilistic assignment variables. We present a novel approach to the visualization of group structure. It is based on a statistical model of the object assignments which have been observed or estimated by a probabilistic clustering procedure. The objects or data points are embedded in a low dimensional Euclidean space by approximating the observed data statistics with a Gaussian mixture model. The algorithm provides a new approach to the visualization of the inherent structure for a broad variety of data types, e.g. histogram data, proximity data and co-occurrence data. To demonstrate the power of the approach, histograms of textured images are visualized as an example of a large-scale data mining application.

1 Introduction

Clustering and visualization are key issues in exploratory data analysis and are fundamental principles of many unsupervised learning schemes. For a given data set, the aim of any clustering approach is to extract a description of the inherent group structure. The object space is partitioned into groups where each partition
The object space is partitioned into groups where each partition -This work has been supported by the German Research Foundation (DFG) under grant #BU 914/3-1, by the German Israel Foundation for Science and Research Development (GlF) under grant #1-0403-001.06/95 and by the Federal Ministry for Education, Science and Technology (BMBF #01 M 3021 A/4). Visualizing Group Structure 453 is as homogeneous as possible and two partitions are maximally heterogeneous. For several reasons it is useful to deal with probabilistic partitioning approaches: 1. The data generation process itself might be stochastic, resulting in overlapping partitions. Thus, a probabilistic group description is adequate and provides additional information about the inter-cluster relations. 2. The number of clusters might be chosen too large. Forcing the algorithm to a hard clustering solution creates artificial structure not supported by the data. On the other hand, superfluous clusters can be identified by a probabilistic group description . 3. There exists theoretical and empirical evidence that probabilistic assignments avoid over-fitting phenomena [7]. Several well-known clustering schemes result in fuzzy cluster assignments: For the most common type of vector- valued data, heuristic fuzzy clustering methods were suggested [4, 5] . In a more principled way, deterministic annealing algorithms provide fuzzy clustering solutions for a given cost function with a rigorous statistical foundation and have been developed for vectorial [9], proximity [6] and histogram data [8]. In mixture model approaches the assignments of objects to groups are interpreted as missing data. Its conditional expectations given the data and the estimated cluster parameters are computed during the E- step in the corresponding EM-algorithm and can be understood as assignment probabilities. 
The aim of this contribution is to develop a generic framework to visualize such probabilities as distances in a low dimensional Euclidean space. Especially in high dimensional or abstract object spaces, the interpretation of fuzzy group structure is rather difficult, as humans do not perform very well in interpreting probabilities. It is, therefore, a key issue to make an interpretation of the cluster structure more feasible. In contrast to multidimensional scaling (MDS), where objects are embedded in low dimensional Euclidean spaces by preserving the original inter-object distances [3], our approach yields a mixture model in low dimensions, where the probabilities for assigning objects to clusters are maximally preserved. The proposed approach is similar in spirit to data visualization methods like projection pursuit clustering, GTM [1], simultaneous clustering and embedding [6], and hierarchical latent variable models [2]. It also aims at visualizing high dimensional data. But while the other methods try to model the data itself by a low dimensional generator model, we seek to model the inferred probabilistic grouping structure. As a consequence, the framework is generic in the sense that it is applicable to any probabilistic or fuzzy group description. The key idea is to interpret a given probabilistic group description as an observation of an underlying random process. We estimate a low-dimensional statistical model by maximum likelihood inference, which provides the visualization. To our knowledge the proposed algorithm provides the first solution to the visualization of distributional data, where the observations for an object consist of a histogram of measured features. Such data is common in data mining applications like image retrieval, where image similarity is often based on histograms of color or texture features. Moreover, our method is applicable to proximity and co-occurrence data.
2 Visualizing Probabilistic Group Structure

Let a set of N (abstract) objects O = {o_1, ..., o_N} be given which have been partitioned into K groups or clusters. Let the fuzzy assignment of object o_i to cluster c_ν be given by q_iν ∈ [0,1], where we assume Σ_{ν=1}^K q_iν = 1 to enable a probabilistic interpretation. We assume that there exists an underlying "true" assignment of objects to clusters which we encode by Boolean variables M_iν denoting whether object o_i belongs to (has been generated by) cluster c_ν.

454 M. Held, J. Puzicha and J. M. Buhmann

We thus interpret q_iν as an empirical estimate of the probability P(M_iν = 1). For notational simplicity, we summarize the assignment variables in matrices Q = (q_iν) and M = (M_iν). The key idea for visualizing group structure is to exploit a low-dimensional statistical model which "explains" the observed q_iν. The parameters are estimated by maximum likelihood inference and provide a natural data visualization. Gaussian mixture models in low dimensions (typically d = 2 or d = 3) are often appropriate, but the scheme could easily be extended to other classes, e.g. hierarchical models. To define the Gaussian mixture model, we first introduce a set of prototypes Y = {y_1, ..., y_K} ⊂ R^d representing the K clusters, and a set of vector-valued object parameters X = {x_1, ..., x_N} ⊂ R^d. To model the assignment probabilities, the prototypes Y and the data points X are chosen such that the resulting assignment probabilities are maximally similar to the given frequencies Q. For the Gaussian mixture model we have

m_iν = exp(-β ||x_i - y_ν||²) / Σ_{μ=1}^K exp(-β ||x_i - y_μ||²).   (1)

Note that the probability distribution is invariant under translation and rotation of the complete parameter sets X, Y. In addition, the scale parameter β could be dropped, since a change of β only results in a rescaling of the prototypes Y and the data points X. For the observation Q the log-likelihood is given by¹

L_Q(X, Y) = Σ_{i=1}^N Σ_{ν=1}^K q_iν log m_iν.   (2)
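As an illustration, the assignment probabilities (1) and the log-likelihood (2) can be sketched in a few lines of NumPy; the function and variable names here are ours, not the paper's:

```python
import numpy as np

def assignment_probs(X, Y, beta=1.0):
    """m_iv = exp(-beta ||x_i - y_v||^2) / sum_mu exp(-beta ||x_i - y_mu||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)  # (N, K) squared distances
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)              # numerical stability
    m = np.exp(logits)
    return m / m.sum(axis=1, keepdims=True)

def log_likelihood(Q, X, Y, beta=1.0):
    """L_Q(X, Y) = sum_i sum_v q_iv log m_iv."""
    return float((Q * np.log(assignment_probs(X, Y, beta))).sum())
```

By the Gibbs inequality, when Q equals the model probabilities at (X, Y), no other prototype configuration can attain a higher value of L_Q for that Q.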
It is worth noting that when the q_iν = ⟨M_iν⟩_{P^true} are estimates obtained from a factorial distribution, i.e. P^true(M) = Π_i Σ_ν M_iν q_iν, then maximizing (2) is identical to minimizing the Kullback-Leibler (KL-)divergence D_KL(P^true || P) = Σ_M P^true log(P^true / P). In that case the similarity to the recent approach of Hofmann et al. [6], proposed as the minimization of D_KL(P || P^true), becomes apparent. Compared to [6] the roles of P and P^true are interchanged. From an information-theoretic viewpoint D_KL(P^true || P) is a better choice, as it quantifies the coding inefficiency of assuming the distribution P when the true distribution is P^true. Note that the choice of the KL-divergence as a distortion measure for distributions follows intrinsically from the likelihood principle. Maximum likelihood estimates are derived by differentiation:

∂L_Q/∂x_i = Σ_{ν=1}^K (q_iν / m_iν) ∂m_iν/∂x_i = -2β Σ_{ν=1}^K q_iν (Σ_{μ=1}^K m_iμ y_μ - y_ν),   (3)

∂L_Q/∂y_α = Σ_{i=1}^N Σ_{ν=1}^K (q_iν / m_iν) ∂m_iν/∂y_α = -2β Σ_{i=1}^N Σ_{ν=1}^K q_iν (m_iα - δ_αν)(x_i - y_α) = -2β Σ_{i=1}^N (m_iα - q_iα)(x_i - y_α).   (4)

The gradients can be used for any gradient descent scheme. In the experiments, we used (3)-(4) in conjunction with a simple gradient descent technique, which has been observed to be efficient and reliable up to a few hundred objects.

¹ Here, it is implicitly assumed that all q_iν have been estimated based on the same amount of information.

Figure 1: Visualization of two-dimensional artificial data. Original data generated by the mixture model with β = 1.0 and 5 prototypes. Crosses denote the data points x_i, circles the prototypes y_α. The embedding prototypes are plotted as squares, while the embedding data points are diamonds. The contours are given by f(x) = max_α (exp(-β ||x - y_α||²) / Σ_{μ=1}^K exp(-β ||x - y_μ||²)). For visualization purposes the embedding is translated and rotated into the correct position.
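The gradients (3)-(4) can be sketched and verified against a finite-difference approximation as follows (a sketch under our own naming, using the identity Σ_ν q_iν = 1 to simplify (3)):

```python
import numpy as np

def probs(X, Y, beta=1.0):
    """Model probabilities m_iv of the low-dimensional Gaussian mixture."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)
    m = np.exp(logits)
    return m / m.sum(axis=1, keepdims=True)

def loglik(Q, X, Y, beta=1.0):
    return float((Q * np.log(probs(X, Y, beta))).sum())

def gradients(Q, X, Y, beta=1.0):
    """dL_Q/dx_i and dL_Q/dy_a, following equations (3) and (4)."""
    M = probs(X, Y, beta)
    gX = 2.0 * beta * (Q - M) @ Y                  # eq. (3), using sum_v q_iv = 1
    diff = X[None, :, :] - Y[:, None, :]           # (K, N, d): x_i - y_a
    gY = 2.0 * beta * ((Q - M).T[:, :, None] * diff).sum(axis=1)   # eq. (4)
    return gX, gY
```

Taking small ascent steps along these gradients increases the log-likelihood (2), which is the simple gradient scheme the text refers to.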
From (4) an explicit formula for the prototypes may be recovered by setting the gradient to zero,

y_α = Σ_{i=1}^N (m_iα - q_iα) x_i / Σ_{i=1}^N (m_iα - q_iα),   (5)

which can be interpreted as an alternative centroid rule. The position of the prototypes is dominated by objects with a large deviation between modeled and measured assignment probabilities. Note that (5) should not be used as an iterative equation, as the corresponding fixed point is not contractive.

3 Results

As a first experiment we discuss the approximately recoverable case, where we sample from (1) to generate artificial two-dimensional data and infer the positions of the sample points and of the prototypes by the visualizing group structure approach (see Fig. 1). Due to iso-contour lines in the generator density and in the visualization density, not all data positions are recovered exactly. We like to emphasize that the complete information available on the grouping structure of the data is preserved, since the mean KL-divergence is quite small (≈ 2.10 · 10⁻⁵). It is worth mentioning that the rank order of the assignments of objects o_i to clusters c_α is completely preserved.

For many image retrieval systems image similarity has been defined as similarity of occurring feature coefficients, e.g. colors or texture features. In [7], a novel statistical mixture model for distributional data, the probabilistic histogram clustering (ACM), has been proposed, which we applied to extract the group structure inherent in image databases based on histograms of textured image features. The ACM explains the observed data by the generative model:

1. select an object o_i ∈ O with probability p_i,
2. choose a cluster c_α according to the cluster membership M_iα of o_i,
3. sample a feature v_j ∈ V from the cluster-conditional distribution q_{j|α}.

This generative model is formalized by

P(o_i, v_j | M, p, q) = p_i Σ_{α=1}^K M_iα q_{j|α}.   (6)

The parameters are estimated by maximum likelihood inference.

Figure 2: Embedding of the VisTex database with MDS.
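The three-step generative process behind (6) can be sketched as follows. Hard memberships are encoded as one cluster index per object, and all names are illustrative rather than taken from [7]:

```python
import numpy as np

def sample_acm(p, cluster_of, q_cond, n, seed=0):
    """Draw n (object, feature) pairs from the histogram-clustering generative model.

    p          : (N,) object selection probabilities p_i
    cluster_of : (N,) cluster index a of each object (hard membership M_ia)
    q_cond     : (K, F) cluster-conditional feature distributions q_{j|a}
    """
    rng = np.random.default_rng(seed)
    pairs = []
    for _ in range(n):
        i = rng.choice(len(p), p=p)                    # 1. select object o_i
        a = cluster_of[i]                              # 2. its cluster c_a
        j = rng.choice(q_cond.shape[1], p=q_cond[a])   # 3. sample feature v_j
        pairs.append((i, j))
    return pairs
```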
The assignments M_iα are treated as unobserved data in an (annealed) EM procedure, which provides a probabilistic group description. For the details we refer to [7]. In the experiments, texture features are extracted by a bank of 12 Gabor filters with 3 scales and 4 orientations. Different Gabor channels are assumed to be independently distributed, which results in a concatenated histogram of the empirically measured channel distributions. Each channel was discretized into 40 bins, resulting in a 480-dimensional histogram representing one image. For the experiments two different databases were used. In Fig. 3 a probabilistic K = 10 cluster solution with 160 images containing different textures taken from the Brodatz album is visualized. The clustering algorithm produces 8 well-separated clusters, while the two clusters in the mid region exhibit substantial overlap. A close inspection of these two clusters indicates that the fuzziness of the assignments in this area is plausible, as the textures in this area have similar frequency components in common. The result for a more complex database of 220 textured images taken from the MIT VisTex image database, with a large range of uniformly and non-uniformly textured images, is depicted in Fig. 4. This plot indicates that the proposed approach provides a structured view on image databases. Especially the upper left cluster yields some insight into the clustering solution, as this cluster consists of a large range of non-uniformly textured images, enabling the user to decide that a higher number of clusters might yield a better solution. The visualization approach fits naturally in an interactive scenario, where the user can choose interactively data points to focus his examination on certain areas of interest in the clustering solution. For comparison, we present in Fig. 2 a multidimensional scaling (Sammon's mapping [3]) solution for the VisTex database.
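The concatenated channel-histogram representation (12 channels with 40 bins each, giving 480 dimensions) can be sketched as follows; the bin range and the names are our assumptions, since the paper only specifies the channel and bin counts:

```python
import numpy as np

def concat_histogram(channels, bins=40, value_range=(-1.0, 1.0)):
    """Concatenate normalized per-channel response histograms into one vector."""
    parts = []
    for r in channels:
        h, _ = np.histogram(np.ravel(r), bins=bins, range=value_range)
        parts.append(h / max(h.sum(), 1))     # empirical channel distribution
    return np.concatenate(parts)
```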
A detailed inspection of this plot indicates that the embedding is locally quite satisfactory, while no global structure of the database is visible. This is explained by the fact that Sammon's mapping only tries to preserve the object distances, while our novel approach first extracts group structure in a high-dimensional feature space and then embeds this group structure in a low-dimensional Euclidean space. While MDS completely neglects the grouping structure, we do not care for the exact inter-object distances.

4 Conclusion

In this contribution, a generic framework for the low-dimensional visualization of probabilistic group structure was presented. The effectiveness of this approach was demonstrated by experiments on artificial data as well as on databases of textured images. While we have focussed on histogram data, the generality of the approach makes it feasible to visualize a broad range of different data types, e.g. vectorial, proximity or co-occurrence data. Thus, it is useful in a broad variety of applications, ranging from image or document retrieval tasks and the analysis of marketing data to the inspection of protein data. We believe that this technique provides the user substantial insight into the validity of clustering solutions, making the inspection and interpretation of large databases more practicable. A natural extension of the proposed approach leads to the visualization of hierarchical cluster structures by a hierarchy of visualization plots.

References

[1] C. M. Bishop, M. Svensen, and C. K. I. Williams. GTM: the generative topographic mapping. Neural Computation, 10(1):215-234, 1998.
[2] C. M. Bishop and M. E. Tipping. A hierarchical latent variable model for data visualization. Technical Report NCRG/96/028, Neural Computing Research Group, Dept. of Computer Science & Applied Mathematics, Aston University, 1998.
[3] T. F. Cox and M. A. A. Cox. Multidimensional Scaling, volume 59 of Monographs on Statistics and Applied Probability.
Chapman & Hall, London, New York, 1994.
[4] J. C. Dunn. A fuzzy relative of the ISODATA process and its use in detecting well-separated clusters. Journal of Cybernetics, 3:32-57, 1975.
[5] I. Gath and A. Geva. Unsupervised optimal fuzzy clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11:773-781, 1989.
[6] T. Hofmann and J. M. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(1):1-25, 1997.
[7] T. Hofmann, J. Puzicha, and M. I. Jordan. Learning from dyadic data. In Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[8] F. C. N. Pereira, N. Z. Tishby, and L. Lee. Distributional clustering of English words. In 30th Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, pages 183-190, 1993.
[9] K. Rose, E. Gurewitz, and G. C. Fox. A deterministic annealing approach to clustering. Pattern Recognition Letters, 11(9):589-594, September 1990.

Figure 3: Visualization of a probabilistic grouping structure inferred for a database of 160 Brodatz textures. A mean KL-divergence of 0.031 is obtained.

Figure 4: Visualization of a probabilistic grouping structure inferred for 220 images of the VisTex database. A mean KL-divergence of 0.0018 is obtained.
Modeling Surround Suppression in V1 Neurons with a Statistically-Derived Normalization Model

Eero P. Simoncelli
Center for Neural Science, and Courant Institute of Mathematical Sciences, New York University
eero.simoncelli@nyu.edu

Odelia Schwartz
Center for Neural Science, New York University
odelia@cns.nyu.edu

Abstract

We examine the statistics of natural monochromatic images decomposed using a multi-scale wavelet basis. Although the coefficients of this representation are nearly decorrelated, they exhibit important higher-order statistical dependencies that cannot be eliminated with purely linear processing. In particular, rectified coefficients corresponding to basis functions at neighboring spatial positions, orientations and scales are highly correlated. A method of removing these dependencies is to divide each coefficient by a weighted combination of its rectified neighbors. Several successful models of the steady-state behavior of neurons in primary visual cortex are based on such "divisive normalization" computations, and thus our analysis provides a theoretical justification for these models. Perhaps more importantly, the statistical measurements explicitly specify the weights that should be used in computing the normalization signal. We demonstrate that this weighting is qualitatively consistent with recent physiological experiments that characterize the suppressive effect of stimuli presented outside of the classical receptive field. Our observations thus provide evidence for the hypothesis that early visual neural processing is well matched to these statistical properties of images.

An appealing hypothesis for neural processing states that sensory systems develop in response to the statistical properties of the signals to which they are exposed [e.g., 1, 2]. This has led many researchers to look for a means of deriving a model of cortical processing purely from a statistical characterization of sensory signals.
In particular, many such attempts are based on the notion that neural responses should be statistically independent. The pixels of digitized natural images are highly redundant, but one can always find a linear decomposition (i.e., principal component analysis) that eliminates second-order correlation.* A number of researchers have used such concepts to derive linear receptive fields similar to those determined from physiological measurements [e.g., 16, 20]. The principal components decomposition is, however, not unique. Because of this, these early attempts required additional constraints, such as spatial locality and/or symmetry, in order to achieve functions approximating cortical receptive fields. More recently, a number of authors have shown that one may use higher-order statistical measurements to uniquely constrain the choice of linear decomposition [e.g., 7, 9]. This is commonly known as independent components analysis. Vision researchers have demonstrated that the resulting basis functions are similar to cortical receptive fields, in that they are localized in spatial position, orientation and scale [e.g., 17, 3]. The associated coefficients of such decompositions are (second-order) decorrelated, highly kurtotic, and generally more independent than principal components. But the response properties of neurons in primary visual cortex are not adequately described by linear processes. Even if one chooses to describe only the mean firing rate of such neurons, one must at a minimum include a rectifying, saturating nonlinearity. A number of authors have shown that a gain control mechanism, known as divisive normalization, can explain a wide variety of the nonlinear behaviors of these neurons [18, 4, 11, 12, 6].

* Research supported by an Alfred P. Sloan Fellowship to EPS, and by the Sloan Center for Theoretical Neurobiology at NYU.
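As a concrete illustration of the second-order decorrelation mentioned above, a PCA whitening transform (our sketch, not a procedure from the paper) produces components with identity covariance:

```python
import numpy as np

def pca_whiten(X):
    """Linear transform whose output components are decorrelated with unit variance."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)            # second-order statistics
    w, V = np.linalg.eigh(C)                # principal axes and variances
    return (Xc @ V) / np.sqrt(w)            # rotate, then equalize variances
```

Any further rotation of the whitened output is equally decorrelated, which is the non-uniqueness the text points out.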
In most instantiations of normalization, the response of each linear basis function is rectified (and typically squared) and then divided by a uniformly weighted sum of the rectified responses of all other neurons. Physiologically, this is hypothesized to occur via feedback shunting inhibitory mechanisms [e.g., 13, 5]. Ruderman and Bialek [19] have discussed divisive normalization as a means of increasing entropy. In this paper, we examine the joint statistics of coefficients of an orthonormal wavelet image decomposition that approximates the independent components of natural images. We show that the coefficients are second-order decorrelated, but not independent. In particular, pairs of rectified responses are highly correlated. These pairwise dependencies may be eliminated by dividing each coefficient by a weighted combination of the rectified responses of other neurons, with the weighting determined from image statistics. We show that the resulting model, with all parameters determined from the statistics of a set of images, can account for recent physiological observations regarding suppression of cortical responses by stimuli presented outside the classical receptive field. These concepts have been previously presented in [21, 25].

1 Joint Statistics of Orthonormal Wavelet Coefficients

Multi-scale linear transforms such as wavelets have become popular for image representation. Typically, the basis functions of these representations are localized in spatial position, orientation, and spatial frequency (scale). The coefficients resulting from projection of natural images onto these functions are essentially uncorrelated. In addition, a number of authors have noted that wavelet coefficients have significantly non-Gaussian marginal statistics [e.g., 10, 14]. Because of these properties, we believe that wavelet bases provide a close approximation to the independent components decomposition for natural images.
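A minimal sketch of the divisive normalization computation described above, with uniform weights as in the classic models (the paper's contribution is to derive the weights from image statistics instead); the names and the saturation constant are our assumptions:

```python
import numpy as np

def divisive_normalize(L, W, sigma=0.1):
    """R_i = L_i^2 / (sigma^2 + sum_j W_ij * L_j^2).

    L : (n,) linear basis-function responses
    W : (n, n) normalization weights (uniform here; statistically derived
        in the paper)
    """
    L2 = np.asarray(L, dtype=float) ** 2     # rectify and square
    return L2 / (sigma ** 2 + W @ L2)        # divide by weighted pooled energy
```

Because the denominator pools the energy of all responses, a strong surround suppresses the output of any single unit, which is the gain-control behavior the models exploit.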
For the purposes of this paper, we utilize a typical separable decomposition, based on symmetric quadrature mirror filters taken from [23]. The decomposition is constructed by splitting an image into four subbands (lowpass, vertical, horizontal, diagonal), and then recursively splitting the lowpass subband. Despite the decorrelation properties of the wavelet decomposition, it is quite evident that wavelet coefficients are not statistically independent [26, 22]. Large-magnitude coefficients (either positive or negative) tend to lie along ridges with orientation matching that of the subband. Large-magnitude coefficients also tend to occur at the same relative spatial locations in subbands at adjacent scales and orientations. To make these statistical relationships
Familiarity Discrimination of Radar Pulses

Eric Granger¹, Stephen Grossberg², Mark A. Rubin², William W. Streilein²
¹ Department of Electrical and Computer Engineering, École Polytechnique de Montréal, Montréal, Qc. H3C 3A7, CANADA
² Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215, USA

Abstract

The ARTMAP-FD neural network performs both identification (placing test patterns in classes encountered during training) and familiarity discrimination (judging whether a test pattern belongs to any of the classes encountered during training). The performance of ARTMAP-FD is tested on radar pulse data obtained in the field, and compared to that of the nearest-neighbor-based NEN algorithm and to a k > 1 extension of NEN.

1 Introduction

The recognition process involves both identification and familiarity discrimination. Consider, for example, a neural network designed to identify aircraft based on their radar reflections and trained on sample reflections from ten types of aircraft A...J. After training, the network should correctly classify radar reflections belonging to the familiar classes A...J, but it should also abstain from making a meaningless guess when presented with a radar reflection from an object belonging to a different, unfamiliar class. Familiarity discrimination is also referred to as "novelty detection," a "reject option," and "recognition in partially exposed environments." ARTMAP-FD, an extension of fuzzy ARTMAP that performs familiarity discrimination, has shown its effectiveness on datasets consisting of simulated radar range profiles from aircraft targets [1, 2]. In the present paper we examine the performance of ARTMAP-FD on radar pulse data obtained in the field, and compare it to that of NEN, a nearest-neighbor-based familiarity discrimination algorithm, and to a k > 1 extension of NEN.
2 Fuzzy ARTMAP

Fuzzy ARTMAP [3] is a self-organizing neural network for learning, recognition, and prediction. Each input a learns to predict an output class K. During training, the network creates internal recognition categories, with the number of categories determined on-line by predictive success. Components of the vector a are scaled so that each a_i ∈ [0,1] (i = 1...M). Complement coding [4] doubles the number of components in the input vector, which becomes A = (a, a^c), where the ith component of a^c is a_i^c = (1 - a_i). With fast learning, the weight vector w_j records the largest and smallest component values of input vectors placed in the jth category. The 2M-dimensional vector w_j may be visualized as the hyperbox R_j that just encloses all the vectors a that selected category j during training. Activation of the coding field F_2 is determined by the Weber law choice function T_j(A) = |A ∧ w_j| / (α + |w_j|), where (P ∧ Q)_i = min(P_i, Q_i) and |P| = Σ_i |P_i|. With winner-take-all coding, the F_2 node J that receives the largest F_1 → F_2 input T_j becomes active. Node J remains active if it satisfies the matching criterion: |A ∧ w_J| / |A| = |A ∧ w_J| / M > ρ, where ρ ∈ [0,1] is the dimensionless vigilance parameter. Otherwise, the network resets the active F_2 node and searches until J satisfies the matching criterion. If node J then makes an incorrect class prediction, a match tracking signal raises vigilance just enough to induce a search, which continues until either some F_2 node becomes active for the first time, in which case J learns the correct output class label k(J) = K; or a node J that has previously learned to predict K becomes active. During testing, a pattern a that activates node J is predicted to belong to the class K = k(J).

3 ARTMAP-FD

Familiarity measure.
During testing, an input pattern a is defined as familiar when a familiarity function φ(A) is greater than a decision threshold γ. Once a category choice has been made by the winner-take-all rule, fuzzy ARTMAP ignores the size of the input T_J. In contrast, ARTMAP-FD uses T_J to define familiarity, taking

φ(A) = T_J(A) / T_J^MAX = |A ∧ w_J| / |w_J|,   (1)

where T_J^MAX = |w_J| / (α + |w_J|). This maximal value of T_J is attained by each input a that lies in the hyperbox R_J, since |A ∧ w_J| = |w_J| for these points. An input that chooses category J during testing is then assigned the maximum familiarity value 1 if and only if a lies within R_J.

Familiarity discrimination algorithm. ARTMAP-FD is identical to fuzzy ARTMAP during training. During testing, φ(A) is computed after fuzzy ARTMAP has yielded a winning node J and a predicted class K = k(J). If φ(A) > γ, ARTMAP-FD predicts class K for the input a. If φ(A) ≤ γ, a is regarded as belonging to an unfamiliar class and the network makes no prediction. Note that fuzzy ARTMAP can also abstain from classification, when the baseline vigilance parameter ρ̄ is greater than zero during testing. Typically ρ̄ = 0 during training, to maximize code compression. In radar range profile simulations such as those described below, fuzzy ARTMAP can perform familiarity discrimination when ρ > 0 during both training and testing. However, accurate discrimination requires that ρ be close to 1, which causes category proliferation during training. Range profile simulations have also set ρ = 0 during both training and testing, but with the familiarity measure set equal to the fuzzy ARTMAP match function:

φ(A) = |A ∧ w_J| / M.   (2)

This approach is essentially equivalent to taking ρ = 0 during training and ρ > 0 during testing, with ρ = γ. However, for a test set input a ∈ R_J, the function defined by (2) sets φ(A) = |w_J| / M, which may be large or small although a is familiar.
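In code, the complement coding, choice, and familiarity computations of Secs. 2-3 look roughly like this (a sketch with our own function names, not code from [3]):

```python
import numpy as np

def complement_code(a):
    """A = (a, 1 - a); note |A| = M for any a in [0,1]^M."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def choice(A, w, alpha=0.001):
    """Weber-law choice function T_j(A) = |A ^ w_j| / (alpha + |w_j|)."""
    return np.minimum(A, w).sum() / (alpha + w.sum())

def familiarity(A, wJ):
    """phi(A) = |A ^ w_J| / |w_J|, equation (1); equals 1 iff a lies in R_J."""
    return np.minimum(A, wJ).sum() / wJ.sum()
```

A test input is then declared familiar and given class k(J) when familiarity(A, wJ) exceeds the threshold γ, and rejected as unfamiliar otherwise.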
Thus this function does not provide as good familiarity discrimination as the one defined by (1), which always sets φ(A) = 1 when a ∈ R_J. Except as noted, all the simulations below employ the function (1), with ρ = 0.

Sequential evidence accumulation. ART-EMAP (Stage 3) [5] identifies a test set object's class after exposure to a sequence of input patterns, such as differing views, all identified with that one object. Training is identical to that of fuzzy ARTMAP, with winner-take-all coding at F_2. ART-EMAP generally employs distributed F_2 coding during testing. With winner-take-all coding during testing as well as training, ART-EMAP predicts the object's class to be the one selected by the largest number of inputs in the sequence. Extending this approach, ARTMAP-FD accumulates familiarity measures for each predicted class K as the test set sequence is presented. Once the winning class is determined, the object's familiarity is defined as the average accumulated familiarity measure of the predicted class during the test sequence.

4 Familiarity discrimination simulations

Since familiarity discrimination involves placing an input into one of two sets, familiar and unfamiliar, the receiver operating characteristic (ROC) formalism can be used to evaluate the effectiveness of ARTMAP-FD on this task. The hit rate H is the fraction of familiar targets the network correctly identifies as familiar, and the false alarm rate F is the fraction of unfamiliar targets the network incorrectly identifies as familiar. An ROC curve is a plot of H vs. F, parameterized by the threshold γ (i.e., it is equivalent to the two curves F(γ) and H(γ)). The area under the ROC curve is the c-index, a measure of predictive accuracy that is independent of both the fraction of positive (familiar) cases in the test set and the positive-case decision threshold γ. An ARTMAP-FD network was trained on simulated radar range profiles from 18 targets out of a 36-target set (Fig. 1a).
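The c-index described above, the area under the H-vs-F ROC curve swept out by the threshold γ, equals the probability that a familiar pattern receives a larger familiarity value than an unfamiliar one. A sketch with illustrative names:

```python
import numpy as np

def c_index(phi_familiar, phi_unfamiliar):
    """Area under the ROC curve traced out by sweeping the threshold gamma.

    Computed as the rank statistic: the fraction of (familiar, unfamiliar)
    pairs in which the familiar pattern scores higher (ties count 1/2).
    """
    f = np.asarray(phi_familiar, dtype=float)[:, None]
    u = np.asarray(phi_unfamiliar, dtype=float)[None, :]
    return float(((f > u) + 0.5 * (f == u)).mean())
```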
Simulations tested sequential evidence accumulation performance for 1, 3, and 100 observations, corresponding to 0.05, 0.15, and 5.0 sec. of observation (smooth curves, Fig. 1b). As in the case of identification [6], a combination of multiwavelength range profiles and sequential evidence accumulation produces good familiarity discrimination, with the c-index approaching 1 as the number of sequential observations grows. Fig. 1b also demonstrates the importance of the proper choice of familiarity measure. The jagged ROC curve was produced by a familiarity discrimination simulation identical to that which resulted in the 100-sequential-view smooth curve, but using the match function (2) instead of φ as given by (1).

Figure 1: (a) 36 simulation targets with 6 wing positions and 6 wing lengths, and 100 scattering centers per target. Boxes indicate randomly selected familiar targets. (b) ROC curves from ARTMAP-FD simulations, with multiwavelength range profiles having 40 center frequencies. Sequential evidence accumulation for 1, 3 and 100 views uses familiarity measure (1) (smooth curves); and for 100 views uses the match function (2) (jagged curve). (c) Training and test curves of miss rate M = (1 - H) and false alarm rate F vs. threshold γ, for 36 targets and one view. Training curves intersect at the point where γ = Γ_p (predicted); and test curves intersect near the point where γ = Γ_a (optimal). The training curves are based on data from the first training epoch, the test curves on data from 3 training epochs.

5 Familiarity threshold selection

When a system is placed in operation, one particular decision threshold γ = Γ must be chosen. In a given application, selection of Γ depends upon the relative cost of errors due to missed targets and false alarms.
The optimal Γ corresponds to a point on the parameterized ROC curve that is typically close to the upper left-hand corner of the unit square, to maximize correct selection of familiar targets (H) while minimizing incorrect selection of unfamiliar targets (F).

Validation set method. To determine a predicted threshold Γ_p, the training data is partitioned into a training subset and a validation subset. The network is trained on the training subset, and an ROC curve (F(γ), H(γ)) is calculated for the validation subset. Γ_p is then taken to be the point on the curve that maximizes [H(γ) - F(γ)]. (For ease of computation the symmetry point on the curve, where 1 - H(γ) = F(γ), can yield a good approximation.) For a familiarity discrimination task the validation set must include examples of classes not present in the training set. Once Γ_p is determined, the training subset and validation subset should be recombined and the network retrained on the complete training set. The retrained network and the predicted threshold Γ_p are then employed for familiarity discrimination on the test set.

On-line threshold determination. During ARTMAP-FD training, category nodes compete for new patterns as they are presented. When a node J wins the competition, learning expands the category hyperbox R_J enough to enclose the training pattern a. The familiarity measure φ for each training set input then becomes equal to 1. However, before this learning takes place, φ can be less than 1, and the degree to which this initial value of φ is less than 1 reflects the distance from the training pattern to R_J. An event of this type (a training pattern successfully coded by a category node) is taken to be representative of familiar test-set patterns. The corresponding initial values of φ are thus used to generate a training hit rate curve, where H(γ) equals the fraction of training inputs with φ > γ. What about false alarms?
By definition, all patterns presented during training are familiar. However, a reset event during training (Sec. 2) resembles the arrival of an unfamiliar pattern during testing. Recall that a reset occurs when a category node that predicts class K wins the competition for a pattern that actually belongs to a different class k. The corresponding values of φ for these events can thus be used to generate a training false-alarm rate curve, where F(γ) equals the fraction of match-tracking inputs with initial φ > γ. Predictive accuracy is improved by use of a reduced set of φ values in the training-set ROC curve construction process. Namely, training patterns that fall inside R_J, where φ = 1, are not used, because these exemplars tend to distort the miss rate curve. In addition, the first incorrect response to a training input is the best predictor of the network's response to an unfamiliar testing input, since sequential search will not be available during testing. Finally, giving more weight to events occurring later in the training process improves accuracy. This can be accomplished by first computing training curves H(γ) and F(γ) and a preliminary predicted threshold Γ_p using the reduced training set; then recomputing the curves and Γ_p from data presented only after the system has activated the final category node of the training process (Fig. 1c). The final predicted threshold Γ_p averages these values. This calculation can still be made on-line, by taking the "final" node to be the last one activated. Table 1 shows that applying on-line threshold determination to simulated radar range profile data gives good predictions for the actual hit and false alarm rates, H_A and F_A. Furthermore, the H_A and F_A so obtained are close to optimal, particularly when the ROC curve has a c-index close to one. The method is effective even when testing involves sequential evidence accumulation, despite the fact that the training curves use only single views of each target.
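Both threshold-selection rules of Sec. 5 reduce to a one-dimensional search over candidate γ values. A sketch of the validation-set variant, which maximizes H(γ) - F(γ); the names are ours:

```python
import numpy as np

def predicted_threshold(phi_familiar, phi_unfamiliar):
    """Gamma_p = the candidate gamma maximizing H(gamma) - F(gamma).

    phi_familiar   : familiarity values of validation patterns from trained classes
    phi_unfamiliar : familiarity values of validation patterns from novel classes
    """
    phi_f = np.asarray(phi_familiar, dtype=float)
    phi_u = np.asarray(phi_unfamiliar, dtype=float)
    best_g, best_val = 0.0, -np.inf
    for g in np.unique(np.concatenate([phi_f, phi_u])):
        H = np.mean(phi_f > g)          # hit rate at threshold g
        F = np.mean(phi_u > g)          # false alarm rate at threshold g
        if H - F > best_val:
            best_val, best_g = H - F, float(g)
    return best_g
```

The on-line variant differs only in where the two φ samples come from: successfully coded patterns stand in for familiar inputs and match-tracking (reset) events for unfamiliar ones.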
6 NEN

Near-enough-neighbor (NEN) [7, 8] is a familiarity discrimination algorithm based on the single-nearest-neighbor classifier. For each familiar class K, the familiarity threshold Δ_K is the largest distance between any training pattern of class K and its nearest neighbor also of class K. During testing, a test pattern is declared unfamiliar if the distance to its nearest neighbor is greater than the threshold Δ_K corresponding to the class K of that nearest neighbor. We have extended NEN to k > 1 by retaining the above definition of the Δ_K's, while taking the comparison during testing to be between Δ_K and the distance between the test pattern and the closest of its k nearest neighbors which is of the class K to which the test pattern is deemed to belong.

7 Radar pulse data

Identifying the type of emitter from which a radar signal was transmitted is an important task for radar electronic support measures (ESM) systems. Familiarity discrimination is a key component of this task, particularly as the continual proliferation of new emitters outstrips the ability of emitter libraries to document every sort of emitter which may be encountered. The data analyzed here, gathered by Defense Research Establishment Ottawa, consist of radar pulses from 12 shipborne navigation radars [9]. Fifty pulses were collected from each radar, with the exception of radars #7 (100 pulses) and #8 (200 pulses). The pulses were preprocessed to yield 800 15-component vectors, with the components taking values between 0 and 1.

E. Granger, S. Grossberg, M. A. Rubin and W. W. Streilein

                      3x3               6x6               6x6*
                 actual  optimal   actual  optimal   actual  optimal
hit rate          0.81    0.86      0.77    0.77      0.99    0.98
false alarm rate  0.11    0.14      0.24    0.23      0.06    0.02
accuracy          0.95    1.00      0.93    1.00      1.00    1.00

Table 1: Familiarity discrimination, using ARTMAP-FD with on-line threshold prediction, of simulated radar range profile data. Training on half the target classes (boxed "aircraft" in Fig. 1a), testing on all target classes. (In the 3x3 case, 4 classes out of 9 total used for training.) Accuracy equals the fraction of correctly-classified targets out of familiar targets selected by the network as familiar. The results for the 6x6* dataset involve sequential evidence accumulation, with 100 observations (5 sec.) per test target.

Radar range profile simulations use 40 center frequencies evenly spaced between 18 GHz and 22 GHz, and wp × wl simulated targets, where wp = number of wing positions and wl = number of wing lengths. The number of range bins (2/3 m per bin) is 60, so each pattern vector has (60 range bins) × (40 center frequencies) = 2400 components. Training patterns are at 21 evenly spaced aspects in a 10° angular range and, for each viewing angle, at 15 downrange shifts evenly spaced within a single bin width. Testing patterns are at random aspects and downrange shifts within the angular range and half the total range profile extent of (60 bins) × (2/3 m) = 40 m.

method         ARTMAP-FD                     NEN
                           city-block metric       Euclidean metric
                           k=1    k=5    k=25      k=1    k=5    k=25
hit rate        0.95       0.94   0.94   0.93      0.94   0.93   0.92
f. a. rate      0.02       0.13   0.04   0.02      0.14   0.05   0.02
accuracy        1.00       1.00   1.00   1.00      0.99   1.00   1.00
memory          21         446

Table 2: Familiarity discrimination of radar pulse data set, using ARTMAP-FD and NEN with different metrics and values of k. Figure given for memory is twice the number of F2 nodes (due to complement coding) for ARTMAP-FD, number of training patterns for NEN. Training (single epoch) on first three quarters of data in classes 1-9, testing on other quarter of data in classes 1-9 and all data in classes 10-12. (Values given are averages over four cyclic permutations of the 12 classes.) ARTMAP-FD familiarity threshold determined by validation-set method with retraining.

8 Results

From Table 2, ARTMAP-FD is seen to perform effective familiarity discrimination on the radar pulse data.
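The NEN rule of Sec. 6 is easy to make concrete. A minimal sketch of the k = 1 case, with array layout and function names of our own choosing:

```python
import numpy as np

def nen_thresholds(X, y):
    """Per-class NEN thresholds: for each class K, the largest distance from
    any training pattern of class K to its nearest neighbor of class K."""
    thresholds = {}
    for K in np.unique(y):
        P = X[y == K]
        D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
        np.fill_diagonal(D, np.inf)          # exclude self-distance
        thresholds[K] = D.min(axis=1).max()  # max over patterns of NN distance
    return thresholds

def nen_classify(x, X, y, thresholds):
    """Return (predicted class, familiar?) for a test pattern x (k = 1)."""
    d = np.linalg.norm(X - x, axis=1)
    j = int(np.argmin(d))
    K = y[j]
    return K, d[j] <= thresholds[K]
```

The k > 1 extension described in the text would keep the same thresholds but compare Δ_K against the distance to the closest of the k nearest neighbors belonging to the deemed class.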
NEN (k = 1) performs comparatively poorly. Extensions of NEN to k > 1 perform well. During fielded operation these would incur the cost of the additional computation required to find the k nearest neighbors of the current test pattern, as well as the cost of higher memory requirements¹ relative to ARTMAP-FD. The combination of low hit rate with low false alarm rate obtained by NEN on the simulated radar range profile datasets (Table 3) suggests that the algorithm performs poorly here because it selects a familiarity threshold which is too high. ARTMAP-FD on-line threshold selection, on the other hand, yields a value for the familiarity threshold which balances the desiderata of high hit rate and low false alarm rate.

¹The memory requirements of kNN pattern classifiers can be reduced by editing techniques [8], but how the use of these methods affects performance of kNN-based familiarity discrimination methods is an open question.

Familiarity Discrimination of Radar Pulses

method            ARTMAP-FD                 NEN
                                k=1    k=5    k=99     k=1    k=5
dataset           3x3    6x6    3x3    3x3    3x3      6x6    6x6
hit rate          0.81   0.77   0.11   0.11   0.11     0.14   0.14
false alarm rate  0.11   0.24   0.00   0.00   0.00     0.00   0.00
accuracy          0.95   0.93   1.00   1.00   1.00     1.00   1.00
memory            12     88     1260                   5670

Table 3: Familiarity discrimination of simulated radar range profiles using ARTMAP-FD and NEN with different values of k. Training and testing as in Table 1. ARTMAP-FD familiarity threshold determined by on-line method. City-block metric used with NEN; results with Euclidean metric were slightly poorer.

This research was supported in part by grants from the Office of Naval Research, ONR N00014-95-1-0657 (S. G.) and ONR N00014-96-1-0659 (M. A. R., W. W. S.), and by a grant from the Defense Advanced Research Projects Agency and the Office of Naval Research, ONR N00014-95-1-0409 (S. G., M. A. R., W. W. S.). E. G.
was supported in part by the Defense Research Establishment Ottawa and the Natural Sciences and Engineering Research Council of Canada.

References

[1] Carpenter, G. A., Rubin, M. A., & Streilein, W. W., ARTMAP-FD: Familiarity discrimination applied to radar target recognition, in ICNN'97: Proceedings of the IEEE International Conference on Neural Networks, Houston, June 1997.
[2] Carpenter, G. A., Rubin, M. A., & Streilein, W. W., Threshold determination for ARTMAP-FD familiarity discrimination, in C. H. Dagli et al., eds., Intelligent Engineering Systems Through Artificial Neural Networks, 1, 23-28, ASME, New York, 1997.
[3] Carpenter, G. A., Grossberg, S., Markuzon, N., Reynolds, J. H., & Rosen, D. B., Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps, IEEE Transactions on Neural Networks, 3, 698-713, 1992.
[4] Carpenter, G. A., Grossberg, S., & Rosen, D. B., Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system, Neural Networks, 4, 759-771, 1991.
[5] Carpenter, G. A., & Ross, W. D., ART-EMAP: A neural network architecture for object recognition by evidence accumulation, IEEE Transactions on Neural Networks, 6, 805-818, 1995.
[6] Rubin, M. A., Application of fuzzy ARTMAP and ART-EMAP to automatic target recognition using radar range profiles, Neural Networks, 8, 1109-1116, 1995.
[7] Dasarathy, B. V., Is your nearest neighbor near enough a neighbor?, in Lainiotis, D. G. and Tzannes, N. S., eds., Applications and Research in Information Systems and Sciences, 1, 114-117, Hemisphere Publishing Corp., Washington, 1977.
[8] Dasarathy, B. V., ed., Nearest Neighbor (NN) Norm: NN Pattern Classification Techniques, IEEE Computer Society Press, Los Alamitos, CA, 1991.
[9] Granger, E., Savaria, Y., Lavoie, P., & Cantin, M.-A., A comparison of self-organizing neural networks for fast clustering of radar pulses, Signal Processing, 64, 249-269, 1998.
|
1998
|
33
|
1,530
|
Graphical Models for Recognizing Human Interactions Nuria M. Oliver, Barbara Rosario and Alex Pentland 20 Ames Street, E15-384C, Media Arts and Sciences Laboratory, MIT Cambridge, MA 02139 {nuria, rosario, sandy}@media.mit.edu

Abstract

We describe a real-time computer vision and machine learning system for modeling and recognizing human actions and interactions. Two different domains are explored: recognition of two-handed motions in the martial art 'Tai Chi', and multiple-person interactions in a visual surveillance task. Our system combines top-down with bottom-up information using a feedback loop, and is formulated with a Bayesian framework. Two different graphical models (HMMs and Coupled HMMs) are used for modeling both individual actions and multiple-agent interactions, and CHMMs are shown to work more efficiently and accurately for a given amount of training. Finally, to overcome the limited amounts of training data, we demonstrate that 'synthetic agents' (Alife-style agents) can be used to develop flexible prior models of the person-to-person interactions.

1 INTRODUCTION

We describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in two different scenarios: (1) complex, two-handed action recognition in the martial art of Tai Chi and (2) detection and recognition of individual human behaviors and multiple-person interactions in a visual surveillance task. In the latter case, the system is particularly concerned with detecting when interactions between people occur, and classifying them. Graphical models, such as Hidden Markov Models (HMMs) [6] and Coupled Hidden Markov Models (CHMMs) [3, 2], seem appropriate for modeling and classifying human behaviors because they offer dynamic time warping, a well-understood training algorithm, and a clear Bayesian semantics for both individual (HMMs) and interacting or coupled (CHMMs) generative processes.
A major problem with this data-driven statistical approach, especially when modeling rare or anomalous behaviors, is the limited number of training examples. A major emphasis of our work, therefore, is on efficient Bayesian integration of prior knowledge with evidence from data. We will show that for situations involving multiple independent (or partially independent) agents the Coupled HMM approach generates much better results than traditional HMM methods. In addition, we have developed a synthetic agent or Alife modeling environment for building and training flexible a priori models of various behaviors using software agents. Simulation with these software agents yields synthetic data that can be used to train prior models. These prior models can then be used recursively in a Bayesian framework to fit real behavioral data. This synthetic agent approach is a straightforward and flexible method for developing prior models, one that does not require strong analytical assumptions to be made about the form of the priors¹. In addition, it has allowed us to develop robust models even when there are only a few examples of some target behaviors. In our experiments we have found that by combining such synthetic priors with limited real data we can easily achieve very high accuracies at recognition of different human-to-human interactions. The paper is structured as follows: section 2 presents an overview of the system; the statistical models used for behavior modeling and recognition are described in section 3. Section 4 contains experimental results in two different real situations. Finally section 5 summarizes the main conclusions and our future lines of research.

2 VISUAL INPUT

We have experimented using two different types of visual input.
The first is a real-time, self-calibrating 3-D stereo blob tracker (used for the Tai Chi scenario) [1], and the second is a real-time blob-tracking system [5] (used in the visual surveillance task). In both cases an Extended Kalman filter (EKF) tracks the blobs' location, coarse shape, color pattern, and velocity. This information is represented as a low-dimensional, parametric probability distribution function (PDF) composed of a mixture of Gaussians, whose parameters (sufficient statistics and mixing weights for each of the components) are estimated using Expectation Maximization (EM). This visual input module detects and tracks moving objects (body parts in Tai Chi, pedestrians in the visual surveillance task) and outputs a feature vector describing their motion, heading, and spatial relationship to all nearby moving objects. These output feature vectors constitute the temporally ordered stream of data input to our stochastic state-based behavior models. Both HMMs and CHMMs, with varying structures depending on the complexity of the behavior, are used for classifying the observed behaviors. Both top-down and bottom-up flows of information are continuously managed and combined for each moving object within the scene. The Bayesian graphical models offer a mathematical framework for combining the observations (bottom-up) with complex behavioral priors (top-down) to provide expectations that will be fed back to the input visual system.

3 VISUAL UNDERSTANDING VIA GRAPHICAL MODELS: HMMs and CHMMs

Statistical directed acyclic graphs (DAGs) or probabilistic inference networks (PINs hereafter) can provide a computationally efficient solution to the problem of time series analysis and modeling. HMMs and some of their extensions, in particular CHMMs, can be viewed as a particular and simple case of temporal PIN or DAG. Graphically, Markov models are often depicted 'rolled-out in time' as Probabilistic Inference Networks, such as in figure 1.
PINs present important advantages that are relevant to our problem: they can handle incomplete data as well as uncertainty; they are trainable, and overfitting is easier to avoid; they encode causality in a natural way; there are algorithms for both prediction and probabilistic inference; they offer a framework for combining prior knowledge and data; and finally they are modular and parallelizable. Traditional HMMs offer a probabilistic framework for modeling processes that have structure in time. They offer clear Bayesian semantics, efficient algorithms for state and parameter estimation, and they automatically perform dynamic time warping. An HMM is essentially a quantization of a system's configuration space into a small number of discrete states, together with probabilities for transitions between states. A single finite discrete variable indexes the current state of the system. Any information about the history of the process needed for future inferences must be reflected in the current value of this state variable. However many interesting real-life problems are composed of multiple interacting processes, and thus merit a compositional representation of two or more variables. This is typically the case for systems that have structure both in time and space. With a single state variable, Markov models are ill-suited to these problems. In order to model these interactions a more complex architecture is needed. Extensions to the basic Markov model generally increase the memory of the system (durational modeling), providing it with compositional state in time.

¹Note that our priors have the same form as our posteriors, namely, they are graphical models.

Figure 1: Graphical representation of a HMM and a CHMM rolled-out in time.
We are interested in systems that have compositional state in space, e.g., more than one simultaneous state variable. It is well known that the exact solution of extensions of the basic HMM to 3 or more chains is intractable. In those cases approximation techniques are needed ([7, 4, 8, 9]). However, it is also known that there exists an exact solution for the case of 2 interacting chains, as is our case here [7, 2]. We therefore use Coupled Hidden Markov Models (CHMMs) for modeling two interacting processes, whether they are separate body parts or individual humans. In this architecture state chains are coupled via matrices of conditional probabilities modeling causal (temporal) influences between their hidden state variables. The graphical representation of CHMMs is shown in figure 1. From the graph it can be seen that for each chain, the state at time t depends on the state at time t-1 in both chains. The influence of one chain on the other is through a causal link. In this paper we compare performance of HMMs and CHMMs for maximum a posteriori (MAP) state estimation. We compute the most likely sequence of states S within a model given the observation sequence O = {o_1, ..., o_n}. This most likely sequence is obtained by S = argmax_S P(S|O). In the case of HMMs the posterior state sequence probability P(S|O) is given by

P(S|O) = P_{s_1} p_{s_1}(o_1) ∏_{t=2}^{T} p_{s_t}(o_t) P_{s_t|s_{t-1}}    (1)

where S = {a_1, ..., a_N} is the set of discrete states, and s_t ∈ S corresponds to the state at time t. P_{i|j} ≡ P_{s_t=a_i|s_{t-1}=a_j} is the state-to-state transition probability (i.e. the probability of being in state a_i at time t given that the system was in state a_j at time t-1). In the following we will write them as P_{s_t|s_{t-1}}. P_i ≡ P_{s_1=a_i} are the prior probabilities for the initial state. Finally p_i(o_t) ≡ p_{s_t=a_i}(o_t) = p_{s_t}(o_t) are the output probabilities for each state².
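For a given candidate state sequence, the product in eqn. (1) can be evaluated directly. A sketch, with the assumed array conventions obs_probs[t, i] = p_i(o_t) and trans[i, j] = P(s_t = a_i | s_{t-1} = a_j):

```python
import numpy as np

def hmm_sequence_score(states, obs_probs, prior, trans):
    """Score a candidate hidden state sequence per eqn. (1):
    P_{s1} p_{s1}(o1) * prod_{t>=2} p_{st}(ot) P_{st|st-1}."""
    p = prior[states[0]] * obs_probs[0, states[0]]
    for t in range(1, len(states)):
        p *= trans[states[t], states[t - 1]] * obs_probs[t, states[t]]
    return p
```

In practice the argmax over sequences is found with Viterbi-style dynamic programming rather than enumeration; this scorer just makes the product in eqn. (1) concrete.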
For CHMMs we need to introduce another set of probabilities, P_{s_t|s'_{t-1}}, which correspond to the probability of state s_t at time t in one chain given that the other chain (denoted hereafter by superscript ′) was in state s'_{t-1} at time t-1. These new probabilities express the causal influence (coupling) of one chain on the other. The posterior state probability for CHMMs is expressed as

P(S|O) = [P_{s_1} p_{s_1}(o_1) P_{s'_1} p_{s'_1}(o'_1) / P(O)] ∏_{t=2}^{T} P_{s_t|s_{t-1}} P_{s'_t|s'_{t-1}} P_{s_t|s'_{t-1}} P_{s'_t|s_{t-1}} p_{s_t}(o_t) p_{s'_t}(o'_t)    (2)

where s_t, s'_t and o_t, o'_t denote states and observations for each of the Markov chains that compose the CHMMs. In [2] a deterministic approximation for maximum a posteriori (MAP) state estimation is introduced. It enables fast classification and parameter estimation via EM, and also obtains an upper bound on the cross entropy with the full (combinatoric) posterior which can be minimized using a subspace that is linear in the number of state variables. An "N-heads" dynamic programming algorithm samples from the O(N) highest probability paths through a compacted state trellis, with complexity O(T(CN)²) for C chains of N states apiece observing T data points. The cartesian product equivalent HMM would involve a combinatoric number of states, typically requiring O(T N^{2C}) computations. We are particularly interested in efficient, compact algorithms that can perform in real-time.

²The output probability is the probability of observing o_t given state a_i at time t.

4 EXPERIMENTAL RESULTS

Our first experiment is with a version of Tai Chi Ch'uan (a Chinese martial and meditative art) that is practiced while sitting. Using our self-calibrating 3-D stereo blob tracker [1], we obtained 3D hand tracking data for three Tai Chi gestures involving two semi-independent arm motions: the left single whip, the left cobra, and the left brush knee. Figure 4 illustrates one of the gestures and the blob-tracking.
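Returning to eqn. (2): the coupled posterior for a pair of candidate state sequences can be scored the same way as in the HMM case. A sketch in which the 1/P(O) normalization is omitted and the array conventions are our assumptions:

```python
import numpy as np

def chmm_sequence_score(s, sp, obs, obsp, prior, priorp, A, Ap, C, Cp):
    """Unnormalized score of a pair of state sequences under a CHMM,
    following eqn. (2).
    A[i, j]  = P(s_t = i  | s_{t-1} = j)     within-chain transitions
    Ap[i, j] = P(s'_t = i | s'_{t-1} = j)
    C[i, j]  = P(s_t = i  | s'_{t-1} = j)    cross-chain couplings
    Cp[i, j] = P(s'_t = i | s_{t-1} = j)
    obs[t, i] = p_i(o_t) for the first chain, obsp likewise for the second."""
    p = prior[s[0]] * obs[0, s[0]] * priorp[sp[0]] * obsp[0, sp[0]]
    for t in range(1, len(s)):
        p *= (A[s[t], s[t - 1]] * Ap[sp[t], sp[t - 1]]
              * C[s[t], sp[t - 1]] * Cp[sp[t], s[t - 1]]
              * obs[t, s[t]] * obsp[t, sp[t]])
    return p
```

The four transition factors per time step are exactly the within-chain and cross-chain terms of eqn. (2).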
A detailed description of this set of Tai Chi experimental results can be found in [3] and viewed at http://nuria.www.media.mit.edu/~nuria/chmm/taichi.html.

Figure 2: Selected frames from 'left brush knee.'

We collected 52 sequences, roughly 17 of each gesture, and created a feature vector consisting of the 3-D (x, y, z) centroid (mean position) of each of the blobs that characterize the hands. The resulting six-dimensional time series was used for training both HMMs and CHMMs. We used the best trained HMMs and CHMMs, selected by 10-fold cross-validation, to classify the full data set of 52 gestures. The Viterbi algorithm was used to find the maximum likelihood model for HMMs and CHMMs. Two-thirds of the testing data had not been seen in training, including gestures performed at varying speeds and from slightly different views. It can be seen from the classification accuracies, shown in Table 1, that the CHMMs outperform the HMMs. This difference is not due to intrinsic modeling power, however; from earlier experiments we know that when a large number of training samples is available then HMMs can reach similar accuracies. We conclude thus that for data where there are two partially-independent processes (e.g., coordinated but not exactly linked), the CHMM method requires much less training to achieve a high classification accuracy. Table 1 illustrates the source of this training advantage. The numbers between parentheses correspond to the number of degrees of freedom in the largest best-scoring model: state-to-state probabilities + output means + output covariances.

Recognition Results on Tai Chi Gestures
            Single HMMs           Coupled HMMs (CHMMs)
Accuracy    69.2% (25+30+180)     100% (27+18+54)

Table 1: Recognition accuracies for HMMs and CHMMs on Tai Chi gestures. The expressions between parentheses correspond to the number of parameters of the largest best-scoring model.
The conventional HMM has a large number of covariance parameters because it has a 6-D output variable; whereas the CHMM architecture has two 3-D output variables. In consequence, due to their larger dimensionality HMMs need much more training data than equivalent CHMMs before yielding good generalization results. Our second experiment was with a pedestrian video surveillance task³; the goal was first to recognize typical pedestrian behaviors in an open plaza (e.g., walk from A to B, run from C to D), and second to recognize interactions between the pedestrians (e.g., person X greets person Y). The task is to reliably and robustly detect and track the pedestrians in the scene. We use in this case 2-D blob features for modeling each pedestrian. In our system one of the main cues for clustering the pixels into blobs is motion, because we have a static background with moving objects. To detect these moving objects we build an eigenspace that models the background. Depending on the dynamics of the background scene the system can adaptively relearn the eigenbackground to compensate for changes such as big shadows. The trajectories of each blob are computed and saved into a dynamic track memory. Each trajectory has associated a first-order EKF that predicts the blob's position and velocity in the next frame. As before, the appearance of each blob is modeled by a Gaussian PDF in RGB color space, allowing us to handle occlusions.

Figure 3: Typical image from the pedestrian plaza: background mean image, input image with blob bounding boxes, and blob segmentation image.

The behaviors we examine are generated by pedestrians walking in an open outdoor environment.
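The eigenbackground idea (model the background with a PCA eigenspace and flag pixels with large reconstruction error as moving objects) can be sketched as follows. The component count, thresholding, and function names are assumptions, not the paper's implementation:

```python
import numpy as np

def learn_eigenbackground(frames, n_components=3):
    """PCA on a stack of frames (each flattened to a vector).
    Returns the mean image and the top eigenvectors of the background."""
    X = frames.reshape(len(frames), -1).astype(float)
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def foreground_mask(frame, mu, basis, thresh):
    """Project a frame onto the eigenbackground, reconstruct it, and flag
    pixels whose reconstruction error exceeds thresh as foreground."""
    x = frame.reshape(-1).astype(float) - mu
    recon = basis.T @ (basis @ x)           # projection onto the eigenspace
    return (np.abs(x - recon) > thresh).reshape(frame.shape)
```

Relearning the eigenbackground for slow scene changes (e.g., big shadows) amounts to refitting the PCA on recent frames.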
Our goal is to develop a generic, compositional analysis of the observed behaviors in terms of states and transitions between states over time in such a manner that (1) the states correspond to our common sense notions of human behaviors, and (2) they are immediately applicable to a wide range of sites and viewing situations. Figure 3 shows a typical image for our pedestrian scenario, the pedestrians found, and the final segmentation. Two people (each modeled as its own generative process) may interact without wholly determining each others' behavior. Instead, each of them has its own internal dynamics and is influenced (either weakly or strongly) by others. The probabilities P_{s_t|s'_{t-1}} and P_{s'_t|s_{t-1}} from equation 2 describe this kind of interaction, and CHMMs are intended to model them in as efficient a manner as is possible. We would like to have a system that will accurately interpret behaviors and interactions within almost any pedestrian scene with at most minimal training. As we have already mentioned, one critical problem is the generation of models that capture our prior knowledge about human behavior. To achieve this goal we have developed a modeling environment that uses synthetic agents to mimic pedestrian behavior in a virtual environment. The agents can be assigned different behaviors and they can interact with each other as well. Currently they can generate 5 different interacting behaviors and various kinds of individual behaviors (with no interaction). These behaviors are: following, meet and walk together (inter1); approach, meet and go on separately (inter2) or go on together (inter3); change direction in order to meet, approach, meet and continue together (inter4) or go on separately (inter5).

³Further information about this system can be found at http://www.vismod.www.media.mit.edu/~nuria/humanBehavior/humanBehavior.html
The parameters of this virtual environment are modeled using data drawn from a 'generic' set of real scenes. By training the models of the synthetic agents to have good generalization and invariance properties, we can obtain flexible prior models for use when learning the human behavior models from real scenes. Thus the synthetic prior models allow us to learn robust behavior models from a small number of real behavior examples. This capability is of special importance in a visual surveillance task, where typically the behaviors of greatest interest are also the rarest. To test our behavior modeling in the pedestrian scenario, we first used the detection and tracking system previously described to obtain 2-D blob features for each person in several hours of video. More than 20 examples of following and the two first types of meeting behaviors were detected and processed. CHMMs were then used for modeling three different behaviors: following, meet and continue together, and meet and go on separately. Furthermore, an interaction versus no interaction detection test was also performed (HMMs performed so poorly at this task that their results are not reported). In addition to velocity, heading, and position, the feature vectors consisted of the derivative of the relative distance between two agents, their degree of alignment (dot product of their velocity vectors), and the magnitude of the difference in their velocity vectors. We tested on this video data using models trained with two types of data: (1) 'Prior-only models', that is, models learned entirely from our synthetic-agents environment and then applied directly to the real data with no additional training or tuning of the parameters; and (2) 'Posterior models', or prior-plus-real-data behavior models trained by starting with the prior-only model and then 'tuning' the models with data from this specific site, using eight examples of each type of interaction.
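The pairwise features just listed can be computed directly from blob positions and velocities. A sketch (function and variable names are ours):

```python
import numpy as np

def interaction_features(p1, v1, p2, v2):
    """Pairwise interaction features of the kind described in the text:
    d_dot : derivative of the relative distance between the two agents
    align : degree of alignment (dot product of the velocity vectors)
    dv    : magnitude of the difference in the velocity vectors."""
    rel = p1 - p2
    dist = np.linalg.norm(rel)
    # d/dt |p1 - p2| = (p1 - p2) . (v1 - v2) / |p1 - p2|
    d_dot = float(rel @ (v1 - v2) / dist) if dist > 0 else 0.0
    align = float(v1 @ v2)
    dv = float(np.linalg.norm(v1 - v2))
    return d_dot, align, dv
```

A negative d_dot with negative alignment, for instance, corresponds to two agents closing on each other head-on, as in the "approach and meet" behaviors.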
Recognition accuracies for both these 'prior' and 'posterior' CHMMs are summarized in Table 2. It is noteworthy that with only 8 training examples, the recognition accuracy on the remaining data could be raised to 100%. This demonstrates the ability to accomplish extremely rapid refinement of our behavior models from the initial a priori models.

Accuracy on Real Pedestrian Data (%)
                      No-inter  Inter1  Inter2  Inter3
(a) Prior CHMMs       90.9      100     93.7    100
(b) Posterior CHMMs   100       100     100     100

Table 2: Accuracies on real pedestrian data: (a) only a priori models, (b) posterior models (with site-specific training).

In a visual surveillance system the false alarm rate is often as important as the classification accuracy⁴. To analyze this aspect of our system's performance, we calculated the system's ROC curve. For accuracies of 95% the false alarm rate was less than 0.01.

⁴In an ideal automatic surveillance system, all the targeted behaviors should be detected with a close-to-zero false alarm rate, so that we can reasonably alert a human operator to examine them further.

5 SUMMARY, CONCLUSIONS AND FUTURE WORK

In this paper we have described a computer vision system and a mathematical modeling framework for recognizing different human behaviors and interactions in two different real domains: human actions in the martial art of Tai Chi and human interactions in a visual surveillance task. Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach. Two different state-based statistical learning architectures, namely HMMs and CHMMs, have been proposed and compared for modeling behaviors and interactions. The superiority of the CHMM formulation has been demonstrated in terms of both training efficiency and classification accuracy.
A synthetic agent training system has been created in order to develop flexible prior behavior models, and we have demonstrated the ability to use these prior models to accurately classify real behaviors with no additional training on real data. This fact is especially important, given the limited amount of training data available. Future directions under current investigation include: extending our agent interactions to more than two interacting processes; developing a hierarchical system where complex behaviors are expressed in terms of simpler behaviors; automatic discovery and modeling of new behaviors (both structure and parameters); automatic determination of priors, their evaluation and interpretation; developing an attentional mechanism with a foveated camera along with a more detailed representation of the behaviors; evaluating the adaptability of off-line learned behavior structures to different real situations; and exploring a sampling approach for recognizing behaviors by sampling the interactions generated by our synthetic agents.

Acknowledgments

Sincere thanks to Michael Jordan, Tony Jebara and Matthew Brand for their inestimable help.

References

1. A. Azarbayejani and A. Pentland. Real-time self-calibrating stereo person-tracker using 3-D shape estimation from blob features. In Proceedings, International Conference on Pattern Recognition, Vienna, August 1996. IEEE.
2. M. Brand. Coupled hidden Markov models for modeling interacting processes. November 1996. Submitted to Neural Computation.
3. M. Brand and N. Oliver. Coupled hidden Markov models for complex action recognition. In Proceedings of IEEE CVPR97, 1996.
4. Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. In D. S. Touretzky, M. C. Mozer, and M. Hasselmo, editors, NIPS, volume 8, Cambridge, MA, 1996. MIT Press.
5. N. Oliver, B. Rosario, and A. Pentland. Statistical modeling of human behaviors. In Proceedings of CVPR98, Perception of Action Workshop, 1998.
6. L. R.
Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-285, 1989.
7. L. K. Saul and M. I. Jordan. Boltzmann chains and hidden Markov models. In G. Tesauro, D. S. Touretzky, and T. Leen, editors, NIPS, volume 7, Cambridge, MA, 1995. MIT Press.
8. P. Smyth, D. Heckerman, and M. Jordan. Probabilistic independence networks for hidden Markov probability models. AI memo 1565, MIT, Cambridge, MA, Feb 1996.
9. C. Williams and G. E. Hinton. Mean field networks that learn to discriminate temporally distorted strings. In Proceedings, Connectionist Models Summer School, pages 18-22, San Mateo, CA, 1990. Morgan Kaufmann.
|
1998
|
34
|
1,531
|
An entropic estimator for structure discovery Matthew Brand Mitsubishi Electric Research Laboratories, 201 Broadway, Cambridge MA 02139 brand@merl.com

Abstract

We introduce a novel framework for simultaneous structure and parameter learning in hidden-variable conditional probability models, based on an entropic prior and a solution for its maximum a posteriori (MAP) estimator. The MAP estimate minimizes uncertainty in all respects: cross-entropy between model and data; entropy of the model; entropy of the data's descriptive statistics. Iterative estimation extinguishes weakly supported parameters, compressing and sparsifying the model. Trimming operators accelerate this process by removing excess parameters and, unlike most pruning schemes, guarantee an increase in posterior probability. Entropic estimation takes an overcomplete random model and simplifies it, inducing the structure of relations between hidden and observed variables. Applied to hidden Markov models (HMMs), it finds a concise finite-state machine representing the hidden structure of a signal. We entropically model music, handwriting, and video time-series, and show that the resulting models are highly concise, structured, predictive, and interpretable: Surviving states tend to be highly correlated with meaningful partitions of the data, while surviving transitions provide a low-perplexity model of the signal dynamics.

1. An entropic prior

In entropic estimation we seek to maximize the information content of parameters. For conditional probabilities, parameter values near chance add virtually no information to the model, and are therefore wasted degrees of freedom. In contrast, parameters near the extrema {0, 1} are informative because they impose strong constraints on the class of signals accepted by the model. In Bayesian terms, our prior should assert that parameters that do not reduce uncertainty are improbable.
We can capture this intuition in a surprisingly simple form: For a model of N conditional probabilities θ = {θ_1, ..., θ_N} we write

P_e(θ) ∝ e^{-H(θ)} = ∏_i θ_i^{θ_i}    (1)

whence we can see that the prior measures a model's freedom from ambiguity (H(θ) is an entropy measure). Applying P_e(·) to a multinomial yields the posterior

P_e(θ|ω) ∝ P(ω|θ) P_e(θ) / P(ω) ∝ [∏_i θ_i^{ω_i}] P_e(θ) / P(ω) ∝ ∏_i θ_i^{θ_i + ω_i}    (2)

where ω_i is evidence for event type i. With extensive evidence this distribution converges to "fair" (ML) odds for ω, but with scant evidence it skews to stronger odds.

1.1 MAP estimator

To obtain MAP estimates we set the derivative of the log-posterior to zero, using Lagrange multipliers to ensure Σ_i θ_i = 1:

0 = ω_i/θ_i + 1 + log θ_i + λ    (3)

We obtain θ_i by working backward from the Lambert W function, a multi-valued inverse function satisfying W(x) e^{W(x)} = x. Taking logarithms and setting y = log x,

0 = -W(x) - log W(x) + log x
  = -W(e^y) - log W(e^y) + y
  = -1/(1/W(e^y)) + log(1/W(e^y)) + log z + y - log z
  = -z/(z/W(e^y)) + log(z/W(e^y)) + y - log z    (4)

Setting θ_i = z/W(e^y), y = 1 + λ + log z, and z = -ω_i, eqn. 4 simplifies to eqn. 3, implying

θ_i = -ω_i / W(-ω_i e^{1+λ})    (5)

Equations 3 and 5 together yield a quickly converging fix-point equation for λ and therefore for the entropic MAP estimate. Solutions lie in the W_{-1} branch of Lambert's function. See [Brand, 1997] for methods we developed to calculate the little-known W function.

1.2 Interpretation

The negated log-posterior is equivalent to a sum of entropies:

H(θ) + D(ω‖θ) + H(ω)    (6)

Maximizing P_e(θ|ω) minimizes entropy in all respects: the parameter entropy H(θ); the cross-entropy D(ω‖θ) between the parameters θ and the data's descriptive statistics ω; and the entropy of those statistics H(ω), which are calculated relative to the structure of the model. Equivalently, the MAP estimator minimizes the expected coding length, making it a maximally efficient compressor of messages consisting of the model and the data coded relative to the model.
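Equations 3 and 5 can be iterated as described. A sketch in which the λ update, the ML initialization, and a Newton-method stand-in for the W_{-1} branch (in place of the methods of [Brand, 1997]) are our assumptions:

```python
import numpy as np

def lambert_w_m1(x, n_iter=50):
    """W_{-1} branch of Lambert's W on (-1/e, 0), via Newton's method.
    (scipy.special.lambertw with k=-1 would do the same job.)"""
    L1 = np.log(-x)
    w = L1 - np.log(-L1)            # standard asymptotic initializer
    for _ in range(n_iter):
        ew = np.exp(w)
        w = w - (w * ew - x) / (ew * (1.0 + w))
    return w

def entropic_map(omega, n_iter=100):
    """Fix-point iteration for the entropic MAP multinomial estimate:
    theta_i = -omega_i / W(-omega_i e^{1+lambda}) per eqn. (5), with
    lambda re-estimated from eqn. (3) and theta renormalized each pass."""
    omega = np.asarray(omega, dtype=float)
    theta = omega / omega.sum()     # start from the ML estimate
    for _ in range(n_iter):
        lam = -np.mean(omega / theta + 1.0 + np.log(theta))      # eqn. (3)
        theta = -omega / lambert_w_m1(-omega * np.exp(1.0 + lam))  # eqn. (5)
        theta /= theta.sum()
    return theta
```

On symmetric evidence the estimate stays at fair odds; on asymmetric evidence it is sharper than the ML estimate, as the text's discussion of the prior's bias toward stronger odds predicts.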
Since compression involves separating essential from accidental structure, this can be understood as a form of noise removal. Noise inflates the apparent entropy of a sampled process; this systematically biases maximum likelihood (ML) estimates toward weaker odds, more so in smaller samples. Consequently, the entropic prior is a countervailing bias toward stronger odds.

1.3 Model trimming

Because the prior rewards sparse models, it is possible to remove weakly supported parameters from the model while improving its posterior probability, such that $P_e(\theta \backslash \theta_i | X) > P_e(\theta | X)$. This stands in contrast to most pruning schemes, which typically try to minimize damage to the posterior. Expanding via Bayes' rule and taking logarithms we obtain (7) where $h_i(\theta_i)$ is the entropy due to $\theta_i$. For small $\theta_i$, we can approximate via differentials:

$\theta_i \frac{\partial H(\theta)}{\partial\theta_i} > \theta_i \frac{\partial \log P(X|\theta)}{\partial\theta_i}.$  (8)

By mixing the left- and right-hand sides of equations 7 and 8, we can easily identify trimmable parameters, namely those that contribute more to the entropy than to the log-likelihood. E.g., for multinomials we set $h_i(\theta_i) = -\theta_i \log\theta_i$ against the r.h.s. of eqn. 8 and simplify to obtain

$\theta_i < \exp\left[-\frac{\partial \log P(X|\theta)}{\partial\theta_i}\right].$  (9)

Parameters can be trimmed at any time during training; at convergence trimming can bump the model out of a local probability maximum, allowing further training in a lower-dimensional and possibly smoother parameter subspace.

2 Entropic HMM training and trimming

In entropic estimation of HMM transition probabilities, we follow the conventional E-step, calculating the probability mass for each transition to be used as evidence $\omega$:

$\gamma_{j,i} = \sum_{t=1}^{T-1} \alpha_j(t)\, P_{i|j}\, P_i(x_{t+1})\, \beta_i(t+1)$  (10)

where $P_{i|j}$ is the current estimate of the transition probability from state $j$ to state $i$; $P_i(x_{t+1})$ is the output probability of observation $x_{t+1}$ given state $i$, and $\alpha$, $\beta$ are obtained from forward-backward analysis and follow the notation of Rabiner [1989].
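For the multinomial case, the trimming test of eqn. 9 can be sketched as follows (the function name is ours; we assume the multinomial log-likelihood gradient ∂log P(X|θ)/∂θ_i = ω_i/θ_i, which follows from the likelihood in eqn. 2, and θ_i > 0):

```python
import math

def trimmable(theta, omega):
    # A parameter is trimmable when its entropy contribution -theta_i*log(theta_i)
    # exceeds its log-likelihood contribution theta_i * dlogP/dtheta_i; with the
    # multinomial gradient dlogP/dtheta_i = omega_i/theta_i this reduces to
    #     theta_i < exp(-omega_i / theta_i)            (eqn. 9)
    return [t < math.exp(-w / t) for t, w in zip(theta, omega)]
```

A parameter with little supporting evidence (ω_i near 0) is flagged even when θ_i is small, whereas parameters whose evidence matches their probability mass survive.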
For the M-step, we calculate new estimates $\{P_{i|j}\}_i = \theta$ by applying the MAP estimator in §1.1 to each $\omega = \{\gamma_{j,i}\}_i$. That is, $\omega$ is a vector of the evidence for each kind of transition out of a single state; from this evidence the MAP estimator calculates probabilities $\theta$. (In Baum-Welch re-estimation, the maximum-likelihood estimator simply sets $P_{i|j} = \gamma_{j,i} / \sum_i \gamma_{j,i}$.) In iterative estimation, e.g., expectation-maximization (EM), the entropic estimator drives weakly supported parameters toward zero, skeletonizing the model and concentrating evidence on surviving parameters until their estimates converge to near the ML estimate. Trimming appears to accelerate this process by allowing slowly dying parameters to leapfrog to extinction. It also averts numerical underflow errors. For HMM transition parameters, the trimming criterion of eqn. 9 becomes (11) where $\gamma_j(t)$ is the probability of state $j$ at time $t$. The multinomial output distributions of a discrete-output HMM can be entropically re-estimated and trimmed in the same manner.

[Plot: Entropic versus ML HMM models of Bach chorales; x-axis: states at initialization.]

Figure 1: Left: Sparsification, classification, and prediction superiority of entropically estimated HMMs modeling Bach chorales. Lines indicate mean performance over 10 trials; error bars are 2 standard deviations. Right: High-probability states and subgraphs of interest from an entropically estimated 35-state chorale HMM. Tones output by each state are listed in order of probability. Extraneous arcs have been removed for clarity.

3 Structure learning experiments

To explore the practical utility of this framework, we will use entropically estimated HMMs as a window into the hidden structure of some human-generated time-series. Bach Chorales: We obtained a dataset of melodic lines from 100 of J.S.
Bach's 371 surviving chorales from the UCI repository [Merz and Murphy, 1998], and transposed all into the key of C. We compared entropically and conventionally estimated HMMs in prediction and classification tasks, training both from identical random initial conditions and trying a variety of different initial state-counts. We trained with 90 chorales and tested with the remaining 10. In ten trials, all chorales were rotated into the test set. Figure 1 illustrates that despite substantial loss of parameters to sparsification, the entropically estimated HMMs were, on average, better predictors of notes. (Each test sequence was truncated to a random length and the HMMs were used to predict the first missing note.) They also were better at discriminating between test chorales and temporally reversed test chorales; this is challenging because Bach famously employed melodic reversal as a compositional device. With larger models, parameter-trimming became state-trimming: An average of 1.6 states were "pinched off" the 35-state models when all incoming transitions were deleted. While the conventionally estimated HMMs were wholly uninterpretable, in the entropically estimated HMMs one can discern several basic musical structures (figure 1, right), including self-transitioning states that output only tonic (C-E-G) or dominant (G-B-D) triads, lower- or upper-register diatonic tones (C-D-E or F-G-A-B), and mordents (A-G-A). We also found chordal state sequences (F-A-C) and states that lead to the tonic (C) via the mediant (E) or the leading tone (B). Handwriting: We used 2D Gaussian-output HMMs to analyze handwriting data. Training data, obtained from the UNIPEN web site [Reynolds, 1992], consisted of sequences of normalized pen-position coordinates taken at 5 msec intervals from 10 different individuals writing the digits 0-9. The HMMs were estimated from identical data and initial conditions (random upper-diagonal transition matrices; random output parameters).
The diagrams in Figure 2 depict transition graphs of two HMMs modeling the pen-strokes for the digit "5," mapped onto the data. Ellipses indicate each state's output probability iso-contours (receptive field); X's and arcs indicate state dwell and transition probabilities, respectively, by their thicknesses. Entropic estimation induces an interpretable automaton that captures essential structure and timing of the pen-strokes. 50 of the 80 original transition parameters were trimmed.

[Figure panels: a. conventional and b. entropic state machines; c. conventional (confusion matrix with 93.0% accuracy) and d. entropic (confusion matrix with 96.0% accuracy).]

Figure 2: (a & b): State machines of conventionally and entropically estimated hidden Markov models of writing "5." (c & d): Confusion matrices for all digits.

Estimation without the entropic prior results in a wholly opaque model, in which none of the original dynamical parameters were trimmed. Model concision leads to better classification; the confusion matrices show cumulative classification error over ten trials with random initializations. Inspection of the parameters for the model in 2b showed that all writers began in states 1 or 2. From there it is possible to follow the state diagram to reconstruct the possible sequences of pen-strokes: Some writers start with the cap (state 1) while others start with the vertical (state 2); all loop through states 3-8 and some return to the top (via state 10) to add a horizontal (state 12) or diagonal (state 11) cap. Office activity: Here we demonstrate a model of human activity learned from medium- to long-term ambient video. By activity, we mean spatio-temporal patterns in the pose, position, and movement of one's body. To make the vision tractable, we consider the activity of a single person in a relatively stable visual environment, namely, an office.
We track the gross shape and position of the office occupant by segmenting each image into foreground and background pixels. Foreground pixels are identified with reference to an acquired statistical model of the background texture and camera noise. Their ensemble properties such as motion or color are modeled via adaptive multivariate Gaussian distributions, re-estimated in each frame. A single bivariate Gaussian is fitted to the foreground pixels and we record the associated ellipse parameters [mean_x, mean_y, ∆mean_x, ∆mean_y, mass, ∆mass, elongation, eccentricity]. Sequences of these observation vectors are used to train and test the HMMs. Approximately 30 minutes of data were taken at 5 Hz from an SGI IndyCam. Data was collected automatically and at random over several days by a program that started recording whenever someone entered the room after it had been empty 5+ minutes. Backgrounds were re-learned during absences to accommodate changes in lighting and room configuration. Prior to training, HMM states were initialized to tile the image with their receptive fields, and transition probabilities were initialized to prefer motion to adjoining tiles. Three sequences ranging from 1000 to 1900 frames in length were used for entropic training of 12, 16, 20, 25, and 30-state HMMs. Entropic training yielded a substantially sparsified model with an easily interpreted state machine (see figure 3). Grouping of states into activities (done only to improve readability) was done by adaptive clustering on a proximity matrix which combined Mahalanobis distance and transition probability between states. The labels are the author's description of the set of frames claimed by each state cluster during forward-backward analysis of test data. Figure 4 illustrates this analysis, showing frames from a test sequence to which specific states are strongly tuned.
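A minimal sketch of how such ellipse observations could be computed from a set of foreground pixel coordinates (our own illustration; `blob_features` is a hypothetical name and the paper's actual vision pipeline is not specified at this level). The closed-form eigenvalues of the 2x2 sample covariance give the ellipse axes, from which elongation and eccentricity follow:

```python
import math

def blob_features(pixels):
    # Fit a single bivariate Gaussian to the foreground pixel coordinates and
    # return the ellipse observations (mean_x, mean_y, mass, elongation,
    # eccentricity).  The Delta-terms of the paper's feature vector would be
    # frame-to-frame differences of these quantities.
    n = len(pixels)
    mx = sum(x for x, _ in pixels) / n
    my = sum(y for _, y in pixels) / n
    sxx = sum((x - mx) ** 2 for x, _ in pixels) / n
    syy = sum((y - my) ** 2 for _, y in pixels) / n
    sxy = sum((x - mx) * (y - my) for x, y in pixels) / n
    # Closed-form eigenvalues of the 2x2 covariance give the ellipse axes.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1 = max(tr / 2.0 + disc, 1e-12)
    l2 = max(tr / 2.0 - disc, 1e-12)
    return mx, my, n, math.sqrt(l1 / l2), math.sqrt(1.0 - l2 / l1)
```

An elongated horizontal blob yields high elongation and eccentricity near 1, while a square blob yields eccentricity near 0.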
State 5 (figure 3, right) is particularly interesting: it has a very non-specific receptive field, no self-transition, and an extremely low rate of occupancy. Instead of modeling data, it serves to compress the model by summarizing transition patterns that are common to several other states. The entropic model has proven to be quite superior for segmenting new video into activities and detecting anomalous behavior.

Figure 3: Top: The state machine found by entropic training (left) is easily labeled and interpreted. The state machine found by conventional training (right) is not, being fully connected. Bottom: Transition matrices after (1) initialization, (2) entropic training, (3) conventional training, and (4 & 5) entropic training from larger initializations. The top row indicates initial probabilities of each state; each subsequent row indicates the transition probabilities out of a state. Color key: ○ = 0; ● = 1. The state machines above are extracted from 2 & 3. Note that 4 & 5 show the same qualitative structure as 2, but sparser, while 3 shows almost no structure at all.

Figure 4: Some sample frames assigned high state-specific probabilities by the model. Note that some states are tuned to velocities, hence the difference between states 6 and 11.

4 Related work

HMMs: The literature of structure-learning in HMMs is based almost entirely on generate-and-test algorithms. These algorithms work by merging [Stolcke and Omohundro, 1994] or splitting [Takami and Sagayama, 1991] states, then retraining the model to see if any advantage has been gained. Space constraints force us to summarize a recent literature review: There are now more than 20 variations and improvements on these approaches, plus some heuristic constructive algorithms (e.g., [Wolfertstetter and Ruske, 1995]).
Though these efforts use a variety of heuristic techniques and priors (including MDL) to avoid detrimental model changes, much of the computation is squandered and reported run-times often range from hours to days. Entropic estimation is exact, monotonic, and orders of magnitude faster: only slightly longer than standard EM parameter estimation. MDL: Description length minimization is typically done via gradient ascent or search via model comparison; few estimators are known. Rissanen [1989] introduced an estimator for binary fractions, from which Vovk [1995] derived an approximate estimator for Bernoulli models over discrete sample spaces. It approximates a special case of our exact estimator, which handles multinomial models in continuous sample spaces. Our framework provides a unified Bayesian framework for two issues that are often treated separately in MDL: estimating the number of parameters and estimating their values. MaxEnt: Our prior has different premises and an effect opposite that of the "standard" MaxEnt prior $e^{-\alpha D(\theta_i \| \theta_0)}$. Nonetheless, our prior can be derived via MaxEnt reasoning from the premise that the expectation of the perplexity over all possible models is finite [Brand, 1998]. More colloquially, we almost always expect there to be learnable structure. Extensions: For simplicity of exposition (and for results that are independent of model class), we have assumed prior independence of the parameters and taken $H(\theta)$ to be the combined parameter entropies of the model's component distributions. Depending on the model class, we can also provide variants of eqns. 1-8 for $H(\theta)$ = conditional entropy or $H(\theta)$ = entropy rate of the model. In Brand [1998] we present entropic MAP estimators for spread and covariance parameters with applications to mixtures-of-Gaussians, radial basis functions, and other popular models. In the same paper we generalize eqns.
1-8 with a temperature term, obtaining a MAP estimator that minimizes the free energy of the model. This folds deterministic annealing into EM, turning it into a quasi-global optimizer. It also provides a workaround for one known limitation of entropy minimization: It is inappropriate for learning from data that is atypical of the source process. Open questions: Our framework is currently agnostic w.r.t. two important questions: Is there an optimal trimming policy? Is there a best entropy measure? Other questions naturally arise: Can we use the entropy to estimate the peakedness of the posterior distribution, and thereby judge the appropriateness of MAP models? Can we also directly minimize the entropy of the hidden variables, thereby obtaining discriminant training? 5 Conclusion Entropic estimation is a highly efficient hill-climbing procedure for simultaneously estimating model structure and parameters. It provides a clean Bayesian framework for minimizing all entropies associated with modeling, and an E-MAP algorithm that brings the structure of a randomly initialized model into alignment with hidden structures in the data via parameter extinction. The applications detailed here are three of many in which entropically estimated models have consistently outperformed maximum likelihood models in classification and prediction tasks. Most notably, it tends to produce interpretable models that shed light on the structure of relations between hidden variables and observed effects. References Brand, M. (1997). Structure discovery in conditional probability models via an entropic prior and parameter extinction. Neural Computation. To appear; accepted 8/98. Brand, M. (1998). Pattern discovery via entropy minimization. To appear in Proc. Artificial Intelligence and Statistics #7. Merz, C. and Murphy, P. (1998). UCI repository of machine learning databases. Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition.
Proceedings of the IEEE, 77(2):257-286. Reynolds, D. (1992). Handwritten digit data. UNIPEN web site, http://hwr.nici.kun.nl/unipen/. Donated by HP Labs, Bristol, England. Rissanen, J. (1989). Stochastic Complexity and Statistical Inquiry. World Scientific. Stolcke, A. and Omohundro, S. (1994). Best-first model merging for hidden Markov model induction. TR-94-003, International Computer Science Institute, U.C. Berkeley. Takami, J.-I. and Sagayama, S. (1991). Automatic generation of the hidden Markov model by successive state splitting on the contextual domain and the temporal domain. TR SP91-88, IEICE. Vovk, V. G. (1995). Minimum description length estimators under the optimal coding scheme. In Vitanyi, P., editor, Proc. Computational Learning Theory / Europe, pages 237-251. Springer-Verlag. Wolfertstetter, F. and Ruske, G. (1995). Structured Markov models for speech recognition. In International Conference on Acoustics, Speech, and Signal Processing, volume I, pages 544-7.
|
1998
|
35
|
1,532
|
The Effect of Correlations on the Fisher Information of Population Codes Hyoungsoo Yoon hyoung@fiz.huji.ac.il Haim Sompolinsky haim@fiz.huji.ac.il Racah Institute of Physics and Center for Neural Computation Hebrew University, Jerusalem 91904, Israel Abstract We study the effect of correlated noise on the accuracy of population coding using a model of a population of neurons that are broadly tuned to an angle in two dimensions. The fluctuations in the neuronal activity are modeled as a Gaussian noise with pairwise correlations which decay exponentially with the difference between the preferred orientations of the pair. By calculating the Fisher information of the system, we show that in the biologically relevant regime of parameters positive correlations decrease the estimation capability of the network relative to the uncorrelated population. Moreover, strong positive correlations result in an information capacity which saturates to a finite value as the number of cells in the population grows. In contrast, negative correlations substantially increase the information capacity of the neuronal population. 1 Introduction In many neural systems, information regarding sensory inputs or (intended) motor outputs is found to be distributed throughout a localized pool of neurons. It is generally believed that one of the main characteristics of the population coding scheme is its redundancy in representing information (Paradiso 1988; Snippe and Koenderink 1992a; Seung and Sompolinsky 1993). Hence the intrinsic neuronal noise, which has a detrimental impact on the information processing capability, is expected to be compensated by increasing the number of neurons in a pool. Although this expectation is universally true for an ensemble of neurons whose stochastic variabilities are statistically independent, a general theory of the efficiency of population coding when the neuronal noise is correlated within the population has been lacking.
The conventional wisdom has been that the correlated variability limits the information processing capacity of neuronal ensembles (Zohary, Shadlen, and Newsome 1994). However, detailed studies of simple models of a correlated population that code for a single real-valued parameter led to apparently contradictory claims. Snippe and Koenderink (Snippe and Koenderink 1992b) conclude that depending on the details of the correlations, such as their spatial range, they may either increase or decrease the information capacity relative to the uncorrelated one. Recently, Abbott and Dayan (Abbott and Dayan 1998) claimed that in many cases correlated noise improves the accuracy of a population code. Furthermore, even when the information is decreased it still grows linearly with the size of the population. If true, this conclusion has an important implication for the utility of using a large population to improve the estimation accuracy. Since cross-correlations in neuronal activity are frequently observed in both primary sensory and motor areas (Fetz, Toyama, and Smith 1991; Lee, Port, Kruse, and Georgopoulos 1998), understanding the effect of noise correlation in biologically relevant situations is of great importance. In this paper we present an analytical study of the effect of noise correlations on the population coding of a pool of cells that code for a single one-dimensional variable, an angle on a plane, e.g., an orientation of a visual stimulus, or the direction of an arm movement. By assuming that the noise follows the multivariate Gaussian distribution, we investigate analytically the effect of correlation on the Fisher information. This model is similar to that considered in (Snippe and Koenderink 1992b; Abbott and Dayan 1998).
By analyzing its behavior in the biologically relevant regime of tuning width and correlation range, we derive general conclusions about the effect of the correlations on the information capacity of the population.

2 Population Coding with Correlated Noise

We consider a population of N neurons which respond to a stimulus characterized by an angle $\theta$, where $-\pi < \theta \le \pi$. The activity of each neuron (indexed by $i$) is assumed to be Gaussian with a mean $f_i(\theta)$ which represents its tuning curve, and a uniform variance $a$. The noise is assumed to be pairwise-correlated throughout the population. Hence the activity profile of the whole population, $R = \{r_1, r_2, \ldots, r_N\}$, given a stimulus $\theta$, follows the multivariate Gaussian distribution

$P(R|\theta) = \mathcal{N} \exp\Big(-\frac{1}{2}\sum_{i,j}(r_i - f_i(\theta))\, C^{-1}_{ij}\, (r_j - f_j(\theta))\Big)$  (1)

where $\mathcal{N}$ is a normalization constant and $C_{ij}$ is the correlation matrix

$C_{ij} = a\,\delta_{ij} + b_{ij}\,(1 - \delta_{ij}).$  (2)

It is assumed that the tuning curves of all the neurons are identical in form but peaked at different angles, that is $f_i(\theta) = f(\theta - \phi_i)$, where the preferred angles $\phi_i$ are distributed uniformly from $-\pi$ to $\pi$ with a lattice spacing $\omega$ equal to $2\pi/N$. We further assume that the noise correlation between a pair of neurons is only a function of their preferred angle difference, i.e., $b_{ij} = b(\|\phi_i - \phi_j\|)$, where $\|\theta_1 - \theta_2\|$ is defined to be the relative angle between $\theta_1$ and $\theta_2$, and hence its maximum value is $\pi$. A decrease in the magnitude of neuronal correlations with the dissimilarity in the preferred stimulus is often observed in cortical areas. We model this by exponentially decaying correlations

$b_{ij} = b\,\exp(-\|\phi_i - \phi_j\|/\rho)$  (3)

where $\rho$ specifies the angular correlation length. The amount of information that can be extracted from the above population will depend on the decoding scheme.
A convenient measure of the information capacity in the population is given by the Fisher information, which in our case is (for a given stimulus $\theta$)

$J(\theta) = \sum_{i,j} g_i\, C^{-1}_{ij}\, g_j$  (4)

where

$g_i(\theta) = \frac{\partial f_i(\theta)}{\partial\theta}.$  (5)

The utility of this measure follows from the well-known Cramer-Rao bound for the variance of any unbiased estimator, i.e., $\langle(\theta - \hat\theta)^2\rangle \ge 1/J(\theta)$. For the rest of this paper, we will concentrate on the Fisher information as a function of the noise correlation parameters, $b$ and $\rho$, as well as the population size $N$.

3 Results

In the case of an uncorrelated population ($b = 0$), the Fisher information is given by (Seung and Sompolinsky 1993)

$J_0 = \frac{N}{a}\sum_n |\tilde g_n|^2$  (6)

where $\tilde g_n$ is the Fourier transform of $g_j$, defined by

$\tilde g_n = \frac{1}{N}\sum_j e^{-in\phi_j}\, g_j.$  (7)

The mode number $n$ is an integer running from $-\frac{N-1}{2}$ to $\frac{N-1}{2}$ (for odd N) and $\phi_i = -\pi(N+1)/N + i\omega$, $i = 1, \ldots, N$. Likewise, in the case of $b \ne 0$, $J$ is given by

$J = N \sum_n \frac{|\tilde g_n|^2}{\tilde C_n}$  (8)

where $\tilde C_n$ are the eigenvalues of the covariance matrix $C_{ij}$,

$\tilde C_n = (a - 2b) + 2b\, \frac{1 - \lambda\cos(n\omega) - (-1)^n \lambda^{\frac{N+1}{2}} \cos(n\omega)(1 - \lambda)}{1 - 2\lambda\cos(n\omega) + \lambda^2}$  (9)

where $\omega = 2\pi/N$, $\lambda = e^{-\omega/\rho}$, and $N$ is assumed to be an odd integer. Note that the covariance matrix $C_{ij}$ remains positive definite as long as (10) where the lower bound holds for general N while the upper bound is valid for large N. To evaluate the effect of correlations in a large population it is important to specify the appropriate scales of the system parameters. We consider here the biologically relevant case of broadly tuned neurons that have a smoothly varying tuning curve with a single peak. When the tuning curve is smoothly varying, $|\tilde g_n|^2$ will be a rapidly decaying function as $n$ increases beyond a characteristic value which is proportional to the inverse of the tuning width, $\sigma$. We further assume a broad tuning, namely that the tuning curve spans a substantial fraction of the angular extent.
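As a numerical illustration of eqs. (4)-(8) (a sketch of ours, not the authors' code; `fisher_info` and the parameter values are our choices, following the circular Gaussian tuning curve and σ = 0.2π used later in the text): on the uniform lattice the covariance matrix is circulant, so its eigenvalues are the discrete Fourier transform of its first row and the quadratic form of eq. (4) reduces to the mode sum of eq. (8).

```python
import cmath
import math

def fisher_info(N, a, b, rho, sigma=0.2 * math.pi, fmax=10.0):
    # J = g^T C^{-1} g for the circular Gaussian tuning curve
    # f(psi) = fmax * exp((cos(psi) - 1)/sigma^2) and the covariance
    # C_ij = a*delta_ij + b*exp(-||phi_i - phi_j||/rho) for i != j.
    # C is circulant on the lattice phi_j = j*w, so its eigenvalues are the
    # DFT of its first row and J = N * sum_n |g_n|^2 / C_n  (eqs. 7-8).
    w = 2.0 * math.pi / N
    # g_j = d f(theta - phi_j) / d theta, evaluated at theta = 0
    g = [fmax * math.exp((math.cos(j * w) - 1.0) / sigma ** 2)
         * math.sin(j * w) / sigma ** 2 for j in range(N)]
    # First row of the circulant covariance; ||.|| is the wrapped angle.
    c = [a if k == 0 else
         b * math.exp(-min(k * w, 2.0 * math.pi - k * w) / rho)
         for k in range(N)]
    J = 0.0
    for n in range(N):
        gn = sum(gj * cmath.exp(-1j * n * j * w) for j, gj in enumerate(g)) / N
        cn = sum(ck * cmath.exp(-1j * n * k * w) for k, ck in enumerate(c)).real
        assert cn > 0.0, "covariance matrix is not positive definite"
        J += abs(gn) ** 2 / cn
    return N * J
```

With ρ of order 1 (here ρ = 0.25π), a moderate positive b lowers J below the uncorrelated value J0, while a weak negative b raises it, matching the paper's main claim.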
This is consistent with the observed typical values of half-width at half height in visual and motor areas, which range from 20 to 60 degrees. Likewise, it is reasonable to assume that the angular correlation length $\rho$ spans a substantial fraction of the entire angular range. This broad tuning of correlations with respect to the difference in the preferred angles is commonly observed in cortex (Fetz, Toyama, and Smith 1991; Lee, Port, Kruse, and Georgopoulos 1998). To capture these features we will consider the limit of large $N$ while keeping the parameters $\rho$ and $\sigma$ constant. Note that keeping $\sigma$ of order 1 implies that substantial contributions to Eq. (8) come only from $n$ which remain of order 1 as $N$ increases. On the other hand, given the enormous variability in the strength of the observed cross-correlations between pairs of neurons in cortex, we do not restrict the value of $b$ at this point. Incorporating the above scaling we find that when $N$ is large $J$ is given by

$J = \frac{N}{a}\sum_n |\tilde g_n|^2\, \frac{\rho^{-2} + n^2}{\rho^{-2} + n^2 + \left(\frac{Nb}{\pi\rho a}\right)\left(1 - (-1)^n e^{-\pi/\rho}\right)}.$  (11)

Inspection of the denominator in the above equation clearly shows that for all positive values of $b$, $J$ is smaller than $J_0$. On the other hand, when $b$ is negative $J$ is larger than $J_0$. To estimate the magnitude of these effects we consider below three different regimes.

Figure 1: Normalized Fisher information $J/J_0$ versus $N$ when $\rho \sim O(1)$ ($\rho = 0.25\pi$ was used). $a = 1$ and $b = 0.1$, $0.01$, and $0.001$ from the bottom. We used a circular Gaussian tuning curve, Eq. (13), with $f_{\max} = 10$ and $\sigma = 0.2\pi$.

Strong positive correlations: We first discuss the regime of strong positive correlations, by which we mean that $0 < b/a \sim O(1)$. In this case the second term in the denominator of Eq. (11) is of order $N$ and Eq.
(11) becomes

$J = \frac{\pi\rho}{b}\sum_n |\tilde g_n|^2\, \frac{\rho^{-2} + n^2}{1 - (-1)^n e^{-\pi/\rho}}.$  (12)

This result implies that in this regime the Fisher information in the entire population does not scale linearly with the population size $N$ but saturates to a size-independent finite limit. Thus, for these strong correlations, although the number of neurons in the population may be large, the number of independent degrees of freedom is small. We demonstrate the above phenomenon by a numerical evaluation of $J$ for the following choice of tuning curve

$f(\theta) = f_{\max} \exp\left((\cos(\theta) - 1)/\sigma^2\right)$  (13)

with $\sigma = 0.2\pi$. The results are shown in Fig. 1 and Fig. 2. The results of Fig. 1 clearly show the substantial decrease in $J$ as $b$ increases. The reduction in $J/J_0$ when $b \sim O(1)$ indicates that $J$ does not scale with $N$ in this limit. Fig. 2 shows the saturation of $J$ when $N$ increases. For $\rho = 0.1$ and $1$ ((c) and (d)), $J$ saturates at about $N = 100$, which means that for these parameter values the network contains at most 100 independent degrees of freedom. When the correlation range becomes either smaller or bigger, the saturation becomes less prominent ((a) and (b)), which is further explained later in the text.

Figure 2: Saturation of Fisher information $J$ versus $N$ with the correlation coefficient kept fixed: $a = 1$ and $b = 0.5$. Both $\rho \sim O(1)$ ((c) $\rho = 0.1$ and (d) $\rho = 1$) and other extreme limits ((a) $\rho = 0.01$ and (b) $\rho = 10$) are shown. Tuning curve with $f_{\max} = 1$ and $\sigma = 0.2\pi$ was used for all four curves.

Weak positive correlations: This regime is defined formally by positive values of $b$ which scale as $b/a \sim O(1/N)$. In this case, while $J$ is still smaller than $J_0$, the suppressive effects of the correlations are not as strong as in the first case. This is shown in Fig. 3 (bottom traces) for $N = 1000$. While $J$ is less than $J_0$, it is still a substantial fraction of $J_0$, indicating $J$ is of order $N$.
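The saturation claim of eq. (12) can be checked numerically with the same circulant-eigenvalue evaluation of eq. (8) (again a sketch of ours; `fisher` is a hypothetical name, and the parameter values b/a = 0.5, ρ = 0.25π, σ = 0.2π follow the text's examples): with strong positive correlations, J barely grows as N is tripled, while the uncorrelated J0 grows linearly with N.

```python
import cmath
import math

def fisher(N, b, a=1.0, rho=0.25 * math.pi, sigma=0.2 * math.pi):
    # Eigenvalues of the circulant covariance via DFT of its first row;
    # J = N * sum_n |g_n|^2 / C_n  (eq. 8), with the tuning curve of eq. (13).
    w = 2.0 * math.pi / N
    g = [math.exp((math.cos(j * w) - 1.0) / sigma ** 2)
         * math.sin(j * w) / sigma ** 2 for j in range(N)]
    c = [a if k == 0 else
         b * math.exp(-min(k * w, 2.0 * math.pi - k * w) / rho)
         for k in range(N)]
    total = 0.0
    for n in range(N):
        gn = sum(gj * cmath.exp(-1j * n * j * w) for j, gj in enumerate(g)) / N
        cn = sum(ck * cmath.exp(-1j * n * k * w) for k, ck in enumerate(c)).real
        total += abs(gn) ** 2 / cn
    return N * total
```

In this sketch, going from N = 101 to N = 301 increases the correlated J by only a few percent, whereas the uncorrelated J0 roughly triples.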
Figure 3: Normalized Fisher information $J/J_0$ versus $\rho$ when $\rho \sim O(1)$ and $b/a \sim O(1/N)$. $N = 1000$, $a = 1$, $f_{\max} = 10$, and $\sigma = 0.2\pi$. The top curves represent negative $b$ ($b = -0.005$ and $-0.002$ from the top) and the bottom ones positive $b$ ($b = 0.01$ and $0.005$ from the bottom).

Weak negative correlations: So far we have considered the case of positive $b$. As stated above, Eq. (11) implies that when $b < 0$, $J > J_0$. The lower bound of $b$ (Eq. (10)) means that when the correlations are negative and $\rho$ is of order 1 the amplitude of the correlations must be small. It scales as $b/a = \tilde b/N$ with $\tilde b$ which is of order 1 and is larger than $\tilde b_{\min} = -(\pi/\rho)/(1 - \exp(-\pi/\rho))$. In this regime $(J - J_0)/N$ retains a finite positive value even for large $N$. This enhancement can
In fact, a more detailed analysis (Yoon and Sompolinsky 1998) shows that as soon as p« O(VN), 1 - 10 < 0, as in the case of p rv 0(1) discussed above. The crossover between these two opposite behaviors is shown in Fig. 4. For comparison the case with p rv 0(1) is also shown. 4.0 3.0 .J 2.0 .In 1.D n.o n.D n.2 0.4 D.G 0.8 1.0 b Figure 4: Normalized Fisher information when bla rv 0(1). N = 1000 and a = 1. When p '" 0(1), increasing b always decreases the Fisher information (bottom curve p = 0.2511"). However, this trend is reversed when p ,....., O(VN) and when p > ~N .1 - .10 becomes always positive. From the top p = 400, 50, and 25. Another extreme regime is where the correlation length p scales as 1 IN but the tuning width remains of order 1. This means that a given neuron is correlated with a small number of its immediate neighbors, which remains finite as N ~ 00. In this limit, the Fishel' information becomes, again from Eq. (8) and Eq. (9), _ N(>..-l_l) 2 1 - a(>..-1_1)+2bI:19nl. " (15 ) In this case, the behavior of 1 is similar to the cases of weak correlations discussed above. The information remains of order N but the sign of 1 - 10 depends on the sign of b. Thus, when the amplitude of the positive correlation function is 0(1), .] increases linearly with N in the two opposite extremes of very large and very small p as shown in Fig. 2 ((a) and (b)). Fisher Information of Correlated Population Codes 173 4 Discussion In this paper we have studied the effect of correlated variability of neuronal activity OIl the maximum accuracy of the population coding. We have shown that the effect of correlation on the information capacity of the population crucially depends on the scale of correlation length. We argue that for the sensory and motor areas which are presumed to utilize population coding, the tuning of both the correlations and the mean response profile is broad and of the same order. 
This implies that each neuron is correlated with a finite fraction of the total number of neurons, $N$, and a given stimulus activates a finite fraction of $N$. We show that in this regime positive correlations always decrease the information. When they are strong enough in amplitude they reduce the number of independent degrees of freedom to a finite number even for a large population. Only in the extreme case of almost uniform correlations is the information capacity enhanced. This is reasonable since to overcome the positive correlations one needs to subtract the responses of different neurons. But in general this will reduce their signal by a larger amount. When the correlations are uniform, the reduction of the correlated noise by subtraction is perfect and can be made in a manner that will little affect the signal component. Acknowledgments H.S. acknowledges helpful discussions with Larry Abbott and Sebastian Seung. This research is partially supported by the Fund for Basic Research of the Israeli Academy of Science and by a grant from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel. References L. F. Abbott and P. Dayan (1998). The effect of correlated variability on the accuracy of a population code. Neural Comp., in press. E. Fetz, K. Toyama, and W. Smith (1991). Synaptic interactions between cortical neurons. In A. Peters and E. G. Jones (Eds.), Cerebral Cortex, Volume 9. New York: Plenum Press. D. Lee, N. L. Port, W. Kruse, and A. P. Georgopoulos (1998). Variability and correlated noise in the discharge of neurons in motor and parietal areas of the primate cortex. J. Neurosci. 18, 1161-1170. M. A. Paradiso (1988). A theory for the use of visual orientation information which exploits the columnar structure of striate cortex. Biol. Cybern. 58, 35-49. H. S. Seung and H. Sompolinsky (1993). Simple models for reading neuronal population codes. Proc. Natl. Acad. Sci. USA 90, 10749-10753. H. P. Snippe and J. J. Koenderink (1992a).
Discrimination thresholds for channel-coded systems. Biol. Cybern. 66, 543-551. H. P. Snippe and J. J. Koenderink (1992b). Information in channel-coded systems: correlated receivers. Biol. Cybern. 67, 183-190. H. Yoon and H. Sompolinsky (1998). Population coding in neuronal systems with correlated noise, preprint. E. Zohary, M. N. Shadlen, and W. T. Newsome (1994). Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370, 140-143.
1998
Spike-Based Compared to Rate-Based Hebbian Learning Richard Kempter* Institut fur Theoretische Physik Technische Universitat Munchen D-85747 Garching, Germany Wulfram Gerstner Swiss Federal Institute of Technology Center of Neuromimetic Systems, EPFL-DI CH-1015 Lausanne, Switzerland J. Leo van Hemmen Institut fur Theoretische Physik Technische Universitat Munchen D-85747 Garching, Germany

Abstract

A correlation-based learning rule at the spike level is formulated, mathematically analyzed, and compared to learning in a firing-rate description. A differential equation for the learning dynamics is derived under the assumption that the time scales of learning and spiking can be separated. For a linear Poissonian neuron model which receives time-dependent stochastic input we show that spike correlations on a millisecond time scale indeed play a role. Correlations between input and output spikes tend to stabilize structure formation, provided that the form of the learning window is in accordance with Hebb's principle. Conditions for an intrinsic normalization of the average synaptic weight are discussed.

1 Introduction

Most learning rules are formulated in terms of mean firing rates, viz., a continuous variable reflecting the mean activity of a neuron. For example, a 'Hebbian' (Hebb 1949) learning rule which is driven by the correlations between presynaptic and postsynaptic rates may be used to generate neuronal receptive fields (e.g., Linsker 1986, MacKay and Miller 1990, Wimbauer et al. 1997) with properties similar to those of real neurons. A rate-based description, however, neglects effects which are due to the pulse structure of neuronal signals. During recent years experimental and theoretical evidence has accumulated which suggests that temporal coincidences between spikes on a millisecond or even sub-millisecond scale play an important role in neuronal information processing (e.g., Bialek et al. 1991, Carr 1993, Abeles 1994, Gerstner et al. 1996). Moreover, changes of synaptic efficacy depend on the precise timing of postsynaptic action potentials and presynaptic input spikes (Markram et al. 1997, Zhang et al. 1998). A synaptic weight is found to increase if presynaptic firing precedes a postsynaptic spike, and to decrease otherwise. In contrast to the standard rate models of Hebbian learning, the spike-based learning rule discussed in this paper takes these effects into account. For mathematical details and numerical simulations the reader is referred to Kempter et al. (1999).

* email: kempter@physik.tu-muenchen.de (corresponding author)

2 Derivation of the Learning Equation

2.1 Specification of the Hebb Rule

We consider a neuron that receives input from N ≫ 1 synapses with efficacies J_i, 1 ≤ i ≤ N. We assume that changes of J_i are induced by pre- and postsynaptic spikes. The learning rule consists of three parts. (i) Let t_i^m be the time of the m-th input spike arriving at synapse i. The arrival of the spike induces the weight J_i to change by an amount w^in, which can be positive or negative. (ii) Let t^n be the n-th output spike of the neuron under consideration. This event triggers the change of all N efficacies by an amount w^out, which can also be positive or negative. (iii) Finally, time differences between input and output spikes influence the change of the efficacies. Given a time difference s = t_i^m - t^n between input and output spikes, J_i is changed by an amount W(s), where the learning window W is a real-valued function (Fig. 1). The learning window can be motivated by local chemical processes at the level of the synapse (Gerstner et al. 1998, Senn et al. 1999). Here we simply assume that such a learning window exists and take some (arbitrary) functional dependence W(s).
Figure 1: An example of a learning window W as a function of the delay s = t_i^m - t^n between a postsynaptic firing time t^n and presynaptic spike arrival t_i^m at synapse i. Note that for s < 0 the presynaptic spike precedes postsynaptic firing.

Starting at time t with an efficacy J_i(t), the total change ΔJ_i(t) = J_i(t+T) - J_i(t) in a time interval T is calculated by summing the contributions of all input and output spikes in the time interval [t, t+T]. Describing the input spike train at synapse i by a series of δ functions, S_i^in(t) = Σ_m δ(t - t_i^m), and, similarly, output spikes by S^out(t) = Σ_n δ(t - t^n), we can formulate the rules (i)-(iii):

$$\Delta J_i(t) = \int_t^{t+T} dt' \left[ w^{\rm in}\, S_i^{\rm in}(t') + w^{\rm out}\, S^{\rm out}(t') + \int_t^{t+T} dt''\, W(t''-t')\, S_i^{\rm in}(t'')\, S^{\rm out}(t') \right] \quad (1)$$

2.2 Separation of Time Scales

The total change ΔJ_i(t) is subject to noise due to stochastic spike arrival and, possibly, stochastic generation of output spikes. We therefore study the expected development of the weights J_i, denoted by angular brackets. We make the substitution s = t'' - t' on the right-hand side of (1), divide both sides by T, and take the expectation value:

$$\frac{\langle \Delta J_i\rangle(t)}{T} = \frac{1}{T}\int_t^{t+T} dt' \left[ w^{\rm in} \langle S_i^{\rm in}\rangle(t') + w^{\rm out} \langle S^{\rm out}\rangle(t') \right] + \frac{1}{T}\int_t^{t+T} dt' \int_{t-t'}^{t+T-t'} ds\, W(s)\, \langle S_i^{\rm in}(t'+s)\, S^{\rm out}(t')\rangle \quad (2)$$

We may interpret ⟨S_i^in⟩(t) for 1 ≤ i ≤ N and ⟨S^out⟩(t) as instantaneous firing rates.¹ They may vary on very short time scales, shorter, e.g., than average interspike intervals. Such a model is consistent with the idea of temporal coding, since it does not rely on temporally averaged mean firing rates. We note, however, that due to the integral over time on the right-hand side of (2) temporal averaging is indeed important. If T is much larger than typical interspike intervals, we may define mean firing rates $\nu_i^{\rm in}(t) = \overline{\langle S_i^{\rm in}\rangle}(t)$ and $\nu^{\rm out}(t) = \overline{\langle S^{\rm out}\rangle}(t)$, where we have used the notation $\overline{f}(t) = T^{-1}\int_t^{t+T} dt'\, f(t')$.
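The interval-summed update (1) can be sketched directly as a toy implementation of rules (i)-(iii). The exponential learning window and all constants below are illustrative choices, not the paper's:

```python
import math

def hebb_window(s, A_plus=1.0, A_minus=-0.5, tau=10.0):
    """Illustrative exponential learning window W(s) (s in ms):
    positive when the input spike precedes the output spike (s < 0),
    negative otherwise, as sketched in Fig. 1."""
    return A_plus * math.exp(s / tau) if s < 0 else A_minus * math.exp(-s / tau)

def delta_J(t_in, t_out, w_in, w_out, W=hebb_window):
    """Total change of one efficacy J_i over the interval, Eq. (1):
    w_in per input spike, w_out per output spike, plus W(s) for every
    input/output spike pair with s = t_f - t_n."""
    dJ = w_in * len(t_in) + w_out * len(t_out)
    return dJ + sum(W(tf - tn) for tf in t_in for tn in t_out)
```

For example, one input spike 5 ms before one output spike contributes the two per-spike terms plus a positive window term, since s = -5 < 0.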
The mean firing rates must be distinguished from the previously defined instantaneous rates ⟨S_i^in⟩ and ⟨S^out⟩, which are defined as an expectation value and have a high temporal resolution. In contrast, the mean firing rates ν_i^in and ν^out vary slowly (time scale of the order of T) as a function of time. If the learning time T is much larger than the width of the learning window, the integration over s in (2) can be extended to run from -∞ to ∞ without introducing a noticeable error. With the definition of a temporally averaged correlation,

$$C_i(s; t) = \frac{1}{T}\int_t^{t+T} dt'\, \langle S_i^{\rm in}(t'+s)\, S^{\rm out}(t')\rangle, \quad (3)$$

the last term on the right of (2) reduces to $\int_{-\infty}^{\infty} ds\, W(s)\, C_i(s;t)$. Thus, correlations between pre- and postsynaptic spikes enter spike-based Hebbian learning through C_i convolved with the learning window W. We remark that the correlation C_i(s;t) may change as a function of s on a fast time scale. Note that, by definition, s < 0 implies that a presynaptic spike precedes the output spike, and this is when we expect (for excitatory synapses) a positive correlation between input and output. As usual in the theory of Hebbian learning, we require learning to be a slow process. The correlation C_i can then be evaluated for a constant J_i and the left-hand side of (2) can be rewritten as a differential on the slow time scale of learning:

$$\frac{d}{dt} J_i(t) \equiv \dot J_i = w^{\rm in}\, \nu_i^{\rm in}(t) + w^{\rm out}\, \nu^{\rm out}(t) + \int_{-\infty}^{\infty} ds\, W(s)\, C_i(s; t) \quad (4)$$

2.3 Relation to Rate-Based Hebbian Learning

In neural network theory, the hypothesis of Hebb (Hebb 1949) is usually formulated as a learning rule where the change of a synaptic efficacy J_i depends on the correlation between the mean firing rate ν_i^in of the i-th presynaptic and the mean firing rate ν^out of a postsynaptic neuron, viz.,

$$\dot J_i = a_0 + a_1\, \nu_i^{\rm in} + a_2\, \nu^{\rm out} + a_3\, \nu_i^{\rm in}\, \nu^{\rm out} + a_4\, (\nu_i^{\rm in})^2 + a_5\, (\nu^{\rm out})^2, \quad (5)$$

where a_0, a_1, a_2, a_3, a_4, and a_5 are proportionality constants.
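For comparison, the rate-based rule (5) is just a polynomial in the two mean rates; a minimal sketch with illustrative coefficient values:

```python
def rate_rule(v_in, v_out, a0=0.0, a1=1e-3, a2=-1e-3, a3=5e-3, a4=0.0, a5=0.0):
    """Rate-based Hebbian rule, Eq. (5): dJ_i/dt as a polynomial up to
    second order in the presynaptic rate v_in and postsynaptic rate
    v_out (Hz). The coefficients a0..a5 here are illustrative only."""
    return a0 + a1*v_in + a2*v_out + a3*v_in*v_out + a4*v_in**2 + a5*v_out**2
```

With a3 > 0 the product term dominates for co-active pre- and postsynaptic rates, which is the classical 'Hebbian' part of (5).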
Apart from the decay term a_0 and the 'Hebbian' term ν_i^in ν^out proportional to the product of input and output rates, there are also synaptic changes which are driven separately by the pre- and postsynaptic rates. The parameters a_0, ..., a_5 may depend on J_i. Equation (5) is a general formulation up to second order in the rates; see, e.g., (Linsker 1986). To get (5) from (4) two approximations are necessary. First, if there are no correlations between input and output spikes apart from the correlations contained in the rates, we can approximate ⟨S_i^in(t+s) S^out(t)⟩ ≈ ⟨S_i^in⟩(t+s) ⟨S^out⟩(t). Second, if these rates change slowly as compared to T, then we have C_i(s;t) ≈ ν_i^in(t+s) ν^out(t). Since we have assumed that the learning time T is long compared to the width of the learning window, we may simplify further and set ν_i^in(t+s) ≈ ν_i^in(t), hence $\int_{-\infty}^{\infty} ds\, W(s)\, C_i(s;t) \approx \tilde W(0)\, \nu_i^{\rm in}(t)\, \nu^{\rm out}(t)$, where $\tilde W(0) = \int_{-\infty}^{\infty} ds\, W(s)$. We may now identify $\tilde W(0)$ with a_3. By further comparison of (5) with (4) we identify w^in with a_1 and w^out with a_2, and we are able to reduce (4) to (5) by setting a_0 = a_4 = a_5 = 0. The above set of assumptions necessary to derive (5) from (4) does, however, not hold in general. According to the results of Markram et al. (1997) the width of the learning window in cortical pyramidal cells is in the range of ~100 ms. A mean-rate formulation thus requires that all changes of the activity are slow on a time scale of 100 ms. This is not necessarily the case. The existence of oscillatory activity in the cortex in the range of 50 Hz implies activity changes every 20 ms.

¹ An example of rapidly changing instantaneous rates can be found in the auditory system. The auditory nerve carries noisy spike trains with a stochastic intensity modulated at the frequency of the applied acoustic tone. In the barn owl, a significant modulation of the rates is seen up to a frequency of 8 kHz (e.g., Carr 1993).
Much faster activity changes on a time scale of 1 ms and below are found in the auditory system (e.g., Carr 1993). Furthermore, beyond the correlations between mean activities, additional correlations between spikes may exist; see below. Because of all these reasons, the learning rule (5) in the simple rate formulation is insufficient. In the following we will study the full spike-based learning equation (4).

3 Stochastically Spiking Neurons

3.1 Poisson Input and Stochastic Neuron Model

To proceed with the analysis of (4) we need to determine the correlations C_i between input spikes at synapse i and output spikes. The correlations depend strongly on the neuron model under consideration. To highlight the main points of learning we study a linear inhomogeneous Poisson neuron as a toy model. Input spike trains arriving at the N synapses are statistically independent and generated by an inhomogeneous Poisson process with time-dependent intensities ⟨S_i^in⟩(t) = λ_i^in(t), with 1 ≤ i ≤ N. A spike arriving at t_i^m at synapse i evokes a postsynaptic potential (PSP) with time course ε(t - t_i^m), which we assume to be excitatory (EPSP). The amplitude is given by the synaptic efficacy J_i(t) > 0. The membrane potential u of the neuron is the linear superposition of all contributions,

$$u(t) = u_0 + \sum_{i=1}^{N} \sum_m J_i(t)\, \epsilon(t - t_i^m), \quad (6)$$

where u_0 is the resting potential. Output spikes are assumed to be generated stochastically with a time-dependent rate λ^out(t) which depends linearly upon the membrane potential,

$$\lambda^{\rm out}(t) = \beta[u(t)]_+ = \nu_0 + \sum_{i=1}^{N} \sum_m J_i(t)\, \epsilon(t - t_i^m), \quad (7)$$

with a linear function β[u]_+ = β_0 + β_1 u for u > 0 and zero otherwise. After the second equality sign, we have formally set ν_0 = u_0 + β_0 and β_1 = 1. ν_0 > 0 can be interpreted as the spontaneous firing rate. For excitatory synapses a negative u is impossible, and that is what we have used after the second equality sign.
The sums run over all spike arrival times at all synapses. Note that the spike generation process is independent of previous output spikes. In particular, the Poisson model does not include refractoriness. In the context of (4), we are interested in the expectation values for input and output. The expected input is ⟨S_i^in⟩(t) = λ_i^in(t). The expected output is

$$\langle S^{\rm out}\rangle(t) = \nu_0 + \sum_i J_i(t) \int_0^{\infty} ds\, \epsilon(s)\, \lambda_i^{\rm in}(t-s). \quad (8)$$

The expected output rate in (8) depends on the convolution of ε with the input rates. In the following we will denote the convolved rates by $\Lambda_i^{\rm in}(t) = \int_0^{\infty} ds\, \epsilon(s)\, \lambda_i^{\rm in}(t-s)$. Next we consider the expected correlations between input and output, ⟨S_i^in(t+s) S^out(t)⟩, which we need in (3):

$$\langle S_i^{\rm in}(t+s)\, S^{\rm out}(t)\rangle = \lambda_i^{\rm in}(t+s) \left[ \nu_0 + J_i(t)\, \epsilon(-s) + \sum_j J_j(t)\, \Lambda_j^{\rm in}(t) \right] \quad (9)$$

The first term inside the square brackets is the spontaneous output rate. The second term is the specific contribution of an input spike at time t+s to the output rate at t. It vanishes for s > 0 (Fig. 2). The sum in (9) contains the mean contributions of all synapses to an output spike at time t. Inserting (9) in (3) and assuming the weights J_j to be constant in the time interval [t, t+T], we obtain

$$C_i(s; t) = \sum_j J_j(t)\, \overline{\lambda_i^{\rm in}(t+s)\, \Lambda_j^{\rm in}(t)} + \overline{\lambda_i^{\rm in}(t+s) \left[ \nu_0 + J_i(t)\, \epsilon(-s) \right]}. \quad (10)$$

For excitatory synapses, the second term gives for s < 0 a positive contribution to the correlation function, as it should. (Recall that s < 0 means that a presynaptic spike precedes postsynaptic firing.)

3.2 Learning Equation

Figure 2: Interpretation of the term in square brackets in (9). The dotted line is the contribution of an input spike at time t+s to the output rate as a function of t', viz., J_i(t) ε(t' - t - s). Adding this to the mean rate contribution, ν_0 + Σ_j J_j(t') Λ_j^in(t') (dashed line), we obtain the rate inside the square brackets of (9) (full line). At time t' = t the contribution of an input spike at time t+s is J_i(t) ε(-s).
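The linear Poisson neuron of Eqs. (6)-(7) is simple enough to sketch in a few lines. The alpha-shaped EPSP kernel and all numbers below are illustrative; the paper leaves ε generic:

```python
import numpy as np

def epsp(t, tau=5.0):
    """Illustrative alpha-shaped EPSP kernel eps(t) in ms: zero for
    t <= 0 (causality), peak value 1 at t = tau."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def output_rate(t, input_spikes, J, nu0, tau=5.0):
    """Instantaneous output rate of the linear Poisson neuron,
    Eq. (7): lambda_out(t) = nu0 + sum_i J_i sum_m eps(t - t_i^m)."""
    lam = np.full_like(np.asarray(t, dtype=float), nu0)
    for Ji, spikes in zip(J, input_spikes):
        for tf in spikes:
            lam = lam + Ji * epsp(np.asarray(t) - tf, tau)
    return lam
```

A single input spike at t = 0 through a synapse with J_i = 2 raises the rate from the spontaneous value nu0 by at most 2 (at the kernel peak), illustrating the additive superposition in (6)-(7).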
The assumption of identical and constant mean input rates, $\bar\lambda_i^{\rm in}(t) = \nu_i^{\rm in}(t) = \nu^{\rm in}$ for all i, reduces the number of free parameters in (4) and eliminates all effects of rate coding. We introduce $\Gamma_i^{\rm in}(t) := [\tilde W(0)]^{-1} \int_{-\infty}^{\infty} ds\, W(s)\, \lambda_i^{\rm in}(t+s)$ and define the driving term $Q_{ij}(t)$ in (11). Using (8), (10), (11) in (4) we find for the evolution on the slow time scale of learning

$$\dot J_i(t) = k_1 + \sum_j J_j(t) \left[ Q_{ij}(t) + k_2 + k_3\, \delta_{ij} \right], \quad (12)$$

where

$$k_1 = \left[ w^{\rm out} + \tilde W(0)\, \nu^{\rm in} \right] \nu_0 + w^{\rm in}\, \nu^{\rm in}, \qquad (13)$$
$$k_2 = \left[ w^{\rm out} + \tilde W(0)\, \nu^{\rm in} \right] \nu^{\rm in}, \qquad (14)$$
$$k_3 = \nu^{\rm in} \int ds\, \epsilon(-s)\, W(s). \qquad (15)$$

4 Discussion

Equation (12), which is the central result of our analysis, describes the expected dynamics of synaptic weights for a spike-based Hebbian learning rule (1) under the assumption of a linear inhomogeneous Poisson neuron. Linsker (1986) has derived a mathematically equivalent equation starting from (5) and a linear graded-response neuron, a rate-based model. An equation of this type has been analyzed by MacKay and Miller (1990). The difference between Linsker's equation and (12) is, apart from a slightly different notation, the term $k_3 \delta_{ij}$ and the interpretation of $Q_{ij}$.

4.1 Interpretation of Qij

In (12) correlations between spikes on time scales down to milliseconds or below can enter the driving term $Q_{ij}$ for structure formation; cf. (11). In contrast to that, Linsker's ansatz is based on a firing-rate description, where the term $Q_{ij}$ contains correlations between mean firing rates only. In his $Q_{ij}$ term, mean firing rates take the place of $\Gamma_i^{\rm in}$ and $\Lambda_j^{\rm in}$. If we use a standard interpretation of rate coding, a mean firing rate corresponds to a temporally averaged quantity with an averaging window of a hundred milliseconds or more. Formally, we could define mean rates by temporal averaging with either $\epsilon(s)$ or $W(s)$ as the averaging window. In this sense, Linsker's 'rates' have been made more precise by (11). Note, however, that (11) is asymmetric: one of the rates should be convolved with $\epsilon$, the other one with $W$.
4.2 Relevance of the k3 term

The most important difference between Linsker's rate-based learning rule and our Eq. (12) is the existence of a term k_3 ≠ 0. We now argue that for a causal chain of events k_3 ∝ ∫ dx ε(x) W(-x) must be positive. [We have set x = -s in (15).] First, without loss of generality, the integral can be restricted to x > 0, since ε(x) is a response kernel and vanishes for x < 0. For excitatory synapses, ε(x) is positive for x > 0. Second, experiments on excitatory synapses show that W(s) is positive for s < 0 (Markram et al. 1997, Zhang et al. 1998). Thus the integral ∫ dx ε(x) W(-x) is positive, and so is k_3. There is also a more general argument for k_3 > 0 based on a literal interpretation of Hebb's statement (Hebb 1949). Let us recall that s < 0 in (15) means that a presynaptic spike precedes postsynaptic spiking. For excitatory synapses, a presynaptic spike which precedes postsynaptic firing may be the cause of the postsynaptic activity. [As Hebb puts it, it has 'contributed in firing the postsynaptic cell'.] Thus, the Hebb rule 'predicts' that for excitatory synapses W(s) is positive for s < 0. Hence, k_3 = ν^in ∫ ds ε(-s) W(s) > 0, as claimed above. A positive k_3 term in (12) gives rise to an exponential growth of weights. Thus any existing structure in the distribution of weights is enhanced. This contributes to the stability of weight distributions, especially when there are few and strong synapses (Gerstner et al. 1996).

4.3 Intrinsic Normalization

Let us suppose that no input synapse is special and impose the (weak) condition that N^{-1} Σ_i Q_{ij} = Q_0 > 0, independent of the synapse index j. We find then from (12) that the average weight J_0 := N^{-1} Σ_i J_i has a fixed point J_0 = -k_1/[Q_0 + k_2 + N^{-1} k_3]. The fixed point is stable if Q_0 + k_2 + N^{-1} k_3 < 0. We have shown above that k_3 > 0. Furthermore, Q_0 > 0 according to our assumption.
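Both claims can be checked numerically. The sketch below (kernel shapes and every constant are illustrative, not taken from the paper) evaluates the sign of the integral behind k_3 for a causal EPSP kernel and a Hebbian window, and the fixed point of the average weight:

```python
import numpy as np

# k3 ~ integral of eps(x) * W(-x) over x > 0: eps is causal (zero for
# x < 0) and positive for x > 0, and a Hebbian window has W(s) > 0
# for s < 0, i.e. W(-x) > 0 for x > 0, so the integrand is positive.
x = np.linspace(0.0, 200.0, 20001)                 # ms
dx = x[1] - x[0]
eps = (x / 5.0) * np.exp(1.0 - x / 5.0)            # illustrative EPSP kernel
W_of_minus_x = np.exp(-x / 20.0)                   # potentiation branch of W
k3_over_nu = float(np.sum(eps * W_of_minus_x) * dx)

def J_fixed_point(k1, k2, k3, Q0, N):
    """Fixed point of the average weight, J0 = -k1/(Q0 + k2 + k3/N);
    it is stable iff the denominator is negative. Values illustrative."""
    denom = Q0 + k2 + k3 / N
    return -k1 / denom, denom < 0
```

With, e.g., k1 = 0.5, k2 = -2, Q0 = 1 and N = 100, the denominator is negative, giving a stable and positive fixed point, which requires the k2 term to be sufficiently negative.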
The only way to enforce stability is therefore a term k_2 which is sufficiently negative. Let us now turn to the definition of k_2 in (14). To achieve k_2 < 0, either $\tilde W(0)$ (the integral over W) must be sufficiently negative, which corresponds to a learning rule that is, on average, anti-Hebbian; or, for $\tilde W(0) > 0$, the linear term w^out in (1) must be sufficiently negative. In addition, for excitatory synapses a reasonable fixed point J_0 has to be positive. For a stable fixed point this is only possible for k_1 > 0, which, in turn, implies that w^in is sufficiently positive; cf. (13). Intrinsic normalization of synaptic weights is an interesting property, since it allows neurons to stay at an optimal operating point even while synapses are changing. Auditory neurons may use such a mechanism to stay during learning in the regime where coincidence detection is possible (Gerstner et al. 1996, Kempter et al. 1998). Cortical neurons might use the same principles to operate in the regime of high variability (Abbott, invited NIPS talk, this volume).

4.4 Conclusions

Spike-based learning is different from simple rate-based learning rules. A spike-based learning rule can pick up correlations in the input on a millisecond time scale. Mathematically, the main difference to rate-based Hebbian learning is the existence of a k_3 term which accounts for the causal relation between input and output spikes. Correlations between input and output spikes on a millisecond time scale play a role and tend to stabilize existing strong synapses.

References

Abeles M., 1994, In Domany E. et al., editors, Models of Neural Networks II, pp. 121-140, New York. Springer. Bialek W. et al., 1991, Science, 252:1855-1857. Carr C. E., 1993, Annu. Rev. Neurosci., 16:223-243. Gerstner W. et al., 1996, Nature, 383:76-78. Gerstner W. et al., 1998, In W. Maass and C. M. Bishop, editors, Pulsed Neural Networks, pp. 353-377, Cambridge. MIT Press. Hebb D. O., 1949, The Organization of Behavior. Wiley, New York.
Kempter R. et al., 1998, Neural Comput., 10:1987-2017. Kempter R. et al., 1999, Phys. Rev. E, in press. Linsker R., 1986, Proc. Natl. Acad. Sci. USA, 83:7508-7512. MacKay D. J. C., Miller K. D., 1990, Network, 1:257-297. Markram H. et al., 1997, Science, 275:213-215. Senn W. et al., 1999, preprint, Univ. Bern. Wimbauer S. et al., 1997, Biol. Cybern., 77:453-461. Zhang L. I. et al., 1998, Nature, 395:37-44.
Contrast adaptation in simple cells by changing the transmitter release probability Peter Adorjan Klaus Obermayer Dept. of Computer Science, FR2-1, Technical University Berlin Franklinstrasse 28/29, 10587 Berlin, Germany {adp, oby}@cs.tu-berlin.de http://www.ni.cs.tu-berlin.de

Abstract

The contrast response function (CRF) of many neurons in the primary visual cortex saturates and shifts towards higher contrast values following prolonged presentation of high contrast visual stimuli. Using a recurrent neural network of excitatory spiking neurons with adapting synapses we show that both effects could be explained by a fast and a slow component in the synaptic adaptation. (i) Fast synaptic depression leads to saturation of the CRF and phase advance in the cortical response to high contrast stimuli. (ii) Slow adaptation of the synaptic transmitter release probability is derived such that the mutual information between the input and the output of a cortical neuron is maximal. This component, given by an infomax learning rule, explains contrast adaptation of the averaged membrane potential (DC component) as well as the surprising experimental result that the stimulus-modulated component (F1 component) of a cortical cell's membrane potential adapts only weakly. Based on our results, we propose a new experiment to estimate the strength of the effective excitatory feedback to a cortical neuron, and we also suggest a relatively simple experimental test to justify our hypothesized synaptic mechanism for contrast adaptation.

1 Introduction

Cells in the primary visual cortex have to encode a wide range of contrast levels, and they still need to be sensitive to small changes in the input intensities.
Because the signaling capacity is limited, this paradox can be resolved only by a dynamic adaptation to changes in the input intensity distribution: the contrast response function (CRF) of many neurons in the primary visual cortex shifts towards higher contrast values following prolonged presentation of high contrast visual stimuli (Ahmed et al. 1997, Carandini & Ferster 1997). On the one hand, recent experiments suggest that synaptic plasticity has a major role in contrast adaptation. Because local application of GABA does not mediate adaptation (Vidyasagar 1990) and the membrane conductance does not increase significantly during adaptation (Ahmed et al. 1997, Carandini & Ferster 1997), lateral inhibition is unlikely to account for contrast adaptation. In contrast, blocking glutamate (excitatory) autoreceptors decreases the degree of adaptation (McLean & Palmer 1996). Furthermore, the adaptation is stimulus specific (e.g. Carandini et al. 1998): it is strongest if the adapting and testing stimuli are the same. On the other hand, plasticity of synaptic weights (e.g. Chance et al. 1998) cannot explain the weak adaptation of the stimulus-driven modulations in the membrane potential (F1 component) (Carandini & Ferster 1997) and the retardation of the response phase after high contrast adaptation (Saul 1995). These experimental findings motivated us to explore how presynaptic factors, such as a long-term plasticity mediated by changes in the transmitter release probability (Finlayson & Cynader 1995), affect contrast adaptation.

2 The single cell and the cortical circuit model

The cortical cells are modeled as leaky integrators with a firing threshold of -55 mV. The interspike membrane potential dynamics is described by

$$C_m \frac{dV_i(t)}{dt} = -g_{\rm leak}\,\big(V_i(t) - E_{\rm rest}\big) - \sum_j g_{ij}(t)\,\big(V_i(t) - E_{\rm syn}\big). \quad (1)$$

The postsynaptic conductance g_ij(t) is the integral over the previous presynaptic events and is described by the alpha-function

$$g_{ij}(t) = g^{\max}_{ij} \sum_s \alpha(t - t_j^s), \qquad \alpha(t) = \frac{t}{\tau_{\rm peak}} \exp\!\left(1 - \frac{t}{\tau_{\rm peak}}\right) \;\text{for } t > 0, \quad (2)$$

where t_j^s is the arrival time of spike number s from neuron j. Including short-term synaptic depression, the effective conductance is weighted by the portion p_ij(t) · R_ij(t) of the synaptic resource that targets the postsynaptic side. The model parameters are C_m = 0.5 nF, g_leak = 31 nS, E_rest = -65 mV, E_syn = -5 mV, g_max = 7.8 nS, and τ_peak = 1 ms; the absolute refractory period is 2 ms, and after a spike the membrane potential is reset 1 mV below the resting potential. Following Tsodyks & Markram (1997), a synapse between neurons j and i is characterized by the relative portion R_ij of the available synaptic transmitter or resource. After a presynaptic event, R_ij decreases by p_ij R_ij and recovers exponentially, where p_ij is the transmitter release probability. The time evolution of R_ij between two presynaptic spikes is then

$$R_{ij}(t) = 1 - \left[ 1 - \big(R_{ij}(\hat t\,) - p_{ij}(\hat t\,)\, R_{ij}(\hat t\,)\big) \right] \exp\!\left(\frac{-(t - \hat t\,)}{\tau_{\rm rec}}\right), \quad (3)$$

where t̂ is the last spike time and the recovery time constant τ_rec = 200 ms. Assuming Poisson-distributed presynaptic firing, the steady state of the expected resource is

$$R^{\infty}_{ij}(f_j, p_{ij}) = \frac{1}{1 + p_{ij}\, f_j\, \tau_{\rm rec}}. \quad (4)$$

The stationary mean excitatory postsynaptic current (EPSC) is proportional to the presynaptic firing frequency f_j and the activated transmitter p_ij R^∞_ij:

$$\bar I_{ij}(f_j, p_{ij}) \propto f_j\, p_{ij}\, R^{\infty}_{ij}(f_j, p_{ij}). \quad (5)$$

The mean current saturates for high input rates f_j and it also depends on the transmitter release probability p_ij: with a high release probability the function is steeper at low presynaptic frequencies but saturates earlier than for a low release probability.

Figure 1: Short-term synaptic dynamics at high and low transmitter release probability. (a) The estimated transfer function O(f, p) for the cortical cells (Eq. 7) (solid and dashed lines) in comparison with data obtained by the integrate-and-fire model (Eq. 1, circles and asterisks). (b) EPSP trains for a series of presynaptic spikes at intervals of 31 ms (32 Hz). p = 0.55 (0.24) corresponds to adaptation to 1% (50%) contrast (see Section 4).

In order to study contrast adaptation, 30 leaky-integrator neurons are connected fully via excitatory fast-adapting synapses. Each "cortical" leaky-integrator neuron receives its "geniculate" input through 30 synapses. The presynaptic geniculate spike trains are independent Poisson processes. Modeling visual stimulation with a drifting grating, their rates are modulated sinusoidally with a temporal frequency of 2 Hz. The background activity for each individual "geniculate" source is drawn from a Gaussian distribution with a mean of 20 Hz and a standard deviation of 5 Hz. In the model the mean geniculate firing rate (Fig. 2b) and the amplitude of modulation (Fig. 2a) increase with stimulus log contrast according to the experimental data (Kaplan et al. 1987). In the following simulations CRFs are determined according to the protocol of Carandini & Ferster (1997). The CRFs are calculated using an initial adaptation period of 5 s and a subsequent series of interleaved test and re-adaptation stimuli (1 s each).

3 The learning rule

We propose that contrast adaptation in a visual cortical cell is a result of its goal to maximize the amount of information the cell's output conveys about the geniculate input.¹ Following (Bell & Sejnowski 1995) we derive a learning rule for the transmitter release probability p to maximize the mutual information between a cortical cell's input and output.
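The short-term depression model above has a convenient summary for Poisson input: the sketch below uses the standard mean-field steady state of the Tsodyks-Markram resource, R∞ = 1/(1 + p f τ_rec), and the resulting mean EPSC (all units and constants illustrative):

```python
def R_inf(f, p, tau_rec=0.2):
    """Mean-field steady state of the expected synaptic resource for
    Poisson input at rate f (Hz), release probability p, and recovery
    time tau_rec (s): R_inf = 1 / (1 + p * f * tau_rec)."""
    return 1.0 / (1.0 + p * f * tau_rec)

def mean_epsc(f, p, tau_rec=0.2):
    """Stationary mean EPSC up to a constant: I ~ f * p * R_inf."""
    return f * p * R_inf(f, p, tau_rec)
```

With τ_rec = 200 ms every release probability yields the same high-rate ceiling 1/τ_rec, while a larger p gives the steeper low-frequency slope described above: mean_epsc(5, 0.55) ≈ 1.77 versus mean_epsc(5, 0.24) ≈ 0.97.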
Let O(f, p) be the average output firing rate, f the presynaptic firing rate, and p the synaptic transmitter release probability. Maximizing the mutual information is then equivalent to maximizing the entropy of a neuron's output if we assume only additive noise:

$$H[O(f,p)] = -E\big[\ln \operatorname{Prob}(O(f,p))\big] = -E\left[\ln \frac{\operatorname{Prob}(f)}{|\partial O(f,p)/\partial f|}\right] = E\left[\ln \left|\frac{\partial O(f,p)}{\partial f}\right|\right] - E\big[\ln \operatorname{Prob}(f)\big] \quad (6)$$

(In the following all equations apply locally to a synapse between neurons j and i.) In order to derive an analytic expression for the relation between O and f we use the fact that the EPSP amplitude converges to its steady state relatively fast compared to the modulation of the geniculate input to the visual cortex, and that the average firing rates of the presynaptic neurons are approximately similar. Thus we approximate the activation function by

$$O(f, p) \propto S(f)\, p\, R^{\infty}(f, p), \quad (7)$$

where S(f) = f^a/θ^a accounts for the frequency-dependent summation of EPSCs. The parameters a = 1.8 and θ = 15 Hz are determined by fitting O(f, p) to the firing rate of our integrate-and-fire single cell model (see Fig. 1a). The objective function is then maximized by a stochastic gradient ascent learning rule for the release probability p,

$$\tau_{\rm adapt} \frac{\partial p}{\partial t} = \frac{\partial H[O(f,p)]}{\partial p} = \frac{\partial}{\partial p} \ln \left|\frac{\partial O(f,p)}{\partial f}\right|. \quad (8)$$

Evaluating the derivatives we obtain a non-Hebbian learning rule for the transmitter release probability p,

$$\tau_{\rm adapt} \frac{\partial p}{\partial t} = -2\, \tau_{\rm rec}\, f R + \frac{1}{p} + \frac{\tau_{\rm rec}\, f\,(a-1)}{a + \tau_{\rm rec}\, p\, f\,(a-1)}, \quad (9)$$

with the adaptation time constant τ_adapt = 7 s (Ohzawa et al. 1985). This is similar in spirit to the anti-Hebbian learning mechanism for the synaptic strength proposed by Barlow & Foldiak (1989) to explain adaptation phenomena.

¹ A different approach of maximizing mutual information between input and output of a single spiking neuron has been developed by Stemmler & Koch (1999). For non-spiking neurons this strategy has been demonstrated experimentally by, e.g., Laughlin (1994).
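A minimal numerical version of the gradient ascent (8) can be sketched as follows, assuming the power-law summation factor S(f) = (f/θ)^a and taking both derivatives by central differences instead of a closed form; a = 1.8, θ = 15 Hz, τ_rec = 200 ms and τ_adapt = 7 s follow the stated model values, while the step size and finite-difference width are illustrative:

```python
import numpy as np

# Model constants (see text); dt and h below are illustrative choices.
a, theta, tau_rec, tau_adapt = 1.8, 15.0, 0.2, 7.0

def O_rate(f, p):
    """Activation O(f, p) ~ S(f) * p * R_inf(f, p) of Eq. (7),
    assuming S(f) = (f / theta)**a and R_inf = 1/(1 + p*f*tau_rec)."""
    return (f / theta) ** a * p / (1.0 + p * f * tau_rec)

def infomax_step(p, f, dt=0.1, h=1e-4):
    """One Euler step of Eq. (8): tau_adapt * dp/dt = d/dp ln|dO/df|,
    with both derivatives approximated by central differences;
    p is kept inside (0, 1]."""
    dOdf = lambda q: (O_rate(f + h, q) - O_rate(f - h, q)) / (2.0 * h)
    grad = (np.log(abs(dOdf(p + h))) - np.log(abs(dOdf(p - h)))) / (2.0 * h)
    return float(np.clip(p + dt * grad / tau_adapt, 1e-3, 1.0))
```

For this illustrative parameter set the update drives p slowly upward at moderate input rates; in the full model the balance of the three terms of the learning rule sets the contrast-dependent equilibrium value of p.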
Here, the first term is proportional to the presynaptic firing rate f and to the available synaptic resource R, suggesting a presynaptic mechanism for the learning. Because the amplitude of the EPSP is proportional to the available synaptic resource, we could interpret R as an output-related quantity and -2τ_rec f R as an anti-Hebbian learning rule for the "strength of the synapse", i.e. the probability p of the transmitter release. The second term ensures that p is always larger than 0. In the current model setup, for the operating range of the presynaptic geniculate cells, p also stays always less than 1. The third term modulates the adaptation slightly and increases the release probability p most if the input firing rate is close to 20 Hz, i.e. the stimulus contrast is low. Image contrast is related to the standard deviation of the luminance levels normalized by the mean. Because ganglion cells adapt to the mean luminance, contrast adaptation in the primary visual cortex requires only the estimation of the standard deviation. In a free viewing scenario with an eye saccade frequency of 2-3 Hz, the standard deviation can be estimated based on 10-20 image samples. Thus the adaptation rate can be fast (τ_adapt = 7 s), and it should also be fast in order to maintain a good representation whenever visual contrast changes, e.g. by changing light conditions. Higher-order moments (than the standard deviation) of the statistics of the visual world express image structure and are represented by the receptive field profiles. The statistics of the visual environment are relatively static, thus the receptive field profiles should be determined and constrained by another, less plastic synaptic parameter, such as the maximal synaptic conductance g_max.

4 Results

Figure 2 shows the average geniculate input, the membrane potential, the firing rate and the response phase of the modeled cortical cells as a function of stimulus contrast.
The CRFs were calculated for two adapting contrasts, 1% (dashed line) and 50% (solid line). The cortical CRF saturates for high contrast stimuli (Fig. 2e). This is due to the saturation of the postsynaptic current (cf. Fig. 1a) and thus induced by the short-term synaptic depression. In accordance with the experimental data (e.g. Carandini et al. 1997) the delay of the cortical response (Fig. 2f) decreases towards high contrast stimuli. This is a consequence of fast synaptic depression (cf. Chance et al. 1998). High modulation in the input firing rate leads to a fast transient rise in the EPSC followed by a rapid depression.

Figure 2: The DC (a) and the F1 (b) component of the geniculate input, and the response of the cortical units in the model with strong recurrent lateral connections and slow adaptation of the release probability on both the geniculocortical and lateral synapses. The F1 (c) and the DC (d) component of the subthreshold membrane potential of a single cortical unit, the F1 component of the firing rate (e), and the response phase (f) are plotted as a function of stimulus contrast after adaptation to 1% (solid lines) and to 50% (dashed lines) contrast stimuli. The CRF for the membrane potential (c, d) is calculated by integrating Eq. 1 without spikes and without reset after spikes.

The cortical circuitry involves strong recurrent lateral connections.
The model predicts a shift of 3-5 mV in the DC component of the subthreshold membrane potential (Fig. 2d), a smaller amount than measured by Carandini & Ferster (1997). Nevertheless, in accordance with the data, the shift caused by the adaptation is larger than the change in the DC component of the membrane potential from 1% contrast to 100% contrast. The largest shift in the DC membrane potential during adaptation occurs for small contrast stimuli, because an alteration in the transmitter release probability has the largest effect on the postsynaptic current if the presynaptic firing rate is close to the geniculate background activity of 20 Hz. The maximal change in the F1 component (Fig. 2c) is around 5 mV and it is half of the increase in the F1 component of the membrane potential from 1% contrast to 100% contrast. The CRF for the cortical firing rate (Fig. 2e) shifts to the right and the slope decreases after adaptation to high contrast. The model predicts that the probability p for the transmitter release decreases by approximately a factor of two. The F1 component of the cortical firing rate decreases after adaptation because, after a tonic decrease in the input modulated membrane potential, the over-threshold area of its F1 component decreases. The adaptation in the F1 firing rate is fed back via the recurrent excitatory connections, resulting in the observable adaptation in the F1 membrane potential. Without lateral feedback (Fig. 3) the F1 component of the membrane potential is basically independent of the contrast adaptation. At high release probability a steep rise of the EPSC to a high amplitude peak is followed by rapid depression if the input is increasing. At low release probability the current increases slower to a lower amplitude, but the depression is
less pronounced too. As a consequence, the power at the first harmonic (F1 component) of the subthreshold membrane potential does not change if the release probability is modulated. It is modulated to a large extent by the recurrent excitatory feedback. The adaptation of the F1 component of the firing rate could therefore be used to measure the effective strength of the recurrent excitatory input to a simple cell in the primary visual cortex. Additional simulations (data not shown) revealed that changing the transmitter release probability of the geniculocortical synapses is responsible for the adaptation in our model network. Fixing the value of p for the geniculocortical synapses abolishes contrast adaptation, while fixing the release probability p for the lateral synapses has no effect. Simulations show that increasing the release probability of the recurrent excitatory synapses leads to oscillatory activity (e.g. Senn et al. 1996) without altering the mean activity of simple cells.

Figure 3: The membrane potential (a, b), the phase (d) of the F1 component of the firing rate, and the F1 component (c) averaged for the modeled cortical cells after adaptation to 1% (dashed lines) and 50% (solid lines) contrast. The weight of cortical connections is set to zero. The CRF for the membrane potential (a, b) is calculated by integrating Eq. 1 without spikes and without reset after spikes.

Figure 4: Hysteresis curve revealed by following the ramp method protocol (Carandini & Ferster 1997). After adaptation to 1% contrast, test stimuli of 2 s duration were applied with a contrast successively increasing from 1% to 100% (asterisks), and then decreasing back to 1% (circles).
These results suggest an efficient functional segregation of feedforward and recurrent excitatory connections. Plasticity of the geniculocortical connections may play a key role in contrast adaptation, while, without affecting the CRF, plasticity of the recurrent excitatory synapses could play a key role in dynamic feature binding and segregation in the visual cortex (e.g. Engel et al. 1997). Figure 4 shows the averaged CRF of the cortical model neurons revealed by the ramp method (see figure caption) for strong recurrent feedback and adapting feedforward and recurrent synapses. We find hysteresis curves for the F1 component of the firing rate similar to the results reported by Carandini & Ferster (1997), and for the response phase. In summary, by assuming two different dynamics for a single synapse we explain the saturation of the CRFs, the contrast adaptation, and the increase in the delay of the cortical response to low contrast stimuli. For the visual cortex of higher mammals, adaptation of the release probability p as a substrate for contrast adaptation is so far only a hypothesis. This hypothesis, however, is in agreement with the currently available data, and could additionally be justified experimentally by intracellular measurements of EPSPs evoked by stimulating the geniculocortical axons. The model predicts that after adaptation to a low contrast stimulus the amplitude of the EPSPs decreases steeply from a high value, while it shows only small changes after adaptation to a high contrast stimulus (cf. Fig. 1b).

Acknowledgments

The authors are grateful to Christian Piepenbrock for fruitful discussions. Funded by the German Science Foundation (Ob 102/2-1, GK 120-2).

References

Ahmed, B., Allison, J. D., Douglas, R. J. & Martin, K. A. C. (1997), 'Intracellular study of the contrast-dependence of neuronal activity in cat visual cortex', Cerebral Cortex 7, 559-570.

Barlow, H. B. & Földiák, P.
(1989), Adaptation and decorrelation in the cortex, in R. Durbin, C. Miall & G. Mitchison, eds, 'The computing neuron', Workingham: Addison-Wesley, pp. 54-72.

Bell, A. J. & Sejnowski, T. J. (1995), 'An information-maximization approach to blind separation and blind deconvolution', Neur. Comput. 7(6), 1129-1159.

Carandini, M. & Ferster, D. (1997), 'A tonic hyperpolarization underlying contrast adaptation in cat visual cortex', Science 276, 949-952.

Carandini, M., Heeger, D. J. & Movshon, J. A. (1997), 'Linearity and normalization in simple cells of the macaque primary visual cortex', J. Neurosci. 17, 8621-8644.

Carandini, M., Movshon, J. A. & Ferster, D. (1998), 'Pattern adaptation and cross-orientation interactions in the primary visual cortex', Neuropharmacology 37, 501-511.

Chance, F. S., Nelson, S. B. & Abbott, L. F. (1998), 'Synaptic depression and the temporal response characteristics of V1 cells', J. Neurosci. 18, 4785-4799.

Engel, A. K., Roelfsema, P. R., Fries, P., Brecht, M. & Singer, W. (1997), 'Role of the temporal domain for response selection and perceptual binding', Cerebral Cortex 7, 571-582.

Finlayson, P. G. & Cynader, M. S. (1995), 'Synaptic depression in visual cortex tissue slices: an in vitro model for cortical neuron adaptation', Exp. Brain Res. 106, 145-155.

Kaplan, E., Purpura, K. & Shapley, R. M. (1987), 'Contrast affects the transmission of visual information through the mammalian lateral geniculate nucleus', J. Physiol. 391, 267-288.

Laughlin, S. B. (1994), 'Matching coding, circuits, cells, and molecules to signals: general principles of retinal design in the fly's eye', Prog. Ret. Eye Res. 13, 165-196.

McLean, J. & Palmer, L. A. (1996), 'Contrast adaptation and excitatory amino acid receptors in cat striate cortex', Vis. Neurosci. 13, 1069-1087.

Ohzawa, I., Sclar, G. & Freeman, R. D. (1985), 'Contrast gain control in the cat's visual system', J. Neurophysiol. 54, 651-667.

Saul, A. B.
(1995), 'Adaptation in single units in visual cortex: response timing is retarded by adapting', Vis. Neurosci. 12, 191-205.

Senn, W., Wyler, K., Streit, J., Larkum, M., Lüscher, H.-R., H. Mey, L. M. a. D. S., Vogt, K. & Wannier, T. (1996), 'Dynamics of a random neural network with synaptic depression', Neural Networks 9, 575-588.

Stemmler, M. & Koch, C. (1999), Information maximization in single neurons, in 'Advances in Neural Information Processing Systems NIPS 11', same volume.

Tsodyks, M. V. & Markram, H. (1997), 'The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability', Proc. Natl. Acad. Sci. 94, 719-723.

Vidyasagar, T. R. (1990), 'Pattern adaptation in cat visual cortex is a co-operative phenomenon', Neurosci. 36, 175-179.
|
1998
|
38
|
1,535
|
Optimizing Correlation Algorithms for Hardware-based Transient Classification

R. Timothy Edwards1, Gert Cauwenberghs1, and Fernando J. Pineda2
1 Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218
2 Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723
e-mail: {tim, gert, fernando}@bach.ece.jhu.edu

Abstract

The performance of dedicated VLSI neural processing hardware depends critically on the design of the implemented algorithms. We have previously proposed an algorithm for acoustic transient classification [1]. Having implemented and demonstrated this algorithm in a mixed-mode architecture, we now investigate variants on the algorithm, using time and frequency channel differencing, input and output normalization, and schemes to binarize and train the template values, with the goal of achieving optimal classification performance for the chosen hardware.

1 Introduction

At the NIPS conference in 1996 [1], we introduced an algorithm for classifying acoustic transient signals using template correlation. While many pattern classification systems use template correlation [2], our system differs in directly addressing the issue of efficient implementation in analog hardware, to overcome the area and power consumption drawbacks of equivalent digital systems. In the intervening two years, we have developed analog circuits and built VLSI hardware implementing both the template correlation and the frontend acoustic processing necessary to map the transient signal into a time-frequency representation corresponding to the template [3, 4]. In the course of hardware development, we have been led to reevaluate the algorithm in the light of the possibilities and the limitations of the chosen hardware. The general architecture is depicted in Figure 1(a), and excellent agreement between simulations and experimental output from a prototype is illustrated in Figure 1(b).
Issues of implementation efficiency and circuit technology aside, this paper specifically addresses further improvements in classification performance achievable by algorithmic modifications, tailored to the constraints and strengths of the implementation medium.

Figure 1: (a) System architecture of the acoustic transient classifier. (b) Demonstration of accurate computation in the analog correlator on a transient classification task.

2 The transient classification algorithm

The core of our architecture performs the running correlation between an acoustic input and a set of templates for distinguishing between Z distinct classes. A simple template correlation equation for the acoustic transient classification can be written:

c_z[t] = K_z \sum_{m=1}^{M} \sum_{n=1}^{N} x[t - n, m] \, p_z[n, m]    (1)

where M is the number of frequency channels of the input, N is the maximum number of time bins in the window, and x is the array of input signals representing the energy content in each of the M bandpass frequency channels. The inputs x are normalized across channels using an L-1 normalization so that the correlation is less affected by volume changes in the input. The matrix p_z contains the template pattern values for pattern z out of a total of Z classes; K_z is a constant gain coefficient for class z, and t is the current time. This formula produces a running correlation c_z[t] of the input array with the template for class z. A signal is classified as belonging to class z when the output c_z exceeds the output for all other classes at a point in time t determined by simple segmentation of the input.
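A direct software transcription of the running correlation of Eq. (1) may clarify the indexing; the array shapes and function names here are illustrative, not the hardware implementation:

```python
import numpy as np

# Running correlation c_z[t] = K_z * sum_{m=1..M} sum_{n=1..N} x[t-n, m] * p_z[n, m].

def running_correlation(x, templates, gains):
    """x: (T, M) input energies; templates: (Z, N, M); gains: (Z,) -> c: (T, Z)."""
    T, M = x.shape
    Z, N, _ = templates.shape
    c = np.zeros((T, Z))
    for t in range(N, T):
        window = x[t - N:t][::-1]          # rows x[t-1], ..., x[t-N], i.e. n = 1..N
        for z in range(Z):
            c[t, z] = gains[z] * np.sum(window * templates[z])
    return c
```

The explicit loops mirror the shift-and-accumulate structure of the correlator array; a production implementation would vectorize or stream this computation.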
To train and evaluate the system, we used a database of 22 recorded samples of 10 different classes of "everyday" transients such as the sounds made by aluminum cans, plastic tubs, handclaps, and the like. Each example transient recording was processed through a thirty-two channel constant-Q analog cochlear filter with output taps spaced on a logarithmic frequency scale [6]. For the simulations, the frontend system outputs were sampled and saved to disk, then digitally rectified and smoothed with a lowpass filter function with a 2 ms time constant. These thirty-two channel outputs representing short-term average energy in each frequency band were decimated to 500 Hz and normalized with the function

x[t, m] = y[t, m] \Big/ \sum_{k=1}^{M+1} y[t, k],    (2)

where y[t, M+1] is a constant-valued input added to the system in order to suppress noise in the normalized outputs during periods of silence. The additional output x[t, M+1] becomes maximum during the periods of silence and minimum during presentation of a transient event. This extra output can be used to detect onsets of transients, but is not used in the correlation computation of equation (1). Template values p_z are learned by automatically aligning all examples of the same class in the training set using a threshold on the normalization output x[t, M+1], and averaging the values together over N samples, starting a few samples before the point of alignment. Class outputs are normalized relative to one another by multiplying each output by a gain factor K_z, computed from the template values using the L-2 norm function

K_z = \sum_{m=1}^{M} \sum_{n=1}^{N} p_z[n, m]^2.    (3)

We evaluated the accuracy of the system with a cross-validation loop in which we train the system on all of the database except one example of one class, then test on that remaining example, repeating the test for each of the 220 examples in the database. The baseline algorithm gives a classification accuracy of 96.4%.
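The L-1 normalization of Eq. (2), including the constant silence channel y[t, M+1], can be sketched as follows; the value of the constant (1.0 here) is an assumption for illustration:

```python
import numpy as np

# L-1 normalization with an extra constant channel appended as column M+1.

def l1_normalize(y, silence_const=1.0):
    """y: (T, M) channel energies -> x: (T, M+1) rows summing to 1."""
    T, M = y.shape
    y_aug = np.hstack([y, np.full((T, 1), silence_const)])
    return y_aug / y_aug.sum(axis=1, keepdims=True)
```

During silence the appended column dominates each row, which is what makes it usable as an onset detector while keeping the other channels from amplifying low-level noise.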
3 Single-bit template values

A major consideration for hardware implementations (both digital and analog) is the memory storage required by the templates, one of which is required for each class. Minimal storage space in terms of bits per template is practical only if the algorithm can be proved to perform acceptably well under decreased levels of quantization of the template values. At one bit per template location (i.e., M x N bits per template), the complexity of the hardware is greatly simplified, but it is no longer obvious what method is best to use for learning the template values, or for calculating the per-class gains. The choice of the method is guided by knowledge about the acoustic transients themselves, and simulation to evaluate its effect on the accuracy of a typical classification task.

4 Simulations of different zero-mean representations

One bit per template value is a desirable goal, but realizing this goal requires reevaluating the original correlation equation. The input values to be correlated represent band-limited energy spectra, and range from zero to some maximum determined by the L-1 normalization. To determine the value of a template bit, the averaged value over all examples of the class in the training set must be compared to a threshold (which itself must be determined), or else the input itself must be transformed into a form with zero average mean value. In the latter method, the template value is determined by the sign of the transformed input, averaged over all examples of the class in the training set. The obvious transformations of the input which provide a vector of zero-mean signals to the correlator are the time derivative of each input channel, and the difference between neighboring channels. Certain variations of these are possible, such as a center-surround computation of channel differences, and zero-mean combinations of time and channel differences.
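For the zero-mean route, a one-bit template can be obtained by averaging the aligned training examples in a zero-mean representation and keeping only the sign; the following sketch uses channel differences, with illustrative shapes and names:

```python
import numpy as np

# Derive a binary (1, 0) template from aligned example windows of one class.

def binary_template(examples):
    """examples: (K, N, M) aligned windows -> (N, M-1) binary template."""
    diffs = examples[:, :, 1:] - examples[:, :, :-1]   # zero-mean channel differences
    return (diffs.mean(axis=0) > 0).astype(np.uint8)   # 1 where the mean is positive
```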
While there is evidence that center-surround mechanisms are common to neurobiological signal processing of various sensory modalities in the brain, including processing in the mammalian auditory cortex [5], time derivatives of the input are also plausible in light of the short time base of acoustic transient events. Indeed, there is no reason to assume a priori that channel differences are even meaningful on the time scale of transients. Table 1 shows simulation results, where classification accuracy on the cross-validation test is given for different combinations of continuous-valued and binary inputs and templates, and different zero-mean transformations of the input.

Table 1: Simulation results with different architectures.

Method | Both Cont. | Both Binary (1, -1) | Both Binary (1, 0) | Cont. Input, Binary (1, -1) Template | Cont. Input, Binary (1, 0) Template
One-to-One | 96.40% | - | - | - | -
Time Difference | 85.59% | 65.32% | 59.46% | 82.43% | 81.98%
Channel Difference | 90.54% | 53.60% | 95.05% | 94.59% | 94.14%
Center-Surround | 92.79% | 53.60% | 95.05% | 92.34% | 92.34%

There are several significant points to the results of these classification tasks. The first is to note that in spite of the fact that acoustic transient events are short-term and the time steps between the bins in the template are as low as 2 ms, using time differences between samples does not yield reliable classification when either the input or the template or both is reduced to binary form. However, reliability remains high when the correlation is performed using channel differences. The implication is that even the shortest transient events have stable and reliable structure in the frequency domain, a somewhat surprising conclusion given the impulsive nature of most transients.
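The three zero-mean transformations compared in Table 1 can be sketched for an input array x of shape (T, M) (time steps by frequency channels):

```python
import numpy as np

def time_difference(x):
    """Difference between consecutive time samples in each channel."""
    return x[1:] - x[:-1]                          # shape (T-1, M)

def channel_difference(x):
    """Pairwise difference between neighboring frequency channels."""
    return x[:, 1:] - x[:, :-1]                    # shape (T, M-1)

def center_surround(x):
    """Twice the channel value minus the values of its two neighbors."""
    return 2 * x[:, 1:-1] - x[:, :-2] - x[:, 2:]   # shape (T, M-2)
```

Note that center-surround loses one more channel of information than the pairwise difference, consistent with the slight accuracy drop discussed below.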
Another interesting point is that we observe no significant difference between the use of pairwise channel differences and the more complicated center-surround mechanism (twice the channel value minus the value of the two neighboring channels). The slight decrease in accuracy for the center-surround in some instances is most likely due only to the fact that one less channel contributes information to the correlator than in the pairwise channel difference computation. When accuracy is constant, a hardware implementation will always prefer the simpler mechanism. Very little difference in accuracy is seen between the use of a binary (1, -1) representation and a binary (1, 0) representation, in spite of the fact that all zero-valued template positions do not contribute to the correlation output. This lack of difference is a result of the choice of the L-1 normalization across the input vector, which ensures that the part of the correlation due to positive template values is roughly the same magnitude as that due to negative template values, leading to a redundant representation which can be removed without affecting classification results. In analog hardware, particularly current-mode circuits, the (1, 0) template representation is much simpler to implement. Time differencing of the input can be efficiently realized in analog hardware by commuting the time-difference calculation to the end of the correlation computation and implementing it with a simple switch-capacitor circuit. Taking differences between input channel values, on the other hand, is not so easily reduced to a simple hardware form. To find a reasonable solution, we simulated a number of different combinations of channel differencing and binarization. Table 2 shows a few examples. The first row is our standard implementation of channel differences using binary (1, 0) templates and continuous-valued input.
The drawback of this method in analog hardware is the matching between negative and positive parts of the correlation sum. We found two ways to get around this problem without greatly compromising the system performance: The first, shown in the second row of Table 2, is to add to the correlation sum only if the channel difference is positive and the template value is 1 (one-quadrant multiplication). Another (shown in the last row) is to add the maximum of each pair of channels if the template value is 1, which is preferable in that it uses the input values directly and does not require computing a difference at all. Unfortunately, it also adds a large component to the output which is related only to the total energy of the input and therefore is common to all class outputs, reducing the dynamic range of the system.

Table 2: Simulation results for different methods of computing channel differences.

method | accuracy
channel difference | 94.14%
one-quadrant multiply | 92.34%
maximum channel | 93.69%

5 Optimization of the classifier using per-class gains

The per-class gain values K_z in equation (1) are optimal for the baseline algorithm when using the L-2 normalization. The same normalization applied to the binary templates (when the template value is assumed to be either +1 or -1) yields the same K_z value for all classes. This unity gain on all class outputs is assumed in all the simulations of the previous section. A careful evaluation of errors from several runs indicated the possibility that different gains on each channel could improve recognition rates, and simple experiments with values tweaked by hand proved this suspicion to be true. To automate the process of gain optimization, we consider the templates, as determined by averaging together examples of each class in the training set, to be fixed.
Then we compute the correlation between each template and the aligned, averaged inputs for each class which were used to generate the templates. The result is a Z x Z matrix, which we denote C, of expected values for the correlation between a typical example of a transient input and the template for its own class (diagonal elements C_ii) and the templates for all other classes (off-diagonal elements C_ij, i \neq j). Each column of C is like the correlator outputs on which we make a classification decision by choosing the maximum. Therefore we wish to maximize C_ii with respect to all other elements in the same column. The only degree of freedom for adjusting these values is to multiply the correlation output of each template z by a constant coefficient K_z. This corresponds to multiplying each row of C by K_z. This per-class gain mechanism is easily transferred to the analog hardware domain. In the case of continuous-valued templates, an optimal solution can be directly evaluated and yields the L-2 normalization. However, for all binary forms of the template and/or the input, direct evaluation is impossible and the solution must be found by choosing an error function E to minimize or maximize. The error function must assign a large error to any off-diagonal element in a column that approaches or exceeds the diagonal element in that column, but must not force the cross-correlations to arbitrarily low negative values. A minimizing function that fits this description is

E = \sum_i \sum_{j \neq i} \exp(K_j C_{ji} - K_i C_{ii}).    (4)

This function unfortunately has no closed-form solution for the coefficients K_i, which must be determined numerically using Newton-Raphson or some other iterative method. Improvements in the recognition rates of the classification task using this optimization of per-class gains are shown in Table 3, where we have considered only the case of inputs and templates encoding channel differences.
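The gain optimization of Eq. (4) can be sketched numerically. The text uses Newton-Raphson or another iterative method; plain gradient descent on E is used here for brevity, with the learning rate and step count as illustrative assumptions:

```python
import numpy as np

# Minimize E = sum_i sum_{j != i} exp(K_j * C[j, i] - K_i * C[i, i]) over the
# per-class gains K by gradient descent (a simple stand-in for Newton-Raphson).

def optimize_gains(C, lr=0.01, steps=2000):
    Z = C.shape[0]
    K = np.ones(Z)
    for _ in range(steps):
        grad = np.zeros(Z)
        for i in range(Z):
            for j in range(Z):
                if i == j:
                    continue
                e = np.exp(K[j] * C[j, i] - K[i] * C[i, i])
                grad[j] += e * C[j, i]      # dE/dK_j from the (i, j) term
                grad[i] -= e * C[i, i]      # dE/dK_i from the (i, j) term
        K -= lr * grad
    return K
```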
Although the database is small, the gains of 2 to 4% for the quantized cases are significant. For this particular simulation we used a different type of frontend section to verify that the performance of the correlation algorithm was not linked to a specific frontend architecture. To generate these performance values, we used sixteen channels with the inputs digitally processed through a constant-Q bandpass filter having a Q of 5.0 and with center frequencies spaced on a mel scale from 100 Hz to 4500 Hz. The bandpass filtering was followed by rectification and smoothing with a lowpass filter function with a cutoff frequency scaled logarithmically across channels, from 60 Hz to 600 Hz. The channel output data were decimated to a 500 Hz rate. Half of the database was used to train the system, and half used to test. Performance is similar to that reported in the previous section in spite of the fact that the number of channels was cut in half, and the number of training examples was also cut in half. Slight gains in performance are most likely due to the cleaner digital filtering of the recorded data.

Table 3: System accuracy with and without per-class normalization.

binarization | accuracy, optimized | accuracy, non-optimized
none | 100% | 100%
template only | 93% | 91%
template & input | 95% | 91%

6 System Robustness

We performed several additional experiments in addition to those covered in the previous sections. One of these was an evaluation of recognition accuracy as a function of the template length N (number of time bins), to determine what is a proper size for the templates. The result is shown in Figure 2(a). This curve reaches a reliable maximum at about 50 time bins, from which our chosen size for the hardware implementation of 64 bins provides a safe margin of error.
However, it is interesting to note that recognition accuracy does not drop to that of random chance until only two time bins are used (64 bits per template), and accuracy is nearly 50% with only 3 time bins (96 bits per template).

Figure 2: (a) Effect of decreasing the number of time-bins. (b) Effect of white noise added to the correlator inputs.

We made one evaluation of the robustness of the algorithm in the presence of noise by introducing additional white noise at the correlator inputs. The graph of Figure 2(b) shows that accuracy remains high until the signal-to-noise ratio is roughly 0 dB. An interesting question to ask about the L-1 normalization at the frontend is how the added constant normalization channel (y[t, M+1]) affects the classification performance. If this channel is omitted, then the total instantaneous value of all outputs must equal the same value, even during periods of silence, in which low-level noise gets amplified. The nominal value of this channel was chosen to match the levels of noise in the transient recordings. For one of the cases of Table 1 (real input, binary (1, 0) template, channel differencing at the input), we tried two other tests, one with the normalization constant doubled, and one with it omitted (zero). Doubling the normalization constant had no effect on the error rate, while omitting it caused the accuracy to drop only from 94.1% to 92.3%. The conclusion is that for large templates, random noise has a low probability of producing a spurious positive correlation that would be classified as a transient. The classification algorithm is not largely dependent on input signal normalization.
7 Conclusions

Starting from a template correlation architecture for acoustic transient classification targeted for high-density, low-power analog VLSI implementation, we have investigated several variants on the correlation algorithms, accounting for the strengths and constraints of the VLSI implementation medium while maintaining acceptable classification performance. Reduction of input and templates to binary form does not significantly affect performance, as long as they are transformed to encode the difference in neighboring channels of the original filterbank frontend outputs. This suggests that acoustic transient classification is not only amenable to implementation in simple analog hardware, but also in reasonably simple digital hardware. In looking for zero-mean representations of the input compatible with a binary template, we found that computing pairwise differences between channels gives a more robust representation than a time-differential form, as was reported previously in [1]. We have found that computing a center-surround function of the inputs yields virtually the same results as taking pairwise channel differences. Where hardware implementation is the goal, the pairwise difference function is preferred due to its greater simplicity. We have additionally shown that cross-correlations between aligned, averaged inputs and templates can be used with an iterative method to solve for optimal gain coefficients per class output, which yield better classification performance. This is a method which can be applied in general to all template correlation systems.

References

[1] F. J. Pineda, G. Cauwenberghs, R. T. Edwards, "Bangs, Clicks, Snaps, Thuds, and Whacks: An Architecture for Acoustic Transient Processing," Neural Information Processing Systems (NIPS), Denver, 1996.

[2] K. P. Unnikrishnan, J. J. Hopfield, and D. W.
Tank, "Connected-Digit Speaker-Dependent Speech Recognition Using a Neural Network with Time-Delayed Connections," IEEE Transactions on Signal Processing, 39, pp. 698-713, 1991.

[3] R. T. Edwards, G. Cauwenberghs, and F. J. Pineda, "A Mixed-Signal Correlator for Acoustic Transient Classification," International Symposium on Circuits and Systems (ISCAS), Hong Kong, June 1997.

[4] R. T. Edwards and G. Cauwenberghs, "A Second-Order Log-Domain Bandpass Filter for Audio Frequency Applications," International Symposium on Circuits and Systems (ISCAS), Monterey, CA, June 1998.

[5] K. Wang and S. Shamma, "Representation of Acoustic Signals in the Primary Auditory Cortex," IEEE Trans. Audio and Speech Processing, 3(5), pp. 382-395, 1995.

[6] F. J. Pineda, K. Ryals, D. Steigerwald, and P. Furth, "Acoustic Transient Processing using the Hopkins Electronic Ear," World Conference on Neural Networks, Washington, D.C., 1995.
|
1998
|
39
|
1,536
|
Convergence of The Wake-Sleep Algorithm

Shiro Ikeda
PRESTO, JST
Wako, Saitama, 351-0198, Japan
shiro@brain.riken.go.jp

Shun-ichi Amari
RIKEN Brain Science Institute
Wako, Saitama, 351-0198, Japan
amari@brain.riken.go.jp

Hiroyuki Nakahara
RIKEN Brain Science Institute
hiro@brain.riken.go.jp

Abstract

The W-S (Wake-Sleep) algorithm is a simple learning rule for the models with hidden variables. It is shown that this algorithm can be applied to a factor analysis model which is a linear version of the Helmholtz machine. But even for a factor analysis model, the general convergence is not proved theoretically. In this article, we describe the geometrical understanding of the W-S algorithm in contrast with the EM (Expectation-Maximization) algorithm and the em algorithm. As the result, we prove the convergence of the W-S algorithm for the factor analysis model. We also show the condition for the convergence in general models.

1 INTRODUCTION

The W-S algorithm [5] is a simple Hebbian learning algorithm. Neal and Dayan applied the W-S algorithm to a factor analysis model [7]. This model can be seen as a linear version of the Helmholtz machine [3]. As it is mentioned in [7], the convergence of the W-S algorithm has not been proved theoretically even for this simple model. From the similarity of the W-S and the EM algorithms and also from empirical results, the W-S algorithm seems to work for a factor analysis model. But there is an essential difference between the W-S and the EM algorithms. In this article, we show the em algorithm [2], which is the information geometrical version of the EM algorithm, and describe the essential difference. From the result, we show that we cannot rely on the similarity for the reason of the W-S algorithm to work. However, even with this difference, the W-S algorithm works on the factor analysis model and we can prove it theoretically. We show the proof and also show the condition of the W-S algorithm to work in general models.
2 FACTOR ANALYSIS MODEL AND THE W-S ALGORITHM A factor analysis model with a single factor is defined by the following generative model: $x = \mu + yg + \epsilon$, where $x = (x_1, \ldots, x_n)^T$ is an $n$-dimensional real-valued visible input, $y \sim N(0, 1)$ is the single invisible factor, $g$ is a vector of "factor loadings", $\mu$ is the overall mean vector, which is set to zero in this article, and $\epsilon \sim N(0, \Sigma)$ is the noise with a diagonal covariance matrix $\Sigma = \mathrm{diag}(\sigma_i^2)$. In a Helmholtz machine, this generative model is accompanied by a recognition model defined as: $y = r^T x + \delta$, where $r$ is the vector of recognition weights and $\delta \sim N(0, s^2)$ is the noise. When data $x_1, \ldots, x_N$ are given, we want to estimate the MLE (Maximum Likelihood Estimator) of $g$ and $\Sigma$. The W-S algorithm can be applied[7] for learning this model. Wake-phase: From the training set $\{x_s\}$ choose a number of $x$ randomly, and for each datum generate $y$ according to the recognition model $y = r_t^T x + \delta$, $\delta \sim N(0, s_t^2)$. Update $g$ and $\Sigma$ as follows using these $x$'s and $y$'s: $g_{t+1} = g_t + \alpha \overline{(x - g_t y) y}$ (1), $\sigma_{i,t+1}^2 = \beta \sigma_{i,t}^2 + (1 - \beta) \overline{(x_i - g_{i,t} y)^2}$ (2), where $\alpha$ is a small positive number, $\beta$ is slightly less than 1, and $\overline{\,\cdot\,}$ denotes averaging over the chosen data. Sleep-phase: According to the updated generative model $x = y g_{t+1} + \epsilon$, $y \sim N(0, 1)$, $\epsilon \sim N(0, \mathrm{diag}(\sigma_{t+1}^2))$, generate a number of $x$ and $y$, and update $r$ and $s^2$ as: $r_{t+1} = r_t + \alpha \overline{(y - r_t^T x) x}$ (3), $s_{t+1}^2 = \beta s_t^2 + (1 - \beta) \overline{(y - r_t^T x)^2}$ (4). By iterating these phases, one tries to find the MLE as the converged point. For the following discussion, let us define two probability densities $p$ and $q$, where $p$ is the density of the generative model and $q$ is that of the recognition model.
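The wake and sleep updates (1)-(4) are easy to simulate. Below is a minimal sketch, not the authors' code: the data are drawn from a hypothetical generative model with loadings and noise levels chosen by us, and single samples stand in for the batch averages.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 5000
g_true = np.array([1.0, -0.5, 2.0])
var_true = np.array([0.1, 0.2, 0.1])

# Training data from the generative model x = y*g + eps (mu = 0).
y = rng.standard_normal(N)
X = y[:, None] * g_true + rng.standard_normal((N, n)) * np.sqrt(var_true)

g, sigma2 = rng.standard_normal(n) * 0.1, np.ones(n)   # generative params
r, s2 = np.zeros(n), 1.0                               # recognition params
alpha, beta = 0.01, 0.99

for t in range(5000):
    # Wake phase: sample y from the recognition model, update g, sigma2.
    x = X[rng.integers(N)]
    yw = r @ x + np.sqrt(s2) * rng.standard_normal()
    g = g + alpha * (x - g * yw) * yw                        # eq. (1)
    sigma2 = beta * sigma2 + (1 - beta) * (x - g * yw) ** 2  # eq. (2)

    # Sleep phase: dream (x, y) from the generative model, update r, s2.
    ys = rng.standard_normal()
    xs = ys * g + rng.standard_normal(n) * np.sqrt(sigma2)
    r = r + alpha * (ys - r @ xs) * xs                       # eq. (3)
    s2 = beta * s2 + (1 - beta) * (ys - r @ xs) ** 2         # eq. (4)
```

Note that, as in the paper, the sleep-phase regression target is the dreamed factor, so the sign of the recovered loadings is arbitrary.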
Let $\theta = (g, \Sigma)$; the generative model gives the density function of $x$ and $y$ as $p(y, x; \theta) = \exp\left(-\frac{1}{2}(y\ x^T) A \binom{y}{x} - \psi(\theta)\right)$ (5), where $A = \begin{pmatrix} 1 + g^T \Sigma^{-1} g & -g^T \Sigma^{-1} \\ -\Sigma^{-1} g & \Sigma^{-1} \end{pmatrix}$ and $\psi(\theta) = \frac{1}{2}\left(\sum_i \log \sigma_i^2 + (n+1) \log 2\pi\right)$, while the recognition model gives the distribution of $y$ conditional on $x$ as $q(y|x; \eta) \sim N(r^T x, s^2)$, where $\eta = (r, s^2)$. From the data $x_1, \ldots, x_N$, we define $C = \frac{1}{N} \sum_{s=1}^N x_s x_s^T$ and $q(x) \sim N(0, C)$. With this $q(x)$, we define $q(y, x; \eta)$ as $q(y, x; \eta) = q(x)\, q(y|x; \eta) = \exp\left(-\frac{1}{2}(y\ x^T) B \binom{y}{x} - \psi(\eta)\right)$ (6), where $B = \begin{pmatrix} 1/s^2 & -r^T/s^2 \\ -r/s^2 & C^{-1} + r r^T/s^2 \end{pmatrix}$ and $\psi(\eta) = \frac{1}{2}\left(\log s^2 + \log |C| + (n+1) \log 2\pi\right)$. 3 THE EM AND THE em ALGORITHMS FOR A FACTOR ANALYSIS MODEL It has been mentioned that the W-S algorithm is similar to the EM algorithm[4] ([5][7]), but there is an essential difference between them. In this section we first show the EM algorithm; we also describe the em algorithm[2], which gives the information-geometrical understanding of the EM algorithm. With these results, we will show the difference between the W-S and EM algorithms in the next section. The EM algorithm consists of the following two steps. E-step: define $Q(\theta, \theta_t) = \frac{1}{N} \sum_{s=1}^N E_{p(y|x_s; \theta_t)}[\log p(y, x_s; \theta)]$. M-step: update $\theta$ as $\theta_{t+1} = \operatorname{argmax}_\theta Q(\theta, \theta_t)$, which gives $g_{t+1} = C \Sigma_t^{-1} g_t \Big/ \left(1 + \frac{g_t^T \Sigma_t^{-1} C \Sigma_t^{-1} g_t}{1 + g_t^T \Sigma_t^{-1} g_t}\right)$ and $\Sigma_{t+1} = \mathrm{diag}\left(C - g_{t+1} \frac{g_t^T \Sigma_t^{-1} C}{1 + g_t^T \Sigma_t^{-1} g_t}\right)$ (7). Here $E_p[\cdot]$ denotes the average with respect to the probability distribution $p$. Iterating these two steps converges to the MLE. The EM algorithm uses only the generative model, but the em algorithm[2] also uses the recognition model. The em algorithm consists of e- and m-steps, defined as the e- and m-projections[1] between two manifolds $M$ and $D$, which are defined as follows. Model manifold $M$: $M = \{p(y, x; \theta) \mid \theta = (g, \mathrm{diag}(\sigma_i^2)),\ g \in R^n,\ 0 < \sigma_i < \infty\}$.
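Update (7) can be checked numerically. Below is a small sketch (ours, not the paper's code) of one EM iteration, written via the conditional moments $E[y|x] = b^T x$ and $\mathrm{Var}[y|x]$ of the generative model; iterating it on a covariance $C$ that exactly factors as $gg^T + \Sigma$ recovers that factorization.

```python
import numpy as np

def em_step(C, g, sigma2):
    """One EM iteration (update (7)) for the single-factor model.

    C: n x n data covariance; g: factor loadings; sigma2: diagonal noise.
    """
    b = (g / sigma2) / (1.0 + g @ (g / sigma2))   # E[y|x] = b^T x
    s2 = 1.0 / (1.0 + g @ (g / sigma2))           # Var[y|x]
    g_new = C @ b / (s2 + b @ C @ b)
    sigma2_new = np.diag(C - np.outer(g_new, b) @ C)
    return g_new, sigma2_new

# Iterate on a covariance built from known parameters (our toy choice).
g_true, var_true = np.array([1.0, -0.5, 2.0]), np.array([0.1, 0.2, 0.1])
C = np.outer(g_true, g_true) + np.diag(var_true)
g, sigma2 = np.array([0.5, 0.5, 0.5]), np.ones(3)
for _ in range(1000):
    g, sigma2 = em_step(C, g, sigma2)
```

Since the likelihood depends on $g$ only through $gg^T$, the sign of the recovered loadings is arbitrary; the fitted covariance $gg^T + \Sigma$ is what converges to $C$.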
Data manifold $D$: $D = \{q(y, x; \eta) \mid \eta = (r, s^2),\ r \in R^n,\ 0 < s < \infty\}$. Here $q(x)$ includes the matrix $C$, which is defined by the data, and this is why $D$ is called the "data manifold". Figure 1: Information geometrical understanding of the em algorithm. Figure 1 schematically shows the em algorithm. It consists of two steps, the e-step and the m-step; on each step, the parameters of the recognition and generative models are updated respectively. e-step: update $\eta$ as the e-projection of $p(y, x; \theta_t)$ onto $D$, $\eta_{t+1} = \operatorname{argmin}_\eta KL(q(\eta), p(\theta_t))$ (8), which gives $r_{t+1} = \frac{\Sigma_t^{-1} g_t}{1 + g_t^T \Sigma_t^{-1} g_t}$, $s_{t+1}^2 = \frac{1}{1 + g_t^T \Sigma_t^{-1} g_t}$ (9), where $KL(q(\eta), p(\theta))$ is the Kullback-Leibler divergence defined as $KL(q(\eta), p(\theta)) = E_{q(y,x;\eta)}\left[\log \frac{q(y, x; \eta)}{p(y, x; \theta)}\right]$. m-step: update $\theta$ as the m-projection of $q(y, x; \eta_{t+1})$ onto $M$, $\theta_{t+1} = \operatorname{argmin}_\theta KL(q(\eta_{t+1}), p(\theta))$ (10), which gives $g_{t+1} = \frac{C r_{t+1}}{s_{t+1}^2 + r_{t+1}^T C r_{t+1}}$, $\Sigma_{t+1} = \mathrm{diag}\left(C - g_{t+1} r_{t+1}^T C\right)$ (11). By substituting (9) for $r_{t+1}$ and $s_{t+1}^2$ in (11), it is easily proved that (11) is equivalent to (7), so the em and EM algorithms are equivalent. 4 THE DIFFERENCE BETWEEN THE W-S AND THE EM ALGORITHMS The wake-phase corresponds to a gradient flow of the M-step[7] in the stochastic sense, but the sleep-phase is not a gradient flow of the E-step. To see this clearly, we give the details of the W-S phases in this section. First, the averages of (1), (2), (3) and (4) are: $g_{t+1} = g_t - \alpha (s_t^2 + r_t^T C r_t)\left(g_t - \frac{C r_t}{s_t^2 + r_t^T C r_t}\right)$ (12), $\Sigma_{t+1} = \Sigma_t - (1 - \beta)\left(\Sigma_t - \mathrm{diag}\left(C - 2 (C r_t) g_t^T + (s_t^2 + r_t^T C r_t) g_t g_t^T\right)\right)$ (13), $r_{t+1} = r_t - \alpha (\Sigma_{t+1} + g_{t+1} g_{t+1}^T)\left(r_t - \frac{\Sigma_{t+1}^{-1} g_{t+1}}{1 + g_{t+1}^T \Sigma_{t+1}^{-1} g_{t+1}}\right)$ (14), $s_{t+1}^2 = s_t^2 - (1 - \beta)\left(s_t^2 - \left((1 - g_{t+1}^T r_t)^2 + r_t^T \Sigma_{t+1} r_t\right)\right)$ (15).
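The two projections (9) and (11) can be written down directly, and one can verify numerically that each projection step lowers the divergence $KL(q(\eta), p(\theta))$ between the two joint Gaussians. A self-contained sketch (our illustration, with an arbitrary covariance $C$):

```python
import numpy as np

def cov_p(g, sigma2):
    """Covariance of (y, x) under the generative model p(y, x; theta)."""
    return np.block([[np.ones((1, 1)), g[None, :]],
                     [g[:, None], np.outer(g, g) + np.diag(sigma2)]])

def cov_q(C, r, s2):
    """Covariance of (y, x) under q(x) q(y|x; eta) with x ~ N(0, C)."""
    Cr = (C @ r)[:, None]
    return np.block([[np.array([[s2 + r @ C @ r]]), Cr.T], [Cr, C]])

def kl(Sq, Sp):
    """KL divergence between zero-mean Gaussians N(0, Sq) and N(0, Sp)."""
    k = Sq.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(Sp, Sq)) - k
                  + np.linalg.slogdet(Sp)[1] - np.linalg.slogdet(Sq)[1])

def e_step(g, sigma2):                       # projection (9)
    d = 1.0 + g @ (g / sigma2)
    return (g / sigma2) / d, 1.0 / d

def m_step(C, r, s2):                        # projection (11)
    g = C @ r / (s2 + r @ C @ r)
    return g, np.diag(C - np.outer(g, r) @ C)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
C = A @ A.T + np.eye(3)                      # toy data covariance
g, sigma2 = np.array([1.0, 0.5, -0.3]), np.ones(3)

r, s2 = e_step(g, sigma2)
k0 = kl(cov_q(C, r, s2), cov_p(g, sigma2))
g, sigma2 = m_step(C, r, s2)                 # m-step lowers the KL
k1 = kl(cov_q(C, r, s2), cov_p(g, sigma2))
```

Alternating the two steps then traces out the zigzag of Figure 1 between the data manifold and the model manifold.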
As the K-L divergence is rewritten as $KL(q(\eta), p(\theta)) = \frac{1}{2}\mathrm{tr}(B^{-1} A) - \frac{n+1}{2} + \psi(\theta) - \psi(\eta)$, its derivatives with respect to $\theta = (g, \Sigma)$ are $\frac{\partial}{\partial g} KL(q(\eta), p(\theta)) = 2 (s^2 + r^T C r)\, \Sigma^{-1}\left(g - \frac{C r}{s^2 + r^T C r}\right)$ (16) and $\frac{\partial}{\partial \Sigma} KL(q(\eta), p(\theta)) = \Sigma^{-2}\left(\Sigma - \mathrm{diag}\left(C - 2 C r g^T + (s^2 + r^T C r) g g^T\right)\right)$ (17). With these results, we can rewrite the wake-phase as $g_{t+1} = g_t - \frac{\alpha}{2} \Sigma_t \frac{\partial}{\partial g_t} KL(q(\eta_t), p(\theta_t))$ (18) and $\Sigma_{t+1} = \Sigma_t - (1 - \beta) \Sigma_t^2 \frac{\partial}{\partial \Sigma_t} KL(q(\eta_t), p(\theta_t))$ (19). Since $\Sigma$ is a positive definite matrix, the wake-phase is a gradient flow of the m-step, which is defined as (10). On the other hand, $KL(p(\theta), q(\eta)) = \frac{1}{2}\mathrm{tr}(A^{-1} B) - \frac{n+1}{2} + \psi(\eta) - \psi(\theta)$. Taking the derivatives of this K-L divergence with respect to $r$ and $s^2$ ((20) and (21)), the sleep-phase can be rewritten as $r_{t+1} = r_t - \frac{\alpha}{2} s_t^2 \frac{\partial}{\partial r_t} KL(p(\theta_{t+1}), q(\eta_t))$ (22) and $s_{t+1}^2 = s_t^2 - (1 - \beta)(s_t^2)^2 \frac{\partial}{\partial (s_t^2)} KL(p(\theta_{t+1}), q(\eta_t))$ (23). These are also a gradient flow, but because of the asymmetry of the K-L divergence, (22) and (23) differ from the on-line version of the m-step. This is the essential difference between the EM and W-S algorithms; therefore, we cannot prove the convergence of the W-S algorithm from the similarity of these two algorithms[7]. Figure 2: The Wake-Sleep algorithm, shown as alternating use of $KL(p(\theta), q(\eta))$ and $KL(q(\eta), p(\theta))$ between $M$ and $D$. 5 CONVERGENCE PROPERTY We want to prove the convergence property of the W-S algorithm. If we could find a Lyapunov function for the W-S algorithm, convergence would be guaranteed, but we could not find one. Instead, we take continuous time and study the behavior of the parameters and of the K-L divergence $KL(q(\eta_t), p(\theta_t))$, which is a function of $g$, $r$, $\Sigma$ and $s^2$. The derivatives with respect to $g$ and $\Sigma$ are given in (16) and (17); the derivatives with respect to $r$ and $s^2$ are (24) and (25).
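The claim that the averaged wake-phase moves downhill on $KL(q(\eta), p(\theta))$ can be checked without the closed-form gradient (16): compute the divergence between the two joint Gaussians, take a finite-difference gradient in $g$, and verify that a small step against it lowers the divergence. All numbers below are our toy choices.

```python
import numpy as np

def kl_gauss(Sq, Sp):
    """KL between zero-mean Gaussians N(0, Sq) and N(0, Sp)."""
    k = Sq.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(Sp, Sq)) - k
                  + np.linalg.slogdet(Sp)[1] - np.linalg.slogdet(Sq)[1])

def cov_p(g, sigma2):
    return np.block([[np.ones((1, 1)), g[None, :]],
                     [g[:, None], np.outer(g, g) + np.diag(sigma2)]])

def cov_q(C, r, s2):
    Cr = (C @ r)[:, None]
    return np.block([[np.array([[s2 + r @ C @ r]]), Cr.T], [Cr, C]])

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
C = A @ A.T + np.eye(n)
g, sigma2 = rng.standard_normal(n), np.ones(n)
r, s2 = 0.1 * rng.standard_normal(n), 1.0

Sq = cov_q(C, r, s2)
kl0 = kl_gauss(Sq, cov_p(g, sigma2))

# Finite-difference gradient of KL(q, p) with respect to g.
eps = 1e-6
grad = np.array([(kl_gauss(Sq, cov_p(g + eps * e, sigma2)) - kl0) / eps
                 for e in np.eye(n)])

# A small step against the gradient (the averaged wake-phase direction,
# up to the positive factor Sigma in (18)) lowers the divergence.
kl1 = kl_gauss(Sq, cov_p(g - 1e-3 * grad, sigma2))
```

The same check applied to the sleep-phase would use $KL(p(\theta), q(\eta))$ instead, which is exactly why the sleep-phase is not guaranteed to decrease $KL(q(\eta), p(\theta))$.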
On the other hand, we let the flows of $g$, $r$, $\Sigma$ and $s^2$ follow the (averaged) updating of the W-S algorithm ((26)-(29)). With these results, the time derivative of the K-L divergence is $\frac{d KL(q(\eta_t), p(\theta_t))}{dt} = \frac{\partial KL}{\partial g}\frac{dg}{dt} + \frac{\partial KL}{\partial r}\frac{dr}{dt} + \frac{\partial KL}{\partial \Sigma}\frac{d\Sigma}{dt} + \frac{\partial KL}{\partial (s^2)}\frac{d(s^2)}{dt}$ (30). The first three terms on the right side of (30) are clearly non-positive; only the fourth is not, since $\frac{\partial KL}{\partial (s^2)}\frac{d(s^2)}{dt} = -\beta\, \frac{1 + g_t^T \Sigma_t^{-1} g_t}{2 s_t^2}\left(s_t^2 - \left((1 - g_t^T r_t)^2 + r_t^T \Sigma_t r_t\right)\right)\left(s_t^2 - \frac{1}{1 + g_t^T \Sigma_t^{-1} g_t}\right)$. Thus $KL(q(\eta_t), p(\theta_t))$ does not decrease while $s_t^2$ stays between $(1 - g_t^T r_t)^2 + r_t^T \Sigma_t r_t$ and $1/(1 + g_t^T \Sigma_t^{-1} g_t)$; but these two values coincide if the following equation holds: $r_t = \frac{\Sigma_t^{-1} g_t}{1 + g_t^T \Sigma_t^{-1} g_t}$ (31). From the above results, the flows of $g$, $r$ and $\Sigma$ decrease $KL(q(\eta_t), p(\theta_t))$ at all times. $s_t^2$ converges to $(1 - g_t^T r_t)^2 + r_t^T \Sigma_t r_t$, which does not always decrease $KL(q(\eta_t), p(\theta_t))$; but since $r$ converges to satisfy (31) independently of $s_t^2$, finally $s_t^2$ converges to $1/(1 + g_t^T \Sigma_t^{-1} g_t)$. 6 DISCUSSION This factor analysis model has the special property that $p(y|x; \theta)$ and $q(y|x; \eta)$ are equivalent when the following conditions are satisfied[7]: $r = \frac{\Sigma^{-1} g}{1 + g^T \Sigma^{-1} g}$, $s^2 = \frac{1}{1 + g^T \Sigma^{-1} g}$ (32). From this property, minimizing $KL(p(\theta), q(\eta))$ and $KL(q(\eta), p(\theta))$ with respect to $\eta$ leads to the same point: $KL(p(\theta), q(\eta)) = E_{p(x;\theta)}\left[\log \frac{p(x; \theta)}{q(x)}\right] + E_{p(y,x;\theta)}\left[\log \frac{p(y|x; \theta)}{q(y|x; \eta)}\right]$ (33) and $KL(q(\eta), p(\theta)) = E_{q(x)}\left[\log \frac{q(x)}{p(x; \theta)}\right] + E_{q(y,x;\eta)}\left[\log \frac{q(y|x; \eta)}{p(y|x; \theta)}\right]$ (34); both (33) and (34) include $\eta$ only in the second term on the right side. If (32) holds, those two terms are 0. Therefore $KL(p(\theta), q(\eta))$ and $KL(q(\eta), p(\theta))$ are minimized at the same point. We can use this result to modify the W-S algorithm.
If the factor analysis model does not alternate wake- and sleep-phases but instead "sleeps well" until convergence, it will find the $\eta$ that is equivalent to the e-step of the em algorithm. Since the wake-phase is a gradient flow of the m-step, this procedure converges to the MLE. This algorithm is equivalent to what is called the GEM (Generalized EM) algorithm[6]. The reason the GEM and W-S algorithms work is that $p(y|x; \theta)$ is realizable by the recognition model $q(y|x; \eta)$. If the recognition model is not realizable, the W-S algorithm won't converge to the MLE. We close this article with an example. Suppose the average of $y$ in the recognition model is not a linear function of $r$ and $x$ but passes through a nonlinear function $f(\cdot)$: Recognition model: $y = f(r^T x) + \delta$, where $f(\cdot)$ is a scalar function of a single input and $\delta \sim N(0, s^2)$ is the noise. In this case, the generative model is in general not realizable by the recognition model, and minimizing (33) with respect to $\eta$ leads to a different point than minimizing (34). $KL(p(\theta), q(\eta))$ is minimized when $r$ and $s^2$ satisfy $E_{p(x;\theta)}\left[f(r^T x) f'(r^T x) x\right] = E_{p(y,x;\theta)}\left[y f'(r^T x) x\right]$ (35) and $s^2 = 1 + E_{p(y,x;\theta)}\left[-2 y f(r^T x) + f^2(r^T x)\right]$ (36), while $KL(q(\eta), p(\theta))$ is minimized when $r$ and $s^2$ satisfy $(1 + g^T \Sigma^{-1} g)\, E_{q(x;\eta)}\left[f(r^T x) f'(r^T x) x\right] = E_{q(x;\eta)}\left[f'(r^T x) x x^T\right] \Sigma^{-1} g$ (37) and $s^2 = \frac{1}{1 + g^T \Sigma^{-1} g}$ (38). Here $f'(\cdot)$ is the derivative of $f(\cdot)$. If $f(\cdot)$ is a linear function, $f'(\cdot)$ is a constant and (35), (36) and (37), (38) give the same $\eta$ as (32), but in general they differ. We studied a factor analysis model and showed that the W-S algorithm works on this model. From further analysis, we showed that the reason the algorithm works on the model is that the generative model is realizable by the recognition model. We also showed, with a simple example, that the W-S algorithm does not converge to the MLE if the generative model is not realizable.
Acknowledgment We thank Dr. Noboru Murata for very useful discussions on this work. References [1] Shun-ichi Amari. Differential-Geometrical Methods in Statistics, volume 28 of Lecture Notes in Statistics. Springer-Verlag, Berlin, 1985. [2] Shun-ichi Amari. Information geometry of the EM and em algorithms for neural networks. Neural Networks, 8(9):1379-1408, 1995. [3] Peter Dayan, Geoffrey E. Hinton, and Radford M. Neal. The Helmholtz machine. Neural Computation, 7(5):889-904, 1995. [4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. R. Statistical Society, Series B, 39:1-38, 1977. [5] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268:1158-1160, 1995. [6] Geoffrey J. McLachlan and Thriyambakam Krishnan. The EM Algorithm and Extensions. Wiley Series in Probability and Statistics. John Wiley & Sons, Inc., 1997. [7] Radford M. Neal and Peter Dayan. Factor analysis using delta-rule wake-sleep learning. Neural Computation, 9(8):1781-1803, 1997.
|
1998
|
4
|
1,537
|
Information Maximization in Single Neurons Martin Stemmler and Christof Koch Computation and Neural Systems Program Caltech 139-74 Pasadena, CA 91125 Email: stemmler@klab.caltech.edu, koch@klab.caltech.edu Abstract Information from the senses must be compressed into the limited range of firing rates generated by spiking nerve cells. Optimal compression uses all firing rates equally often, implying that the nerve cell's response matches the statistics of naturally occurring stimuli. Since changing the voltage-dependent ionic conductances in the cell membrane alters the flow of information, an unsupervised, non-Hebbian, developmental learning rule is derived to adapt the conductances in Hodgkin-Huxley model neurons. By maximizing the rate of information transmission, each firing rate within the model neuron's limited dynamic range is used equally often. An efficient neuronal representation of incoming sensory information should take advantage of the regularity and scale invariance of stimulus features in the natural world. In the case of vision, this regularity is reflected in the typical probabilities of encountering particular visual contrasts, spatial orientations, or colors [1]. Given these probabilities, an optimized neural code would eliminate any redundancy, while devoting increased representation to commonly encountered features. At the level of a single spiking neuron, information about a potentially large range of stimuli is compressed into a finite range of firing rates, since the maximum firing rate of a neuron is limited. Optimizing the information transmission through a single neuron in the presence of uniform, additive noise has an intuitive interpretation: the most efficient representation of the input uses every firing rate with equal probability.
An analogous principle for nonspiking neurons has been tested experimentally by Laughlin [2], who matched the statistics of naturally occurring visual contrasts to the response amplitudes of the blowfly's large monopolar cell. Figure 1: The model neuron contains two compartments to represent the cell's soma and dendrites; the soma carries the Hodgkin-Huxley spiking conductances and is joined to the dendritic compartment by a coupling conductance. To maximize the information transfer, the parameters for six calcium and six potassium voltage-dependent conductances in the dendritic compartment are iteratively adjusted, while the somatic conductances responsible for the cell's spiking behavior are held fixed. From a theoretical perspective, the central question is whether a neuron can "learn" the best representation for natural stimuli through experience. During neuronal development, the nature and frequency of incoming stimuli are known to change both the anatomical structure of neurons and the distribution of ionic conductances throughout the cell [3]. We seek a guiding principle that governs the developmental timecourse of the Na+, Ca2+ and K+ conductances in the somatic and dendritic membrane by asking how a neuron would set its conductances to transmit as much information as possible. Spiking neurons must associate a range of different inputs to a set of distinct responses, a more difficult task than keeping the firing rate or excitatory postsynaptic potential (EPSP) amplitude constant under changing conditions, two tasks for which learning rules that change the voltage-dependent conductances have recently been proposed [4, 5]. Learning the proper representation of stimulus information goes beyond simply correlating input and output; an alternative to the classic postulate of Hebb [6], in which synaptic learning in networks is a consequence of correlated activity between pre- and postsynaptic neurons, is required for such learning in a single neuron.
To explore the feasibility of learning rules for information maximization, a simplified model of a neuron consisting of two electrotonic compartments, illustrated in Fig. 1, was constructed. The soma (or cell body) contains the classic Hodgkin-Huxley sodium and delayed rectifier potassium conductances, with the addition of a transient potassium "A"-current and an effective calcium-dependent potassium current. The soma is coupled through an effective conductance G to the dendritic compartment, which contains the synaptic input conductance and three adjustable calcium and three adjustable potassium conductances. The dynamics of this model are given by Hodgkin-Huxley-like equations that govern the membrane potential and a set of activation and inactivation variables, $m_i$ and $h_i$, respectively. In each compartment of the neuron, the voltage $V$ evolves as $C \frac{dV}{dt} = \sum_i \bar{g}_i\, m_i^{p_i} h_i^{q_i} (E_i - V)$ (1), where $C$ is the membrane capacitance, $\bar{g}_i$ is the (peak) value of the $i$-th conductance, $p_i$ and $q_i$ are integers, and $E_i$ are the ion-specific reversal potentials. The variables $h_i$ and $m_i$ obey first-order kinetics of the type $dm/dt = (m_\infty(V) - m)/\tau(V)$, where $m_\infty(V)$ denotes the steady-state activation when the voltage is clamped to $V$ and $\tau(V)$ is a voltage-dependent time constant. All parameters for the somatic compartment, with the exception of the adaptation conductance, are given by the standard model of Connor et al. (1977) [7]. This choice of somatic spiking conductances allows spiking to occur at arbitrarily low firing rates. Adaptation is modeled by a calcium-dependent potassium conductance that scales with the firing rate, such that the conductance has a mean value of 34 mS/cm2 Hz. The calcium and potassium conductances in the dendritic compartment have simple activation and inactivation functions described by distinct Boltzmann functions.
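The first-order kinetics above, with a Boltzmann steady-state curve, can be sketched in a few lines. The midpoint, slope, and clamp voltage below are illustrative values of ours (not the model's fitted parameters), and $\tau$ is held at the text's constant 5 msec:

```python
import numpy as np

def m_inf(V, V_half, slope):
    """Boltzmann steady-state activation with midpoint V_half and slope s
    (the standard sigmoidal form assumed for the dendritic conductances)."""
    return 1.0 / (1.0 + np.exp(-(V - V_half) / slope))

def integrate_gate(V, m0, V_half, slope, tau=5.0, dt=0.01, T=50.0):
    """Forward-Euler integration of dm/dt = (m_inf(V) - m)/tau at a
    clamped voltage V (V in mV, times in msec)."""
    m = m0
    for _ in range(int(T / dt)):
        m += dt * (m_inf(V, V_half, slope) - m) / tau
    return m

# At a clamp of -20 mV the gate relaxes to its steady-state value.
m_end = integrate_gate(-20.0, 0.0, V_half=-40.0, slope=5.0)
```

With a constant clamp the ODE is linear, so after ten time constants the gate sits essentially at $m_\infty(V)$.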
Together with the peak conductance values, the midpoint voltages $V_{1/2}$ and slopes $s$ of these Boltzmann functions adapt to the statistics of stimuli. For simplicity, all time constants for the dendritic conductances are set to a constant 5 msec. For additional details and parameter values, see http://www.klab.caltech.edu/infomax. Hodgkin-Huxley models can exhibit complex behaviors on several timescales, such as firing patterns consisting of "bursts", sequences of multiple spikes interspersed with periods of silence. We will, however, focus on models of regularly spiking cells that adapt to a sustained stimulus by spiking periodically. To quantify how much information about a continuous stimulus variable $x$ the time-averaged firing rate $f$ of a regularly spiking neuron carries, we use a lower bound [8] on the mutual information $I(f; x)$ between the stimulus $x$ and the firing rate $f$: $I_{LB}(f; x) = -\int \ln\left(p(f)\, \sigma_f(x)\right) p(x)\, dx - \ln \sqrt{2\pi e}$ (2), where $p(f)$ is the probability, given the set of all stimuli, of a firing rate $f$, and $\sigma_f(x)$ is the standard deviation of the firing rate in response to a given stimulus $x$. To maximize the information transfer, does a neuron need to "know" the arrival rates of photons impinging on the retina or the frequencies of sound waves hitting the ear's tympanic membrane? Since the ion channels in the dendrites only sense a voltage and not the stimulus directly, the answer to this question, fortunately, is no: maximizing the information between the firing rate $f$ and the dendritic voltage $V_{dend}(t)$ is equivalent to maximizing the information about the stimuli, as long as we can guarantee that the transformation from stimuli to firing rates is always one-to-one. Since a neuron must be able to adapt to a changing environment and shifting intra- and extracellular conditions [4], learning and relearning of the proper conductance parameters, such as the channel densities, should occur on a continual basis.
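The bound (2) rewards response curves that spread the firing-rate distribution out. A small numerical sketch (our construction: a Gaussian stimulus density on a grid, a fixed rate range of 20-60 Hz, and a constant firing-rate standard deviation) shows that a rate curve proportional to the cumulative distribution of the input beats a linear one:

```python
import numpy as np

# Grid over a toy Gaussian stimulus distribution (all choices ours).
x = np.linspace(-4.0, 4.0, 4001)
dx = x[1] - x[0]
p_x = np.exp(-x**2 / 2.0)
p_x /= p_x.sum() * dx                     # normalize on the grid

def info_lower_bound(f, sigma_f):
    """Evaluate the bound of eq. (2) for a monotone rate curve f(x) with
    a constant firing-rate standard deviation sigma_f."""
    p_f = p_x / np.gradient(f, x)         # density of f induced by p(x)
    return (-np.sum(p_x * np.log(p_f * sigma_f)) * dx
            - 0.5 * np.log(2.0 * np.pi * np.e))

F = np.cumsum(p_x) * dx                   # cumulative distribution of x
f_cdf = 20.0 + 40.0 * F                   # rate matched to the input CDF
f_lin = 20.0 + 40.0 * (x + 4.0) / 8.0     # naive linear rate curve

b_cdf = info_lower_bound(f_cdf, 2.0)
b_lin = info_lower_bound(f_lin, 2.0)
```

For the CDF-matched curve the induced density $p(f)$ is uniform over the 40 Hz range, which is precisely the equal-use-of-all-rates optimum described in the abstract.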
An alphabet zoo of different calcium (Ca2+) conductances in neurons of the central nervous system, denoted 'L'-, 'N'-, 'P'-, 'R'-, and 'T'-conductances, reflects a wealth of different voltage and pharmacological properties [9], matching an equal diversity of potassium (K+) channels. No fewer than ten different genes code for various Ca2+ subunits, allowing for a combinatorial number of functionally different channels [10]. A self-regulating neuron should be able to express different ionic channels and insert them into the membrane. In information maximization, the parameters for each of the conductances, such as the number of channels, are continually modified in the direction that most increases the mutual information $I[f; V_{dend}(t)]$ each time a stimulus occurs. The standard approach to such a problem is known as stochastic approximation of the mutual information, which was recently applied to feedforward neural networks for blind source sound separation by Bell and Sejnowski [11]. We define a "free energy" $\mathcal{F} = E(f) - \beta^{-1} I_{LB}(f; x)$, where $E(f)$ incorporates constraints on the peak or mean firing rate $f$, and $\beta$ is a Lagrangean parameter that balances the mutual information and constraint satisfaction. Stochastic approximation then consists of adjusting a parameter of a voltage-dependent conductance according to eq. 3 whenever a stimulus $x$ is presented; this will, by definition, occur with probability $p(x)$. In the model, the stimuli are taken to be maintained synaptic input conductances $g_{syn}$ lasting 200 msec and drawn randomly from a fixed, continuous probability distribution. After an initial transient, we assume that the voltage waveform $V_{dend}(t)$ settles into a simple periodic limit cycle as dictated by the somatic spiking conductances.
We thus posit the existence of an invertible composition of maps, such that the input conductance $g_{syn}$ maps onto a periodic voltage waveform $V_{dend}(t)$ of period $T$, from thence onto an averaged current $\langle I \rangle = \frac{1}{T} \int_0^T I(t)\, dt$ to the soma, and then finally onto an output firing rate $f$. The last element in this chain of transformations, the steady-state current-discharge relationship at the soma, can be predicted from the theory of dynamical systems (see http://www.klab.caltech.edu/~stemmler for details). Figure 2: The inputs to the model are synaptic conductances, drawn randomly from a Gaussian distribution of mean 141 nS and standard deviation of 25 nS with the restriction that the conductance be non-negative (dot-dashed line). The learning rule in eq. 4, maximizing the information in the cell's firing rate, was used to adjust the peak conductances, midpoint voltages, and slopes of the "dendritic" Ca2+ and K+ conductances over the course of 10.9 (simulated) minutes. The learning rate decayed with time: $\eta(t) = \eta_0 \exp(-t/\tau_{learning})$, with $\eta_0 = 4.3 \times 10^{-3}$ and $\tau_{learning} = 4.4$ sec. The optimal firing rate response curve (dotted line) is asymptotically proportional to the cumulative probability distribution of inputs. The inset illustrates the typical timecourse of the dendritic voltage in the trained model. The voltage and the conductances are nonlinearly coupled: the conductances affect the voltage, which, in turn, sets the conductances.
Since the mutual information is a global property of the stimulus set, the learning rule for any one conductance would depend on the values of all other conductances, were it not for the nonlinear feedback loop between voltages and conductances. This nonlinear coupling must satisfy the strict physical constraint of charge conservation: when the neuron is firing periodically, the average current injected by the synaptic and voltage-dependent conductances must equal the average current discharged by the neuron. Remarkably, charge conservation results in a learning mechanism that is strictly local, so that the mechanism for changing one conductance does not depend on the values of any other conductances. For instance, information maximization predicts that the peak calcium or potassium conductance $\bar{g}_i$ changes according to eq. 4 each time a stimulus is presented. Here $\eta(t)$ is a time-dependent learning rate, the angular brackets indicate an average over the stimulus duration, and $c(\langle V_{dend} \rangle)$ is a simple function that is zero for most commonly encountered voltages, equal to a positive constant below some minimum, and equal to a negative constant above some maximum voltage. Figure 3 (probability vs. firing rate of cell, in Hz, for the original, optimal, and learned firing-rate distributions): The probability distribution of firing rates before and after adaptation of voltage-dependent conductances. Learning shifts the distribution from a peaked distribution to a much flatter one, so that the neuron uses each firing rate within the range [22, 59] Hz equally often in response to randomly selected synaptic inputs. This
function represents the constraint on the maximum and minimum firing rate, which sets the limit on the neuron's dynamic range. A constraint on the mean firing rate implies that $c(\langle V_{dend} \rangle)$ is simply a negative constant for all suprathreshold voltages. Under this constraint, the optimal distribution of firing rates becomes exponential (not shown); this latter case corresponds to transmitting as much information as possible in the rate while firing as little as possible. Given a stimulus $x$, the dominant term $\partial/\partial V(t)\, \langle m_i h_i (E_i - V) \rangle$ of eq. 4 changes those conductances that increase the slope of the firing rate response to $x$. A higher slope means that more of the neuron's limited range of firing rates is devoted to representing the stimulus $x$ and its immediate neighborhood. Since the learning rule is democratic yet competitive, only the most frequent inputs "win" and thereby gain the largest representation in the output firing rate. In Fig. 2, the learning rule of eq. 4, generalized to also change the midpoint voltage and steepness of the activation and inactivation functions, has been used to train the model neuron as it responds to random, 200 msec long amplitude modulations of a synaptic input conductance to the dendritic compartment. The cell "learns" the statistical structure of the input, matching its adapted firing rate to the cumulative distribution function of the conductance inputs. The distribution of firing rates shifts from a peaked distribution to a much flatter one, so that all firing rates are used nearly equally often (Fig. 3). The information in the firing rate increases by a factor of three to 10.7 bits/sec, as estimated by adding a 5 msec, Gaussian-distributed noise jitter to the spike times. Changing how tightly the stimulus amplitudes are clustered around the mean will increase or decrease the slope of the firing rate response to input, without necessarily changing the average firing rate.
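Matching the firing rate to the cumulative distribution of the inputs is exactly histogram equalization. A quick sketch with the numbers of Fig. 2 (Gaussian inputs, mean 141 nS, sd 25 nS; rate range [22, 59] Hz), but with an empirical CDF of our own construction in place of the trained neuron:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synaptic inputs as in Fig. 2: Gaussian, mean 141 nS, sd 25 nS,
# restricted to non-negative values.
def sample_inputs(n):
    g = rng.normal(141.0, 25.0, size=n)
    return g[g >= 0]

calib = np.sort(sample_inputs(20000))      # "training" sample fixes the CDF

def firing_rate(g, lo=22.0, hi=59.0):
    """Map an input through the empirical CDF onto the rate range [lo, hi]."""
    F = np.searchsorted(calib, g) / len(calib)
    return lo + (hi - lo) * F

# Fresh inputs produce a nearly flat distribution of firing rates.
rates = firing_rate(sample_inputs(50000))
counts, _ = np.histogram(rates, bins=10, range=(22.0, 59.0))
```

Each of the ten rate bins ends up holding close to a tenth of the trials, which is the flat distribution of Fig. 3.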
Neuronal systems are known to adapt not only to the mean of the stimulus intensity, but also to the variance of the stimulus [12]. We predict that such adaptation to stimulus variance will occur not just at the level of networks of neurons, but also at the single cell level. While the detailed substrate for maximizing the information at both the single cell and network level awaits experimental elucidation, the terms in the learning rule of eq. 4 have simple biophysical correlates: the derivative term, for instance, is reflected in the stochastic flicker of ion channels switching between open and closed states. The transitions between simple open and closed states will occur at a rate proportional to $(\partial/\partial V\, m(V))^\gamma$ in equilibrium, where the exponent $\gamma$ is 1/2 or 1, depending on the kinetic model. To change the information transfer properties of the cell, a neuron could use state-dependent phosphorylation of ion channels or gene expression of particular ion channel subunits, possibly mediated by G-protein initiated second messenger cascades, to modify the properties of voltage-dependent conductances. The tools required to adaptively compress information from the senses are thus available at the subcellular level. References [1] D. L. Ruderman, Network 5(4), 517 (1995); R. J. Baddeley and P. J. B. Hancock, Proc. Roy. Soc. B 246, 219 (1991); J. J. Atick, Network 3, 213 (1992). [2] S. Laughlin, Z. Naturforsch. 36c, 910 (1981). [3] D. Purves, Neural Activity and the Growth of the Brain (Cambridge University Press, NY, 1994); X. Gu and N. C. Spitzer, Nature 375, 784 (1995). [4] G. LeMasson, E. Marder, and L. F. Abbott, Science 259, 1915 (1993). [5] A. J. Bell, Neural Information Processing Systems 4, 59 (1992). [6] D. O. Hebb, The Organization of Behavior (Wiley, New York, 1949). [7] J. A. Connor, D. Walter, R. McKown, Biophys. J. 18, 81 (1977). [8] R. B. Stein, Biophys. J. 7, 797 (1967). [9] R. B. Avery and D. Johnston, J. Neurosci.
16, 5567 (1996); F. Helmchen, K. Imoto, and B. Sakmann, Biophys. J. 70, 1069 (1996). [10] F. Hofmann, M. Biel, and V. Flockerzi, Ann. Rev. Neurosci. 17, 399 (1994). [11] Y. Z. Tsypkin, Adaptation and Learning in Automatic Systems (Academic Press, NY, 1971); R. Linsker, Neural Comp. 4, 691 (1992); and A. J. Bell and T. J. Sejnowski, Neural Comp. 7, 1129 (1995). [12] S. M. Smirnakis et al., Nature 386, 69 (1997).
|
1998
|
40
|
1,538
|
Mechanisms of generalization in perceptual learning Zili Liu Rutgers University, Newark Daphna Weinshall Hebrew University, Israel Abstract The learning of many visual perceptual tasks has been shown to be specific to practiced stimuli, while new stimuli require re-learning from scratch. Here we demonstrate generalization using a novel paradigm in motion discrimination where learning has been previously shown to be specific. We trained subjects to discriminate the directions of moving dots, and verified the previous results that learning does not transfer from the trained direction to a new one. However, by tracking the subjects' performance across time in the new direction, we found that their rate of learning doubled. Therefore, learning generalized in a task previously considered too difficult for generalization. We also replicated, in the second experiment, transfer following training with "easy" stimuli. The specificity of perceptual learning and the dichotomy between learning of "easy" vs. "difficult" tasks were hypothesized to involve different learning processes, operating at different visual cortical areas. Here we show how to interpret these results in terms of signal detection theory. With the assumption of limited computational resources, we obtain the observed phenomena, direct transfer and change of learning rate, for increasing levels of task difficulty. It appears that human generalization concurs with the expected behavior of a generic discrimination system. 1 Introduction Learning in biological systems is of great importance. But while cognitive learning (or "problem solving") is typically abrupt and generalizes to analogous problems, perceptual skills appear to be acquired gradually and specifically: human subjects cannot generalize a perceptual discrimination skill to solve similar problems with different attributes. For example, in a visual discrimination task (Fig.
1), a subject who is trained to discriminate motion directions between 43° and 47° cannot use this skill to discriminate 133° from 137°. Generalization has been found only when stimuli of different attributes are interleaved [7, 10], or when the task is easier [6, 1]. For example, a subject who is trained to discriminate 41° from 49° can later readily discriminate 131° from 139° [6]. The specificity of learning has been so far used to support the hypothesis that perceptual learning embodies neuronal modifications in the brain's stimulus-specific cortical areas (e.g., visual area MT) [9, 3, 2, 5, 8, 4]. In contrast to previous results of learning specificity, we show in two experiments in Section 2 that learning in motion discrimination generalizes in all cases where specificity was thought to exist, although the mode of generalization varies. (1) When the task is difficult, it is direction specific in the traditional sense; but learning in a new direction accelerates. (2) When the task is easy, it generalizes to all directions after training in only one direction. While (2) is consistent with the findings reported in [6, 1], (1) demonstrates that generalization is the rule, not an exception limited only to "easy" stimuli. 2 Perceptual learning experiments Figure 1: Schematic of one trial (stimulus and response along a time axis; 500 ms). Left: the stimulus was a random dot pattern viewed in a circular aperture, spanning 8° of visual angle, moving in a given primary direction (denoted dir). The primary direction was chosen from 12 directions, separated by 30°. Right: the direction of each of the two stimuli was randomly chosen from two candidate directions (dir ± Δ/2). The subject judged whether the two stimuli moved in the same or different directions. Feedback was provided. The motion discrimination task is described in Fig. 1.
In each trial, the subject was presented with two consecutive stimuli, each moving in one of two possible directions (randomly chosen from the two directions dir + Δ/2 and dir − Δ/2). The directional difference |Δ| between the two stimuli was 8° in the easy condition, and 4° in the difficult condition. The experiment was otherwise identical to that in [2], which used |Δ| = 3°, except that our stimuli were displayed on an SGI computer monitor. |Δ| = 8° was chosen as the easy condition because most subjects found it relatively easy to learn, yet still needed substantial training. 2.1 A difficult task We trained subjects extensively in one primary direction with a difficult motion discrimination task (Δ = 4°), followed by extensive training in a second primary direction. The two primary directions were sufficiently different so direct transfer between them was not expected [2] (Fig. 2). Subjects' initial performance in both directions was comparable, replicating the classical result of stimulus specific learning (no direct transfer). However, all subjects took only half as many training sessions to make the same improvement in the second direction. All subjects had extensive practice with the task prior to this experiment, thus the acceleration cannot be simply explained by familiarity. Our results show that although perceptual learning did not directly transfer in this difficult task, it did nevertheless generalize to the new direction. The generalization was manifested as a 100% increase in the rate of learning in the second direction. It demonstrates that the generalization of learning, as manifested via direct transfer and via increase in learning rate, may be thought of as two extremes of a continuum of possibilities.
Figure 2: Subjects DJ and ZL needed 20 training sessions in the first direction, and nine in the second; subject ZJX needed seven training sessions in the first, and four in the second. The rate of learning (the amount of improvement per session) in the second direction is significantly greater than in the first (t(2) = 13.41, p < 0.003). 2.2 An easy task We first measured the subjects' baseline performance in an easy task: the discrimination of motion directions 8° apart in 12 primary directions (64 trials each, randomly interleaved). We then trained four subjects in one oblique primary direction (chosen randomly and counter-balanced among subjects) for four sessions, each with 700 trials. Finally, we measured again the subjects' performance in all directions. Every subject improved in all directions (Fig. 3). The performance of seven control subjects was measured without intermediate training; two more control subjects were added who were "trained" with similar motion stimuli but were asked to discriminate a brightness change instead. The control subjects improved as well, but significantly less (Δd' = 0.09 vs. 0.78, Fig. 3). Our results clearly show that training with an easy task in one direction leads to immediate improvement in other directions. Hence the learned skill generalized across motion directions. 3 A computational model We will now adopt a general framework for the analysis of perceptual learning results, using the language of signal detection theory. Our model accounts for the results in this paper by employing the constraint of limited computational resources. The model's assumptions are as follows. 1. In each trial, each of the two stimuli is represented by a population of measurements that encode all aspects of the stimulus, in particular, the output of localized direction detectors. The measurements are encoded as a vector.
The decision as to whether the two stimuli are the same or not is determined by the difference of the two vectors. 2. Each component of the input measurements is characterized by its sensitivity for the discrimination task, e.g., how well the two motion directions can be discriminated apart based on this component. The entire population itself is generally divided into two sets: informative measurements with significant sensitivity, and Figure 3: Left: Discrimination sensitivity d' of subject JY who was trained in the primary direction 300°. Middle: d' of control subject YHL who had no training in between the two measurements. Right: Average d' (and standard error) for all subjects before and after training. Trained: results for the four trained subjects. Note the substantial improvement between the two measurements. For these subjects, the d' measured after training is shown separately for the trained direction (middle column) and the remaining directions (right column). Control: results for the nine control subjects. The control subjects improved their performance significantly less than the trained subjects (Δd' = 0.09 vs. 0.78; F(1, 11) = 14.79, p < 0.003). uninformative measurements with null sensitivity. In addition, informative measurements may vary greatly in their individual sensitivity. When many have high sensitivity, the task is easy. When most have low sensitivity, the task is difficult. We assume that sensitivity changes from one primary direction to the next, but the population of informative measurements remains constant. For example, in our psychophysical task localized directional signals are likely to be in the informative set for any motion direction, though their individual sensitivity will vary based on specific motion directions. On the other hand, local speed signals are never informative and therefore always belong to the uninformative set. 3.
Due to limited computational capacity, the system can, at a time, only process a small number of components of the input vector. The decision in a single trial is therefore made based on the magnitude of this sub-vector, which may vary from trial to trial. In each trial the system rates the processed components of the sub-vector according to their sensitivity for the discrimination task. After a sufficient number of trials (enough to estimate all the component sensitivities of the sub-vector), the system identifies the least sensitive component and replaces it in the next trial with a new random component from the input vector. In effect, the system is searching from the input vector a sub-vector that gives rise to the maximal discrimination sensitivity. Therefore the performance of the system is gradually improving, causing learning from session to session in the training direction. 4. After learning in one training direction, the system identifies the sets of informative and uninformative measurements and includes in the informative set any measurement with significant (though possibly low) sensitivity. In the next training direction, only the set of informative measurements is searched. The search becomes more efficient, and hence the acceleration of the learning rate. This accounts for the learning between training directions. We further assume that each stimulus generates a signal that is a vector of N measurements: $\{l_i\}_{i=1}^N$. We also assume that the signal for the discrimination task is the difference between two stimulus measurements: $x = \{x_i\}_{i=1}^N$, $x_i = \Delta l_i$. The same/different discrimination task is to decide whether x is generated by noise (the null vector 0) or by some distinct signal (the vector S). At time t a measurement vector $x^t$ is obtained, which we denote $x^{st}$ if it is the signal S, and $x^{nt}$ otherwise.
Assume that each measurement in $x^t$ is a normal random variable: $x^{nt} = \{x_i^{nt}\}_{i=1}^N$, $x_i^{nt} \sim N(0, \sigma_i)$; $x^{st} = \{x_i^{st}\}_{i=1}^N$, $x_i^{st} \sim N(\mu_i, \sigma_i)$. We measure the sensitivity d' of each component. Since both the signal and noise are assumed to be normal random variables, the sensitivity of the i-th measurement in the discrimination task is $d_i' = |\mu_i| / \sigma_i$. Assuming further that the measurements are independent of each other and of time, then the combined sensitivity of M measurements is $d' = \sqrt{\sum_{i=1}^{M} (\mu_i/\sigma_i)^2}$. 3.1 Limited resources: an assumption We assume that the system can simultaneously process at most $M \ll N$ of the original N measurements. Since the sensitivity $d_i'$ of the different measurements varies, the discrimination depends on the combined sensitivity of the particular set of M measurements that are being used. Learning in the first training direction, therefore, leads to the selection of a "good" subset of the measurements, obtained by searching in the measurement space. After searching for the best M measurements for the current training direction, the system divides the measurements into two sets: those with non-negligible sensitivity, and those with practically null sensitivity. This rating is kept for the next training direction, when only the first set is searched. One prediction of this model is that learning rate should not increase with exposure only. In other words, it is necessary for subjects to be exposed to the stimulus and do the same discrimination task for effective inter-directional learning to take place. For example, assume that the system is given N measurements: N/2 motion direction signals and N/2 speed signals. It learns during the first training direction that the N/2 speed signals have null sensitivity for the direction discrimination task, whereas the directional signals have varying (but significant) sensitivity.
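The sensitivity definitions above can be made concrete in a short numerical sketch (function and variable names are ours, not the paper's): per-component $d_i' = |\mu_i|/\sigma_i$, and independent components combining as the root sum of squares.

```python
import numpy as np

# Minimal sketch of the sensitivity definitions (names ours):
# d'_i = |mu_i| / sigma_i per component; independent components combine as
# d' = sqrt(sum_i (mu_i / sigma_i)^2).

def component_sensitivity(mu, sigma):
    """Per-component discrimination sensitivity d'_i."""
    return np.abs(mu) / sigma

def combined_sensitivity(mu, sigma):
    """Combined d' of M independent measurements."""
    return np.sqrt(np.sum((mu / sigma) ** 2))

mu = np.array([1.0, 0.5, 0.0])     # the third component is uninformative
sigma = np.array([1.0, 1.0, 1.0])
d_each = component_sensitivity(mu, sigma)   # array([1. , 0.5, 0. ])
d_all = combined_sensitivity(mu, sigma)     # sqrt(1.25)
```

Note that an uninformative component (zero mean difference) contributes nothing to the combined d', which is why the speed signals in the example never help.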
In the second training direction, the system is given the N measurements whose sensitivity profile is different from that in the first training direction, but still with the property that only the directional signals have any significant sensitivity (Fig. 4b). Based on learning in the first training direction, the system only searches the measurements whose sensitivity in the first training direction was significant, namely, the N/2 directional signals. It ignores the speed signals. Now the asymptotic performance in the second direction remains unchanged, because the most sensitive measurements are within the searched population: they are directional signals. The learning rate, however, doubles since the system searches a space half as large. 3.2 Simulation results To account for the different modes of learning, we make the following assumptions. When the task is easy, many components have high sensitivity d'. When the task is difficult, only a small number of measurements have high d'. Therefore, when the task is easy, a subset of M measurements that give rise to the best performance is found relatively fast. In the extreme, when the task is very easy (e.g., all the measurements have very high sensitivity), the rate of learning is almost instantaneous and the observed outcome appears to be transfer. On the other hand, when the task is difficult, it takes a long time to find the M measurements that give rise to the best performance, and learning is slow. Figure 4: Hypothetical sensitivity profile for a population of measurements of speed and motion direction. Left: First training direction; only the motion direction measurements have significant sensitivity (d' above 0.1), with measurements around 45° having the highest d'. Right: Second direction; only the motion direction measurements have significant sensitivity, with measurements around 135° having the highest d'. The detailed operations of the model are as follows.
In the first training direction, the system starts with a random set of M measurements. In each trial and using feedback, the mean and standard deviation of each measurement is computed: $\mu_i^{st}, \sigma_i^{st}$ for the signal and $\mu_i^{nt}, \sigma_i^{nt}$ for the noise. In the next trial, given M measurements $\{x_i^{t+1}\}_{i=1}^M$, the system evaluates $\delta = \sum_{i=1}^{M} \left[ \frac{(x_i^{t+1} - \mu_i^{st})^2}{\sigma_i^{st}} - \frac{(x_i^{t+1} - \mu_i^{nt})^2}{\sigma_i^{nt}} \right]$, and classifies x as the signal if $\delta < 0$, and noise otherwise. At time T, the worst measurement is identified as $\arg\min_i d_i'$, where $d_i' = 2|\mu_i^{sT} - \mu_i^{nT}| / (\sigma_i^{sT} + \sigma_i^{nT})$. It is then replaced randomly from one of the remaining N − M measurements. The learning and decision making then proceed as above for another T iterations. This is repeated until the set of chosen measurements stabilizes. At the end, the decision is made based on the set of M measurements that have the highest sensitivities. Figure 5: Simulated performance (percent correct) as a function of time. Left: Difficult condition: the number of measurements with high $d_i'$ is small (4 out of 150); there is no transfer from the first to the second training direction, but the learning rate is increased two-fold. This graph is qualitatively similar to the results shown in the top row of Fig. 2. Right: Easy condition: the number of measurements with high $d_i'$ is large (72 out of 150); there is almost complete transfer from the first to the second training direction. At the very beginning of training in the second direction, based on the measured $d_i'$ in the first direction, the measurement population is labeled as informative (those with $d_i'$ larger than the median value) and uninformative (the remaining measurements). The learning and decision making proceeds as above, while only informative measurements are considered during the search. In the simulation we used N = 150 measurements, with M = 4. Half of the N measurements (the informative measurements) had significant $d_i'$.
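The decision rule and the greedy component-replacement search above can be sketched as follows. This is a toy version, not the authors' simulation code: for brevity we assume the per-component sensitivities are already known rather than estimated from T trials, and (our simplification) a swap is kept only when it raises sensitivity, which is one way the chosen set can stabilize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of the limited-capacity search (Section 3.2); names and the
# accept-if-better simplification are ours.

def classify(x, mu_s, sig_s, mu_n, sig_n):
    # delta = sum_i [(x_i - mu_s_i)^2 / sig_s_i - (x_i - mu_n_i)^2 / sig_n_i]
    delta = np.sum((x - mu_s) ** 2 / sig_s - (x - mu_n) ** 2 / sig_n)
    return delta < 0  # True => classified as "signal"

N, M = 150, 4
mu = np.zeros(N)
mu[:40] = rng.uniform(0.5, 2.0, size=40)   # informative components
sigma = np.ones(N)

chosen = list(rng.choice(N, size=M, replace=False))
for _ in range(500):
    d = np.abs(mu[chosen]) / sigma[chosen]          # per-component d'
    worst = int(np.argmin(d))
    pool = np.setdiff1d(np.arange(N), chosen)
    cand = int(rng.choice(pool))
    if np.abs(mu[cand]) / sigma[cand] > d[worst]:   # keep only improving swaps
        chosen[worst] = cand

combined_d = np.sqrt(np.sum((mu[chosen] / sigma[chosen]) ** 2))
```

After enough iterations the tracked set drifts toward the high-sensitivity components, which is the model's account of within-direction learning.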
In the second training direction, the sensitivities of the measurements were randomly changed, but only the informative measurements had significant $d_i'$. By varying the number of measurements with high $d_i'$ in the population of informative measurements, we get the different modes of generalization (Fig. 5). 4 Discussions In contrast to previous results on the specificity of learning, we broadened the search for generalization beyond traditional transfer. We found that generalization is the rule, rather than an exception. Perceptual learning of motion discrimination generalizes in various forms: as acceleration of learning rate (Exp. 1), and as immediate improvement in performance (Exp. 2). Thus we show that perceptual learning is more similar to cognitive learning than previously thought, with both stimulus specificity and generalization as important ingredients. In our scheme, the assumption of limited computational resources forced the discrimination system to search in the measurement space. The generalization phenomena (transfer and increased learning rate) occur due to improvement in search sensitivity from one training direction to the next, as the size of the search space decreases with learning. Our scheme also predicts that learning rate should only improve if the subject both sees the stimulus and does the relevant discrimination task, in agreement with the results in Exp. 1. Importantly, our scheme does not predict transfer per se, but instead a dramatic increase in learning rate that is equivalent to transfer. Our model is qualitative and does not make any concrete quantitative predictions. We would like to emphasize that this is not a handicap of the model. Our goal is to show, qualitatively, that the various generalization phenomena should not surprise us, as they should naturally occur in a generic discrimination system with limited computational resources.
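The central resource-limited intuition, that halving the searched pool roughly halves the time to find a sensitive component, can be checked with a small random-search construction of our own (it is not the paper's simulation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy check (our construction): restricting random search to the informative
# half of the measurement pool roughly halves the expected time to hit a
# high-sensitivity component, mirroring the doubled learning rate.

def trials_to_find(pool_d, target=1.0):
    """Draw random components until one with d' >= target is found."""
    t = 0
    while True:
        t += 1
        if pool_d[rng.integers(len(pool_d))] >= target:
            return t

N = 200
d_full = np.zeros(N)
d_full[:20] = 1.5                 # 20 informative components out of 200
d_half = d_full[:N // 2]          # the informative half of the pool

t_full = np.mean([trials_to_find(d_full) for _ in range(2000)])
t_half = np.mean([trials_to_find(d_half) for _ in range(2000)])
```

The waiting times are geometric, so the expected counts are about 10 and 5 draws respectively: a two-fold speed-up from pruning the search space, in line with the observed doubling of learning rate.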
Thus we argue that it may be too early to use existing perceptual learning results for the identification of the cortical location of perceptual learning, and the levels at which modifications are taking place. References [1] Ahissar M and Hochstein S. Task difficulty and the specificity of perceptual learning. Nature, 387:401-406, 1997. [2] Ball K and Sekuler R. A specific and enduring improvement in visual motion discrimination. Science, 218:697-698, 1982. [3] Fiorentini A and Berardi N. Perceptual learning specific for orientation and spatial frequency. Nature, 287:43-44, 1980. [4] Gilbert C D. Early perceptual learning. PNAS, 91:1195-1197, 1994. [5] Karni A and Sagi D. Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. PNAS, 88:4966-4970, 1991. [6] Liu Z. Learning a visual skill that generalizes. Tech. Report, NECI, 1995. [7] Liu Z and Vaina L M. Stimulus specific learning: a consequence of stimulus-specific experiments? Perception, 24(supplement):21, 1995. [8] Poggio T, Fahle M, and Edelman S. Fast perceptual learning in visual hyperacuity. Science, 256:1018-1021, May 1992. [9] Ramachandran V S. Learning-like phenomena in stereopsis. Nature, 262:382-384, 1976. [10] Rubin N, Nakayama K, and Shapley R. Abrupt learning and retinal size specificity in illusory-contour perception. Current Biology, 7:461-467, 1997.
Probabilistic Image Sensor Fusion. Ravi K. Sharma¹, Todd K. Leen² and Misha Pavel¹. ¹Department of Electrical and Computer Engineering, ²Department of Computer Science and Engineering, Oregon Graduate Institute of Science and Technology, P.O. Box 91000, Portland, OR 97291-1000. Email: {ravi,pavel}@ece.ogi.edu, tleen@cse.ogi.edu. Abstract We present a probabilistic method for fusion of images produced by multiple sensors. The approach is based on an image formation model in which the sensor images are noisy, locally linear functions of an underlying, true scene. A Bayesian framework then provides for maximum likelihood or maximum a posteriori estimates of the true scene from the sensor images. Maximum likelihood estimates of the parameters of the image formation model involve (local) second order image statistics, and thus are related to local principal component analysis. We demonstrate the efficacy of the method on images from visible-band and infrared sensors. 1 Introduction Advances in sensing devices have fueled the deployment of multiple sensors in several computational vision systems [1, for example]. Using multiple sensors can increase reliability with respect to single sensor systems. This work was motivated by a need for an aircraft autonomous landing guidance (ALG) system [2, 3] that uses visible-band, infrared (IR) and radar-based imaging sensors to provide guidance to pilots for landing aircraft in low visibility. IR is suitable for night operation, whereas radar can penetrate fog. The application requires fusion algorithms [4] to combine the different sensor images. Images from different sensors have different characteristics arising from the varied physical imaging processes. Local contrast may be polarity reversed between visible-band and IR images [5, 6]. A particular sensor image may contain local features not found in another sensor image, i.e., sensors may report complementary features. Finally, individual sensors are subject to noise. Fig.
1(a) and 1(b) are visible-band and IR images respectively, of a runway scene showing polarity reversed (rectangle) and complementary (circle) features. These effects pose difficulties for fusion. An obvious approach to fusion is to average the pixel intensities from different sensors. Averaging, Fig. 1(c), increases the signal to noise ratio, but reduces the contrast where there are polarity reversed or complementary features [7]. Transform-based fusion methods [8, 5, 9] select from one sensor or another for fusion. They consist of three steps: (i) decompose the sensor images using a specified transform, e.g. a multiresolution Laplacian pyramid, (ii) fuse at each level of the pyramid by selecting the highest energy transform coefficient, and (iii) invert the transform to synthesize the fused image. Since features are selected rather than averaged, they are rendered at full contrast, but the methods are sensitive to sensor noise, see Fig. 1(d). To overcome the limitations of averaging or selection methods, and put sensor fusion on firm theoretical grounds, we explicitly model the production of sensor images from the true scene, including the effects of sensor noise. From the model, and sensor images, one can ask: What is the most probable true scene? This forms the basis for fusing the sensor images. Our technique uses the Laplacian pyramid representation [5], with step (ii) above replaced by our probabilistic fusion. A similar probabilistic framework for sensor fusion is discussed in [10]. 2 The Image Formation Model The true scene, denoted s, gives rise to a sensor image through a noisy, non-linear transformation. For ALG, s would be an image of the landing scene under conditions of uniform lighting, unlimited visibility, and perfect sensors.
We model the map from the true scene to a sensor image by a noisy, locally affine transformation whose parameters are allowed to vary across the image (actually across the Laplacian pyramid):
$a_i(\tilde{r}, t) = \beta_i(\tilde{r}, t)\, s(\tilde{r}, t) + \alpha_i(\tilde{r}, t) + \epsilon_i(\tilde{r}, t) \quad (1)$
where s is the true scene, $a_i$ is the i-th sensor image, $\tilde{r} \equiv (x, y, k)$ is the hyperpixel location, with x, y the pixel coordinates and k the level of the pyramid, t is the time, $\alpha$ is the sensor offset, $\beta$ is the sensor gain (which includes the effects of local polarity reversals and complementarity), and $\epsilon$ is the (zero-mean) sensor noise. To simplify notation, we adopt the matrix form
$a = \beta s + \alpha + \epsilon \quad (2)$
where $a = [a_1, a_2, \ldots, a_q]^T$, $\beta = [\beta_1, \beta_2, \ldots, \beta_q]^T$, $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_q]^T$, s is a scalar and $\epsilon = [\epsilon_1, \epsilon_2, \ldots, \epsilon_q]^T$, and we have dropped the reference to location and time. Since the image formation parameters $\beta$, $\alpha$, and the sensor noise covariance $\Sigma_\epsilon$ can vary from hyperpixel to hyperpixel, the model can express local polarity reversals, complementary features, spatial variation of sensor gain, and noise. We do assume, however, that the image formation parameters and sensor noise distribution vary slowly with location.¹ Hence, a particular set of parameters is considered to hold true over a spatial region of several square hyperpixels. We will use this assumption implicitly when we estimate these parameters from data. The model (2) fits the framework of the factor analysis model in statistics [11, 12]. Here the hyperpixel values of the true scene s are the latent variables or common factors, $\beta$ contains the factor loadings, and the sensor noise $\epsilon$ values are the independent factors. Estimation of the true scene is equivalent to estimating the common factors from the observations a.
¹ Specifically the parameters vary slowly on the spatio-temporal scales over which the true scene s may exhibit large variations.
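The image formation model of Eqs. (1)-(2) is easy to simulate; the sketch below (gains, offsets, and noise levels are illustrative values of ours, not the paper's) generates two sensor views of one scene patch, with the second sensor polarity-reversed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal simulation of the image formation model, Eqs. (1)-(2):
# a_i = beta_i * s + alpha_i + eps_i at each hyperpixel.
# Parameter values below are illustrative only.

s = rng.normal(size=(16, 16))              # "true scene" patch
beta = np.array([1.0, -0.8])               # sensor 2 is polarity-reversed
alpha = np.array([0.1, 0.5])               # sensor offsets
noise_std = np.array([0.05, 0.2])          # per-sensor noise levels

a = (beta[:, None, None] * s
     + alpha[:, None, None]
     + noise_std[:, None, None] * rng.normal(size=(2, 16, 16)))
```

The negative gain makes the second sensor image anti-correlated with the scene, which is exactly the polarity-reversal effect the fusion rule must handle.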
3 Bayesian Fusion Given the sensor intensities a, we will estimate the true scene s by appeal to a Bayesian framework. We assume that the probability density function of the latent variables s is a Gaussian with local mean $s_0(\tilde{r}, t)$ and local variance $\sigma_s^2(\tilde{r}, t)$. An attractive benefit of this setup is that the prior mean $s_0$ might be obtained from knowledge in the form of maps, or clear-weather images of the scene. Thus, such database information can be folded into the sensor fusion in a natural way. The density on the sensor images conditioned on the true scene, P(a|s), is normal with mean $\beta s + \alpha$ and covariance $\Sigma_\epsilon = \mathrm{diag}[\sigma_{\epsilon_1}^2, \sigma_{\epsilon_2}^2, \ldots, \sigma_{\epsilon_q}^2]$. The marginal density P(a) is normal with mean $\mu_a = \beta s_0 + \alpha$ and covariance
$C = \Sigma_\epsilon + \sigma_s^2\, \beta \beta^T \quad (3)$
Finally, the posterior density on s, given the sensor data a, P(s|a), is also normal with mean $M^{-1}\left(\beta^T \Sigma_\epsilon^{-1}(a - \alpha) + s_0/\sigma_s^2\right)$ and covariance $M^{-1} = \left(\beta^T \Sigma_\epsilon^{-1} \beta + 1/\sigma_s^2\right)^{-1}$. Given these densities, there are two obvious candidates for probabilistic fusion: maximum likelihood (ML), $\hat{s} = \arg\max_s P(a|s)$, and maximum a posteriori (MAP), $\hat{s} = \arg\max_s P(s|a)$. (4) The MAP fusion estimate is simply the posterior mean
$\hat{s} = \left[\beta^T \Sigma_\epsilon^{-1} \beta + 1/\sigma_s^2\right]^{-1} \left(\beta^T \Sigma_\epsilon^{-1}(a - \alpha) + s_0/\sigma_s^2\right) \quad (5)$
To obtain the ML fusion estimate we take the limit $\sigma_s^2 \to \infty$ in either (4) or (5). For both ML and MAP, the fused image $\hat{s}$ is a locally linear combination of the sensor images that can, through the spatio-temporal variations in $\beta$, $\alpha$, and $\Sigma_\epsilon$, properly respond to changes in the sensor characteristics that tax averaging or selection schemes. For example, if the second sensor has a polarity reversal relative to the first, then $\beta_2$ is negative and the two sensor contributions are properly subtracted. If the first sensor has high noise (large $\sigma_{\epsilon_1}^2$), its contribution to the fused image is attenuated. Finally, a feature missing from sensor 1 corresponds to $\beta_1 = 0$. The model compensates by accentuating the contribution from sensor 2.
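Because $\Sigma_\epsilon$ is diagonal, the MAP estimate of Eq. (5) reduces to scalar arithmetic at each hyperpixel. A hedged sketch (variable names are ours; `var_eps` holds the per-sensor noise variances):

```python
import numpy as np

# Sketch of the fusion rules: Eq. (5) (MAP) and its ML limit
# (sigma_s^2 -> infinity), at a single hyperpixel.

def map_fuse(a, beta, alpha, var_eps, s0, var_s):
    precision = np.sum(beta ** 2 / var_eps) + 1.0 / var_s
    return (np.sum(beta * (a - alpha) / var_eps) + s0 / var_s) / precision

def ml_fuse(a, beta, alpha, var_eps):
    precision = np.sum(beta ** 2 / var_eps)
    return np.sum(beta * (a - alpha) / var_eps) / precision

# Two sensors observing s = 2 with opposite polarity: the negative gain means
# the contributions add constructively instead of cancelling as in averaging.
a = np.array([2.0, -2.0])
beta = np.array([1.0, -1.0])
s_ml = ml_fuse(a, beta, np.zeros(2), np.array([0.1, 0.1]))   # -> 2.0
```

Plain averaging of these two pixel values would give 0 (total contrast loss at a polarity reversal); the model-based rule recovers the scene value.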
4 Model Parameter Estimates We need to estimate the local image formation model parameters $\alpha(\tilde{r}, t)$, $\beta(\tilde{r}, t)$ and the local sensor noise covariance $\Sigma_\epsilon(\tilde{r}, t)$. We estimate the latter from successive, motion compensated video frames from each sensor. First we estimate the average value at each hyperpixel, $\langle a_i(t) \rangle$, and the average square, $\langle a_i^2(t) \rangle$, by exponential moving averages. We next estimate the noise variance by the difference $\sigma_{\epsilon_i}^2(t) = \langle a_i^2(t) \rangle - \langle a_i(t) \rangle^2$. To estimate $\beta$ and $\alpha$, we assume that $\beta$, $\alpha$, $\Sigma_\epsilon$, $s_0$ and $\sigma_s^2$ are nearly constant over small spatial regions (5 x 5 blocks) surrounding the hyperpixel for which the parameters are desired. Essentially we are invoking a spatial analog of ergodicity, where ensemble averages are replaced by spatial averages, carried out locally over regions in which the statistics are approximately constant. To form a maximum likelihood (ML) estimate of $\alpha$, we extremize the data log-likelihood $\mathcal{L} = \sum_{n=1}^{N} \log[P(a_n)]$ with respect to $\alpha$ to obtain
$\alpha_{ML} = \mu_a - \beta s_0, \quad (6)$
where $\mu_a$ is the data mean, computed over a 5 x 5 hyperpixel local region (N = 25 points). To obtain a ML estimate of $\beta$, we set the derivatives of $\mathcal{L}$ with respect to $\beta$ equal to zero and recover
$C^{-1}(C - \Sigma_a)C^{-1}\beta = 0 \quad (7)$
where $\Sigma_a$ is the data covariance matrix, also computed over a 5 x 5 hyperpixel local region. The only non-trivial solution to (7) is
$\beta_{ML} = \frac{\sqrt{\tilde{\lambda} - 1}}{\sigma_s}\, \Sigma_\epsilon^{1/2}\, \tilde{u}\, r \quad (8)$
where $\tilde{u}$, $\tilde{\lambda}$ are the principal eigenvector and eigenvalue of the weighted data covariance matrix $\tilde{\Sigma}_a \equiv \Sigma_\epsilon^{-1/2}\, \Sigma_a\, \Sigma_\epsilon^{-1/2}$, and $r = \pm 1$. An alternative to maximum likelihood estimation is the least squares (LS) approach [11]. We obtain the LS estimate $\alpha_{LS}$ by minimizing $E_\alpha = \|\mu_a - \beta s_0 - \alpha\|^2$ with respect to $\alpha$. This gives $\alpha_{LS} = \mu_a - \beta s_0$. The least squares estimate $\beta_{LS}$ is obtained by minimizing $E_\beta = \|\Sigma_a - C\|^2$ with respect to $\beta$.
The solution to this minimization is
$\beta_{LS} = \frac{\sqrt{\lambda}}{\sigma_s}\, U\, r \quad (12)$
where U, $\lambda$ are the principal eigenvector and eigenvalue of the noise-corrected covariance matrix $(\Sigma_a - \Sigma_\epsilon)$, and $r = \pm 1$.² The estimation procedures cannot provide values of the priors $\sigma_s^2$ and $s_0$. Were we dealing with a single global model, this would pose no problem. But we must impose a constraint in order to smoothly piece together our local models. We impose that $\|\beta\| = 1$ everywhere, or by (12), $\sigma_s^2 = \lambda$. Recall that $\lambda$ is the leading eigenvalue of $\Sigma_a - \Sigma_\epsilon$, and thus captures the scale of variations in a that arise from variations in s. Thus we would expect $\lambda \propto \sigma_s^2$. Our constraint insures that the proportionality constant be the same for each local model. Next, note that changing $s_0$ causes a shift in s. To maintain consistency between local regions, we take $s_0 = 0$ everywhere. These choices for $\sigma_s^2$ and $s_0$ constrain the parameter estimates to
$\beta_{LS} = r\, U \quad \text{and} \quad \alpha_{LS} = \mu_a. \quad (13)$
In (5), $\sigma_s^2$ and $s_0$ are defined at each hyperpixel. However, to estimate $\beta$ and $\alpha$, we used spatial averages to compute the sample mean and covariance. This is somewhat inconsistent, since the spatial variation of $s_0$ (e.g. when there are edges in the scene) is not explicitly captured in the model mean and covariance. These variations are, instead, attributed to $\sigma_s^2$, resulting in overestimation of the latter. A more complete model would explicitly model the spatial variations of $s_0$, though we expect this will produce only minor changes in the results. Finally, the sign parameter r is not specified.
² The least squares and maximum likelihood solutions are identical when the model is exact, $\Sigma_a = C$, i.e. the observed data covariance is exactly of the form dictated by the model. Under this condition, $\tilde{u} = (U^T \Sigma_\epsilon^{-1} U)^{-1/2}\, \Sigma_\epsilon^{-1/2}\, U$ and $(\tilde{\lambda} - 1) = \lambda\, (U^T \Sigma_\epsilon^{-1} U)$. The LS and ML solutions are also identical when the noise covariance is homoscedastic, $\Sigma_\epsilon = \sigma_\epsilon^2 I$, even if the model is not exact.
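The estimation pipeline of Section 4 can be sketched end to end: a running noise-variance estimate from frames, then the LS gain estimate of Eq. (12) from a local block. All names are ours, the decay rate `lam` is our choice (the paper does not give one), and the synthetic data at the bottom is an illustration, not the paper's imagery:

```python
import numpy as np

rng = np.random.default_rng(5)

# (a) Noise variance from exponential moving averages: var = <a^2> - <a>^2.
def update_noise_stats(mean, mean_sq, frame, lam=0.9):   # lam is our choice
    mean = lam * mean + (1.0 - lam) * frame
    mean_sq = lam * mean_sq + (1.0 - lam) * frame ** 2
    return mean, mean_sq, np.maximum(mean_sq - mean ** 2, 0.0)

# (b) beta_LS direction from Eq. (12): principal eigenvector of the
#     noise-corrected local covariance (Sigma_a - Sigma_eps). With the
#     normalization sigma_s^2 = lambda, beta has unit norm.
def ls_beta(a_block, var_eps):
    """a_block: (q, n) sensor values stacked over a local region."""
    mu_a = a_block.mean(axis=1)
    sig_a = np.cov(a_block)                    # q x q data covariance
    lam_all, vecs = np.linalg.eigh(sig_a - np.diag(var_eps))
    return vecs[:, -1], lam_all[-1], mu_a      # principal eigenpair, data mean

# Static scene plus noise over 500 frames (illustrative values).
mean = np.zeros(64)
mean_sq = np.zeros(64)
for _ in range(500):
    frame = 1.0 + 0.3 * rng.normal(size=64)
    mean, mean_sq, noise_var = update_noise_stats(mean, mean_sq, frame)

# Two sensors with a known unit-norm gain vector; ls_beta recovers it
# up to the sign ambiguity r = +/-1.
beta_true = np.array([0.6, -0.8])
s = rng.normal(size=5000)
a_block = beta_true[:, None] * s + 0.1 * rng.normal(size=(2, 5000))
u, lam_top, mu_a = ls_beta(a_block, var_eps=np.array([0.01, 0.01]))
```

The recovered eigenvector aligns with the true gains only up to sign, which is exactly why the sign parameter r must be pinned down separately, as discussed next.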
In order to properly piece together our local models, we must choose r at each hyperpixel in such a way that $\beta$ changes direction slowly as we move from hyperpixel to hyperpixel and encounter changes in the local image statistics. That is, large direction changes due to arbitrary sign reversals are not allowed. We use a simple heuristic to accomplish this. 5 Relation to PCA The MAP and ML fusion rules are closely related to PCA. To see this, assume that the noise is homoscedastic, $\Sigma_\epsilon = \sigma_\epsilon^2 I$, and use the parameter estimates (13) in the MAP fusion rule (5), reducing the latter to
$\hat{s} = \frac{1}{1 + \sigma_\epsilon^2/\sigma_s^2}\, V_a^T (a - \mu_a) + \frac{1}{1 + \sigma_s^2/\sigma_\epsilon^2}\, s_0 \quad (14)$
where $V_a$ is the principal eigenvector of the data covariance matrix $\Sigma_a$. The MAP estimate $\hat{s}$ is simply a scaled and shifted local PCA projection of the sensor data. Both the scaling and shift arise because the prior distribution on s tends to bias $\hat{s}$ towards $s_0$. When the prior is flat ($\sigma_s^2 \to \infty$, or equivalently when using the ML fusion estimate), or when the noise variance vanishes, the fused image is given by a simple local PCA projection
$\hat{s} = V_a^T (a - \mu_a) \quad (15)$
6 Experiments and Results We applied our fusion method to visible-band and IR runway images, Fig. 1, containing additive Gaussian noise. Fig. 1(e) shows the result of ML fusion with $\beta$ and $\alpha$ estimated using (13). ML fusion performs better than either averaging or selection in regions that contain local polarity reversals or complementary features. ML fusion gives higher weight to IR in regions where the features in the two images are common, thus reducing the effects of noise in the visible-band image. ML fusion gives higher weight to the appropriate sensor in regions with complementary features. Fig. 1(f) shows the result of MAP fusion (5) with the priors $\sigma_s^2$ and $s_0$ those dictated by the consistency requirements discussed in section 4. Clearly, the MAP image is less noisy than the ML image. In regions of low sensor image contrast, $\sigma_s^2$ is low (since $\lambda$
is low), thus the contribution from the sensor images is attenuated compared to the ML fusion rule. Hence the noise is attenuated. In regions containing features such as edges, $\sigma_s^2$ is high (since $\lambda$ is high); hence the contribution from the sensor images is similar to that in ML fusion. Figure 1: Fusion of visible-band and IR images containing additive Gaussian noise: (a) Visible-band image, (b) IR image, (c) Averaging, (d) Selection, (e) ML, (f) MAP. In Fig. 2 we demonstrate the use of a database image for fusion. Fig. 2(a) and 2(b) are simulated noisy sensor images from visible-band and IR, that depict a runway with an aircraft on it. Fig. 2(c) is an image of the same scene as might be obtained from a terrain database. Although this image is clean, it does not show the actual situation on the runway. One can use the database image pixel intensities as the prior mean $s_0$ in the MAP fusion rule (5). The prior variance $\sigma_s^2$ in (5) can be regarded as a measure of confidence in the database image: its value controls the relative contribution of the sensors vs. the database image in the fused image. (The parameters $\beta$ and $\alpha$, and the sensor noise covariance $\Sigma_\epsilon$, were estimated exactly as before.) Fig. 2(d), 2(e) and 2(f) show the MAP-fused image as a function of increasing $\sigma_s^2$. Higher values of $\sigma_s^2$ accentuate the contribution of the sensor images, whereas lower values of $\sigma_s^2$ accentuate the contribution of the database. 7 Discussion We presented a model-based probabilistic framework for fusion of images from multiple sensors and exercised the approach on visible-band and IR images. The approach provides both a rigorous framework for PCA-like fusion rules, and a principled way to combine information from a terrain database with sensor images. We envision several refinements of the approach given here. Writing new image formation models at each hyperpixel produces an overabundance of models.
Early experiments show that this can be relaxed by using the same model parameters over regions of several square hyperpixels, rather than recalculating for each hyperpixel. A further refinement could be provided by adopting a mixture of linear models to build up the non-linear image formation model. Finally, we have used multiple frames from a video sequence to obtain ML and MAP fused sequences, and one should be able to produce superior parameter estimates by suitable use of the video sequence.

R. K. Sharma, T. K. Leen and M. Pavel

Figure 2: Fusion of simulated visible-band and IR images using database image. (a) Visible-band image, (b) IR image, (c) database image.

Acknowledgments - This work was supported by NASA Ames Research Center grant NCC2-S11. TKL was partially supported by NSF grant ECS-9704094.

References

[1] L. A. Klein. Sensor and Data Fusion Concepts and Applications. SPIE, 1993.
[2] J. R. Kerr, D. P. Pond, and S. Inman. Infrared-optical multisensor for autonomous landing guidance. Proceedings of SPIE, 2463:38-45, 1995.
[3] B. Roberts and P. Symosek. Image processing for flight crew situation awareness. Proceedings of SPIE, 2220:246-255, 1994.
[4] M. Pavel and R. K. Sharma. Model-based sensor fusion for aviation. In J. G. Verly, editor, Enhanced and Synthetic Vision 1997, volume 3088, pages 169-176. SPIE, 1997.
[5] P. J. Burt and R. J. Kolczynski. Enhanced image capture through fusion. In Fourth Int. Conf. on Computer Vision, pages 173-182. IEEE Comput. Soc., 1993.
[6] H. Li and Y. Zhou. Automatic visual/IR image registration. Optical Engineering, 35(2):391-400, 1996.
[7] M. Pavel, J. Larimer, and A. Ahumada. Sensor fusion for synthetic vision. In Proceedings of the Society for Information Display, pages 475-478. SPIE, 1992.
[8] P. Burt. A gradient pyramid basis for pattern-selective image fusion. In Proceedings of the Society for Information Display, pages 467-470. SPIE, 1992.
[9] A. Toet. Hierarchical image fusion.
Machine Vision and Applications, 3:1-11, 1990.
[10] J. J. Clark and A. L. Yuille. Data Fusion for Sensory Information Processing Systems. Kluwer, Boston, 1990.
[11] A. Basilevsky. Statistical Factor Analysis and Related Methods. Wiley, 1994.
[12] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Technical report NCRG/97/010, Neural Computing Research Group, Aston University, UK, 1997.
Tight Bounds for the VC-Dimension of Piecewise Polynomial Networks

Akito Sakurai
School of Knowledge Science, Japan Advanced Institute of Science and Technology, Nomi-gun, Ishikawa 923-1211, Japan.
CREST, Japan Science and Technology Corporation.
ASakurai@jaist.ac.jp

Abstract

O(ws(s log d + log(dqh/s))) and O(ws((h/s) log q + log(dqh/s))) are upper bounds for the VC-dimension of a set of neural networks of units with piecewise polynomial activation functions, where s is the depth of the network, h is the number of hidden units, w is the number of adjustable parameters, q is the maximum of the number of polynomial segments of the activation function, and d is the maximum degree of the polynomials; also Ω(ws log(dqh/s)) is a lower bound for the VC-dimension of such a network set, and these bounds are tight for the cases s = Θ(h) and s constant. For the special case q = 1, the VC-dimension is Θ(ws log d).

1 Introduction

In spite of its importance, we had been unable to obtain VC-dimension values for practical types of networks, until fairly tight upper and lower bounds were obtained ([6], [8], [9], and [10]) for linear threshold element networks, in which all elements perform a threshold function on a weighted sum of inputs. Roughly, the lower bound for these networks is (1/2)w log h and the upper bound is w log h, where h is the number of hidden elements and w is the number of connecting weights (for the one-hidden-layer case w ≈ nh, where n is the input dimension of the network). In many applications, though, sigmoidal functions, specifically the typical sigmoid function 1/(1 + exp(−x)), or piecewise linear functions for economy of calculation, are used instead of the threshold function. This is mainly because the differentiability of the functions is needed to perform backpropagation or other learning algorithms. Unfortunately, explicit bounds obtained so far for the VC-dimension of sigmoidal networks exhibit large gaps (O(w²h²) ([3]), Ω(w log h) for bounded depth
and Ω(wh) for unbounded depth) and are hard to improve. For the piecewise linear case, Maass obtained the result that the VC-dimension is O(w² log q), where q is the number of linear pieces of the function ([5]). Recently Koiran and Sontag ([4]) proved a lower bound Ω(w²) for the piecewise polynomial case, and they claimed that this settles an open problem posed by Maass, namely whether there is a matching w² lower bound for this type of network. But we still have something to do, since they showed it only for the case w = Θ(h) with the number of hidden layers unbounded; the O(w²) bound also has room for improvement. In this paper we improve the bounds obtained by Maass, Koiran and Sontag, and consequently show the role of polynomials, which cannot be played by linear functions, and the role of the constant functions that can appear in the piecewise polynomial case, which cannot be played by polynomial functions. After submission of the draft, we found that Bartlett, Maiorov, and Meir had obtained similar results prior to ours (also in these proceedings). Our advantage is that we clarified the role played by the degree and the number of segments in both bounds.

2 Terminology and Notation

log stands for the logarithm base 2 throughout the paper. The depth of a network is the length of the longest path from its external inputs to its external output, where the length is the number of units on the path. Likewise we can assign a depth to each unit in a network as the length of the longest path from the external inputs to the output of the unit. A hidden layer is a set of units at the same depth other than the depth of the network. Therefore a depth-L network has L − 1 hidden layers. In many cases W will stand for a vector composed of all the connection weights in the network (including threshold values for the threshold units), and w is the length of W.
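Before developing the bounds, it may help to see how the two piecewise polynomial upper bounds quoted in the abstract trade off against each other. The snippet below is our own illustration with arbitrary parameter values; the O(·) constants are suppressed, so only the comparison between the two expressions is meaningful.

```python
import math

def bound_a(w, s, d, q, h):
    """First upper bound: O(ws(s log d + log(dqh/s)))."""
    return w * s * (s * math.log2(d) + math.log2(d * q * h / s))

def bound_b(w, s, d, q, h):
    """Second upper bound: O(ws((h/s) log q + log(dqh/s)))."""
    return w * s * ((h / s) * math.log2(q) + math.log2(d * q * h / s))

w, d, q, h = 1000, 2, 2, 64
shallow = min(bound_a(w, 3, d, q, h), bound_b(w, 3, d, q, h))   # depth 3
deep = min(bound_a(w, 64, d, q, h), bound_b(w, 64, d, q, h))    # depth = h
```

For a shallow network the first expression is smaller, while for a deep network (s = Θ(h)) the second one wins; taking the minimum of the two is what the combined statement of Theorem 3.4 amounts to.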
The number of units in the network, excluding "input units," will be denoted by h; in other words, the number of hidden units plus one, or sometimes just the number of hidden units. A function whose range is {0, 1} (a set of 0 and 1) is called a Boolean-valued function.

3 Upper Bounds

To obtain upper bounds for the VC-dimension we use a region counting argument, developed by Goldberg and Jerrum [2]. The VC-dimension of the network, that is, the VC-dimension of the function set {f_G(W; ·) | W ∈ R^w}, is upper bounded by

max { N | 2^N ≤ max_{x_1,...,x_N} N_cc(R^w − ∪_{i=1}^N N(f_G(·; x_i))) }   (3.1)

where N_cc(·) is the number of connected components and N(f) is the set {W | f(W) = 0}. The following two theorems are convenient. Refer to [11] and [7] for the first theorem. The lemma that follows is easily proven.

Theorem 3.1. Let f_G(W; x_i) (1 ≤ i ≤ N) be real polynomials in W, each of degree d or less. The number of connected components of the set ∩_{i=1}^N {W | f_G(W; x_i) = 0} is bounded from above by 2(2d)^w, where w is the length of W.

Lemma 3.2. If m ≥ w(log C + log log C + 1), then 2^m > (mC/w)^w for C ≥ 4.

First let us consider the polynomial activation function case.

Theorem 3.3. Suppose that the activation functions are polynomials of degree at most d. O(ws log d) is an upper bound of the VC-dimension for networks of depth s. When s = Θ(h) the bound is O(wh log d). More precisely, ws(log d + log log d + 2) is an upper bound.

Note that if we allow a polynomial as the input function, d₁d₂ will replace d above, where d₁ is the maximum degree of the input functions and d₂ is that of the activation functions. The theorem is clear from the fact that the network function (f_G in (3.1)) is a polynomial of degree at most d^s + d^{s−1} + ... + d, together with Theorem 3.1 and Lemma 3.2. For the piecewise linear case, we have two types of bounds. The first one is suitable for the bounded depth case (i.e.
the depth s = o(h)) and the second one for the unbounded depth case (i.e. s = Θ(h)).

Theorem 3.4. Suppose that the activation functions are piecewise polynomials with at most q segments of polynomials of degree at most d. O(ws(s log d + log(dqh/s))) and O(ws((h/s) log q + log(dqh/s))) are upper bounds for the VC-dimension, where s is the depth of the network. More precisely, ws((s/2) log d + log(qh)) and ws((h/s) log q + log d) are asymptotic upper bounds.

Note that if we allow a polynomial as the input function then d₁d₂ will replace d above, where d₁ is the maximum degree of the input functions and d₂ is that of the activation functions.

Proof. We have two different ways to calculate the bounds. First,

∏_{i=1}^s [8eNqh_i(d^{i−1} + ... + d + 1)d / (w_1 + ... + w_i)]^{w_1+...+w_i} ≤ [8eNqd^{(s+1)/2}(h/s)]^{ws}

where h_i is the number of hidden units in the i-th layer and w_1 + ... + w_i is the length of w_1 ∘ ... ∘ w_i, with ∘ the operator that forms a new vector by concatenating two vectors. From this we get an asymptotic upper bound ws((s/2) log d + log(qh)) for the VC-dimension. Secondly, from a similar calculation we get an asymptotic upper bound ws((h/s) log q + log d) for the VC-dimension. Combining these two bounds we get the result. Note that the s in log(dqh/s) is introduced to eliminate an unduly large term emerging when s = Θ(h). □

4 Lower Bounds for Polynomial Networks

Theorem 4.1. Let us consider the case where the activation functions are polynomials of degree at most d. Ω(ws log d) is a lower bound of the VC-dimension for networks of depth s. When s = Θ(h) the bound is Ω(wh log d). More precisely, (1/16)w(s − 6) log d is an asymptotic lower bound, where d is the degree of the activation functions and is a power of two, and h is restricted to O(n²) for input dimension n.

The proof consists of several lemmas. The network we construct will have two parts: an encoder and a decoder. We deliberately fix the N input points.
The decoder part has a fixed underlying architecture and fixed connection weights, whereas the encoder part has variable weights, so that for any given binary outputs for the input points the decoder can output the specified value from the codes in which the output values are encoded by the encoder. First we consider the decoder, which has two real inputs and one real output. One of the two inputs, y, holds a code of a binary sequence b_1, b_2, ..., b_m, and the other, x, holds a code of a binary sequence c_1, c_2, ..., c_m. The elements of the latter sequence are all 0's except for c_j = 1, where c_j = 1 orders the decoder to output b_j from it and consequently from the network. We show two types of networks: one has activation functions of degree at most two and VC-dimension w(s − 1); the other has activation functions of degree d, a power of two, and VC-dimension w(s − 5) log d.

We use for convenience two functions: H_θ(x) = 1 if x ≥ θ and 0 otherwise, and H_{θ,φ}(x) = 1 if x ≥ φ, 0 if x ≤ θ, and undefined otherwise. Throughout this section we will use the simple logistic function p(x) = (16/3)x(1 − x), which has the following property.

Lemma 4.2. For any binary sequence b_1, b_2, ..., b_m, there exists an interval [x_1, x_2] such that b_i = H_{1/4,3/4}(p^i(x)) and 0 ≤ p^i(x) ≤ 1 for any x ∈ [x_1, x_2].

The next lemmas are easily proven.

Lemma 4.3. For any binary sequence c_1, c_2, ..., c_m which is all 0's except for c_j = 1, there exists x_0 such that c_i = H_{1/4,3/4}(p^i(x_0)). Specifically we take x_0 = p_L^{−(j−1)}(1/4), where p_L^{−1}(x) is the inverse of p(x) on [0, 1/2]. Then p^{j−1}(x_0) = 1/4, p^j(x_0) = 1, p^i(x_0) = 0 for all i > j, and p^{j−i}(x_0) ≤ (1/4)^i for all positive i ≤ j.

Proof. Clear from the fact that p(x) ≥ 4x on [0, 1/4]. □

Lemma 4.4. For any binary sequence b_1, b_2, ..., b_m, take y such that b_i = H_{1/4,3/4}(p^i(y)) and 0 ≤ p^i(y) ≤ 1 for all i, and take x_0 = p_L^{−(j−1)}(1/4); then H_{7/12,3/4}(Σ_{i=1}^m p^i(x_0)p^i(y)) = b_j, i.e.
H_0(Σ_{i=1}^m p^i(x_0)p^i(y) − 2/3) = b_j.

Proof. If b_j = 0, then Σ_{i=1}^m p^i(x_0)p^i(y) = Σ_{i=1}^j p^i(x_0)p^i(y) ≤ p^j(y) + Σ_{i=1}^{j−1} (1/4)^i < p^j(y) + 1/3 ≤ 7/12. If b_j = 1, then Σ_{i=1}^m p^i(x_0)p^i(y) ≥ p^j(x_0)p^j(y) ≥ 3/4. □

By the above lemmas, the network in Figure 1 (left) has the following function: given a binary sequence b_1, ..., b_m and an integer j, we can present y, which depends only on b_1, ..., b_m, and x_0, which depends only on j, such that b_j is output from the decoder. Note that we use (x + y)² − (x − y)² = 4xy to realize a multiplication unit.

Figure 1: Network architecture consisting of polynomials of order two (left) and of order a power of two (right).

For the case of degree higher than two we have to construct a somewhat more complicated network, using another simple logistic function μ(x) = (36/5)x(1 − x). We need the next lemma.

Lemma 4.5. Take x_0 = μ_L^{−(j−1)}(1/6), where μ_L^{−1}(x) is the inverse of μ(x) on [0, 1/2]. Then μ^{j−1}(x_0) = 1/6, μ^j(x_0) = 1, μ^i(x_0) = 0 for all i > j, and μ^{j−i}(x_0) ≤
Then pl(x + £) = p(pl-1\X + f)) < p(pl-1(X) + (16/3)1-1£)) < pl(x) + (16/3)1£. Secondly suppose that p -l(x + £) is on the uphill but x is on the downhill. Then pl(x + £) = p(pl-1(x + f)) > p(pl-1(x) - (16/3)1-1£)) > pl(x) - (16/3)1£. The other two cases are similar. 0 Proof of Lemma 4.6. We will show that the difference between piHl(y) and E~==-ol p'(z)J-Li(xo) is sufficiently small. Clearly Z = E:1 J-Lik(X1)pik(y) = E{=l J-Lik(X1)pik(y) $ pik(y)+ E{~i(1/6k)i < pik(y)+1/(6k-1) and pik(y) < z. If Z is on the uphill of pI then by using the above lemma, we get E~==-Ol pi(z)J-Li(xO) = E~=o p'(z)J-Li(xo) < pl(z) + 1/(6k - 1) < piHl(y) + (1 + (16/3)1)(1/(6k - 1)) < pik+1(y) + 1/4 (note that 1 $ k - 1 and k ~ 2). If z is on the downhill of pI then by using the above lemma, we get E~==-Ol pi(Z)J-Li(xo) = E~=o pi(z)J-Li(xo) > pl(z) > pl(pik(y)) _ (16/3)1(1/(6k - 1)) > pik+l(y) - 1/4. 0 Next we show the encoding scheme we adopted. We show only the case w = 8(h2 ) since the case w = 8(h) or more generally w = O(h2) is easily obtained from this. Theorem 4.8 There is a network of2n inputs, 2h hidden units with h2 weights w, 328 A. Sakurai and h 2 sets of input values Xl, ... ,Xh2 such that for any set of values Y1, ... , Yh2 we can chose W to satisfy Yi = fG(w; Xi). Proof. We extensively utilize the fact that monomials obtained by choosing at most k variables from n variables with repetition allowed (say X~X2X6) are all linearly independent ([1]). Note that the number of monomials thus formed is (n~m). Suppose for simplicity that we have 2n inputs and 2h main hidden units (we have other hidden units too), and h = (n~m). By using multiplication units (in fact each is a composite of two squaring units and the outputs are supposed to be summed up as in Figure 1), we can form h = (n~m) linearly independent monomials composed of variables Xl, . •• ,Xn by using at most (m -l)h multiplication units (or h nominal units when m = 1). 
In the same way, we can form h linearly independent monomials composed of variables Xn+ll . .• , X2n. Let us denote the monomials by U1, •.• , Uh and V1, . .. , Vh. We form a subnetwork to calculate 2:7=1 (2:7=1 Wi,jUi)Vj by using h multiplication units. Clearly the calculated result Y is the weighted sum of monomials described above where the weights are Wi,j for 1 $ i, j $ h. Since y = fG(w; x) is a linear combination of linearly independent terms, if we choose appropriately h2 sets of values Xll . . . , Xh2 for X = (Xl, .. • , X2n) , then for any assignment of h2 values Y1, ... ,Yh2 to Y we have a set of weights W such that Yi = f(xi, w). 0 Proof of Theorem -4.1. The whole network consists of the decoder and the encoder. The input points are the Cartesian product of the above Xl, ... ,Xh2 and {xo defined in Lemma 4.4 for bj = 111 $ j :$ 8'} for some h where 8' is the number of bits to be encoded. This means that we have h2 s points that can be shattered. Let the number of hidden layers of the decoder be 8. The number of units used for the decoder is 4(8 - 1) + 1 (for the degree 2 case which can decode at most 8 bits) or 4(8 - 3) + 4(k - 1) + 1 (for the degree 2k case which can decode at most (8 - 2)k bits). The number of units used for the encoder is less than 4h; we though have constraints on 8 (which dominates the depth of the network) and h (which dominates the number of units in the network) that h :$ (n~m) and m = O(s) or roughly log h = 0(8) be satisfied. Let us chose m = 2 (m = log 8 is a better choise). As a result, by using 4h + 4(s I} + 1 (or 4h + 4(8 - 3) + 4(k -1) + 1) units in s + 2 layers, we can shatter h 28 (or h 2 (8 - 2) log d) points; or asymptotically by using h units 8 layers we can shatter (1/16)w( 8 - 3) (or (1/16)w( 8 - 5) log d) points. 0 5 Piecewise Polynomial Case Theorem 5.1. Let us consider a set of networks of units with linear input functions and piecewise polynomial (with q polynomial segments) activation functions. 
Q( W8 log( dqh/ 8)) is a lower bound of the VC-dimension, where 8 is the depth of the network and d is the maximum degree of the activation functions. More precisely, (1/16)w(s - 6)(10gd+ log(h/s) + logq) is an asymptotic lower bound. For the scarcity of space, we give just an outline of the proof. Our proof is based on that of the polynomial networks. We will use h units with activation function of q ~ 2 polynomial segments of degree at most d in place of each of pk unit in the decoder, which give the ability of decoding log dqh bits in one layer and slog dqh bits in total by 8( 8h) units in total. If h designates the total number of units, the Tight Bounds for the VC-Dimension of Piecewise Polynomial Networks 329 number of the decodable bits is represented as log(dqh/s). In the following for simplicity we suppose that dqh is a power of 2. Let pk(x) be the k composition of p(x) as usual i.e. pk(x) = p(pk-l(x)) and pl(X) = p(x). Let plogd,/(x) = /ogd(,X/(x)), where 'x(x) = 4x if x $ 1/2 and 4 - 4x otherwise, which by the way has 21 polynomial segments. Now the pk unit in the polynomial case is replaced by the array /ogd,logq,logh(x) of h units that is defined as follows: (i) plogd,logq,l(X) is an array of two units; one is plogd,logq(,X+(x)) where ,X+(x) = 4x if x $ 1/2 and 0 otherwise and the other is plog d,log q ('x - (x)) where ,X - (x) = 0 if x $ 1/2 and 4 - 4x otherwise. (ii) plog d,log q,m~x) is the array of 2m units, each with one of the functions plogd,logq(,X ( . .• ('x±(x)) . . . )) where ,X±( ... ('x±(x)) .. ·) is the m composition of 'x+(x) or 'x - (x). Note that ,X±( ... ('x±(x)) ... ) has at most three linear segments (one is linear and the others are constant 0) and the sum of 2m possible combinations t(,X±( . . . ('x±(x)) · . . )) is equal to t(,Xm(x)) for any function f such that f(O) = O. Then lemmas similar to the ones in the polynomial case follow. 
References

[1] Anthony, M.: Classification by polynomial surfaces, NeuroCOLT Technical Report Series, NC-TR-95-011 (1995).
[2] Goldberg, P. and M. Jerrum: Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers, Proc. Sixth Annual ACM Conference on Computational Learning Theory, 361-369 (1993).
[3] Karpinski, M. and A. Macintyre: Polynomial bounds for VC dimension of sigmoidal neural networks, Proc. 27th ACM Symposium on Theory of Computing, 200-208 (1995).
[4] Koiran, P. and E. D. Sontag: Neural networks with quadratic VC dimension, Journ. Comp. Syst. Sci., 54, 190-198 (1997).
[5] Maass, W. G.: Bounds for the computational power and learning complexity of analog neural nets, Proc. 25th Annual Symposium on the Theory of Computing, 335-344 (1993).
[6] Maass, W. G.: Neural nets with superlinear VC-dimension, Neural Computation, 6, 877-884 (1994).
[7] Milnor, J.: On the Betti numbers of real varieties, Proc. of the AMS, 15, 275-280 (1964).
[8] Sakurai, A.: Tighter bounds of the VC-dimension of three-layer networks, Proc. WCNN'93, III, 540-543 (1993).
[9] Sakurai, A.: On the VC-dimension of depth four threshold circuits and the complexity of Boolean-valued functions, Proc. ALT'93 (LNAI 744), 251-264 (1993); refined version in Theoretical Computer Science, 137, 109-127 (1995).
[10] Sakurai, A.: On the VC-dimension of neural networks with a large number of hidden layers, Proc. NOLTA'93, IEICE, 239-242 (1993).
[11] Warren, H. E.: Lower bounds for approximation by nonlinear manifolds, Trans. AMS, 133, 167-178 (1968).

On-Line Learning with Restricted Training Sets: Exact Solution as Benchmark for General Theories

H. C. Rae
hamish.rae@kcl.ac.uk

P. Sollich
psollich@mth.kcl.ac.uk

Department of Mathematics
King's College London
The Strand, London WC2R 2LS, UK

A. C. C.
Coolen
tcoolen@mth.kcl.ac.uk

Abstract

We solve the dynamics of on-line Hebbian learning in perceptrons exactly, for the regime where the size of the training set scales linearly with the number of inputs. We consider both noiseless and noisy teachers. Our calculation cannot be extended to non-Hebbian rules, but the solution provides a nice benchmark to test more general and advanced theories for solving the dynamics of learning with restricted training sets.

1 Introduction

Considerable progress has been made in understanding the dynamics of supervised learning in layered neural networks through the application of the methods of statistical mechanics. A recent review of work in this field is contained in [1]. For the most part, such theories have concentrated on systems where the training set is much larger than the number of updates. In such circumstances the probability that a question will be repeated during the training process is negligible, and it is possible to assume for large networks, via the central limit theorem, that the local field distribution is Gaussian. In this paper we consider restricted training sets; we suppose that the size of the training set scales linearly with N, the number of inputs. The probability that a question will reappear during the training process is no longer negligible, the assumption that the local fields have Gaussian distributions is not tenable, and it is clear that correlations will develop between the weights and the
A simple model of learning with restricted training sets which can be solved exactly is therefore particularly attractive and provides a yardstick against which more difficult and sophisticated general theories can, in due course, be tested and compared. We show how this can be accomplished for on-line Hebbian learning in perceptrons with restricted training sets and we obtain exact solutions for the generalisation error and the training error for a class of noisy teachers and students with arbitrary weight decay. Our theory is in excellent agreement with numerical simulations and our prediction of the probability density of the student field is a striking confirmation of them, making it clear that we are indeed dealing with local fields which are non-Gaussian. 2 Definitions We study on-line learning in a student percept ron S, which tries to perform a task defined by a teacher percept ron characterised by a fixed weight vector B* E ~N. We assume, however, that the teacher is noisy and that the actual teacher output T and the corresponding student response S are given by T: {-I, I}N ~ {-I, I} T(e) = sgn[B· eL S: {-I, I}N ~ {-I, I} S(e) = sgn[J· e]' where the vector B is drawn independently of e with probability p(B} which may depend explicitly on the correct teacher vector B*. Of particular interest are the following two choices, described in literature as output noise and Gaussian input noise, respectively: p(B} = >. 6(B+B*} + (1->.) 6(B-B*} (1) where >. ~ 0 represents the probability that the teacher output is incorrect, and N (B) = [~] T -I:f(B-Bo)2/'E2 P 211'~2 e . (2) The variance ~2 / N has been chosen so as to achieve appropriate scaling for N ~ CXl. Our learning rule will be the on-line Hebbian rule, i.e. J(f+l) = (1- ~)J(f) + ~ e(f) sgn[B(f)· e(f)] (3) where the non-negative parameters, and fJ are the decay rate and the learning rate, respectively. 
At each iteration step f an input vector e(f) is picked at random from a training set consisting of p = aN randomly drawn vectors e· E {-I, I} N, f..L = 1, . . . p. This set remains unchanged during the learning dynamics. At the same time the teacher selects at random, and independently of e(f}, the vector B(£), according to the probability distribution p(B} . Iterating equation (3) gives J(m) = (1 - ~) m J o + ~ ~ (1 _ ~) m-l-Ie(e) sgn[B(f) . e(f)] (4) (=0 We assume that the (noisy) teacher output is consistent in the sense that if a question e reappears at some stage during the training process the teacher makes the same choice of B in both cases, i.e. if e(e) = e(f') then also B(f) = B(e') . This consistency allows us to define a generalised training set iJ by including with the p 318 H. C. Rae, P. Sollich and A. C. C. Coo/en questions the corresponding teacher vectors: D = {(e,B 1), ... ,(e,BP)} There are two sources of randomness in this problem. First of all there is the random realisation of the 'path' n = ((e(O), B(O)), (e(l), B(l)), ... , (e(f), B (f)), ... }. This is simply the randomness of the stochastic process that gives the evolution of the vector J. Averages over this process will be denoted as ( ... ). Secondly there is the randomness in the composition of the training set. We will write averages over all training sets as ( ... )sets. We note that p (J[e(f), B(e))) = ~ L f(e, Btl) p tL=1 (for all e) and that averages over all possible realisations of the training set are given by (J[(e, B1), (e, B2), ... , (e, BP)])sets = L L ... L 2~P J [ IT p(BIl) dBIl] f[(e, B1), (e, B2), ... ,(e, BP)] e1 e e tL=l where e E {-I, l}N. We normalise B* so that [B*]2 = 1 and choose the time unit t = miN. We finally assume that J o and B* are statistically independent of the training vectors ell, and that they obey Ji(O), B; = O(N-~) for all i. 
3 Explicit Microscopic Expressions At the m-th stage of the learning process the two simple scalar observables Q[J] = J2 and R[J] = B* . J, and the joint distribution of fields x = J . e, y = B* . e, z = B . e (calculated over the questions in the training set D), are given by Q[J(m)] = J2(m) R[J(m)] = B* . J(m) (5) 1 P Pix, y, z; J(m)] = - L o[x - J(m) . e] o[y - B* . ell] o[z - Bil . ell] (6) p 11=1 For infinitely large systems one can prove that the fluctuations in mean-field observables such as {Q, R, P}, due to the randomness in the dynamics, will vanish [6]. Furthermore one assumes, with convincing support from numerical simulations, that for N -r (Xl the evolution of such observables, observed for different random realisations of the training set, will be reproducible (i.e. the sample-to-sample fluctuations will also vanish, which is called 'self-averaging'). Both properties are central ingredients of all current theories. We are thus led to the introduction of the averages of the observables in (5,6), with respect to the dynamical randomness and with respect to the randomness in the training set (to be carried out in precisely this order): Q(t) = lim ( (Q[J(tN))) )set.s N-+oo R(t) = lim ( (R[J(tN)]) )sets N-+oo Pt(x,y,z) = lim «P[x,y,z;J(tN)]) )sets N-+oo ( 7) (8) A fundamental ingredient of our calculations will be the average (~i sgn(B ·e))(e, B), calculated over all realisations of (e, B). We find, for a wide class of p(B), that (9) where, for example, Learning with Restricted Training Sets: Exact Solution P = if (1-2>.) P_ f!. 1 - V -; V1 + 'f,2 (output noise) (Gaussian input noise) 4 Averages of Simple Scalar Observables 3/9 (10) (11) Calculation of Q(t) and R(t) using (4, 5, 7, 9) to execute the path average and the average over sets is relatively straightforward, albeit tedious. 
We find that -"Yt(l -"Yt) 2 Q(t) = e-2""(tQo + 21}PRo e -e + ~(1_e-2"Yt) "( 2, (1_e- "Yt)2 1 +1}2 (_+p2) (12) "(2 a and that (13) where p is given by equations (10, 11) in the examples of output noise and Gaussian input noise, respectively. We note that the generalisation error is given by Eg = ~arccos [R(t)/v'Q(t)] (14) All models of the teacher noise which have the same p will thus have the same generalisation error at any time. This is true, in particular, of output noise and Gaussian input noise when their respective parameters>. and 'f, are related by 1 - 2>' = 1 (15) V1 + 'f,2 With each type of teacher noise for which (9) holds, one can thus associate an effective output noise parameter >.. Note, however, that this effective teacher error probability>. will in general not be identical to the true teacher error probability associated with a given p(B), as can immediately be seen by calculating the latter for the Gaussian input noise (2). 5 Average of the Joint Field Distribution The calculation of the average of the joint field distribution starting from equation (8) is more difficult. Writing a = (l-,IN) , and expressing the 6 functions in terms of complex exponentials, we find that P, (x y z) = jdidydZ ei(xHyy+zi) lim (e-i[xe-"YtJo ·el+i;B· .e+zBl.el] t , , 871"3 N-400 X fi:[~ te-[i1)XN- 1 /TtN-t(e 1 f') sg~(B""f')l]) (16) £=0 p v=l sets In this expression we replace e 1 bye and Bl by B, and abbreviate S = I1~~0[' ·l Upon writing the latter product in terms of the auxiliary variables Vv = (e1 ·eV ) I IN and Wv == B V • C, we find that for large N . A 2 A2 logS", X(x sgn[B· e],t) t1}XUl (l_e- "Yt) _ 1} x u2(1_e-2"Yt) (17) "( 4"( where Ul, U2 are the random variables given by 320 H. C. Rae. P. Sollich and A. C. C. Coolen 1 '""' 1 '""' 2 Ul = .jN ~ Vv sgn(wv ), U2 = - ~ Vv . 
a N v>l P v>l 1 it [-Y(.-t)] X(w, t) = ds [e- 11]We 1] a 0 and with (18) A study of the statistics of Ul and U2 shows that limN --700 U2 = 1, and that (N ~ 00), where U is a Gaussian random variable with mean equal to zero and variance unity. On the basis of these results and equations (16, 17) we find that P, (x y z) = jdXdfjdi ei(x:Hyy+==)_~x2[Q - R2- e -2-yt(Qo-R6)]+ ~dx sgn[=],t) -ixy(R-Roe->' ) t , , 87f3 (19) where Q and R are given by the expressions (12,13) (note: Q - R2 is independent of p, i.e. of the distribution p(B)). Let Xo = J o .~, y = B* .~, z = B . ~. We assume that, given y, z is independent of Xo. This condition, which reflects in some sense the property that the teacher noise preserves the perceptron structure. is certainly satisfied for the models which we are considering and is probably true of all reasonable noise models. The joint probability density then has the form p( Xo, y, z) = p( Xo J Y )p(y, z). Equation (19) then leads to the following expression for the conditional probability of x, given y and z: P,t(xJy, z) = j ~! eiX[x-Ry]-~x2[Q-R2J+x(x sgn[z),t) (20) We observe that this probability distribution is the same for all models with the same p and that the dependence on z is through r = sgn[ z], a directly observable quantity. The training error and the student field probability density are given by Etr = j dxdy L B( -xr)P,t (xJy , r)P(rJy)P,(y) T=±l (21 ) P,t(x) = j dy L P,t(xJy, r)P,(rJy)P(y) T=±l (22) 1 1 2 in which P,(y) = (27f)-2e- 2Y . We note that the dependence of Etr and P,t{x) on the specific noise model arises solely through P,( rJy) which we find is given by P(rJy) = )"B( -ry) + (1 - )..)B(ry) 1 P{rJy) = 2(1 + rerf[y/J2~]) in the output noise and Gaussian input noise models, respectively. In order to simplify the numerical computation of the remaining integrals one can further reduce the number of integrations analytically. Details will be reported elsewhere. 
6 Comparison with Numerical Simulations

It will be clear that there is a large number of parameters that one could vary in order to generate different simulation experiments with which to test our theory. Here we have to restrict ourselves to presenting a number of representative results. Figure 1 shows, for the output noise model, how the probability density P_t(x) of

Learning with Restricted Training Sets: Exact Solution

Figure 1: Student field distribution P(x) for the case of output noise, at different times (left to right: t = 1, 2, 3, 4), for α = γ = ½, J_0 = η = 1, λ = 0.2. Histograms: distributions measured in simulations (N = 10,000). Lines: theoretical predictions.

the student field x = J·ξ develops in time, starting as a Gaussian at t = 0 and evolving to a highly non-Gaussian distribution with a double peak by time t = 4. The theoretical results give an extremely satisfactory account of the numerical simulations. Figure 2 compares our predictions for the generalisation and training errors E_g and E_tr with the results of numerical simulations, for different initial conditions, E_g(0) = 0 and E_g(0) = 0.5, and for different choices of the two most important parameters λ (which controls the amount of teacher noise) and α (which measures the relative size of the training set). The theoretical results are again in excellent agreement with the simulations. The system is found to have no memory of its past (which will be different for some other learning rules), the asymptotic values of E_g and E_tr being independent of the initial student vector. In our examples E_g is consistently larger than E_tr, the difference becoming less pronounced as α increases. Note, however, that in some circumstances E_tr can also be larger than E_g.
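The microscopic process behind these curves is also easy to simulate directly. The following is a minimal sketch, not the authors' simulation code: on-line Hebbian learning with learning rate η and weight decay γ, drawing examples from a fixed (restricted) training set of p = αN patterns whose labels carry output noise; sizes and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha, eta, gamma, lam = 500, 0.5, 1.0, 0.5, 0.2
p = int(alpha * N)

B = rng.standard_normal(N); B /= np.linalg.norm(B)     # teacher vector
xi = rng.choice([-1.0, 1.0], size=(p, N))              # fixed restricted training set
labels = np.sign(xi @ B)
labels[rng.random(p) < lam] *= -1                      # output noise with probability lambda

J = rng.standard_normal(N) / np.sqrt(N)                # student vector
dt = 1.0 / N                                           # one example per time step 1/N
for step in range(4 * N):                              # run up to t = 4
    mu = rng.integers(p)                               # resample from the restricted set
    J += dt * (eta * xi[mu] * labels[mu] - gamma * J)  # Hebbian rule with weight decay

R, Q = J @ B, J @ J
Eg = np.arccos(R / np.sqrt(Q)) / np.pi                 # eq. (14)
print(f"R={R:.2f}  Q={Q:.2f}  Eg={Eg:.3f}")
```

Even at this modest N the measured E_g sits well below the random-guessing value 1/2, in line with the theory.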
Careful inspection shows that for Hebbian learning there are no true overfitting effects, not even in the case of large λ and small γ (for large amounts of teacher noise, without regularisation via weight decay). Minor finite-time minima of the generalisation error are only found for very short times (t < 1), in combination with special choices for parameters and initial conditions.

7 Discussion

Starting from a microscopic description of Hebbian on-line learning in perceptrons with restricted training sets, of size p = αN where N is the number of inputs, we have developed an exact theory in terms of macroscopic observables which has enabled us to predict the generalisation error and the training error, as well as the probability density of the student local fields, in the limit N → ∞. Our results are in excellent agreement with numerical simulations (as carried out for systems of size N = 5,000) in the case of output noise; our predictions for the Gaussian input noise model are currently being compared with the results of simulations. Generalisations of our calculations to scenarios involving, for instance, time-dependent learning rates or time-dependent decay rates are straightforward. Although it will be clear that our present calculations cannot be extended to non-Hebbian rules, since they

Figure 2: Generalisation errors (diamonds/lines) and training errors (circles/lines) as observed during on-line Hebbian learning, as functions of time. Upper two graphs: λ = 0.2 and α ∈ {0.5, 4.0} (upper left: E_g(0) = 0.5, upper right: E_g(0) = 0). Lower two graphs: α = 1 and λ ∈ {0.0, 0.25} (lower left: E_g(0) = 0.5, lower right: E_g(0) = 0.0).
Markers: simulation results for an N = 5,000 system. Solid lines: predictions of the theory. In all cases J_0 = η = 1 and γ = 0.5.

ultimately rely on our ability to write down the microscopic weight vector J at any time in explicit form (4), they do indeed provide a significant yardstick against which more sophisticated and more general theories can be tested. In particular, they have already played a valuable role in assessing the conditions under which a recent general theory of learning with restricted training sets, based on a dynamical version of the replica formalism, is exact [6, 7].

References
[1] Mace C.W.H. and Coolen A.C.C. (1998) Statistics and Computing 8, 55
[2] Horner H. (1992a) Z. Phys. B 86, 291; (1992b) Z. Phys. B 87, 371
[3] Krogh A. and Hertz J.A. (1992) J. Phys. A: Math. Gen. 25, 1135
[4] Sollich P. and Barber D. (1997) Europhys. Lett. 38, 477
[5] Sollich P. and Barber D. (1998) Advances in Neural Information Processing Systems 10, Eds. Jordan M., Kearns M. and Solla S. (Cambridge: MIT)
[6] Coolen A.C.C. and Saad D., King's College London preprint KCL-MTH-98-08
[7] Coolen A.C.C. and Saad D. (1998) (in preparation)
| 1998 | 43 | 1,541 |
Phase Diagram and Storage Capacity of Sequence-Storing Neural Networks

A. Düring, Dept. of Physics, Oxford University, Oxford OX1 3NP, United Kingdom, a.during1@physics.oxford.ac.uk
D. Sherrington, Dept. of Physics, Oxford University, Oxford OX1 3NP, United Kingdom, d.sherrington1@physics.oxford.ac.uk
A. C. C. Coolen, Dept. of Mathematics, King's College London, WC2R 2LS, United Kingdom, tcoolen@mth.kcl.ac.uk

Abstract

We solve the dynamics of Hopfield-type neural networks which store sequences of patterns, close to saturation. The asymmetry of the interaction matrix in such models leads to violation of detailed balance, ruling out an equilibrium statistical mechanical analysis. Using generating functional methods we derive exact closed equations for dynamical order parameters, viz. the sequence overlap and correlation and response functions, in the limit of an infinite system size. We calculate the time translation invariant solutions of these equations, describing stationary limit-cycles, which leads to a phase diagram. The effective retarded self-interaction usually appearing in symmetric models is here found to vanish, which causes a significantly enlarged storage capacity of α_c ≈ 0.269, compared to α_c ≈ 0.139 for Hopfield networks storing static patterns. Our results are tested against extensive computer simulations and excellent agreement is found.

1 INTRODUCTION AND DEFINITIONS

We consider a system of N neurons σ(t) = {σ_i(t) = ±1}, which can change their states collectively at discrete times (parallel dynamics).
Each neuron changes its state with a probability p_i(t) = ½[1 - tanh βσ_i(t)[Σ_j J_ij σ_j(t) + θ_i(t)]], so that the transition matrix is

W[σ(s+1)|σ(s)] = ∏_{i=1}^{N} e^{βσ_i(s+1)[Σ_{j=1}^{N} J_ij σ_j(s) + θ_i(s)] - ln 2cosh(β[Σ_{j=1}^{N} J_ij σ_j(s) + θ_i(s)])}   (1)

with the (non-symmetric) interaction strengths J_ij chosen as

J_ij = (1/N) Σ_{μ=1}^{p} ξ_i^{μ+1} ξ_j^{μ}   (2)

The ξ_i^μ represent components of an ordered sequence of patterns to be stored¹. The gain parameter β can be interpreted as an inverse temperature governing the noise level in the dynamics (1), and the number of patterns is assumed to scale as N, i.e. p = αN. If the interaction matrix had been chosen symmetrically, the model would be accessible to methods originally developed for the equilibrium statistical mechanical analysis of physical spin systems and related models [1, 2], in particular the replica method. For the non-symmetric interaction matrix proposed here this is ruled out, and no exact solution exists to our knowledge, although both models were first mentioned at the same time and an approximate solution compatible with the numerical evidence at the time has been provided by Amari [3]. The difficulty for the analysis is that a system with the interactions (2) never reaches equilibrium in the thermodynamic sense, so that equilibrium methods are not applicable. One therefore has to apply dynamical methods and give a dynamical meaning to the notion of the recall state. Consequently, we will for this paper employ the dynamical method of path integrals, pioneered for spin glasses by de Dominicis [4] and applied to the Hopfield model by Rieger et al. [5]. We point out that our choice of parallel dynamics for the problem of sequence recall is deliberate, in that simple sequential dynamics will not lead to stable recall of a sequence. This is due to the fact that the number of updates of a single neuron per time unit is not a constant for sequential dynamics.
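The dynamics (1) with the couplings (2) are straightforward to simulate directly. A minimal sketch at low load (p/N well below α_c ≈ 0.269) and a near-deterministic noise level; the system size, pattern count and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, beta = 400, 6, 1e9               # large beta: near-deterministic dynamics

xi = rng.choice([-1.0, 1.0], size=(p, N))       # ordered sequence of patterns
# Non-symmetric couplings of eq. (2): J_ij = (1/N) sum_mu xi^{mu+1}_i xi^mu_j,
# with the pattern index taken modulo p (np.roll wraps mu+1 around).
J = (np.roll(xi, -1, axis=0).T @ xi) / N

sigma = xi[0].copy()                             # start on pattern 0
overlaps = []
for s in range(2 * p):                           # parallel (synchronous) updates
    h = J @ sigma
    # stochastic rule of eq. (1): P(sigma_i -> +1) = (1 + tanh(beta*h_i)) / 2
    sigma = np.where(rng.random(N) < 0.5 * (1 + np.tanh(beta * h)), 1.0, -1.0)
    overlaps.append(float(xi[(s + 1) % p] @ sigma / N))   # overlap with expected pattern

print(overlaps)   # stays close to 1: the network cycles through the stored sequence
```

Each synchronous step advances the state by one pattern in the sequence, which is exactly the limit-cycle behaviour analysed below.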
Schemes for using delayed asymmetric interactions combined with sequential updates have been proposed (see e.g. [6] for a review), but are outside the scope of this paper. Our analysis starts with the introduction of a generating functional Z[ψ] of the form

Z[ψ] = Σ_{σ(0)...σ(t)} p[σ(0), ..., σ(t)] e^{-i Σ_{s<t} σ(s)·ψ(s)}   (3)

which depends on real fields {ψ_i(t)}. These fields play a formal role only, allowing for the identification of interesting order parameters, such as

m_i(s) = ⟨σ_i(s)⟩ = i lim_{ψ→0} ∂Z[ψ]/∂ψ_i(s)
G_ij(s, s') = ∂⟨σ_i(s)⟩/∂θ_j(s') = i lim_{ψ→0} ∂²Z[ψ]/∂ψ_i(s)∂θ_j(s')
C_ij(s, s') = ⟨σ_i(s)σ_j(s')⟩ = - lim_{ψ→0} ∂²Z[ψ]/∂ψ_i(s)∂ψ_j(s')

for the average activation, response and correlation functions, respectively.¹ Since this functional involves the probability p[σ(0), ..., σ(t)] of finding a 'path' of neuron activations {σ(0), ..., σ(t)}, the task of the analysis is to express this probability in terms of the macroscopic order parameters itself, to arrive at a set of closed macroscopic equations. The first step in rewriting the path probability is to realise that (1) describes a one-step Markov process, and the path probability is therefore just the product of the single-time transition probabilities, weighted by the probability of the initial state: p[σ(0), ..., σ(t)] = p(σ(0)) ∏_{s=0}^{t-1} W[σ(s+1)|σ(s)]. Furthermore, we will in the course of the analysis frequently isolate interesting variables by introducing appropriate δ functions, e.g. for the local fields h_i(t). The variable h_i(t) can be interpreted as the local field (or presynaptic potential) at site i and time t, and their introduction transforms Z[ψ] into

Z[ψ] = Σ_{σ(0)...σ(t)} p(σ(0)) ∫ ∏_{s=0}^{t-1} [dh(s) dĥ(s)] e^{βσ(s+1)·h(s) - Σ_i ln 2cosh(βh_i(s))} [···]

¹ Upper (pattern) indices are understood to be taken modulo p unless otherwise stated.

This expression is the last general form of Z[ψ] we consider. To proceed with the analysis, we have to make a specific ansatz for the system behaviour.

2 DYNAMIC MEAN FIELD THEORY

As sequence recall is the mode of operation we are most interested in, we make the ansatz that, for large systems, we have an overlap of order O(N⁰) with the pattern ξ^s at time s, and that all other patterns are overlapping with order O(N^{-1/2}) at most. Accordingly, we introduce the macroscopic order parameters m(s) = N^{-1} Σ_i ξ_i^s σ_i(s) for the condensed pattern and the quantity k(s) = N^{-1} Σ_i ξ_i^s h_i(s), and their non-condensed equivalents y^μ(s) = N^{-1/2} Σ_i ξ_i^μ σ_i(s) and x^μ(s) = N^{-1/2} Σ_i ξ_i^μ h_i(s) (μ ≠ s), where the scaling ansatz is reflected in the normalisation constants. Introducing these objects using δ functions, as with the local fields h_i(s), removes the product of two patterns in the last line of eq. (4), so that the exponent will be linear in the pattern bits. Because macroscopic observables will in general not depend on the microscopic realisation of the patterns, the values of these observables do not change if we average Z[ψ] over the realisations of the patterns. Performing this average is complicated by the occurrence of some patterns in both the condensed and the non-condensed overlaps, depending on the current time index, which is an effect not occurring in the standard Hopfield model. Using some simple scaling arguments, this difficulty can be removed and we can perform the average over the non-condensed patterns. The disorder-averaged Z[ψ] acquires the form (5), where we have introduced the new observables q(s, s') = (1/N) Σ_i σ_i(s)σ_i(s'), Q(s, s') = (1/N) Σ_i h_i(s)h_i(s'), and K(s, s') = (1/N) Σ_i σ_i(s)h_i(s'), and their corresponding conjugate variables.
The functions in the exponent turn out to be

Ψ[m, m̂, k, k̂, q, q̂, Q, Q̂, K, K̂] = i Σ_{s<t} [m(s)m̂(s) + k(s)k̂(s) - m̂(s)k(s)] + i Σ_{s,s'<t} [q(s,s')q̂(s,s') + Q(s,s')Q̂(s,s') + K(s,s')K̂(s,s')]   (6)

Φ[m̂, k̂, q̂, Q̂, K̂] = (1/N) Σ_i ln [ Σ_{σ(0)...σ(t)} p_i(σ(0)) ∫ ∏_{s<t} [dh(s)dĥ(s)/2π] e^{Σ_{s<t} [βσ(s+1)h(s) - ln 2cosh(βh(s))]} × e^{-i Σ_{s,s'<t} [q̂(s,s')σ(s)σ(s') + Q̂(s,s')h(s)h(s') + K̂(s,s')σ(s)h(s')]} × e^{i Σ_{s<t} ĥ(s)[h(s) - θ_i(s) - k̂(s)ξ_i^s] - i Σ_{s<t} σ(s)[m̂(s)ξ_i^s + ψ_i(s)]} ]   (7)

Ω[q, Q, K] = (1/N) ln ∫ ∏_{s<t} [du(s) dv(s)/(2π)^{p-t}] e^{i Σ_{μ>t} Σ_{s<t} u^{μ+1}(s)v^{μ}(s)} × e^{-½ Σ_{μ>1} Σ_{s,s'<t} [u^{μ}(s)Q(s,s')u^{μ}(s') + u^{μ}(s)K(s',s)v^{μ}(s') + v^{μ}(s)K(s,s')u^{μ}(s') + v^{μ}(s)q(s,s')v^{μ}(s')]}   (8)

The first of these expressions is just a result of the introduction of δ functions, while the second will turn out to represent a probability measure given by the evolution of a single neuron under prescribed fields, and the third reflects the disorder contribution to the local fields in that single neuron measure². We have thus reduced the original problem involving N neurons in a one-step Markov process to one involving just a single neuron, but at the cost of introducing two-time observables.

3 DERIVATION OF SADDLE POINT EQUATIONS

The integral in (5) will be dominated by saddle points, in our case by a unique saddle point when causality is taken into account. Extremising the exponent with respect to all occurring variables gives a number of equations, the most important of which give the physical meanings of three observables:

q(s, s') = C(s, s'),   K(s, s') = iG(s, s'),   m(s) = lim_{N→∞} (1/N) Σ_i ⟨σ_i(s)⟩ ξ_i^s   (9)

with

G(s, s') = lim_{N→∞} (1/N) Σ_i ∂⟨σ_i(s)⟩/∂θ_i(s')   (10)

² We have assumed p(σ(0)) = ∏_i p_i(σ_i(0)).

which are the single-site correlation and response functions, respectively. The overline ··· is taken to represent disorder-averaged values.
Using also additional equations arising from the normalisation Z[0] = 1, we can rewrite the single neuron measure ⟨···⟩* as

⟨f[{σ}]⟩* = Σ_{σ(0)...σ(t)} p(σ(0)) ∫ ∏_{s<t} [dh(s)dĥ(s)/2π] f[{σ}] e^{Σ_{s<t} [βσ(s+1)h(s) - ln 2cosh(βh(s))]} [···]   (11)

with the short-hand R = Σ_{ℓ≥0} G^ℓ C (G†)^ℓ. To simplify notation, we have here assumed that the initial probabilities p_i(σ_i(0)) are uniform and that the external fields θ_i(s) are so-called staggered ones, i.e. θ_i(s) = θ ξ_i^{s+1}, which makes the single neuron measure site-independent. This single neuron measure (11) represents the essential result of our calculations and is already properly normalised (i.e. ⟨1⟩* = 1). When one compares the present form of the single neuron measure with that obtained for the symmetric Hopfield network, one finds in the latter model an additional term which corresponds to a retarded self-interaction. The absence of such a term here suggests that the present model will have a higher storage capacity. It can be explained by the constant change of state of a large number of neurons as the network goes through the sequence, which prevents the build-up of microscopic memory of past activations. However, as is the case for the standard Hopfield model, the measure (11) is still too complicated to find explicit equations for the observables we are interested in. Although it is possible to evaluate the necessary integrals numerically, we instead concentrate on the interesting behaviour when transients have died out and time-translation invariance is present.

4 STATIONARY STATE

We will now concentrate on the behaviour of the network at the stage when transients have subsided and the system is on a macroscopic limit cycle. Then the relations

m(s) = m,   C(s, s') = C(s - s'),   G(s, s') = G(s - s')   (12)

hold, and also R(s, s') = R(s - s'). We can then for simplicity shift the time origin to t_0 = -∞ and the upper temporal bound to t = ∞.
Note, however, that this state is not to be confused with microscopic equilibrium in the thermodynamic sense. The stationary versions of the measure (11) for the interesting observables are then given by the following expressions (note that C(0) = 1):

m = ∫ ∏_s [dv(s)dŵ(s)/2π] e^{iv·ŵ - ½ŵ·Rŵ} tanh β[m + θ + Q^{½}v(0)]

C(τ ≠ 0) = ∫ ∏_s [dv(s)dŵ(s)/2π] e^{iv·ŵ - ½ŵ·Rŵ} tanh β[m + θ + Q^{½}v(τ)] tanh β[m + θ + Q^{½}v(0)]

G(τ) = βδ_{τ,1} [1 - ∫ ∏_s [dv(s)dŵ(s)/2π] e^{iv·ŵ - ½ŵ·Rŵ} tanh² β[m + θ + Q^{½}v(0)]]   (13)

and we notice that the response function is now limited to a single time step, which again reflects the influence of the uncorrelated flips induced by the sequence recall. These equations can be solved by separating the persistent and fluctuating parts of C(τ) and R(τ):

C(τ) = q + C̃(τ),   R(τ) = r + R̃(τ),   lim_{τ→±∞} C̃(τ) = lim_{τ→±∞} R̃(τ) = 0.

Doing so eventually leads us to the coupled equations

P = [1 - β²(1 - q)²]^{-1}   (14)
m = ∫Dz tanh β[m + θ + z√(αP)]   (15)
q = ∫Dz tanh² β[m + θ + z√(αP)]   (16)
q̄ = ∫Dz [ ∫Dx tanh β[m + θ + z√(αqP) + x√(α(1-q)P)] ]²   (17)

Note that the three equations (14-16) form a closed set, from which the persistent correlation q simply follows.

5 PHASE DIAGRAM AND STORAGE CAPACITY

Figure 1: Phase diagram of the sequence storage network (T versus α), in which one finds two phases: a recall phase (R), characterized by {m ≠ 0, q > 0, q̄ > 0}, and a paramagnetic phase (P), characterized by {m = 0, q = 0, q̄ > 0}. The solid line separating the two phases is the theoretical prediction for the (discontinuous) phase transition. The markers represent simulation results, for systems of N = 10,000 neurons measured after 2,500 iteration steps, and obtained by bisection in α. The precision in terms of α is at least Δα = 0.005 (indicated by error bars); the values for T are exact.
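Inside the recall phase, the closed set (14)-(16) can be iterated to a fixed point. A minimal sketch using Gauss-Hermite quadrature for the Gaussian measure Dz and simple damping; the iteration scheme, starting point and parameter values are illustrative, not the authors' numerical procedure:

```python
import numpy as np

def solve_order_params(alpha, T, theta=0.0, iters=2000):
    """Damped fixed-point iteration of the closed set (14)-(16).
    The Gaussian average int Dz f(z) is done by probabilists'
    Gauss-Hermite quadrature: sum_k w_k f(z_k) / sqrt(2*pi)."""
    beta = 1.0 / T
    z, w = np.polynomial.hermite_e.hermegauss(80)
    w = w / np.sqrt(2 * np.pi)
    m, q = 0.9, 0.9                      # start in a candidate recall state
    for _ in range(iters):
        P = 1.0 / (1.0 - beta**2 * (1.0 - q)**2)          # eq. (14)
        field = beta * (m + theta + z * np.sqrt(alpha * P))
        m = 0.5 * m + 0.5 * np.sum(w * np.tanh(field))    # eq. (15), damped
        q = 0.5 * q + 0.5 * np.sum(w * np.tanh(field)**2) # eq. (16), damped
    return m, q

m, q = solve_order_params(alpha=0.1, T=0.2)
print(m, q)
```

Starting from a candidate recall state at α = 0.1, T = 0.2 (inside the R phase of Figure 1), the iteration settles on m close to 1; beyond the capacity line no m ≠ 0 solution survives.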
The coupled equations (14-17) can be solved numerically for θ = 0 to find the area in the α-T plane where solutions m ≠ 0, corresponding to sequence recall, exist. The boundary of this area describes the storage capacity of the system. This theoretical curve can then be compared with computer simulations directly performing the neural dynamics given by (1) and (2). We show the result of doing both in the same accompanying diagram. We find that there are only two types of solutions, namely a recall phase R where m ≠ 0 and q ≠ 0, and a paramagnetic phase where m = q = 0. Unlike the standard Hopfield model, the present model does not have a spin glass phase with m = 0 and q ≠ 0. The agreement between simulations (done here for N = 10,000 neurons) and theoretical results is excellent, and separate simulations of systems with up to N = 50,000 neurons to assess finite size effects confirm that the numerical data are reliable.

6 DISCUSSION

In this paper, we have used path integral methods to solve, in the infinite system size limit, the dynamics of a non-symmetric neural network model, designed to store and recall a sequence of patterns, close to saturation. This model has been known for over a decade from numerical simulations to possess a storage capacity roughly twice that of the symmetric Hopfield model, but no rigorous analytic results were available. We find here that, in contrast to equilibrium statistical mechanical methods, which do not apply due to the absence of detailed balance, the powerful path integral formalism provides us with a solution and a transparent explanation of the increased storage capacity. It turns out that this higher capacity is due to the absence of a retarded self-interaction, viz. the absence of microscopic memory of activations. The theoretically obtained phase diagram can be compared to the results of numerical simulations and we find excellent agreement.
Our confidence in this agreement is supported by additional simulations to study the effect of finite size scaling. Full details of the calculations will be presented elsewhere [7].

References
[1] Sherrington D and Kirkpatrick S 1975 Phys. Rev. Lett. 35 1792
[2] Amit D J, Gutfreund H, and Sompolinsky H 1985 Phys. Rev. Lett. 55 1530
[3] Amari S and Maginu K 1988 Neural Networks 1 63
[4] de Dominicis C 1978 Phys. Rev. B 18 4913
[5] Rieger H, Schreckenberg M, and Zittartz J 1988 J. Phys. A: Math. Gen. 21 L263
[6] Kühn R and van Hemmen J L 1991 Temporal Association, ed E Domany, J L van Hemmen, and K Schulten (Berlin, Heidelberg: Springer) p 213
[7] Düring A, Coolen A C C, and Sherrington D 1998 J. Phys. A: Math. Gen. 31 8607
| 1998 | 44 | 1,542 |
Example-Based Image Synthesis of Articulated Figures

Trevor Darrell
Interval Research, 1801C Page Mill Road, Palo Alto CA 94304
trevor@interval.com, http://www.interval.com/~trevor/

Abstract

We present a method for learning complex appearance mappings, such as occur with images of articulated objects. Traditional interpolation networks fail on this case since appearance is not necessarily a smooth function nor a linear manifold for articulated objects. We define an appearance mapping from examples by constructing a set of independently smooth interpolation networks; these networks can cover overlapping regions of parameter space. A set growing procedure is used to find example clusters which are well-approximated within their convex hull; interpolation then proceeds only within these sets of examples. With this method physically valid images are produced even in regions of parameter space where nearby examples have different appearances. We show results generating both simulated and real arm images.

1 Introduction

Image-based view synthesis is an important application of learning networks, offering the ability to render realistic images without requiring detailed models of object shape and illumination effects. To date, much attention has been given to the problem of view synthesis under varying camera pose or rigid object transformation. Several successful solutions have been proposed in the computer graphics and vision literature, including view morphing [12], plenoptic modeling/depth recovery [8], "lightfields" [7], and recent approaches using the trifocal tensor for view extrapolation [13]. For non-rigid view synthesis, networks for model-based interpolation and manifold learning have been used successfully in some cases [14, 2, 4, 11]. Techniques based on Radial Basis Function (RBF) interpolation or on Principal Components Analysis (PCA) have been able to interpolate face images under varying pose, expression and identity [1, 5, 6].
extends the notion of example clustering to the case of coupled shape and texture appearance models. Our basic method is to find sets of examples which can be well-approximated from their convex hull in parameter space. We define a set growing criterion which enforces compactness and the good-interpolation property. To add a new point to an example set, we require both that the new point must be well approximated by the previous set alone, and that all interior points in the resulting set be well interpolated from the exterior examples. We define exterior examples to be those on the convex hull of the set in parameter space. Given a training subset s ⊂ Ω and new point p ∈ Ω,

E(s, p) = max(E_I(s ∪ {p}), E_E(s, p)),

with the interior and extrapolation errors E_I and E_E defined in terms of H_x(s), the subset of s whose x vectors lie on the convex hull of all such vectors in s. To add a new point, we require E < ε, where ε is a free parameter of the clustering method. Given a seed example set, we look to nearest neighbors in appearance space to find the next candidate to add. Unless we are willing to test the extrapolation error of the current model on all points, we have to rely on a precomputed non-vectorized appearance distance (e.g., MSE between example images). If the examples are sparse in the appearance domain, this may not lead to effective groupings. If examples are provided in sequence and are based on observations from an object with realistic dynamics, then we can find effective groupings even if observations are sparse in appearance space. We make the assumption that along the trajectory of example observations over time, the underlying object is likely to remain smooth and locally span regions of appearance which are possible to interpolate. We thus perform set growing along examples on their input trajectory. Specifically, in the results reported below, we select K seed points on the trajectory to form initial clusters.
At each point p we find the set s which is the smallest interval on the example trajectory which contains p, has a non-zero interior region (s - H_x(s)), and for which E_I(s) < ε. If such a set exists, we continue to expand it, growing the set along the example trajectory until the above set growing criterion is violated. Once we can no longer grow any set, we test whether any set is a proper subset of another, and delete it if so. We keep the remaining sets, and use them for interpolation as described below.

4 Synthesis using example sets

We generate new views using sets of examples: interpolation is restricted to only occur inside the convex hull of an example set found as above for which E_I(s) ≤ ε. Given a new parameter vector x, we test whether it is in the convex hull of parameters in any example set. If the point does not lie in the convex hull of any example set, we find the nearest point on the convex hull of one of the example sets, and use that instead. This prevents erroneous extrapolation. If a new parameter is in the convex hull of more than one example set, we first select the set whose median example parameter is closest to the desired example parameter. Once a set has been selected, we interpolate a new function value from examples using the RBF method summarized above. To enforce temporal consistency of rendered images over time,

Figure 2: (a) Images of a real arm (from a sequence of 33 images) with changing appearance and elbow configuration. (b, c) Interpolated shape of arms tracked in previous figure. (b) shows results using all examples in a single interpolation network; (c) shows results using the example sets algorithm. Open contours show arm example locations; filled contour shows interpolation result. Near regions of appearance singularity in parameter space the full network method generates physically invalid arm shapes; the example sets method produces realistic images.
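The interval-growing loop described above can be sketched in one dimension, where the convex hull of a set along the trajectory is just its two endpoint examples, and "interpolation from the hull" reduces to a linear stand-in for the RBF machinery. All names, the tent-shaped toy 'appearance', and the threshold value are hypothetical:

```python
import numpy as np

def interp_from_hull(xs, ys, x):
    """Approximate y at x from the hull (the two endpoints) of a 1-D set."""
    (x0, y0), (x1, y1) = (xs[0], ys[0]), (xs[-1], ys[-1])
    t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

def grow_sets(xs, ys, eps=0.05):
    """Grow maximal intervals along the example trajectory such that every
    interior example is approximated from the hull to within eps."""
    sets, start = [], 0
    for end in range(2, len(xs) + 1):
        seg_x, seg_y = xs[start:end], ys[start:end]
        interior_err = max(
            (abs(interp_from_hull(seg_x, seg_y, seg_x[i]) - seg_y[i])
             for i in range(1, len(seg_x) - 1)), default=0.0)
        if interior_err > eps:           # criterion E_I(s) < eps violated
            sets.append((start, end - 1))
            start = end - 2              # adjacent sets share the boundary example
    sets.append((start, len(xs)))
    return sets

# trajectory with an appearance discontinuity halfway (think arm-up vs arm-down)
x = np.linspace(0, 1, 21)
y = np.where(x < 0.5, x, 1.0 - x)        # piecewise-linear toy 'appearance'
print(grow_sets(x, y))
```

The break in the toy appearance function splits the trajectory into two overlapping well-interpolated sets, mirroring how the real algorithm isolates arm-up from arm-down configurations.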
The method presented below for grouping examples into locally valid spaces is generally applicable to both the PCA- and RBF-based view synthesis techniques. However our initial implementation, and the results reported in this paper, have been with RBF-based models.

3 Finding consistent example sets

Given examples from a complicated (non-linear, non-smooth) appearance mapping, we find local regions of appearance which are well-behaved as smooth, possibly linear, functions. We wish to cluster our examples into sets which can be used for successful interpolation using our local appearance model. Conceptually, this problem is similar to that faced by Bregler and Omohundro [2], who built image manifolds using a mixture of local PCA models. Their work was limited to modeling shape (lip outlines); they used K-means clustering of image appearance to form the initial groupings for PCA analysis. However this approach had no model of texture and performed clustering using a mean-squared-error distance metric in simple appearance. Simple appearance clustering drastically over-partitions the appearance space compared to a model that jointly represents shape and texture. Examples which are distant in simple appearance can often be close when considered in 'vectorized' representation. Our work

Figure 1: Arm appearance interpolated from examples using approximation network. (a) A 2-DOF planar arm.
Discontinuities in appearance due to workspace constraints make this a difficult function to learn from examples; the first and last example are very close in parameter space, but far in appearance space. (b) shows results using all examples in a single network; (c) using the example sets algorithm described in the text. Note the poor approximation on the last two examples in (a); appearance discontinuities and extrapolation cause problems for the full network, but are handled well in the example sets method.

In PCA-based approaches, G projects a portion of u onto an optimal linear subspace found from D, and F projects a portion of u onto a subspace found from T [6, 5]. For example G_D(u) = P_D S_g u, where S_g is a diagonal boolean matrix which selects the texture parameters from u and P_D is a matrix containing the m largest principal components of D. F warps the reconstructed texture according to the given shape: F_T(u, s) = [P_T S_t u] ∘ s. While interpolation is simple using a PCA approach, the parameters used in PCA models often do not have any direct physical interpretation. For the task of view synthesis, an additional mapping u = H(x) is needed to map from task parameters to PCA input values; a backpropagation neural net was used to perform this function for the task of eye gaze analysis [10]. Using the RBF-based approach [1], the application to view synthesis is straightforward. Both G and F are networks which compute locally-weighted regression, and parameters are used directly (u = x). G computes an interpolated shape, and F warps and blends the example texture images according to that shape: G_D(x) = Σ_i c_i f(x - x_i), F_T(x, s) = [Σ_i c'_i f(x - x_i)] ∘ s, where f is a radial basis function. The coefficients c and c' are derived from D and T, respectively: C = D R⁺, where r_ij = f(x_i - x_j) and C is the matrix of row vectors c_i; similarly C' = T R⁺ [9].
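The coefficient computation C = D R⁺ and the evaluation Σ_i c_i f(x - x_i) amount to a few lines of linear algebra. A toy sketch with f(r) = ‖r‖; the example parameters and 'shape' vectors here are made up purely for illustration:

```python
import numpy as np

def rbf_fit(X, D, f=lambda r: np.linalg.norm(r, axis=-1)):
    """Fit coefficients C = D R^+ with basis matrix r_ij = f(x_i - x_j)."""
    R = f(X[:, None, :] - X[None, :, :])        # n x n pairwise basis matrix
    return D @ np.linalg.pinv(R)

def rbf_eval(C, X, x, f=lambda r: np.linalg.norm(r, axis=-1)):
    """Interpolated value sum_i c_i f(x - x_i) at a new parameter x."""
    return C @ f(x[None, :] - X)

# toy data: 2-D pose parameters mapped to 3-D 'shape' vectors
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.array([[0.0, 1.0, 2.0], [1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]]).T      # columns = examples
C = rbf_fit(X, D)

print(np.allclose(rbf_eval(C, X, X[1]), D[:, 1], atol=1e-6))  # reproduces examples
print(rbf_eval(C, X, np.array([0.5, 0.5])))                   # blends in between
```

Because the pseudoinverse inverts the (here non-singular) basis matrix, the network reproduces each training example exactly, while new parameter values blend all examples with weights falling off with distance.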
We have found both vector norm and Gaussian basis functions give good results when appearance data is from a smooth function; the results below use f(r) = ‖r‖.

However, these methods are limited in the types of object appearance they can accurately model. PCA-based face analysis typically assumes images of face shape and texture fall in a linear subspace; RBF approaches fare poorly when appearance is not a smooth function. We want to extend non-rigid interpolation networks to handle cases where appearance is not a linear manifold and is not a smooth function, such as with articulated bodies. The mapping from parameter to appearance for articulated bodies is often one-to-many due to the multiple solutions possible for a given endpoint. It will also be discontinuous when constraints call for different solutions across a boundary in parameter space, such as the example shown in Figure 1. Our approach represents an appearance mapping as a set of piecewise smooth functions. We search for sets of examples which are well approximated by the examples on the convex hull of the set's parameter values. Once we have these 'safe' sets of examples we perform interpolation using only the examples in a single set. The clear advantage of this approach is that it will prevent inconsistent examples from being combined during interpolation. It also can reduce the number of examples needed to fully interpolate the function, as only those examples which are on the convex hull of one or more example sets are needed. If a new example is provided and it falls within, and is well-approximated by, the convex hull of an existing set, it can be safely ignored. The remainder of this paper proceeds as follows. First, we will review methods for modeling appearance when it can be well approximated with a smooth and/or linear function. Next, we will present a technique for clustering examples to find maximal subsets which are well approximated in their interior.
We will then detail how we select among the subsets during interpolation, and finally show results with both synthetic and real imagery.

2 Modeling smooth and/or linear appearance functions

Traditional interpolation networks work well when object appearance can be modeled either as a linear manifold or as a smooth function over the parameters of interest (describing pose, expression, identity, configuration, etc.). As mentioned above, both PCA and RBF approaches have been successfully applied to model facial expression. In both approaches, a key step in modeling non-rigid shape appearance from examples is to couple shape and texture into a single representation. Interpolation of shape has been well studied in the computer graphics literature (e.g., splines for key-frame animation) but does not alone render realistic images. PCA or RBF models of images without a shape model can only represent and interpolate within a very limited range of pose or object configuration. In a coupled representation, texture is modeled in shape-normalized coordinates, and shape is modeled as disparity between examples or displacement from a canonical example to all examples. Image warping is used to generate images for a particular texture and shape. Given a training set Ω = {(y_i, x_i, d_i), 0 ≤ i ≤ n}, where y_i is the image of example i, x_i is the associated pose or configuration parameter, and d_i is a dense correspondence map relative to a canonical pose, a set of shape-aligned texture images can be computed such that texture t_i warped with displacement d_i renders example image y_i: y_i = t_i ∘ d_i [5, 1, 6]. A new image is constructed using a coupled shape model G and texture model F, based on input u: y(Ω, u) = F_T(G_D(u), u), where D, T are the matrices [d_0 d_1 ... d_n], [t_0 t_1 ... t_n], respectively.

Example-Based Image Synthesis of Articulated Figures 773

Figure 3: Interpolated shape and texture result. (a) shows exemplar contours (open) and interpolated shape (filled).
(b) shows example texture images. (c) shows the final interpolated image.

we can use a simple additional constraint on subsequent frames. Once we have selected an example set, we keep using it until the desired parameter value leaves the valid region (convex hull) of that set. When this occurs, we allow transitions only to "adjacent" example sets; adjacency is defined as those pairs of sets for which at least one example on each convex hull is sufficiently close (||y_i − y_j|| < ε) in appearance space.

5 Results

First we show examples using a synthetic arm with several workspace constraints. Figure 1(a) shows examples of a simple planar 2-DOF arm and the inverse kinematic solution for a variety of endpoints. Due to an artificial obstacle in the world, the arm is forced to switch between arm-up and arm-down configurations to avoid collision. We trained an interpolation network using a single RBF to model the appearance of the arm as a function of endpoint location. Appearance was modeled as the vector of contour point locations, obtained from the synthetic arm rendering function. We first trained a single RBF network on a dense set of examples of this appearance function. Figure 1(b) shows results interpolating new arm images from these examples; results are accurate except where there are regions of appearance discontinuity due to workspace constraints, or when the network extrapolates erroneously. We applied our clustering method described above to this data, yielding the results shown in Figure 1(c). None of the problems with discontinuities or erroneous extrapolation can be seen in these results, since our method enforces the constraint that an interpolated result must be returned from on or within the convex hull of a valid example set. Next we applied our method to the images of real arms shown in Figure 2(a).
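The adjacency test between example sets compares hull examples directly in appearance space; a direct sketch (the function and argument names are ours):

```python
import numpy as np

def adjacent(hull_appearances_a, hull_appearances_b, eps):
    """Two example sets are 'adjacent' when at least one pair of their
    convex-hull examples is close in appearance space:
    min_ij ||y_i - y_j|| < eps."""
    A = np.asarray(hull_appearances_a, dtype=float)
    B = np.asarray(hull_appearances_b, dtype=float)
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return bool(dists.min() < eps)
```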
Arm contours were obtained in a sequence of 33 such images using a semi-automated deformable contour tracker augmented with a local image distance metric [3]. Dense correspondences were interpolated from the values on the contour. Figure 2(b) shows interpolated arm shapes using a single RBF on all examples; dramatic errors can be seen near where multiple different appearances exist within a small region of parameter space. Figure 2(c) shows the results on the same points using sets of examples found using our clustering method; physically realistic arms are generated in each case. Figure 3 shows the final interpolated result rendered with both shape and texture.

6 Conclusion

View-based image interpolation is a powerful paradigm for generating realistic imagery without full models of the underlying scene geometry. Current techniques for non-rigid interpolation assume appearance is a smooth function. We apply an example clustering approach using on-line cross validation to decompose a complex appearance mapping into sets of examples which can be smoothly interpolated. We show results on real imagery of human arms, with correspondences recovered from deformable contour tracking. Given images of an arm moving on a plane with various configuration conditions (elbow up and elbow down), and with associated parameter vectors marking the hand location, our method is able to discover a small set of manifolds, each with a small number of exemplars, which can render new examples that are always physically correct. A single interpolating manifold for this same data has errors near the boundary between different arm configurations, and where multiple images have the same parameter value.

References

[1] D. Beymer, A. Shashua and T. Poggio, Example Based Image Analysis and Synthesis, MIT AI Lab Memo No. 1431, MIT, 1993. Also see D. Beymer and T. Poggio, Science 272:1905-1909, 1996. [2] C. Bregler and S.
Omohundro, Nonlinear Image Interpolation using Manifold Learning, NIPS-7, MIT Press, 1995. [3] T. Darrell, A Radial Cumulative Similarity Transform for Robust Image Correspondence, Proc. CVPR-98, Santa Barbara, CA, IEEE CS Press, 1998. [4] M. Jagersand, Image Based View Synthesis of Articulated Agents, Proc. CVPR-97, San Juan, Puerto Rico, pp. 1047-1053, IEEE CS Press, 1997. [5] M. Jones and T. Poggio, Multidimensional Morphable Models, Proc. ICCV-98, Bombay, India, pp. 683-688, 1998. [6] A. Lanitis, C. J. Taylor, T. F. Cootes, A Unified Approach to Coding and Interpreting Face Images, Proc. ICCV-95, pp. 368-373, Cambridge, MA, 1995. [7] M. Levoy and P. Hanrahan, Light Field Rendering, in Proc. SIGGRAPH-96, pp. 31-42, 1996. [8] L. McMillan and G. Bishop, Plenoptic Modeling: An Image-Based Rendering System, in Proc. SIGGRAPH-95, pp. 39-46, 1995. [9] T. Poggio and F. Girosi, A Theory of Networks for Approximation and Learning, MIT AI Lab Memo No. 1140, 1989. [10] T. Rikert and M. Jones, Gaze Estimation using Morphable Models, Proc. IEEE Conf. Face and Gesture Recognition '98, pp. 436-441, Nara, Japan, IEEE CS Press, 1998. [11] L. Saul and M. Jordan, A Variational Principle for Model-based Morphing, NIPS-9, MIT Press, 1997. [12] S. Seitz and C. Dyer, View Morphing, in Proc. SIGGRAPH-96, pp. 21-30, 1996. [13] A. Shashua and M. Werman, Trilinearity of Three Perspective Views and its Associated Tensor, in Proc. ICCV-95, pp. 920-935, Cambridge, MA, IEEE CS Press, 1995. [14] J. Tenenbaum, Mapping a manifold of perceptual observations, NIPS-10, MIT Press, 1998.
|
1998
|
45
|
1,543
|
Shrinking the Tube: A New Support Vector Regression Algorithm

Bernhard Schölkopf§,*, Peter Bartlett*, Alex Smola§,*, Robert Williamson*
§ GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany
* FEIT/RSISE, Australian National University, Canberra 0200, Australia
bs, smola@first.gmd.de, Peter.Bartlett, Bob.Williamson@anu.edu.au

Abstract

A new algorithm for Support Vector regression is described. For a priori chosen ν, it automatically adjusts a flexible tube of minimal radius to the data such that at most a fraction ν of the data points lie outside. Moreover, it is shown how to use parametric tube shapes with non-constant radius. The algorithm is analysed theoretically and experimentally.

1 INTRODUCTION

Support Vector (SV) machines comprise a new class of learning algorithms, motivated by results of statistical learning theory (Vapnik, 1995). Originally developed for pattern recognition, they represent the decision boundary in terms of a typically small subset (Schölkopf et al., 1995) of all training examples, called the Support Vectors. In order for this property to carry over to the case of SV Regression, Vapnik devised the so-called ε-insensitive loss function |y − f(x)|_ε = max{0, |y − f(x)| − ε}, which does not penalize errors below some ε > 0, chosen a priori. His algorithm, which we will henceforth call ε-SVR, seeks to estimate functions

f(x) = (w · x) + b,  w, x ∈ R^N, b ∈ R,  (1)

based on data

(x_1, y_1), ..., (x_ℓ, y_ℓ) ∈ R^N × R,  (2)

by minimizing the regularized risk functional

||w||²/2 + C · R_emp^ε,  (3)

where C is a constant determining the trade-off between minimizing training errors and minimizing the model complexity term ||w||², and R_emp^ε := (1/ℓ) Σ_{i=1}^ℓ |y_i − f(x_i)|_ε. The parameter ε can be useful if the desired accuracy of the approximation can be specified beforehand. In some cases, however, we just want the estimate to be as accurate as possible, without having to commit ourselves to a certain level of accuracy.
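The ε-insensitive loss and the regularized risk (3) are straightforward to write down; a NumPy sketch (the function names are ours):

```python
import numpy as np

def eps_insensitive(y, f_x, eps):
    """Vapnik's eps-insensitive loss: |y - f(x)|_eps = max{0, |y - f(x)| - eps}."""
    return np.maximum(0.0, np.abs(y - f_x) - eps)

def regularized_risk(w, y, f_x, eps, C):
    """||w||^2 / 2 + C * R_emp^eps, where R_emp^eps is the mean
    eps-insensitive loss over the training sample, cf. eq. (3)."""
    return 0.5 * np.dot(w, w) + C * np.mean(eps_insensitive(y, f_x, eps))
```

A residual of 0.05 inside an ε = 0.1 tube contributes zero loss, while a residual of 1.0 contributes 0.9.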
We present a modification of the ε-SVR algorithm which automatically minimizes ε, thus adjusting the accuracy level to the data at hand.

Shrinking the Tube: A New Support Vector Regression Algorithm 331

2 ν-SV REGRESSION AND ε-SV REGRESSION

To estimate functions (1) from empirical data (2) we proceed as follows (Schölkopf et al., 1998a). At each point x_i, we allow an error of ε. Everything above ε is captured in slack variables ξ_i^(*) (the superscript (*) being a shorthand implying both the variables with and without asterisks), which are penalized in the objective function via a regularization constant C, chosen a priori (Vapnik, 1995). The tube size ε is traded off against model complexity and slack variables via a constant ν > 0:

minimize τ(w, ξ^(*), ε) = ||w||²/2 + C · (νε + (1/ℓ) Σ_{i=1}^ℓ (ξ_i + ξ_i*))  (4)
subject to ((w · x_i) + b) − y_i ≤ ε + ξ_i  (5)
y_i − ((w · x_i) + b) ≤ ε + ξ_i*  (6)
ξ_i^(*) ≥ 0, ε ≥ 0.  (7)

Here and below, it is understood that i = 1, ..., ℓ, and that boldface Greek letters denote ℓ-dimensional vectors of the corresponding variables. Introducing a Lagrangian with multipliers α_i^(*), η_i^(*), β ≥ 0, we obtain the Wolfe dual problem. Moreover, as Boser et al. (1992), we substitute a kernel k for the dot product, corresponding to a dot product in some feature space related to input space via a nonlinear map Φ,

k(x, y) = (Φ(x) · Φ(y)).  (8)

This leads to the ν-SVR Optimization Problem: for ν ≥ 0, C > 0,

maximize W(α^(*)) = Σ_{i=1}^ℓ (α_i* − α_i) y_i − (1/2) Σ_{i,j=1}^ℓ (α_i* − α_i)(α_j* − α_j) k(x_i, x_j)  (9)
subject to Σ_{i=1}^ℓ (α_i − α_i*) = 0  (10)
α_i^(*) ∈ [0, C/ℓ]  (11)
Σ_{i=1}^ℓ (α_i + α_i*) ≤ C · ν.  (12)

The regression estimate can be shown to take the form

f(x) = Σ_{i=1}^ℓ (α_i* − α_i) k(x_i, x) + b,  (13)

where b (and ε) can be computed by taking into account that (5) and (6) (substitution of Σ_j (α_j* − α_j) k(x_j, x) for (w · x) is understood) become equalities with ξ_i^(*) = 0 for points with 0 < α_i^(*) < C/ℓ, respectively, due to the Karush-Kuhn-Tucker conditions (cf. Vapnik, 1995).
The latter moreover imply that in the kernel expansion (13), only those α_i^(*) will be nonzero that correspond to a constraint (5)/(6) which is precisely met. The respective patterns x_i are referred to as Support Vectors. Before we give theoretical results explaining the significance of the parameter ν, the following observation concerning ε is helpful. If ν > 1, then ε = 0, since it does not pay to increase ε (cf. (4)). If ν ≤ 1, it can still happen that ε = 0, e.g. if the data are noise-free and can be perfectly interpolated with a low capacity model. The case ε = 0, however, is not what we are interested in; it corresponds to plain L1 loss regression. Below, we will use the term errors to refer to training points lying outside of the tube, and the term fraction of errors/SVs to denote the relative numbers of errors/SVs, i.e. divided by ℓ.

Proposition 1 Assume ε > 0. The following statements hold: (i) ν is an upper bound on the fraction of errors. (ii) ν is a lower bound on the fraction of SVs. (iii) Suppose the data (2) were generated iid from a distribution P(x, y) = P(x)P(y|x) with P(y|x) continuous. With probability 1, asymptotically, ν equals both the fraction of SVs and the fraction of errors.

The first two statements of this proposition can be proven from the structure of the dual optimization problem, with (12) playing a crucial role. Presently, we instead give a graphical proof based on the primal problem (Fig. 1). To understand the third statement, note that all errors are also SVs, but there can be SVs which are not errors: namely, if they lie exactly at the edge of the tube. Asymptotically, however, these SVs form a negligible fraction of the whole SV set, and the set of errors and the one of SVs essentially coincide.
This is due to the fact that for a class of functions with well-behaved capacity (such as SV regression functions), and for a distribution satisfying the above continuity condition, the number of points that the tube edges f ± ε can pass through cannot asymptotically increase linearly with the sample size. Interestingly, the proof (Schölkopf et al., 1998a) uses a uniform convergence argument similar in spirit to those used in statistical learning theory. Due to this proposition, 0 ≤ ν ≤ 1 can be used to control the number of errors (note that for ν ≥ 1, (11) implies (12), since α_i · α_i* = 0 for all i (Vapnik, 1995)). Moreover, since the constraint (10) implies that (12) is equivalent to Σ_i α_i^(*) ≤ Cν/2, we conclude that Proposition 1 actually holds for the upper and the lower edge of the tube separately, with ν/2 each. As an aside, note that by the same argument, the numbers of SVs at the two edges of the standard ε-SVR tube asymptotically agree. Moreover, note that this bears on the robustness of ν-SVR. At first glance, SVR seems all but robust: using the ε-insensitive loss function, only the patterns outside of the ε-tube contribute to the empirical risk term, whereas the patterns closest to the estimated regression have zero loss. This, however, does not mean that it is only the outliers that determine the regression. In fact, the contrary is the case: one can show that local movements of target values y_i of points x_i outside the tube do not influence the regression (Schölkopf et al., 1998c). Hence, ν-SVR is a generalization of an estimator for the mean of a random variable which throws away the largest and smallest examples (a fraction of at most ν/2 of either category), and estimates the mean by taking the average of the two extremal ones of the remaining examples. This is close in spirit to robust estimators like the trimmed mean. Let us briefly discuss how the new algorithm relates to ε-SVR (Vapnik, 1995).
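Before that, the trimmed-mean analogue just mentioned can be made concrete for the 1-D location case: discard a fraction of at most ν/2 of the samples on each side, and take the midpoint of the two extremal remaining ones. This is only the 1-D analogue described in the text, not the full ν-SVR algorithm:

```python
import numpy as np

def nu_location(y, nu):
    """1-D analogue of the nu-trick: drop a fraction of at most nu/2 of
    the largest and of the smallest samples, then return the midpoint b
    of the two extremal remaining ones, plus the tube radius eps that
    just covers them.  At most a fraction nu of the samples ends up
    outside the tube [b - eps, b + eps] (cf. Proposition 1, (i))."""
    y = np.sort(np.asarray(y, dtype=float))
    k = int(len(y) * nu / 2)          # samples discarded on each side
    kept = y[k:len(y) - k]
    b = 0.5 * (kept[0] + kept[-1])
    eps = 0.5 * (kept[-1] - kept[0])
    return b, eps

y = np.array([0.0, 1.0, 2.0, 3.0, 100.0])   # one gross outlier
b, eps = nu_location(y, nu=0.4)             # drops one sample per side
```

With ν = 0.4 the outlier at 100 is trimmed away and the estimate b = 2 is unaffected by its exact value, illustrating the robustness argument above.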
By rewriting (3) as a constrained optimization problem, and deriving a dual much like we did for ν-SVR, one arrives at the following quadratic program:

maximize W(α, α*) = −ε Σ_{i=1}^ℓ (α_i* + α_i) + Σ_{i=1}^ℓ (α_i* − α_i) y_i − (1/2) Σ_{i,j=1}^ℓ (α_i* − α_i)(α_j* − α_j) k(x_i, x_j)  (14)

subject to (10) and (11). Compared to (9), we have an additional term −ε Σ_{i=1}^ℓ (α_i* + α_i), which makes it plausible that the constraint (12) is not needed. In the following sense, ν-SVR includes ε-SVR. Note that in the general case, using kernels, w is a vector in feature space.

Figure 1: Graphical depiction of the ν-trick. Imagine increasing ε, starting from 0. The first term in νε + (1/ℓ) Σ_{i=1}^ℓ (ξ_i + ξ_i*) (cf. (4)) will increase proportionally to ν, while the second term will decrease proportionally to the fraction of points outside of the tube. Hence, ε will grow as long as the latter fraction is larger than ν. At the optimum, it therefore must be ≤ ν (Proposition 1, (i)). Next, imagine decreasing ε, starting from some large value. Again, the change in the first term is proportional to ν, but this time, the change in the second term is proportional to the fraction of SVs (even points on the edge of the tube will contribute). Hence, ε will shrink as long as the fraction of SVs is smaller than ν, eventually leading to Proposition 1, (ii).

Proposition 2 If ν-SVR leads to the solution ε̄, w̄, b̄, then ε-SVR with ε set a priori to ε̄, and the same value of C, has the solution w̄, b̄.

Proof If we minimize (4), then fix ε and minimize only over the remaining variables, the solution does not change.

3 PARAMETRIC INSENSITIVITY MODELS

We generalized ε-SVR by considering the tube as not given but instead estimated it as a model parameter. What we have so far retained is the assumption that the ε-insensitive zone has a tube (or slab) shape. We now go one step further and use parametric models of arbitrary shape. Let {ζ_q^(*)} (here and below, q = 1, ...
, p is understood) be a set of 2p positive functions on R^N. Consider the following quadratic program: for given ν_1^(*), ..., ν_p^(*) ≥ 0,

minimize τ(w, ξ^(*), ε^(*)) = ||w||²/2 + C · (Σ_{q=1}^p (ν_q ε_q + ν_q* ε_q*) + (1/ℓ) Σ_{i=1}^ℓ (ξ_i + ξ_i*))  (15)
subject to ((w · x_i) + b) − y_i ≤ Σ_q ε_q ζ_q(x_i) + ξ_i  (16)
y_i − ((w · x_i) + b) ≤ Σ_q ε_q* ζ_q*(x_i) + ξ_i*  (17)
ξ_i^(*) ≥ 0, ε_q^(*) ≥ 0.  (18)

A calculation analogous to Sec. 2 shows that the Wolfe dual consists of maximizing (9) subject to (10), (11), and, instead of (12), the modified constraints Σ_{i=1}^ℓ α_i^(*) ζ_q^(*)(x_i) ≤ C · ν_q^(*). In the experiments in Sec. 4, we use a simplified version of this optimization problem, where we drop the term ν_q* ε_q* from the objective function (15), and use ε_q and ζ_q in (17). By this, we render the problem symmetric with respect to the two edges of the tube. In addition, we use p = 1. This leads to the same Wolfe dual, except for the last constraint, which becomes (cf. (12))

Σ_{i=1}^ℓ (α_i* + α_i) ζ(x_i) ≤ C · ν.  (19)

The advantage of this setting is that since the same ν is used for both sides of the tube, the computation of ε, b is straightforward: for instance, by solving a linear system, using two conditions such as those described following (13). Otherwise, general statements are harder to make: the linear system can have a zero determinant, depending on whether the functions ζ_q^(*), evaluated on the x_i with 0 < α_i^(*) < C/ℓ, are linearly dependent. The latter occurs, for instance, if we use constant functions ζ^(*) ≡ 1. In this case, it is pointless to use two different values ν, ν*; for, the constraint (10) then implies that both sums Σ_{i=1}^ℓ α_i^(*) will be bounded by C · min{ν, ν*}. We conclude this section by giving, without proof, a generalization of Proposition 1, (iii), to the optimization problem with constraint (19):

Proposition 3 Assume ε > 0. Suppose the data (2) were generated iid from a distribution P(x, y) = P(x)P(y|x) with P(y|x) continuous.
With probability 1, asymptotically, the fractions of SVs and errors equal ν · (∫ ζ(x) dP̃(x))⁻¹, where P̃ is the asymptotic distribution of SVs over x.

4 EXPERIMENTS AND DISCUSSION

In the experiments, we used the optimizer LOQO (http://www.princeton.edu/~rvdb/). This has the serendipitous advantage that the primal variables b and ε can be recovered as the dual variables of the Wolfe dual (9) (i.e. the double dual variables) fed into the optimizer. In Fig. 2, the task was to estimate a regression of a noisy sinc function, given ℓ examples (x_i, y_i), with x_i drawn uniformly from [−3, 3], and y_i = sin(πx_i)/(πx_i) + υ_i, with υ_i drawn from a Gaussian with zero mean and variance σ². We used the default parameters ℓ = 50, C = 100, σ = 0.2, and the RBF kernel k(x, x') = exp(−|x − x'|²). Figure 3 gives an illustration of how one can make use of parametric insensitivity models as proposed in Sec. 3. Using the proper model, the estimate gets much better. In the parametric case, we used ν = 0.1 and ζ(x) = sin²((2π/3)x), which, due to ∫ ζ(x) dP(x) = 1/2, corresponds to our standard choice ν = 0.2 in ν-SVR (cf. Proposition 3). The experimental findings are consistent with the asymptotics predicted theoretically even if we assume a uniform distribution of SVs: for ℓ = 200, we got 0.24 and 0.19 for the fraction of SVs and errors, respectively. This method allows the incorporation of prior knowledge into the loss function. Although this approach at first glance seems fundamentally different from incorporating prior knowledge directly into the kernel (Schölkopf et al., 1998b), from the point of view of statistical

Figure 2: Left: ν-SV regression with ν = 0.2 (top) and ν = 0.8 (bottom). The larger ν allows more points to lie outside the tube (see Sec. 2). The algorithm automatically adjusts ε to 0.22 (top) and 0.04 (bottom). Shown are the sinc function (dotted), the regression f and the tube f ± ε.
Middle: ν-SV regression on data with noise σ = 0 (top) and σ = 1 (bottom). In both cases, ν = 0.2. The tube width automatically adjusts to the noise (top: ε = 0, bottom: ε = 1.19). Right: ε-SV regression (Vapnik, 1995) on data with noise σ = 0 (top) and σ = 1 (bottom). In both cases, ε = 0.2; this choice, which has to be specified a priori, is ideal for neither case: in the top figure, the regression estimate is biased; in the bottom figure, ε does not match the external noise (cf. Smola et al., 1998).

Figure 3: Toy example, using prior knowledge about an x-dependence of the noise. Additive noise (σ = 1) was multiplied by sin²((2π/3)x). Left: the same function was used as ζ in a parametric insensitivity tube (Sec. 3). Right: ν-SVR with standard tube.
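The toy setup of Fig. 2 is easy to reproduce up to the point where the QP solver takes over (the paper hands the problem to LOQO; the random seed and generator below are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)          # seed is an arbitrary choice
ell, sigma = 50, 0.2                    # default parameters from the text
x = rng.uniform(-3.0, 3.0, size=ell)
y = np.sinc(x) + rng.normal(0.0, sigma, size=ell)  # np.sinc(x) = sin(pi x)/(pi x)

def rbf_gram(x1, x2):
    """RBF kernel k(x, x') = exp(-|x - x'|^2) used in the experiments."""
    return np.exp(-(x1[:, None] - x2[None, :]) ** 2)

K = rbf_gram(x, x)   # Gram matrix that would be fed to the QP of Sec. 2
```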
Table 1: Results for the Boston housing benchmark; top: ν-SVR, bottom: ε-SVR. MSE: mean squared errors, STD: standard deviations thereof (100 trials), Errors: fraction of training points outside the tube, SVs: fraction of training points which are SVs.

ν            0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
automatic ε  2.6   1.7   1.2   0.8   0.6   0.3   0.0   0.0   0.0   0.0
MSE          9.4   8.7   9.3   9.5   10.0  10.6  11.3  11.3  11.3  11.3
STD          6.4   6.8   7.6   7.9   8.4   9.0   9.6   9.5   9.5   9.5
Errors       0.0   0.1   0.2   0.2   0.3   0.4   0.5   0.5   0.5   0.5
SVs          0.3   0.4   0.6   0.7   0.8   0.9   1.0   1.0   1.0   1.0

ε            0     1     2     3     4     5     6     7     8     9     10
MSE          11.3  9.5   8.8   9.7   11.2  13.1  15.6  18.2  22.1  27.0  34.3
STD          9.5   7.7   6.8   6.2   6.3   6.0   6.1   6.2   6.6   7.3   8.4
Errors       0.5   0.2   0.1   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0
SVs          1.0   0.6   0.4   0.3   0.2   0.1   0.1   0.1   0.1   0.1   0.1

learning theory the two approaches are closely related: in both cases, the structure of the loss-function-induced class of functions (which is the object of interest for generalization error bounds) is customized; in the first case, by changing the loss function; in the second case, by changing the class of functions that the estimate is taken from. Empirical studies using ε-SVR have reported excellent performance on the widely used Boston housing regression benchmark set (Stitson et al., 1999). Due to Proposition 2, the only difference between ν-SVR and standard ε-SVR lies in the fact that different parameters, ε vs. ν, have to be specified a priori. Consequently, we are in this experiment only interested in these parameters and simply adjusted C and the width 2σ² in k(x, y) = exp(−||x − y||²/(2σ²)) as Schölkopf et al. (1997): we used 2σ² = 0.3 · N, where N = 13 is the input dimensionality, and C/ℓ = 10 · 50 (i.e. the original value of 10 was corrected since in the present case, the maximal y-value is 50). We performed 100 runs, where each time the overall set of 506 examples was randomly split into a training set of ℓ = 481 examples and a test set of 25 examples.
Table 1 shows that in a wide range of ν (note that only 0 ≤ ν ≤ 1 makes sense), we obtained performances which are close to the best performances that can be achieved by selecting ε a priori by looking at the test set. Finally, note that although we did not use validation techniques to select the optimal values for C and 2σ², we obtained performance which is state of the art (Stitson et al. (1999) report an MSE of 7.6 for ε-SVR using ANOVA kernels, and 11.7 for Bagging trees). Table 1 moreover shows that ν can be used to control the fraction of SVs/errors.

Discussion. The theoretical and experimental analysis suggest that ν provides a way to control an upper bound on the number of training errors which is tighter than the one used in the soft margin hyperplane (Vapnik, 1995). In many cases, this makes it a parameter which is more convenient than the one in ε-SVR. Asymptotically, it directly controls the number of Support Vectors, and the latter can be used to give a leave-one-out generalization bound (Vapnik, 1995). In addition, ν characterizes the compression ratio: it suffices to train the algorithm only on the SVs, leading to the same solution (Schölkopf et al., 1995). In ε-SVR, the tube width ε must be specified a priori; in ν-SVR, which generalizes the idea of the trimmed mean, it is computed automatically. Desirable properties of ε-SVR, including the formulation as a definite quadratic program, and the sparse SV representation of the solution, are retained. We are optimistic that in many applications, ν-SVR will be more robust than ε-SVR. Among these should be the reduced set algorithm of Osuna and Girosi (1999), which approximates the SV pattern recognition decision surface by ε-SVR. Here, ν should give a direct handle on the desired speed-up.
One of the immediate questions that a ν-approach to SV regression raises is whether a similar algorithm is possible for the case of pattern recognition. This question has recently been answered in the affirmative (Schölkopf et al., 1998c). Since the pattern recognition algorithm (Vapnik, 1995) does not use ε, the only parameter that we can dispose of by using ν is the regularization constant C. This leads to a dual optimization problem with a homogeneous quadratic form, and ν lower bounding the sum of the Lagrange multipliers. Whether we could have abolished C in the regression case, too, is an open problem.

Acknowledgement This work was supported by the ARC and the DFG (# Ja 379171).

References

B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152, Pittsburgh, PA, 1992. ACM Press.

E. Osuna and F. Girosi. Reducing run-time complexity in support vector machines. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 271-283. MIT Press, Cambridge, MA, 1999.

B. Schölkopf, C. Burges, and V. Vapnik. Extracting support data for a given task. In U. M. Fayyad and R. Uthurusamy, editors, Proceedings, First International Conference on Knowledge Discovery & Data Mining. AAAI Press, Menlo Park, CA, 1995.

B. Schölkopf, P. Bartlett, A. Smola, and R. Williamson. Support vector regression with automatic accuracy control. In L. Niklasson, M. Boden, and T. Ziemke, editors, Proceedings of the 8th International Conference on Artificial Neural Networks, Perspectives in Neural Computing, pages 111-116, Berlin, 1998a. Springer Verlag.

B. Schölkopf, P. Simard, A. Smola, and V. Vapnik. Prior knowledge in support vector kernels. In M. Jordan, M. Kearns, and S. Solla, editors, Advances in Neural Information Processing Systems 10, pages 640-646, Cambridge, MA, 1998b. MIT Press.
B. Schölkopf, A. Smola, R. Williamson, and P. Bartlett. New support vector algorithms. 1998c. NeuroCOLT2-TR 1998-031; cf. http://www.neurocolt.com

B. Schölkopf, K. Sung, C. Burges, F. Girosi, P. Niyogi, T. Poggio, and V. Vapnik. Comparing support vector machines with Gaussian kernels to radial basis function classifiers. IEEE Trans. Sign. Processing, 45:2758-2765, 1997.

A. Smola, N. Murata, B. Schölkopf, and K.-R. Müller. Asymptotically optimal choice of ε-loss for support vector machines. In L. Niklasson, M. Boden, and T. Ziemke, editors, Proceedings of the 8th International Conference on Artificial Neural Networks, Perspectives in Neural Computing, pages 105-110, Berlin, 1998. Springer Verlag.

M. Stitson, A. Gammerman, V. Vapnik, V. Vovk, C. Watkins, and J. Weston. Support vector regression with ANOVA decomposition kernels. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 285-291. MIT Press, Cambridge, MA, 1999.

V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
|
1998
|
46
|
1,544
|
Lazy Learning Meets the Recursive Least Squares Algorithm

Mauro Birattari, Gianluca Bontempi, and Hugues Bersini
Iridia - Université Libre de Bruxelles, Bruxelles, Belgium
{mbiro, gbonte, bersini}@ulb.ac.be

Abstract

Lazy learning is a memory-based technique that, once a query is received, extracts a prediction by interpolating locally the neighboring examples of the query which are considered relevant according to a distance measure. In this paper we propose a data-driven method to select on a query-by-query basis the optimal number of neighbors to be considered for each prediction. As an efficient way to identify and validate local models, the recursive least squares algorithm is introduced in the context of local approximation and lazy learning. Furthermore, besides the winner-takes-all strategy for model selection, a local combination of the most promising models is explored. The method proposed is tested on six different datasets and compared with a state-of-the-art approach.

1 Introduction

Lazy learning (Aha, 1997) postpones all the computation until an explicit request for a prediction is received. The request is fulfilled by interpolating locally the examples considered relevant according to a distance measure. Each prediction therefore requires a local modeling procedure that can be seen as composed of a structural and of a parametric identification. The parametric identification consists in the optimization of the parameters of the local approximator. On the other hand, structural identification involves, among other things, the selection of a family of local approximators, the selection of a metric to evaluate which examples are more relevant, and the selection of the bandwidth which indicates the size of the region in which the data are correctly modeled by members of the chosen family of approximators. For a comprehensive tutorial on local learning and for further references see Atkeson et al. (1997).
As far as the problem of bandwidth selection is concerned, different approaches exist. The choice of the bandwidth may be performed either based on some a priori assumption or on the data themselves. A further sub-classification of data-driven approaches is of interest here. On the one hand, a constant bandwidth may be used; in this case it is set by a global optimization that minimizes an error criterion over the available dataset. On the other hand, the bandwidth may be selected locally and tailored for each query point. In the present work, we propose a method that belongs to the latter class of local data-driven approaches. Assuming a given fixed metric and local linear approximators, the method we introduce selects the bandwidth on a query-by-query basis by means of a local leave-one-out cross-validation. The problem of bandwidth selection is reduced to the selection of the number k of neighboring examples which are given a non-zero weight in the local modeling procedure. Each time a prediction is required for a specific query point, a set of local models is identified, each including a different number of neighbors. The generalization ability of each model is then assessed through a local cross-validation procedure. Finally, a prediction is obtained either combining or selecting the different models on the basis of some statistic of their cross-validation errors. The main reason to favor a query-by-query bandwidth selection is that it allows better adaptation to the local characteristics of the problem at hand. Moreover, this approach is able to handle directly the case in which the database is updated on-line (Bontempi et al., 1997). On the other hand, a globally optimized bandwidth approach would, in principle, require the global optimization to be repeated each time the distribution of the examples changes.
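The query-by-query selection of k can be sketched as follows. For brevity the local model here is a constant (the mean of the k neighbors), whose leave-one-out residuals have the closed form e_i = k(y_i − ȳ)/(k − 1); the paper itself uses local linear models validated via recursive least squares:

```python
import numpy as np

def select_k_loo(X, y, xq, ks):
    """Pick the number of neighbors k at query xq by local leave-one-out
    cross-validation.  The local model is simplified to the neighborhood
    mean, whose LOO residuals are k * (y_i - mean) / (k - 1)."""
    order = np.argsort(np.linalg.norm(X - xq, axis=1))
    best_k, best_err = None, np.inf
    for k in ks:
        yk = y[order[:k]]
        loo_residuals = (yk - yk.mean()) * k / (k - 1)
        err = np.mean(loo_residuals ** 2)
        if err < best_err:
            best_k, best_err = k, err
    return best_k

# A neighborhood that is flat near the query but jumpy farther away
# should favor the smaller k.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 5.0, -5.0, 5.0, -5.0, 5.0])
```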
The major contribution of the paper consists in the adoption of the recursive least squares algorithm in the context of lazy learning. This is an appealing and efficient solution to the intrinsically incremental problem of identifying and validating a sequence of local linear models centered in the query point, each including a growing number of neighbors. It is worth noticing here that a leave-one-out cross-validation of each model considered does not involve any significant computational overload, since it is obtained through the PRESS statistic (Myers, 1990), which simply uses partial results returned by the recursive least squares algorithm. Schaal and Atkeson (1998) already used the recursive least squares algorithm for the incremental update of a set of local models. In the present paper, we use this algorithm for the first time in a query-by-query perspective, as an effective way to explore the neighborhood of each query point. As a second contribution, we propose a comparison, on a local scale, between a competitive and a cooperative approach to model selection. On the problem of extracting a final prediction from a set of alternatives, we compared a winner-takes-all strategy with a strategy based on the combination of estimators (Wolpert, 1992). In Section 5 an experimental analysis of the recursive algorithm for local identification and validation is presented. The algorithm proposed, used in conjunction with different strategies for model selection or combination, is compared experimentally with Cubist, the rule-based tool developed by Ross Quinlan for generating piecewise-linear models.
2 Local Weighted Regression

Given two variables x ∈ R^m and y ∈ R, let us consider the mapping f: R^m → R, known only through a set of n examples {(x_i, y_i)}_{i=1}^n obtained as follows:

    y_i = f(x_i) + ε_i,    (1)

where, for each i, ε_i is a random variable such that E[ε_i] = 0 and E[ε_i ε_j] = 0 for all j ≠ i, and such that E[ε_i^m] = μ_m(x_i) for all m ≥ 2, where μ_m(·) is the unknown mth moment of the distribution of ε_i and is defined as a function of x_i. In particular, for m = 2, the last of the above mentioned properties implies that no assumption of global homoscedasticity is made.

The problem of local regression can be stated as the problem of estimating the value that the regression function f(x) = E[y|x] assumes for a specific query point x, using information pertaining only to a neighborhood of x. Given a query point x_q, and under the hypothesis of local homoscedasticity of ε_i, the parameter β of a local linear approximation of f(·) in a neighborhood of x_q can be obtained solving the local polynomial regression:

    β̂ = argmin_β Σ_{i=1}^n (y_i − x_i'β)^2 K(d(x_i, x_q)/h),    (2)

where, given a metric on the space R^m, d(x_i, x_q) is the distance from the query point to the ith example, K(·) is a weight function, h is the bandwidth, and where a constant value 1 has been appended to each vector x_i in order to consider a constant term in the regression. In matrix notation, the solution of the above stated weighted least squares problem is given by:

    β̂ = (X'W'WX)^{-1} X'W'Wy = (Z'Z)^{-1} Z'v = PZ'v,    (3)

where X is a matrix whose ith row is x_i', y is a vector whose ith element is y_i, W is a diagonal matrix whose ith diagonal element is w_ii = sqrt(K(d(x_i, x_q)/h)), Z = WX, v = Wy, and the matrix X'W'WX = Z'Z is assumed to be non-singular so that its inverse P = (Z'Z)^{-1} is defined. Once the local linear approximation is obtained, a prediction of y_q = f(x_q) is finally given by:

    ŷ_q = x_q'β̂.    (4)
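As a concrete illustration, the weighted least squares solution of Eq. 3 and the prediction of Eq. 4 can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the Gaussian weight function K(·) and the Euclidean metric are assumptions made for the example.

```python
import numpy as np

def local_weighted_fit(X, y, xq, h):
    """Fit a local linear model around the query point xq (Eqs. 2-4).

    X: (n, m) examples, y: (n,) targets, h: bandwidth.
    Returns the parameter vector beta and the prediction at xq.
    """
    n = X.shape[0]
    # Append a constant 1 to each example to model an intercept term.
    Xa = np.hstack([X, np.ones((n, 1))])
    xqa = np.append(xq, 1.0)
    d = np.linalg.norm(X - xq, axis=1)        # distances d(x_i, x_q)
    K = np.exp(-(d / h) ** 2)                 # a Gaussian kernel (one possible K)
    W = np.diag(np.sqrt(K))                   # W_ii = sqrt(K(d_i / h))
    Z, v = W @ Xa, W @ y
    beta = np.linalg.solve(Z.T @ Z, Z.T @ v)  # beta = (Z'Z)^{-1} Z'v  (Eq. 3)
    return beta, xqa @ beta                   # prediction y_q = x_q' beta  (Eq. 4)
```

For noiseless linear data the weighted fit recovers the underlying parameters exactly, whatever the kernel, as long as Z'Z is non-singular.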
Moreover, exploiting the linearity of the local approximator, a leave-one-out cross-validation estimate of the error variance E[(y_q − ŷ_q)^2] can be obtained without any significant overload. In fact, using the PRESS statistic (Myers, 1990), it is possible to calculate the error e_j^cv = y_j − x_j'β̂_{−j} without explicitly identifying the parameters β̂_{−j} from the examples available with the jth removed. The formulation of the PRESS statistic for the case at hand is the following:

    e_j^cv = y_j − x_j'β̂_{−j} = (y_j − x_j'PZ'v) / (1 − z_j'Pz_j) = (y_j − x_j'β̂) / (1 − h_jj),    (5)

where z_j' is the jth row of Z and therefore z_j = w_jj x_j, and where h_jj is the jth diagonal element of the Hat matrix H = ZPZ' = Z(Z'Z)^{-1}Z'.

3 Recursive Local Regression

In what follows, for the sake of simplicity, we will focus on linear approximators. An extension to generic polynomial approximators of any degree is straightforward. We will also assume that a metric on the space R^m is given. All the attention will thus be centered on the problem of bandwidth selection. If the indicator function

    K(d(x_i, x_q)/h) = 1 if d(x_i, x_q) ≤ h, and 0 otherwise,    (6)

is adopted as the weight function K(·), the optimization of the parameter h can be conveniently reduced to the optimization of the number k of neighbors to which a unitary weight is assigned in the local regression evaluation. In other words, we reduce the problem of bandwidth selection to a search in the space of h(k) = d(x(k), x_q), where x(k) is the kth nearest neighbor of the query point. The main advantage deriving from the adoption of the weight function defined in Eq. 6 is that, simply by updating the parameter β̂(k) of the model identified using the k nearest neighbors, it is straightforward and inexpensive to obtain β̂(k + 1).
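The PRESS identity of Eq. 5 can be verified numerically: the leave-one-out residuals computed from a single full fit coincide with those obtained by actually removing each example and refitting. A minimal sketch (unweighted case, i.e. Z = X and v = y):

```python
import numpy as np

def press_errors(Z, v):
    """Leave-one-out errors via the PRESS statistic (Eq. 5), without refitting."""
    P = np.linalg.inv(Z.T @ Z)
    beta = P @ Z.T @ v
    h = np.einsum('ij,jk,ik->i', Z, P, Z)   # diagonal of the Hat matrix Z P Z'
    return (v - Z @ beta) / (1.0 - h)
```

Each entry of the returned vector equals the residual of a model refit with that example held out, at the cost of a single matrix inversion.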
In fact, performing a step of the standard recursive least squares algorithm (Bierman, 1977), we have:

    P(k+1) = P(k) − [P(k) x(k+1) x'(k+1) P(k)] / [1 + x'(k+1) P(k) x(k+1)]
    γ(k+1) = P(k+1) x(k+1)
    e(k+1) = y(k+1) − x'(k+1) β̂(k)
    β̂(k+1) = β̂(k) + γ(k+1) e(k+1)    (7)

where P(k) = (Z'Z)^{-1} when h = h(k), and where x(k+1) is the (k+1)th nearest neighbor of the query point. Moreover, once the matrix P(k+1) is available, the leave-one-out cross-validation errors can be directly calculated without the need of any further model identification:

    e_j^cv(k+1) = (y_j − x_j'β̂(k+1)) / (1 − x_j'P(k+1)x_j).    (8)

It will be useful in the following to define, for each value of k, the [k × 1] vector e^cv(k) that contains all the leave-one-out errors associated to the model β̂(k). Once an initialization β̂(0) = β̄ and P(0) = P̄ is given, Eq. 7 and Eq. 8 recursively evaluate, for different values of k, a local approximation of the regression function f(·), a prediction of the value of the regression function in the query point, and the vector of leave-one-out errors from which it is possible to extract an estimate of the variance of the prediction error. Notice that β̄ is an a priori estimate of the parameter and P̄ is the covariance matrix that reflects the reliability of β̄ (Bierman, 1977). For a non-reliable initialization, the following is usually adopted: P̄ = λI, with λ large and where I is the identity matrix.

4 Local Model Selection and Combination

The recursive algorithm described by Eq. 7 and Eq. 8 returns, for a given query point x_q, a set of predictions ŷ_q(k) = x_q'β̂(k), together with a set of associated leave-one-out error vectors e^cv(k). From the information available, a final prediction ŷ_q of the value of the regression function can be obtained in different ways.
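A sketch of the recursion of Eq. 7, producing for each k the parameters β̂(k) and the leave-one-out error vector e^cv(k) of Eq. 8. This is illustrative, not the authors' code: the rows of X are assumed already sorted by distance from the query point, and the value of λ is arbitrary.

```python
import numpy as np

def recursive_local_models(X, y, lam=1e6):
    """Grow a sequence of local linear models beta(1), beta(2), ...
    over neighbors sorted by distance from the query point.
    Returns the parameters and leave-one-out errors for each k (Eqs. 7-8)."""
    m = X.shape[1]
    P = lam * np.eye(m)        # P(0) = lambda * I (non-reliable initialization)
    beta = np.zeros(m)         # beta(0) = 0
    betas, cv_errors = [], []
    for k in range(X.shape[0]):
        x, yk = X[k], y[k]
        Px = P @ x
        P = P - np.outer(Px, Px) / (1.0 + x @ Px)   # covariance update
        gamma = P @ x                               # gain gamma(k+1)
        e = yk - x @ beta                           # innovation e(k+1)
        beta = beta + gamma * e                     # beta(k+1)
        betas.append(beta.copy())
        Xk, yall = X[:k + 1], y[:k + 1]
        denom = 1.0 - np.einsum('ij,jk,ik->i', Xk, P, Xk)
        cv_errors.append((yall - Xk @ beta) / denom)  # Eq. 8
    return betas, cv_errors
```

Each iteration costs O(m^2), so scanning all candidate neighborhoods is barely more expensive than a single batch fit.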
Two main paradigms deserve to be considered: the first is based on the selection of the best approximator according to a given criterion, while the second returns a prediction as a combination of more local models. If the selection paradigm, frequently called winner-takes-all, is adopted, the most natural way to extract a final prediction ŷ_q consists in comparing the predictions obtained for each value of k on the basis of the classical mean square error criterion:

    k̂ = argmin_k MSE(k) = argmin_k [Σ_{i=1}^k w_i (e_i^cv(k))^2] / [Σ_{i=1}^k w_i],    (9)

where w_i are weights that can be conveniently used to discount each error according to the distance from the query point to the point to which the error corresponds (Atkeson et al., 1997). As an alternative to the winner-takes-all paradigm, we also explored the effectiveness of local combinations of estimates (Wolpert, 1992). Adopting also in this case the mean square error criterion, the final prediction of the value y_q is obtained as a weighted average of the best b models, where b is a parameter of the algorithm. Suppose the predictions ŷ_q(k) and the error vectors e^cv(k) have been ordered creating a sequence of integers {k_i} so that MSE(k_i) ≤ MSE(k_j) for all i < j. The prediction of y_q is given by

    ŷ_q = [Σ_{i=1}^b ζ_i ŷ_q(k_i)] / [Σ_{i=1}^b ζ_i],    (10)

where the weights are the inverse of the mean square errors: ζ_i = 1/MSE(k_i). This is an example of the generalized ensemble method (Perrone & Cooper, 1993).

5 Experiments and Results

The experimental evaluation of the incremental local identification and validation algorithm was performed on six datasets.

Table 1: A summary of the characteristics of the data sets considered.

    Dataset               Housing   Cpu   Prices   Mpg   Servo   Ozone
    Number of examples        506   209      159   392     167     330
    Number of regressors       13     6       16     7       8       8
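Given the predictions ŷ_q(k) and error vectors e^cv(k) returned by the recursion, the winner-takes-all rule of Eq. 9 and the local combination of Eq. 10 can be sketched as follows (illustrative; unit weights w_i are assumed in the MSE):

```python
import numpy as np

def mse(cv_err):
    """Mean square leave-one-out error of one model (Eq. 9, unit weights)."""
    return np.mean(np.asarray(cv_err) ** 2)

def winner_takes_all(preds, cv_errors):
    """Return the prediction of the model with the smallest MSE (Eq. 9)."""
    scores = [mse(e) for e in cv_errors]
    return preds[int(np.argmin(scores))]

def combine_best(preds, cv_errors, b=2):
    """Weighted average of the b best models, weights 1/MSE (Eq. 10)."""
    scores = np.array([mse(e) for e in cv_errors])
    best = np.argsort(scores)[:b]
    zeta = 1.0 / scores[best]                 # zeta_i = 1 / MSE(k_i)
    return float(np.dot(zeta, np.asarray(preds)[best]) / zeta.sum())
```

The combination degrades gracefully toward the winner-takes-all answer as one model's cross-validation error dominates the others.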
The first five, described by Quinlan (1993), were obtained from the UCI Repository of machine learning databases (Merz & Murphy, 1998), while the last one was provided by Leo Breiman. A summary of the characteristics of each dataset is presented in Table 1. The methods compared adopt the recursive identification and validation algorithm, combined with different strategies for model selection or combination. We also considered two approaches in which k is selected globally:

lb1: Local bandwidth selection for linear local models. The number of neighbors is selected on a query-by-query basis and the prediction returned is the one of the best model according to the mean square error criterion.

lb0: Local bandwidth selection for constant local models. The algorithm for constant models is derived directly from the recursive method described in Eq. 7 and Eq. 8. The best model is selected according to the mean square error criterion.

lbC: Local combination of estimators. This is an example of the method described in Eq. 10. On the datasets proposed, for each query the best 2 linear local models and the best 2 constant models are combined.

gb1: Global bandwidth selection for linear local models. The value of k is obtained minimizing the prediction error in 20-fold cross-validation on the dataset available. This value is then used for all the query points.

gb0: Global bandwidth selection for constant local models. As in gb1, the value of k is optimized globally and kept constant for all the queries.

Table 2: Mean absolute error on unseen cases.

    Method    Housing     Cpu   Prices    Mpg   Servo   Ozone
    lb1          2.21   28.38     1509   1.94    0.48    3.52
    lb0          2.60   31.54     1627   1.97    0.32    3.33
    lbC          2.12   26.79     1488   1.83    0.29    3.31
    gb1          2.30   28.69     1492   1.92    0.52    3.46
    gb0          2.59   32.19     1639   1.99    0.34    3.19
    Cubist       2.17   28.37     1331   1.90    0.36    3.15

Table 3: Relative error (%) on unseen cases.
    Method    Housing     Cpu   Prices     Mpg   Servo   Ozone
    lb1         12.63    9.20    15.87   12.65   28.66   35.25
    lb0         18.06   20.37    22.19   12.64   22.04   31.11
    lbC         12.35    9.29    17.62   11.82   19.72   30.28
    gb1         13.47    9.93    15.95   12.83   30.46   32.58
    gb0         17.99   21.43    22.29   13.48   24.30   28.21
    Cubist      16.02   12.71    11.67   12.57   18.53   26.59

As far as the metric is concerned, we adopted a global Euclidean metric based on the relative influence (relevance) of the regressors (Friedman, 1994). We are confident that the adoption of a local metric could improve the performance of our lazy learning method. The results of the methods introduced are compared with those we obtained, in the same experimental settings, with Cubist, the rule-based tool developed by Quinlan for generating piecewise-linear models. Each approach was tested on each dataset using the same 10-fold cross-validation strategy. Each dataset was divided randomly into 10 groups of nearly equal size. In turn, each of these groups was used as a testing set while the remaining ones together provided the examples. Thus all the methods performed a prediction on the same unseen cases, using for each of them the same set of examples. In Table 2 we present the results obtained by all the methods, averaged over the 10 cross-validation groups. Since the methods were compared on the same examples in exactly the same conditions, the sensitive one-tailed paired test of significance can be used. In what follows, by "significantly better" we mean better at least at the 5% significance level. The first consideration about the results concerns the local combination of estimators. According to Table 2, the method lbC performs on average always better than the winner-takes-all linear and constant approaches. On two datasets lbC is significantly better than both lb1 and lb0; on three datasets it is significantly better than one of the two, and better on average than the other.
The second consideration is about the comparison between our query-by-query bandwidth selection and a global optimization of the number of neighbors: on average, lb1 and lb0 perform better than their counterparts gb1 and gb0. On two datasets lb1 is significantly better than gb1, while it is about the same on the other four. On one dataset lb0 is significantly better than gb0. As far as the comparison with Cubist is concerned, the recursive lazy identification and validation proposed obtains results comparable with those obtained by the state-of-the-art method implemented in Cubist. On the six datasets, lbC performs once significantly better than Cubist, and once significantly worse. The second index of performance we investigated is the relative error, defined as the mean square error on unseen cases, normalized by the variance of the test set. The relative errors are presented in Table 3 and show a picture similar to Table 2, although the mean square errors considered here penalize larger absolute errors.

6 Conclusion and Future Work

The experimental results confirm that the recursive least squares algorithm can be effectively used in a local context. Despite the trivial metric adopted, the local combination of estimators, identified and validated recursively, proved able to compete with a state-of-the-art approach. Future work will focus on the problem of local metric selection. Moreover, we will explore more sophisticated ways to combine local estimators and we will extend this work to polynomial approximators of higher degree.

Acknowledgments

The work of Mauro Birattari was supported by the FIRST program of the Région Wallonne, Belgium. The work of Gianluca Bontempi was supported by the European Union TMR Grant FMBICT960692. The authors thank Ross Quinlan and gratefully acknowledge using his software Cubist. For more details on Cubist see http://www.rulequest.com.
We also thank Leo Breiman for the dataset ozone and the UCI Repository for the other datasets used in this paper.

References

Aha D. W. 1997. Editorial. Artificial Intelligence Review, 11(1-5), 1-6. Special Issue on Lazy Learning.

Atkeson C. G., Moore A. W. & Schaal S. 1997. Locally weighted learning. Artificial Intelligence Review, 11(1-5), 11-73.

Bierman G. J. 1977. Factorization Methods for Discrete Sequential Estimation. New York, NY: Academic Press.

Bontempi G., Birattari M. & Bersini H. 1997. Lazy learning for local modeling and control design. International Journal of Control. Accepted for publication.

Friedman J. H. 1994. Flexible metric nearest neighbor classification. Tech. rept. Department of Statistics, Stanford University.

Merz C. J. & Murphy P. M. 1998. UCI Repository of machine learning databases.

Myers R. H. 1990. Classical and Modern Regression with Applications. Boston, MA: PWS-KENT.

Perrone M. P. & Cooper L. N. 1993. When networks disagree: Ensemble methods for hybrid neural networks. Pages 126-142 of Mammone R. J. (ed), Artificial Neural Networks for Speech and Vision. Chapman and Hall.

Quinlan J. R. 1993. Combining instance-based and model-based learning. Pages 236-243 of Machine Learning: Proceedings of the Tenth International Conference. Morgan Kaufmann.

Schaal S. & Atkeson C. G. 1998. Constructive incremental learning from only local information. Neural Computation, 10(8), 2047-2084.

Wolpert D. 1992. Stacked generalization. Neural Networks, 5, 241-259.
Reinforcement Learning for Trading

John Moody and Matthew Saffell*
Oregon Graduate Institute, CSE Dept.
P.O. Box 91000, Portland, OR 97291-1000
{moody, saffell}@cse.ogi.edu

Abstract

We propose to train trading systems by optimizing financial objective functions via reinforcement learning. The performance functions that we consider are profit or wealth, the Sharpe ratio and our recently proposed differential Sharpe ratio for online learning. In Moody & Wu (1997), we presented empirical results that demonstrate the advantages of reinforcement learning relative to supervised learning. Here we extend our previous work to compare Q-Learning to our Recurrent Reinforcement Learning (RRL) algorithm. We provide new simulation results that demonstrate the presence of predictability in the monthly S&P 500 Stock Index for the 25 year period 1970 through 1994, as well as a sensitivity analysis that provides economic insight into the trader's structure.

1 Introduction: Reinforcement Learning for Trading

The investor's or trader's ultimate goal is to optimize some relevant measure of trading system performance, such as profit, economic utility or risk-adjusted return. In this paper, we propose to use recurrent reinforcement learning to directly optimize such trading system performance functions, and we compare two different reinforcement learning methods. The first, Recurrent Reinforcement Learning, uses immediate rewards to train the trading systems, while the second (Q-Learning (Watkins 1989)) approximates discounted future rewards. These methodologies can be applied to optimizing systems designed to trade a single security or to trade portfolios. In addition, we propose a novel value function for risk-adjusted return that enables learning to be done online: the differential Sharpe ratio. Trading system profits depend upon sequences of interdependent decisions, and are thus path-dependent.
Optimal trading decisions, when the effects of transactions costs, market impact and taxes are included, require knowledge of the current system state. In Moody, Wu, Liao & Saffell (1998), we demonstrate that reinforcement learning provides a more elegant and effective means for training trading systems when transaction costs are included than do more standard supervised approaches.

*The authors are also with Nonlinear Prediction Systems.

Though much theoretical progress has been made in recent years in the area of reinforcement learning, there have been relatively few successful, practical applications of the techniques. Notable examples include Neuro-gammon (Tesauro 1989), the asset trader of Neuneier (1996), an elevator scheduler (Crites & Barto 1996) and a space-shuttle payload scheduler (Zhang & Dietterich 1996). In this paper we present results for reinforcement learning trading systems that outperform the S&P 500 Stock Index over a 25-year test period, thus demonstrating the presence of predictable structure in US stock prices. The reinforcement learning algorithms compared here include our new recurrent reinforcement learning (RRL) method (Moody & Wu 1997, Moody et al. 1998) and Q-Learning (Watkins 1989).

2 Trading Systems and Financial Performance Functions

2.1 Structure, Profit and Wealth for Trading Systems

We consider performance functions for systems that trade a single security with price series z_t. The trader is assumed to take only long, neutral or short positions F_t ∈ {−1, 0, 1} of constant magnitude. The constant magnitude assumption can be easily relaxed to enable better risk control. The position F_t is established or maintained at the end of each time interval t, and is re-assessed at the end of period t + 1. A trade is thus possible at the end of each time period, although nonzero trading costs will discourage excessive trading.
A trading system return R_t is realized at the end of the time interval (t−1, t] and includes the profit or loss resulting from the position F_{t−1} held during that interval and any transaction cost incurred at time t due to a difference in the positions F_{t−1} and F_t. In order to properly incorporate the effects of transactions costs, market impact and taxes in a trader's decision making, the trader must have internal state information and must therefore be recurrent. An example of a single asset trading system that takes into account transactions costs and market impact has the following decision function:

    F_t = F(θ_t; F_{t−1}, I_t)  with  I_t = {z_t, z_{t−1}, z_{t−2}, ...; y_t, y_{t−1}, y_{t−2}, ...},

where θ_t denotes the (learned) system parameters at time t and I_t denotes the information set at time t, which includes present and past values of the price series z_t and an arbitrary number of other external variables denoted y_t. Trading systems can be optimized by maximizing performance functions U(·) such as profit, wealth, utility functions of wealth or performance ratios like the Sharpe ratio. The simplest and most natural performance function for a risk-insensitive trader is profit. The transaction cost rate is denoted δ. Additive profits are appropriate to consider if each trade is for a fixed number of shares or contracts of security z_t. This is often the case, for example, when trading small futures accounts or when trading standard US$ FX contracts in dollar-denominated foreign currencies. With the definitions r_t = z_t − z_{t−1} and r_t^f = z_t^f − z_{t−1}^f for the price returns of a risky (traded) asset and a risk-free asset (like TBills) respectively, the additive profit accumulated over T time periods with trading position size μ > 0 is then defined as:

    P_T = Σ_{t=1}^T R_t = μ Σ_{t=1}^T { r_t^f + F_{t−1}(r_t − r_t^f) − δ |F_t − F_{t−1}| }    (1)

with P_0 = 0 and typically F_T = F_0 = 0. (See Moody et al. (1998) for a detailed discussion of multiple asset portfolios.)
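The additive profit of Eq. 1 can be computed directly from the price series and the position sequence. A minimal sketch (the price series, positions, and cost rate in the example are hypothetical):

```python
import numpy as np

def additive_profit(z, zf, F, delta=0.005, mu=1.0):
    """Accumulated additive profit P_T of Eq. 1.

    z:  price series of the risky asset, z_0 ... z_T (length T+1)
    zf: price series of the risk-free asset, same length
    F:  positions F_0 ... F_T in {-1, 0, 1}
    """
    r = np.diff(z)            # r_t = z_t - z_{t-1}
    rf = np.diff(zf)          # risk-free price returns
    T = len(r)
    # R_t = mu * ( rf_t + F_{t-1} (r_t - rf_t) - delta |F_t - F_{t-1}| )
    R = mu * (rf + F[:T] * (r - rf) - delta * np.abs(np.diff(F)))
    return R.sum()
```

Note that the position held during interval t is F_{t−1}, while the cost term charges the position change made at time t.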
Equation (1) holds for continuous quantities also. The wealth is defined as W_T = W_0 + P_T. Multiplicative profits are appropriate when a fixed fraction of accumulated wealth ν > 0 is invested in each long or short trade. Here, r_t = (z_t / z_{t−1} − 1) and r_t^f = (z_t^f / z_{t−1}^f − 1). If no short sales are allowed and the leverage factor is set fixed at ν = 1, the wealth at time T is:

    W_T = W_0 Π_{t=1}^T {1 + R_t} = W_0 Π_{t=1}^T {1 + (1 − F_{t−1}) r_t^f + F_{t−1} r_t} {1 − δ |F_t − F_{t−1}|}.    (2)

2.2 The Differential Sharpe Ratio for On-line Learning

Rather than maximizing profits, most modern fund managers attempt to maximize risk-adjusted return as advocated by Modern Portfolio Theory. The Sharpe ratio is the most widely-used measure of risk-adjusted return (Sharpe 1966). Denoting as before the trading system returns for period t (including transactions costs) as R_t, the Sharpe ratio is defined to be

    S_T = Average(R_t) / Standard Deviation(R_t)    (3)

where the average and standard deviation are estimated for periods t = {1, ..., T}. Proper on-line learning requires that we compute the influence on the Sharpe ratio of the return at time t. To accomplish this, we have derived a new objective function called the differential Sharpe ratio for on-line optimization of trading system performance (Moody et al. 1998). It is obtained by considering exponential moving averages of the returns and standard deviation of returns in (3), and expanding to first order in the decay rate η: S_t ≈ S_{t−1} + η dS_t/dη |_{η=0} + O(η²). Noting that only the first order term in this expansion depends upon the return R_t at time t, we define the differential Sharpe ratio as:

    D_t := dS_t/dη = (B_{t−1} ΔA_t − (1/2) A_{t−1} ΔB_t) / (B_{t−1} − A_{t−1}²)^{3/2}    (4)

where the quantities A_t and B_t are exponential moving estimates of the first and second moments of R_t:

    A_t = A_{t−1} + η ΔA_t = A_{t−1} + η (R_t − A_{t−1})
    B_t = B_{t−1} + η ΔB_t = B_{t−1} + η (R_t² − B_{t−1}).    (5)

Treating A_{t−1} and B_{t−1} as numerical constants, note that η in the update equations controls the magnitude of the influence of the return R_t on the Sharpe ratio S_t.
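The updates of Eq. 5 give a simple online procedure: at each step, compute ΔA_t and ΔB_t, evaluate the first-order term D_t = (B_{t−1} ΔA_t − ½ A_{t−1} ΔB_t) / (B_{t−1} − A_{t−1}²)^{3/2} derived in Moody et al. (1998), and then update the moving estimates. A minimal sketch (the value of η and the initialization are illustrative):

```python
def differential_sharpe(returns, eta=0.01, A0=0.0, B0=0.0):
    """Online evaluation of the differential Sharpe ratio D_t (Eqs. 4-5)."""
    A, B = A0, B0
    out = []
    for R in returns:
        dA = R - A                       # Delta A_t = R_t - A_{t-1}
        dB = R * R - B                   # Delta B_t = R_t^2 - B_{t-1}
        denom = (B - A * A) ** 1.5       # (B_{t-1} - A_{t-1}^2)^{3/2}
        D = (B * dA - 0.5 * A * dB) / denom if denom > 0 else 0.0
        out.append(D)
        A += eta * dA                    # moving estimates of Eq. 5
        B += eta * dB
    return out
```

Only the two scalars A and B are carried between steps, which is what makes the criterion usable for online gradient updates.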
Hence, the differential Sharpe ratio represents the influence of the trading return R_t realized at time t on S_t.

3 Reinforcement Learning for Trading Systems

The goal in using reinforcement learning to adjust the parameters of a system is to maximize the expected payoff or reward that is generated due to the actions of the system. This is accomplished through trial and error exploration of the environment. The system receives a reinforcement signal from its environment (a reward) that provides information on whether its actions are good or bad. The performance function at time T can be expressed as a function of the sequence of trading returns U_T = U(R_1, R_2, ..., R_T). Given a trading system model F_t(θ), the goal is to adjust the parameters θ in order to maximize U_T. This maximization for a complete sequence of T trades can be done off-line using dynamic programming or batch versions of recurrent reinforcement learning algorithms. Here we do the optimization on-line using a reinforcement learning technique. This reinforcement learning algorithm is based on stochastic gradient ascent. The gradient of U_T with respect to the parameters θ of the system after a sequence of T trades is

    dU_T(θ)/dθ = Σ_{t=1}^T dU_T/dR_t { (dR_t/dF_t)(dF_t/dθ) + (dR_t/dF_{t−1})(dF_{t−1}/dθ) }.    (6)

A simple on-line stochastic optimization can be obtained by considering only the term in (6) that depends on the most recently realized return R_t during a forward pass through the data:

    dU_t(θ)/dθ = dU_t/dR_t { (dR_t/dF_t)(dF_t/dθ) + (dR_t/dF_{t−1})(dF_{t−1}/dθ) }.    (7)

The parameters are then updated on-line using Δθ_t = ρ dU_t(θ_t)/dθ_t. Because of the recurrent structure of the problem (necessary when transaction costs are included), we use a reinforcement learning algorithm based on real-time recurrent learning (Williams & Zipser 1989). This approach, which we call recurrent reinforcement learning (RRL), is described in (Moody & Wu 1997, Moody et al.
1998) along with extensive simulation results.

4 Empirical Results: S&P 500 / TBill Asset Allocation

A long/short trading system is trained on monthly S&P 500 stock index and 3-month TBill data to maximize the differential Sharpe ratio. The S&P 500 target series is the total return index computed by reinvesting dividends. The 84 input series used in the trading systems include both financial and macroeconomic data. All data are obtained from Citibase, and the macroeconomic series are lagged by one month to reflect reporting delays. A total of 45 years of monthly data are used, from January 1950 through December 1994. The first 20 years of data are used only for the initial training of the system. The test period is the 25 year period from January 1970 through December 1994. The experimental results for the 25 year test period are true ex ante simulated trading results. For each year during 1970 through 1994, the system is trained on a moving window of the previous 20 years of data. For 1970, the system is initialized with random parameters. For the 24 subsequent years, the previously learned parameters are used to initialize the training. In this way, the system is able to adapt to changing market and economic conditions. Within the moving training window, the "RRL" systems use the first 10 years for stochastic optimization of system parameters, and the subsequent 10 years for validating early stopping of training. The networks are linear, and are regularized using quadratic weight decay during training with a regularization parameter of 0.01. The "Qtrader" systems use a bootstrap sample of the 20 year training window for training, and the final 10 years of the training window are used for validating early stopping of training. The networks are two-layer feedforward networks with 30 tanh units in the hidden layer.
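The online update Δθ_t = ρ dU_t/dθ_t of Eq. 7 can be sketched for a minimal one-unit trader. This is not the authors' 84-input network: the tanh position function, the profit objective U_t = R_t with no risk-free return, and all parameter values below are illustrative assumptions made for the sketch.

```python
import numpy as np

def rrl_train(r, eta=0.1, delta=0.001, epochs=50, rng=None):
    """One-unit RRL trader maximizing profit online (Eq. 7).

    Continuous position F_t = tanh(theta . u_t) with u_t = [1, r_t, F_{t-1}];
    reward R_t = F_{t-1} r_t - delta |F_t - F_{t-1}|  (mu = 1, rf = 0).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    theta = 0.1 * rng.standard_normal(3)
    for _ in range(epochs):
        F_prev, dF_prev = 0.0, np.zeros(3)
        for t in range(1, len(r)):
            u = np.array([1.0, r[t], F_prev])
            F = np.tanh(theta @ u)
            # Recurrent derivative dF_t/dtheta (real-time recurrent learning):
            dF = (1 - F ** 2) * (u + theta[2] * dF_prev)
            s = np.sign(F - F_prev)
            dRdF, dRdFp = -delta * s, r[t] + delta * s
            theta += eta * (dRdF * dF + dRdFp * dF_prev)   # Eq. 7 update
            F_prev, dF_prev = F, dF
    return theta
```

On a persistently positive return series, the trained trader settles into a long position, which is the qualitative behavior the update is meant to produce.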
4.1 Experimental Results The left panel in Figure 1 shows box plots summarizing the test performance for the full 25 year test period of the trading systems with various realizations of the initial system parameters over 30 trials for the "RRL" system, and 10 trials for the "Qtrader" system2 . The transaction cost is set at 0.5%. Profits are reinvested during trading, and multiplicative profits are used when calculating the wealth. The notches in the box plots indicate robust estimates of the 95% confidence intervals on the hypothesis that the median is equal to the performance of the buy and hold strategy. The horizontal lines show the performance of the "RRL" voting, "Qtrader" voting and buy and hold strategies for the same test period. The annualized monthly Sharpe ratios of the buy and hold strategy, the "Qtrader" voting strategy and the "RRL" voting strategy are 0.34, 0.63 and 0.83 respectively. The Sharpe ratios calculated here are for the returns in excess of the 3-month treasury bill rate. The right panel of Figure 1 shows results for following the strategy of taking positions based on a majority vote of the ensembles of trading systems compared with the buy and hold strategy. We can see that the trading systems go short the S&P 500 during critical periods, such as the oil price shock of 1974, the tight money periods of the early 1980's, the market correction of 1984 and the 1987 crash. This ability to take advantage of high treasury bill rates or to avoid periods of substantial stock market loss is the major factor in the long term success of these trading models. One exception is that the "RRL" trading system remains long during the 1991 stock market correction associated with the Persian Gulf war, though the "Qtrader" system does identify the correction. On the whole though, the "Qtrader" system trades much more frequently than the "RRL" system, and in the end does not perform as well on this data set. 
From these results we find that both trading systems outperform the buy and hold strategy, as measured by both accumulated wealth and Sharpe ratio. These differences are statistically significant and support the proposition that there is predictability in the U.S. stock and treasury bill markets during the 25 year period 1970 through 1994. A more detailed presentation of the "RRL" results appears in (Moody et al. 1998).

4.2 Gaining Economic Insight Through Sensitivity Analysis

A sensitivity analysis of the "RRL" systems was performed in an attempt to determine on which economic factors the traders are basing their decisions. Figure 2 shows the absolute normalized sensitivities for 3 of the more salient input series as a function of time, averaged over the 30 members of the "RRL" committee. The sensitivity of input i is defined as:

    S_i = |dF/dx_i| / max_j |dF/dx_j|    (8)

where F is the unthresholded trading output and x_i denotes input i.

²Ten trials were done for the "Qtrader" system due to the amount of computation required in training the systems.

Figure 1: Test results for ensembles of simulations using the S&P 500 stock index and 3-month Treasury Bill data over the 1970-1994 time period. The solid curves correspond to the "RRL" voting system performance, dashed curves to the "Qtrader" voting system and the dashed and dotted curves indicate the buy and hold performance. The boxplots in (a) show the performance for the ensembles of "RRL" and "Qtrader" trading systems. The horizontal lines indicate the performance of the voting systems and the buy and hold strategy. Both systems significantly outperform the buy and hold strategy.
(b) shows the equity curves associated with the voting systems and the buy and hold strategy, as well as the voting trading signals produced by the systems. In both cases, the traders avoid the dramatic losses that the buy and hold strategy incurred during 1974 and 1987.

The time-varying sensitivities in Figure 2 emphasize the nonstationarity of economic relationships. For example, the yield curve slope (which measures inflation expectations) is found to be a very important factor in the 1970's, while trends in long term interest rates (measured by the 6 month difference in the AAA bond yield) become more important in the 1980's, and trends in short term interest rates (measured by the 6 month difference in the treasury bill yield) dominate in the early 1990's.

5 Conclusions and Extensions

In this paper, we have trained trading systems via reinforcement learning to optimize financial objective functions including our differential Sharpe ratio for online learning. We have also provided results that demonstrate the presence of predictability in the monthly S&P 500 Stock Index for the 25 year period 1970 through 1994. We have previously shown with extensive simulation results (Moody & Wu 1997, Moody et al. 1998) that the "RRL" trading system significantly outperforms systems trained using supervised methods for traders of both single securities and portfolios. The superiority of reinforcement learning over supervised learning is most striking when state-dependent transaction costs are taken into account. Here, we present results for asset allocation systems trained using two different reinforcement learning algorithms on a real, economic dataset. We find that the "Qtrader" system does not perform as well as the "RRL" system on the S&P 500 / TBill asset allocation problem, possibly due to its more frequent trading. This effect deserves further exploration.
In general, we find that Q-learning can suffer from the curse of dimensionality and is more difficult to use than our RRL approach. Finally, we apply sensitivity analysis to the trading systems, and find that certain interest rate variables have an influential role in making asset allocation decisions.

Reinforcement Learning for Trading

[Figure 2 plot: "Sensitivity Analysis: Avg. on RRL Committee"; legend: Yield Curve Slope, 6 Month Diff. in AAA Bond Yield, 6 Month Diff. in TBill Yield; x-axis 1975-1995]

Figure 2: Sensitivity traces for three of the inputs to the "RRL" trading system averaged over the ensemble of traders. The nonstationary relationships typical among economic variables are evident from the time-varying sensitivities. We also find that these influences exhibit nonstationarity over time.

Acknowledgements

We gratefully acknowledge support for this work from Nonlinear Prediction Systems and from DARPA under contract DAAH01-96-C-R026 and AASERT grant DAAH04-95-10485.

References

Crites, R. H. & Barto, A. G. (1996), Improving elevator performance using reinforcement learning, in D. S. Touretzky, M. C. Mozer & M. E. Hasselmo, eds, 'Advances in NIPS', Vol. 8, pp. 1017-1023.

Moody, J. & Wu, L. (1997), Optimization of trading systems and portfolios, in Y. Abu-Mostafa, A. N. Refenes & A. S. Weigend, eds, 'Decision Technologies for Financial Engineering', World Scientific, London, pp. 23-35. This is a slightly revised version of the original paper that appeared in the NNCM*96 Conference Record, published by Caltech, Pasadena, 1996.

Moody, J., Wu, L., Liao, Y. & Saffell, M. (1998), 'Performance functions and reinforcement learning for trading systems and portfolios', Journal of Forecasting 17, 441-470.

Neuneier, R. (1996), Optimal asset allocation using adaptive dynamic programming, in D. S. Touretzky, M. C. Mozer & M. E.
Hasselmo, eds, 'Advances in NIPS', Vol. 8, pp. 952-958.

Sharpe, W. F. (1966), 'Mutual fund performance', Journal of Business, pp. 119-138.

Tesauro, G. (1989), 'Neurogammon wins the computer olympiad', Neural Computation 1, 321-323.

Watkins, C. J. C. H. (1989), Learning from Delayed Rewards, PhD thesis, Cambridge University, Psychology Department.

Williams, R. J. & Zipser, D. (1989), 'A learning algorithm for continually running fully recurrent neural networks', Neural Computation 1, 270-280.

Zhang, W. & Dietterich, T. G. (1996), High-performance job-shop scheduling with a time-delay TD(λ) network, in D. S. Touretzky, M. C. Mozer & M. E. Hasselmo, eds, 'Advances in NIPS', Vol. 8, pp. 1024-1030.
Distributional Population Codes and Multiple Motion Models

Richard S. Zemel, University of Arizona, zemel@u.arizona.edu
Peter Dayan, Gatsby Computational Neuroscience Unit, dayan@gatsby.ucl.ac.uk

Abstract

Most theoretical and empirical studies of population codes make the assumption that underlying the neuronal activities is a unique and unambiguous value of an encoded quantity. However, population activities can contain additional information about such things as multiple values of or uncertainty about the quantity. We have previously suggested a method to recover extra information by treating the activities of the population of cells as coding for a complete distribution over the coded quantity rather than just a single value. We now show how this approach bears on psychophysical and neurophysiological studies of population codes for motion direction in tasks involving transparent motion stimuli. We show that, unlike standard approaches, it is able to recover multiple motions from population responses, and also that its output is consistent with both correct and erroneous human performance on psychophysical tasks.

A population code can be defined as a set of units whose activities collectively encode some underlying variable (or variables). The standard view is that population codes are useful for accurately encoding the underlying variable when the individual units are noisy. Current statistical approaches to interpreting population activity reflect this view, in that they determine the optimal single value that explains the observed activity pattern given a particular model of the noise (and possibly a loss function). In our work, we have pursued an alternative hypothesis: that the population encodes additional information about the underlying variable, including multiple values and uncertainty. The Distributional Population Coding (DPC) framework finds the best probability distribution across values that fits the population activity (Zemel, Dayan, & Pouget, 1998).
The DPC framework is appealing since it makes clear how extra information can be conveyed in a population code. In this paper, we use it to address a particular body of experimental data on transparent motion perception, due to Treue and colleagues (Hol & Treue, 1997; Rauber & Treue, 1997).

[Figure 1 plots: four panels, Δθ = 30°, 60°, 90°, 120°; response (0-100 spikes/s) vs. direction (-180° to 180°)]

Figure 1: Each of the four plots depicts a single MT cell response (spikes per second) to a transparent motion stimulus of a fixed directional difference (Δθ) between the two motion directions. The x-axis gives the average direction of stimulus motion relative to the cell's preferred direction (0°). From Treue, personal communication.

These transparent motion experiments provide an ideal test of the DPC framework, in that the neurophysiological data reveal how the population responds to multiple values in the stimuli, and the psychophysical data describe how these values are actually decoded, putatively from the population response. We investigate how standard methods fare on these data, and compare their performance to that of DPC.

1 RESPONSES TO MULTIPLE MOTIONS

Many investigators have examined neural and behavioral responses to stimuli composed of two patterns sliding across each other. These often create the impression of two separate surfaces moving in different directions. The general neurophysiological finding is that an MT cell's response to these stimuli can be characterized as the average of its responses to the individual components (van Wezel et al., 1996; Recanzone et al., 1997).
As an example, Figure 1 shows data obtained from single-cell recordings in MT to random dot patterns consisting of two distinct motion directions (Treue, personal communication). Each plot is for a different relative angle (Δθ) between the two directions. A plot can equivalently be viewed as the response of a population of MT cells having different preferred directions to a single presentation of a stimulus containing two directions. If Δθ is large, the activity profile is bimodal, but as the directional difference shrinks, the profile becomes unimodal. The population response to a Δθ = 30° motion stimulus is merely a wider version of the response to a stimulus containing a single direction of motion. However, this transition from bimodal to unimodal profiles in MT does not apparently correspond to subjects' percepts; subjects can reliably perceive both motions in superimposed transparent random patterns down to an angle of 10° (Mather & Moulden, 1983). If these MT activities play a determining role in motion perception, the challenge is to understand how the visual system can extract both motions from such unimodal (and bimodal) response profiles.

Figure 2: (A) The standard Bayesian population coding framework assumes that a single value is encoded in a set of noisy neural activities. (B) The distributional population coding framework shows how a distribution over θ can be encoded and then decoded from noisy population activities. From Zemel et al. (1998).

2 ENCODING & DECODING

Statistical population code decoding methods begin with the knowledge, collected over many experimental trials, of the tuning function f_i(θ) for each cell i, determined using simple stimuli (e.g., ones containing uni-directional motion).
Figure 2A cartoons the framework used for standard decoding. Starting on the bottom left, encoding consists of taking a value θ to be coded and representing it by the noisy activities r_i of the elements of a population code. In the simulations described here, we have used a population of 200 model MT cells, with tuning functions defined by random sampling within physiologically-determined ranges for the parameters: baseline b, amplitude a and width σ. The encoding model comes from the MT data: for a single motion, ⟨r_i | θ⟩ = f_i(θ) = b_i + a_i exp[-(θ - θ_i)² / 2σ_i²], while for two motions, ⟨r_i | θ_1, θ_2⟩ = ½[f_i(θ_1) + f_i(θ_2)]. The noise is taken to be independent and Poisson.

Standard Bayesian decoding starts with the activities r = {r_i} and generates a distribution P[θ|r]. Under the model with Poisson noise,

P[θ|r] ∝ P[θ] Π_i f_i(θ)^{r_i} e^{-f_i(θ)}.

This method thus provides a multiplicative kernel density estimate, tending to produce a sharp distribution for a single motion direction θ. A single estimate θ̂ can be extracted from P[θ|r] using a loss function.

For this method to decode successfully when there are two motions in the input (θ_1 and θ_2), the extracted distribution must at least have two modes. Standard Bayesian decoding fails to satisfy this requirement. First, if the response profile r is unimodal (cf. the 30° plot in Figure 1), convolution with unimodal kernels {log f_i(θ)} produces a unimodal log P[θ|r], peaked about the average of the two directions. The additive kernel density estimate, an alternative distributional decoding method proposed by Anderson (1995), suffers from the same problem, and also fails to be adequately sharp for single value inputs. Surprisingly, the standard Bayesian decoding method also fails on bimodal response profiles. If the baseline response b_i = 0, then P[θ|r] is Gaussian, with mean Σ_i r_i θ_i / Σ_i r_i and variance 1 / Σ_i (r_i / σ_i²) (Snippe, 1996; Zemel et al., 1998).
If b_i > 0, then, for the extracted distribution to have two modes in the appropriate positions, log[P[θ_1|r] / P[θ_2|r]] must be small. However, the variance of this quantity is Σ_i ⟨r_i⟩ (log[f_i(θ_1) / f_i(θ_2)])², which is much greater than 0 unless the tuning curves are so flat as to convey only little information about the stimuli. Intuitively, the noise in the rates causes Σ_i r_i log f_i(θ) to be greater around one of the two values, and exponentiating to form P[θ|r] selects out this one value. Thus the standard method can only extract one of the two motion components from the population responses to transparent motion.

The distributional population coding method (Figure 2B) extends the standard encoding model to allow r to depend on a general P[θ]:

⟨r_i⟩ = ∫ P[θ] f_i(θ) dθ    (1)

Bayesian decoding takes the observed activities r and produces probability distributions over probability distributions over θ, P[P(θ)|r]. For simplicity, we decode using an approximate form of maximum likelihood in distributions over θ, finding the P̂(θ) that maximizes L[P(θ)|r] ≈ Σ_i r_i log[f_i(θ) ∗ P(θ)] - α g[P(θ)], where the smoothness term g[·] acts as a regularizer.

The distributional encoding operation in Equation 1 is quite straightforward, by design, since this represents an assumption about what neural processing prior to (in this case) MT performs. However, the distributional decoding operation that we have used (Zemel et al., 1998) involves complicated and non-neural operations. The idea is to understand what information in principle may be conveyed by a population code under this interpretation, and then to judge actual neural operations in the light of this theoretical optimum. DPC is a statistical cousin of so-called line-element models, which attempt to account for subjects' performance in cases like transparency using the output of some fixed number of direction-selective mechanisms (Williams et al., 1991).
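The encoding model and the failure of standard Bayesian decoding can both be seen in a small simulation. The sketch below uses fixed toy tuning-curve parameters (baseline 2, amplitude 40, width 30°, 200 cells) rather than the randomly sampled physiological ranges used in the paper, and decodes the noiseless mean response; it shows the bimodal-to-unimodal transition of the population profile, and that the single-direction Poisson decoder collapses the 30°-separation stimulus onto the average direction:

```python
import numpy as np

# Toy MT-like tuning curve (fixed parameters are illustrative assumptions).
def tuning(theta, pref, b=2.0, a=40.0, sigma=30.0):
    d = (theta - pref + 180.0) % 360.0 - 180.0      # wrapped angular difference
    return b + a * np.exp(-d**2 / (2.0 * sigma**2))

prefs = np.linspace(-180.0, 180.0, 200, endpoint=False)   # preferred directions

def mean_response(theta1, theta2):
    """Expected response to two transparent motions: average of the components."""
    return 0.5 * (tuning(prefs, theta1) + tuning(prefs, theta2))

wide = mean_response(-60.0, 60.0)     # delta-theta = 120 deg: bimodal profile
narrow = mean_response(-15.0, 15.0)   # delta-theta = 30 deg: unimodal profile

# Standard Bayesian decoding of a single direction theta under Poisson noise:
# log P[theta|r] = sum_i [ r_i log f_i(theta) - f_i(theta) ] + const.
grid = np.linspace(-180.0, 180.0, 720, endpoint=False)
Fm = tuning(grid[:, None], prefs[None, :])          # f_i(theta) on the grid
log_post = (narrow * np.log(Fm) - Fm).sum(axis=1)
theta_hat = grid[np.argmax(log_post)]               # lands near the average, 0 deg
```

The decoded `theta_hat` sits at the single average direction, not at either of the two true motions, which is exactly the collapse described in the text.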
3 DECODING MULTIPLE MOTIONS

We have applied our model to simulated MT response patterns r generated via the DPC encoding model (Equation 1). For multiple motion stimuli, with P(θ) = [δ(θ - θ_1) + δ(θ - θ_2)] / 2, this encoding model produces the observed neurophysiological response: each unit's expected activity is the average of its responses to the component motions. For bimodal response patterns, DPC matches the generating distribution (Figure 3). For unimodal response patterns, such as those generated by double motion stimuli with Δθ = 30°, DPC also consistently recovers the generating distribution. The bimodality of the reconstructed distribution begins to break down around Δθ = 10°, which is also the point at which subjects are unable to distinguish two motions from a single broader band of motion directions (Mather & Moulden, 1983). It has been reported (Treue, personal communication) that for angles Δθ < 10°, subjects can tell that all points are not moving in parallel, but are uncertain whether

[Figure 3 plots: population responses (spikes/s vs. preferred direction) and reconstructed distributions (P(θ) vs. direction) for Δθ = 120° and Δθ = 10°]

Figure 3: (A) On a single simulated trial, the population response forms a bimodal activity profile when Δθ = 120°. (B) The reconstructed (darker) distribution closely matches the true input distribution for this trial.
(C) As Δθ → 10°, the population response is no longer bimodal, instead has a noisy unimodal profile, and (D) the reconstructed distribution no longer has two clear modes.

they are moving in two discrete directions or within a directional band. Our model qualitatively captures this uncertainty, reconstructing a broad distribution with two small peaks for directional differences between 7° and 10°.

DPC also matches psychophysical performance on metameric stimuli. Rauber and Treue (1997) asked human subjects to report the directions in moving dot patterns consisting of 2, 3 or 5 directions of motion. The motion directions were -40° and +40°; -50°, 0° and +50°; and -50°, -30°, 0°, +30°, and +50°, respectively, but the proportions of dots moving in each direction were adjusted so that the population responses produced by an encoding model similar to Equation 1 would all be the same. Subjects reported the same two motion directions, at -40° and +40°, for all three types of stimuli. DPC, like any reasonably deterministic decoding model, takes these (essentially identical) patterns of activity and, metamerically, reports the same answer for each case. Unlike most models, its answer, that there are two motions at roughly ±40°, matches human responses. The fact of metamerization is not due to any kind of prior in the model as to the number of directions to be recovered. However, that the actual report in each case includes just two motions (when clearly three or five motions would be equally consistent with the input) is a consequence of the smoothness prior. We can go further with DPC and predict how changing the proportion of dots moving in the central of three directions would lead to different percepts, from a single motion to two as this proportion decreases.

We can further evaluate the performance of DPC by comparing the quality of its
Figure 4: The average relative error E in direction judgments (Equation 2) for the DPC model (top curve) and for a model with the correct prior for this particular input set.

reconstruction to that obtained by fitting the correct model of the input distribution, a mixture of delta functions. We simulated MT responses to motion stimuli composed of two evenly-weighted directions, with 100 examples for each value of Δθ in a range from 5° to 60°. We fit a mixture of two delta functions to each population response, and measured the average relative error in direction judgments based on this fitted distribution versus the two true directions, θ_1 and θ_2, on that example t (Equation 2). We then applied the DPC model to the same population codes. To measure the average error, we first fit the general distribution P̂(θ) produced by DPC with a pair of equal-weighted Gaussians, and determined θ̂_1 and θ̂_2 from the appropriate mean and variance. As can be seen in Figure 4, the DPC model, which only has a general smoothness prior over the form of the input distribution, preserves the information in the observed rates nearly as well as the model with the correct prior.

4 CONCLUSIONS

Transparent motion provides an ideal test of distributional population coding, since the encoding model is determined by neural activity and the decoding model by the behavioral data. Two existing kernel density estimate models, involving additive (Anderson, 1995) and multiplicative (standard Bayesian decoding) combination, perform poorly in this paradigm. DPC, a model in which neuronal responses and the animal's judgments are treated as being sensitive to the entire distribution of an encoded value, has been shown to be consistent with both single-cell responses and behavioral decisions, even matching subjects' threshold behavior.
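The moment-based readout used in the comparison above, fitting a reconstructed distribution with a pair of equal-weighted components and reading θ̂_1 and θ̂_2 off its mean and variance, can be sketched as follows; the grid and the example distribution are assumptions for illustration. An equal mixture of two delta functions at m - s and m + s has mean m and standard deviation s, so the two directions fall out of the first two moments:

```python
import numpy as np

def two_direction_estimate(theta, p):
    """Reduce a distribution over direction to two equal-weighted estimates
    m - s and m + s, where m and s are the distribution's mean and std."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                             # normalize over the grid
    m = np.sum(theta * p)
    s = np.sqrt(np.sum((theta - m) ** 2 * p))
    return m - s, m + s

theta = np.linspace(-180.0, 180.0, 721)
# Hypothetical reconstructed distribution: two narrow bumps at -40 and +40 deg.
p = np.exp(-(theta + 40.0) ** 2 / 18.0) + np.exp(-(theta - 40.0) ** 2 / 18.0)
t1, t2 = two_direction_estimate(theta, p)
```

For the two-bump example the estimates come out close to -40° and +40°, slightly spread by the width of the bumps.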
We are currently applying this same model to several other motion experiments, including one in which subjects had to determine whether a motion stimulus consisted of a number of discrete directions or a uniform distribution (Williams et al., 1991). We are investigating whether our model can explain the nonmonotonic relationship between the number of directions and the judgments. We have also applied DPC to a notorious puzzle for population coding: that single MT cells are just as accurate as the whole monkey, so one cell's output could directly support inference of the same quality as the monkey's. Our approach provides an alternative explanation for part of this apparent inefficiency to that of the noisy pooling model of Shadlen et al. (1996). Finally, experiments showing the effect of target uncertainty on population responses (Basso & Wurtz, 1998; Bastian et al., 1998) are also handled naturally by the DPC approach.

The current model is intended to describe the information available at one stage in the processing stream. It does not address the precise mechanism of motion encoding, i.e., how responses in MT arise. We also have not considered the neural decoding and decision mechanisms. These could likely involve a layer of units that reaches decisions through a pattern of feedforward and lateral connections, as in the model proposed by Grunewald (1996) for the detection of transparent motion. One critical issue that remains is normalization. It is not clear how to distinguish ambiguity about a single value for the encoded variable from the existence of multiple values of that variable (as in transparency for motion). Various factors are likely to be important, including the degree of separation of the modes and also prior expectations about the possibility of equivalents of transparency.
Acknowledgements: This work was funded by ONR Young Investigator Award N00014-98-1-0509 to RZ, NIMH grant 1R29MH5541-01, and grants from the Surdna Foundation and the Gatsby Charitable Foundation to PD. We thank Stefan Treue for providing us with the data plot and for informative discussions of his experiments; Alexandre Pouget and Charlie Anderson for useful discussions of distributed coding and the standard model; and Zoubin Ghahramani and Geoff Hinton for helpful conversations about reconstruction in the log probability domain.

References

[1] Anderson, C. H. (1995). Unifying perspectives on neuronal codes and processing. In XIX International workshop on condensed matter theories. Caracas, Venezuela.
[2] Basso, M. A. & Wurtz, R. H. (1998). Modulation of neuronal activity in superior colliculus by changes in target probability. Journal of Neuroscience, 18(18), 7519-7534.
[3] Bastian, A., Riehle, A., Erlhagen, W., & Schöner, G. (1998). Prior information preshapes the population representation of movement direction in motor cortex. Neuroreport, 9(2), 315-319.
[4] Britten, K. H., Shadlen, M. N., Newsome, W. T., & Movshon, J. A. (1992). The analysis of visual motion: A comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12(12), 4745-4765.
[5] Grunewald, A. (1996). A model of transparent motion and non-transparent motion aftereffects. In D. S. Touretzky, M. C. Mozer, & M. E. Hasselmo (Eds.), Advances in Neural Information Processing Systems 8 (pp. 837-843). Cambridge, MA: MIT Press.
[6] Hol, K. & Treue, S. (1997). Direction-selective responses in the superior temporal sulcus to transparent patterns moving at acute angles. Society for Neuroscience Abstracts 23 (p. 179:11).
[7] Mather, G. & Moulden, B. (1983). Thresholds for movement direction: two directions are less detectable than one. Quarterly Journal of Experimental Psychology, 35, 513-518.
[8] Rauber, H. J. & Treue, S. (1997).
Recovering the directions of visual motion in transparent patterns. Society for Neuroscience Abstracts 23 (p. 179:10).
[9] Recanzone, G. H., Wurtz, R. H., & Schwarz, U. (1997). Responses of MT and MST neurons to one and two moving objects in the receptive field. Journal of Neurophysiology, 78(6), 2904-2915.
[10] Shadlen, M. N., Britten, K. H., Newsome, W. T., & Movshon, J. A. (1996). A computational analysis of the relationship between neuronal and behavioral responses to visual motion. Journal of Neuroscience, 16(4), 1486-1510.
[11] Snippe, H. P. (1996). Theoretical considerations for the analysis of population coding in motor cortex. Neural Computation, 8(3), 29-37.
[12] van Wezel, R. J., Lankheet, M. J., Verstraten, F. A., Maree, A. F., & van de Grind, W. A. (1996). Responses of complex cells in area 17 of the cat to bi-vectorial transparent motion. Vision Research, 36(18), 2805-2813.
[13] Williams, D., Tweten, S., & Sekuler, R. (1991). Using metamers to explore motion perception. Vision Research, 31(2), 275-286.
[14] Zemel, R. S., Dayan, P., & Pouget, A. (1998). Probabilistic interpretation of population codes. Neural Computation, 10, 403-430.

PART III: THEORY
Mean field methods for classification with Gaussian processes

Manfred Opper, Neural Computing Research Group, Division of Electronic Engineering and Computer Science, Aston University, Birmingham B4 7ET, UK. opperm@aston.ac.uk

Ole Winther, Theoretical Physics II, Lund University, Sölvegatan 14 A, S-223 62 Lund, Sweden; CONNECT, The Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen Ø, Denmark. winther@thep.lu.se

Abstract

We discuss the application of TAP mean field methods known from the Statistical Mechanics of disordered systems to Bayesian classification models with Gaussian processes. In contrast to previous approaches, no knowledge about the distribution of inputs is needed. Simulation results for the Sonar data set are given.

1 Modeling with Gaussian Processes

Bayesian models which are based on Gaussian prior distributions on function spaces are promising non-parametric statistical tools. They have been recently introduced into the Neural Computation community (Neal 1996, Williams & Rasmussen 1996, Mackay 1997). To give their basic definition, we assume that the likelihood of the output or target variable τ for a given input s ∈ R^N can be written in the form p(τ|h(s)), where h : R^N → R is a priori assumed to be a Gaussian random field. If we assume fields with zero prior mean, the statistics of h is entirely defined by the second order correlations C(s, s') ≡ E[h(s)h(s')], where E denotes expectations with respect to the prior. Interesting examples are the covariance functions (1) and (2). The choice (1) can be motivated as a limit of a two-layered neural network with infinitely many hidden units with factorizable input-hidden weight priors (Williams 1997). The w_i are hyperparameters determining the relevant prior lengthscales of h(s). The simplest choice C(s, s') = Σ_i w_i s_i s'_i corresponds to a single-layer perceptron with independent Gaussian weight priors.
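The simplest covariance above can be sketched directly: build the Gram matrix C for the linear (perceptron) kernel and draw field values at the training inputs from the zero-mean Gaussian prior. The inputs, hyperparameter values and jitter level below are illustrative assumptions:

```python
import numpy as np

def linear_kernel(S, w):
    """Gram matrix for the simplest covariance C(s, s') = sum_i w_i s_i s'_i,
    i.e. a single-layer perceptron with independent Gaussian weight priors."""
    S = np.asarray(S, dtype=float)
    return (S * w) @ S.T                      # S diag(w) S^T

# Hypothetical inputs: 5 points in R^3, equal hyperparameters w_i = 1.
S = np.arange(15.0).reshape(5, 3)
C = linear_kernel(S, np.ones(3))

# One draw of the field values h(s^mu) at the training inputs from the
# zero-mean Gaussian prior with covariance C (tiny jitter for stability).
rng = np.random.default_rng(1)
h = rng.multivariate_normal(np.zeros(5), C + 1e-6 * np.eye(5))
```

The Gram matrix is symmetric and positive semi-definite by construction, which is what makes it a valid prior covariance.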
In this Bayesian framework, one can make predictions on a novel input s after having received a set D_m of m training examples (τ^μ, s^μ), μ = 1, ..., m, by using the posterior distribution of the field at the test point s, which is given by

p(h(s)|D_m) = ∫ p(h(s)|{h^ν}) p({h^ν}|D_m) Π_μ dh^μ.    (3)

p(h(s)|{h^ν}) is a conditional Gaussian distribution and

p({h^ν}|D_m) = (1/Z) p({h^ν}) Π_μ p(τ^μ|h^μ)    (4)

is the posterior distribution of the field variables at the training points. Z is a normalizing partition function and

p({h^ν}) ∝ exp(-½ Σ_{μν} h^μ (C^{-1})_{μν} h^ν)    (5)

is the prior distribution of the fields at the training points. Here, we have introduced the abbreviations h^μ = h(s^μ) and C^{μν} ≡ C(s^μ, s^ν).

The major technical problem of this approach comes from the difficulty in performing the high dimensional integrations. Non-Gaussian likelihoods can only be treated by approximations, where e.g. Monte Carlo sampling (Neal 1997), Laplace integration (Barber & Williams 1997) or bounds on the likelihood (Gibbs & Mackay 1997) have been used so far. In this paper, we introduce a further approach, which is based on a mean field method known in the Statistical Physics of disordered systems (Mezard, Parisi & Virasoro 1987). We specialize on the case of a binary classification problem, where a binary class label τ = ±1 must be predicted using a training set corrupted by i.i.d. label noise. The likelihood for this problem is taken as

p(τ|h(s)) = κ + (1 - 2κ) Θ(τ h(s)),

where κ is the probability that the true classification label is corrupted, i.e. flipped, and the step function Θ(x) is defined as Θ(x) = 1 for x > 0 and 0 otherwise. For such a case, we expect that (by the non-smoothness of the model) e.g. Laplace's method and the bounds introduced in (Gibbs & Mackay 1997) are not directly applicable.

2 Exact posterior averages

In order to make a prediction on an input s, ideally the label with maximum posterior probability should be chosen, i.e.
τ_Bayes = argmax_τ p(τ|D_m), where the predictive probability is given by p(τ|D_m) = ∫ dh p(τ|h) p(h|D_m). For the binary case the Bayes classifier becomes τ_Bayes = sign(⟨sign h(s)⟩), where we throughout the paper let brackets ⟨...⟩ denote posterior averages. Here, we use a somewhat simpler approach by using the prediction τ = sign(⟨h(s)⟩). This would reduce to the ideal prediction when the posterior distribution of h(s) is symmetric around its mean ⟨h(s)⟩. The goal of our mean field approach will be to provide a set of equations for approximately determining ⟨h(s)⟩.

The starting point of our analysis is the partition function

Z = ∫ Π_μ (dx^μ dh^μ / 2π) Π_μ p(τ^μ|h^μ) exp(½ Σ_{μν} C^{μν} x^μ x^ν - Σ_μ h^μ x^μ),    (6)

where the new auxiliary variables x^μ (integrated along the imaginary axis) have been introduced in order to get rid of C^{-1} in (5). It is not hard to show from (6) that the posterior averages of the fields at the m training inputs and at a new test point s are given by

⟨h^μ⟩ = Σ_ν C^{μν} ⟨x^ν⟩,    ⟨h(s)⟩ = Σ_ν C(s, s^ν) ⟨x^ν⟩.    (7)

We have thus reduced our problem to the calculation of the "microscopic order parameters" ⟨x^μ⟩.¹ Averages in Statistical Physics can be calculated from derivatives of -ln Z with respect to small external fields, which are then set to zero. An equivalent formulation uses the Legendre transform of -ln Z as a function of the expectations, which in our case is given by

G({⟨x^μ⟩, ⟨(x^μ)²⟩}) = -ln Z(γ, Λ) + Σ_μ ⟨x^μ⟩ γ^μ + ½ Σ_μ Λ^μ ⟨(x^μ)²⟩,    (8)

with

Z({γ^μ, Λ^μ}) = ∫ Π_μ (dx^μ dh^μ / 2π) Π_μ p(τ^μ|h^μ) exp(½ Σ_{μν} (Λ^μ δ_{μν} + C^{μν}) x^μ x^ν + Σ_μ x^μ (γ^μ - h^μ)).    (9)

The additional averages ⟨(x^μ)²⟩ have been introduced because the dynamical variables x^μ (unlike Ising spins) do not have fixed length. The external fields γ^μ, Λ^μ must be eliminated from ∂G/∂γ^μ = ∂G/∂Λ^μ = 0, and the true expectation values of x^μ and (x^μ)² are those which satisfy ∂G/∂⟨(x^μ)²⟩ = ∂G/∂⟨x^μ⟩ = 0.

3 Naive mean field theory

So far, this description does not give anything new.
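Before approximating G, it may help to see how predictions come out of this machinery once the order parameters are known: the label flips with probability κ, and the mean field at a test point is a kernel expansion over the training inputs, ⟨h(s)⟩ = Σ_ν C(s, s^ν)⟨x^ν⟩. The sketch below is a toy illustration; the kernel, the training inputs and the order-parameter values ⟨x^ν⟩ are assumptions, not the output of the mean field equations:

```python
import numpy as np

def likelihood(tau, h, kappa=0.1):
    """Flip-noise likelihood p(tau|h) = kappa + (1 - 2*kappa) * Theta(tau*h):
    the label tau in {-1, +1} agrees with sign(h) with probability 1 - kappa."""
    return kappa + (1.0 - 2.0 * kappa) * float(tau * h > 0)

def predict(s, S_train, x_mean, kernel):
    """Mean field prediction <h(s)> = sum_nu C(s, s^nu) <x^nu>, with the
    predicted class label taken as tau = sign(<h(s)>)."""
    k = np.array([kernel(s, s_nu) for s_nu in S_train])
    h_mean = float(k @ x_mean)
    return np.sign(h_mean), h_mean

# Toy setup: linear kernel, two training inputs, hypothetical <x^nu> values.
kernel = lambda a, b: float(np.dot(a, b))
S_train = np.array([[1.0, 0.0], [0.0, 1.0]])
x_mean = np.array([0.5, -0.25])
tau_hat, h_mean = predict(np.array([2.0, 2.0]), S_train, x_mean, kernel)
p_plus = likelihood(+1, h_mean)       # probability of observing tau = +1
```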
Usually G cannot be calculated exactly for the non-Gaussian likelihood models of interest. Nevertheless, based on mean field theory (MFT) it is possible to guess an approximate form for G.

¹ Although the integrations are over the imaginary axis, these expectations come out positive. This is due to the fact that the integration "measure" is complex as well.

Mean field methods have found interesting applications in Neural Computing within the framework of ensemble learning, where the exact posterior distribution is approximated by a simpler one using product distributions in a variational treatment. Such a "standard" mean field method for the posterior of the h^μ (for the case of Gaussian process classification) is in preparation and will be discussed elsewhere. In this paper, we suggest a different route, which introduces nontrivial corrections to a simple or "naive" MFT for the variables x^μ. Besides the variational method (which would be purely formal, because the distribution of the x^μ is complex and does not define a probability), there are other ways to define the simple MFT, e.g. by truncating a perturbation expansion with respect to the "interactions" C^{μν} in G after the first order (Plefka 1982). These approaches yield the result

G ≈ G_naive = G_0 - ½ Σ_μ C^{μμ} ⟨(x^μ)²⟩ - ½ Σ_{μ≠ν} C^{μν} ⟨x^μ⟩ ⟨x^ν⟩.    (10)

G_0 is the contribution to G for a model without any interactions, i.e. when C^{μν} = 0 in (9); it is the Legendre transform of

-ln Z_0 = Σ_μ ln [κ + (1 - 2κ) Φ(τ^μ γ^μ / √Λ^μ)],

where Φ(z) = ∫_{-∞}^z (dt/√(2π)) e^{-t²/2} is an error function. For simple models in Statistical Physics, where all interactions C^{μν} are positive and equal, it is easy to show that G_naive will become exact in the limit of an infinite number of variables x^μ. Hence, for systems with a large number of nonzero interactions having the same orders of magnitude, one may expect that the approximation is not too bad.
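The two Gaussian special functions appearing above and in the mean field equations, the measure D(z) and its cumulative Φ(z), can be implemented directly via the standard error function:

```python
import math

def D(z):
    """Gaussian measure D(z) = exp(-z^2 / 2) / sqrt(2*pi)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Phi(z) = integral of D(t) from -infinity to z, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

These are the building blocks any numerical solver for the fixed-point equations would call repeatedly.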
4 The TAP approach

Nevertheless, when the interactions C^{μν} can be both positive and negative (as one would expect e.g. when inputs have zero mean), even in the thermodynamic limit and for nice distributions of inputs, an additional contribution ΔG must be added to the "naive" mean field theory (10). Such a correction (often called an Onsager reaction term) has been introduced for a spin glass model by Thouless, Anderson & Palmer (1977) (TAP). It was later applied to the statistical mechanics of single-layer perceptrons by Mezard (1989) and then generalized to the Bayesian framework by Opper & Winther (1996, 1997). For an application to multilayer networks, see (Wong 1995). In the thermodynamic limit of infinitely large dimension of the input space, and for nice input distributions, the results can be shown to coincide with the results of the replica framework. The drawback of the previous derivations of the TAP MFT for neural networks was the fact that special assumptions on the input distribution had been made and certain fluctuating terms had been replaced by their averages over the distribution of random data, which in practice would not be available. In this paper, we will use the approach of Parisi & Potters (1995), which allows us to circumvent this problem. They concluded (applied to the case of a spin model with random interactions of a specific type) that the functional form of ΔG should not depend on the type of the "single particle" contribution G_0. Hence, one may use any model in G_0 for which G can be calculated exactly (e.g. the Gaussian regression model) and subtract the naive mean field contribution to obtain the desired ΔG. For the sake of simplicity, we have chosen the even simpler model p(τ^μ|h^μ) ∼ δ(h^μ) without changing the final result. A lengthy but straightforward calculation for this problem leads to the result (11), with R^μ ≡ ⟨(x^μ)²⟩ - ⟨x^μ⟩².
The λ_μ must be eliminated using the stationarity condition, which leads to the equation (12). Note that with this choice, the TAP mean field theory becomes exact for Gaussian likelihoods, i.e. for standard regression problems. Finally, setting the derivatives of G_TAP = G_naive + ΔG with respect to the 4 variables ⟨x^μ⟩, ⟨(x^μ)²⟩, R_μ, λ_μ equal to zero, we obtain the equations (13) where D(z) = e^{−z²/2}/√(2π) is the Gaussian measure. These eqs. have to be solved numerically together with (12). In contrast, for the naive MFT, the simpler result λ_μ = C_μμ is found. 5 Simulations Solving the nonlinear system of equations (12,13) by iteration turns out to be quite straightforward. For some data sets, to get convergence, one has to add a diagonal term v to the covariance matrix C: C_ij → C_ij + δ_ij v. It may be shown that this term corresponds to learning with Gaussian noise (with variance v) added to the Gaussian random field. Here, we present simulation results for a single data set, the Sonar (Mines versus Rocks) data, using the same training/test set split as in the original study by (Gorman & Sejnowski 1988). The input data were pre-processed by linear rescaling such that over the training set each input variable has zero mean and unit variance. In some cases the mean field equations failed to converge using the raw data. A further important feature of TAP MFT is the fact that the method also gives an approximate leave-one-out estimator for the generalization error, ε_loo, expressed in terms of the solution to the mean field equations (see (Opper & Winther 1996, 1997) for more details). It is also possible to derive a leave-one-out estimator for the naive MFT (Opper & Winther, to be published). Since we so far haven't dealt with the problem of automatically estimating the hyperparameters, their number was drastically reduced by setting w_i = σ/N in the covariances (1) and (2). The remaining hyperparameters, σ², κ, and v were chosen
Table 1: The result for the Sonar data.

Algorithm          Covariance Function       ε_test          ε_loo^exact   ε_loo
TAP Mean Field     (1)                       0.183           0.260         0.260
                   (2)                       0.077           0.212         0.212
Naive Mean Field   (1)                       0.154           0.269         0.269
                   (2)                       0.077           0.221         0.221
Back-Prop          Simple Perceptron         0.269(±0.048)
                   Best 2-layer, 12 Hidden   0.096(±0.018)

so as to minimize ε_loo. It turned out that the lowest ε_loo was found from modeling without noise: κ = v = 0. The simulation results are shown in table 1. The comparisons for back-propagation are taken from (Gorman & Sejnowski 1988). The solution found by the algorithm turned out to be unique, i.e. different orders of presentation of the examples and different initial values for the ⟨x^μ⟩ converged to the same solution. In table 1, we have also compared the estimate given by the algorithm with the exact leave-one-out estimate ε_loo^exact obtained by going through the training set, keeping an example out for testing, and running the mean field algorithm on the rest. The estimate and exact value are in complete agreement. Comparing with the test error we see that the training set is 'hard' and the test set is 'easy'. The small difference in test error between the naive and full mean field algorithms also indicates that the mean field scheme is quite robust with respect to the choice of λ_μ. 6 Discussion More work has to be done to make the TAP approach a practical tool for Bayesian modeling. One has to find better methods for solving the equations. A conversion into a direct minimization problem for a free energy may be helpful. To achieve this, one may probably work with the real field variables h_μ instead of the imaginary x^μ. A further problem is the determination of the hyperparameters of the covariance functions. Two ways seem to be interesting here. One may use the approximate free energy G, which is essentially the negative logarithm of the Bayesian evidence, to estimate the most probable values of the hyperparameters.
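The exact leave-one-out procedure used for ε_loo^exact above (hold each training example out in turn, rerun the algorithm on the rest, and test on the held-out point) can be sketched generically as follows. The 1-nearest-neighbour `fit_predict` is a toy stand-in, not the mean field algorithm.

```python
import numpy as np

def exact_leave_one_out(X, y, fit_predict):
    """Exact leave-one-out error: hold out each example in turn,
    fit on the remaining n-1 examples, and test on the held-out point."""
    n = len(y)
    errors = 0
    for i in range(n):
        keep = np.arange(n) != i
        pred = fit_predict(X[keep], y[keep], X[i:i + 1])
        errors += int(pred[0] != y[i])
    return errors / n

# Toy check with a 1-nearest-neighbour classifier as the inner algorithm:
def nn_fit_predict(Xtr, ytr, Xte):
    d = ((Xtr[None, :, :] - Xte[:, None, :]) ** 2).sum(-1)
    return ytr[d.argmin(axis=1)]

X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(exact_leave_one_out(X, y, nn_fit_predict))  # 0.0 on this separable toy set
```

The point of the paper's built-in estimator is precisely to avoid this n-fold rerun, which is why the agreement between the two columns in Table 1 matters.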
However, an estimate on the errors made in the TAP approach would be necessary. Second, one may use the built-in leave-one-out estimate to estimate the generalization error. Again an estimate on the validity of the approximation is necessary. It will further be interesting to apply our way of deriving the TAP equations to other models (Boltzmann machines, belief nets, combinatorial optimization problems), for which standard mean field theories have been applied successfully. Acknowledgments This research is supported by the Swedish Foundation for Strategic Research and by the Danish Research Councils for the Natural and Technical Sciences through the Danish Computational Neural Network Center (CONNECT). Mean Field Methods for Classification with Gaussian Processes 315 References D. Barber and C. K. I. Williams, Gaussian Processes for Bayesian Classification via Hybrid Monte Carlo, in Neural Information Processing Systems 9, M . C. Mozer, M. I. Jordan and T. Petsche, eds., 340-346. MIT Press (1997). M. N. Gibbs and D. J. C. Mackay, Variational Gaussian Process Classifiers, Preprint Cambridge University (1997). R. P. Gorman and T. J. Sejnowski, Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets, Neural Networks 1, 75 (1988) . D. J. C. Mackay, Gaussian Processes, A Replacement for Neural Networks, NIPS tutorial 1997, May be obtained from http://wol.ra.phy.cam.ac . uk/pub/mackay/. M. Mezard, The Space of interactions in Neural Networks: Gardner's Computation with the Cavity Method, J. Phys. A 22, 2181 (1989). M. Mezard and G. Parisi and M. A. Virasoro, Spin Glass Theory and Beyond, Lecture Notes in Physics, 9, World Scientific (1987). R. Neal, Bayesian Learning for Neural Networks, Lecture Notes in Statistics, Springer (1996). R. M. Neal, Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification, Technical Report CRG-TR-97-2, Dept. of Computer Science, University of Toronto (1997). M. Opper and O. 
Winther, A Mean Field Approach to Bayes Learning in FeedForward Neural Networks, Phys. Rev. Lett. 76, 1964 (1996). M. Opper and O. Winther, A Mean Field Algorithm for Bayes Learning in Large Feed-Forward Neural Networks, in Neural Information Processing Systems 9, M. C. Mozer, M. I. Jordan and T. Petsche, eds., 225-231. MIT Press (1997). G. Parisi and M. Potters, Mean-Field Equations for Spin Models with Orthogonal Interaction Matrices, J . Phys. A: Math. Gen. 28 5267 (1995). T. Plefka, Convergence Condition of the TAP Equation for the Infinite-Range Ising Spin Glass, J. Phys. A 15, 1971 (1982). D. J. Thouless, P. W. Anderson and R. G. Palmer, Solution of a 'Solvable Model of a Spin Glass', Phil. Mag. 35, 593 (1977). C. K. I. Williams, Computing with Infinite Networks, in Neural Information Processing Systems 9, M. C. Mozer, M. I. Jordan and T. Petsche, eds., 295-301. MIT Press (1997). C. K. I. Williams and C. E. Rasmussen, Gaussian Processes for Regression, in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer and M. E. Hasselmo eds., 514-520, MIT Press (1996). K. Y. M. Wong, Microscopic Equations and Stability Conditions in Optimal Neural Networks, Europhys. Lett. 30, 245 (1995).
|
1998
|
5
|
1,548
|
USING COLLECTIVE INTELLIGENCE TO ROUTE INTERNET TRAFFIC David H. Wolpert NASA Ames Research Center Moffett Field, CA 94035 dhw@ptolemy.arc.nasa.gov Kagan Tumer NASA Ames Research Center Moffett Field, CA 94035 kagan@ptolemy.arc.nasa.gov Jeremy Frank NASA Ames Research Center Moffett Field, CA 94035 frank@ptolemy.arc.nasa.gov Abstract A COllective INtelligence (COIN) is a set of interacting reinforcement learning (RL) algorithms designed in an automated fashion so that their collective behavior optimizes a global utility function. We summarize the theory of COINs, then present experiments using that theory to design COINs to control internet traffic routing. These experiments indicate that COINs outperform all previously investigated RL-based, shortest path routing algorithms. 1 INTRODUCTION COllective INtelligences (COINs) are large, sparsely connected recurrent neural networks, whose "neurons" are reinforcement learning (RL) algorithms. The distinguishing feature of COINs is that their dynamics involves no centralized control, but only the collective effects of the individual neurons each modifying their behavior via their individual RL algorithms. This restriction holds even though the goal of the COIN concerns the system's global behavior. One naturally-occurring COIN is a human economy, where the "neurons" consist of individual humans trying to maximize their reward, and the "goal", for example, can be viewed as having the overall system achieve high gross domestic product. This paper presents a preliminary investigation of designing and using artificial COINs as controllers of distributed systems. The domain we consider is routing of internet traffic. The design of a COIN starts with a global utility function specifying the desired global behavior.
Using Collective Intelligence to Route Internet Traffic 953 Our task is to initialize and then update the neurons' "local" utility functions, without centralized control, so that as the neurons improve their utilities, global utility also improves. (We may also wish to update the local topology of the COIN.) In particular, we need to ensure that the neurons do not "frustrate" each other as they attempt to increase their utilities. The RL algorithms at each neuron that aim to optimize that neuron's local utility are microlearners. The learning algorithms that update the neurons' utility functions are macrolearners. For robustness and breadth of applicability, we assume essentially no knowledge concerning the dynamics of the full system, i.e., the macrolearning and/or microlearning must "learn" that dynamics, implicitly or otherwise. This rules out any approach that models the full system. It also means that rather than use domain knowledge to hand-craft the local utilities as is done in multi-agent systems, in COINs the local utility functions must be automatically initialized and updated using only the provided global utility and (locally) observed dynamics. The problem of designing a COIN has never previously been addressed in full, hence the need for the new formalism described below. Nonetheless, this problem is related to previous work in many fields: distributed artificial intelligence, multi-agent systems, computational ecologies, adaptive control, game theory [6], computational markets [2], Markov decision theory, and ant-based optimization. For the particular problem of routing, examples of relevant work include [4, 5, 8, 9, 10]. Most of that previous work uses microlearning to set the internal parameters of routers running conventional shortest path algorithms (SPAs). However the microlearning occurs, they do not address the problem of ensuring that the associated local utilities do not cause the microlearners to work at cross purposes.
This paper concentrates on COIN-based setting of local utilities rather than macrolearning. We used simulations to compare three algorithms. The first two are an SPA and a COIN. Both had "full knowledge" (FK) of the true reward-maximizing path, with reward being the routing time of the associated router's packets for the SPAs, but set by COIN theory for the COINs. The third algorithm was a COIN using a memory-based (MB) microlearner [1] whose knowledge was limited to local observations. The performance of the FK COIN was the theoretical optimum. The performance of the FK SPA was 12.5 ± 3% worse than optimum. Despite limited knowledge, the MB COIN outperformed the FK SPA, achieving performance 36 ± 8% closer to optimum. Note that the performance of the FK SPA is an upper bound on the performance of any RL-based SPA. Accordingly, the performance of the MB COIN is at least 36% superior to that of any RL-based SPA. Section 2 below presents a cursory overview of the mathematics behind COINs. Section 3 discusses how the network routing problem is mapped into the COIN formalism, and introduces our experiments. Section 4 presents results of those experiments, which establish the power of COINs in the context of routing problems. Finally, Section 5 presents conclusions and summarizes future research directions. 2 MATHEMATICS OF COINS The mathematical framework for COINs is quite extensive [11, 12]. This paper concentrates on four of the concepts from that framework: subworlds, factored systems, constraint-alignment, and the wonderful-life utility function. We consider the state of the system across a set of discrete time steps, t ∈ {0, 1, ...}. 954 D. H. Wolpert, K. Tumer and J. Frank All characteristics of a neuron η at time t, including its internal parameters at that time as well as its externally visible actions, are encapsulated in a real-valued vector ζ_η,t. We call this the "state" of neuron η at time t, and let ζ be the state of all neurons across all time.
World utility, G(ζ), is a function of the state of all neurons across all time, potentially not expressible as a discounted sum. A subworld is a set of neurons. All neurons in the same subworld w share the same subworld utility function g_w(ζ). So when each subworld is a set of neurons that have the most effect on each other, neurons are unlikely to work at cross-purposes: all neurons that affect each other substantially share the same local utility. Associated with subworlds is the concept of a (perfectly) constraint-aligned system. In such systems any change to the neurons in subworld w at time 0 will have no effects on the neurons outside of w at times later than 0. Intuitively, a system is constraint-aligned if the neurons in separate subworlds do not affect each other directly, so that the rationale behind the use of subworlds holds. A subworld-factored system is one where for each subworld w considered by itself, a change at time 0 to the states of the neurons in that subworld results in an increased value for g_w(ζ) if and only if it results in an increased value for G(ζ). For a subworld-factored system, the side effects on the rest of the system of w's increasing its own utility (which perhaps decrease other subworlds' utilities) do not end up decreasing world utility. For these systems, the separate subworlds successfully pursuing their separate goals do not frustrate each other as far as world utility is concerned. The desideratum of subworld-factoredness is carefully crafted. In particular, it does not concern changes in the value of the utility of subworlds other than the one changing its actions. Nor does it concern changes to the states of neurons in more than one subworld at once. Indeed, consider the following alternative desideratum: any change to the t = 0 state of the entire system that improves all subworld utilities simultaneously also improves world utility.
Reasonable as it may appear, one can construct examples of systems that obey this desideratum and yet quickly evolve to a minimum of world utility [12]. It can be proven that for a subworld-factored system, when each of the neurons' reinforcement learning algorithms is performing as well as it can, given the others' behavior, world utility is at a critical point. Correct global behavior corresponds to learners reaching a (Nash) equilibrium [8, 13]. There can be no tragedy of the commons for a subworld-factored system [7, 11, 12]. Let CL_w(ζ) be defined as the vector ζ modified by clamping the states of all neurons in subworld w, across all time, to an arbitrary fixed value, here taken to be 0. The wonderful life subworld utility (WLU) is: g_w(ζ) ≡ G(ζ) − G(CL_w(ζ)). (1) When the system is constraint-aligned, so that, loosely speaking, subworld w's "absence" would not affect the rest of the system, we can view the WLU as analogous to the change in world utility that would have arisen if subworld w "had never existed". (Hence the name of this utility - cf. the Frank Capra movie.) Note however, that CL is a purely mathematical operation. Indeed, no assumption is even being made that CL_w(ζ) is consistent with the dynamics of the system. The sequence of states the neurons in w are clamped to in the definition of the WLU need not be consistent with the dynamical laws of the system. This dynamics-independence is a crucial strength of the WLU. It means that to evaluate the WLU we do not try to infer how the system would have evolved if all neurons in w were set to 0 at time 0 and the system evolved from there. So long as
In particular, the experiments in this paper revolve around the following fact: a constraint-aligned system with wonderful life subworld utilities is subworld-factored. Combining this with our previous result that subworld-factored systems are at Nash equilibrium at critical points of world utility, this result leads us to expect that a constraint-aligned system using WL utilities in the microlearning will approach near-optimal values of the world utility. No such assurances accrue to WL utilities if the system is not constraint-aligned however. Accordingly our experiments constitute an investigation of how well a particular system performs when WL utilities are used but little attention is paid to ensuring that the system is constraint-aligned. 3 COINS FOR NETWORK ROUTING In our experiments we concentrated on the two networks in Figure 1, both slightly larger than those in [9]. To facilitate the analysis, traffic originated only at routers indicated with white boxes and had only the routers indicated by dark boxes as ultimate destinations. Note that in both networks there is a bottleneck at router 2. -(a) Network A (b ) Network B Figure 1: Network Architectures. As is standard in much of traffic network analysis [3], at any time all traffic at a router is a real-valued number together with an ultimate destination tag. At each timestep, each router sums all traffic received from upstream routers in this timestep, to get a load. The router then decides which downstream router to send its load to, and the cycle repeats. A running average is kept of the total value of each router's load over a window of the previous L timesteps. This average is run through a load-to-delay function, W(x), to get the summed delay accrued at this timestep by all those packets traversing this router at this timestep. 
Different routers had different W(x), to reflect the fact that real networks have differences in router software and hardware (response time, queue length, processing speed, etc.). In our experiments W(x) = x³ for routers 1 and 3, and W(x) = log(x + 1) for router 2, for both networks. The global goal is to minimize total delay encountered by all traffic. In terms of the COIN formalism, we identified the neurons η as individual pairs of routers and ultimate destinations. So ζ_η,t was the vector of traffic sent along all links exiting η's router, tagged for η's ultimate destination, at time t. Each subworld consisted of the set of all neurons that shared a particular ultimate destination. In the SPA each node η tries to set ζ_η,t to minimize the sum of the delays to be accrued by that traffic on the way to its ultimate destination. In contrast, in a COIN η tries to set ζ_η,t to optimize g_w for the subworld w containing η. For both algorithms, "full knowledge" means that at time t all of the routers know the window-averaged loads for all routers for time t − 1, and assume that those values will be the same at t. For large enough L, this assumption will be arbitrarily good, and therefore will allow the routers to make arbitrarily accurate estimates of how best to route their traffic, according to their respective routing criteria. In contrast, having limited knowledge, the MB COIN could only predict the WLU value resulting from each routing decision. More precisely, for each router-ultimate-destination pair, the associated microlearner estimates the map from traffic on all outgoing links (the inputs) to WLU-based reward (the outputs - see below). This was done with a single-nearest-neighbor algorithm. Next, each router could send the packets along the path that results in outbound traffic with the best (estimated) reward.
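The ingredients just described (the per-router load-to-delay functions, and a wonderful-life style reward obtained by clamping a subworld's traffic to zero) can be sketched as follows. This is a simplified stand-alone illustration, not the paper's simulator: clamping here merely subtracts the subworld's contribution from each router's load, ignoring rerouting, windowing, and time.

```python
import math

# Load-to-delay functions from the experiments:
# W(x) = x^3 for routers 1 and 3, W(x) = log(x + 1) for router 2.
W = {1: lambda x: x ** 3, 2: lambda x: math.log(x + 1.0), 3: lambda x: x ** 3}

def total_delay(loads):
    """Total delay summed over routers; loads[r] is router r's averaged load."""
    return sum(W[r](x) for r, x in loads.items())

def wlu_reward(loads, subworld_loads):
    """Wonderful-life style reward: total delay minus the total delay with
    the subworld's traffic clamped to zero (cf. eq. (1))."""
    clamped = {r: x - subworld_loads.get(r, 0.0) for r, x in loads.items()}
    return total_delay(loads) - total_delay(clamped)

loads = {1: 2.0, 2: 3.0, 3: 1.0}   # hypothetical window-averaged loads
sub = {1: 2.0}                     # this subworld's traffic sits on router 1
print(round(wlu_reward(loads, sub), 3))  # router 1 contributes 2**3 = 8.0
```

Because the clamped term subtracts off everything the subworld cannot influence, this reward isolates the subworld's own contribution to the delay, which is what lets the microlearners avoid working at cross-purposes.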
However, to be conservative, in these experiments we instead had the router randomly select between that path and the path selected by the FK SPA. The load at router r at time t is determined by ζ. Accordingly, we can encapsulate the load-to-delay functions at the nodes by writing the delay at node r at time t as W_{r,t}(ζ). In our experiments world utility was the total delay, i.e., G(ζ) = Σ_{r,t} W_{r,t}(ζ). So using the WLU, g_w(ζ) = Σ_{r,t} Δ_{w,r,t}(ζ), where Δ_{w,r,t}(ζ) = W_{r,t}(ζ) − W_{r,t}(CL_w(ζ)). At each time t, the MB COIN used Σ_r Δ_{w,r,t}(ζ) as the "WLU-based" reward signal for trying to optimize this full WLU. In the MB COIN, evaluating this reward in a decentralized fashion was straightforward. All packets have a header containing a running sum of the Δ's encountered in all the routers they have traversed so far. Each ultimate destination sums all such headers it received and echoes that sum back to all routers that had routed to it. In this way each neuron is apprised of the WLU-based reward of its subworld. 4 EXPERIMENTAL RESULTS The networks discussed above were tested under light, medium and heavy traffic loads. Table 1 shows the associated destinations (cf. fig. 1).

Table 1: Source-Destination Pairings for the Three Traffic Loads
Network  Source  Dest. (Light)  Dest. (Medium)  Dest. (Heavy)
A        4       6              6,7             6,7
A        5       7              7               6,7
B        4       7,8            7,8,9           6,7,8,9
B        5       6,9            6,7,9           6,7,8,9

In our experiments one new packet was fed to each source router at each time step. Table 2 reports the average total delay (i.e., average per-packet time to traverse the total network) in each of the traffic regimes, for the shortest path algorithm with full knowledge, the COIN with full knowledge, and the MB COIN. Each table entry is based on 50 runs with a window size of 50, and the errors reported are errors in the mean¹. All the entries in Table 2 are statistically different at the .05 level, including FK SPA vs.
MB COIN for Network A under light traffic conditions.

Table 2: Average Total Delay
Network  Load    FK SPA        FK COIN       MB COIN
A        light   0.53 ± .007   0.45 ± .001   0.50 ± .008
A        medium  1.26 ± .010   1.10 ± .001   1.21 ± .009
A        heavy   2.17 ± .012   1.93 ± .001   2.06 ± .010
B        light   2.13 ± .012   1.92 ± .001   2.05 ± .010
B        medium  4.37 ± .014   3.96 ± .001   4.19 ± .012
B        heavy   6.94 ± .015   6.35 ± .001   6.82 ± .024

Table 2 provides two important observations: First, the WLU-based COIN outperformed the SPA when both have full knowledge, thereby demonstrating the superiority of the new routing strategy. By not having its routers greedily strive for the shortest paths for their packets, the COIN settles into a more desirable state that reduces the average total delay for all packets. Second, even when the WLU is estimated through a memory-based learner (using only information available to the local routers), the performance of the COIN still surpasses that of the FK SPA. This result not only establishes the feasibility of COIN-based routers, but also demonstrates that for this task COINs will outperform any algorithm that can only estimate the shortest path, since the performance of the FK SPA is a ceiling on the performance of any such RL-based SPA. Figure 2 shows how total delay varies with time for the medium traffic regime (each plot is based on 50 runs). The "ringing" is an artifact caused by the starting conditions and the window size (50). Note that for both networks the FK COIN not only provides the shortest delays, but also settles into that solution very rapidly. Figure 2: Total Delay. (a) Network A, (b) Network B: average per-packet delay for the FK SPA, FK COIN, and MB COIN vs. unit time steps. 5 DISCUSSION
Many distributed computational tasks are naturally addressed as recurrent neural networks of reinforcement learning algorithms (i.e., COINs). The difficulty in doing so is ensuring that, despite the absence of centralized communication and control, the reward functions of the separate neurons work in synchrony to foster good global performance, rather than cause their associated neurons to work at cross-purposes. (¹The results are qualitatively identical for window sizes 20 and 100 along with total timesteps of 100 and 500.) The mathematical framework synopsized in this paper is a theoretical solution to this difficulty. To assess its real-world applicability, we employed it to design a full-knowledge (FK) COIN as well as a memory-based (RL-based) COIN, for the task of packet routing on a network. We compared the performance of those algorithms to that of a FK shortest-path algorithm (SPA). Not only did the FK COIN beat the FK SPA, but also the memory-based COIN, despite having only limited knowledge, beat the full-knowledge SPA. This latter result is all the more remarkable in that the performance of the FK SPA is an upper bound on the performance of previously investigated RL-based routing schemes, which use the RL to try to provide accurate knowledge to an SPA. There are many directions for future work on COINs, even restricting attention to the domain of packet routing. Within that particular domain, currently we are extending our experiments to larger networks, using industrial event-driven network simulators. Concurrently, we are investigating the use of macrolearning for COIN-based packet-routing, i.e., the run-time modification of the neurons' utility functions to improve the subworld-factoredness of the COIN. References [1] C. G. Atkenson, A. W. Moore, and S. Schaal. Locally weighted learning. Artificial Intelligence Review, Submitted, 1996. [2] E. Baum. Manifesto for an evolutionary economics of intelligence. In C. M.
Bishop, editor, Neural Networks and Machine Learning. Springer-Verlag, 1998. [3] D. Bertsekas and R. Gallager. Data Networks. Prentice Hall, NJ, 1992. [4] J. Boyan and M. Littman. Packet routing in dynamically changing networks: A reinforcement learning approach. In Advances in Neural Information Processing Systems - 6, pages 671-678. Morgan Kaufmann, 1994. [5] S. P. M. Choi and D. Y. Yeung. Predictive Q-routing: A memory based reinforcement learning approach to adaptive traffic control. In Advances in Neural Information Processing Systems - 8, pages 945-951. MIT Press, 1996. [6] D. Fudenberg and J. Tirole. Game Theory. MIT Press, Cambridge, MA, 1991. [7] G. Hardin. The tragedy of the commons. Science, 162:1243-1248,1968. [8] Y. A. Korilis, A. A. Lazar, and A. Orda. Achieving network optima using Stackelberg routing strategies. IEEE Tran. on Networking, 5(1):161-173, 1997. [9] P. Marbach, O. Mihatsch, M. Schulte, and J. Tsisiklis. Reinforcement learning for call admission control and routing in integrated service networks. In Adv. in Neural Info. Proc. Systems - 10, pages 922-928. MIT Press, 1998. [10] D. Subramanian, P. Druschel, and J. Chen. Ants and reinforcement learning: A case study in routing in dynamic networks. In Proceedings of the Fifteenth International Conference on Artificial Intelligence, pages 832-838, 1997. [11] D. Wolpert and K. Tumer. Collective Intelligence. In J. M. Bradshaw, editor, Handbook of Agent technology. AAAI Press/MIT Press, 1999. to appear. [12] D. Wolpert, K. Wheeler, and K. Tumer. Automated design of multi-agent systems. In Proc. of the 3rd Int. Conf. of Autonomous Agents, 1999. to appear. [13] D. Wolpert, K. Wheeler, and K. Tumer. Collective intelligence for distributed control. 1999. (pre-print). PART IX CONTROL, NAVIGATION AND PLANNING
|
1998
|
50
|
1,549
|
Sparse Code Shrinkage: Denoising by Nonlinear Maximum Likelihood Estimation Aapo Hyvarinen, Patrik Hoyer and Erkki Oja Helsinki University of Technology Laboratory of Computer and Information Science P.O. Box 5400, FIN-02015 HUT, Finland aapo.hyvarinen@hut.fi, patrik.hoyer@hut.fi, erkki.oja@hut.fi http://www.cis.hut.fi/projects/ica/ Abstract Sparse coding is a method for finding a representation of data in which each of the components of the representation is only rarely significantly active. Such a representation is closely related to redundancy reduction and independent component analysis, and has some neurophysiological plausibility. In this paper, we show how sparse coding can be used for denoising. Using maximum likelihood estimation of nongaussian variables corrupted by gaussian noise, we show how to apply a shrinkage nonlinearity on the components of sparse coding so as to reduce noise. Furthermore, we show how to choose the optimal sparse coding basis for denoising. Our method is closely related to the method of wavelet shrinkage, but has the important benefit over wavelet methods that both the features and the shrinkage parameters are estimated directly from the data. 1 Introduction A fundamental problem in neural network research is to find a suitable representation for the data. One of the simplest methods is to use linear transformations of the observed data. Denote by x = (x₁, x₂, ..., x_n)ᵀ the observed n-dimensional random vector that is the input data (e.g., an image window), and by s = (s₁, s₂, ..., s_n)ᵀ the vector of the linearly transformed component variables. Denoting further the n × n transformation matrix by W, the linear representation is given by s = Wx. (1) 474 A. Hyvärinen, P. Hoyer and E. Oja We assume here that the number of transformed components equals the number of observed variables, but this need not be the case in general.
An important representation method is given by (linear) sparse coding [1, 10], in which the representation of the form (1) has the property that only a small number of the components Si of the representation are significantly non-zero at the same time. Equivalently, this means that a given component has a 'sparse' distribution. A random variable Si is called sparse when Si has a distribution with a peak at zero, and heavy tails, as is the case, for example, with the double exponential (or Laplace) distribution [6]; for all practical purposes, sparsity is equivalent to supergaussianity or leptokurtosis [8]. Sparse coding is an adaptive method, meaning that the matrix W is estimated for a given class of data so that the components Si are as sparse as possible; such an estimation procedure is closely related to independent component analysis [2J. Sparse coding of sensory data has been shown to have advantages from both physiological and information processing viewpoints [1]. However, thorough analyses of the utility of such a coding scheme have been few. In this paper, we introduce and analyze a statistical method based on sparse coding. Given a signal corrupted by additive gaussian noise, we attempt to reduce gaussian noise by soft thresholding ('shrinkage') of the sparse components. Intuitively, because only a few of the components are significantly active in the sparse code of a given data point, one may assume that the activities of components with small absolute values are purely noise and set them to zero, retaining just a few components with large activities. This method is closely connected to the wavelet shrinkage method [3]. In fact, sparse coding may be viewed as a principled way for determining a wavelet-like basis and the corresponding shrinkage nonlinearities, based on data alone. 
2 Maximum likelihood estimation of sparse components The starting point of a rigorous derivation of our denoising method is the fact that the distributions of the sparse components are nongaussian. Therefore, we shall begin by developing a general theory that shows how to remove gaussian noise from nongaussian variables, making minimal assumptions on the data. Denote by s the original nongaussian random variable (corresponding here to a noise-free version of one of the sparse components s_i), and by v gaussian noise of zero mean and variance σ². Assume that we only observe the random variable y: y = s + v (2) and we want to estimate the original s. Denoting by p the probability density of s, and by f = −log p its negative log-density, the maximum likelihood (ML) method gives the following estimator for s: ŝ = argmin_u [(1/(2σ²))(y − u)² + f(u)]. (3) Assuming f to be strictly convex and differentiable, this can be solved [6] to yield ŝ = g(y), where the function g can be obtained from the relation g⁻¹(u) = u + σ² f′(u). (4) This nonlinear estimator forms the basis of our method. Sparse Code Shrinkage: Denoising by Nonlinear Maximum Likelihood Estimation 475 Figure 1: Shrinkage nonlinearities and associated probability densities. Left: Plots of the different shrinkage functions. Solid line: shrinkage corresponding to Laplace density. Dashed line: typical shrinkage function obtained from (6). Dash-dotted line: typical shrinkage function obtained from (8). For comparison, the line x = y is given by the dotted line. All the densities were normalized to unit variance, and the noise variance was fixed to .3. Right: Plots of the corresponding model densities of the sparse components. Solid line: Laplace density. Dashed line: a typical moderately supergaussian density given by (5). Dash-dotted line: a typical strongly supergaussian density given by (7). For comparison, the gaussian density is given by the dotted line.
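The estimator (3) can be checked numerically by minimizing the penalized quadratic over a grid. In this sketch the negative log-density f is passed in as a function; with the Laplace case f(u) = |u| the minimizer reduces to the familiar soft-thresholding rule ŝ = sign(y) max(0, |y| − σ²), which the grid search recovers.

```python
import numpy as np

def map_estimate(y, neg_log_prior, sigma2, grid=np.linspace(-10, 10, 20001)):
    """Numerically minimize (y - u)^2 / (2 sigma2) + f(u) over a grid,
    where f = -log p is the negative log-density of the clean signal."""
    objective = (y - grid) ** 2 / (2 * sigma2) + neg_log_prior(grid)
    return grid[np.argmin(objective)]

# Laplace prior with f(u) = |u|: soft thresholding by sigma2.
s_hat = map_estimate(2.0, np.abs, sigma2=0.5)
print(round(float(s_hat), 3))  # sign(2) * max(0, 2 - 0.5) = 1.5
```

For the parameterized densities of the next section the same minimizer is available in closed form, which is what makes the method cheap in practice.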
3 Parameterizations of sparse densities

To use the estimator defined by (3) in practice, the densities of the s_i need to be modelled with a parameterization that is rich enough. We have developed two parameterizations that seem to describe very well most of the densities encountered in image denoising. Moreover, the parameters are easy to estimate, and the inversion in (4) can be performed analytically. Both models use two parameters and are thus able to model different degrees of supergaussianity, in addition to different scales, i.e. variances. The densities are here assumed to be symmetric and of zero mean. The first model is suitable for supergaussian densities that are not sparser than the Laplace distribution [6], and is given by the family of densities

    p(s) = C exp(−a s²/2 − b|s|),    (5)

where a, b > 0 are parameters to be estimated, and C is an irrelevant scaling constant. The classical Laplace density is obtained when a = 0, and gaussian densities correspond to b = 0. A simple method for estimating a and b was given in [6]. For this density, the nonlinearity g takes the form:

    g(u) = (1/(1 + σ²a)) sign(u) max(0, |u| − bσ²),    (6)

where σ² is the noise variance. The effect of the shrinkage function in (6) is to reduce the absolute value of its argument by a certain amount, which depends on the parameters, and then rescale. Small arguments are thus set to zero. Examples of the obtained shrinkage functions are given in Fig. 1. The second model describes densities that are sparser than the Laplace density:

    p(s) = (1/(2d)) (α + 2) [α(α + 1)/2]^(α/2+1) / [√(α(α + 1)/2) + |s/d|]^(α+3).    (7)

A. Hyvärinen, P. Hoyer and E. Oja

When α → ∞, the Laplace density is obtained as the limit. A simple consistent method for estimating the parameters d, α > 0 in (7) can be obtained from the relations d = √(E{s²}) and α = (2 − k + √(k(k + 4)))/(2k − 1) with k = d² p_s(0)², see [6].
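As a concrete illustration, the shrinkage rule (6) is just soft thresholding followed by a rescaling. The numpy sketch below makes this explicit; the function name and test values are ours, not from the paper:

```python
import numpy as np

def shrink_moderate(u, a, b, sigma2):
    # Shrinkage nonlinearity (6) for p(s) = C * exp(-a*s^2/2 - b*|s|):
    # soft-threshold at b*sigma2, then rescale by 1/(1 + sigma2*a).
    u = np.asarray(u, dtype=float)
    return np.sign(u) * np.maximum(0.0, np.abs(u) - b * sigma2) / (1.0 + sigma2 * a)

# With a = 0 (pure Laplace prior) this reduces to classical soft thresholding:
print(shrink_moderate([-2.0, -0.1, 0.05, 1.5], a=0.0, b=1.0, sigma2=0.3))
```

Note how arguments whose magnitude is below b·σ² are set exactly to zero, which is what produces the sparsity of the denoised code.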
The resulting shrinkage function can be obtained as [6]

    g(u) = sign(u) max(0, (|u| − ad)/2 + (1/2)√((|u| + ad)² − 4σ²(α + 3))),    (8)

where a = √(α(α + 1)/2), and g(u) is set to zero in case the square root in (8) is imaginary. This is a shrinkage function that has a certain hard-thresholding flavor, as depicted in Fig. 1. Examples of the shapes of the densities given by (5) and (7) are given in Fig. 1, together with a Laplace density and a gaussian density. For illustration purposes, the densities in the plot are normalized to unit variance, but these parameterizations allow the variance to be chosen freely. Choosing whether model (5) or (7) should be used can be based on moments of the distributions; see [6]. Methods for estimating the noise variance σ² are given in [3, 6].

4 Sparse code shrinkage

The above results imply the following sparse code shrinkage method for denoising. Assume that we observe a noisy version x̃ = x + ν of the data x, where ν is a gaussian white noise vector. To denoise x̃, we transform the data to a sparse code, apply the above ML estimation procedure component-wise, and then transform back to the original variables. Here, we constrain the transformation to be orthogonal; this is motivated in Section 5. To summarize:

1. First, using a noise-free training set of x, use some sparse coding method for determining the orthogonal matrix W so that the components s_i in s = Wx have as sparse distributions as possible. Estimate a density model p_i(s_i) for each sparse component, using the models in (5) and (7).

2. Compute for each noisy observation x̃(t) of x the corresponding noisy sparse components ỹ(t) = W x̃(t). Apply the shrinkage nonlinearity g_i(·) as defined in (6), or in (8), on each component ỹ_i(t), for every observation index t. Denote the obtained components by ŝ_i(t) = g_i(ỹ_i(t)).

3. Invert the relation (1) to obtain estimates of the noise-free x, given by x̂(t) = Wᵀ ŝ(t).
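The three steps above can be sketched in a few lines of numpy. This is a minimal illustration, assuming W has already been estimated (step 1); the helper names and toy data are ours:

```python
import numpy as np

def sparse_code_shrinkage(X_noisy, W, shrink):
    # Steps 2-3 for an orthogonal sparse-coding matrix W.
    # X_noisy: (T, dim) array whose rows are noisy observations;
    # shrink[i] is the per-component nonlinearity g_i, e.g. from (6) or (8).
    Y = X_noisy @ W.T                                    # noisy sparse components, row-wise
    S = np.column_stack([g(Y[:, i]) for i, g in enumerate(shrink)])
    return S @ W                                         # back-transform by W', row-wise

# Minimal usage with soft thresholding and the identity as a (trivial) sparse code:
soft = lambda y: np.sign(y) * np.maximum(0.0, np.abs(y) - 0.3)
X = np.array([[1.0, -0.1], [0.05, -2.0]])
print(sparse_code_shrinkage(X, np.eye(2), [soft, soft]))
```

Because W is orthogonal, the back-transform is simply its transpose; with no shrinkage at all the pipeline returns the input unchanged.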
To estimate the sparsifying transform W, we assume that we have access to a noise-free realization of the underlying random vector. This assumption is not unrealistic in many applications: for example, in image denoising it simply means that we can observe noise-free images that are somewhat similar to the noisy image to be treated, i.e., they belong to the same environment or context. This assumption can, however, be relaxed in many cases, see [7]. The problem of finding an optimal sparse code in step 1 is treated in the next section. In fact, it turns out that the shrinkage operation given above is quite similar to the one used in the wavelet shrinkage method derived earlier by Donoho et al. [3] from a very different approach. Their estimator consisted of applying the shrinkage operator in (6), with different values for the parameters, on the coefficients of the wavelet transform. There are two main differences between the two methods. The first is the choice of the transformation. We choose the transformation using the statistical properties of the data at hand, whereas Donoho et al. use a predetermined wavelet transform. The second important difference is that we estimate the shrinkage nonlinearities by the ML principle, again adapting to the data at hand, whereas Donoho et al. use fixed thresholding operators derived by the minimax principle.

5 Choosing the optimal sparse code

Different measures of sparseness (or nongaussianity) have been proposed in the literature [1, 4, 8, 10]. In this section, we show which measures are optimal for our method. We shall here restrict ourselves to the class of linear, orthogonal transformations. This restriction is justified by the fact that orthogonal transformations leave the gaussian noise structure intact, which makes the problem more easily tractable. This restriction can be relaxed, however, see [7].
A simple, yet very attractive principle for choosing the basis for sparse coding is to consider the data to be generated by a noisy independent component analysis (ICA) model [10, 6, 9]:

    x = As + ν,    (9)

where the s_i are now the independent components, and ν is multivariate gaussian noise. We could then estimate A using ordinary maximum likelihood estimation of the ICA model. Under the restriction that A is constrained to be orthogonal, estimation of the noise-free components s_i then amounts to the above method of shrinking the values of Aᵀx, see [6]. In this ML sense, the optimal transformation matrix is thus given by W = Aᵀ. In particular, using this principle means that ordinary ICA algorithms can be used to estimate the sparse coding basis. This is very fortunate, since the computationally efficient methods for ICA estimation enable the basis estimation even in spaces of rather high dimension [8, 5]. An alternative principle for determining the optimal sparsifying transformation is to minimize the mean-square error (MSE). In [6], a theorem is given which shows that the optimal basis in the minimum-MSE sense is obtained by maximizing Σ_i I_F(w_iᵀx), where I_F(s) = E{[p′(s)/p(s)]²} is the Fisher information of the density of s, and the w_iᵀ are the rows of W. The Fisher information of a density [4] can be considered as a measure of its nongaussianity. It is well known [4] that in the set of probability densities of unit variance, the Fisher information is minimized by the gaussian density, and the minimum equals 1. Thus the theorem shows that the more nongaussian (sparse) s is, the better we can reduce noise. Note, however, that Fisher information is not scale-invariant. The former (ML) method of determining the basis matrix usually gives sparser components than the latter method based on minimizing MSE. In the case of image denoising, however, these two methods give essentially equivalent bases if a perceptually weighted MSE is used [6].
Thus we luckily avoid the classical dilemma of choosing between these two optimality criteria.

6 Experiments

Image data seems to fulfill the assumptions inherent in sparse code shrinkage: it is possible to find linear representations whose components have sparse distributions, using wavelet-like filters [10]. Thus we performed a set of experiments to explore the utility of sparse code shrinkage in image denoising. The experiments are reported in more detail in [7].

Data. The data consisted of real-life images, mainly natural scenes. The images were randomly divided into two sets. The first set was used in estimating the matrix W that gives the sparse coding transformation, as well as in estimating the shrinkage nonlinearities. The second set was used as a test set. It was artificially corrupted by Gaussian noise, and sparse code shrinkage was used to reduce the noise. The images were used in the method in the form of subwindows of 8 × 8 pixels.

Methods. The sparse coding matrix W was determined by first estimating the ICA model for the image windows (with the DC component removed) using the FastICA algorithm [8, 5], and projecting the obtained estimate on the space of orthogonal matrices. The training images were also used to estimate the parametric density models of the sparse components. In the first series of experiments, the local variance was equalized as a preprocessing step [7]. This implied that the density in (5) was a more suitable model for the densities of the sparse components; thus the shrinkage function in (6) was used. In the second series, no such equalization was made, and the density model (7) and the shrinkage function (8) were used [7].

Results. Fig. 2 shows, on the left, a test image which was artificially corrupted with Gaussian noise with standard deviation 0.5 (the standard deviations of the original images were normalized to 1).
The result of applying our denoising method (without local variance equalization) on that image is shown on the right. Visual comparison of the images in Fig. 2 shows that our sparse code shrinkage method cancels noise quite effectively. One sees that contours and other sharp details are conserved quite well, while the overall reduction of noise is quite strong, which is in contrast to methods based on low-pass filtering. This result is in line with those obtained by wavelet shrinkage [3]. More experimental results are given in [7].

7 Conclusion

Sparse coding and ICA can be applied to image feature extraction, resulting in a wavelet-like basis for image windows [10]. As a practical application of such a basis, we introduced the method of sparse code shrinkage. It is based on the fact that in sparse coding the energy of the signal is concentrated on only a few components, which are different for each observed vector. By shrinking the absolute values of the sparse components towards zero, noise can be reduced. The method is also closely connected to modeling image data with noisy independent component analysis [9]. We showed how to find the optimal sparse coding basis for denoising, and we developed families of probability densities that allow the shrinkage nonlinearities to adapt accurately to the data at hand. Experiments on image data showed that the performance of the method is very appealing. The method reduces noise without blurring edges or other sharp features as much as linear low-pass or median filtering. This is made possible by the strongly nonlinear nature of the shrinkage operator, which takes advantage of the inherent statistical structure of natural images.
| 1998 | 51 | 1,550 |
Finite-dimensional approximation of Gaussian processes

Giancarlo Ferrari Trecate, Dipartimento di Informatica e Sistemistica, Universita di Pavia, Via Ferrata 1, 27100 Pavia, Italy. ferrari@conpro.unipv.it
Christopher K. I. Williams, Department of Artificial Intelligence, University of Edinburgh, 5 Forrest Hill, Edinburgh EH1 2QL. ckiw@dai.ed.ac.uk
Manfred Opper, Neural Computing Research Group, Division of Electronic Engineering and Computer Science, Aston University, Birmingham, B4 7ET, UK. m.opper@aston.ac.uk

Abstract

Gaussian process (GP) prediction suffers from O(n³) scaling with the data set size n. By using a finite-dimensional basis to approximate the GP predictor, the computational complexity can be reduced. We derive optimal finite-dimensional predictors under a number of assumptions, and show the superiority of these predictors over the Projected Bayes Regression method (which is asymptotically optimal). We also show how to calculate the minimal model size for a given n. The calculations are backed up by numerical experiments.

1 Introduction

Over the last decade there has been a growing interest in the Bayesian approach to regression problems, using both neural networks and Gaussian process (GP) prediction, that is, regression performed in function spaces when using a Gaussian random process as a prior. The computational complexity of the GP predictor scales as O(n³), where n is the size of the dataset¹. This suggests using a finite-dimensional approximating function space, which we will assume has dimension m < n. The use of the finite-dimensional model is motivated by the need for regression algorithms computationally cheaper than the GP one. Moreover, GP regression may be used for the identification of dynamical systems (De Nicolao and Ferrari Trecate, 1998), the next step being a model-based controller design.
In many cases it is easier to accomplish this second task if the model is low-dimensional. Use of a finite-dimensional model leads naturally to the question as to which basis is optimal. Zhu et al. (1997) show that, in the asymptotic regime, one should use the first m eigenfunctions of the covariance function describing the Gaussian process. We call this method Projected Bayes Regression (PBR). The main results of the paper are:

1. Although PBR is asymptotically optimal, for finite data we derive a predictor h°(x) with computational complexity O(n²m) which outperforms PBR, and obtain an upper bound on the generalization error of h°(x).

2. In practice we need to know how large to make m. We show that this depends on n and provide a means of calculating the minimal m. We also provide empirical results to back up the theoretical calculation.

2 Problem statement

Consider the problem of estimating an unknown function f(x): ℝᵈ → ℝ from the noisy observations t_i = f(x_i) + ε_i, i = 1, ..., n, where the ε_i are i.i.d. zero-mean Gaussian random variables with variance σ² and the samples x_i are drawn independently at random from a distribution p(x). The prior probability measure over the function f(·) is assumed to be Gaussian with zero mean and autocovariance function C(ξ₁, ξ₂). Moreover, we suppose that f(·), x_i, ε_i are mutually independent. Given the data set D_n = {x, t}, where x = [x₁, ..., x_n]′ and t = [t₁, ..., t_n]′, it is well known that the posterior probability p(f | D_n) is Gaussian and the GP prediction can be computed via the explicit formula (e.g. Whittle, 1963)

    f̂(x) = E[f | D_n](x) = [C(x, x₁) ⋯ C(x, x_n)] H⁻¹ t,    {H}_ij = C(x_i, x_j) + σ² δ_ij,

where H is an n × n matrix and δ_ij is the Kronecker delta. In this work we are interested in approximating f̂ in a suitable m-dimensional space that we are going to define.
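The GP prediction formula above can be sketched directly in numpy; the helper name, covariance choice and toy data below are illustrative assumptions, not from the paper:

```python
import numpy as np

def gp_mean(x_star, x, t, cov, sigma2):
    # Exact GP posterior mean: [C(x*, x_1) ... C(x*, x_n)] H^{-1} t,
    # with H_ij = C(x_i, x_j) + sigma2 * delta_ij; the solve is the O(n^3) step.
    H = cov(x[:, None], x[None, :]) + sigma2 * np.eye(len(x))
    r = cov(x_star, x)
    return float(r @ np.linalg.solve(H, t))

# 1-d toy example; the squared-exponential covariance is an illustrative choice,
# not the (1 + h)exp(-h) covariance used in the experiments of this paper.
cov = lambda u, v: np.exp(-0.5 * (u - v) ** 2)
x = np.array([0.0, 1.0, 2.0])
t = np.array([0.0, 1.0, 0.0])
print(gp_mean(1.0, x, t, cov, sigma2=0.1))
```

As the noise variance σ² tends to zero, this predictor interpolates the training targets, which is a convenient sanity check.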
Consider the Mercer-Hilbert expansion of C(ξ₁, ξ₂):

    ∫_{ℝᵈ} C(ξ₁, ξ₂) φ_i(ξ₂) p(ξ₂) dξ₂ = λ_i φ_i(ξ₁),    ∫_{ℝᵈ} φ_i(ξ) φ_j(ξ) p(ξ) dξ = δ_ij,    (1)

    C(ξ₁, ξ₂) = Σ_{i=1}^{+∞} λ_i φ_i(ξ₁) φ_i(ξ₂),

where the eigenvalues λ_i are ordered in a decreasing way. It is shown in (Zhu et al., 1997) that, at least asymptotically, the optimal model belongs to M = Span{φ_i, i = 1, ..., m}. This motivates the choice of this space even when dealing with a finite amount of data. Now we introduce the finite-dimensional approximator which we call Projected Bayes Regression.

¹The O(n³) arises from the inversion of an n × n matrix.

Definition 1 The PBR approximator is b(x) = k′(x) ŵ, where

    ŵ = β A Φ′ t,    β = 1/σ²,    A = (Λ⁻¹ + β Φ′ Φ)⁻¹,    (Λ)_ij = λ_i δ_ij,

    k(x) = [φ₁(x) ⋯ φ_m(x)]′,    (Φ)_ij = φ_j(x_i).

The name PBR comes from the fact that b(x) is the GP predictor when using the mis-specified prior whose autocovariance function is the projection of C(ξ₁, ξ₂) on M:

    C_m(ξ₁, ξ₂) = Σ_{i=1}^{m} λ_i φ_i(ξ₁) φ_i(ξ₂).    (2)

From the computational point of view, it is interesting to note that the calculation of PBR scales with the data as O(m²n), assuming that n ≫ m (this is the cost of computing Φ′Φ in A⁻¹). Throughout the paper the following measures of performance will be extensively used.

Definition 2 Let s(x) be a predictor that uses only information from D_n. Then its x-error and generalization error are respectively defined as

    E_s(n, x) = E_{t*, x*, t}[(t* − s(x*))²],    E_s(n) = E_x[E_s(n, x)].

An estimator s°(x) belonging to a class H is said to be x-optimal or simply optimal if, respectively, E_{s°}(n, x) ≤ E_s(n, x) or E_{s°}(n) ≤ E_s(n), for all s(x) ∈ H and all data sets x. Note that x-optimality means optimality for each fixed vector x of data points. Obviously, if s°(x) is x-optimal it is also simply optimal.
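Definition 1 amounts to a ridge-type linear solve in the m-dimensional eigenfunction basis. A small numpy sketch (function name and test setup are ours, assuming Φ, the eigenvalues and σ² are available):

```python
import numpy as np

def pbr_coeffs(Phi, t, lam, sigma2):
    # PBR coefficients (Definition 1): w_hat = beta * A * Phi' t with
    # beta = 1/sigma2 and A = (Lambda^{-1} + beta * Phi'Phi)^{-1}.
    # Phi[i, j] = phi_j(x_i); forming Phi'Phi is the O(m^2 n) step.
    beta = 1.0 / sigma2
    A = np.linalg.inv(np.diag(1.0 / np.asarray(lam)) + beta * (Phi.T @ Phi))
    return beta * (A @ Phi.T @ t)

# The prediction at x is then b(x) = k(x) . w_hat, with k(x) = [phi_1(x) ... phi_m(x)]'.
```

A useful special case: when the sampled eigenfunctions are exactly orthonormal (Φ′Φ = I), each coefficient reduces to (Φ′t)_i scaled by λ_i/(λ_i + σ²), i.e. a per-eigenfunction shrinkage of the naive projection.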
These definitions are motivated by the fact that for Gaussian process priors over functions and a predictor s that depends linearly on t, the computation of E_s(n, x) can be carried out with finite-dimensional matrix calculations (see Lemma 4 below), although obtaining E_s(n) is more difficult, as the average over x is usually analytically intractable.

3 Optimal finite-dimensional models

We start by considering two classes of linear approximators, namely H₁ = {g(x) = k′(x) L t | L ∈ ℝ^{m×n}} and H₂ = {h(x) = k′(x) F Φ′ t | F ∈ ℝ^{m×m}}, where the matrices L and F may depend on the samples x_i. We point out that H₂ ⊂ H₁ and that the PBR predictor b(x) ∈ H₂. Our goal is the characterization of the optimal predictors in H₁ and H₂. Before stating the main result, two preliminary lemmas are given. The first one is proved in (Pilz, 1991) while the second follows from a straightforward calculation.

Lemma 3 Let A ∈ ℝ^{n×n}, B ∈ ℝ^{n×r}, A > 0. Then it holds that

    inf_{Z ∈ ℝ^{r×n}} Tr[Z A Z′ − Z B − B′ Z′] = Tr[−B′ A⁻¹ B]

and the minimum is achieved for the matrix Z* = B′ A⁻¹.

Lemma 4 Let g(x) ∈ H₁. Then it holds that

    E_g(n, x) = Σ_{i=1}^{+∞} λ_i + σ² + q(L),    q(L) = Tr[L H L′ − 2 L Φ Λ].

Proof. In view of the x-error definition, setting r(x*) = [C(x*, x₁) ⋯ C(x*, x_n)]′, it holds that

    E_{t*, t}[(t* − k′(x*) L t)²] = σ² + C(x*, x*) + k′(x*) L H L′ k(x*) − 2 k′(x*) L r(x*)    (3)
                                 = σ² + C(x*, x*) + Tr[L H L′ k(x*) k′(x*) − 2 L r(x*) k′(x*)].

Note that E_{x*}[k(x*) k′(x*)] = I_m, E_{x*}[r(x*) k′(x*)] = Φ Λ and, from the Mercer-Hilbert expansion (1), E_{x*}[C(x*, x*)] = Σ_{i=1}^{+∞} λ_i. Then, taking the mean of (3) w.r.t. x*, the result follows. □

Theorem 5 The predictors g°(x) ∈ H₁ given by L = L° = Λ Φ′ H⁻¹ and h°(x) ∈ H₂ given by F = F° = Λ Φ′ Φ (Φ′ H Φ)⁻¹, ∀n ≥ m, are x-optimal. Moreover,

    E_{g°}(n, x) = Σ_{i=1}^{+∞} λ_i + σ² − Tr[Λ Φ′ H⁻¹ Φ Λ],    (4)

    E_{h°}(n, x) = Σ_{i=1}^{+∞} λ_i + σ² − Tr[Λ Φ′ Φ (Φ′ H Φ)⁻¹ Φ′ Φ Λ].

Proof. We start by considering the g°(x) case.
In view of Lemma 4 we need only minimize q(L) w.r.t. the matrix L. By applying Lemma 3 with B = Φ Λ, A = H > 0, Z = L, one obtains

    argmin_L q(L) = L° = Λ Φ′ H⁻¹,    min_L q(L) = −Tr[Λ Φ′ H⁻¹ Φ Λ],    (5)

so proving the first result. For the second case, we apply Lemma 4 with L = F Φ′ and then perform the minimization of q(F Φ′) w.r.t. the matrix F. This can be done as before, noting that Φ′ H⁻¹ Φ > 0 only when n ≥ m. □

Note that the only difference between g°(x) and the GP predictor derives from the approximation of the functions C(x, x_k) with Σ_{i=1}^{m} λ_i φ_i(x) φ_i(x_k). Moreover, the complexity of g°(x) is O(n³), the same as f̂(x). On the other hand, h°(x) scales as O(n²m), so having a computational cost intermediate between the GP predictor and PBR. Intuitively, the PBR method is inferior to h° as it does not take into account the x locations in setting up its prior. We can also show that the PBR predictor b(x) and h°(x) are asymptotically equivalent. From (4) it is clear that the explicit evaluations of E_{g°}(n) and E_{h°}(n) are in general very hard problems, because of the mean w.r.t. the x_i samples that enter the Φ and H matrices. In the remainder of this section we will derive an upper bound on E_{h°}(n). Consider the class of approximators H₃ = {u(x) = k′(x) D Φ′ t | D ∈ ℝ^{m×m}, (D)_ij = d_i δ_ij}. Because of the inclusions H₃ ⊂ H₂ ⊂ H₁, if u°(x) is the x-optimal predictor in H₃, then E_{g°}(n) ≤ E_{h°}(n) ≤ E_{u°}(n). Due to the diagonal structure of the matrix D, an upper bound on E_{u°}(n) may be explicitly computed, as stated in the next theorem.

Theorem 6 The approximator u°(x) ∈ H₃ given by

    (D°)_ij = ((Φ′ Φ Λ)_ii / (Φ′ H Φ)_ii) δ_ij    (6)

is x-optimal. Moreover, an upper bound on its generalization error is given by

    E_{u°}(n) ≤ Σ_{i=1}^{+∞} λ_i + σ² − n Σ_{k=1}^{m} q_k λ_k,    q_k = λ_k / c_k,    (7)

    c_k = (n − 1) λ_k + ∫ C(x, x) φ_k²(x) p(x) dx + σ².

Proof.
In order to find the x-optimal approximator in H₃, we start by applying Lemma 4 with L = D Φ′. Then we need to minimize

    q(D Φ′) = Σ_{i=1}^{m} d_i² (Φ′ H Φ)_ii − 2 Σ_{i=1}^{m} d_i (Φ′ Φ Λ)_ii    (8)

w.r.t. the d_i, so obtaining (6). To bound E_{u°}(n), we first compute the generalization error of a generic approximator u(x), that is E_u(n) = E_x[q(D Φ′)] + Σ_{i=1}^{+∞} λ_i + σ². After verifying that E_x[(Φ′ H Φ)_ii] = n c_i and E_x[(Φ′ Φ Λ)_ii] = n λ_i, we obtain from (8), assuming the d_i constant,

    E_u(n) = Σ_{i=1}^{+∞} λ_i + σ² + n Σ_{i=1}^{m} d_i² c_i − 2n Σ_{i=1}^{m} d_i λ_i.

Minimizing E_u(n) w.r.t. the d_i, and recalling that u°(x) is also simply optimal, the formula (7) follows. □

When C(ξ₁, ξ₂) is stationary, the expression for the c_i coefficient becomes simply c_i = (n − 1) λ_i + Σ_{j=1}^{+∞} λ_j + σ².

Remark: A naive approach to estimating the coefficients in the estimator Σ_{i=1}^{m} w_i φ_i(x) would be to set w_i = n⁻¹(Φ′t)_i as an approximation to the integral w_i = ∫ φ_i(x) f(x) p(x) dx. The effect of the matrix D is to "shrink" the w_i's of the higher-frequency eigenfunctions. If there were no shrinkage it would be necessary to limit m to stop the poorly-determined w_i's from dominating, but equation (7) shows that in fact the upper bound is improved as m increases. (In fact equation (7) can be used as an upper bound on the GP prediction error; it is tightest when m → ∞.) This is consistent with the idea that increasing m under a Bayesian scheme should lead to improved predictions. In practice one would keep m < n, otherwise the approximate algorithm would be computationally more expensive than the O(n³) GP predictor.

4 Choosing m

For large n, we can show that

    E_{h°}(n) ≈ E_b(n) ≈ Σ_{i=m+1}^{+∞} λ_i + σ² + Σ_{i=1}^{m} (λ_i⁻¹ + βn)⁻¹,    (9)

where b(x) is the PBR approximator of Definition 1. (This arises because the matrix Φ′Φ becomes diagonal in the limit n → ∞ due to the orthogonality of the eigenfunctions.) In equation (9), the factor (λ_i⁻¹ + βn)⁻¹ indicates by how much the prior variance of the i-th eigenfunction φ_i has been reduced by the observation of the n datapoints. (Note that this expression is exactly the same as the posterior variance of the mean
of a Gaussian with prior N(0, λ_i) given n observations corrupted by Gaussian noise of variance β⁻¹.) For an eigenfunction with λ_i ≫ σ²/n, the posterior is considerably tighter than the prior, but when λ_i ≪ σ²/n, the prior and posterior have almost the same width, which suggests that there is little point in including these eigenfunctions in the finite-dimensional model. By omitting all but the first m eigenfunctions we add a term Σ_{i=m+1}^{+∞} λ_i to the expected generalization error. This means that for a finite-dimensional model using the first m eigenfunctions, we expect that E_{h°}(n) ≈ E_{f̂}(n) up to a training set size determined by ñ = 1/(β λ_m). We call ñ the detaching point for the m-dimensional approximator. Conversely, in practical regression problems the data set size n is known. Then, from the knowledge of the autocovariance eigenvalues, it is possible to determine, via the detaching-point formula, the order m of the approximation that should be used in order to guarantee E_{h°}(n) ≈ E_{f̂}(n).

Figure 1: (a) E_{h°}(n) and detaching points for various model orders. Dashed: m = 3; dash-dot: m = 5; dotted: m = 8; solid: E_{f̂}(n). (b) E_b(n) − E_{h°}(n) plotted against n.

5 Experimental results

We have conducted experiments using the prior covariance function C(ξ₁, ξ₂) = (1 + h)e^{−h}, where h = |ξ₁ − ξ₂|/p with p = 0.1. This covariance function corresponds to a Gaussian process which is once mean-squared differentiable. It lies in the family of stationary covariance functions C(h) = h^ν K_ν(h) (where K_ν(·) is a modified Bessel function), with ν = 3/2. The eigenvalues and eigenfunctions of this covariance kernel for the density p(x) ~ U(0, 1) have been calculated in Vivarelli (1998). In our first experiment (using σ² = 1) the learning curves of b(x), h°(x) and f̂(x) were obtained; the average over the choice of training data sets was estimated by using 100 different x samples.
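Before turning to the results, note that the detaching-point rule of Section 4 is easy to operationalize. The routine below is our reading of ñ = 1/(βλ_m) = σ²/λ_m (not code from the paper), and the geometric spectrum is purely illustrative:

```python
import numpy as np

def minimal_order(lam, n, sigma2):
    # Smallest m whose detaching point n~ = sigma2 / lambda_m reaches the given
    # data set size n, i.e. the first index with lambda_m <= sigma2 / n.
    # lam must be sorted in decreasing order; if no eigenvalue is that small,
    # fall back to using all of them.
    lam = np.asarray(lam)
    hits = np.nonzero(lam <= sigma2 / n)[0]
    return int(hits[0]) + 1 if hits.size else len(lam)

# Geometrically decaying spectrum as a stand-in for the true eigenvalues:
lam = 0.5 ** np.arange(1, 11)
print(minimal_order(lam, n=100, sigma2=1.0))   # first eigenvalue below 0.01 is the 7th
```

Eigenfunctions beyond this m have λ_i ≪ σ²/n, so by the argument above they contribute essentially nothing to the posterior contraction.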
It was noticed that E_{g°}(n) and E_{h°}(n) practically coincide, so only the latter curve is drawn in the pictures. In Figure 1(a) we have plotted the learning curves for GP regression and the approximation h°(x) for various model orders. The corresponding detaching points are also plotted, showing their effectiveness in determining the size of data sets for which E_{h°}(n) ≈ E_{f̂}(n). The minimum possible error attainable is σ² = 1.0. For finite-dimensional models this is increased by Σ_{i=m+1}^{+∞} λ_i; these "plateaux" can be clearly seen on the right hand side of Figure 1(a). Our second experiment demonstrates the differences in performance between the h°(x) and b(x) estimators, using σ² = 0.1. In Figure 1(b) we have plotted the average difference E_b(n) − E_{h°}(n). This was obtained by averaging E_b(n, x) − E_{h°}(n, x) (computed with the same x, i.e. a paired comparison) over 100 choices of x, for each n. Notice that h° is superior to the PBR estimator for small n (as expected), but that they are asymptotically equivalent.

6 Discussion

In this paper we have shown that a finite-dimensional predictor h° can be constructed which has lower generalization error than the PBR predictor. Its computational complexity is O(n²m), lying between the O(n³) complexity of the GP predictor and the O(m²n) complexity of PBR. We have also shown how to calculate m, the number of basis functions required, according to the data set size. We have used finite-dimensional models to approximate GP regression. An interesting alternative is found in the work of Gibbs and MacKay (1997), where approximate matrix inversion methods that have O(n²) scaling have been investigated. It would be interesting to compare the relative merits of these two methods.

Acknowledgements

We thank Francesco Vivarelli for his help in providing the learning curves for E_{f̂}(n) and the eigenfunctions/values in Section 5.

References

[1] De Nicolao, G., and Ferrari Trecate, G. (1998).
Identification of NARX models using regularization networks: a consistency result. IEEE Int. Joint Conf. on Neural Networks, Anchorage, US, pp. 2407-2412.

[2] Gibbs, M. and MacKay, D. J. C. (1997). Efficient Implementation of Gaussian Processes. Cavendish Laboratory, Cambridge, UK. Draft manuscript, available from http://wol.ra.phy.cam.ac.uk/mackay/homepage.html.

[3] Opper, M. (1997). Regression with Gaussian processes: Average case performance. In I. K. Kwok-Yee, M. Wong and D.-Y. Yeung (eds), Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective. Springer-Verlag.

[4] Pilz, J. (1991). Bayesian estimation and experimental design in linear regression models. Wiley & Sons.

[5] Ripley, B. D. (1996). Pattern recognition and neural networks. CUP.

[6] Wahba, G. (1990). Spline models for observational data. Society for Industrial and Applied Mathematics. CBMS-NSF Regional Conf. series in applied mathematics.

[7] Whittle, P. (1963). Prediction and regulation by linear least-square methods. English Universities Press.

[8] Williams, C. K. I. (1998). Prediction with Gaussian processes: from linear regression to linear prediction and beyond. In Jordan, M. I., editor, Learning and inference in graphical models. Kluwer Academic Press.

[9] Vivarelli, F. (1998). Studies on generalization in Gaussian processes and Bayesian neural networks. Forthcoming PhD thesis, Aston University, Birmingham, UK.

[10] Zhu, H., and Rohwer, R. (1996). Bayesian regression filters and the issue of priors. Neural Computing and Applications, 4:130-142.

[11] Zhu, H., Williams, C. K. I., Rohwer, R. and Morciniec, M. (1997). Gaussian regression and optimal finite dimensional linear models. Tech. Rep. NCRG/97/011, Aston University, Birmingham, UK.
| 1998 | 52 | 1,551 |
Controlling the Complexity of HMM Systems by Regularization

Christoph Neukirchen, Gerhard Rigoll
Department of Computer Science, Gerhard-Mercator-University Duisburg, 47057 Duisburg, Germany
email: {chn.rigoll}@fb9-ti.uni-duisburg.de

Abstract

This paper introduces a method for regularization of HMM systems that avoids parameter overfitting caused by insufficient training data. Regularization is done by augmenting the EM training method by a penalty term that favors simple and smooth HMM systems. The penalty term is constructed as a mixture model of negative exponential distributions that is assumed to generate the state-dependent emission probabilities of the HMMs. This new method is the successful transfer of a well-known regularization approach in neural networks to the HMM domain and can be interpreted as a generalization of traditional state-tying for HMM systems. The effect of regularization is demonstrated for continuous speech recognition tasks by improving overfitted triphone models and by speaker adaptation with limited training data.

1 Introduction

One general problem when constructing statistical pattern recognition systems is to ensure the capability to generalize well, i.e. the system must be able to classify data that is not contained in the training data set. Hence the classifier should learn the true underlying data distribution instead of overfitting to the few data examples seen during system training. One way to cope with the problem of overfitting is to balance the system's complexity and flexibility against the limited amount of data that is available for training. In the neural network community it is well known that the amount of information used in system training that is required for a good generalization performance should be larger than the number of adjustable weights (Baum, 1989).
A common method to train a large-size neural network sufficiently well is to reduce the number of adjustable parameters, either by removing those weights that seem to be less important (in (le Cun, 1990) the sensitivity of individual network weights is estimated by the second-order gradient) or by sharing the weights among many network connections (in (Lang, 1990) the connections that share identical weight values are determined in advance by using prior knowledge about invariances in the problem to be solved). A second approach to avoid overfitting in neural networks is to make use of regularization methods. Regularization adds an extra term to the training objective function that penalizes network complexity. The simplest regularization method is weight decay (Plaut, 1986), which assigns high penalties to large weights. A more complex regularization term is used in soft weight-sharing (Nowlan, 1992) by favoring neural network weights that fall into a finite set of small weight-clusters. The traditional neural weight-sharing technique can be interpreted as a special case of soft weight-sharing regularization when the cluster variances tend towards zero. In continuous speech recognition the Hidden Markov Model (HMM) method is common. When using detailed context-dependent triphone HMMs, the number of HMM-states and parameters to estimate in the state-dependent probability density functions (pdfs) is increasingly large and overfitting becomes a serious problem. The most common approach to balance the complexity of triphone HMM systems against the training data set is to reduce the number of parameters by tying, i.e. parameter sharing (Young, 1992). A popular sharing method is state-tying with selecting the HMM-states to be tied in advance, either by data-driven state-clustering based on a pdf-dependent distance metric (Young, 1993), or by constructing binary decision trees that incorporate higher phonetic knowledge (Bahl, 1991).
In these methods, the number of state-clusters and the decision tree sizes, respectively, must be chosen adequately to match the training data size. However, a possible drawback of both methods is that two different states may be selected to be tied (and their pdfs are forced to be identical) although there is enough training data to estimate the different pdfs of both states sufficiently well. In the following, a method to reduce the complexity of general HMM systems based on a regularization term is presented. Due to its close relationship to the soft weight-sharing method for neural networks this novel approach can be interpreted as soft state-tying. 2 Maximum likelihood training in HMM systems Traditionally, the method most commonly used to determine the set of adjustable parameters \Theta in a HMM system is maximum likelihood (ML) estimation via the expectation maximization (EM) algorithm. If the training observation vector sequence is denoted as X = (x(1), ..., x(T)) and the corresponding HMM is denoted as W, the ML estimator is given by:

    \hat{\Theta}_{ML} = \arg\max_{\Theta} \{ \log p_{\Theta}(X|W) \}   (1)

In the following, the total number of different HMM states is given by K. The emission pdf of the k-th state is denoted as b_k(x); for continuous HMMs b_k(x) is most commonly a mixture of Gaussian pdfs; in the case of discrete HMMs the observation vector x is mapped by a vector quantizer (VQ) on the discrete VQ-label m(x) and the emission pdf is replaced by the discrete output probability b_k(m). By the forward-backward algorithm the probabilistic state counts \gamma_k(t) can be determined for each training observation, and the log-likelihood over the training data can be decomposed into the auxiliary function Q(\Theta) optimized in the EM steps (state transition probabilities are neglected here):

    Q(\Theta) = \sum_{t=1}^{T} \sum_{k=1}^{K} \gamma_k(t) \cdot \log b_k(x(t))   (2)

Sometimes, the observation vector x is split up into several independent streams.
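For the discrete-HMM case, the auxiliary function of Eqn. (2) reduces to a weighted sum of log output probabilities over the VQ labels. A minimal sketch, assuming the state posteriors from forward-backward and the VQ labels are already available (all names are illustrative):

```python
import numpy as np

def em_auxiliary_q(gamma, b, labels):
    """Q(Theta) = sum_t sum_k gamma_k(t) * log b_k(m(x(t)))  (Eqn. 2)
    for a discrete HMM.

    gamma  : (T, K) probabilistic state counts from forward-backward
    b      : (K, J) discrete emission probabilities per state
    labels : (T,)   VQ label index of each training frame
    """
    log_b = np.log(b[:, labels])        # (K, T): log b_k(m(x(t)))
    return float(np.sum(gamma * log_b.T))
```

Each EM step chooses the emission probabilities that increase this quantity, which for ML training amounts to normalizing the soft counts per state.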
If the total number of streams is given by Z, the features in the z-th stream comprise the subvector x^{(z)} and in the case of application of a VQ the corresponding VQ label is denoted as m^{(z)}(x^{(z)}). The observation subvectors in different streams are assumed to be statistically independent, thus the states' pdfs can be written as:

    b_k(x) = \prod_{z=1}^{Z} b_k^{(z)}(x^{(z)})   (3)

3 A complexity measure for HMM systems When using regularization methods to train the HMM system, the traditional objective training function Q(\Theta) is augmented by a complexity penalization term \Omega and the new optimization problem becomes:

    \hat{\Theta}_{reg} = \arg\max_{\Theta} \{ Q(\Theta) + \nu \cdot \Omega(\Theta) \}   (4)

Here, the regularizer term \Omega should be small if the HMM system has high complexity and parameter overfitting becomes a problem; \Omega should be large if the HMM-states' pdfs are shaped smoothly and system generalization works well. The constant \nu \geq 0 is a control parameter that adjusts the tradeoff between the pure ML solution and the smoothness penalization. In Eqn. (4) the term Q(\Theta) becomes larger the more data is used for training (which makes the ML estimation more reliable) and the influence of the term \nu \cdot \Omega gets relatively less important. The basic idea when constructing an expression for the regularizer \Omega that favors smooth HMM systems is that, in the case of simple and smooth systems, the state-dependent emission pdfs b_k(\cdot) should fall into several groups of similar pdfs. This is in contrast to the traditional state-tying that forces identical pdfs in each group. In the following, these clusters of similar emission pdfs are described by a probabilistic mixture model. Each pdf is assumed to be generated by a mixture of I different mixture components p_i(\cdot). In this case the probability (density) of generating the emission pdf b_k(\cdot) is given by:

    p(b_k(\cdot)) = \sum_{i=1}^{I} c_i \cdot p_i(b_k(\cdot))   (5)

with the mixture weights c_i that are constrained to 0 \leq c_i \leq 1 and \sum_{i=1}^{I} c_i = 1. The i-th mixture component p_i(\cdot) is used to model the i-th cluster of HMM-emission pdfs. Each cluster is represented by a prototype pdf that is denoted as \beta_i(\cdot) for the i-th cluster; the distance (using a suitable metric) between a HMM emission pdf b_k(\cdot) and the i-th prototype pdf is denoted as D_i(b_k(\cdot)). If these distances are small for all HMM emission probabilities there are several small clusters of emission probabilities and the regularizer term \Omega should be large. Now, it is assumed that the distances follow a negative exponential distribution (with a deviation parameter \lambda_i), yielding an expression for the mixture components:

    p_i(b_k(\cdot)) = \left( \prod_{z=1}^{Z} \lambda_{i,z} \right) \cdot \exp\left( - \sum_{z=1}^{Z} \lambda_{i,z} \cdot D_{i,z}(b_k(\cdot)) \right)   (6)

In Eqn. (6) the more general case of Z independent streams is given. Hence, the HMM emission pdfs and the cluster prototype pdfs are split up into Z different pdfs b_k^{(z)}(\cdot) and \beta_i^{(z)}(\cdot), respectively, and the stream dependent distances D_{i,z} and parameters \lambda_{i,z} are used. Now, for the regularizer term \Omega the log-likelihood of the mixture model in Eqn. (5) over all emission pdfs in the HMM system can be used:

    \Omega(\Theta) = \sum_{k=1}^{K} \log p(b_k(\cdot))   (7)

4 Regularization example: discrete HMMs As an example for parameter estimation in the regularization framework, a discrete HMM system with different VQs for each of the Z streams is considered here: each VQ subdivides the feature space into J_z different partitions (i.e. the z-th codebook size is J_z) and the VQ-partition labels are denoted m_j^{(z)}. If the observation subvector x^{(z)} is in the j-th VQ-partition the VQ output is m^{(z)}(x^{(z)}) = m_j^{(z)}. Since discrete HMM output probabilities b_k^{(z)}(m^{(z)}) are used here, the regularizer's prototypes are the discrete probabilities \beta_i^{(z)}(m^{(z)}). As a distance metric between the HMM emission probabilities and the prototype probabilities used in Eqn.
(6), the asymmetric Kullback-Leibler divergence is applied:

    D_{i,z}(b_k^{(z)}(\cdot)) = \sum_{j=1}^{J_z} \beta_i^{(z)}(m_j^{(z)}) \cdot \log \frac{\beta_i^{(z)}(m_j^{(z)})}{b_k^{(z)}(m_j^{(z)})}   (8)

4.1 Estimation of HMM parameters using regularization The parameter set \Theta of the HMM system to be estimated mainly consists of the discrete HMM emission probabilities (transition probabilities are not subject to regularization here). To get an iterative parameter estimation in the EM style, Eqn. (4) must be maximized; e.g. by setting the derivative of Eqn. (4) with respect to the HMM-parameter b_k^{(z)}(m_j^{(z)}) to zero and application of Lagrange multipliers with regard to the constraint 1 = \sum_{j=1}^{J_z} b_k^{(z)}(m_j^{(z)}). This leads to a quite complex solution that can only be solved numerically. The optimization problem can be simplified if the mixture in Eqn. (5) is replaced by the maximum approximation; i.e. only the maximum component in the sum is considered. The corresponding index of the maximum component is denoted i^*:

    i^* = \arg\max_i \{ c_i \cdot p_i(b_k(\cdot)) \}   (9)

In this simplified case the HMM parameter estimation is given by:

    \hat{b}_k^{(z)}(m_j^{(z)}) = \frac{ \sum_{t=1}^{T} \gamma_k(t) \cdot \delta(m^{(z)}(x^{(z)}(t)), m_j^{(z)}) + \nu \cdot \lambda_{i^*,z} \cdot \beta_{i^*}^{(z)}(m_j^{(z)}) }{ \sum_{t=1}^{T} \gamma_k(t) + \nu \cdot \lambda_{i^*,z} }   (10)

This is a weighted sum of the well known ML solution and the regularizer's prototype probability \beta_{i^*}^{(z)}(\cdot) that is selected by the maximum search in Eqn. (9). The larger the value of the constant \nu, the stronger is the force that pushes the estimate of the HMM emission probability b_k^{(z)}(m_j^{(z)}) towards the prototype probability \beta_{i^*}^{(z)}(\cdot). The situation when \nu tends towards infinity corresponds to the case of traditional state-tying, because all different states that fall into the same cluster i^* make use of \beta_{i^*}^{(z)}(\cdot) as emission probability in the z-th stream. 4.2 Estimation of regularizer parameters The parameter set \Lambda of the regularizer consists of the mixture weights c_i, the deviation parameters \lambda_{i,z}, and of the discrete prototype probabilities \beta_i^{(z)}(m_j^{(z)}) in the case of regularizing discrete HMMs. These parameters can be set in advance by making use of prior knowledge; e.g.
the prototype probabilities can be obtained from a simple HMM system that uses a small number of states. Alternatively, the regularizer's parameters can be estimated in a similar way as in (Nowlan, 1992) by maximizing Eqn. (7). Since there is no direct solution to this optimization problem, maximization must be performed in an EM-like iterative procedure that uses the HMM emission pdfs b_k(\cdot) as training data for the mixture model, increasing the following auxiliary function in each step:

    R(\Lambda) = \sum_{k=1}^{K} \sum_{i=1}^{I} P(i|b_k(\cdot)) \cdot \log p(i, b_k(\cdot)) = \sum_{k=1}^{K} \sum_{i=1}^{I} P(i|b_k(\cdot)) \cdot \log \left( c_i \cdot p_i(b_k(\cdot)) \right)   (11)

with the posterior probability used as weighting factor given by:

    P(i|b_k(\cdot)) = \frac{ c_i \cdot p_i(b_k(\cdot)) }{ \sum_{l=1}^{I} c_l \cdot p_l(b_k(\cdot)) }   (12)

Again, maximization of Eqn. (11) can be performed by setting the derivative of R(\Lambda) with respect to the regularizer's parameters to zero under consideration of the constraints 1 = \sum_{i=1}^{I} c_i and 1 = \sum_{j=1}^{J_z} \beta_i^{(z)}(m_j^{(z)}) by application of Lagrange multipliers. For the estimation of the regularizer parameters this yields:

    \hat{c}_i = \frac{1}{K} \sum_{k=1}^{K} P(i|b_k(\cdot))   (13)

    \hat{\lambda}_{i,z} = \frac{ \sum_{k=1}^{K} P(i|b_k(\cdot)) }{ \sum_{k=1}^{K} D_{i,z}(b_k^{(z)}(\cdot)) \cdot P(i|b_k(\cdot)) }   (14)

    \hat{\beta}_i^{(z)}(m_j^{(z)}) = \frac{ \exp\left( \frac{ \sum_{k=1}^{K} P(i|b_k(\cdot)) \cdot \log b_k^{(z)}(m_j^{(z)}) }{ \sum_{k=1}^{K} P(i|b_k(\cdot)) } \right) }{ \sum_{l=1}^{J_z} \exp\left( \frac{ \sum_{k=1}^{K} P(i|b_k(\cdot)) \cdot \log b_k^{(z)}(m_l^{(z)}) }{ \sum_{k=1}^{K} P(i|b_k(\cdot)) } \right) }   (15)

The estimate \hat{c}_i can be interpreted as the average probability that a HMM emission probability falls into the i-th mixture cluster; \hat{\lambda}_{i,z} is the inverse of the weighted average distance between the emission probabilities and the prototype probability \beta_i^{(z)}(\cdot). The estimate \hat{\beta}_i^{(z)}(m_j^{(z)}) is the average probability over all emission probabilities for the VQ-label m_j^{(z)}, weighted in the log-domain. If the Euclidean distance between the discrete probabilities is used instead of Eqn.
(8) to measure the differences between the HMM emission probabilities and the prototypes,

    D_{i,z}(b_k^{(z)}(\cdot)) = \sum_{j=1}^{J_z} \left( \beta_i^{(z)}(m_j^{(z)}) - b_k^{(z)}(m_j^{(z)}) \right)^2   (16)

the estimate of the prototype probabilities is given by the average of the HMM probabilities weighted in the original space:

    \hat{\beta}_i^{(z)}(m_j^{(z)}) = \frac{ \sum_{k=1}^{K} P(i|b_k(\cdot)) \cdot b_k^{(z)}(m_j^{(z)}) }{ \sum_{k=1}^{K} P(i|b_k(\cdot)) }   (17)

5 Experimental results To investigate the performance of the regularization methods described above, a HMM speech recognition system for the speaker-independent resource management (RM) continuous speech task is built up. For training, 3990 sentences from 109 different speakers are used. Recognition results are given as word error rates averaged over the official DARPA RM test sets feb'89, oct'89, feb'91 and sep'92, consisting of 1200 sentences from 40 different speakers in total. Recognition is done via a beam search guided Viterbi decoder using the DARPA RM word pair grammar (perplexity: 60). As acoustic features, every 10 ms 12 MFCC coefficients and the relative signal power are extracted from the speech signal, along with the dynamic Δ- and ΔΔ-features, comprising 39 features per frame. The HMM system makes use of standard 3-state discrete probability phonetic models. Four different neural networks, trained by the MMI method that is described in (Rigoll, 1997) and extended in (Neukirchen, 1998), are used as VQ to quantize the features into Z = 4 different streams of discrete labels. The codebook size in each stream is set to 200. A simple system with models for 47 monophones and for the most prominent 33 function words (394 states in total) yields a word error rate of 8.6%. A system that makes use of the more detailed (but untied) word internal triphone models (6921 states in total) yields 12.2% word error. Hence HMM overfitting because of insufficient training data is a severe problem in this case.
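The core of the regularized re-estimation in Eqns. (9), (10) and (12)-(14) can be sketched for a single stream (Z = 1). This is an illustrative sketch, not the authors' implementation; all function names and array layouts are hypothetical:

```python
import numpy as np

def cluster_posteriors(D, c, lam):
    """Posterior P(i|b_k) of mixture component i for each state pdf
    (Eqn. 12), with negative-exponential components (Eqn. 6), Z = 1.
    D: (K, I) distances, c: (I,) mixture weights, lam: (I,) deviations."""
    log_p = np.log(c) + np.log(lam) - lam * D            # log of c_i * p_i(b_k)
    post = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    return post / post.sum(axis=1, keepdims=True)

def update_regularizer(post, D):
    """Re-estimates mixture weights and deviations (Eqns. 13-14)."""
    c_new = post.mean(axis=0)                            # Eqn. (13)
    lam_new = post.sum(axis=0) / np.sum(post * D, axis=0)  # Eqn. (14)
    return c_new, lam_new

def regularized_emission(counts, beta, D_k, c, lam, nu):
    """Maximum-approximation update of one state's discrete emission
    probabilities (Eqns. 9-10): blend the ML estimate (normalized soft
    counts) with the prototype of the winning cluster i*.
    counts: (J,) soft counts, beta: (I, J) prototypes, D_k: (I,) distances."""
    i_star = int(np.argmax(c * lam * np.exp(-lam * D_k)))  # Eqn. (9)
    w = nu * lam[i_star]
    return (counts + w * beta[i_star]) / (counts.sum() + w)
```

With nu = 0 the update reduces to the pure ML estimate; as nu grows, every state in a cluster is pulled onto the same prototype, i.e. soft state-tying approaches hard state-tying.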
Traditional methods to overcome the effects of overfitting, like interpolating between triphones and monophones (Bahl, 1983), data driven state-clustering and decision tree clustering, yield error rates of 6.5%, 8.3% and 6.4%, respectively. It must be noted that, in contrast to the usual training procedure in (Rigoll, 1996), no further smoothing methods are applied to the HMM emission probabilities here. In a first series of experiments the untied triphone system is regularized by a quite simple mixture of I = 394 density components, i.e. the number of clusters in the penalty term is identical to the number of states in the monophone system. In this case the prototype probabilities are initialized by the emission probabilities of the monophone system; the mixture weights and the deviation parameters in the regularizer are initially set to be uniform. In order to test the influence of the tradeoff parameter \nu, it is set to 50, 10 and 2, respectively. The corresponding word error rates are 8.4%, 6.9% and 6.3%, respectively. For large \nu, regularization degrades to a tying of triphone states to monophone states and the error rate tends towards the monophone system performance. For smaller \nu there is a good tradeoff between data fitting and HMM smoothness, yielding improved system performance. The initial prototype probability settings provided by the monophone system do not seem to be changed much by regularizer parameter estimation, since the system performance only changes slightly when the regularizer's parameter reestimation is not incorporated. In preliminary experiments the regularization method is also used for speaker adaptation. A speaker-independent system trained on the Wall Street Journal (WSJ) database yields an error rate of 32.4% on the Nov. '93 S3 test set with 10 different non-native speakers. The speaker-independent HMM emission probabilities are used to initialize the prototype probabilities of the regularizer.
Then, speaker-dependent systems are built up for each speaker using only 40 fast enrollment sentences for training along with regularization (\nu is set to 10). Now, the error rate drops to 25.7%, which is better than the speaker adaptation method described in (Rottland, 1998) that yields 27.3% by a linear feature space transformation. In combination both methods achieve 23.0% word error. 6 Summary and Discussion A method to avoid parameter overfitting in HMM systems by application of a regularization term that favors smooth and simple models has been presented here. The complexity measure applied to the HMMs is based on a finite mixture of negative exponential distributions that generates the state-dependent emission probabilities. This kind of regularization term can be interpreted as soft state-tying, since it forces the HMM emission probabilities to form a finite set of clusters. The effect of regularization has been demonstrated on the RM task by improving overfitted triphone models. On a WSJ non-native speaker adaptation task with limited training data, regularization outperforms feature space transformations. Eqn. (4) may also be interpreted from a perspective of Bayesian inference: the term \nu \cdot \Omega plays the role of setting a prior distribution on the HMM parameters to be estimated. Hence, the use of a mixture model for \Omega is equivalent to using a special kind of prior in the framework of MAP estimation for HMMs (Gauvain, 1994). References L.R. Bahl, F. Jelinek, R.L. Mercer, 'A Maximum Likelihood Approach to Continuous Speech Recognition', IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 5, No. 2, Mar. 1983, pp. 179-190. L.R. Bahl, P.V. de Souza, P.S. Gopalakrishnan, D. Nahamoo, M.A. Picheny, (1991) Context dependent modeling of phones in continuous speech using decision trees. Proc. DARPA speech and natural language processing workshop, 264-270. E.B. Baum, D.
Haussler, (1989) What size net gives valid generalization? Neural Computation, 1:151-160. Y. le Cun, J. Denker, S. Solla, R.E. Howard, L.D. Jackel, (1990) Optimal brain damage. Advances in Neural Information Processing Systems 2, San Mateo, CA, Morgan Kaufmann. J.L. Gauvain, C.-H. Lee, (1994) Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Transactions Speech and Audio Proc., Vol. 2, 2:291-298. K.J. Lang, A.H. Waibel, G.E. Hinton, (1990) A time-delay neural network architecture for isolated word recognition. Neural Networks, 3:23-43. Ch. Neukirchen, D. Willett, S. Eickeler, S. Müller, (1998) Exploiting acoustic feature correlations by joint neural vector quantizer design in a discrete HMM system. Proc. ICASSP'98, 5-8. S.J. Nowlan, G.E. Hinton, (1992) Simplifying neural networks by soft weight-sharing. Neural Computation, 4:473-493. D.C. Plaut, S.J. Nowlan, G.E. Hinton, (1986) Experiments on learning by backpropagation. Technical report CMU-CS-86-126, Carnegie-Mellon University, Pittsburgh, PA. G. Rigoll, Ch. Neukirchen, J. Rottland, (1996) A new hybrid system based on MMI-neural networks for the RM speech recognition task. Proc. ICASSP'96, 865-868. G. Rigoll, Ch. Neukirchen, (1997) A new approach to hybrid HMM/ANN speech recognition using mutual information neural networks. Advances in Neural Information Processing Systems 9, Cambridge, MA, MIT Press, 772-778. J. Rottland, Ch. Neukirchen, G. Rigoll, (1998) Speaker adaptation for hybrid MMI-connectionist speech recognition systems. Proc. ICASSP'98, 465-468. S.J. Young, (1992) The general use of tying in phoneme-based HMM speech recognizers. Proc. ICASSP'92, 569-572. S.J. Young, P.C. Woodland, (1993) The use of state tying in continuous speech recognition. Proc. Eurospeech'93, 2203-2206.
|
1998
|
53
|
1,552
|
Computation of Smooth Optical Flow in a Feedback Connected Analog Network Alan Stocker* Institute of Neuroinformatics, University and ETH Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland Rodney Douglas Institute of Neuroinformatics, University and ETH Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland Abstract In 1986, Tanner and Mead [1] implemented an interesting constraint satisfaction circuit for global motion sensing in aVLSI. We report here a new and improved aVLSI implementation that provides smooth optical flow as well as global motion in a two-dimensional visual field. The computation of optical flow is an ill-posed problem, which expresses itself as the aperture problem. However, the optical flow can be estimated by the use of regularization methods, in which additional constraints are introduced in terms of a global energy functional that must be minimized. We show how the algorithmic constraints of Horn and Schunck [2] on computing smooth optical flow can be mapped onto the physical constraints of an equivalent electronic network. 1 Motivation The perception of apparent motion is crucial for navigation. Knowledge of local motion of the environment relative to the observer simplifies the calculation of important tasks such as time-to-contact or focus-of-expansion. There are several methods to compute optical flow. They have the common problem that their computational load is large. This is a severe disadvantage for autonomous agents, whose computational power is restricted by energy, size and weight. Here we show how the global regularization approach, which is necessary to solve for the ill-posed nature of computing optical flow, can be formulated as a local feedback constraint and implemented as a physical analog device that is computationally efficient.
* correspondence to: alan@ini.phys.ethz.ch 2 Smooth Optical Flow Horn and Schunck [2] defined optical flow in relation to the spatial and temporal changes in image brightness. Their model assumes that the total image brightness E(x, y, t) does not change over time:

    \frac{d}{dt} E(x, y, t) = 0   (1)

Expanding equation (1) according to the chain rule of differentiation leads to

    F \equiv \frac{\partial E}{\partial x} u + \frac{\partial E}{\partial y} v + \frac{\partial E}{\partial t} = 0,   (2)

where u = dx/dt and v = dy/dt represent the two components of the local optical flow vector. Since there is one equation for two unknowns at each spatial location, the problem is ill-posed, and there are an infinite number of possible solutions lying on the constraint line for every location (x, y). However, by introducing an additional constraint the problem can be regularized and a unique solution can be found. For example, Horn and Schunck require the optical flow field to be smooth. As a measure of smoothness they choose the squares of the spatial derivatives of the flow vectors,

    S^2 = \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2.   (3)

One can also view this constraint as introducing a priori knowledge: the closer two points are in the image space, the more likely they belong to the projection of the same object. Under the assumption of rigid objects undergoing translational motion, this constraint implies that the points have the same, or at least very similar, motion vectors. This assumption is obviously not valid at boundaries of moving objects, and so this algorithm fails to detect motion discontinuities [3]. The computation of smooth optical flow can now be formulated as the minimization problem of a global energy functional,

    \int\!\!\int L \; dx \, dy \rightarrow \min   (4)

with the integrand L = F^2 + \lambda S^2, where F and S^2 are as in equations (2) and (3) respectively.
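The image derivatives appearing in Eqn. (2) can be estimated directly from two consecutive frames. A minimal sketch using simple nearest-neighbor differences, as the chip's input layer does later in the paper (function and variable names are illustrative):

```python
import numpy as np

def brightness_gradients(frame0, frame1):
    """First-order estimates of E_x, E_y, E_t (Eqn. 2) from two
    consecutive grayscale frames, via nearest-neighbor differences.
    The last column/row of Ex/Ey is left at zero (no right/lower
    neighbor)."""
    Ex = np.zeros_like(frame0)
    Ey = np.zeros_like(frame0)
    Ex[:, :-1] = frame0[:, 1:] - frame0[:, :-1]   # horizontal difference
    Ey[:-1, :] = frame0[1:, :] - frame0[:-1, :]   # vertical difference
    Et = frame1 - frame0                          # temporal difference
    return Ex, Ey, Et
```

For a linear brightness ramp translating by one pixel per frame, these estimates satisfy the brightness constraint Ex*u + Ey*v + Et = 0 exactly with (u, v) equal to the true shift.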
Thus, we exactly apply the approach of standard regularization theory [4]:

    A x = y                                  (y: data; inverse problem)
    x = A^{-1} y                             (ill-posed)
    \| A x - y \| + \lambda \| P x \| = \min   (regularization)

The regularization parameter \lambda controls the degree of smoothing of the solution and its closeness to the data. The norm \|\cdot\| is quadratic. A difference in our case is that A is not constant but depends on the data. However, if we consider motion on a discrete time-axis and look at snapshots rather than continuously changing images, A is quasi-stationary.¹ The energy functional (4) is convex and so a simple numerical technique like gradient descent would be able to find the global minimum. To compute optical flow while preserving motion discontinuities one can modify the energy functional to include a binary line process that prevents smoothing over discontinuities [4]. However, such a functional will not be convex. Gradient descent methods would probably fail to find the global minimum amongst all local minima and other methods have to be applied. ¹ In the aVLSI implementation this requires a much shorter settling time constant for the network than the brightness changes in the image. 3 A Physical Analog Model 3.1 Continuous space Standard regularization problems can be mapped onto electronic networks consisting of conductances and capacitors [5]. Hutchinson et al. [6] showed how resistive networks can be used to compute optical flow and Poggio et al. [7] introduced electronic network solutions for second-order-derivative optic flow computation. However, these proposed network architectures all require complicated and sometimes negative conductances, although Harris et al. [8] independently outlined an approach similar to the one proposed in this paper. Furthermore, such networks were not implemented practically, whereas our implementation with constant nearest-neighbor conductances is intuitive and straightforward. Consider equation (4): L = L(u, v, \nabla u, \nabla v, x, y).
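The standard regularization scheme above, with quadratic norms and P = I, has a closed-form solution via the normal equations. A minimal sketch (the function name is hypothetical, not from the paper):

```python
import numpy as np

def tikhonov_solve(A, y, lam, P=None):
    """Minimize ||Ax - y||^2 + lam * ||Px||^2 (standard regularization
    with quadratic norms), via the normal equations:
    (A^T A + lam * P^T P) x = A^T y.  P defaults to the identity."""
    P = np.eye(A.shape[1]) if P is None else P
    return np.linalg.solve(A.T @ A + lam * P.T @ P, A.T @ y)
```

Even when A has no unique inverse (as in the aperture problem, where one brightness constraint relates two unknowns), any lam > 0 makes the system well-posed and yields a unique, regularized solution.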
The Lagrange function L is sufficiently regular (L \in C^2), and thus it follows from the calculus of variations that the solution of equation (4) also satisfies the linear Euler-Lagrange equations

    \lambda \nabla^2 u - E_x (E_x u + E_y v + E_t) = 0
    \lambda \nabla^2 v - E_y (E_x u + E_y v + E_t) = 0.   (5)

The Euler-Lagrange equations are only necessary conditions for equation (4). The sufficient condition for solutions of equations (5) to be a weak minimum is the strong Legendre condition, that is L_{\nabla u \nabla u} > 0 and L_{\nabla v \nabla v} > 0, which is easily shown to be true. 3.2 Discrete Space - Mapping to Resistive Network By using a discrete five-point approximation of the Laplacian \nabla^2 on a regular grid, equations (5) can be rewritten as

    \lambda (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4 u_{i,j}) - E_{x_{i,j}} (E_{x_{i,j}} u_{i,j} + E_{y_{i,j}} v_{i,j} + E_{t_{i,j}}) = 0   (6)
    \lambda (v_{i+1,j} + v_{i-1,j} + v_{i,j+1} + v_{i,j-1} - 4 v_{i,j}) - E_{y_{i,j}} (E_{x_{i,j}} u_{i,j} + E_{y_{i,j}} v_{i,j} + E_{t_{i,j}}) = 0

where i and j are the indices of the sampling nodes. Consider a single node of the resistive network shown in Figure 1: Figure 1: Single node of a resistive network. From Kirchhoff's law it follows that

    C \frac{dV_{i,j}}{dt} = G (V_{i+1,j} + V_{i-1,j} + V_{i,j+1} + V_{i,j-1} - 4 V_{i,j}) + I_{in_{i,j}}   (7)
where V_{i,j} represents the voltage and I_{in_{i,j}} the input current. G is the conductance between two neighboring nodes and C the node capacitance. In steady state, equation (7) becomes

    G (V_{i+1,j} + V_{i-1,j} + V_{i,j+1} + V_{i,j-1} - 4 V_{i,j}) + I_{in_{i,j}} = 0.   (8)

The analogy with equations (6) is obvious:

    G \leftrightarrow \lambda
    I_{u,in_{i,j}} \leftrightarrow -E_{x_{i,j}} (E_{x_{i,j}} u_{i,j} + E_{y_{i,j}} v_{i,j} + E_{t_{i,j}})
    I_{v,in_{i,j}} \leftrightarrow -E_{y_{i,j}} (E_{x_{i,j}} u_{i,j} + E_{y_{i,j}} v_{i,j} + E_{t_{i,j}})   (9)

To create the full system we use two parallel resistive networks in which the node voltages u_{i,j} and v_{i,j} represent the two components of the optical flow vector u and v. The input currents I_{u,in_{i,j}} and I_{v,in_{i,j}} are computed by a negative recurrent feedback loop modulated by the input data, which are the spatial and temporal intensity gradients. Notice that the input currents are proportional to the deviation from the local brightness constraint: the less the local optical flow solution fits the data, the higher the current I_{in_{i,j}} will be to correct the solution, and vice versa. Stability and convergence of the network are guaranteed by Maxwell's minimum power principle [4, 9]. 4 The Smooth Optical Flow Chip 4.1 Implementation Figure 2: A single motion cell within the three layer network. For simplicity only one resistive network is shown. The circuitry consists of three functional layers (Figure 2). The input layer includes an array of adaptive photoreceptors [10] and provides the derivatives of the image brightness to the second layer. The spatial gradients are the first-order linear approximation obtained by subtracting the two neighboring photoreceptor outputs. The second layer computes the input current to the third layer according to equations (9). Finally these currents are fed into the two resistive networks that report the optical flow components. The schematics of the core of a single motion cell are drawn in Figure 3. The photoreceptor and the temporal differentiator are not shown, as well as the other half of the circuitry that computes the y-component of the flow vector. A few remarks are appropriate here: First, the two components of the optical flow vector have to be able to take on positive and negative values with respect to some reference potential. Therefore, a symmetrical circuit scheme is applied where the positive and negative (reference voltage) values are carried on separate signal lines. Thus, the actual value is encoded as the difference of the two potentials.
Figure 3: Cell core schematics; only the circuitry related to the computation of the x-component of the flow vector is shown. Second, the limited linear range of the Gilbert multipliers leads to a narrow span of flow velocities that can be computed reliably. However, the tuning can be such that the operational range is either at high or very low velocities. Newer implementations are using modified multipliers with a larger linear range. Third, consider a single motion cell (Figure 2). In principle, this cell would be able to satisfy the local constraint perfectly. In practice (see Figure 3), the finite output impedance of the p-type Gilbert multiplier slightly degrades this ideal solution by imposing an effective conductance G_{load}. Thus, a constant voltage on the capacitor representing a non-zero motion signal requires a net output current of the multiplier to maintain it. This requirement has two interesting consequences: i) The reported optical flow is dependent on the spatial gradients (contrast). A single uncoupled cell according to Figure 2 has a steady state solution with

    u_{i,j} \approx \frac{-E_{t_{i,j}} E_{x_{i,j}}}{G_{load} + E_{x_{i,j}}^2 + E_{y_{i,j}}^2}  and  v_{i,j} \approx \frac{-E_{t_{i,j}} E_{y_{i,j}}}{G_{load} + E_{x_{i,j}}^2 + E_{y_{i,j}}^2}

respectively. For the same object speed, the chip reports higher velocity signals for higher spatial gradients. Preferably, G_{load} should be as low as possible to minimize its influence on the solution. ii) On the other hand, the locally ill-posed problem is now well-posed because G_{load} imposes a second constraint. Thus, the chip behaves sensibly in the case of low contrast input (small gradients), reporting zero motion where otherwise unreliable high values would occur. This is convenient because the signal-to-noise ratio at low contrast is very poor. Furthermore, a single cell is forced to report the velocity on the constraint line with smallest absolute value, which is normal to the spatial gradient.
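The behavior of the coupled network (Eqns. 6-9) and of a single loaded cell can be simulated numerically. A sketch only, not a circuit model; function names, the Jacobi-style relaxation, and the zero-flow (grounded) border condition are assumptions for illustration:

```python
import numpy as np

def neighbor_sum(a):
    """Sum of the 4 nearest neighbors, with zero (grounded) borders."""
    s = np.zeros_like(a)
    s[1:, :] += a[:-1, :]; s[:-1, :] += a[1:, :]
    s[:, 1:] += a[:, :-1]; s[:, :-1] += a[:, 1:]
    return s

def smooth_flow(Ex, Ey, Et, lam=1.0, n_iter=500):
    """Relaxes the discrete Euler-Lagrange equations (6) iteratively,
    analogous to letting the two resistive networks (Eqns. 7-9) settle.
    Each step solves Eqn. (6) for u_ij, v_ij with neighbors held fixed."""
    u = np.zeros_like(Ex); v = np.zeros_like(Ex)
    for _ in range(n_iter):
        su, sv = neighbor_sum(u), neighbor_sum(v)
        u = (lam * su - Ex * (Ey * v + Et)) / (4 * lam + Ex**2)
        v = (lam * sv - Ey * (Ex * u + Et)) / (4 * lam + Ey**2)
    return u, v

def single_cell_flow(Ex, Ey, Et, g_load):
    """Steady state of one uncoupled cell: the load conductance G_load
    makes the local problem well-posed, so low-contrast (small-gradient)
    inputs report near-zero motion."""
    denom = g_load + Ex**2 + Ey**2
    return -Et * Ex / denom, -Et * Ey / denom
```

Varying lam in the sketch plays the role of the coupling conductance G on the chip: small lam gives near-normal flow per cell, large lam pools the cells toward a single global motion estimate.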
That means that the chip reports normal flow when there is no neighbor connection. Since there is a trade-off between the robustness of the optical flow computation and a low conductance G_{load}, the follower-connected transconductance amplifier in our implementation allows us to control G_{load} above its small intrinsic value. 4.2 Results The results reported below were obtained from a MOSIS tinychip containing a 7x7 array of motion cells, each 325x325 λ² in size. The chip was fabricated in 1.2 μm technology at AMI. Figure 4: Smooth optical flow response of the chip to a left-upwards moving edge. a: photoreceptor output, the arrow indicates the actual motion direction. b: weak coupling (small conductance G). c: strong coupling. Figure 5: Response of the optical flow chip to a plaid stimulus moving towards the left: a: photoreceptor output; b shows the normal flow computation with disabled coupling between the motion cells in the network, while in c the coupling strength is at maximum. The chip is able to compute smooth optical flow in a qualitative manner. The smoothness can be set by adjusting the coupling conductances (Figure 4). Figure 5b presents the normal flow computation that occurs when the coupling between the motion cells is disabled. The limited resolution of this prototype chip together with the small size of the stimulus leads to a noisy response. However it is clear that the chip perceives the two gratings as separate moving objects with motion normal to their edge orientation. When the network
conductance is set very high, the chip performs a collective computation solving the aperture problem under the assumption of single object motion. Figure 5c shows how the chip can compute the correct motion of a plaid pattern. 5 Conclusion We have presented here an aVLSI implementation of a network that computes 2D smooth optical flow. The strength of the resistive coupling can be varied continuously to obtain different degrees of smoothing, from a purely local up to a single global motion signal. The chip ideally computes smooth optical flow in the classical definition of Horn and Schunck. Instead of using negative and complex conductances we implemented a network solution where each motion cell is performing a local constraint satisfaction task in a recurrent negative feedback loop. It is significant that the solution of a global energy minimization task can be achieved within a network of local constraint solving cells that do not have explicit access to the global computational goal. Acknowledgments This article is dedicated to Misha Mahowald. We would like to thank Eric Vittoz, Jörg Kramer, Giacomo Indiveri and Tobi Delbrück for fruitful discussions. We thank the Swiss National Foundation for supporting this work and MOSIS for chip fabrication. References [1] J. Tanner and C.A. Mead. An integrated analog optical motion sensor. In S.-Y. Kung, R. Owen, and G. Nash, editors, VLSI Signal Processing, 2, page 59 ff. IEEE Press, 1986. [2] B.K. Horn and B.G. Schunck. Determining optical flow. Artificial Intelligence, 17:185-203, 1981. [3] A. Yuille. Energy functions for early vision and analog networks. Biological Cybernetics, 61:115-123, 1989. [4] T. Poggio, V. Torre, and C. Koch. Computational vision and regularization theory. Nature, 317(26):314-319, September 1985. [5] B.K. Horn. Parallel networks for machine vision. Technical Report 1071, MIT AI Lab, December 1988. [6] J. Hutchinson, C. Koch, J. Luo, and C. Mead.
Computing motion using analog and binary resistive networks. Computer, 21:52-64, March 1988.
[7] T. Poggio, W. Yang, and V. Torre. Optical flow: Computational properties and networks, biological and analog. The Computing Neuron, pages 355-370, 1989.
[8] J.G. Harris, C. Koch, E. Staats, and J. Luo. Analog hardware for detecting discontinuities in early vision. Int. Journal of Computer Vision, 4:211-223, 1990.
[9] J. Wyatt. Little-known properties of resistive grids that are useful in analog vision chip designs. In C. Koch and H. Li, editors, Vision Chips: Implementing Vision Algorithms with Analog VLSI Circuits, pages 72-89. IEEE Computer Society Press, 1995.
[10] S.-C. Liu. Silicon retina with adaptive filtering properties. In Advances in Neural Information Processing Systems 10, November 1997.

Scheduling Straight-Line Code Using Reinforcement Learning and Rollouts

Amy McGovern and Eliot Moss
{amy|moss}@cs.umass.edu
Department of Computer Science
University of Massachusetts, Amherst
Amherst, MA 01003

Abstract

The execution order of a block of computer instructions can make a difference in its running time by a factor of two or more. In order to achieve the best possible speed, compilers use heuristic schedulers appropriate to each specific architecture implementation. However, these heuristic schedulers are time-consuming and expensive to build. In this paper, we present results using both rollouts and reinforcement learning to construct heuristics for scheduling basic blocks. The rollout scheduler outperformed a commercial scheduler, and the reinforcement learning scheduler performed almost as well as the commercial scheduler.

1 Introduction

Although high-level code is generally written as if it were going to be executed sequentially, many modern computers are pipelined and allow for the simultaneous issue of multiple instructions.
In order to take advantage of this feature, a scheduler needs to reorder the instructions in a way that preserves the semantics of the original high-level code while executing it as quickly as possible. An efficient schedule can produce a speedup in execution of a factor of two or more. However, building a scheduler can be an arduous process. Architects developing a new computer must manually develop a specialized instruction scheduler each time a change is made in the proposed system. Building a scheduler automatically can save time and money. It can allow the architects to explore the design space more thoroughly and to use more accurate metrics in evaluating designs. Moss et al. (1997) showed that supervised learning techniques can induce excellent basic block instruction schedulers for the Digital Alpha 21064 processor. Although all of the supervised learning methods performed quite well, they shared several limitations. Supervised learning requires exact input/output pairs. Generating these training pairs requires an optimal scheduler that searches every valid permutation of the instructions within a basic block and saves the optimal permutation (the schedule with the smallest running time). However, this search was too time-consuming to perform on blocks with more than 10 instructions, because optimal instruction scheduling is NP-hard. Using a semi-supervised method such as reinforcement learning or rollouts does not require generating training pairs, so the method can be applied to larger basic blocks and can be trained without knowing optimal schedules.

2 Domain Overview

Moss et al. (1997) gave a full description of the domain. This study presents an overview, necessary details, our experimental method, and detailed results for both rollouts and reinforcement learning. We focused on scheduling basic blocks of instructions on the 21064 version (DEC, 1992) of the Digital Alpha processor (Sites, 1992).
A basic block is a set of instructions with a single entry point and a single exit point. Our schedulers could reorder instructions within a basic block but could not rewrite, add, or remove any instructions. The goal of each scheduler is to find a least-cost valid ordering of the instructions. The cost is defined as the simulated execution time of the block. A valid ordering is one that preserves the semantically necessary ordering constraints of the original code. We ensure validity by creating a dependency graph that directly represents those necessary ordering relationships. This graph is a directed acyclic graph (DAG). The Alpha 21064 is a dual-issue machine with two different execution pipelines. Dual issue occurs only if a number of detailed conditions hold, e.g., the two instructions match the two pipelines. An instruction can take anywhere from one to many tens of cycles to execute. Researchers at Digital have a publicly available 21064 simulator that also includes a heuristic scheduler for basic blocks. We call that scheduler DEC. The simulator gives the running time for a given scheduled block assuming all memory references hit the cache and all resources are available at the beginning of the block. All of our schedulers used a greedy algorithm to schedule the instructions, i.e., they built schedules sequentially from beginning to end with no backtracking. In order to test each scheduling algorithm, we used the 18 SPEC95 benchmark programs. Ten of these programs are written in FORTRAN and contain mostly floating point calculations. Eight of the programs are written in C and focus more on integer, string, and pointer calculations. Each program was compiled using the commercial Digital compiler at the highest level of optimization. We call the schedules output by the compiler ORIG. This collection has 447,127 basic blocks, containing 2,205,466 instructions.
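The dependency-DAG machinery described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; the instruction names and the dict-of-sets DAG encoding are our own assumptions.

```python
def ready_instructions(dag, scheduled):
    """Candidates for a greedy scheduler: instructions not yet scheduled
    whose predecessors in the dependency DAG are all scheduled.
    dag maps each instruction to the set of instructions it depends on."""
    done = set(scheduled)
    return [i for i in dag if i not in done and dag[i] <= done]

def is_valid_schedule(dag, order):
    """An ordering is valid if every instruction follows its dependencies."""
    done = set()
    for instr in order:
        if not dag[instr] <= done:
            return False
        done.add(instr)
    return True

# Tiny hypothetical block: c needs a and b; d needs c.
dag = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}}
assert ready_instructions(dag, []) == ["a", "b"]
assert is_valid_schedule(dag, ["a", "b", "c", "d"])
assert not is_valid_schedule(dag, ["c", "a", "b", "d"])
```

A greedy scheduler in this setting repeatedly picks one instruction from `ready_instructions` until the block is complete, which guarantees validity by construction.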
3 Rollouts

Rollouts are a form of Monte Carlo search, first introduced by Tesauro and Galperin (1996) for use in backgammon. Bertsekas et al. (1997a,b) have explored rollouts in other domains and proven important theoretical results. In the instruction scheduling domain, rollouts work as follows: suppose the scheduler comes to a point where it has a partial schedule and a set of (more than one) candidate instructions to add to the schedule. For each candidate, the scheduler appends it to the partial schedule and then follows a fixed policy π to schedule the remaining instructions. When the schedule is complete, the scheduler evaluates the running time and returns. When π is stochastic, this rollout can be repeated many times for each instruction to achieve a measure of the average expected outcome. After rolling out each candidate, the scheduler picks the one with the best average running time. Our first set of rollout experiments compared three different rollout policies π. The theory developed by Bertsekas et al. (1997a,b) proved that if we used the DEC scheduler as π, we would perform no worse than DEC. An architect proposing a new machine might not have a good heuristic available to use as π, so we also considered policies more likely to be available. The first was the random policy, RANDOM-π, which is a choice that is clearly always available. Under this policy, the rollout makes all choices randomly. We also used the ordering produced by the optimizing compiler ORIG, denoted ORIG-π. The last rollout policy tested was the DEC scheduler itself, denoted DEC-π. The scheduler performed only one rollout per candidate instruction when using ORIG-π and DEC-π because they are deterministic. We used 25 rollouts for RANDOM-π. After performing a number of rollouts for each candidate instruction, we chose the instruction with the best average running time.
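The rollout procedure just described can be sketched as follows. This is a minimal illustration of rollouts with the RANDOM-π policy, not the authors' code; `candidates_fn` and `cost_fn` are hypothetical stand-ins for the DAG candidate enumeration and the Digital 21064 simulator.

```python
import random

def rollout_cost(partial, candidates_fn, cost_fn, rng):
    """Complete a partial schedule with the RANDOM-pi policy and
    return its cost. candidates_fn yields the ready instructions;
    cost_fn stands in for the simulator's running time."""
    sched = list(partial)
    while True:
        cands = candidates_fn(sched)
        if not cands:
            return cost_fn(sched)
        sched.append(rng.choice(cands))

def choose_by_rollouts(partial, candidates_fn, cost_fn, n_rollouts=25, seed=0):
    """For each candidate, average the cost of n_rollouts random
    completions and pick the candidate with the best (lowest) average."""
    rng = random.Random(seed)
    best, best_avg = None, float("inf")
    for cand in candidates_fn(partial):
        total = sum(rollout_cost(partial + [cand], candidates_fn, cost_fn, rng)
                    for _ in range(n_rollouts))
        avg = total / n_rollouts
        if avg < best_avg:
            best, best_avg = cand, avg
    return best

# Toy block: three independent instructions; the (hypothetical) cost
# function prefers schedules that start with "b".
insts = ["a", "b", "c"]
candidates = lambda sched: [i for i in insts if i not in sched]
cost = lambda sched: 0 if sched[0] == "b" else 1
assert choose_by_rollouts([], candidates, cost, n_rollouts=5) == "b"
```

With a deterministic policy such as DEC-π or ORIG-π, one rollout per candidate suffices, as the paper notes.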
As a baseline scheduler, we also scheduled each block randomly. Because the running time increases quadratically with the number of rollouts, we focused our rollout experiments on one program in the SPEC95 suite: applu. Table 1 gives the performance of each rollout scheduler as compared to the DEC scheduler on all 33,007 basic blocks of size 200 or less from applu. To assess the performance of each rollout policy π, we used the ratio of the weighted execution time of the rollout scheduler to the weighted execution time of the DEC scheduler. More concisely, the performance measure was:

ratio = [Σ_all blocks (rollout scheduler execution time × number of times block is executed)] / [Σ_all blocks (DEC scheduler execution time × number of times block is executed)]

This means that a faster running time on the part of our scheduler would give a smaller ratio.

Scheduler    Ratio
Random       1.3150
RANDOM-π     1.0560
ORIG-π       0.9895
DEC-π        0.9875

Table 1: Ratios of the weighted execution time of the rollout scheduler to the DEC scheduler. A ratio of less than one means that the rollouts outperformed the DEC scheduler.

All of the rollout schedulers far outperformed the random scheduler, which was 31% slower than DEC. By only adding rollouts, RANDOM-π was able to achieve a running time only 5% slower than DEC. Only the schedulers using ORIG-π and DEC-π as a model outperformed the DEC scheduler. Using ORIG-π and DEC-π for rollouts produced a schedule that was 1.1% faster than the DEC scheduler on average. Although this improvement may seem small, the DEC scheduler is known to make optimal choices 99.13% of the time for blocks of size 10 or less (Stefanovic, 1997). Rollouts were tested only on applu rather than on the entire SPEC95 benchmark suite due to the lengthy computation time. Rollouts are costly because performing m rollouts on n instructions is O(n²m), whereas a greedy scheduling algorithm is O(n).
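The performance measure can be computed directly from its definition. A small sketch with hypothetical per-block numbers; the geometric-mean helper reflects how the paper averages ratios across runs.

```python
def weighted_ratio(blocks):
    """blocks: (rollout_time, dec_time, times_executed) triples.
    A ratio below 1 means the rollout schedules are faster overall."""
    num = sum(r * n for r, _, n in blocks)
    den = sum(d * n for _, d, n in blocks)
    return num / den

def geometric_mean(ratios):
    """The paper reports geometric means of ratios across runs."""
    prod = 1.0
    for r in ratios:
        prod *= r
    return prod ** (1.0 / len(ratios))

# Hypothetical per-block numbers, for illustration only.
blocks = [(10, 10, 100), (8, 9, 50), (12, 11, 10)]
assert round(weighted_ratio(blocks), 4) == round(1520 / 1560, 4)
assert geometric_mean([1.0, 4.0]) == 2.0
```

Weighting by execution count makes frequently executed blocks dominate the metric, matching how scheduling quality affects whole-program running time.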
Again, because of the time required, we only performed five runs of RANDOM-π. Since DEC-π and ORIG-π are deterministic, only one run was necessary. We also ran the random scheduler 5 times. Each number reported above is the geometric mean of the ratios across the five runs. Part of the motivation behind using rollouts in a scheduler is to obtain fast schedules without spending the time to build a precise heuristic. With this in mind, we explored RANDOM-π more closely in a follow-up experiment.

Evaluation of the number of rollouts

This experiment considered how performance varies with the number of rollouts. We tested 1, 5, 10, 25, and 50 rollouts per candidate instruction. We also varied the metric for choosing among candidates. Instead of always choosing the instruction with the best average performance, we also experimented with selecting the instruction with the absolute best running time among its rollouts. We hypothesized that selection of the absolute best path might lead to better performance overall. These experiments were performed on all 33,007 basic blocks of size 200 or less from applu. Figure 1 shows the performance of the rollout scheduler as a function of the number of rollouts. Performance is assessed in the same way as before: ratio of weighted execution times. Thus, a lower number is better.

Figure 1: Performance of rollout scheduler with the random model as a function of the number of rollouts and the choice of evaluation function.

Each data point represents the geometric mean over five different runs. The difference in performance between one rollout and five rollouts using the average choice for each rollout is 1.16 versus 1.10. However, the difference between 25 rollouts and 50 rollouts is only 1.06 versus 1.05.
This indicates the tradeoff between schedule quality and the number of rollouts. Also, choosing the instruction with the best single rollout schedule did not yield better performance for any number of rollouts. We hypothesize that this is due to the stochastic nature of the rollouts. Once the scheduler chooses an instruction, it repeats the rollout process again. By choosing the instruction with the absolute best rollout, there is no guarantee that the scheduler will find that permutation of instructions again on the next rollout. When it chooses the instruction with the best average rollout, the scheduler has a better chance of finding a good schedule on the next rollout. Although the rollout schedulers performed quite well, the extremely long scheduling time is a major drawback. Using 25 rollouts per block took over 6 hours to schedule one program. Unless this aspect can be improved, rollouts cannot be used for all blocks in a commercial scheduler or in evaluating more than a few proposed machine architectures. However, because rollout scheduling performance is high, rollouts could be used to optimize the schedules on important (long running times or frequently executed) blocks within a program.

4 Reinforcement Learning Results

4.1 Overview

Reinforcement learning (RL) is a collection of methods for discovering near-optimal solutions to stochastic sequential decision problems (Sutton & Barto, 1998). A reinforcement learning system does not require a teacher to specify correct actions. Instead, the learning agent tries different actions and observes their consequences to determine which actions are best. More specifically, in the reinforcement learning framework, a learning agent interacts with an environment over a series of discrete time steps t = 0, 1, 2, 3, ....
At each time t, the agent is in some state, denoted s_t, and chooses an action, denoted a_t, which causes the environment to transition to state s_{t+1} and to emit a reward, denoted r_{t+1}. The next state and reward depend only on the preceding state and action, but they may depend on it in a stochastic fashion. The objective is to learn a (possibly stochastic) mapping from states to actions called a policy, which maximizes the cumulative discounted reward received by the agent. More precisely, the objective is to choose action a_t so as to maximize the expected return, E{ Σ_{i=0}^∞ γ^i r_{t+i+1} }, where γ ∈ [0, 1) is a discount-rate parameter.

A common solution strategy is to approximate the optimal value function V*, which maps states to the maximal expected return that can be obtained starting in each state and taking the best action. In this paper we use temporal difference (TD) learning (Sutton, 1988). In this method, the approximation to V* is represented by a table with an entry V(s) for every state. After each transition from state s_t to state s_{t+1}, under an action with reward r_{t+1}, the estimated value function V(s_t) is updated by:

V(s_t) ← V(s_t) + α [r_{t+1} + γ V(s_{t+1}) − V(s_t)]

where α is a positive step-size parameter.

4.2 Experimental Results

Scheeff et al. (1997) have previously experimented with reinforcement learning in this domain. However, the results were not as good as hoped. Finding the right reward structure was the difficult part of using RL in this domain. Rewarding based on the number of cycles to execute the block does not work well, as it punishes the learner on long blocks. To normalize for this effect, Scheeff et al. (1997) rewarded based on the cycles per instruction (CPI). However, learning with this reward also did not work well, as some blocks have more unavoidable idle time than others. A reward based solely on CPI does not account for this aspect.
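The TD(0) backup above translates directly into a one-line table update. A minimal sketch, assuming a dictionary-backed value table in which unvisited states are treated as zero (that initialization is our assumption, not the paper's).

```python
def td0_update(V, s, r_next, s_next, alpha=0.05, gamma=1.0):
    """One tabular TD(0) backup:
    V(s) <- V(s) + alpha * (r_{t+1} + gamma * V(s_{t+1}) - V(s)).
    Unvisited states default to value 0 (our assumption)."""
    v_s = V.get(s, 0.0)
    v_next = V.get(s_next, 0.0)
    V[s] = v_s + alpha * (r_next + gamma * v_next - v_s)
    return V[s]

# One backup from a fresh table, with a large step size for visibility.
V = {}
td0_update(V, "s0", 1.0, "s1", alpha=0.5)
assert V["s0"] == 0.5
```

The paper's experiments use α = 0.05; the larger step size here is only to make the single-update arithmetic easy to follow.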
To account for this variation across blocks, we gave the RL scheduler a final reward of:

r = −(time to execute block − max(weighted critical path, (# of instructions)/2))

The scheduler received a reward of zero unless the schedule was complete. As the 21064 processor can only issue two instructions at a time, the number of instructions divided by 2 gives an absolute lower bound on the running time. The weighted critical path (wcp) helps to solve the problem of blocks of the same size being easier or harder to schedule than others. When a block is harder to execute than another block of the same size, the wcp tends to be higher, thus causing the learner to get a different reward. The wcp is correlated with the predicted number of execution cycles for the DEC scheduler (r = 0.9), and the number of instructions divided by 2 is also correlated (r = 0.78) with the DEC scheduler. Future experiments will use a weighted combination of these two features to compute the reward. As with the supervised learning results presented in Moss et al. (1997), the RL system learned a preferential value function between candidate instructions. That is, instead of learning the value of instruction A or instruction B, RL learned the value of choosing instruction A over instruction B. The state space consisted of a tuple of features from a current partial schedule and the two candidate instructions. These features were derived from knowledge of the DEC simulator. The features and our intuition for their importance are summarized in Table 2. Previous experiments (Moss et al., 1997) showed that the actual values of wcp and e did not matter as much as their relative values. Thus, for those features we used the signum (σ) of the difference of their values for the two candidate instructions. Signum returns -1, 0, or 1 depending on whether the value is less than, equal to, or greater than zero.
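Under our reading of the (partly garbled) formula above, the final reward can be computed as follows. The sign convention, which makes zero the best achievable reward, is our assumption based on the surrounding discussion of lower bounds.

```python
def final_reward(exec_time, wcp, n_instructions):
    """Final reward for a completed schedule, as reconstructed above.
    Zero is the best achievable value; schedules slower than the lower
    bound max(wcp, n/2) get a negative reward. The overall sign
    convention is our reading of the formula, not a certainty."""
    lower_bound = max(wcp, n_instructions / 2)
    return -(exec_time - lower_bound)

# Hypothetical block: 12 instructions, wcp of 7 cycles, ran in 10 cycles.
assert final_reward(10, 7, 12) == -3
```

Normalizing against a per-block lower bound, rather than raw cycles or CPI, is what lets the learner compare rewards across blocks of differing difficulty.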
Using this representation, the RL state space consisted of the following tuple, given candidate instructions x and y and partial schedule p:

state_vec(p, x, y) = (odd(p), ic(x), ic(y), d(x), d(y), σ(wcp(x) − wcp(y)), σ(e(x) − e(y)))

This yields 28,800 unique states. Figure 2 shows an example partial schedule, a set of candidate instructions, and the resulting states for the RL system. The RL scheduler does not learn over states where there are no choices to be made. The last choice point in a trajectory is given the final reward even if further instructions are scheduled from that point. The values of multiple states are updated at each time step because the instruction that is chosen affects the preference function of multiple states. For example, using the partial schedule and candidate instructions shown in Figure 2, scheduling instruction A, the RL system would back up values for AB, AC, and the opposite values for BA and CA.

Table 2: Features for instructions and partial schedule states for the RL system.

Odd Partial (odd): Is the current number of instructions scheduled odd or even? If TRUE, we're interested in scheduling instructions that can dual-issue with the previous instruction.

Instruction Class (ic): The Alpha's instructions can be divided into equivalence classes with respect to timing properties. The instructions in each class can be executed only in certain execution pipelines, etc.

Weighted Critical Path (wcp): The height of the instruction in the DAG (the length of the longest chain of instructions dependent on this one), with edges weighted by expected latency of the result produced by the instruction. Instructions on longer critical paths should be scheduled first, since they affect the lower bound of the schedule cost.

Actual Dual (d): Can the instruction dual-issue with the previous scheduled instruction? If Odd Partial is TRUE, it is important that we find an instruction, if there is one, that can issue in the same cycle with the previous scheduled instruction.

Max Delay (e): The earliest cycle when the instruction can begin to execute, relative to the current cycle; this takes into account any wait for inputs or for functional units to become available. We want to schedule instructions that will have their data and functional unit available earliest.

Figure 2: On the left is a graphical depiction of a partial schedule p and three candidate instructions A, B, C. The table on the right shows how the RL system makes its states from this: AB = state_vec(p,A,B), AC = state_vec(p,A,C), BC = state_vec(p,B,C), BA = state_vec(p,B,A), CA = state_vec(p,C,A), CB = state_vec(p,C,B).

Using this system, we performed leave-one-out cross validation across all blocks of the SPEC95 benchmark suite. Blocks with more than 800 instructions were broken into blocks of 800 or less because of memory limitations on the DEC simulator. This was true for only two applications: applu and fpppp. The RL system was trained online for 19 of the 20 applications using α = 0.05 and an ε-greedy exploration method with ε = 0.05. This was repeated 20 different times, holding one program from SPEC95 out of the training each time. We then evaluated the greedy policy (ε = 0) learned by the RL system on each program that had been held out. All ties were broken randomly. Performance was assessed the same way as before. The results for each benchmark are shown in Table 3. Overall, the RL scheduler performed only 2% slower than DEC. This is a geometric mean over all applications in the suite and on all blocks. Although the RL system did not outperform the DEC scheduler overall, it significantly outperformed DEC on the large blocks (applu-big and fpppp-big).
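The state tuple can be assembled as sketched below. The feature dictionaries and key names are our own stand-ins for the quantities in Table 2; only the tuple layout and the signum comparisons follow the text.

```python
def signum(v):
    """-1, 0, or 1 depending on the sign of v."""
    return (v > 0) - (v < 0)

def state_vec(odd_p, x, y):
    """RL state tuple for candidate instructions x and y, given whether
    the partial schedule length is odd. x and y are dicts holding the
    Table 2 features; the dict encoding and key names are our own."""
    return (odd_p, x["ic"], y["ic"], x["d"], y["d"],
            signum(x["wcp"] - y["wcp"]), signum(x["e"] - y["e"]))

a = {"ic": 1, "d": 0, "wcp": 5, "e": 2}
b = {"ic": 2, "d": 1, "wcp": 3, "e": 2}
assert state_vec(True, a, b) == (True, 1, 2, 0, 1, 1, 0)
```

Because wcp and e enter only through their signum-compressed differences, the state space stays small (28,800 states) regardless of the raw feature ranges.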
applu       1.001    applu-big   0.959    apsi        1.018    cc1         1.022
compress95  0.977    fpppp       1.055    fpppp-big   0.977    go          1.028
hydro2d     1.022    ijpeg       0.975    li          1.012    m88ksim     1.042
mgrid       1.009    perl        1.014    su2cor      1.018    swim        1.040
tomcatv     1.019    turb3d      1.218    vortex      1.032    wave5       1.032

Table 3: Performance of the greedy RL scheduler on each application in SPEC95 over all leave-one-out cross-validation runs as compared to DEC. Applications whose running time was better than DEC are shown in italics.

5 Conclusions

The advantages of the RL scheduler are its performance on the task, its speed, and the fact that it does not rely on any heuristics for training. Each run was much faster than with rollouts, and the performance came close to the performance of the DEC scheduler. In a system where multiple architectures are being tested, RL could provide a good scheduler with minimal setup and training. We have demonstrated two methods of instruction scheduling that do not rely on having heuristics and that perform quite well. Future work could address tying the two methods together while retaining the speed of the RL learner, issues of global instruction scheduling, scheduling loops, and validating the techniques on other architectures.

Acknowledgments

We thank John Cavazos and Darko Stefanovic for setting up the simulator and for prior work in this domain, along with Paul Utgoff, Doina Precup, Carla Brodley, and David Scheeff. We also wish to thank Andrew Barto, Andrew Fagg, and Doina Precup for comments on earlier versions of the paper. This work is supported in part by the National Physical Science Consortium, Lockheed Martin, Advanced Technology Labs, and NSF grant IRI-9503687 to Roderic A. Grupen and Andrew G. Barto. We thank various people of Digital Equipment Corporation, for the DEC scheduler and the ATOM program instrumentation tool (Srivastava & Eustace, 1994), essential to this work.
We also thank Sun Microsystems and Hewlett-Packard for their support.

References

Bertsekas, D. P. (1997). Differential training of rollout policies. In Proc. of the 35th Allerton Conference on Communication, Control, and Computing. Allerton Park, IL.
Bertsekas, D. P., Tsitsiklis, J. N. & Wu, C. (1997). Rollout algorithms for combinatorial optimization. Journal of Heuristics.
DEC (1992). DEC chip 21064-AA Microprocessor Hardware Reference Manual (first edition). Maynard, MA: Digital Equipment Corporation.
Moss, J. E. B., Utgoff, P. E., Cavazos, J., Precup, D., Stefanovic, D., Brodley, C. E. & Scheeff, D. T. (1997). Learning to schedule straight-line code. In Proceedings of Advances in Neural Information Processing Systems 10 (Proceedings of NIPS'97). MIT Press.
Scheeff, D., Brodley, C., Moss, E., Cavazos, J. & Stefanovic, D. (1997). Applying reinforcement learning to instruction scheduling within basic blocks. Technical report, University of Massachusetts, Amherst.
Sites, R. (1992). Alpha Architecture Reference Manual. Maynard, MA: Digital Equipment Corporation.
Srivastava, A. & Eustace, A. (1994). ATOM: A system for building customized program analysis tools. In Proc. ACM SIGPLAN '94 Conf. on Prog. Lang. Design and Impl. (pp. 196-205).
Stefanovic, D. (1997). The character of the instruction scheduling problem. University of Massachusetts, Amherst.
Sutton, R. S. (1988). Learning to predict by the method of temporal differences. Machine Learning, 3, 9-44.
Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
Tesauro, G. & Galperin, G. R. (1996). On-line policy improvement using Monte-Carlo search. In Advances in Neural Information Processing: Proceedings of the Ninth Conference. MIT Press.
Learning to estimate scenes from images

William T. Freeman and Egon C. Pasztor
MERL, Mitsubishi Electric Research Laboratory
201 Broadway; Cambridge, MA 02139
freeman@merl.com, pasztor@merl.com

Abstract

We seek the scene interpretation that best explains image data. For example, we may want to infer the projected velocities (scene) which best explain two consecutive image frames (image). From synthetic data, we model the relationship between image and scene patches, and between a scene patch and neighboring scene patches. Given a new image, we propagate likelihoods in a Markov network (ignoring the effect of loops) to infer the underlying scene. This yields an efficient method to form low-level scene interpretations. We demonstrate the technique for motion analysis and estimating high resolution images from low-resolution ones.

1 Introduction

There has been recent interest in studying the statistical properties of the visual world. Olshausen and Field [23] and Bell and Sejnowski [2] have derived V1-like receptive fields from ensembles of images; Simoncelli and Schwartz [30] account for contrast normalization effects by redundancy reduction. Li and Atick [1] explain retinal color coding by information processing arguments. Various research groups have developed realistic texture synthesis methods by studying the response statistics of V1-like multi-scale, oriented receptive fields [12, 7, 33, 29]. These methods help us understand the early stages of image representation and processing in the brain. Unfortunately, they don't address how a visual system might interpret images, i.e., estimate the underlying scene. In this work, we study the statistical properties of a labelled visual world, images together with scenes, in order to infer scenes from images. The image data might be single or multiple frames; the scene quantities to be estimated could be projected object velocities, surface shapes, reflectance patterns, or colors.
We ask: can a visual system correctly interpret a visual scene if it models (1) the probability that any local scene patch generated the local image, and (2) the probability that any local scene is the neighbor to any other? The first probabilities allow making scene estimates from local image data, and the second allow these local estimates to propagate. This leads to a Bayesian method for low-level vision problems, constrained by Markov assumptions. We describe this method, and show it working for two low-level vision problems.

2 Markov networks for scene estimation

First, we synthetically generate images and their underlying scene representations, using computer graphics. The synthetic world should typify the visual world in which the algorithm will operate. For example, for the motion estimation problem of Sect. 3, our training images were irregularly shaped blobs, which could occlude each other, moving in randomized directions at speeds up to 2 pixels per frame. The contrast values of the blobs and the background were randomized. The image data were the concatenated image intensities from two successive frames of an image sequence. The scene data were the velocities of the visible objects at each pixel in the two frames. Second, we place the image and scene data in a Markov network [24]. We break the images and scenes into localized patches where image patches connect with underlying scene patches; scene patches also connect with neighboring scene patches. The neighbor relationship can be with regard to position, scale, orientation, etc. For the motion problem, we represented both the images and the velocities in 4-level Gaussian pyramids [6], to efficiently communicate across space. Each scene patch then additionally connects with the patches at neighboring resolution levels. Figure 2 shows the multiresolution representation (at one time frame) for images and scenes.¹ Third, we propagate probabilities.
Weiss showed the advantage of belief propagation over regularization methods for several 1-d problems [31]; we apply related methods to our 2-d problems. Let the ith and jth image and scene patches be y_i and x_j, respectively. For the MAP estimate [3] of the scene data,² we want to find argmax_{x1,x2,...,xN} P(x1, x2, ..., xN | y1, y2, ..., yM), where N and M are the number of scene and image patches. Because the joint probability is simpler to compute, we find, equivalently, argmax_{x1,x2,...,xN} P(x1, x2, ..., xN, y1, y2, ..., yM). The conditional independence assumptions of the Markov network let us factorize the desired joint probability into quantities involving only local measurements and calculations [24, 32]. Consider the two-patch system of Fig. 1. We can factorize P(x1, x2, y1, y2) in three steps: (1) P(x1, x2, y1, y2) = P(x2, y1, y2 | x1) P(x1) (by elementary probability); (2) P(x2, y1, y2 | x1) = P(y1 | x1) P(x2, y2 | x1) (by conditional independence); (3) P(x2, y2 | x1) = P(x2 | x1) P(y2 | x2) (by elementary probability and the Markov assumption). To estimate just x1 at node 1, the argmax_{x2} becomes max_{x2}, and then slides over constants, giving terms involving only local computations at each node:

argmax_{x1} max_{x2} P(x1, x2, y1, y2) = argmax_{x1} [P(x1) P(y1|x1) max_{x2} [P(x2|x1) P(y2|x2)]].   (1)

This factorization generalizes to any network structure without loops. We use a different factorization at each scene node: we turn the initial joint probability into a conditional by factoring out that node's prior, P(x_j), then proceeding analogously to the example above. The resulting factorized computations give local propagation rules, similar to those of [24, 32]: Each node, j, receives a message from each neighbor, k, which is an accumulated likelihood function, L_kj = P(y_k ... y_z | x_j), where y_k ... y_z are all image nodes that lie at or beyond scene node k, relative to scene node j. At each iteration, more image nodes y enter that likelihood function. After each iteration, the MAP estimate at node j is argmax_{x_j} P(x_j) P(y_j | x_j) ∏_k L_kj, where k runs over all scene node neighbors of node j. We calculate L_kj from:

L_kj = max_{x_k} P(x_k | x_j) P(y_k | x_k) ∏_{l≠j} L̃_lk,   (2)

where L̃_lk is L_lk from the previous iteration. The initial L̃_lk's are 1.

Figure 1: Markov network nodes used in example.

Using the factorization rules described above, one can verify that the local computations will compute argmax_{x1,x2,...,xN} P(x1, x2, ..., xN | y1, y2, ..., yM), as desired. To learn the network parameters, we measure P(x_j), P(y_j | x_j), and P(x_k | x_j) directly from the synthetic training data. If the network contains loops, the above factorization does not hold. Both learning and inference then require more computationally intensive methods [15]. Alternatively, one can use multi-resolution quad-tree networks [20], for which the factorization rules apply, to propagate information spatially. However, this gives results with artifacts along quad-tree boundaries, statistical boundaries in the model not present in the real problem. We found good results by including the loop-causing connections between adjacent nodes at the same tree level but applying the factorized propagation rules anyway. Others have obtained good results using the same approach for inference [8, 21, 32]; Weiss provides theoretical arguments why this works for certain cases [32].

¹To maintain the desired conditional independence relationships, we appended the image data to the scenes. This provided the scene elements with image contrast information, which they would otherwise lack.
²Related arguments follow for the MMSE or other estimators.

3 Discrete Probability Representation (motion example)

We applied the training method and propagation rules to motion estimation, using a vector code representation [11] for both images and scenes.
We wrote a tree-structured vector quantizer to code 4 by 4 pixel by 2 frame blocks of image data for each pyramid level into one of 300 codes for each level. We also coded scene patches into one of 300 codes.

778 W. T. Freeman and E. C. Pasztor

During training, we presented approximately 200,000 examples of irregularly shaped moving blobs, some overlapping, with contrast against the background randomized to one of 4 values. Using co-occurrence histograms, we measured the statistical relationships that embody our algorithm: P(x), P(y|x), and P(x_n|x), for scene x_n neighboring scene x. Figure 2 shows an input test image, (a) before and (b) after vector quantization. The true underlying scene, the desired output, is shown (c) before and (d) after vector quantization. Figure 3 shows six iterations of the algorithm (Eq. 2) as it converges to a good estimate for the underlying scene velocities. The local probabilities we learned (P(x), P(y|x), and P(x_n|x)) lead to figure/ground segmentation, aperture-problem constraint propagation, and filling-in (see caption).

Figure 2: (a) First of two frames of image data (in gaussian pyramid), and (b) vector quantized. (c) The optical flow scene information, and (d) vector quantized. Large arrow added to show small vectors' orientation.

4 Density Representation (super-resolution example)

For super-resolution, the input "image" is the high-frequency components (sharpest details) of a sub-sampled image. The "scene" to be estimated is the high-frequency components of the full-resolution image, Fig. 4. We improved our method for this second problem. A faithful image representation requires so many vector codes that it becomes infeasible to measure the prior and co-occurrence statistics (note the unfaithful fit of Fig. 2). On the other hand, a discrete representation allows fast propagation. We developed a hybrid method that allows both good fitting and fast propagation.
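The co-occurrence measurement step can be sketched as follows. The code-generating process here is synthetic and purely illustrative, standing in for the vector-quantized (scene, image, neighbor) triples extracted from the rendered training examples:

```python
import numpy as np

rng = np.random.default_rng(2)
C = 6              # toy codebook size (the paper uses 300 codes)
n_samples = 50_000

# Hypothetical training stream of vector-quantized code triples.
x = rng.integers(C, size=n_samples)                 # scene codes
y = (x + rng.integers(0, 2, size=n_samples)) % C    # noisy image codes
xn = (x + rng.integers(-1, 2, size=n_samples)) % C  # correlated neighbor codes

# Co-occurrence counts, normalized into tables P(x), P(y|x), P(xn|x).
eps = 1e-9  # guards against division by zero for unseen codes
p_x = np.bincount(x, minlength=C) / n_samples
joint_yx = np.zeros((C, C)); np.add.at(joint_yx, (x, y), 1)
joint_nx = np.zeros((C, C)); np.add.at(joint_nx, (x, xn), 1)
p_y_given_x = joint_yx / (joint_yx.sum(axis=1, keepdims=True) + eps)
p_xn_given_x = joint_nx / (joint_nx.sum(axis=1, keepdims=True) + eps)

assert abs(p_x.sum() - 1.0) < 1e-9
assert np.allclose(p_y_given_x.sum(axis=1), 1.0, atol=1e-6)
```

These three tables are all the propagation rules of Eq. (2) need, which is what makes the discrete representation fast.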
We describe the image and scene patches as vectors in a continuous space, and first modelled the probability densities, P(x), P(y, x), and P(x_n, x), as gaussian mixtures [4]. (We reduced the dimensionality somewhat by principal components analysis [4].) We then evaluated the prior and conditional distributions of Eq. 2 only at a discrete set of scene values, different for each node. (This sample-based approach relates to [14, 7].) The scenes were a sampling of those scenes which render to the image at that node. This focuses the computation on the locally feasible scene interpretations. P(x_k | x_j) in Eq. 2 becomes the ratio of the gaussian mixtures P(x_k, x_j) and P(x_j), evaluated at the scene samples at nodes k and j, respectively. P(y_k | x_k) is P(y_k, x_k) / P(x_k), evaluated at the scene samples of node k. To select the scene samples, we could condition the mixture P(y, x) on the y observed at each node, and sample x's from the resulting mixture of gaussians. We obtained somewhat better results by using the scenes from the training set whose

Figure 3: The most probable scene code for Fig. 2b at the first 6 iterations of Bayesian belief propagation. (a) Note initial motion estimates occur only at edges. Due to the "aperture problem", initial estimates do not agree. (b) Filling-in of the motion estimate occurs. Cues for figure/ground determination may include edge curvature, and information from lower resolution levels. Both are included implicitly in the learned probabilities.
(c) Figure/ground is still undetermined in this region of low edge curvature. (d) Velocities have filled in, but do not yet all agree. (e) Velocities have filled in, and agree with each other and with the correct velocity direction, shown in Fig. 2.

images most closely matched the image observed at that node (thus avoiding one gaussian mixture modeling step). Using 40 scene samples per node, setting up the P(x_k | x_j) matrix for each link took several minutes for 96x96 pixel images. The scene (high resolution) patch size was 3x3; the image (low resolution) patch size was 7x7. We didn't feel long-range scene propagation was critical here, so we used a flat, not a pyramid, node structure. Once the matrices were computed, the iterations of Eq. 2 were completed within seconds. Figure 4 shows the results. The training images were random shaded and painted blobs such as the test image shown. After 5 iterations, the synthesized maximum likelihood estimate of the high resolution image is visually close to the actual high frequency image (top row). (Including P(x) gave too-flat results, we suspect due to errors modeling that highly peaked distribution.) The dominant structures are all in approximately the correct position. This may enable high quality zooming of low-resolution images, attempted with limited success by others [28, 25].

5 Discussion

In related applications of Markov random fields to vision, researchers typically use relatively simple, heuristically derived expressions (rather than learned) for the likelihood function P(y|x) or for the spatial relationships in the prior term on scenes

[Figure 4 panels: sub-sampled image; zoomed high freqs. of sub-sampled image (algorithm input); iterations 0, 1, and 5 (output); full-detail image; high freqs. of full-detail image (desired output); image w/o and with computed output.]

Figure 4: Superresolution example.
Top row: input and desired output (contrast normalized; only those orientations around vertical). Bottom row: algorithm output, and comparison of the image with and without the estimated high vertical frequencies.

[10, 26, 9, 17, 5, 20, 19, 27]. Some researchers have applied related learning approaches to low-level vision problems, but restricted themselves to linear models [18, 13]. For other learning or constraint propagation approaches in motion analysis, see [20, 22, 16]. In summary, we have developed a principled and practical learning-based method for low-level vision problems. Markov assumptions lead to factorizing the posterior probability. The parameters of our Markov random field are probabilities specified by the training data. For our two examples (programmed in C and Matlab, respectively), the training can take several hours but the running takes only several minutes. Scene estimation by Markov networks may be useful for other low-level vision problems, such as extracting intrinsic images from line drawings or photographs.

Acknowledgements

We thank E. Adelson, J. Tenenbaum, P. Viola, and Y. Weiss for helpful discussions.

References

[1] J. J. Atick, Z. Li, and A. N. Redlich. Understanding retinal color coding from first principles. Neural Computation, 4:559-572, 1992.
[2] A. J. Bell and T. J. Sejnowski. The independent components of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[3] J. O. Berger. Statistical decision theory and Bayesian analysis. Springer, 1985.
[4] C. M. Bishop. Neural networks for pattern recognition. Oxford, 1995.
[5] M. J. Black and P. Anandan. A framework for the robust estimation of optical flow. In Proc. 4th Intl. Conf. Computer Vision, pages 231-236. IEEE, 1993.
[6] P. J. Burt and E. H. Adelson. The Laplacian pyramid as a compact image code. IEEE Trans. Comm., 31(4):532-540, 1983.
[7] J. S. DeBonet and P. Viola. Texture recognition using a non-parametric multi-scale statistical model. In Proc. IEEE Computer Vision and Pattern Recognition, 1998.
[8] B. J. Frey. Bayesian networks for pattern classification. MIT Press, 1997.
[9] D. Geiger and F. Girosi. Parallel and deterministic algorithms from MRF's: surface reconstruction. IEEE Pattern Analysis and Machine Intelligence, 13(5):401-412, May 1991.
[10] S. Geman and D. Geman. Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of images. IEEE Pattern Analysis and Machine Intelligence, 6:721-741, 1984.
[11] R. M. Gray, P. C. Cosman, and K. L. Oehler. Incorporating visual factors into vector quantizers for image compression. In A. B. Watson, editor, Digital images and human vision. MIT Press, 1993.
[12] D. J. Heeger and J. R. Bergen. Pyramid-based texture analysis/synthesis. In ACM SIGGRAPH, pages 229-236, 1995. In Computer Graphics Proceedings, Annual Conference Series.
[13] A. C. Hurlbert and T. A. Poggio. Synthesizing a color algorithm from examples. Science, 239:482-485, 1988.
[14] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional density. In Proc. European Conf. on Computer Vision, pages 343-356, 1996.
[15] M. I. Jordan, editor. Learning in graphical models. MIT Press, 1998.
[16] S. Ju, M. J. Black, and A. D. Jepson. Skin and bones: Multi-layer, locally affine, optical flow and regularization with transparency. In Proc. IEEE Computer Vision and Pattern Recognition, pages 307-314, 1996.
[17] D. Kersten. Transparency and the cooperative computation of scene attributes. In M. S. Landy and J. A. Movshon, editors, Computational Models of Visual Processing, chapter 15. MIT Press, Cambridge, MA, 1991.
[18] D. Kersten, A. J. O'Toole, M. E. Sereno, D. C. Knill, and J. A. Anderson. Associative learning of scene parameters from images. Applied Optics, 26(23):4999-5006, 1987.
[19] D. Knill and W. Richards, editors. Perception as Bayesian inference. Cambridge Univ. Press, 1996.
[20] M. R. Luettgen, W. C. Karl, and A. S. Willsky. Efficient multiscale regularization with applications to the computation of optical flow. IEEE Trans. Image Processing, 3(1):41-64, 1994.
[21] D. J. C. Mackay and R. M. Neal. Good error-correcting codes based on very sparse matrices. In Cryptography and coding - LNCS 1025, 1995.
[22] S. Nowlan and T. J. Sejnowski. A selection model for motion processing in area MT of primates. J. Neuroscience, 15:1195-1214, 1995.
[23] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[24] J. Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann, 1988.
[25] A. Pentland and B. Horowitz. A practical approach to fractal-based image compression. In A. B. Watson, editor, Digital images and human vision. MIT Press, 1993.
[26] T. Poggio, V. Torre, and C. Koch. Computational vision and regularization theory. Nature, 317:314-319, 1985.
[27] E. Saund. Perceptual organization of occluding contours of opaque surfaces. In CVPR '98 Workshop on Perceptual Organization, Santa Barbara, CA, 1998.
[28] R. R. Schultz and R. L. Stevenson. A Bayesian approach to image expansion for improved definition. IEEE Trans. Image Processing, 3(3):233-242, 1994.
[29] E. P. Simoncelli. Statistical models for images: Compression, restoration and synthesis. In 31st Asilomar Conf. on Sig., Sys. and Computers, Pacific Grove, CA, 1997.
[30] E. P. Simoncelli and O. Schwartz. Modeling surround suppression in V1 neurons with a statistically-derived normalization model. In Adv. in Neural Information Processing Systems, volume 11, 1999.
[31] Y. Weiss. Interpreting images by propagating Bayesian beliefs. In Adv. in Neural Information Processing Systems, volume 9, pages 908-915, 1997.
[32] Y. Weiss. Belief propagation and revision in networks with loops. Technical Report 1616, AI Lab Memo, MIT, Cambridge, MA 02139, 1998.
[33] S. C. Zhu and D. Mumford. Prior learning and Gibbs reaction-diffusion. IEEE Pattern Analysis and Machine Intelligence, 19(11), 1997.
|
1998
|
55
|
1,554
|
Learning curves for Gaussian processes

Peter Sollich*
Department of Physics, University of Edinburgh
Edinburgh EH9 3JZ, U.K. Email: P.Sollich@ed.ac.uk

Abstract

I consider the problem of calculating learning curves (i.e., average generalization performance) of Gaussian processes used for regression. A simple expression for the generalization error in terms of the eigenvalue decomposition of the covariance function is derived, and used as the starting point for several approximation schemes. I identify where these become exact, and compare with existing bounds on learning curves; the new approximations, which can be used for any input space dimension, generally get substantially closer to the truth.

1 INTRODUCTION: GAUSSIAN PROCESSES

Within the neural networks community, there has in the last few years been a good deal of excitement about the use of Gaussian processes as an alternative to feedforward networks [1]. The advantages of Gaussian processes are that prior assumptions about the problem to be learned are encoded in a very transparent way, and that inference, at least in the case of regression that I will consider, is relatively straightforward. One crucial question for applications is then how 'fast' Gaussian processes learn, i.e., how many training examples are needed to achieve a certain level of generalization performance. The typical (as opposed to worst case) behaviour is captured in the learning curve, which gives the average generalization error ε as a function of the number of training examples n. Several workers have derived bounds on ε(n) [2, 3, 4] or studied its large n asymptotics. As I will illustrate below, however, the existing bounds are often far from tight; and asymptotic results will not necessarily apply for realistic sample sizes n. My main aim in this paper is therefore to derive approximations to ε(n) which get closer to the true learning curves than existing bounds, and apply both for small and large n.
In its simplest form, the regression problem that I am considering is this: We are trying to learn a function θ* which maps inputs x (real-valued vectors) to (real-valued scalar) outputs θ*(x). We are given a set of training data D, consisting of n input-output pairs (x_l, y_l); the training outputs y_l may differ from the 'clean' target outputs θ*(x_l) due to corruption by noise. Given a test input x, we are then asked to come up with a prediction θ̂(x) for the corresponding output, expressed either in the simple form of a mean prediction θ̂(x) plus error bars, or more comprehensively in terms of a 'predictive distribution' P(θ(x)|x, D). In a Bayesian setting, we do this by specifying a prior P(θ) over our hypothesis functions, and a likelihood P(D|θ) with which each θ could have generated the training data; from this we deduce the posterior distribution P(θ|D) ∝ P(D|θ)P(θ). In the case of feedforward networks, where the hypothesis functions θ are parameterized by a set of network weights, the predictive distribution then needs to be extracted by integration over this posterior, either by computationally intensive Monte Carlo techniques or by approximations which lead to analytically tractable integrals. For a Gaussian process, on the other hand, obtaining the predictive distribution is trivial (see below); one reason for this is that the prior P(θ) is defined directly over input-output functions θ. How is this done? Any θ is uniquely determined by its output values θ(x) for all x from the input domain, and for a Gaussian process, these are simply assumed to have a joint Gaussian distribution (hence the name).

*Present address: Department of Mathematics, King's College London, Strand, London WC2R 2LS, U.K. Email: peter.sollich@kcl.ac.uk
This distribution can be specified by the mean values ⟨θ(x)⟩ (which I assume to be zero in the following, as is commonly done), and the covariances ⟨θ(x)θ(x′)⟩ = C(x, x′); C(x, x′) is called the covariance function of the Gaussian process. It encodes in an easily interpretable way prior assumptions about the function to be learned. Smoothness, for example, is controlled by the behaviour of C(x, x′) for x′ → x: The Ornstein-Uhlenbeck (OU) covariance function C(x, x′) ∝ exp(−|x − x′|/l) produces very rough (non-differentiable) functions, while functions sampled from the squared exponential (SE) prior with C(x, x′) ∝ exp(−|x − x′|²/(2l²)) are infinitely differentiable. The 'length scale' parameter l, on the other hand, corresponds directly to the distance in input space over which we expect our function to vary significantly. More complex properties can also be encoded; by replacing l with different length scales for each input component, for example, relevant (small l) and irrelevant (large l) inputs can be distinguished. How does inference with Gaussian processes work? I only give a brief summary here and refer to existing reviews on the subject (see e.g. [5, 1]) for details. It is simplest to assume that outputs y are generated from the 'clean' values of a hypothesis function θ(x) by adding Gaussian noise of x-independent variance σ². The joint distribution of a set of training outputs {y_l} and the function values θ(x) is then also Gaussian, with covariances given by

⟨y_l y_m⟩ = C(x_l, x_m) + σ² δ_lm = K_lm,   ⟨y_l θ(x)⟩ = C(x_l, x) = (k(x))_l;

here I have defined an n × n matrix K and x-dependent n-component vectors k(x). The posterior distribution P(θ|D) is then obtained by simply conditioning on the {y_l}. It is again Gaussian and has mean and variance

⟨θ(x)⟩_{θ|D} = θ̂(x) = k(x)ᵀ K⁻¹ y   (1)

⟨(θ(x) − θ̂(x))²⟩_{θ|D} = C(x, x) − k(x)ᵀ K⁻¹ k(x)   (2)

Eqs. (1, 2) solve the inference problem for Gaussian processes: They provide us directly with the predictive distribution P(θ(x)|x, D). The posterior variance, eq.
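Eqs. (1) and (2) are short to implement. The sketch below uses the SE covariance with C(x, x) = 1; the training inputs and targets are purely illustrative, not from the paper's experiments:

```python
import numpy as np

# Gaussian process regression following Eqs. (1) and (2), with the
# squared-exponential covariance C(x, x') = exp(-|x - x'|^2 / (2 l^2)).
l, sigma2 = 0.1, 1e-4  # illustrative length scale and noise variance

def C(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * l ** 2))

x_train = np.array([0.1, 0.2, 0.3])
y = np.sin(2 * np.pi * x_train)  # teacher outputs (taken noise-free here)

K = C(x_train, x_train) + sigma2 * np.eye(3)  # K_lm = C(x_l, x_m) + sigma^2 delta_lm
x_test = np.array([0.2, 0.9])                 # one point on the data, one far away
k = C(x_train, x_test)                        # (k(x))_l = C(x_l, x)

mean = k.T @ np.linalg.solve(K, y)                           # Eq. (1)
var = 1.0 - np.einsum('lj,lj->j', k, np.linalg.solve(K, k))  # Eq. (2); C(x, x) = 1

# The posterior variance is the expected generalization error at x: near the
# data it is small; far from the data it returns to the prior variance 1.
assert var[0] < 0.1 and var[1] > 0.99
assert abs(mean[0] - y[1]) < 0.01  # near-interpolation at a training input
```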
(2), in fact also gives us the expected generalization error at x. Why? If the teacher is θ*, the squared deviation between our mean prediction and the teacher output is1 (θ̂(x) − θ*(x))²; averaging this over the posterior distribution of teachers P(θ*|D) just gives (2). The underlying assumption is that our assumed Gaussian process prior is the true one from which teachers are actually generated (and that we are using the correct noise model). Otherwise, a more complicated expression for the expected generalization error results; in line with most other work on the subject, I only consider the 'correct prior' case in the following. Averaging the generalization error at x over the distribution of inputs then gives

ε(D) = ⟨C(x, x) − k(x)ᵀ K⁻¹ k(x)⟩_x   (3)

This form of the generalization error (which is well known [2, 3, 4, 5]) still depends on the training inputs (the fact that the training outputs have dropped out already is a signature of the fact that Gaussian processes are linear predictors, compare (1)). Averaging over data sets yields the quantity we are after,

ε = ⟨ε(D)⟩_D.   (4)

This average expected generalization error (I will drop the 'average expected' in the following) only depends on the number of training examples n; the function ε(n) is called the learning curve. Its exact calculation is difficult because of the joint average in eqs. (3, 4) over the training inputs x_l and the test input x.

1One can also measure the generalization error by the squared deviation between the prediction θ̂(x) and the noisy teacher output; this simply adds a term σ² to eq. (3).

2 LEARNING CURVES

As a starting point for an approximate calculation of ε(n), I first derive a representation of the generalization error in terms of the eigenvalue decomposition of the covariance function. Mercer's theorem (see e.g.
[6]) tells us that the covariance function can be decomposed into its eigenvalues λ_i and eigenfunctions φ_i(x):

C(x, x′) = Σ_{i=1}^∞ λ_i φ_i(x) φ_i(x′)   (5)

This is simply the analogue of the eigenvalue decomposition of a finite symmetric matrix; the eigenfunctions can be taken to be normalized such that ⟨φ_i(x) φ_j(x)⟩_x = δ_ij. Now write the data-dependent generalization error (3) as ε(D) = ⟨C(x, x)⟩_x − tr ⟨k(x) k(x)ᵀ⟩_x K⁻¹, and perform the x-average in the second term:

(⟨k(x) k(x)ᵀ⟩_x)_{lm} = Σ_{ij} λ_i λ_j φ_i(x_l) ⟨φ_i(x) φ_j(x)⟩_x φ_j(x_m) = Σ_i λ_i² φ_i(x_l) φ_i(x_m)

This suggests introducing the diagonal matrix (Λ)_ij = λ_i δ_ij and the 'design matrix' (Φ)_{li} = φ_i(x_l), so that ⟨k(x) k(x)ᵀ⟩_x = ΦΛ²Φᵀ. One then also has ⟨C(x, x)⟩_x = tr Λ, and the matrix K is expressed as K = σ²I + ΦΛΦᵀ, I being the identity matrix. Collecting these results, we have ε(D) = tr Λ − tr (σ²I + ΦΛΦᵀ)⁻¹ ΦΛ²Φᵀ. This can be simplified using the Woodbury formula for matrix inverses (see e.g. [7]), which applied to our case gives (σ²I + ΦΛΦᵀ)⁻¹ = σ⁻²[I − Φ(σ²I + ΛΦᵀΦ)⁻¹ΛΦᵀ]; after a few lines of algebra, one then obtains the final result

ε = ⟨ε(D)⟩_D,   ε(D) = tr σ²Λ(σ²I + ΛΦᵀΦ)⁻¹ = tr (Λ⁻¹ + σ⁻²ΦᵀΦ)⁻¹   (6)

This exact representation of the generalization error is one of the main results of this paper. Its advantages are that the average over the test input x has already been carried out, and that the remaining dependence on the training data is contained entirely in the matrix ΦᵀΦ. It also includes as a special case the well-known result for linear regression (see e.g. [8]); Λ⁻¹ and ΦᵀΦ can be interpreted as suitably generalized versions of the weight decay (matrix) and input correlation matrix. Starting from (6), one can now derive approximate expressions for the learning curve ε(n).
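The identity in Eq. (6) can be verified numerically for a finite (truncated) eigenbasis; the spectrum and design matrix below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Finite toy version of Eq. (6): truncate the eigenexpansion at p features.
# lam holds eigenvalues lambda_i; Phi is the n x p design matrix
# Phi_{li} = phi_i(x_l). All values here are synthetic.
p, n, sigma2 = 8, 20, 0.3
lam = np.sort(rng.random(p))[::-1]
Lam = np.diag(lam)
Phi = rng.standard_normal((n, p))

# Pre-Woodbury form: eps(D) = tr Lam - tr (sigma^2 I + Phi Lam Phi^T)^{-1} Phi Lam^2 Phi^T
lhs = np.trace(Lam) - np.trace(
    np.linalg.solve(sigma2 * np.eye(n) + Phi @ Lam @ Phi.T,
                    Phi @ Lam @ Lam @ Phi.T))

# Post-Woodbury form: eps(D) = tr (Lam^{-1} + sigma^{-2} Phi^T Phi)^{-1}
rhs = np.trace(np.linalg.inv(np.diag(1 / lam) + Phi.T @ Phi / sigma2))

assert np.isclose(lhs, rhs)
```

The second form is the useful one: the p x p matrix Λ⁻¹ + σ⁻²ΦᵀΦ depends on the data only through ΦᵀΦ, which is what the approximations that follow exploit.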
The most naive approach is to entirely neglect the fluctuations in ΦᵀΦ over different data sets and replace it by its average, which is simply ⟨(ΦᵀΦ)_ij⟩_D = Σ_l ⟨φ_i(x_l) φ_j(x_l)⟩_D = n δ_ij. This leads to the Naive approximation

ε_N(n) = tr (Λ⁻¹ + σ⁻²nI)⁻¹   (7)

which is not, in general, very good. It does however become exact in the large noise limit σ² → ∞ at constant n/σ²: The fluctuations of the elements of the matrix σ⁻²ΦᵀΦ then become vanishingly small (of order √n σ⁻² = (n/σ²)/√n → 0) and so replacing ΦᵀΦ by its average is justified. To derive better approximations, it is useful to see how the matrix 𝒢 = (Λ⁻¹ + σ⁻²ΦᵀΦ)⁻¹ changes when a new example is added to the training set. One has

𝒢(n+1) − 𝒢(n) = [𝒢⁻¹(n) + σ⁻²ψψᵀ]⁻¹ − 𝒢(n) = − 𝒢(n)ψψᵀ𝒢(n) / (σ² + ψᵀ𝒢(n)ψ)   (8)

in terms of the vector ψ with elements (ψ)_i = φ_i(x_{n+1}); the second identity uses again the Woodbury formula. To get exact learning curves, one would have to average this update formula over both the new training input x_{n+1} and all previous ones. This is difficult, but progress can be made by again neglecting some fluctuations: The average over x_{n+1} is approximated by replacing ψψᵀ by its average, which is simply the identity matrix; the average over the previous training inputs by replacing 𝒢(n) by its average G(n) = ⟨𝒢(n)⟩_D. This yields the approximation

G(n+1) − G(n) = − G²(n) / (σ² + tr G(n))   (9)

Iterating from G(n = 0) = Λ, one sees that G(n) remains diagonal for all n, and so (9) is trivial to implement numerically. I call the resulting ε_D(n) = tr G(n) the Discrete approximation to the learning curve, because it still correctly treats n as a variable with discrete, integer values. One can further approximate (9) by taking n as continuously varying, replacing the difference on the left-hand side by the derivative dG(n)/dn.
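The discrete approximation, Eq. (9), is a one-line iteration once G is stored by its diagonal. A sketch with a hypothetical decaying eigenvalue spectrum (not one from the paper):

```python
import numpy as np

# Discrete approximation, Eq. (9):
# G(n+1) = G(n) - G(n)^2 / (sigma^2 + tr G(n)), starting from G(0) = Lambda.
lam = 1.0 / (1.0 + np.arange(1, 21)) ** 2  # hypothetical spectrum lambda_i
sigma2 = 0.1

G = lam.copy()        # G stays diagonal, so only its diagonal is stored
eps_D = [G.sum()]     # eps_D(n) = tr G(n); eps_D(0) = tr Lambda
for n in range(200):
    G = G - G ** 2 / (sigma2 + G.sum())
    eps_D.append(G.sum())
eps_D = np.array(eps_D)

# The approximate learning curve decreases monotonically from tr Lambda.
assert np.all(np.diff(eps_D) < 0)
assert eps_D[-1] < eps_D[0] / 10
```

Each diagonal entry shrinks by at most itself per step (since G_i ≤ tr G < σ² + tr G), so the iteration stays positive and is stable without any step-size tuning.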
The resulting differential equation for G(n) is readily solved; taking the trace, one obtains the generalization error

ε_UC(n) = tr (Λ⁻¹ + σ⁻²n′I)⁻¹   (10)

with n′ determined by the self-consistency equation n′ + tr ln(I + σ⁻²n′Λ) = n. By comparison with (7), n′ can be thought of as an 'effective number of training examples'. The subscript UC in (10) stands for Upper Continuous approximation. As the name suggests, there is another, lower approximation also derived by treating n as continuous. It has the same form as (10), but a different self-consistent equation for n′, and is derived as follows. Introduce an auxiliary offset parameter v (whose usefulness will become clear shortly) by 𝒢⁻¹ = vI + Λ⁻¹ + σ⁻²ΦᵀΦ; at the end of the calculation, v will be set to zero again. As before, start from (8), which also holds for nonzero v, and approximate ψψᵀ and tr 𝒢 by their averages, but retain possible fluctuations of 𝒢 in the numerator. This gives G(n+1) − G(n) = −⟨𝒢²(n)⟩ / [σ² + tr G(n)]. Taking the trace yields an update formula for the generalization error ε, where the extra parameter v lets us rewrite the average on the right-hand side as −tr ⟨𝒢²⟩ = (∂/∂v) tr ⟨𝒢⟩ = ∂ε/∂v. Treating n again as continuous, we thus arrive at the partial differential equation ∂ε/∂n = (∂ε/∂v)/(σ² + ε). This can be solved using the method of characteristics [8] and (for v = 0) gives the Lower Continuous approximation to the learning curve,

ε_LC(n) = tr (Λ⁻¹ + σ⁻²n′I)⁻¹,   n′ = nσ²/(σ² + ε_LC)   (11)

By comparing derivatives w.r.t. n, it is easy to show that this is always lower than the UC approximation (10). One can also check that all three approximations that I have derived (D, LC and UC) converge to the exact result (7) in the large noise limit as defined above.

3 COMPARISON WITH BOUNDS AND SIMULATIONS

I now compare the D, LC and UC approximations with existing bounds, and with the 'true' learning curves as obtained by simulations.
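Both continuous approximations reduce to solving a scalar self-consistency equation for n′, which a simple fixed-point iteration handles. A sketch, again with a made-up spectrum, that also checks the stated ordering ε_LC ≤ ε_UC:

```python
import numpy as np

# Solve the self-consistency equations for the UC (Eq. 10) and LC (Eq. 11)
# approximations by fixed-point iteration, on a hypothetical spectrum.
lam = 1.0 / (1.0 + np.arange(1, 21)) ** 2  # hypothetical eigenvalues lambda_i
sigma2, n = 0.1, 200.0

def eps_of(nprime):
    # tr (Lambda^{-1} + sigma^{-2} n' I)^{-1}, shared by Eqs. (10) and (11)
    return np.sum(1.0 / (1.0 / lam + nprime / sigma2))

# UC: n' + tr ln(I + sigma^{-2} n' Lambda) = n
np_uc = n
for _ in range(200):
    np_uc = max(n - np.sum(np.log1p(np_uc * lam / sigma2)), 0.0)
eps_uc = eps_of(np_uc)

# LC: n' = n sigma^2 / (sigma^2 + eps_LC(n'))
np_lc = n
for _ in range(200):
    np_lc = n * sigma2 / (sigma2 + eps_of(np_lc))
eps_lc = eps_of(np_lc)

# LC lies below UC, and both lie below the prior variance tr Lambda.
assert eps_lc <= eps_uc <= lam.sum()
```

Both maps are contractions near their fixed points for this spectrum, so a few hundred iterations are far more than needed; a bisection on n′ would work equally well.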
A lower bound on the generalization error was given by Michelli and Wahba [2] as

ε(n) ≥ ε_MW(n) = Σ_{i=n+1}^∞ λ_i   (12)

This is derived for the noiseless case by allowing 'generalized observations' (projections of θ*(x) along the first n eigenfunctions of C(x, x′)), and so is unlikely to be tight for the case of 'real' observations at discrete input points. Based on information theoretic methods, a different lower bound was obtained by Opper [3]:

ε(n) ≥ ε_LO(n) = (1/4) tr (Λ⁻¹ + 2σ⁻²nI)⁻¹ [I + (I + 2σ⁻²nΛ)⁻¹]

This is always lower than the naive approximation (7); both incorrectly suggest that ε decreases to zero for σ² → 0 at fixed n, which is clearly not the case (compare (12)). There is also an upper bound due to Opper [3],

ε̃(n) ≤ ε_UO(n) = (σ⁻²n)⁻¹ tr ln(I + σ⁻²nΛ) + tr (Λ⁻¹ + σ⁻²nI)⁻¹   (13)

Here ε̃ is a modified version of ε which (in the rescaled version that I am using) becomes identical to ε in the limit of small generalization errors (ε ≪ σ²), but never gets larger than 2σ²; for small n in particular, ε(n) can therefore actually be much larger than ε̃(n) and its bound (13). An upper bound on ε(n) itself was derived by Williams and Vivarelli [4] for one-dimensional inputs and stationary covariance functions (for which C(x, x′) is a function of x − x′ alone). They considered the generalization error at x that would be obtained from each individual training example, and then took the minimum over all n examples; the training set average of this 'lower envelope' can be evaluated explicitly in terms of integrals over the covariance function [4]. The resulting upper bound, ε_WV(n), never decays below σ² and therefore complements the range of applicability of the UO bound (13). In the examples in Fig. 1, I consider a very simple input domain, x ∈ [0, 1]^d, with a uniform input distribution. I also restrict myself to stationary covariance functions, and in fact I use what physicists call periodic boundary conditions.
This is simply a trick that makes it easy to calculate the required eigenvalue spectra of the covariance function, but otherwise has little effect on the results as long as the length scale of the covariance function is smaller than the size of the input domain2, l ≪ 1. To cover the two extremes of 'rough' and 'smooth' Gaussian priors, I consider the OU [C(x, x′) = exp(−|x − x′|/l)] and SE [C(x, x′) = exp(−|x − x′|²/(2l²))] covariance functions. The prior variance of the values of the function to be learned is simply C(x, x) = 1; one generically expects this 'prior ignorance' to be significantly larger than the noise on the training data, so I only consider values of σ² < 1. I also fix the covariance function length scale to l = 0.1; results for l = 0.01 are qualitatively similar. Several observations can be made from Figure 1. (1) The MW lower bound is not tight, as expected. (2) The bracket between Opper's lower and upper bounds (LO/UO) is rather wide (1-2 orders of magnitude); both give good representations of the overall shape of the learning curve only in the asymptotic regime (most clearly visible for the SE covariance function), i.e., once ε has dropped below σ². (3) The WV upper bound (available only in d = 1) works

2In d = 1 dimension, for example, a 'periodically continued' stationary covariance function on [0, 1] can be written as C(x, x′) = Σ_{r=−∞}^{∞} c(x − x′ + r). For l ≪ 1, only the r = 0 term makes a significant contribution, except when x and x′ are within ≈ l of opposite ends of the input space. With this definition, the eigenvalues of C(x, x′) are given by the Fourier transform ∫_{−∞}^{∞} dx c(x) exp(−2πiqx), for integer q.

[Figure 1 appears here; panels (a)-(f) show learning curves for the OU and SE covariance functions with l = 0.1, in d = 1 at σ² = 10⁻³ and σ² = 0.1, and in d = 2 at σ² = 10⁻³.]
Figure 1: Learning curves ε(n): Comparison of simulation results (thick solid lines; the small fluctuations indicate the order of magnitude of error bars), approximations derived in this paper (thin solid lines; D = discrete, UC/LC = upper/lower continuous), and existing upper (dashed; UO = upper Opper, WV = Williams-Vivarelli) and lower (dot-dashed; LO = lower Opper, MW = Michelli-Wahba) bounds. The type of covariance function (Ornstein-Uhlenbeck/Squared Exponential), its length scale l, the dimension d of the input space, and the noise level σ² are as shown. Note the logarithmic y-axes. On the scale of the plots, D and UC coincide (except in (b)); the simulation results are essentially on top of the LC curve in (c-e).

well for the OU covariance function, but less so for the SE case. As expected, it is not useful in the asymptotic regime because it always remains above σ². (4) The discrete (D) and upper continuous (UC) approximations are very similar, and in fact indistinguishable on the scale of most plots. This makes the UC version preferable in practice, because it can be evaluated for any chosen n without having to step through all smaller values of n. (5) In all the examples, the true learning curve lies between the UC and LC curves. In fact I would conjecture that these two approximations provide upper and lower bounds on the learning curves, at least for stationary covariance functions.
(6) Finally, the LC approximation comes out as the clear winner: For σ² = 0.1 (Fig. 1c,d), it is indistinguishable from the true learning curves. But even in the other cases it represents the overall shape of the learning curves very well, both for small n and in the asymptotic regime; the largest deviations occur in the crossover region between these two regimes. In summary, I have derived an exact representation of the average generalization error ε of Gaussian processes used for regression, in terms of the eigenvalue decomposition of the covariance function. Starting from this, I have obtained three different approximations to the learning curve ε(n). All of them become exact in the large noise limit; in practice, one generically expects the opposite case (σ²/C(x, x) ≪ 1), but comparison with simulation results shows that even in this regime the new approximations perform well. The LC approximation in particular represents the overall shape of the learning curves very well, both for 'rough' (OU) and 'smooth' (SE) Gaussian priors, and for small as well as for large numbers of training examples n. It is not perfect, but does get substantially closer to the true learning curves than existing bounds. Future work will have to show how well the new approximations work for non-stationary covariance functions and/or non-uniform input distributions, and whether the treatment of fluctuations in the generalization error (due to the random selection of training sets) can be improved, by analogy with fluctuation corrections in linear perceptron learning [8].

Acknowledgements: I would like to thank Chris Williams and Manfred Opper for stimulating discussions, and for providing me with copies of their papers [3, 4] prior to publication. I am grateful to the Royal Society for financial support through a Dorothy Hodgkin Research Fellowship.

References

[1] See e.g.
D J C MacKay, Gaussian Processes, Tutorial at NIPS 10, and recent papers by Goldberg/Williams/Bishop (in NIPS 10), Williams and Barber/Williams (NIPS 9), Williams/Rasmussen (NIPS 8). [2] C A Michelli and G Wahba. Design problems for optimal surface interpolation. In Z Ziegler, editor, Approximation theory and applications, pages 329-348. Academic Press, 1981. [3] M Opper. Regression with Gaussian processes: Average case performance. In K-Y M Wong, I King, and D-Y Yeung, editors, Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective. Springer, 1997. [4] C K I Williams and F Vivarelli. An upper bound on the learning curve for Gaussian processes. Submitted for publication. [5] C K I Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In M I Jordan, editor, Learning and Inference in Graphical Models. Kluwer Academic. In press. [6] E Wong. Stochastic Processes in Information and Dynamical Systems. McGraw-Hill, New York, 1971. [7] W H Press, S A Teukolsky, W T Vetterling, and B P Flannery. Numerical Recipes in C (2nd ed.). Cambridge University Press, Cambridge, 1992. [8] P Sollich. Finite-size effects in learning and generalization in linear perceptrons. Journal of Physics A, 27:7771-7784, 1994.
|
1998
|
56
|
1,555
|
Facial Memory is Kernel Density Estimation (Almost) Matthew N. Dailey Garrison W. Cottrell Department of Computer Science and Engineering U.C. San Diego La Jolla, CA 92093-0114 {mdailey,gary}@cs.ucsd.edu Abstract Thomas A. Busey Department of Psychology Indiana University Bloomington, IN 47405 busey@indiana.edu We compare the ability of three exemplar-based memory models, each using three different face stimulus representations, to account for the probability a human subject responded "old" in an old/new facial memory experiment. The models are 1) the Generalized Context Model, 2) SimSample, a probabilistic sampling model, and 3) MMOM, a novel model related to kernel density estimation that explicitly encodes stimulus distinctiveness. The representations are 1) positions of stimuli in MDS "face space," 2) projections of test faces onto the "eigenfaces" of the study set, and 3) a representation based on response to a grid of Gabor filter jets. Of the 9 model/representation combinations, only the distinctiveness model in MDS space predicts the observed "morph familiarity inversion" effect, in which the subjects' false alarm rate for morphs between similar faces is higher than their hit rate for many of the studied faces. This evidence is consistent with the hypothesis that human memory for faces is a kernel density estimation task, with the caveat that distinctive faces require larger kernels than do typical faces. 1 Background Studying the errors subjects make during face recognition memory tasks aids our understanding of the mechanisms and representations underlying memory, face processing, and visual perception. One way of evoking such errors is by testing subjects' recognition of new faces created from studied faces that have been combined in some way (e.g. Solso and McCarthy, 1981; Reinitz, Lammers, and Cochran 1992). 
Busey and Tunnicliff (submitted) have recently examined the extent to which image-quality morphs between unfamiliar faces affect subjects' tendency to make recognition errors. Their experiments used facial images of bald males and morphs between these images (see Figure 1) as stimuli. [Figure 1: Three normalized morphs from the database.] In one study, Busey (in press) had subjects rate the similarity of all pairs in a large set of faces and morphs, then performed a multidimensional scaling (MDS) of these similarity ratings to derive a 6-dimensional "face space" (Valentine and Endo, 1992). In another study, "Experiment 3" (Busey and Tunnicliff, submitted), 179 subjects studied 68 facial images, including 8 similar pairs and 8 dissimilar pairs, as determined in a pilot study. These pairs were included in order to study how morphs between similar faces and dissimilar faces evoke false alarms. We call the pair of images from which a morph is derived its "parents," and the morph itself their "child." In the experiment's test phase, the subjects were asked to make new/old judgments in response to 8 of the 16 morphs, 20 completely new distractor faces, the 36 non-parent targets and one of the parents of each of the 8 morphs. The results were that, for many of the morph/parent pairs, subjects responded "old" to the unstudied morph more often than to its studied parent. However, this effect (a morph familiarity inversion) only occurred for the morphs with similar parents. It seems that the similar parents are so similar to their "child" morphs that they both contribute toward an "old" (false alarm) response to the morph. Researchers have proposed many models to account for data from explicit memory experiments.
Although we have applied other types of models to Busey and Tunnicliff's data with largely negative results (Dailey et al., 1998), in this paper, we limit discussion to exemplar-based models, such as the Generalized Context Model (Nosofsky, 1986) and SAM (Gillund and Shiffrin, 1984). These models rely on the assumption that subjects explicitly store representations of each of the stimuli they study. Busey and Tunnicliff applied several exemplar-based models to the Experiment 3 data, but none of these models have been able to fully account for the observed similar morph familiarity inversion without positing that the similar parents are explicitly blended in memory, producing prototypes near the morphs. We extend Busey and Tunnicliff's (submitted) work by applying two of their exemplar models to additional image-based face stimulus representations, and we propose a novel exemplar model that accounts for the similar morphs' familiarity inversion. The results are consistent with the hypothesis that facial memory is a kernel density estimation (Bishop, 1995) task, except that distinctive exemplars require larger kernels. Also, on the basis of our model, we can predict that distinctiveness with respect to the study set is the critical factor influencing kernel size, as opposed to a context-free notion of distinctiveness. We can easily test this prediction empirically. 2 Experimental Methods 2.1 Face Stimuli and Normalization The original images were 104 digitized 560x662 grayscale images of bald men, with consistent lighting and background and fairly consistent position. The subjects varied in race and extent of facial hair. We automatically located the left and right eyes on each face using a simple template correlation technique, then translated, rotated, scaled and cropped each image so the eyes were aligned in each image. We then scaled each image to 114x143 to speed up image processing.
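The alignment step described above (translate, rotate, and scale so both eyes land at fixed positions) can be sketched as a similarity transform computed from the two detected eye coordinates. The function name and target eye positions below are illustrative assumptions, not values from the study:

```python
import numpy as np

def alignment_params(left_eye, right_eye,
                     target_left=(40.0, 50.0), target_right=(74.0, 50.0)):
    """Rotation, scale, and translation mapping detected eye positions onto
    fixed target positions (target coordinates are illustrative)."""
    le, re = np.asarray(left_eye, float), np.asarray(right_eye, float)
    tl, tr = np.asarray(target_left, float), np.asarray(target_right, float)
    v, tv = re - le, tr - tl
    scale = np.hypot(*tv) / np.hypot(*v)          # ratio of inter-eye distances
    angle = np.arctan2(tv[1], tv[0]) - np.arctan2(v[1], v[0])
    c, s = np.cos(angle), np.sin(angle)
    A = scale * np.array([[c, -s], [s, c]])       # rotate + scale
    t = tl - A @ le                               # translate left eye onto target
    return A, t
```

Applying `A @ p + t` to every pixel coordinate `p` (followed by cropping and resizing) reproduces the kind of normalization described in the text.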
Figure 1 shows three examples of the normalized morphs (the original images are copyrighted and cannot be published). 2.2 Representations Positions in multidimensional face space Many researchers have used a multidimensional scaling approach to model various phenomena in face processing (e.g. Valentine and Endo, 1992). Busey (in press) had 343 subjects rate the similarity of pairs of faces in the test set and performed a multidimensional scaling on the similarity matrix for 100 of the faces (four non-parent target faces were dropped from this analysis). The process resulted in a 6-dimensional solution with r² = 0.785 and a stress of 0.13. In the MDS modeling results described below, we used the 6-dimensional vector associated with each stimulus as its representation. Principal component projections "Eigenfaces," or the eigenvectors of the covariance matrix for a set of face images, are a common basis for face representations (e.g. Turk and Pentland, 1991). We performed a principal components analysis on the 68 face images used in the study set for Busey and Tunnicliff's experiment to get the 67 non-zero eigenvectors of their covariance matrix. We then projected each of the 104 test set images onto the 30 most significant eigenfaces to obtain a 30-dimensional vector representing each face.¹ Gabor filter responses von der Malsburg and colleagues have made effective use of banks of Gabor filters at various orientations and spatial frequencies in face recognition systems. We used one form of their wavelet (Buhmann, Lades, and von der Malsburg, 1990) at five scales and 8 orientations in an 8x8 square grid over each normalized face image as the basis for a third face stimulus representation.
However, since this representation resulted in a 2560-dimensional vector for each face stimulus, we performed a principal components analysis to reduce the dimensionality to 30, keeping this representation's dimensionality the same as the eigenface representation's. Thus we obtained a 30-dimensional vector based on Gabor filter responses to represent each test set face image. 2.3 Models The Generalized Context Model (GCM) There are several different flavors of the GCM. We only consider a simple sum-similarity form that will lead directly to our distinctiveness-modulated density estimation model. Our version of GCM's predicted P(old), given a representation y of a test stimulus and representations x ∈ X of the studied exemplars, is pred_y = α + β Σ_{x∈X} exp(−c (d_{x,y})²), where α and β linearly convert the probe's summed similarity to a probability, X is the set of representations of the study set stimuli, c is used to widen or narrow the width of the similarity function, and d_{x,y} is either ||x − y||, the Euclidean distance between x and y, or the weighted Euclidean distance √(Σ_k w_k (x_k − y_k)²), where the "attentional weights" w_k are constants that sum to 1. Intuitively, this model simply places a Gaussian-shaped function over each of the studied exemplars, and the predicted familiarity of a test probe is simply the summed height of each of these surfaces at the probe's location. Recall that two of our representations, PC projection space and Gabor filter space, are 30-dimensional, whereas the other, MDS, is only 6-dimensional. Thus allowing adaptive weights for the MDS representation is reasonable, since the resulting model only uses 8 parameters to fit 100 points, but it is clearly unreasonable to allow adaptive weights in PC and Gabor space, where the resulting models would be fitting 32 parameters to 100 points.
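A minimal numpy sketch of this sum-similarity rule; the function name and array conventions are ours, not from the paper:

```python
import numpy as np

def gcm_predict(probe, exemplars, a, b, c, attn=None):
    """Sum-similarity GCM prediction of P("old") for one probe.

    probe:     (k,) representation of the test stimulus
    exemplars: (n, k) representations of the studied faces
    a, b:      linear map from summed similarity to a probability
    c:         width (specificity) of the similarity function
    attn:      optional (k,) attentional weights summing to 1
    """
    diff = exemplars - probe
    if attn is None:
        d2 = np.sum(diff ** 2, axis=1)            # squared Euclidean distance
    else:
        d2 = np.sum(attn * diff ** 2, axis=1)     # weighted squared distance
    return a + b * np.sum(np.exp(-c * d2))
```

With `attn=None` this is the plain Euclidean form; passing a weight vector gives the 8-parameter MDS variant discussed above.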
Thus, for all models, we report results in MDS space both with and without adaptive weights, but do not report adaptive weight results for models in PC and Gabor space. SimSample Busey and Tunnicliff (submitted) proposed SimSample in an attempt to remedy the GCM's poor predictions of the human data. It is related to both GCM, in that it uses representations in MDS space, and SAM (Gillund and Shiffrin, 1984), in that it involves sampling exemplars. [¹ We used 30 eigenfaces because with this number, our theoretical "distinctiveness" measure was best correlated with the same measure in MDS space.] The idea behind the model is that when a subject is shown a test stimulus, instead of a summed comparison to all of the exemplars in memory, the test probe probabilistically samples a single exemplar in memory, and the subject responds "old" if the probe's similarity to the exemplar is above a noisy criterion. The model has a similarity scaling parameter and two parameters describing the noisy threshold function. Due to space limitations, we cannot provide the details of the model here. Busey and Tunnicliff were able to fit the human data within the SimSample framework, but only when they introduced prototypes at the locations of the morphs in MDS space and made the probability of sampling the prototype proportional to the similarity of the parents. Here, however, we only compare with the basic version that does not blend exemplars. Mixture Model of Memory (MMOM) In this model, we assume that subjects, at study time, implicitly create a probability density surface corresponding to the training set. The subjects' probability of responding "old" to a probe is then taken to be proportional to the height of this surface at the point corresponding to the probe. The surface must be robust in the face of the variability or noise typically encountered in face recognition (lighting changes, perspective changes, etc.)
yet also provide some level of discrimination support (i.e. even when the intervals of possible representations for a single face could overlap due to noise, some rational decision boundary must still be constructed). If we assume a Gaussian mixture model, in which the density surface is built from Gaussian "blobs" centered on each studied exemplar, the task is a form of kernel density estimation (Bishop, 1995). We can formulate the task of predicting the human subjects' P(old) in this framework, then, as optimizing the priors and widths of the kernel functions to minimize the mean squared error of the prediction. However, we also want to minimize the number of free parameters in the model: parsimonious methods for setting the priors and kernel function widths potentially lead to more useful insights into the principles underlying the human data. If the priors and widths were held constant, we would have a simple two-parameter model predicting the probability a subject responds "old" to a test stimulus y: pred_y = Σ_{x∈X} α exp(−||x − y||²/(2σ²)), where α folds together the uniform prior and normalization constants, and σ is the standard deviation of the Gaussian kernels. If we ignore the constants, however, this model is essentially the same as the version of the GCM described above. As the results section will show, this model cannot fully account for the human familiarity data in any of our representational spaces. To improve the model, we introduce two parameters to allow the prior (kernel function height) and standard deviation (kernel function width) to vary with the distinctiveness of the studied exemplar. This modification has two intuitive motivations. First, when humans are asked which of two parent faces a 50% morph is most similar to, if one parent is distinctive and the other parent is typical, subjects tend to choose the more distinctive parent (Tanaka et al., submitted).
Second, we hypothesize that when a human is asked to study and remember a set of faces for a recognition test, faces with few neighbors will likely have more relaxed (wider) discrimination boundaries than faces with many nearby neighbors. Thus in each representation space, for each studied face x, we computed d(x), the theoretical distinctiveness of each face, as the Z-scored average distance to the five nearest studied faces. We then allowed the height and width of each kernel function to vary with d(x): pred_y = Σ_{x∈X} α(1 + c_α d(x)) exp(−||x − y||²/(2(σ(1 + c_σ d(x)))²)). As was the case for GCM and SimSample, we report the results of using a weighted Euclidean distance between y and x in MDS space only.

Model     | MDS space | MDS + weights | PC projections | Gabor jets
GCM       | 0.1633    | 0.1417        | 0.1745         | 0.1624
SimSample | 0.1521    | 0.1404        | 0.1756         | 0.1704
MMOM      | 0.1601    | 0.1528        | 0.1992         | 0.1668

Table 1: RMSE for the three models and three representations. Quality of fit for models with adaptive attentional weights is only reported for the low-dimensional representation ("MDS + weights"). The baseline RMSE, achievable with a constant prediction, is 0.2044. 2.4 Parameter fitting and model evaluation For each of the twelve combinations of models with face representations, we searched parameter space by simple hill climbing for the parameter settings that minimized the mean squared error between the model's predicted P(old) and the actual human P(old) data. We rate each model's effectiveness with two criteria. First, we measure the models' global fit with RMSE over all test set points. A model's RMSE can be compared to the baseline performance of the "dumbest" model, which simply predicts the mean human P(old) of 0.5395, and achieves an RMSE of 0.2044.
Second, we evaluate the extent to which a model predicts the mean human response for each of the six categories of test set stimuli: 1) non-parent targets, 2) non-morph distractors, 3) similar parents, 4) dissimilar parents, 5) similar morphs, and 6) dissimilar morphs. If a model correctly predicts the rank ordering of these category means, it obviously accounts for the similar morph familiarity inversion pattern in the human data. As long as models do an adequate job of fitting the human data overall, as measured by RMSE, we prefer models that predict the morph familiarity inversion effect as a natural consequence of minimizing RMSE. 3 Results Table 1 shows the global fit of each model/representation pair. The SimSample model in MDS space provides the best quantitative fit. GCM generally outperforms MMOM, indicating that for a tight quantitative fit, having parameters for a linear transformation built into the model is more important than allowing the kernel function to vary with distinctiveness. Also of note is that the PC projection representation is consistently outperformed by both the Gabor jet representation and the MDS space representation. But for our purposes, the degree to which a model predicts the mean human responses for each of the six categories of stimuli is more important, given that it is doing a reasonably good job globally. Figure 2 takes a more detailed look at how well each model predicts the human category means. Even though SimSample in MDS space has the best global fit to the human familiarity ratings, it does not predict the familiarity inversion for similar morphs. Only the mixture model in weighted MDS space correctly predicts the morph familiarity effect. All of the other models underpredict the human responses to the similar morphs. 4 Discussion The results for the mixture model are consistent with the hypothesis that facial memory is a kernel density estimation task, with the caveat that distinctive exemplars require larger kernels.
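The distinctiveness-modulated kernel model of Section 2.3 can be sketched in a few lines: kernel height and width both grow with the z-scored mean distance to the five nearest studied faces. The parameter names c_h and c_w and the function layout are our own conventions:

```python
import numpy as np

def distinctiveness(exemplars, k=5):
    """Z-scored mean distance from each studied face to its k nearest studied faces."""
    D = np.linalg.norm(exemplars[:, None, :] - exemplars[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                   # exclude self-distance
    mean_knn = np.sort(D, axis=1)[:, :k].mean(axis=1)
    return (mean_knn - mean_knn.mean()) / mean_knn.std()

def mmom_predict(probe, exemplars, alpha, sigma, c_h, c_w):
    """Distinctiveness-modulated kernel density prediction of P("old")."""
    d = distinctiveness(exemplars)
    height = alpha * (1.0 + c_h * d)              # kernel prior grows with d(x)
    width = sigma * (1.0 + c_w * d)               # kernel width grows with d(x)
    r2 = np.sum((exemplars - probe) ** 2, axis=1)
    return np.sum(height * np.exp(-r2 / (2.0 * width ** 2)))
```

Setting c_h = c_w = 0 recovers the plain two-parameter kernel density model, so the modulated model nests the simple one.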
Whereas true density estimation would tend to deemphasize outliers in sparse areas of the face space, the human data show that the priors and kernel function widths for outliers should actually be increased. Two potentially significant problems with the work presented here are first, we experimented with several models before finding that MMOM was able to predict the morph familiarity inversion effect, and second, we are fitting a single experiment. The model thus must be carefully tested against new data, and its predictions empirically validated. [Figure 2: Average actual/predicted responses to the faces in each category, for each model/representation pair; plot data omitted. Key: DP = Dissimilar parents; SM = Similar morphs; T = Non-parent targets; SP = Similar parents; DM = Dissimilar morphs; D = Distractors.] Since a theoretical distinctiveness measure based on the sparseness of face space around an exemplar was sufficient to account for the similar morphs' familiarity inversion, we predict that distinctiveness with respect to the study set is the critical factor influencing kernel size, rather than context-free human distinctiveness judgments.
We can easily test this prediction by having subjects rate the distinctiveness of the stimuli without prior exposure and then determine whether their distinctiveness ratings improve or degrade the model's fit. A somewhat disappointing (though not particularly surprising) aspect of our results is that the model requires a representation based on human similarity judgments. Ideally, we would prefer to provide an information-processing account using image-based representations like eigenface projections or Gabor filter responses. Interestingly, the efficacy of the image-based representations seems to depend on how similar they are to the MDS representations. The PC projection representation performed the worst, and distances between pairs of PC representations had a correlation of 0.388 with the distances between pairs of MDS representations. For the Gabor filter representation, which performed better, the correlation is 0.517. In future work, we plan to investigate how the MDS representation (or a representation like it) might be derived directly from the face images. Besides providing an information-processing account of the human data, there are several other avenues for future research. These include empirical testing of our distinctiveness predictions, evaluating the applicability of the distinctiveness model in domains other than face processing, and evaluating the ability of other modeling paradigms to account for this data. Acknowledgements We thank Chris Vogt for comments on a previous draft, and other members of Gary's Unbelievable Research Unit (GURU) for earlier comments on this work. This research was supported in part by NIMH grant MH57075 to GWC. References Bishop, C. M. (1995). Neural networks for pattern recognition. Oxford University Press, Oxford. Busey, T. A. (1999). Where are morphed faces in multi-dimensional face space? Psychological Science. In press. Busey, T. A. and Tunnicliff, J. (submitted).
Accounts of blending, distinctiveness and typicality in face recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition. Dailey, M. N., Cottrell, G. W., and Busey, T. A. (1998). Eigenfaces for familiarity. In Proceedings of the Twentieth Annual Conference of the Cognitive Science Society, pages 273-278, Mahwah, NJ. Erlbaum. Gillund, G. and Shiffrin, R. (1984). A retrieval model for both recognition and recall. Psychological Review, 93(4):411-428. Buhmann, J., Lades, M., and von der Malsburg, C. (1990). Size and distortion invariant object recognition by hierarchical graph matching. In Proceedings of the IJCNN International Joint Conference on Neural Networks, volume II, pages 411-416. Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 116(1):39-57. Reinitz, M., Lammers, W., and Cochran, B. (1992). Memory-conjunction errors: Miscombination of stored stimulus features can produce illusions of memory. Memory & Cognition, 20(1):1-11. Solso, R. L. and McCarthy, J. E. (1981). Prototype formation of faces: A case of pseudomemory. British Journal of Psychology, 72(4):499-503. Tanaka, J., Giles, M., Kremen, S., and Simon, V. (submitted). Mapping attractor fields in face space: The atypicality bias in face recognition. Turk, M. and Pentland, A. (1991). Eigenfaces for recognition. The Journal of Cognitive Neuroscience, 3:71-86. Valentine, T. and Endo, M. (1992). Towards an exemplar model of face processing: The effects of race and distinctiveness. The Quarterly Journal of Experimental Psychology, 44A(4):671-703.
|
1998
|
57
|
1,556
|
VLSI Implementation of Motion Centroid Localization for Autonomous Navigation Ralph Etienne-Cummings Dept. of ECE, Johns Hopkins University, Baltimore, MD Viktor Gruev Dept. of ECE, Johns Hopkins University, Baltimore, MD Mohammed Abdel Ghani Dept. of EE, S. Illinois University, Carbondale, IL Abstract A circuit for fast, compact and low-power focal-plane motion centroid localization is presented. This chip, which uses mixed signal CMOS components to implement photodetection, edge detection, ON-set detection and centroid localization, models the retina and superior colliculus. The centroid localization circuit uses time-windowed asynchronously triggered row and column address events and two linear resistive grids to provide the analog coordinates of the motion centroid. This VLSI chip is used to realize fast lightweight autonavigating vehicles. The obstacle avoiding line-following algorithm is discussed. 1 INTRODUCTION Many neuromorphic chips which mimic the analog and parallel characteristics of visual, auditory and cortical neural circuits have been designed [Mead, 1989; Koch, 1995]. Recently researchers have started to combine digital circuits with neuromorphic aVLSI systems [Boahen, 1996]. The persistent doctrine, however, has been that computation should be performed in analog, and only communication should use digital circuits. We have argued that hybrid computational systems are better equipped to handle the high speed processing required for real-world problem solving, while maintaining compatibility with the ubiquitous digital computer [Etienne, 1998]. As a further illustration of this point of view, this paper presents a departure from traditional approaches for focal plane centroid localization by offering a mixed signal solution that is simultaneously high-speed, low power and compact. In addition, the chip is interfaced with an 8-bit microcomputer to implement fast autonomous navigation.
Implementation of centroid localization has been either completely analog or completely digital. The analog implementations, realized in the early 1990s, used focal plane current mode circuits to find a global continuous-time centroid of the pixels' intensities [DeWeerth, 1992]. Due to their sub-threshold operation, these circuits are low power, but slow. On the other hand, the digital solutions do not compute the centroid at the focal plane. They use standard CCD cameras, A/D converters and DSP/CPU to compute the intensity centroid [Mansfield, 1996]. These software approaches offer multiple centroid localization with complex mathematical processing. However, they suffer from the usual high power consumption and non-scalability of traditional digital visual processing systems. Our approach is novel in many aspects. We benefit from the low power, compactness and parallel organization of focal plane analog circuits and the speed, robustness and standard architecture of asynchronous digital circuits. Furthermore, it uses event triggered analog address read-out, which is ideal for the visual centroid localization problem. Moreover, our chip responds to moving targets only by using the ON-set of each pixel in the centroid computation. Lastly, our chip models the retina and two-dimensional saccade motor error maps of superior colliculus on a single chip [Sparks, 1990]. Subsequently, this chip is interfaced with a µC for autonomous obstacle avoidance during line-following navigation. The line-following task is similar to target tracking using the saccadic system, except that the "eye" is fixed and the "head" (the vehicle) moves to maintain fixation on the target. Control signals provided to the vehicle based on decisions made by the µC are used for steering and accelerating/braking. Here the computational flexibility and programmability of the µC allows rapid prototyping of complex and robust algorithms.
2 CENTROID LOCALIZATION The mathematical computation of the centroid of an object on the focal plane uses the intensity-weighted average of the positions of the pixels forming the object [DeWeerth, 1992]. Equation (1) shows this formulation: x̄ = (Σ_{i=1}^{N} I_i x_i)/(Σ_{i=1}^{N} I_i) and ȳ = (Σ_{i=1}^{N} I_i y_i)/(Σ_{i=1}^{N} I_i). (1) The implementation of this representation can be quite involved since a product between the intensity and position is implied. To eliminate this requirement, the intensity of the pixels can be normalized to a single value within the object. This gives equation (2), since the intensity can be factored out of the summations: x̄ = (Σ_{i=1}^{N} x_i)/N and ȳ = (Σ_{i=1}^{N} y_i)/N. (2) [Figure 1: Centroid computation architecture (intensity image projected onto the x- and y-axes). Figure 2: Centroid computation method (positions x_i, x_{i+1}, ..., x_{i+4}; edges from pixels).] Normalization of the intensity using a simple threshold is not advised, since the value of the threshold is dependent on the brightness of the image and the number of pixels forming the object may be altered by the thresholding process. To circumvent these problems, we take the view that the centroid of the object is defined in relation to its boundaries. This implies that edge detection (second order spatial derivative of intensity) can be used to highlight the boundaries, and edge labeling (the zero-crossing of the edges) can be used to normalize the magnitude of the edges. Subsequently, the centroid of the zero-crossings is computed. Equation (2) is then realized by projecting the zero-crossing image onto the x- and y-axes and performing two linear centroid determinations. Figure 1 shows this process. The determination of the centroid is computed using a resistance grid to associate the position of a column (row) with a voltage. In figure 2, the positions are given by the voltages Vi.
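In software, equations (1) and (2) amount to a few lines; this is a sketch of the arithmetic, not of the analog circuit:

```python
import numpy as np

def intensity_centroid(I):
    """Intensity-weighted centroid of an image, per equation (1)."""
    ys, xs = np.indices(I.shape)                  # row and column index grids
    total = I.sum()
    return (I * xs).sum() / total, (I * ys).sum() / total

def edge_centroid(zero_crossings):
    """Normalized centroid of a binary zero-crossing image, per equation (2):
    project the active pixels onto the x- and y-axes and average positions."""
    rows, cols = np.nonzero(zero_crossings)
    return cols.mean(), rows.mean()
```

Because `edge_centroid` averages positions only, its result is independent of image brightness, which is exactly the motivation for normalizing via zero-crossings.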
By activating the column (row) switch when a pixel of the edge image appears in that column (row), the position voltage is connected to the output line through the switch impedance, Rs. As more switches are activated, the voltage on the output line approximates equation (2). Clearly, since no buffers are used to isolate the position voltages, as more switches are activated, the position voltages will also change. This does not pose a problem since the switch resistors are designed to be larger than the position resistors (the switch currents are small compared to the grid current). Equation (3) gives the error between the ideal centroid and the switch-loaded centroid in the worst case, when Rs = 0 Ω. In the equation, N is the number of nodes, M is the number of switches set, and x_1 and x_M are the locations of the first and last set switches, respectively: error = (V_max − V_min) Σ_{i=1}^{M} [ x_i/(M(N+1)) − x_i/(N+1+x_1−x_M) ]. (3) This error is improved as Rs gets larger, and vanishes as N (M ≤ N) approaches infinity. The terms x_i represent an ascending ordered list of the activated switches; x_1 may correspond to column five, for example. This circuit is compact since it uses only a simple linear resistive grid and MOS switches. It is low power because the total grid resistance, N x R, can be large. It can be fast when the parasitic capacitors are kept small. It provides an analog position value, but it is triggered by fast digital signals that activate the switches. 3 MODELING THE RETINA AND SUPERIOR COLLICULUS 3.1 System Overview The centroid computation approach presented in section 2 is used to isolate the location of moving targets on a 2-D focal plane array. Consequently, a chip which realizes a neuromorphic visual target acquisition system based on the saccadic generation mechanism of primates can be implemented.
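As a numerical check of the switch-loading effect discussed in Section 2, the grid can be simulated by nodal analysis. The component values and node conventions below are illustrative assumptions, not the chip's actual parameters:

```python
import numpy as np

def loaded_centroid(N, active, R=1.0, Rs=10.0, Vmin=0.0, Vmax=1.0):
    """Nodal analysis of the switched resistive-grid centroid circuit.

    N grid nodes sit on a chain of resistors R between the Vmin and Vmax
    rails; each activated node is tied through a switch resistance Rs to a
    common floating output line, whose voltage approximates the centroid.
    """
    n = N + 1                                     # N grid nodes + output node
    G = np.zeros((n, n))
    I = np.zeros(n)
    g = 1.0 / R
    for i in range(N):
        if i == 0:                                # left end ties to the Vmin rail
            G[i, i] += g
            I[i] += g * Vmin
        else:
            G[i, i] += g
            G[i, i - 1] -= g
        if i == N - 1:                            # right end ties to the Vmax rail
            G[i, i] += g
            I[i] += g * Vmax
        else:
            G[i, i] += g
            G[i, i + 1] -= g
    gs = 1.0 / Rs
    out = N                                       # index of the output node
    for i in active:                              # closed switches load the grid
        G[i, i] += gs
        G[i, out] -= gs
        G[out, out] += gs
        G[out, i] -= gs
    V = np.linalg.solve(G, I)
    return V[out]
```

Sweeping Rs in this sketch reproduces the qualitative claim in the text: the larger the switch resistance relative to the grid resistance, the closer the output comes to the ideal mean of the activated node voltages.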
The biological saccade generation process is mediated by the superior colliculus, which contains a map of the visual field [Sparks, 1990]. In laboratory experiments, cellular recordings suggest that the superior colliculus provides the spatial location of targets to be foveated. Clearly, a great deal of neural circuitry exists between the superior colliculus and the eye muscle. Horiuchi has built an analog system which replicates most of the neural circuits (including the motor system) which are believed to form the saccadic system [Horiuchi, 1996]. Staying true to the anatomy forced his implementation to be a complex multi-chip system with many control parameters. On the other hand, our approach focuses on realizing a compact single chip solution by only mimicking the behavior of the saccadic system, but not its structure. 3.2 Hardware Implementation Our approach uses a combination of analog and digital circuits to implement the functions of the retina and superior colliculus at the focal plane. We use simple digital control ideas, such as pulse-width modulation and stepper motors, to position the "eye". The retina portion of this chip uses photodiodes, logarithmic compression, edge detection and zero-crossing circuits. These circuits mimic the first three layers of cells in the retina with mixed sub-threshold and strong inversion circuits. The edge detection circuit is realized with an approximation of the Laplacian operator implemented using the difference between a smoothed (with a resistive grid) and unsmoothed version of the image [Mead, 1989]. The high gain of the difference circuit creates a binary image of approximate zero-crossings. After this point, the computation is performed using mixed analog/digital circuits. The zero-crossings are fed to ON-set detectors (positive temporal derivatives) which signal the location of moving or flashing targets.
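A software analogue of this edge-detection/ON-set pipeline can be sketched as follows; the 4-neighbour average stands in for the resistive-grid smoothing, and all function names are ours:

```python
import numpy as np

def edge_map(img):
    """Approximate Laplacian: difference between the image and a smoothed
    copy, followed by a high-gain (sign) comparison giving binary edges."""
    smooth = img.astype(float).copy()
    # 4-neighbour averaging as a stand-in for the resistive-grid smoothing
    smooth[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                          img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    return (img - smooth) > 0

def on_set(prev_edges, curr_edges):
    """ON-set detector: edge pixels that turned on since the previous frame
    (a positive temporal derivative), flagging moving or flashing targets."""
    return curr_edges & ~prev_edges
```

Feeding the ON-set map into a centroid routine completes the software version of the retina-to-colliculus chain described in the text.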
These circuits model the behavior of some of the amacrine and ganglion cells of the primate retina [Barlow, 1982]. These first layers of processing constitute all the "direct" mimicry of the biological models. Figure 3 shows the schematic of these early processing layers. The ON-set detectors provide inputs to the model of the superior colliculus circuits. The ON-set detectors allow us to segment moving targets against textured backgrounds. This is an improvement on earlier centroid and saccade chips that used pixel intensity. The essence of the superior colliculus map is to locate the target that is to be foveated. In our case, the target chosen to be foveated will be moving. Here motion is defined simply as the change in contrast over time. Motion, in this sense, can be seen as being the earliest measurable attribute of the target which can trigger a saccade without requiring any high-level decision making. Subsequently, the coordinates of the motion must be extracted and provided to the motor drivers.

Figure 3: Schematic of the model of the retina.
Figure 4: Schematic of the model of the superior colliculus (edge detection, ON-set detection, motion centroid).

The circuits for locating the target are implemented entirely with mixed-signal, non-neuromorphic circuits. The theoretical foundation for our approach is presented in section 2. The ON-set detector is triggered when an edge of the target appears at a pixel. At this time, the pixel broadcasts its location to the edge of the array by activating row and column lines. This row (column) signal sets a latch at the right (top) of the array. The latches asynchronously activate switches and the centroid of the activated positions is provided. The latches remain set until they are cleared by an external control signal. This control signal provides a time-window over which the centroid output is integrated.
This has the effect of reducing noise by combining the outputs of pixels which are activated at different instances, even if they are triggered by the same motion (an artifact of small fill-factor focal plane image processing). Furthermore, the latches can be masked from the pixels' output with a second control signal. This signal is used to de-activate the centroid circuit during a saccade (saccadic suppression). A centroid valid signal is also generated by the chip. Figure 4 shows a portion of the schematic of the superior colliculus model.

3.3 Results

In contrast to previous work, this chip provides the 2-D coordinates of the centroid of a moving target. Figure 5 shows the oscilloscope trace of the coordinates as a target moves back and forth, in and out of the chip's field of view. The y-coordinate does not change, while the x-coordinate increases and decreases as the target moves to the left and right, respectively. The chip has been used to track targets in 2-D by making micro-saccades. In this case, the chip chases the target as it attempts to escape from the center. The eye movement is performed by converting the analog coordinates into PWM signals, which are used to drive stepper motors. The system performance is limited by the contrast sensitivity of the edge detection circuit, and the frequency response of the edge (high frequency cut-off) and ON-set (low frequency cut-off) detectors. With the appropriate optics, it can track walking or running persons under indoor or outdoor lighting conditions at close or far distances. Table I gives a summary of the chip characteristics.

Figure 5: Oscilloscope trace of the 2-D centroid (varying x-coordinate) for a moving target.

Table I: Chip characteristics.
Technology: 1.2 µm ORBIT
Chip size: 4 mm²
Array size: 12 x 10
Pixel size: 110 x 110 µm
Fill factor: 11%
Intensity: 0.1 µW/cm² - 100 mW/cm²
Min. contrast: 10%
Response time: 2 - 10^6 Hz (@ 1 mW/cm²)
Power (chip): 5 mW (@ 1 mW/cm², Vdd = 6 V)

4 APPLICATION: OBSTACLE AVOIDANCE DURING LINE-FOLLOWING AUTO-NAVIGATION

4.1 System Overview

The frenzy of activity towards developing neuromorphic systems over the past 10 years has been mainly driven by the promise that one day engineers will develop machines that can interact with the environment in a similar way as biological organisms. The prospect of having a robot that can help humans in their daily tasks has been a dream of science fiction for many decades. As can be expected, the key to success is premised on the development of compact systems, with large computational capabilities, at low cost (in terms of hardware and power). Neuromorphic VLSI systems have closed the gap between dreams and reality, but we are still very far from the android robot. For all these robots, autonomous behavior, in the form of auto-navigation in natural environments, must be one of their primary skills. For miniaturization, neuromorphic vision systems, performing most of the pre-processing, can be coupled with small fast computers to realize these compact yet powerful sensor/processor modules.

4.2 Navigation Algorithm

The simplest form of data-driven auto-navigation is the line-following task. In this task, the robot must maintain a certain relationship with some visual cues that guide its motion. In the case of the line-follower, the visual system provides data regarding the state of the line relative to the vehicle, which results in controlling steering and/or speed. If obstacle avoidance is also required, auto-navigation is considerably more difficult. Our system handles line-following and obstacle avoidance by using two neuromorphic visual sensors that provide information to a micro-controller (µC) to steer, accelerate or decelerate the vehicle.
The sensors, which use the centroid location system outlined above, provide information on the position of the line and obstacles to the µC, which provides PWM signals to the servos for controlling the vehicle. The algorithm implemented in the µC places the two sensors in competition with each other to force the line into a blind zone between the sensors. Simultaneously, if an object enters the visual field from outside, it is treated as an obstacle and the µC turns the car away from the object. Obstacle avoidance is given higher priority than line-following to avoid collisions. The µC also keeps track of the direction of avoidance such that the vehicle can be re-oriented towards the line after the obstacle is pushed out of the field of view. Lastly, for line following, the position, orientation and velocity of drift, determined from the temporal derivative of the centroid, are used to track the line. The control strategy is to keep the line in the blind zone, while slowing down at corners, speeding up on straightaways and avoiding obstacles. The angle which the line or obstacle forms with the x-axis also affects the speed. The value of the x-centroid relative to the y-centroid provides a rudimentary estimate of the orientation of the line or obstacle to the vehicle. For example, angles less (greater) than +/- 45 degrees tend to have small (large) x-coordinates and large (small) y-coordinates and require deceleration (acceleration). Figure 6 shows the organization of the sensors on the vehicle and the control spatial zones. Figure 7 shows the vehicle and samples of the line and obstacles.

Figure 6: Block diagram of the autonomous line-follower system (follow, avoidance and blind zones).
Figure 7: A picture of the vehicle.

4.3 Hardware Implementation

The coordinates from the centroid localization circuits are presented to the µC for analysis.
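The competition/blind-zone policy of section 4.2 can be sketched as a single decision function. Everything below is a hedged illustration: the function name, thresholds, and coordinate conventions are invented, and this is not the actual PIC firmware.

```python
# Hedged sketch of the navigation policy described above. All names,
# thresholds and coordinate conventions are illustrative assumptions.
def control(left, right):
    """left/right: (x, y) centroid from each sensor (x in [0, 1],
    0 = blind-zone edge, 1 = outer edge), or None if no target seen."""
    steer, speed = 0.0, 1.0
    for sign, c in ((-1, left), (+1, right)):
        if c is None:
            continue
        x, y = c
        if x > 0.8:                  # target entered from the outer edge:
            return -sign * 1.0, 0.5  # obstacle; steer away and slow down
        steer += sign * x            # competition pushes the line inward
        if abs(x) < abs(y):          # small x, large y: angle under 45 deg,
            speed = min(speed, 0.6)  # so decelerate for the corner
    return steer, speed
```

Obstacle avoidance overrides line-following (it returns immediately), matching the priority ordering stated in the text.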
The µC used is the Microchip PIC16C74. This chip is chosen because of its five A/D inputs and three PWM outputs. The analog coordinates are presented directly to the A/D inputs. Two of the PWM outputs are connected to the steering and speed control servos. The PIC16C74 runs at 20 MHz and has 35 instructions, 4K by 8-b ROM and 80 by 20-b RAM. The program which runs on the PIC determines the control action to take, based on the signals provided by the neuromorphic visual sensors. The vehicle used is a four-wheel drive radio-controlled model car (the radio receiver is disconnected) with Digital Proportional Steering (DPS).

4.4 Results

The vehicle was tested on a track composed of black tape on a gray linoleum floor with black and white obstacles. The track formed a closed loop with two sharp turns and some smooth S-curves. The neuromorphic vision chip was equipped with a 12.5 mm variable iris lens, which limited its field of view to about 10 degrees. Despite the narrow field of view, the car was able to navigate the track at an average speed of 1 m/s without making any errors. On less curvy parts of the track, it accelerated to about 2 m/s and slowed down at the corners. When the speed of the vehicle is scaled up, the errors made are mainly due to over-steering.

5 CONCLUSION

A 2-D model of the saccade generating components of the superior colliculus is presented. This model mimics only the functionality of the saccadic system, using mixed-signal focal plane circuits that realize motion centroid localization. The single chip combines a silicon retina with the superior colliculus model using compact, low power and fast circuits. Finally, the centroid chip is interfaced with an 8-bit µC and vehicle for fast line-following auto-navigation with obstacle avoidance. Here all of the required computation is performed at the visual sensor, and a standard µC is the high-level decision maker.
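The conversion from an analog centroid coordinate (read through the A/D converter) to a servo PWM command can be sketched as a clamped linear map. This is illustrative only, not the actual firmware: the 1.0-2.0 ms pulse range is the usual hobby-servo convention, and the helper name is ours; the 6 V supply is taken from Table I.

```python
# Illustrative only: map an analog centroid coordinate (0..Vdd volts)
# onto a hobby-servo pulse width (assumed 1.0-2.0 ms range).
def coord_to_pulse_ms(v_coord, vdd=6.0, lo_ms=1.0, hi_ms=2.0):
    v = min(max(v_coord, 0.0), vdd)   # clamp to the A/D input range
    return lo_ms + (hi_ms - lo_ms) * v / vdd
```

A mid-scale coordinate (Vdd/2) then maps to the 1.5 ms neutral pulse, i.e. the servo centered on the target.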
References

Barlow H., The Senses: Physiology of the Retina, Cambridge University Press, Cambridge, England, 1982.
Boahen K., "Retinomorphic Vision Systems II: Communication Channel Design," ISCAS 96, Atlanta, GA, 1996.
DeWeerth S. P., "Analog VLSI Circuits for Stimulus Localization and Centroid Computation," Int'l J. Computer Vision, Vol. 8, No. 2, pp. 191-202, 1992.
Etienne-Cummings R., J. Van der Spiegel and P. Mueller, "Neuromorphic and Digital Hybrid Systems," Neuromorphic Systems: Engineering Silicon from Neurobiology, L. Smith and A. Hamilton (Eds.), World Scientific, 1998.
Horiuchi T., T. Morris, C. Koch and S. P. DeWeerth, "Analog VLSI Circuits for Attention-Based Visual Tracking," Advances in Neural Information Processing Systems, Vol. 9, Denver, CO, 1996.
Koch C. and H. Li (Eds.), Vision Chips: Implementing Vision Algorithms with Analog VLSI Circuits, IEEE Computer Press, 1995.
Mansfield P., "Machine Vision Tackles Star Tracking," Laser Focus World, Vol. 30, No. 26, pp. S21-S24, 1996.
Mead C. and M. Ismail (Eds.), Analog VLSI Implementation of Neural Networks, Kluwer Academic Press, Norwell, MA, 1989.
Sparks D., C. Lee and W. Rohrer, "Population Coding of the Direction, Amplitude and Velocity of Saccadic Eye Movements by Neurons in the Superior Colliculus," Proc. Cold Spring Harbor Symp. Quantitative Biology, Vol. LV, 1990.
1998
Discontinuous Recall Transitions Induced By Competition Between Short- and Long-Range Interactions in Recurrent Networks

N.S. Skantzos, C.F. Beckmann and A.C.C. Coolen
Dept of Mathematics, King's College London, Strand, London WC2R 2LS, UK
E-mail: skantzos@mth.kcl.ac.uk, tcoolen@mth.kcl.ac.uk

Abstract

We present exact analytical equilibrium solutions for a class of recurrent neural network models, with both sequential and parallel neuronal dynamics, in which there is a tunable competition between nearest-neighbour and long-range synaptic interactions. This competition is found to induce novel coexistence phenomena as well as discontinuous transitions between pattern recall states, 2-cycles and non-recall states.

1 INTRODUCTION

Analytically solvable models of large recurrent neural networks are bound to be simplified representations of biological reality. In early analytical studies such as [1, 2] neurons were, for instance, only allowed to interact with a strength which was independent of their spatial distance (these are the so-called mean-field models). At present both the statics of infinitely large mean-field models of recurrent networks, as well as their dynamics away from saturation, are well understood, and have obtained the status of textbook or review paper material [3, 4]. The focus in theoretical research of recurrent networks has consequently turned to new areas such as solving the dynamics of large networks close to saturation [5], the analysis of finite-size phenomenology [6], solving biologically more realistic (e.g. spike-based) models [7], or analysing systems with spatial structure. In this paper we analyse models of recurrent networks with spatial structure, in which there are two types of synapses: long-range ones (operating between any pair of neurons), and short-range ones (operating between nearest neighbours only).
In contrast to early papers on spatially structured networks [8], one here finds that, due to the nearest-neighbour interactions, exact solutions based on simple mean-field approaches are ruled out. Instead, the present models can be solved exactly by a combination of mean-field techniques and the so-called transfer matrix method (see [9]). In parameter regimes where the two synapse types compete (where one has long-range excitation with short-range inhibition, or long-range Hebbian synapses with short-range anti-Hebbian synapses) we find interesting and potentially useful novel phenomena, such as coexistence of states and discontinuous transitions between them.

2 MODEL DEFINITIONS

We study models with N binary neuron variables σ_i = ±1, which evolve in time stochastically on the basis of post-synaptic potentials h_i(σ), following

Prob[σ_i(t+1) = ±1] = ½ [1 ± tanh[β h_i(σ(t))]],    h_i(σ) = Σ_{j≠i} J_ij σ_j + θ_i    (1)

The variables J_ij and θ_i represent synaptic interactions and firing thresholds, respectively. The (non-negative) parameter β controls the amount of noise, with β = 0 and β = ∞ corresponding to purely random and purely deterministic response, respectively. If the synaptic matrix is symmetric, both a random sequential execution and a fully parallel execution of the stochastic dynamics (1) will evolve to a unique equilibrium state. The corresponding microscopic state probabilities can then both formally be written in the Boltzmann form P_∞(σ) ~ exp[−βH(σ)], with [10]

H_seq(σ) = − Σ_{i<j} σ_i J_ij σ_j − Σ_i θ_i σ_i,    H_par(σ) = − (1/β) Σ_i log cosh[β h_i(σ)] − Σ_i θ_i σ_i    (2)

For large systems the macroscopic observables of interest can be obtained by differentiation of the free energy per neuron f = − lim_{N→∞} (βN)^{-1} log Σ_σ exp[−βH(σ)], which acts as a generating function.
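The sequential stochastic dynamics (1) can be simulated directly. Below is a minimal sketch (our code, not the paper's): pick a random neuron and set it to +1 with probability ½(1 + tanh(β h_i)); the couplings and thresholds passed in are arbitrary placeholders, whereas the paper's models fix them as in Eqs. (3)-(4).

```python
import math, random

# Minimal sketch of the sequential stochastic dynamics of Eq. (1).
def glauber_step(sigma, J, theta, beta, rng=random):
    """One random-sequential update of the spin list sigma in place."""
    N = len(sigma)
    i = rng.randrange(N)
    # post-synaptic potential h_i = sum_{j != i} J_ij sigma_j + theta_i
    h = sum(J[i][j] * sigma[j] for j in range(N) if j != i) + theta[i]
    p_up = 0.5 * (1.0 + math.tanh(beta * h))
    sigma[i] = 1 if rng.random() < p_up else -1
    return sigma
```

In the noiseless limit β → ∞ the update becomes deterministic alignment with the local field, and for β = 0 it is a fair coin flip, matching the two limits stated below Eq. (1).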
For the synaptic interactions J_ij and the thresholds θ_i we now make the following choice:

model I:  J_ij = (J_l/N) ξ_i ξ_j + J_s (δ_{i,j+1} + δ_{i,j-1}) ξ_i ξ_j,    θ_i = θ ξ_i    (3)

(which corresponds to the result of having stored a binary pattern ξ ∈ {−1, 1}^N through Hebbian-type learning), with J_l, J_s, θ ∈ R and i + N ≡ i. The neurons can be thought of as being arranged on a periodic one-dimensional array, with uniform interactions of strength J_l ξ_i ξ_j / N, in combination with nearest-neighbour interactions of strength J_s ξ_i ξ_j. Note that model I behaves in exactly the same way as the following model II:

model II:  J_ij = J_l/N + J_s (δ_{i,j+1} + δ_{i,j-1}),    θ_i = θ    (4)

since a simple transformation σ_i → σ_i ξ_i maps one model into the other. Taking derivatives of f with respect to the parameters θ and J_s for model II produces our order parameters, expressed as equilibrium expectation values. For sequential dynamics we have

m = −∂f/∂θ = lim_{N→∞} (1/N) Σ_i ⟨σ_i⟩,    a = −∂f/∂J_s = lim_{N→∞} (1/N) Σ_i ⟨σ_{i+1} σ_i⟩    (5)

For parallel dynamics the corresponding expressions turn out to be

m = −(1/2) ∂f/∂θ = lim_{N→∞} (1/N) Σ_i ⟨σ_i⟩,    a = −(1/2) ∂f/∂J_s = lim_{N→∞} (1/N) Σ_i ⟨σ_{i+1} tanh[β h_i(σ)]⟩    (6)

We have simplified (6) with the identities ⟨σ_{i+1} tanh[β h_i(σ)]⟩ = ⟨σ_{i-1} tanh[β h_i(σ)]⟩ and ⟨tanh[β h_i(σ)]⟩ = ⟨σ_i⟩, which follow from (1) and from invariance under the transformation i → N + 1 − i (for all i). For sequential dynamics a describes the average equilibrium-state covariances of neighbouring neurons. For parallel dynamics a gives the average equilibrium-state covariances of neurons at a given time t and their neighbours at time t + 1 (the difference between the two meanings of a will be important in the presence of 2-cycles). In model II m is the average activity in equilibrium, whereas for model I one finds

m = lim_{N→∞} (1/N) Σ_i ξ_i ⟨σ_i⟩

This is the familiar overlap order parameter of associative memory models [1, 2], which measures the quality of pattern recall in equilibrium. The observable a transforms similarly.
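The model II couplings of Eq. (4) are straightforward to build explicitly on a periodic ring; the sketch below is illustrative (the helper name is ours), and model I is recovered by multiplying each J_ij by ξ_i ξ_j.

```python
# Illustrative construction of the model II couplings of Eq. (4):
# a uniform long-range part Jl/N plus a nearest-neighbour part Js
# on a periodic ring of N neurons (self-couplings excluded).
def couplings(N, Jl, Js):
    J = [[0.0 if i == j else Jl / N for j in range(N)] for i in range(N)]
    for i in range(N):
        J[i][(i + 1) % N] += Js   # right neighbour on the ring
        J[i][(i - 1) % N] += Js   # left neighbour on the ring
    return J
```

The resulting matrix is symmetric, which is what guarantees convergence of both the sequential and the parallel dynamics to the Boltzmann equilibria of Eq. (2).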
3 SOLUTION VIA TRANSFER MATRICES

From this stage onwards our analysis will refer to model II, i.e. eqn (4); the results can immediately be translated into the language of model I (3) via the transformation σ_i → σ_i ξ_i. In calculating f it is advantageous to separate terms induced by the long-range synapses from those induced by the short-range ones, via insertion of 1 = ∫ dm δ[m − (1/N) Σ_i σ_i]. Upon using the integral representation of the δ-function, we then arrive at

f = − lim_{N→∞} (βN)^{-1} log ∫ dm dm̂ e^{−βN φ(m, m̂)}

with

φ_seq(m, m̂) = −i m m̂ − mθ − ½ J_l m² − (βN)^{-1} log R_seq(m̂)    (7)
φ_par(m, m̂) = −i m m̂ − mθ − (βN)^{-1} log R_par(m, m̂)    (8)

The quantities R, defined as sums over the neuron configurations σ ∈ {−1, 1}^N, contain all complexities due to the short-range interactions in the model (they describe a periodic one-dimensional system with neighbour interactions only). They can be calculated using the transfer-matrix technique [9], which exploits an interpretation of the summations over the N neuron states σ_i as matrix multiplications, giving R_seq(m̂) = Tr[T_seq^N] and R_par(m, m̂) = Tr[T_par^N], with

T_seq = [ e^{βJ_s − iβm̂}   e^{−βJ_s} ; e^{−βJ_s}   e^{βJ_s + iβm̂} ]
T_par = [ cosh[βw_+] e^{−iβm̂}   cosh[βw_0] ; cosh[βw_0]   cosh[βw_−] e^{iβm̂} ]

where w_0 = J_l m + θ and w_± = w_0 ± 2J_s. The identity Tr[T^N] = λ_+^N + λ_−^N, in which λ_± are the eigenvalues of the 2 x 2 matrix T, enables us to take the limit N → ∞ in our equations. The integral over (m, m̂) is for N → ∞ evaluated by gradient descent, and is dominated by the saddle points of the exponent φ. We thus arrive at the transparent result

f = extr φ(m, m̂),    φ_seq(m, m̂) = −i m m̂ − mθ − ½ J_l m² − (1/β) log λ_+^seq,    φ_par(m, m̂) = −i m m̂ − mθ − (1/β) log λ_+^par    (9)

where λ_+^seq and λ_+^par are the largest eigenvalues of T_seq and T_par. For simplicity, we will restrict ourselves to the case θ = 0; generalisation of what follows to the case of arbitrary θ, by using the full form of (9), is not significantly more difficult.
The expressions defining the value(s) of the order parameter m can now be obtained from the saddle-point equations ∂φ/∂m = ∂φ/∂m̂ = 0. Straightforward differentiation shows that

sequential:  m̂ = i m J_l,  m = G(m; J_l, J_s)
parallel:    m̂ = i m J_l,  m = G(m; J_l, J_s)   for J_l ≥ 0
             m̂ = −i m J_l, m = G(m; −J_l, −J_s)  for J_l < 0    (10)

with

G(m; J_l, J_s) = sinh[β J_l m] / √( sinh²[β J_l m] + e^{−4βJ_s} )    (11)

Note that equations (10, 11) allow us to derive the physical properties of the parallel dynamics model from those of the sequential dynamics model via simple transformations.

4 PHASE TRANSITIONS

Our main order parameter m is to be determined by solving an equation of the form m = G(m), in which G(m) = G(m; J_l, J_s) for both sequential and parallel dynamics with J_l ≥ 0, whereas G(m) = G(m; −J_l, −J_s) for parallel dynamics with J_l < 0. Note that, due to G(0; J_l, J_s) = 0, the trivial solution m = 0 always exists. In order to obtain a phase diagram we have to perform a bifurcation analysis of the equations (10, 11), and determine the combinations of parameter values for which specific non-zero solutions are created or annihilated (the transition lines). Bifurcations of non-zero solutions occur when

m = G(m)  and  1 = G'(m)    (12)

The first equation in (12) states that m must be a solution of the saddle-point problem; the second one states that this solution is in the process of being created/annihilated. Non-zero solutions of m = G(m) can come into existence in two qualitatively different ways: as continuous bifurcations away from the trivial solution m = 0, and as discontinuous bifurcations away from the trivial solution. These two types will have to be treated differently.

4.1 Continuous Transitions

An analytical expression for the lines in the (βJ_s, βJ_l) plane where continuous transitions occur between recall states (where m ≠ 0) and non-recall states (where m = 0) is obtained by solving the coupled equations (12) for m = 0.
This gives:

cont. trans.:  sequential: βJ_l = e^{−2βJ_s};  parallel: βJ_l = e^{−2βJ_s} and βJ_l = −e^{2βJ_s}    (13)

If along the transition lines (13) we inspect the behaviour of G(m) close to m = 0 we can anticipate the possible existence of discontinuous ones, using the properties of G(m) for m → ±∞, in combination with G(−m) = −G(m). Precisely at the lines (13) we have G(m) = m + (1/6) G'''(0) m³ + O(m⁵). Since lim_{m→∞} G(m) = 1 one knows that for G'''(0) > 0 the function G(m) will have to cross the diagonal G(m) = m again at some value m > 0 in order to reach the limit G(∞) = 1. This implies, in combination with G(−m) = −G(m), that a discontinuous transition must have already taken place earlier, and that away from the lines (13) there will consequently be regions where one finds five solutions of m = G(m) (two positive ones, two negative ones, and m = 0). Along the lines (13) the condition G'''(0) > 0, pointing at discontinuous transitions elsewhere, translates into

sequential:  βJ_l > √3 and βJ_s < −¼ log 3
parallel:    |βJ_l| > √3 and |βJ_s| < −¼ log 3    (14)

4.2 Discontinuous Transitions

In the present models it turns out that one can also find an analytical expression for the discontinuous transition lines in the (βJ_s, βJ_l) plane, in the form of a parametrisation. For sequential dynamics one finds a single line, parametrised by x = βJ_l m ∈ [0, ∞):

discont. trans.:  βJ_l(x) = √( x³ / (x − tanh x) ),    βJ_s(x) = −¼ log[ tanh(x) sinh²(x) / (x − tanh x) ]    (15)

Since this parametrisation (15) obeys βJ_s(0) = −¼ log 3 and βJ_l(0) = √3, the discontinuous transition indeed starts precisely at the point predicted by the convexity of G(m) at m = 0, see (14). For sequential dynamics the line (15) gives all non-zero solutions of the coupled equations (12). For parallel dynamics one finds, in addition to (15), a second 'mirror image' transition line, generated by the transformation {βJ_l, βJ_s} ↔ {−βJ_l, −βJ_s}.

5 PHASE DIAGRAMS
Figure 1: Left: phase diagram for sequential dynamics, involving three regions: (i) a region with m = 0 only (here a = tanh[βJ_s]), (ii) a region with two m ≠ 0 fixed-point states (with opposite sign, and with identical a > 0), and (iii) a region where the m = 0 state and the two m ≠ 0 states coexist. The (i) → (ii) and (ii) → (iii) transitions are continuous (solid lines), whereas the (i) → (iii) transition is discontinuous (dashed line). Right: phase diagram for parallel dynamics, involving the above regions and transitions, as well as a second set of transition lines (in the region J_l < 0) which are exact reflections in the origin of the first set. Here, however, the m = 0 region has a = tanh[2βJ_s], the two m ≠ 0 physical solutions describe 2-cycles rather than fixed points, and the J_l < 0 coexistence region describes the coexistence of an m = 0 fixed point and 2-cycles.

Having determined the transition lines in parameter space, we can turn to the phase diagrams. A detailed exposé of the various procedures followed to determine the nature of the various phases, which are also dependent on the type of dynamics used, goes beyond the scope of this presentation; here we can only present the resulting picture.¹ Figure 1 shows the phase diagram for the two types of dynamics, in the (βJ_s, βJ_l) plane (note: of the three parameters {β, J_s, J_l} one is redundant). In contrast to models with nearest-neighbour interactions only (J_l = 0, where no pattern recall will ever occur), and to models with mean-field interactions only (J_s = 0, where pattern recall can occur), the combination of the two interaction types leads to qualitatively new modes of operation.
This is especially so in the competition region, where J_l > 0 and J_s < 0 (Hebbian long-range synapses, combined with anti-Hebbian short-range ones). The novel features of the diagram can play a useful role: phase coexistence ensures that only sufficiently strong recall cues will evoke pattern recognition; the discontinuity of the transition subsequently ensures that in the latter case the recall will be of a substantial quality. In the case of parallel dynamics, similar statements can be made in the opposite region of synaptic competition, but now involving 2-cycles. Since figure 1 cannot show the zero noise region (β = T⁻¹ = ∞), we have also drawn the interesting competition region of the sequential dynamics phase diagram in the (J_l, T) plane, for J_s = −1 (see figure 2, left picture). At T = 0 one finds coexistence of recall states (m ≠ 0) and non-recall states (m = 0) for any J_l > 0, as soon as J_s < 0. In the same figure (right picture) we show the magnitude of the discontinuity in the order parameter m at the discontinuous transition, as a function of βJ_l.

¹Due to the occurrence of imaginary saddle points in (10) and our strategy to eliminate the variable m̂ by using the equation ∂φ(m, m̂)/∂m̂ = 0, it need not be true that the saddle point with the lowest value of φ(m, m̂) is the minimum of φ (complex conjugation can induce curvature sign changes, and in addition the minimum could occur at boundaries or as special limits). Inspection of the status of saddle points and identification of the physical ones in those cases where there are multiple solutions is thus a somewhat technical issue, details of which will be published elsewhere [11].
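The coexistence mechanism and the transition line (15) can be checked numerically. The sketch below is ours, not the paper's code: it parametrises the discontinuous line and iterates the fixed-point equation m = G(m) from different recall cues.

```python
import math

# Numerical sketch: the sequential discontinuous-transition line (15)
# and fixed-point iteration of m = G(m) from Eq. (11).
def G(m, beta, Jl, Js):
    s = math.sinh(beta * Jl * m)
    return s / math.sqrt(s * s + math.exp(-4.0 * beta * Js))

def transition_point(x):
    """Point (beta*Js, beta*Jl) on the line of Eq. (15), x = beta*Jl*m."""
    t = math.tanh(x)
    return (-0.25 * math.log(t * math.sinh(x) ** 2 / (x - t)),
            math.sqrt(x ** 3 / (x - t)))

def overlap(beta, Jl, Js, m0, iters=300):
    """Iterate m = G(m) starting from an initial recall cue m0."""
    m = m0
    for _ in range(iters):
        m = G(m, beta, Jl, Js)
    return m
```

As x → 0, transition_point approaches (−¼ log 3, √3), the endpoint predicted by (14). In a coexistence point such as β = 2, J_l = 3, J_s = −0.8, a strong cue (m0 = 1) converges to a non-zero recall state while a weak cue (m0 = 0.01) decays to the non-recall state m = 0, illustrating that only sufficiently strong cues evoke recall.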
Figure 2: Left picture: alternative presentation of the competition region of the sequential dynamics phase diagram in figure 1. Here the system states and transitions are drawn in the (J_l, T) plane (T = β⁻¹), for J_s = −1. Right picture: the magnitude of the 'jump' of the overlap m along the discontinuous transition line, as a function of βJ_l.

The fact that for parallel dynamics one finds 2-cycles in the lower left corner of the phase diagram (figure 1) can be inferred from the exact dynamical solution available along the line J_s = 0 (see e.g. [4]), provided by the deterministic map m(t+1) = tanh[βJ_l m(t)]. Finally we show, by way of further illustration of the coexistence mechanism, the value of the reduced exponent φ_seq(m) given in (9), evaluated upon elimination of the auxiliary order parameter m̂: φ(m) ≡ φ_seq(m, i m J_l). The result, for the parameter choice (β, J_l) = (2, 3) and for three different short-range coupling strengths (corresponding to the three phase regimes: non-zero recall, coexistence and zero recall) is given in figure 3. In the same figure we also give the sequential dynamics bifurcation diagram displaying the value(s) of the overlap m as a function of βJ_l, for βJ_s = −0.6 (a line crossing all three phase regimes in figure 1).

Figure 3: Left: graph of the reduced exponent φ(m) = φ_seq(m, i m J_l) for the parameter choice (β, J_l) = (2, 3). The three lines (from upper to lower: J_s = −1.2, −0.8, −0.2) correspond to regimes where (i) m ≠ 0 only, (ii) coexistence of trivial and non-trivial recall states occurs, and (iii) m = 0 only. Right: sequential dynamics bifurcation diagram displaying, for βJ_s = −0.6, the possible recall solutions. For a critical βJ_l given by (15), m jumps discontinuously to non-zero values. For increasing values of βJ_l the unstable m ≠
0 solutions converge towards the trivial one until βJ_l = exp(1.2), where a continuous phase transition takes place and m = 0 becomes unstable.

6 DISCUSSION

In this paper we have presented exact analytical equilibrium solutions, for sequential and parallel neuronal dynamics, for a class of recurrent neural network models which allow for a tunable competition between short-range synapses (operating between nearest neighbours only) and long-range ones (operating between any pair of neurons). The present models have been solved exactly by a combination of mean-field techniques and transfer matrix techniques. We found that there exist regions in parameter space where discontinuous transitions take place between states without pattern recall and either states of partial/full pattern recall or 2-cycles. These regions correspond to the ranges of the network parameters where the competition is most evident, for instance, where one has strongly excitatory long-range interactions and strongly inhibitory short-range ones. In addition this competition is found to generate a coexistence of pattern recall states or 2-cycles with the non-recall state, which (in turn) induces a dependence on initial conditions of whether or not recall will take place at all. This study is, however, only a first step. In a similar fashion one can now study more complicated systems, where (in addition to the long-range synapses) the short-range synapses reach beyond nearest neighbours, or where the system is effectively on a two-dimensional (rather than one-dimensional) array. Such models can still be solved using the techniques employed here. A different type of generalisation would be to allow for a competition between synapses which would not all be of a Hebbian form, e.g.
by having long-range Hebbian synapses (modeling processing via pyramidal neurons) in combination with short-range inhibitory synapses without any effect of learning (modeling processing via simple inhibitory inter-neurons). In addition, one could increase the complexity of the model by storing more than just a single pattern. In the latter types of models the various pattern components can no longer be transformed away, and one has to turn to the methods of random field Ising models (see e.g. [12]).

References

[1] D.J. Amit, H. Gutfreund and H. Sompolinsky (1985), Phys. Rev. A 32, 1007-1018
[2] D.J. Amit, H. Gutfreund and H. Sompolinsky (1985), Phys. Rev. Lett. 55, 1530-1533
[3] A.C.C. Coolen and D. Sherrington (1993), in J.G. Taylor (editor) Mathematical Approaches to Neural Networks, Elsevier Science Publishers, 293-306
[4] A.C.C. Coolen (1997), Statistical Mechanics of Neural Networks, King's College London Lecture Notes
[5] A.C.C. Coolen, S.N. Laughton and D. Sherrington (1996), in D.S. Touretzky, M.C. Mozer and M.E. Hasselmo (eds) Advances in Neural Information Processing Systems 8, MIT Press
[6] A. Castellanos, A.C.C. Coolen and L. Viana (1998), J. Phys. A 31, 6615-6634
[7] E. Domany, J.L. van Hemmen and K. Schulten (eds) (1994), Models of Neural Networks II, Springer
[8] A.C.C. Coolen and L.G.V.M. Lenders (1992), J. Phys. A 25, 2593-2606
[9] J.M. Yeomans (1992), Statistical Mechanics of Phase Transitions, Oxford U.P.
[10] P. Peretto (1984), Biol. Cybern. 50, 51-62
[11] N.S. Skantzos and A.C.C. Coolen (1998), in preparation
[12] U. Brandt and W. Gross (1978), Z. Physik B 31, 237-245
1998
Learning from Dyadic Data

Thomas Hofmann*, Jan Puzicha+, Michael I. Jordan*
* Center for Biological and Computational Learning, M.I.T., Cambridge, MA, {hofmann, jordan}@ai.mit.edu
+ Institut für Informatik III, Universität Bonn, Germany, jan@cs.uni-bonn.de

Abstract

Dyadic data refers to a domain with two finite sets of objects in which observations are made for dyads, i.e., pairs with one element from either set. This type of data arises naturally in many applications ranging from computational linguistics and information retrieval to preference analysis and computer vision. In this paper, we present a systematic, domain-independent framework for learning from dyadic data by statistical mixture models. Our approach covers different models with flat and hierarchical latent class structures. We propose an annealed version of the standard EM algorithm for model fitting which is empirically evaluated on a variety of data sets from different domains.

1 Introduction

Over the past decade learning from data has become a highly active field of research distributed over many disciplines like pattern recognition, neural computation, statistics, machine learning, and data mining. Most domain-independent learning architectures, as well as the underlying theories of learning, have been focusing on a feature-based data representation by vectors in a Euclidean space. For this restricted case substantial progress has been achieved. However, a variety of important problems does not fit into this setting and far fewer advances have been made for data types based on different representations. In this paper, we will present a general framework for unsupervised learning from dyadic data. The notion dyadic refers to a domain with two (abstract) sets of objects, X = {x_1, ..., x_N} and Y = {y_1, ..., y_M}, in which observations S are made for dyads (x_i, y_k).
In the simplest case - on which we focus - an elementary observation consists just of (x_i, y_k) itself, i.e., a co-occurrence of x_i and y_k, while other cases may also provide a scalar value w_ik (strength of preference or association). Some exemplary application areas are: (i) Computational linguistics, with the corpus-based statistical analysis of word co-occurrences and applications in language modeling, word clustering, word sense disambiguation, and thesaurus construction. (ii) Text-based information retrieval, where X may correspond to a document collection, Y to keywords, and (x_i, y_k) would represent the occurrence of a term y_k in a document x_i. (iii) Modeling of preference and consumption behavior, by identifying X with individuals and Y with objects or stimuli, as in collaborative filtering. (iv) Computer vision, in particular in the context of image segmentation, where X corresponds to image locations, Y to discretized or categorical feature values, and a dyad (x_i, y_k) represents a feature y_k observed at a particular location x_i.

2 Mixture Models for Dyadic Data

Across different domains there are at least two tasks which play a fundamental role in unsupervised learning from dyadic data: (i) probabilistic modeling, i.e., learning a joint or conditional probability model over X × Y, and (ii) structure discovery, e.g., identifying clusters and data hierarchies. The key problem in probabilistic modeling is data sparseness: how can probabilities for rarely observed or even unobserved co-occurrences be reliably estimated? As an answer we propose a model-based approach and formulate latent class or mixture models. The latter have the further advantage of offering a unifying method for probabilistic modeling and structure discovery. There are at least three (four, if both variants in II are counted) different ways of defining latent class models:

I. The most direct way is to introduce an (unobserved) mapping c : X × Y → {c_1, ..., c_K} that partitions X × Y into K classes. This type of model is called aspect-based, and the pre-image c^{-1}(c_a) is referred to as an aspect.

II. Alternatively, a class can be defined as a subset of one of the spaces X (or Y by symmetry, yielding a different model), i.e., c : X → {c_1, ..., c_K}, which induces a unique partitioning on X × Y by c(x_i, y_k) ≡ c(x_i). This model is referred to as one-sided clustering, and c^{-1}(c_a) ⊆ X is called a cluster.

III. If latent classes are defined for both sets, c : X → {c_1^x, ..., c_K^x} and c : Y → {c_1^y, ..., c_L^y}, respectively, this induces a mapping c which is a K·L partitioning of X × Y. This model is called two-sided clustering.

2.1 Aspect Model for Dyadic Data

In order to specify an aspect model we make the assumption that all co-occurrences in the sample set S are i.i.d. and that x_i and y_k are conditionally independent given the class. With parameters P(x_i|c_a), P(y_k|c_a) for the class-conditional distributions and prior probabilities P(c_a), the complete data probability can be written as

P(S, c) = Π_{i,k} [P(c_ik) P(x_i|c_ik) P(y_k|c_ik)]^{n(x_i, y_k)},   (1)

where n(x_i, y_k) are the empirical counts for dyads in S and c_ik ≡ c(x_i, y_k). By summing over the latent variables c the usual mixture formulation is obtained,

P(S) = Π_{i,k} P(x_i, y_k)^{n(x_i, y_k)},  where  P(x_i, y_k) = Σ_a P(c_a) P(x_i|c_a) P(y_k|c_a).   (2)

Following the standard Expectation Maximization approach for maximum likelihood estimation [Dempster et al., 1977], the E-step equations for the class posterior probabilities are given by(1)

P{c_ik = c_a} ∝ P(c_a) P(x_i|c_a) P(y_k|c_a).   (3)

(1) In the case of multiple observations of dyads it has been assumed that each observation may have a different latent class. If only one latent class variable is introduced for each dyad, slightly different equations are obtained.
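A minimal NumPy sketch of the resulting EM iteration for the aspect model — our own illustrative code, not the authors' implementation; it assumes the data has been summarized as a dense count matrix n(x_i, y_k), and all function and variable names are ours:

```python
import numpy as np

def aspect_em(n, K, iters=50, seed=0):
    """EM for the aspect model P(x,y) = sum_a P(c_a) P(x|c_a) P(y|c_a).

    n : (N, M) array of co-occurrence counts n(x_i, y_k)
    K : number of latent aspects
    Returns (p_c, p_x_c, p_y_c) with shapes (K,), (N, K), (M, K).
    """
    rng = np.random.default_rng(seed)
    N, M = n.shape
    p_c = np.full(K, 1.0 / K)
    p_x_c = rng.dirichlet(np.ones(N), size=K).T  # columns sum to one
    p_y_c = rng.dirichlet(np.ones(M), size=K).T
    for _ in range(iters):
        # E-step: posterior over aspects for every dyad (x_i, y_k)
        post = p_c[None, None, :] * p_x_c[:, None, :] * p_y_c[None, :, :]
        post /= post.sum(axis=2, keepdims=True)
        # M-step: re-estimate from expected counts n(x_i,y_k) P{c_ik = c_a}
        w = n[:, :, None] * post
        p_c = w.sum(axis=(0, 1))
        p_c /= p_c.sum()
        p_x_c = w.sum(axis=1)
        p_x_c /= p_x_c.sum(axis=0, keepdims=True)
        p_y_c = w.sum(axis=0)
        p_y_c /= p_y_c.sum(axis=0, keepdims=True)
    return p_c, p_x_c, p_y_c
```

Run on a word-by-word bigram count matrix, this yields the kind of aspect decomposition visualized in Figure 1 of the paper.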
Figure 1: Some aspects of the Bible (bigrams).

It is straightforward to derive the M-step re-estimation formulae

P(c_a) ∝ Σ_{i,k} n(x_i, y_k) P{c_ik = c_a},   P(x_i|c_a) ∝ Σ_k n(x_i, y_k) P{c_ik = c_a},   (4)

and an analogous equation for P(y_k|c_a). By re-parameterization the aspect model can also be characterized by a cross-entropy criterion. Moreover, formal equivalence to the aggregate Markov model, independently proposed for language modeling in [Saul, Pereira, 1997], has been established (cf. [Hofmann, Puzicha, 1998] for details).

2.2 One-Sided Clustering Model

The complete data model proposed for the one-sided clustering model is

P(S, c) = P(c) P(S|c) = (Π_i P(c(x_i))) (Π_{i,k} [P(x_i) P(y_k|c(x_i))]^{n(x_i, y_k)}),   (5)

where we have made the assumption that observations (x_i, y_k) for a particular x_i are conditionally independent given c(x_i). This effectively defines the mixture

P(S) = Π_i P(S_i),   P(S_i) = Σ_a P(c_a) Π_k [P(x_i) P(y_k|c_a)]^{n(x_i, y_k)},   (6)

where S_i are all observations involving x_i. Notice that co-occurrences in S_i are not independent (as they are in the aspect model), but get coupled by the (shared) latent variable c(x_i). As before, it is straightforward to derive an EM algorithm with update equations

P{c(x_i) = c_a} ∝ P(c_a) Π_k P(y_k|c_a)^{n(x_i, y_k)},   P(y_k|c_a) ∝ Σ_i n(x_i, y_k) P{c(x_i) = c_a},   (7)

and P(c_a) ∝ Σ_i P{c(x_i) = c_a}, P(x_i) ∝ Σ_j n(x_i, y_j). The one-sided clustering model is similar to the distributional clustering model [Pereira et al., 1993]; however, there are two important differences: (i) the number of likelihood contributions in (7) scales with the number of observations - a fact which follows from Bayes' rule - and (ii) mixing proportions are missing in the original distributional clustering model. The one-sided clustering model corresponds to an unsupervised version of the naive Bayes classifier, if we interpret Y as a feature space for objects x_i ∈ X. There are also ways to weaken the conditional independence assumption, e.g., by utilizing a mixture of tree dependency models [Meila, Jordan, 1998].

Figure 2: Exemplary segmentation results on Aerial by one-sided clustering.

2.3 Two-Sided Clustering Model

The latent variable structure of the two-sided clustering model significantly reduces the degrees of freedom in the specification of the class-conditional distribution. We propose the following complete data model

P(S, c) = Π_{i,k} P(c(x_i)) P(c(y_k)) [P(x_i) P(y_k) π_{c(x_i), c(y_k)}]^{n(x_i, y_k)},   (8)

where the π_{c^x, c^y} are cluster association parameters. In this model the latent variables in the X and Y space are coupled by the π-parameters. Therefore, there exists no simple mixture model representation for P(S). Skipping some of the technical details (cf. [Hofmann, Puzicha, 1998]), we obtain P(x_i) ∝ Σ_k n(x_i, y_k), P(y_k) ∝ Σ_i n(x_i, y_k) and the M-step equations

π_{c^x, c^y} = Σ_{i,k} n(x_i, y_k) P{c(x_i) = c^x ∧ c(y_k) = c^y} / ([Σ_i P{c(x_i) = c^x} Σ_k n(x_i, y_k)] [Σ_k P{c(y_k) = c^y} Σ_i n(x_i, y_k)]),   (9)

as well as P(c^x) ∝ Σ_i P{c(x_i) = c^x} and P(c^y) ∝ Σ_k P{c(y_k) = c^y}. To preserve tractability for the remaining problem of computing the posterior probabilities in the E-step, we apply a factorial approximation (mean field approximation), i.e., P{c(x_i) = c^x ∧ c(y_k) = c^y} ≈ P{c(x_i) = c^x} P{c(y_k) = c^y}.
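As an illustration of the one-sided updates in equation (7), a minimal NumPy sketch — again our own code, not the paper's; the small smoothing constant is an assumption added for numerical safety, and the E-step is computed in log space because the product over k underflows quickly:

```python
import numpy as np

def one_sided_em(n, K, iters=50, seed=0):
    """EM for one-sided clustering: one latent class per object x_i.

    n : (N, M) count matrix; K : number of clusters.
    Returns (p_c, p_y_c, post) with shapes (K,), (K, M), (N, K).
    """
    rng = np.random.default_rng(seed)
    N, M = n.shape
    p_c = np.full(K, 1.0 / K)
    p_y_c = rng.dirichlet(np.ones(M), size=K)  # rows sum to one
    for _ in range(iters):
        # E-step: log P{c(x_i)=c_a} = log P(c_a) + sum_k n(x_i,y_k) log P(y_k|c_a)
        log_post = np.log(p_c)[None, :] + n @ np.log(p_y_c).T
        log_post -= log_post.max(axis=1, keepdims=True)  # stabilize
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: P(y_k|c_a) proportional to sum_i n(x_i,y_k) P{c(x_i)=c_a}
        p_y_c = post.T @ n + 1e-12  # smoothing: our assumption, not in the paper
        p_y_c /= p_y_c.sum(axis=1, keepdims=True)
        p_c = post.mean(axis=0)
    return p_c, p_y_c, post
```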
This results in the following coupled approximation equations for the marginal posterior probabilities,

P{c(x_i) = c^x} ∝ P(c^x) exp[Σ_k n(x_i, y_k) Σ_{c^y} P{c(y_k) = c^y} log π_{c^x, c^y}],   (10)

and a similar equation for P{c(y_k) = c^y}. The resulting approximate EM algorithm performs updates according to the sequence (c^x-posteriors, π, c^y-posteriors, π). Intuitively, the (probabilistic) clustering in one set is optimized in alternation for a given clustering in the other space, and vice versa. The two-sided clustering model can also be shown to maximize a mutual information criterion [Hofmann, Puzicha, 1998].

2.4 Discussion: Aspects and Clusters

To better understand the differences between the presented models it is elucidating to systematically compare the conditional probabilities P(c_a|x_i) and P(c_a|y_k):

             Aspect Model               One-sided X Clustering     One-sided Y Clustering     Two-sided Clustering
P(c_a|x_i)   P(x_i|c_a)P(c_a)/P(x_i)    P{c(x_i) = c_a}            P(x_i|c_a)P(c_a)/P(x_i)    P{c(x_i) = c_a^x}
P(c_a|y_k)   P(y_k|c_a)P(c_a)/P(y_k)    P(y_k|c_a)P(c_a)/P(y_k)    P{c(y_k) = c_a}            P{c(y_k) = c_a^y}

As can be seen from the above table, the probabilities P(c_a|x_i) and P(c_a|y_k) correspond to posterior probabilities of latent variables if clusters are defined in the X- and Y-space, respectively. Otherwise, they are computed from model parameters. This is a crucial difference as, for example, the posterior probabilities approach Boolean values in the infinite data limit, and P(y_k|x_i) = Σ_a P{c(x_i) = c_a} P(y_k|c_a) converges to one of the class-conditional distributions. Yet, in the aspect model P(y_k|x_i) = Σ_a P(c_a|x_i) P(y_k|c_a), with P(c_a|x_i) ∝ P(c_a) P(x_i|c_a), typically does not peak more sharply with an increasing number of observations. In the aspect model, conditionals P(y_k|x_i) are inherently a weighted sum of the 'prototypical' distributions P(y_k|c_a). Cluster models in turn ultimately look for the 'best' class-conditional distribution, and weights are only indirectly induced by the posterior uncertainty.

Figure 3: Two-sided clustering of LOB: π matrix and most probable words.

3 The Cluster-Abstraction Model

The models discussed in Section 2 all define a non-hierarchical, 'flat' latent class structure. However, for structure discovery it is important to find hierarchical data organizations. There are well-known architectures like the Hierarchical Mixtures of Experts [Jordan, Jacobs, 1994] which fit hierarchical models.
Yet, in the case of dyadic data there is an alternative possibility to define a hierarchical model. The Cluster-Abstraction Model (CAM) is a clustering model (e.g., in X) where the conditionals P(y_k|c_a) are themselves x_i-specific aspect mixtures, P(y_k|c_a, x_i) = Σ_ν P(y_k|a_ν) P(a_ν|c_a, x_i), with a latent aspect mapping a. To obtain a hierarchical organization, clusters c_a are identified with the terminal nodes of a hierarchy (e.g., a complete binary tree) and aspects a_ν with inner and terminal nodes. As a compatibility constraint it is imposed that P(a_ν|c_a, x_i) = 0 whenever the node corresponding to a_ν is not on the path to the terminal node c_a. Intuitively, conditioned on a 'horizontal' clustering c, all observations (x_i, y_k) ∈ S_i for a particular x_i have to be generated from one of the 'vertical' abstraction levels on the path to c(x_i). Since different clusters share aspects according to their topological relation, this favors a meaningful hierarchical organization of clusters. Moreover, aspects at inner nodes do not simply represent averages over clusters in their subtree, as they are forced to explicitly represent what is common to all subsequent clusters. Skipping the technical details, the E-step is given by

P{a(x_i, y_k) = a_ν | c(x_i) = c_a} ∝ P(a_ν|c_a, x_i) P(y_k|a_ν),   (11)

P{c(x_i) = c_a} ∝ P(c_a) Π_k [Σ_ν P(a_ν|c_a, x_i) P(y_k|a_ν)]^{n(x_i, y_k)},   (12)

and the M-step formulae are P(y_k|a_ν) ∝ Σ_i P{a(x_i, y_k) = a_ν} n(x_i, y_k), P(c_a) ∝ Σ_i P{c(x_i) = c_a}, and P(a_ν|c_a, x_i) ∝ Σ_k P{a(x_i, y_k) = a_ν | c(x_i) = c_a} n(x_i, y_k).

Figure 4: Parts of the top levels of a hierarchical clustering solution for the Neural document collection; aspects are represented by their 5 most probable word stems.

4 Annealed Expectation Maximization

Annealed EM is a generalization of EM based on the idea of deterministic annealing [Rose et al., 1990] that has been successfully applied as a heuristic optimization technique to many clustering and mixture problems. Annealing reduces the sensitivity to local maxima, but, even more importantly in this context, it may also improve the generalization performance compared to maximum likelihood estimation.(2) The key idea in annealed EM is to introduce an (inverse temperature) parameter β, and to replace the negative (averaged) complete data log-likelihood by a substitute known as the free energy (both are in fact equivalent at β = 1).
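In practice the free-energy substitution reduces to a single change in the E-step: the likelihood contribution in Bayes' rule is raised to the power β. A minimal sketch of that modification (our naming), applicable to any of the latent class models above:

```python
import numpy as np

def annealed_posterior(log_lik, log_prior, beta):
    """Annealed E-step: responsibilities proportional to P(c_a) * likelihood**beta.

    log_lik : (N, K) per-class log-likelihoods, log_prior : (K,).
    beta = 1 recovers the standard EM E-step; beta -> 0 flattens the
    responsibilities toward the prior, which counteracts overfitting.
    """
    z = log_prior[None, :] + beta * log_lik
    z -= z.max(axis=1, keepdims=True)  # numerical stabilization
    post = np.exp(z)
    return post / post.sum(axis=1, keepdims=True)
```

As in the paper, β would be chosen on a held-out validation set rather than fixed a priori.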
This effectively results in a simple modification of the E-step by taking the likelihood contribution in Bayes' rule to the power of β. In order to determine the optimal value for β we used an additional validation set in a cross-validation procedure.

(2) Moreover, the tree topology for the CAM is heuristically grown via phase transitions.

5 Results and Conclusions

In our experiments we have utilized the following real-world data sets: (i) Cranfield: a standard test collection from information retrieval (N = 1400, M = 4898), (ii) Penn: adjective-noun co-occurrences from the Penn Treebank corpus (N = 6931, M = 4995) and the LOB corpus (N = 5448, M = 6052), (iii) Neural: a document collection with abstracts of journal papers on neural networks (N = 1278, M = 6065), (iv) Bible: word bigrams from the bible edition of the Gutenberg project (N = M = 12858), (v) Aerial: textured aerial images for segmentation (N = 128×128, M = 192).

In Fig. 1 we have visualized an aspect model fitted to the Bible bigram data. Notice that although X = Y, the role of the preceding and the subsequent words in bigrams is quite different. Segmentation results obtained on Aerial applying the one-sided clustering model are depicted in Fig. 2. A multi-scale Gabor filter bank (3 octaves, 4 orientations) was utilized as an image representation (cf. [Hofmann et al., 1998]). In Fig. 3 a two-sided clustering solution of LOB is shown. Fig. 4 shows the top levels of the hierarchy found by the Cluster-Abstraction Model on Neural. The inner node distributions provide resolution-specific descriptors for the documents in the corresponding subtree, which can be utilized, e.g., in interactive browsing for information retrieval. Fig. 5 shows typical test set perplexity curves of the annealed EM algorithm for the aspect and clustering models (perplexity P = e^{-l}, where l is the per-observation log-likelihood). At β = 1 (standard EM) overfitting is clearly visible, an effect that vanishes with decreasing β. Annealed learning also performs better than standard EM with early stopping. Table 1 systematically summarizes perplexity results for different models and data sets.

Figure 5: Perplexity curves for annealed EM (aspect model (a), (b) and one-sided clustering model (c)) on the Bible and Cran data.

Table 1: Perplexity results for different models on the Cran (predicting words conditioned on documents) and Penn data (predicting nouns conditioned on adjectives). For each model, β is the selected inverse temperature and P the resulting perplexity.

         Cran                                                    Penn
K        Aspect      X-cluster    CAM         X/Y-cluster        Aspect      X-cluster    CAM         X/Y-cluster
         β    P      β    P       β    P      β    P             β    P      β    P       β    P      β    P
1             685                                                     639
8        0.88 482    0.09 527     0.18 511    0.67 615           0.73 312    0.08 352     0.13 322    0.55 394
16       0.72 255    0.07 302     0.10 268    0.51 335           0.72 255    0.07 302     0.10 268    0.51 335
32       0.83 386    0.07 452     0.12 438    0.53 506           0.71 205    0.07 254     0.08 226    0.46 286
64       0.79 360    0.06 527     0.11 422    0.48 477           0.69 182    0.07 223     0.07 204    0.44 272
128      0.78 353    0.04 663     0.10 410    0.45 462           0.68 166    0.06 231     0.06 179    0.40 241

In conclusion, mixture models for dyadic data have shown a broad application potential. Annealing yields a substantial improvement in generalization performance compared to standard EM, in particular for clustering models, and also outperforms a complexity control via K. In terms of perplexity, the aspect model has the best performance. Detailed performance studies and comparisons with other state-of-the-art techniques will appear in forthcoming papers.

References

[Dempster et al., 1977] Dempster, A.P., Laird, N.M., Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statist. Soc. B, 39, 1-38.
[Hofmann, Puzicha, 1998] Hofmann, T., Puzicha, J. (1998). Statistical models for co-occurrence data. Tech. rep., Artificial Intelligence Laboratory Memo 1625, M.I.T.
[Hofmann et al., 1998] Hofmann, T., Puzicha, J., Buhmann, J.M. (1998). Unsupervised texture segmentation in a deterministic annealing framework. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8), 803-818.
[Jordan, Jacobs, 1994] Jordan, M.I., Jacobs, R.A. (1994). Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2), 181-214.
[Meila, Jordan, 1998] Meila, M., Jordan, M.I. (1998). Estimating dependency structure as a hidden variable. In: Advances in Neural Information Processing Systems 10.
[Pereira et al., 1993] Pereira, F.C.N., Tishby, N.Z., Lee, L. (1993). Distributional clustering of English words. In: Proceedings of the ACL, 189-190.
[Rose et al., 1990] Rose, K., Gurewitz, E., Fox, G. (1990). Statistical mechanics and phase transitions in clustering. Physical Review Letters, 65(8), 945-948.
[Saul, Pereira, 1997] Saul, L., Pereira, F. (1997). Aggregate and mixed-order Markov models for statistical language processing. In: Proceedings of the 2nd International Conference on Empirical Methods in Natural Language Processing.
| 1998 | 6 | 1,559 |
An Integrated Vision Sensor for the Computation of Optical Flow Singular Points

Charles M. Higgins and Christof Koch
Division of Biology, 139-74, California Institute of Technology, Pasadena, CA 91125
[chuck,koch]@klab.caltech.edu

Abstract

A robust, integrative algorithm is presented for computing the position of the focus of expansion or axis of rotation (the singular point) in optical flow fields such as those generated by self-motion. Measurements are shown of a fully parallel CMOS analog VLSI motion sensor array which computes the direction of local motion (sign of optical flow) at each pixel and can directly implement this algorithm. The flow field singular point is computed in real time with a power consumption of less than 2 mW. Computation of the singular point for more general flow fields requires measures of field expansion and rotation, which it is shown can also be computed in real-time hardware, again using only the sign of the optical flow field. These measures, along with the location of the singular point, provide robust real-time self-motion information for the visual guidance of a moving platform such as a robot.

1 INTRODUCTION

Visually guided navigation of autonomous vehicles requires robust measures of self-motion in the environment. The heading direction, which corresponds to the focus of expansion in the visual scene for a fixed viewing angle, is one of the primary sources of guidance information. Psychophysical experiments [WH88] show that humans can determine their heading direction very precisely. In general, the location of the singular point in the visual field provides important self-motion information. Optical flow, representing the motion seen in each local area of the visual field, is particularly compute-intensive to process in real time. We have previously shown [DHK97] a fully parallel, low power, CMOS analog VLSI vision processor for computing the local direction of motion.
With onboard photoreceptors, each pixel computes in continuous time a vector corresponding to the sign of the local normal flow. In this article, we show how these motion vectors can be integrated in hardware to compute the singular point of the optical flow field. While each individual pixel suffers from transistor mismatch and spatial variability with respect to its neighbors, the integration of many pixels serves to average out these irregularities and results in a highly robust computation. This compact, low power self-motion processor is well suited for autonomous vehicle applications.

Extraction of self-motion information has been a topic of research in the machine vision community for decades, and has generated volumes of research; see [FA97] for a good review. While many algorithms exist for determining flow field singular points in complex self-motion situations, few are suitable for real-time implementation. Integrated hardware attempts at self-motion processing have only begun recently, with the work of Indiveri et al. [IKK96]. The zero crossing in a 1D array of CMOS velocity sensors was used to detect one component of the focus of expansion. In a separate chip, the sum of a radial array of velocity sensors was used to compute the rate of flow field expansion, from which the time-to-contact can be calculated. McQuirk [McQ96] built a CCD-based image processor which used an iterative algorithm to locate consistent stable points in the image, and thus the focus of expansion. More recently, Deutschmann et al. [DW98] have extended Indiveri et al.'s work to 2D by summing rows and columns in a 2D CMOS motion sensor array and using software to detect zero crossings and find the flow field singular point.

2 SINGULAR POINT ALGORITHM

In order to compute the flow field singular point, we compute the sum of the sign of optical flow over the entire field of view.
Let the field of view be centered at (0,0) and bounded by ±L in both spatial dimensions; then (vector quantities are indicated in boldface)

S = ∫_{-L}^{L} ∫_{-L}^{L} U(x, y) dx dy,   (1)

where U(x, y) = (U_x(x, y), U_y(x, y)) = sgn(V(x, y)) and V(x, y) is the optical flow field. Consider a purely expanding flow field with the focus of expansion (FOE) at the center of the visual field. Intuitively, the vector sum of the sign of optical flow will be zero, because each component is balanced by a spatially symmetric component with opposite sign. As the FOE moves away from the center of the visual field, the sum will increase or decrease depending on the FOE position. An expanding flow field may be expressed as

V_e(x, y) = A(x, y) · ((x - X_e), (y - Y_e)),   (2)

where A(x, y) denotes the local rate of expansion and (X_e, Y_e) is the focus of expansion. The integral (1) applied to this flow field yields

S = -4L · (X_e, Y_e)

as long as A is positive. Note that, due to the use of optical flow sign only, this quantity is independent of the speed of the flow field components. We will discuss in Section 5 how the positivity requirement of A can be relaxed somewhat. Similarly, a clockwise rotating flow field may be expressed as

V_r(x, y) = B(x, y) · ((y - Y_r), -(x - X_r)),   (3)

where B(x, y) denotes the local rate of rotation and (X_r, Y_r) is the axis of rotation (AOR). The integral (1) applied to this flow field yields

S = -4L · (Y_r, -X_r)

as long as B is positive. Let us now consider the case of a combination of these expanding and rotating fields (2) and (3):

V(x, y) = a V_e + (1 - a) V_r.   (4)

This flow field is spiral in shape; the parameter a defines the mix of the two field types.
The sum in this case is more complex to evaluate, but for a small (rotation dominating),

S = -4L · (C X_e + Y_r, C Y_e - X_r),   (5)

and for a large (expansion dominating),

S = -4L · (X_e + (1/C) Y_r, Y_e - (1/C) X_r),   (6)

where C = aA / ((1 - a)B). Since it is mathematically impossible to recover both the FOE and AOR with only two equations,(1) let us equate the FOE and AOR and concentrate on recovering the unique singular point of this spiral flow field. In order to do this, we need a measurement of the quantity C, which reflects the relative mix and strength of the expanding and rotating flow fields.

2.1 COEFFICIENTS OF EXPANSION AND ROTATION

Consider a contour integral around the periphery of the visual field of the sign of optical flow components normal to the contour of integration. If we let this contour be a square of size 2L centered at (0,0), we can express this integral as

8L C_exp = ∫_{-L}^{L} (U_y(x, L) - U_y(x, -L)) dx + ∫_{-L}^{L} (U_x(L, y) - U_x(-L, y)) dy.   (7)

This integral can be considered as a 'template' for expanding flow fields. The quantity C_exp reaches unity for a purely expanding flow field with FOE within the visual field, and reaches zero for a purely rotating flow field. A similar quantity for rotation may be defined by an integral of the sign of optical flow components parallel to the contour of integration:

8L C_rot = ∫_{-L}^{L} (U_x(x, L) - U_x(x, -L)) dx + ∫_{-L}^{L} (U_y(-L, y) - U_y(L, y)) dy.   (8)

It can be shown that for a small (rotation dominating), C_exp ≈ C. As a increases, C_exp saturates at unity. Similarly, for a large (expansion dominating), C_rot ≈ 1/C. As a decreases, C_rot saturates at unity. This suggests the following approximation to equations (5) and (6), letting X_s = X_e = X_r and Y_s = Y_e = Y_r:

S = -4L · (C_exp X_s + C_rot Y_s, C_exp Y_s - C_rot X_s),   (9)

from which equation the singular point (X_s, Y_s) may be uniquely calculated. Note that this generalized expression also covers contracting and counterclockwise rotating fields (for which the quantities C_exp and C_rot would be negative).
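Equations (1) and (7)-(9) can be checked numerically on a discretized sign-of-flow field. The sketch below is our own (not the chip's scanning software) and assumes the field is sampled at regular grid cell centers over [-L, L] × [-L, L], with row 0 at y = -L:

```python
import numpy as np

def singular_point(U, L=1.0):
    """Recover the flow-field singular point from the SIGN of optical flow.

    U : (H, W, 2) array, U[..., 0] = sgn(V_x), U[..., 1] = sgn(V_y),
        sampled on a regular grid over [-L, L] x [-L, L].
    """
    H, W = U.shape[:2]
    dx = 2 * L / W
    dy = 2 * L / H
    # Equation (1): S = integral of sgn(V) over the field of view
    S = U.sum(axis=(0, 1)) * dx * dy
    # Equations (7), (8): contour integrals over the field boundary
    top, bot = U[-1], U[0]           # y = +L and y = -L edges
    right, left = U[:, -1], U[:, 0]  # x = +L and x = -L edges
    c_exp = ((top[:, 1] - bot[:, 1]).sum() * dx
             + (right[:, 0] - left[:, 0]).sum() * dy) / (8 * L)
    c_rot = ((top[:, 0] - bot[:, 0]).sum() * dx
             + (left[:, 1] - right[:, 1]).sum() * dy) / (8 * L)
    # Equation (9): S = -4L (C_exp X_s + C_rot Y_s, C_exp Y_s - C_rot X_s)
    A = np.array([[c_exp, c_rot], [-c_rot, c_exp]])
    return np.linalg.solve(A, -S / (4 * L))
```

For a purely expanding field this reduces to S = -4L (X_e, Y_e) with C_exp = 1, C_rot = 0, as in the text.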
(1) In fact, if A and B are constant, there exists no unique solution for the FOE and AOR.

3 HARDWARE IMPLEMENTATION

The real-time hardware implementation of the above algorithm utilizes a fully parallel 14×13 CMOS analog VLSI motion sensor array. The elementary motion detectors are briefly described below. Each pixel in the array creates a local motion vector when crossed by a spatial edge; this vector is represented by two currents encoding the x and y components. These currents persist for an adjustable period of time after stimulation. By using the serial pixel scanners at the periphery of the chip (normally used to address each pixel individually), it is possible to connect all of these currents to the same output wire, thus implementing the sum required by the algorithm. In this mode, the current outputs of the chip directly represent the sum S in equation (1), and power consumption is less than 2 mW. A similar sum combining sensor row and column outputs around the periphery of the chip could be used to implement the quantities C_exp and C_rot in equations (7) and (8). Due to the sign changes necessary, this sum cannot be directly implemented with the present implementation. However, it is possible to emulate this sum by scanning off the vector field and performing the sum in real-time software.

3.1 ELEMENTARY MOTION DETECTOR

The 1D elementary motion detector used in this processor is the ITI (Inhibit, Trigger, and Inhibit) sensor. Its basic operation is described in Figure 1; see [DHK97] for details. The sensor is edge sensitive, approximately invariant to stimulus contrast above 20%, and functions over a stimulus velocity range from 10-800 pixels/sec.

Figure 1: ITI sensor: a spatial edge crossing the sensor from left to right triggers direction voltages for both directions V_right and V_left in pixel B. The same edge subsequently crossing pixel C inhibits the null direction voltage V_left. The output current is continuously computed as the difference between V_right and V_left; the resulting positive output current I_out indicates rightward motion. Pixels B and A interact similarly to detect leftward motion, resulting in a negative output current.

The output of each 1D ITI sensor represents the order in which the three involved photoreceptors were crossed by a spatial edge. Like all local motion sensors, it suffers from the aperture problem, and thus can only respond to the optical flow normal to the local gradients of intensity. The final result of this computation is the sign of the projection of the normal flow onto the sensor orientation. Two such sensors placed orthogonally effectively compute the sign of the normal flow vector.

Figure 2: Hardware FOE computation: the chip was presented with a computer-generated image of high-contrast expanding circles; the FOE location was varied under computer control on a 2D grid. The measured chip current output has been scaled by a factor of 6 × 10^5 chip radii per Ampere. All FOE locations are shown in chip radii, where a radius of 1.0 corresponds to the periphery of the sensor array. Data shown is the mean output over one stimulus period; RMS variation is 0.27 chip radii.

4 SENSOR MEASUREMENTS

In Figure 2, we demonstrate the hardware computation of the FOE. To generate this data, the chip was presented with a computer-generated image of high-contrast expanding circles.
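As a toy software model of the ordering logic in Figure 1 — a behavioral sketch only, not the analog circuit, with hypothetical function and argument names — the ITI direction decision can be written as:

```python
def iti_direction(t_a, t_b, t_c):
    """Toy model of the ITI (Inhibit, Trigger, Inhibit) logic.

    An edge crosses photoreceptors A, B, C at times t_a, t_b, t_c
    (None if never crossed). Crossing B triggers both direction
    voltages; the neighbor crossed *after* B inhibits the
    corresponding null-direction voltage.
    Returns +1 (rightward), -1 (leftward), or 0 (ambiguous).
    """
    if t_b is None:
        return 0          # no trigger, no output
    v_right = 1.0
    v_left = 1.0
    if t_c is not None and t_c > t_b:
        v_left = 0.0      # A -> B -> C order: inhibit the leftward voltage
    if t_a is not None and t_a > t_b:
        v_right = 0.0     # C -> B -> A order: inhibit the rightward voltage
    out = v_right - v_left
    return (out > 0) - (out < 0)
```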
The focus of expansion was varied on a 2D grid under computer control, and the mean of the chip's output current over one period of the stimulus was calculated for each FOE position. This output varies periodically with the stimulus because each motion sensor stops generating output while being crossed by a stimulus edge. The RMS value of this variation for the expanding circles stimulus is 0.27 chip radii; this variation can be decreased by increasing the resolution of the sensor array. The data shows that the FOE is precisely located when it is within the chip's visual field. Each component of the chip output is virtually independent of the other. When the FOE is outside the chip's visual field, the chip output saturates, but continues to indicate the correct direction towards the FOE. The chip's AOR response to a rotating 'wagon wheel' stimulus is qualitatively and quantitatively very similar, and is not shown for lack of space. In Figure 3, the coefficients of expansion and rotation are shown for the same expanding circles stimulus used in Figure 2. Since these coefficients cannot be calculated directly by the present hardware, the flow field was scanned out of the chip and these quantities were calculated in real-time software. While the FOE is on the chip, Gexp remains near unity, dropping off as the FOE leaves the chip. As expected, Grot remains near zero regardless of the FOE position. Note that, because these coefficients are calculated by integrating a ring of only 48 sensors near the chip periphery, they have more spatial noise than the FOE calculation which integrates all 182 motion sensors. In Figure 4, a spiral stimulus is presented, creating an equal combination of expansion and rotation «() = 0.5 in equation (4)). The singular point is calculated from equation (9) using the optical flow field scanned from the chip. 
Due to the combination of the coefficients with the sum computation, more spatial noise has been introduced than was seen in the FOE case. However, the singular point is still clearly located when within the chip. When the singular point leaves the chip, the calculated position drops towards zero as the algorithm can no longer compute the mix of expansion and rotation.

704 C. M. Higgins and C. Koch

Figure 3: Coefficients of expansion and rotation ((a) C_exp, (b) C_rot): again using the computer-generated expanding circles stimulus, the FOE was varied on a 2D grid. All FOE locations are shown in chip radii, where a radius of 1.0 corresponds to the periphery of the sensor array. Data shown is the mean output over one stimulus period.

Figure 4: Singular point calculation ((a) X output, (b) Y output): the chip was shown a computer-generated image of a rotating spiral; the singular point location was varied under computer control on a 2D grid. All singular point locations are shown in chip radii, where a radius of 1.0 corresponds to the periphery of the sensor array. Data shown is the mean output over one stimulus period.

5 DISCUSSION

We have presented a simple, robust algorithm for computing the singular point of an optical flow field and demonstrated a real-time hardware implementation. Due to the use of the sign of optical flow only, the solution is independent of the relative velocities of components of the flow field. Because a large number of individual sensors are integrated to produce this output, it is quite robust to the spatial variability of the individual motion sensors. We have also shown how coefficients indicating the mix of expansion and rotation may be computed in hardware.
A motion sensor array which directly computes these coefficients, as well as the flow field singular point, is currently in fabrication. In order to derive the equations relating the flow field sums to the FOE, it was necessary in Section 2 to make the unrealistic assumption that the optical flow field contains no areas of zero optical flow. Due to the persistence time of the motion sensor used, it is possible to relax this assumption significantly. As long as all parts of the visual field receive stimulation within the persistence time of the motion output, the optical flow field seen by the motion sensor array will contain no zeros and the singular point output will remain correct. This is a simple example of temporal motion integration. In fact, it is possible in practice to relax this assumption even further: as long as the location of zeros in the optical flow field is spatially random, the magnitude of the output will be reduced but it will continue to provide a clear error signal pointing towards the flow field singular point. Because of the fully parallel design of the motion sensor array, larger arrays may be obtained by simply replicating pixels. The FOE summing algorithm is not affected by this increase in the number of pixels. As the number of pixels is increased, the average power consumption will increase sublinearly, because the sum output current (the dominant source of prolonged power consumption) can be maintained at approximately the same absolute value regardless of the number of pixels integrated. However, the periodic variation of the output with the stimulus will be decreased, the precision of the FOE output will be improved, and the need for temporal averaging will be reduced.
Acknowledgments

This research was supported by the Caltech Center for Neuromorphic Systems Engineering as a part of the National Science Foundation's Engineering Research Center program, as well as by the Office of Naval Research. The authors wish to thank Rainer Deutschmann for stimulating discussions.

References

[DHK97] R. Deutschmann, C. Higgins, and C. Koch. Real-time analog VLSI sensors for 2-D direction of motion. In Proceedings of the Int. Conf. on Artificial Neural Networks, pages 1163-1168. Springer Verlag, 1997.
[DW98] R. A. Deutschmann and O. G. Wenisch. Compressive computation in analog VLSI motion sensors. In Proceedings of Deutsche Arbeitsgemeinschaft für Mustererkennung, 1998.
[FA97] C. Fermüller and Y. Aloimonos. On the geometry of visual correspondence. International Journal of Computer Vision, 21(3):233-247, 1997.
[IKK96] G. Indiveri, J. Kramer, and C. Koch. Parallel analog VLSI architectures for computation of heading direction and time-to-contact. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 720-726, Cambridge, MA, 1996. MIT Press.
[WH88] W. Warren and D. Hannon. Direction of self-motion is perceived from optical flow. Nature, 336(6195):162-163, 1988.
Robust, Efficient, Globally-Optimized Reinforcement Learning with the Parti-Game Algorithm

Mohammad A. Al-Ansari and Ronald J. Williams
College of Computer Science, 161 CN
Northeastern University
Boston, MA 02115
alansar@ccs.neu.edu, rjw@ccs.neu.edu

Abstract

Parti-game (Moore 1994a; Moore 1994b; Moore and Atkeson 1995) is a reinforcement learning (RL) algorithm that has a lot of promise in overcoming the curse of dimensionality that can plague RL algorithms when applied to high-dimensional problems. In this paper we introduce modifications to the algorithm that further improve its performance and robustness. In addition, while parti-game solutions can be improved locally by standard local path-improvement techniques, we introduce an add-on algorithm in the same spirit as parti-game that instead tries to improve solutions in a non-local manner.

1 INTRODUCTION

Parti-game operates on goal problems by dynamically partitioning the space into hyperrectangular cells of varying sizes, represented using a k-d tree data structure. It assumes the existence of a pre-specified local controller that can be commanded to proceed from the current state to a given state. The algorithm uses a game-theoretic approach to assign costs to cells based on past experiences using a minimax algorithm. A cell's cost can be either a finite positive integer or infinity. The former represents the number of cells that have to be traveled through to get to the goal cell and the latter represents the belief that there is no reliable way of getting from that cell to the goal. Cells with a cost of infinity are called losing cells while others are called winning ones. The algorithm starts out with one cell representing the entire space and another, contained within it, representing the goal region. In a typical step, the local controller is commanded to proceed to the center of the most promising neighboring cell.
Upon entering a neighboring cell (whether the one aimed at or not), or upon failing to leave the current cell within a timeout period, the result of this attempt is added to the database of experiences the algorithm has collected, cell costs are recomputed based on the updated database, and the process repeats. The costs are computed using a Dijkstra-like, one-pass minimax version of dynamic programming. The algorithm terminates upon entering the goal cell. If at any point the algorithm determines that it cannot proceed because the agent is in a losing cell, each cell lying on the boundary between losing and winning cells is split across the dimension in which it is largest, and all experiences involving cells that are split are discarded. Since parti-game assumes, in the absence of evidence to the contrary, that from any given cell every neighboring cell is reachable, discarding experiences in this way encourages exploration of the newly created cells.

Figure 1: In these mazes, the agent is required to start from the point marked Start and reach the square goal cell. (Panels (a)-(d).)

2 PARTITIONING ONLY LOSING CELLS

The win-lose boundary mentioned above represents a barrier the algorithm perceives that is preventing the agent from reaching the goal. The reason behind partitioning cells along this boundary is to increase the resolution along these areas that are crucial to reaching the goal and thus creating more regions along this boundary for the agent to try to get through. By partitioning on both sides of the boundary, parti-game guarantees that neighboring cells along the boundary remain close in size. Along with the strategy of aiming towards centers of neighboring cells, this produces pairings of winner-loser cells that form proposed "corridors" for the agent to try to go through to penetrate the barrier it perceives.
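Parti-game's Dijkstra-like, one-pass minimax cost computation over the cell graph can be sketched as follows. This is an illustrative reconstruction, not Moore's implementation: the cell names and the `outcomes` map (the set of cells actually observed when aiming from one cell at a neighbor) are hypothetical data structures. The one-pass ordering works because an action's worst-case cost is only resolved once its worst outcome has been finalized.

```python
import heapq

def minimax_costs(cells, goal, outcomes):
    """Dijkstra-like one-pass minimax DP (a sketch of parti-game's cost step).

    outcomes[(i, j)] = set of cells observed when aiming from cell i at
    neighbor j.  Returns cost[c]: worst-case number of cell transitions
    from c to the goal, or infinity for losing cells.
    """
    INF = float('inf')
    cost = {c: INF for c in cells}
    cost[goal] = 0
    heap, done = [(0, goal)], set()
    while heap:
        d, cell = heapq.heappop(heap)
        if cell in done:
            continue                       # stale heap entry
        done.add(cell)
        # relax every action (i aims at j) that can reach the popped cell
        for (i, j), outs in outcomes.items():
            if cell not in outs or cost[i] <= d + 1:
                continue
            worst = max(cost[o] for o in outs)   # minimax: assume worst outcome
            if worst + 1 < cost[i]:
                cost[i] = worst + 1
                heapq.heappush(heap, (cost[i], i))
    return cost
```

Note how an action whose outcome set still contains an unresolved (infinite-cost) cell contributes nothing until that cell is settled, which is exactly the pessimistic, game-theoretic assumption described above.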
In this section we investigate doing away with partitioning on the winning side, and only partition losing cells. Because partitioning can only be triggered with the agent on the losing side of the win-lose boundary, partitioning only losing cells would still give the agent the same kind of access to the boundary through the newly formed cells. However, this would result in a size disparity between winner- and loser-side cells and, thus, would not produce the winner side of the pairings mentioned above. To produce a similar effect to the pairings of parti-game, we change the aiming strategy of the algorithm. Under the new strategy, when the agent decides to go from the cell it currently occupies to a neighboring one, it aims towards the center point of the common surface between the two cells. While this does not reproduce the same line of motion of the original aiming strategy exactly, it achieves a very similar objective. Parti-game's success in high-dimensional problems stems from its variable resolution strategy, which partitions finely only in regions where it is needed. By limiting partitioning to losing cells only, we hope to increase the resolution in even fewer parts of the state space and thereby make the algorithm even more efficient.

To compare the performance of parti-game to the modified algorithm, we applied both algorithms to the set of continuous mazes shown in Figure 1. For all maze problems we used a simple local controller that can move directly toward the specified target state. We also applied both algorithms to the non-linear dynamics problem of the ice puck on a hill, depicted in Figure 2, which has been studied extensively in the reinforcement learning literature. We used a local controller very similar to the one described in Moore and Atkeson (1995). Finally, we applied the algorithm to the nine-degree-of-freedom planar robot introduced in Moore and Atkeson (1995) and shown in Figure 3, and we used the same local controller described there. Additional results on the Acrobot problem (Sutton and Barto 1998) were not included here for space limitations but can be found in Al-Ansari and Williams (1998).

Figure 2: An ice puck on a hill. The puck can thrust horizontally to the left and to the right with a maximum force of 1 Newton. The state space is two-dimensional, consisting of the horizontal position and velocity. The agent starts at the position marked Start at velocity zero and its goal is to reach the position marked Goal at velocity zero. Maximum thrust is not adequate to get the puck up the ramp, so it has to learn to move to the left first to build up momentum.

Figure 3: A nine-degree-of-freedom, snake-like arm that moves in a plane and is fixed at one tip. The objective is to move the arm from the start configuration to the goal one, which requires curling and uncurling to avoid the barrier and the wall.

We applied both algorithms to each of these problems, in each case performing as many trials as was needed for the solution to stabilize. The agent was placed back in the start state at the end of each trial. In the puck problem, the agent was also reset to the start state whenever it hit either of the barriers at the bottom and top of the slope. The results are shown in Table 1. The table compares the number of trials needed, the number of partitions, the total number of steps taken in the world, and the length of the final trajectory. The table shows that the new algorithm indeed resulted in fewer total partitions in all problems. It also improved in all problems in the number of trials required for stabilization. It improved in all but one problem (maze d) in the length of the final trajectory; however, the difference in length is very small. Finally, it resulted in fewer total steps taken in three of the six problems, but the total steps taken increased in the remaining three. To see the effect of the modification in detail, we show the result of applying parti-game and the modified algorithm on the maze of Figure 1(a) in Figures 4(a) and 4(b), respectively. We can see how areas with higher resolution are more localized in Figure 4(b).

Figure 4: The final trial of applying the various algorithms to the maze in Figure 1(a): (a) parti-game, (b) parti-game with partitioning only losing cells, and (c) parti-game with partitioning only the largest losing cells.

3 BALANCED PARTITIONING

Upon close observation of Figure 4(a), we see that parti-game partitions very finely along the right wall of the maze. This behavior is even more clearly seen in parti-game's solution to the maze in Figure 1(d), which is a simple maze with a single barrier between the start state and the goal. As we see in Table 1, parti-game has a very hard time reaching the goal in this maze. Figure 5 shows the 1194 partitions that parti-game generated in trying to reach the goal. We can see that partitioning along the barrier is very uneven, being extremely fine near the goal and growing coarser as the distance from the goal increases.

Figure 5: Parti-game needed 1194 partitions to reach the goal in the maze of Figure 1(d).

Putting higher focus on places where the highest gain could be attained if a hole is found can be a desirable feature, but what happens in cases like this one is obviously excessive. One of the factors contributing to this problem of continuing to search at ever-higher resolutions in the part of the barrier nearest the goal is that any version of parti-game searches for solutions using an implicit trade-off between the shortness of a potential solution path and the resolution required to find this path.
Only when the resolution becomes so fine that the number of cells through which the agent would have to pass in this potential shortcut exceeds the number of cells to be traversed when traveling around the barrier is the algorithm forced to look elsewhere for the actual opening. A conceptually appealing way to bias this search is to maintain a more explicit coarse-to-fine search strategy. One way to do this is to try to keep the smallest cell size the algorithm generates as large as possible. In addition to achieving the balance we are seeking, this would tend to lower the total number of partitions and result in shallower tree structures needed to represent the state space, which, in turn, results in higher efficiency. To achieve these goals, we modified the algorithm from the previous section such that whenever partitioning is required, instead of partitioning all losing cells, we only partition those among them that are of maximum size. This has the effect of postponing splits that would lower the minimum cell size as long as possible. The results of applying the modified algorithm on the test problems are also shown in Table 1.

Figure 6: The result of partitioning largest cells on the losing side in the maze of Figure 1(d). Only two trials are required to stabilize. The first requires 1304 steps and 21 partitions. The second trial adds no new partitions and produces a path of only 165 steps.

Comparing the results of this version of the algorithm to those of partitioning all losing cells
on the win-lose boundary shows that this algorithm improves on parti-game's performance even further. It outperforms the above algorithm in four problems in the total number of partitions required, while it ties it in the remaining two. It outperforms the above algorithm in total steps taken in five problems and ties it in one. It improves in the number of trials needed to stabilize in one problem, ties the above algorithm in four cases, and ties parti-game in the remaining one. In the length of the final trajectory, partitioning the largest losing cells does better in one case, ties partitioning only losing cells in two cases, and does worse in three. This latter result is due to the generally larger partition sizes that result from the lower resolution that this algorithm produces. However, the increase in the number of steps is very minimal in all but the nine-joint arm problem.

Table 1: Results of applying parti-game, parti-game with partitioning only losing cells, and parti-game with partitioning the largest losing cells on three of the problem domains. Smaller numbers are better. Best numbers are shown in bold.

Problem    Algorithm                 Trials  Partitions  Total Steps  Final Trajectory Length
maze a     original parti-game          3       444        35131           279
           partition losing side        3       239        16652           256
           partition largest losing     3        27         1977           270
maze b     original parti-game          6        98         5180           183
           partition losing side        5        76         7187           175
           partition largest losing     6        76         5635           174
maze c     original parti-game          3       176         7768           416
           partition losing side        2       120        10429           165
           partition largest losing     2        96         6803           165
maze d     original parti-game          2      1194       553340           149
           partition losing side        2       350        18639           155
           partition largest losing     2        21         1469           165
puck       original parti-game          6        80         6764           240
           partition losing side        2        18         3237           151
           partition largest losing     2        18         3237           151
nine-      original parti-game         25       104         2970            58
joint arm  partition losing side       17        61         3041            56
           partition largest losing     7        37         2694           112
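The split-selection rule of the largest-losing-cell variant can be sketched directly. The flat list of axis-aligned boxes below is our own illustrative representation; parti-game itself stores cells in a k-d tree, which this sketch does not reproduce.

```python
def split_largest_losing_cells(cells, cost):
    """Balanced partitioning (a sketch): among the losing (infinite-cost)
    cells, split only those of maximum size, along their largest dimension.
    Cells are dicts {'id', 'lo', 'hi'} with tuple corners.
    """
    INF = float('inf')
    side = lambda c: max(h - l for l, h in zip(c['lo'], c['hi']))
    losing = [c for c in cells if cost[c['id']] == INF]
    if not losing:
        return cells
    biggest = max(side(c) for c in losing)
    to_split = {c['id'] for c in losing if side(c) == biggest}
    out, next_id = [], max(c['id'] for c in cells) + 1
    for c in cells:
        if c['id'] not in to_split:
            out.append(c)                  # winning or smaller losing cell: keep
            continue
        # split across the cell's largest dimension at its midpoint
        d = max(range(len(c['lo'])), key=lambda k: c['hi'][k] - c['lo'][k])
        mid = 0.5 * (c['lo'][d] + c['hi'][d])
        hi1 = c['hi'][:d] + (mid,) + c['hi'][d + 1:]
        lo2 = c['lo'][:d] + (mid,) + c['lo'][d + 1:]
        out.append({'id': c['id'], 'lo': c['lo'], 'hi': hi1})
        out.append({'id': next_id, 'lo': lo2, 'hi': c['hi']})
        next_id += 1
    return out
```

Restricting splits to the maximum-size losing cells is what postpones any reduction of the smallest cell size, giving the uniform coarse-to-fine behavior visible in Figure 6.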
Figure 4(c) shows the result of applying the new algorithm to the maze of Figure 1(a). In contrast to the other two algorithms depicted in the same figure, we can see that the new algorithm partitions very uniformly around the barrier. In addition, it requires the fewest number of partitions and total steps out of the three algorithms. Figure 6 shows that the new algorithm vastly outperforms parti-game on the maze in Figure 1(d). Here, too, it partitions very evenly around the barrier and finds the goal very quickly, requiring far fewer steps and partitions.

4 GLOBAL PATH IMPROVEMENT

Parti-game does not claim to find optimal solutions. As we see in Figure 4, parti-game and the two modified algorithms settle on the longer of the two possible routes to the goal in this maze. In this section we investigate ways we could improve parti-game so that it could find paths of optimal form. It is important to note that we are not seeking paths that are optimal, since that is not possible to achieve using the cell shapes and aiming strategies we are using here. By a path of optimal form we mean a path that could be continuously deformed into an optimal path.

4.1 OTHER GRADIENTS

As mentioned above, parti-game partitions only when the agent has no winning cells to aim for, and the only cells partitioned are those that lie on the win-lose boundary. The win-lose boundary falls on the gradient between finite- and infinite-cost cells, and it appears when the algorithm knows of no reliable way to get to the goal. Consistently partitioning along this gradient guarantees that the algorithm will eventually find a path to the goal, if one exists. However, gradients across which the difference in cost is finite also exist in a state space partitioned by parti-game (or any of the variants introduced in this paper). Like the win-lose boundary, these gradients are boundaries through which the agent does not believe it can move directly.
Although finding an opening in such a boundary is not essential to reaching the goal, these boundaries do represent potential shortcuts that might improve the agent's policy. Any gradient with a difference in cost of two or more is a location of such a potentially useful shortcut. Because such gradients appear throughout the space, we need to be selective about which ones to partition along. There are many possible strategies one might consider using to incorporate these ideas into parti-game. For example, since parti-game focuses on the highest gradients only, the first thing that comes to mind is to follow in parti-game's footsteps and assign partitioning priorities to cells along gradients based on the differences in values across those gradients. However, since the true cost function typically has discontinuities, it is clear that the effect of such a strategy would be to continue refining the partitioning indefinitely along such a discontinuity in a vain search for a nonexistent shortcut. 4.2 THE ALGORITHM A much better idea is to try to pick cells to partition in a way that would achieve balanced partitioning, following the rationale we introduced in section 3. Again, such a strategy would result in a uniform coarse-to-fine search for better paths along those other gradients. The following discussion could, in principle, apply to any of the three forms of parti-game studied up to this point. Because of the superior behavior of the version where we partition the largest cells on the losing side, this is the specific version we report on here, and we use the term modified parti-game to refer to it. The way we incorporated partitioning along other gradients is as follows. At the end of any trial in which the agent is able to go from the start state to the goal without any unexpected results of any of its aiming attempts, we partition the largest "losing cells" (i.e., higher-cost cells) that fall on any gradient across which costs differ by more than one. 
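The gradient-selection idea can be sketched in a few lines. This is our illustrative reading of the rule, not code from the paper: a boundary qualifies when both neighboring costs are finite and differ by more than one, and the higher-cost side is the "losing side" of that gradient.

```python
def shortcut_candidates(neighbors, cost):
    """Find cell boundaries that may hide a useful shortcut (a sketch).

    neighbors is a list of adjacent cell-id pairs.  Any pair whose finite
    costs differ by two or more marks a gradient worth probing; win-lose
    boundaries (one cost infinite) are left to the normal parti-game
    mechanism.  Returns (higher-cost cell, lower-cost cell) pairs.
    """
    INF = float('inf')
    cands = []
    for a, b in neighbors:
        ca, cb = cost[a], cost[b]
        if ca == INF or cb == INF:
            continue                     # win-lose boundary, handled elsewhere
        if abs(ca - cb) >= 2:
            cands.append((a, b) if ca > cb else (b, a))
    return cands
```

Only the higher-cost side of each returned pair would then be considered for partitioning, in keeping with the balanced strategy of Section 3.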
Because data about experiences involving cells that are partitioned is discarded, the next time modified parti-game is run, the agent will try to go through the newly formed cells in search of a shortcut. This algorithm amounts to simply running modified parti-game until a stable solution is reached. At that point, it introduces new cells along some of the other gradients, and when it is subsequently run, modified parti-game is applied again until stabilization is achieved, and so on. The results of applying this algorithm to the maze of Figure 1(a) are shown in Figure 7. As we can see, the algorithm finds the better solution by increasing the resolution around the relevant part of the barrier above the start state.

Figure 7: The solution found by applying the global improvement algorithm on the maze of Figure 1(a). The solution proceeded exactly like that of the algorithm of Section 3 until the solution in Figure 4(c) was reached. After that, eight additional iterations were needed to find the better trajectory, resulting in 22 additional partitions, for a total of 49.

In the absence of information about the form of the optimal trajectory, there is no natural termination criterion for this algorithm. It is designed to be run continually in search of better solutions. If, however, the form of the optimal solution is known in advance, the extra partitioning could be turned off after such a solution is found.

5 CONCLUSIONS

In this paper we have presented three successive modifications to parti-game. The combination of the first two appears to improve its robustness and efficiency, sometimes dramatically, and generally yields better solutions. The third provides a novel way of performing non-local search for higher quality solutions that are closer to optimal.
Acknowledgments

Mohammad Al-Ansari acknowledges the continued support of King Saud University, Riyadh, Saudi Arabia and the Saudi Arabian Cultural Mission to the U.S.A.

References

Al-Ansari, M. A. and R. J. Williams (1998). Modifying the parti-game algorithm for increased robustness, higher efficiency and better policies. Technical Report NU-CCS-98-13, College of Computer Science, Northeastern University, Boston, MA.
Moore, A. (1994a). Variable resolution reinforcement learning. In Proceedings of the Eighth Yale Workshop on Adaptive and Learning Systems. Center for Systems Science, Yale University.
Moore, A. W. (1994b). The parti-game algorithm for variable resolution reinforcement learning in multidimensional state spaces. In Proceedings of Neural Information Processing Systems Conference 6. Morgan Kaufmann.
Moore, A. W. and C. G. Atkeson (1995). The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces. Machine Learning 21.
Sutton, R. S. and A. G. Barto (1998). Reinforcement Learning: An Introduction. MIT Press.
Learning a Hierarchical Belief Network of Independent Factor Analyzers

H. Attias*
hagai@gatsby.ucl.ac.uk
Sloan Center for Theoretical Neurobiology, Box 0444
University of California at San Francisco
San Francisco, CA 94143-0444

Abstract

Many belief networks have been proposed that are composed of binary units. However, for tasks such as object and speech recognition which produce real-valued data, binary network models are usually inadequate. Independent component analysis (ICA) learns a model from real data, but the descriptive power of this model is severely limited. We begin by describing the independent factor analysis (IFA) technique, which overcomes some of the limitations of ICA. We then create a multilayer network by cascading single-layer IFA models. At each level, the IFA network extracts real-valued latent variables that are non-linear functions of the input data with a highly adaptive functional form, resulting in a hierarchical distributed representation of these data. Whereas exact maximum-likelihood learning of the network is intractable, we derive an algorithm that maximizes a lower bound on the likelihood, based on a variational approach.

1 Introduction

An intriguing hypothesis for how the brain represents incoming sensory information holds that it constructs a hierarchical probabilistic model of the observed data. The model parameters are learned in an unsupervised manner by maximizing the likelihood that these data are generated by the model. A multilayer belief network is a realization of such a model. Many belief networks have been proposed that are composed of binary units. The hidden units in such networks represent latent variables that explain different features of the data, and whose relation to the data is highly non-linear.

*Current address: Gatsby Computational Neuroscience Unit, University College London, 17 Queen Square, London WC1N 3AR, U.K.
However, for tasks such as object and speech recognition which produce real-valued data, the models provided by binary networks are often inadequate. Independent component analysis (ICA) learns a generative model from real data, and extracts real-valued latent variables that are mutually statistically independent. Unfortunately, this model is restricted to a single layer and the latent variables are simple linear functions of the data; hence, underlying degrees of freedom that are non-linear cannot be extracted by ICA. In addition, the requirement of equal numbers of hidden and observed variables and the assumption of noiseless data render the ICA model inappropriate. This paper begins by introducing the independent factor analysis (IFA) technique. IFA is an extension of ICA that allows different numbers of latent and observed variables and can handle noisy data. The paper proceeds to create a multilayer network by cascading single-layer IFA models. The resulting generative model produces a hierarchical distributed representation of the input data, where the latent variables extracted at each level are non-linear functions of the data with a highly adaptive functional form. Whereas exact maximum-likelihood (ML) learning in this network is intractable due to the difficulty in computing the posterior density over the hidden layers, we present an algorithm that maximizes a lower bound on the likelihood. This algorithm is based on a general variational approach that we develop for the IFA network.

2 Independent Component and Independent Factor Analysis

Although the concept of ICA originated in the field of signal processing, it is actually a density estimation problem. Given an L' × 1 observed data vector y, the task is to explain it in terms of an L × 1 vector x of unobserved 'sources' that are mutually statistically independent.
The relation between the two is assumed linear,

y = Hx + u,  (1)

where H is the 'mixing' matrix; the noise vector u is usually assumed zero-mean Gaussian with a covariance matrix Λ. In the context of blind source separation [1]-[4], the source signals x should be recovered from the mixed noisy signals y with no knowledge of H, Λ, or the source densities p(x_i), hence the term 'blind'. In the density estimation approach, one regards (1) as a probabilistic generative model for the observed p(y), with the mixing matrix, noise covariance, and source densities serving as model parameters. In principle, these parameters should be learned by ML, followed by inferring the sources via a MAP estimator. For Gaussian sources, (1) is the factor analysis model, for which an EM algorithm exists and the MAP estimator is linear. The problem becomes interesting and more difficult for non-Gaussian sources. Most ICA algorithms focus on square (L' = L), noiseless (y = Hx) mixing, and fix p(x_i) using prior knowledge (but see [5] for the case of noisy mixing with a fixed Laplacian source prior). Learning H occurs via gradient-ascent maximization of the likelihood [1]-[4]. Source density parameters can also be adapted in this way [3],[4], but the resulting gradient-ascent learning is rather slow. This state of affairs presented a problem for ICA algorithms, since the ability to learn arbitrary source densities that are not known in advance is crucial: using an inaccurate p(x_i) often leads to a bad H estimate and failed separation. This problem was recently solved by introducing the IFA technique [6]. IFA employs a semi-parametric model of the source densities, which allows learning them (as well as the mixing matrix) using expectation-maximization (EM). Specifically, p(x_i) is described as a mixture of Gaussians (MOG), where the mixture
, n_i and have means μ_{i,s} and variances γ_{i,s}: p(x_i) = Σ_s p(s_i = s) G(x_i − μ_{i,s}, γ_{i,s}).¹ The mixing proportions are parametrized using the softmax form: p(s_i = s) = exp(a_{i,s}) / Σ_{s'} exp(a_{i,s'}). Beyond noiseless ICA, an EM algorithm for the noisy case (1) with any L, L' was also derived in [6] using the MOG description.² This algorithm learns a probabilistic model p(y | W) for the observed data, parametrized by W = (H, Λ, {a_{i,s}, μ_{i,s}, γ_{i,s}}). A graphical representation of this model is provided by Fig. 1, if we set n = 1 and y^0_j = b_{j,s} = ν_{j,s} = 0.

3 Hierarchical Independent Factor Analysis

In the following we develop a multilayer generalization of IFA, by cascading duplicates of the generative model introduced in [6]. Each layer n = 1, ..., N is composed of two sublayers: a source sublayer which consists of the units x^n_i, i = 1, ..., L_n, and an output sublayer which consists of y^n_j, j = 1, ..., L'_n. The two are linearly related via y^n = H^n x^n + u^n as in (1); u^n is a Gaussian noise vector with covariance Λ^n. The nth-layer source x^n_i is described by a MOG density model with parameters a^n_{i,s}, μ^n_{i,s}, and γ^n_{i,s}, in analogy to the IFA sources above. The important step is to determine how layer n depends on the previous layers. We choose to introduce a dependence of the ith source of layer n only on the ith output of layer n − 1. Notice that matching L_n = L'_{n−1} is now required. This dependence is implemented by making the means and mixture proportions of the Gaussians which compose p(x^n_i) dependent on y^{n−1}_i. Specifically, we make the replacements μ^n_{i,s} → μ^n_{i,s} + ν^n_{i,s} y^{n−1}_i and a^n_{i,s} → a^n_{i,s} + b^n_{i,s} y^{n−1}_i. The resulting joint density for layer n, conditioned on layer n − 1, is

p(s^n, x^n, y^n | y^{n−1}, W^n) = Π_{i=1}^{L_n} p(s^n_i | y^{n−1}_i) p(x^n_i | s^n_i, y^{n−1}_i) p(y^n | x^n),   (2)

where W^n are the parameters of layer n and

p(s^n_i = s | y^{n−1}_i) = exp(a^n_{i,s} + b^n_{i,s} y^{n−1}_i) / Σ_{s'} exp(a^n_{i,s'} + b^n_{i,s'} y^{n−1}_i),
p(x^n_i | s^n_i = s, y^{n−1}_i) = G(x^n_i − μ^n_{i,s} − ν^n_{i,s} y^{n−1}_i, γ^n_{i,s}).

The full model joint density is given by the product of (2) over n = 1, ..., N (setting y^0 = 0). A graphical representation of layer n of the hierarchical IFA network is given in Fig. 1. All units are hidden except y^N. To gain some insight into our network, we examine the relation between the nth-layer source x^n_i and the (n − 1)th-layer output y^{n−1}_i. This relation is probabilistic and is determined by the conditional density p(x^n_i | y^{n−1}_i) = Σ_s p(s^n_i = s | y^{n−1}_i) p(x^n_i | s^n_i = s, y^{n−1}_i). Notice from (2) that this is a MOG density. Its y^{n−1}_i-dependent mean is given by

x̄^n_i = f^n_i(y^{n−1}_i) = Σ_s p(s^n_i = s | y^{n−1}_i) (μ^n_{i,s} + ν^n_{i,s} y^{n−1}_i),   (3)

¹Throughout this paper, G(x, Σ) = |2πΣ|^{−1/2} exp(−x^T Σ^{−1} x / 2).
²However, for many sources the E-step becomes intractable, since the number Π_i n_i of source state configurations s = (s_1, ..., s_L) depends exponentially on L. Such cases are treated in [6] using a variational approximation.

364 H. Attias

Figure 1: Layer n of the hierarchical IFA generative model.

and is a non-linear function of y^{n−1}_i due to the softmax form of p(s^n_i | y^{n−1}_i). By adjusting the parameters, the function f^n_i can assume a very wide range of forms: suppose that for state s^n_i, the a_{i,s} and b_{i,s} are set so that p(s^n_i = s | y^{n−1}_i) is significant only in a small, continuous range of y^{n−1}_i values, with different ranges associated with different s's. In this range, f^n_i will be dominated by the linear term μ^n_{i,s} + ν^n_{i,s} y^{n−1}_i. Hence, a desired f^n_i can be produced by placing oriented line segments at appropriate points above the y^{n−1}_i-axis, then smoothly joining them together by the p(s^n_i | y^{n−1}_i). Using the algorithm below, the optimal form of f^n_i will be learned from the data.
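The piecewise-linear behaviour of the conditional mean in Eq. (3) can be illustrated numerically. The sketch below is ours, not from the paper: all parameter values are hypothetical, chosen so that each state's softmax weight dominates a different range of y, producing smoothly joined line segments.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mog_mean(y, a, b, mu, nu):
    """Conditional mean f(y) = sum_s p(s|y) (mu_s + nu_s * y),
    with p(s|y) = softmax(a_s + b_s * y), as in Eq. (3)."""
    y = np.atleast_1d(y)[:, None]           # shape (T, 1)
    w = softmax(a + b * y)                  # state posteriors, shape (T, S)
    return (w * (mu + nu * y)).sum(axis=1)  # shape (T,)

# Hypothetical 3-state source: state 0 dominates for y << 0,
# state 1 near y = 0, state 2 for y >> 0.
a  = np.array([0.0, 0.0, 0.0])
b  = np.array([-4.0, 0.0, 4.0])
mu = np.array([-1.0, 0.0, 1.0])
nu = np.array([0.5, 2.0, 0.5])

ys = np.linspace(-3, 3, 7)
print(mog_mean(ys, a, b, mu, nu))
```

Far from the transition regions the function is essentially the single dominant line segment, e.g. f(−3) ≈ μ_0 + ν_0·(−3); near y = 0 the softmax blends the segments smoothly.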
Therefore, our model describes the data y^N_j as a potentially highly complex function of the top layer sources, produced by repeated application of linear mixing followed by a non-linearity, with noise allowed at each stage.

4 Learning and Inference by Variational EM

The need for summing over an exponentially large number of source state configurations (s^n_1, ..., s^n_{L_n}), and integrating over the softmax functions p(s^n_i | y^{n−1}_i), makes exact learning intractable in our network. Thus, approximations must be made. In the following we develop a variational approach, in the spirit of [8], to hierarchical IFA. We begin, following the approach of [7] to EM, by bounding the log-likelihood from below:

L = log p(y^N) ≥ Σ_n { E log p(y^n | x^n) + Σ_i [E log p(x^n_i | s^n_i, y^{n−1}_i) + E log p(s^n_i | y^{n−1}_i)] } − E log q,

where E denotes averaging over the hidden layers using an arbitrary posterior q = q(s^{1...N}, x^{1...N}, y^{1...N−1} | y^N). In exact EM, q at each iteration is the true posterior, parametrized by W^{1...N} from the previous iteration. In variational EM, q is chosen to have a form which makes learning tractable, and is parametrized by a separate set of parameters V^{1...N}. These are optimized to bring q as close to the true posterior as possible.

E-step. We use a variational posterior that is factorized across layers. Within layer n it has the form

q(s^n, x^n, y^n | V^n) = [Π_{i=1}^{L_n} v^n_{i,s_i}] G(z^n − ρ^n, Σ^n),   (4)

for n < N, where z^n = (x^n, y^n), and q(s^N, x^N | V^N) = Π_i v^N_{i,s_i} G(x^N − ρ^N, Σ^N). The variational parameters V^n = (ρ^n, Σ^n, {v^n_{i,s}}) depend on the data y^N. The full N-layer posterior is simply a product of (4) over n. Hence, given the data, the nth-layer sources and outputs are jointly Gaussian whereas the states s^n_i are independent.³ Even with the variational posterior (4), the term E log p(s^n_i | y^{n−1}_i) in the lower bound cannot be calculated analytically, since it involves integration over the softmax function. Instead, we calculate yet a lower bound on this term.
Let c_{i,s} = a_{i,s} + b_{i,s} y^{n−1}_i and drop the unit and layer indices i, n; then log p(s | y) = −log(1 + e^{−c_s} Σ_{s'≠s} e^{c_{s'}}). Borrowing an idea from [8], we multiply and divide by e^{η_s c_s} under the logarithm sign and use Jensen's inequality to get E log p(s | y) ≥ −η_s E c_s − log E[e^{−η_s c_s} + e^{−(1+η_s)c_s} Σ_{s'≠s} e^{c_{s'}}]. This results in a bound that can be calculated in closed form:

E log p(s^n_i = s | y^{n−1}_i) ≥ −v^n_s η^n_s c̄^n_s − v^n_s log(e^{f^n_s} + Σ_{s'≠s} e^{f^n_{s'}}) ≡ F^n_{i,s},   (5)

where c̄^n_s = a^n_s + b^n_s ρ^{n−1}_y, f^n_s = −η^n_s c̄^n_s + (η^n_s b^n_s)² Σ^{n−1}_{yy}/2, f^n_{s'} = −(1 + η^n_s) c̄^n_s + c̄^n_{s'} + [(1 + η^n_s) b^n_s − b^n_{s'}]² Σ^{n−1}_{yy}/2, and the subscript i is omitted. We also defined ρ^n = (ρ^n_x, ρ^n_y)^T, and similarly Σ^n_{xx}, Σ^n_{yy}, Σ^n_{xy} = Σ^{nT}_{yx} are the subblocks of Σ^n. Since (5) holds for arbitrary η^n_{i,s}, the latter are treated as additional variational parameters which are optimized to tighten this bound.⁴ To optimize the variational parameters V^{1...N}, we equate the gradient of the lower bound on L to zero and obtain

(Σ^n)^{−1} = ( (H^T Λ^{−1} H)^n + A^n    −(H^T Λ^{−1})^n
               −(Λ^{−1} H)^n            (Λ^{−1})^n + B^{n+1} ),   (6)

together with a companion linear fixed-point equation for the means (7), which couples ρ^n to the neighboring-layer means ρ^{n−1}_y and ρ^{n+1}_x through the quantities α^n and β^{n+1} and the derivatives F'^{n+1}. Here A^n_{ij} = Σ_s (v_{i,s}/γ_{i,s})^n δ_ij, B^n_{ij} = Σ_s (v_{i,s} ν²_{i,s}/γ_{i,s})^n δ_ij, α^n_i = Σ_s (v_{i,s} μ_{i,s}/γ_{i,s})^n, and β^n_i = Σ_s (v_{i,s} μ_{i,s} ν_{i,s}/γ_{i,s})^n (all parameters within (·)^n belong to layer n). F'^{n+1} contains the corresponding derivatives of F^{n+1}_{i,s} (5), summed over s. For the state posteriors we have

v^n_s = (1/z^n) exp{ −[(ρ^n_x − μ^n_s − ν^n_s ρ^{n−1}_y)² + Σ^n_{xx} + (ν^n_s)² Σ^{n−1}_{yy}]/(2γ^n_s) + ∂F^n_s/∂v^n_s },   (8)

³It is easy to introduce more structure into (4) by allowing the means ρ^n_i to depend on s^n_i, and the covariances Σ^n_{ij} to depend on s^n_i, s^n_j, thus making the approximation more accurate (but more complex) while maintaining tractability.
⁴An alternative approach to handling E log p(s^n_i | y^{n−1}_i) is to approximate the required integral by, e.g., the maximum value of the integrand, possibly including Gaussian corrections.
The resulting approximation is simpler than (5); however, it is no longer guaranteed to bound the log-likelihood from below. In the main text, the unit subscript i is omitted (i.e., Σ^n_{xx} = Σ^n_{xx,ii}); z^n = z^n_i is set such that Σ_s v^n_s = 1. A simple modification of these equations is required for layer n = N. The optimal V^{1...N} are obtained by solving the fixed-point equations (6)-(8) iteratively for each data vector y^N, keeping the generative parameters W^{1...N} fixed. Notice that these equations couple layer n to layers n ± 1. The additional parameters η^n_{i,s} are adjusted using gradient ascent on F^n_{i,s}. Once learning is complete, the inference problem is solved, since the MAP estimate of the hidden unit values given the data is readily available from ρ^n_i and v^n_{i,s}.

M-step. In terms of the variational parameters obtained in the E-step, the new generative parameters are given by

H^n = (ρ^n_y ρ^{nT}_x + Σ^n_{yx})(ρ^n_x ρ^{nT}_x + Σ^n_{xx})^{−1},
Λ^n = ρ^n_y ρ^{nT}_y + Σ^n_{yy} − H^n (ρ^n_x ρ^{nT}_y + Σ^n_{xy}),   (9)

γ^n_s = [v^n_s]^{−1} v^n_s [(ρ^n_x − μ^n_s − ν^n_s ρ^{n−1}_y)² + Σ^n_{xx} + (ν^n_s)² Σ^{n−1}_{yy}],   (10)

omitting the subscript i as in (8); the equations for (μ^n_s, ν^n_s) are the corresponding v^n_s-weighted averages, and all are slightly modified for layer N. In batch mode, averaging over the data is implied and the v^n_s do not cancel out. Finally, the softmax parameters a^n_{i,s}, b^n_{i,s} are adapted by gradient ascent on the bound (5).

5 Discussion

The hierarchical IFA network presented here constitutes a quite general framework for learning and inference using real-valued probabilistic models that are strongly non-linear but highly adaptive. Notice that this network includes both continuous (x^n_i, y^n_j) and binary (s^n_i) units, and can thus extract both types of latent variables. In particular, the uppermost units s^N_i may represent class labels in classification tasks.
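The softmax bound (5) used in the E-step can be checked numerically. The sketch below is ours: it uses hypothetical scalar parameters and helper names, evaluates the closed-form bound for a Gaussian y via E exp(u + v·y) = exp(u + v·ρ + v²σ²/2), and compares it with a Monte Carlo estimate of E log p(s | y). By Jensen's inequality the bound should never exceed the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D setting: c_s = a_s + b_s * y, with y ~ N(rho, sigma2).
a = np.array([0.3, -0.2, 0.1])
b = np.array([1.0, -0.5, 0.8])
rho, sigma2 = 0.4, 0.6
s, eta = 0, 0.7                     # bound for state s, free parameter eta

def log_E_exp(u, v):
    """log E exp(u + v*y) for Gaussian y: u + v*rho + v^2*sigma2/2."""
    return u + v * rho + 0.5 * v * v * sigma2

cbar = a + b * rho                   # E c_s' for every state
f_s = log_E_exp(-eta * a[s], -eta * b[s])
f_other = [log_E_exp(-(1 + eta) * a[s] + a[t], -(1 + eta) * b[s] + b[t])
           for t in range(len(a)) if t != s]
bound = -eta * cbar[s] - np.log(np.exp(f_s) + np.sum(np.exp(f_other)))

# Monte Carlo estimate of the exact E log p(s|y) = E[c_s - logsumexp(c)].
y = rng.normal(rho, np.sqrt(sigma2), size=200000)
c = a[None, :] + np.outer(y, b)
mc = np.mean(c[:, s] - np.log(np.exp(c).sum(axis=1)))

print(bound, mc)
```

In the paper's algorithm η is then tuned by gradient ascent to make the bound as tight as possible; here it is simply fixed.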
The models proposed in [9]-[11] can be viewed as special cases where x^n_i is a prescribed deterministic function (e.g., a rectifier) of the previous outputs y^{n−1}_j: in the IFA network, a deterministic (but still adaptive) dependence can be obtained by setting the variances γ^n_{i,s} = 0. Note that the source x^n_i in such a case assumes only the values μ_{i,s}, and thus corresponds to a discrete latent variable. The learning and inference algorithm presented here is based on the variational approach. Unlike variational approximations in other belief networks [8],[10], which use a completely factorized approximation, the structure of the hierarchical IFA network facilitates using a variational posterior that allows correlations among hidden units occupying the same layer, thus providing a more accurate description of the true posterior. It would be interesting to compare the performance of our variational algorithm with the belief propagation algorithm [12] which, when adapted to the densely connected IFA network, would also be an approximation. Markov chain Monte Carlo methods, including the more recent slice sampling procedure used in [11], would become very slow as the network size increases. It is possible to consider a more general non-linear network along the lines of hierarchical IFA. Notice from (2) that given the previous layer output y^{n−1}, the mean output of the next layer is ȳ^n_i = Σ_j H^n_{ij} f^n_j(y^{n−1}_j) (see (3)), i.e., a linear mixing preceded by a non-linear function operating on each output component separately. However, if we eliminate the sources x^n_j, replace the individual source states s^n_j by collective states s^n, and allow the linear transformation to depend on s^n, we arrive at the following model: p(s^n = s | y^{n−1}) ∝ exp(a^n_s + b^{nT}_s y^{n−1}), p(y^n | s^n = s, y^{n−1}) = G(y^n − h^n_s − H^n_s y^{n−1}, Λ^n). Now we have ȳ^n = Σ_s p(s^n = s | y^{n−1})(h^n_s + H^n_s y^{n−1}) ≡ F(y^{n−1}), which is a more general non-linearity. Finally, the blocks {y^n, x^n, s^n | y^{n−1}} (Fig.
1), or alternatively the blocks {y^n, s^n | y^{n−1}} described above, can be connected not only vertically (as in this paper) and horizontally (creating layers with multiple blocks), but in any directed acyclic graph architecture, with the variational EM algorithm extended accordingly. Acknowledgements I thank V. de Sa for helpful discussions. Supported by The Office of Naval Research (N00014-94-1-0547), NIDCD (R01-02260), and the Sloan Foundation. References [1] Bell, A.J. and Sejnowski, T.J. (1995). An information-maximization approach to blind separation and blind deconvolution. Neural Computation 7, 1129-1159. [2] Cardoso, J.-F. (1997). Infomax and maximum likelihood for source separation. IEEE Signal Processing Letters 4, 112-114. [3] Pearlmutter, B.A. and Parra, L.C. (1997). Maximum likelihood blind source separation: A context-sensitive generalization of ICA. Advances in Neural Information Processing Systems 9 (Ed. Mozer, M.C. et al.), 613-619. MIT Press. [4] Attias, H. and Schreiner, C.E. (1998). Blind source separation and deconvolution: the dynamic component analysis algorithm. Neural Computation 10, 1373-1424. [5] Lewicki, M.S. and Sejnowski, T.J. (1998). Learning nonlinear overcomplete representations for efficient coding. Advances in Neural Information Processing Systems 10 (Ed. Jordan, M.I. et al.), MIT Press. [6] Attias, H. (1999). Independent factor analysis. Neural Computation, in press. [7] Neal, R.M. and Hinton, G.E. (1998). A view of the EM algorithm that justifies incremental, sparse, and other variants. Learning in Graphical Models (Ed. Jordan, M.I.), Kluwer Academic Press. [8] Saul, L.K., Jaakkola, T., and Jordan, M.I. (1996). Mean field theory of sigmoid belief networks. Journal of Artificial Intelligence Research 4, 61-76. [9] Frey, B.J. (1997). Continuous sigmoidal belief networks trained using slice sampling. Advances in Neural Information Processing Systems 9 (Ed. Mozer, M.C. et al.). MIT Press. [10] Frey, B.J. and Hinton, G.E. (1999).
Variational learning in non-linear Gaussian belief networks. Neural Computation, in press. [11] Ghahramani, Z. and Hinton, G.E. (1998). Hierarchical non-linear factor analysis and topographic maps. Advances in Neural Information Processing Systems 10 (Ed. Jordan, M.I. et al.), MIT Press. [12] Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA.
|
1998
|
62
|
1,562
|
A Reinforcement Learning Algorithm in Partially Observable Environments Using Short-Term Memory Nobuo Suematsu and Akira Hayashi Faculty of Computer Sciences Hiroshima City University 3-4-1 Ozuka-higashi, Asaminami-ku, Hiroshima 731-3194 Japan {suematsu,akira}@im.hiroshima-cu.ac.jp Abstract We describe a Reinforcement Learning algorithm for partially observable environments using short-term memory, which we call BLHT. Since BLHT learns a stochastic model based on Bayesian Learning, the overfitting problem is reasonably solved. Moreover, BLHT has an efficient implementation. This paper shows that the model learned by BLHT converges to one which provides the most accurate predictions of percepts and rewards, given short-term memory. 1 INTRODUCTION Research on the Reinforcement Learning (RL) problem for partially observable environments has been gaining more attention recently. This is mainly because the assumption that perfect and complete perception of the state of the environment is available to the learning agent, which many previous RL algorithms require, is not valid for many realistic environments. Figure 1: Three approaches. One of the approaches to the problem is the model-free approach (Singh et al. 1995; Jaakkola et al. 1995) (arrow a in Fig. 1), which gives up state estimation and uses memory-less policies. We can not expect this approach to find a really effective policy when it is necessary to accumulate information to estimate the state. Model-based approaches are superior in these environments. A popular model-based approach is via a Partially Observable Markov Decision Process (POMDP) model which represents the decision process of the agent. In Fig. 1 the approach is described by the route from "World" to "Policy" through "POMDP". The approach has two serious difficulties. One is in the learning of POMDPs (arrow b in Fig. 1). Abe and 1060 N. Suematsu and A.
Hayashi Warmuth (1992) shows that learning of probabilistic automata is NP-hard, which means that learning of POMDPs is also NP-hard. The other difficulty is in finding the optimal policy of a given POMDP model (arrow c in Fig. 1). Its PSPACE-hardness is shown in Papadimitriou and Tsitsiklis (1987). Accordingly, the methods based on this approach (Chrisman 1992; McCallum 1993) will not scale well to large problems. The approach using short-term memory is computationally more tractable. Of course, we can construct environments in which long-term memory is essential. However, in many environments, because of their stochasticity, the significance of past information decreases exponentially fast as time goes on. In such environments, memories of moderate length will work fine. McCallum (1995) proposes the "utile suffix memory" (USM) algorithm. USM uses a tree structure to represent short-term memories with variable length. USM's model learning is based on a statistical test, which requires time and space proportional to the learning steps. This makes it difficult to adapt USM to environments which require long learning steps. USM suffers from the overfitting problem, which is a difficult problem faced by most model-based learning methods. USM may overfit or underfit depending on the significance level used for the statistical test, and we can not know its proper level in advance. In this paper, we introduce an algorithm called BLHT (Suematsu et al. 1997), in which the environment is modeled as a history tree model (HTM), a stochastic model with variable memory length. Although BLHT shares the tree-structured representation of short-term memory with USM, the computational time required by BLHT is constant in each step, and BLHT copes with environments which require large learning steps. In addition, because BLHT is based on Bayesian Learning, the overfitting problem is solved reasonably in it.
A similar version of HTMs was introduced and has been used for learning of Hidden Markov Models in Ron et al. (1994). In their learning method, a tree is grown in a similar way to USM. If we try to adapt it to our RL problem, it will face the same problems as USM. This paper shows that the HTM learned by BLHT converges to the optimal one in the sense that it provides the most accurate predictions of percepts and rewards, given short-term memory. BLHT can learn a HTM in an efficient way (arrow d in Fig. 1). And since HTMs compose a subset of Markov Decision Processes (MDPs), it can be efficiently solved by Dynamic Programming (DP) techniques (arrow e in Fig. 1). So, we can see BLHT as an approach that follows an easy way from "World" to "Policy" which goes around "POMDP".

2 THE POMDP MODEL

The decision process of an agent in a partially observable environment can be formulated as a POMDP. Let the finite set of states of the environment be S, the finite set of agent's actions be A, and the finite set of all possible percepts be I. Let us denote the probability of and the reward for making a transition from state s to s' using action a by p_{s'|sa} and w_{sas'} respectively. We also denote the probability of obtaining percept i after a transition from s to s' using action a by o_{i|sas'}. Then, a POMDP model is specified by (S, A, I, P, O, W, x_0), where P = {p_{s'|sa} | s, s' ∈ S, a ∈ A}, O = {o_{i|sas'} | s, s' ∈ S, a ∈ A, i ∈ I}, W = {w_{sas'} | s, s' ∈ S, a ∈ A}, and x_0 = (x^0_{s_1}, ..., x^0_{s_{|S|−1}}) is the probability distribution of the initial state. We denote the history of actions and percepts of the agent till time t, (..., a_{t−2}, i_{t−1}, a_{t−1}, i_t), by D_t. If the POMDP model M = (S, A, I, P, O, W, x_0) is given, one can compute the belief state x_t = (x^t_{s_1}, ..., x^t_{s_{|S|−1}}) from D_t, which is the state estimation at time t. We denote the mapping from histories to belief states defined by POMDP model M by X_M(·), that is, x_t = X_M(D_t).
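The belief-state map X_M can be sketched as a recursive Bayes update: fold the transition probabilities for the chosen action into the current belief, then reweight by the observation probabilities of the received percept. The array layout and toy numbers below are our own illustrative assumptions, not from the paper.

```python
import numpy as np

def belief_update(x, a, i, P, O):
    """One step of the history-to-belief map: condition on action a and percept i.
    P[a, s, s2] = p(s2 | s, a);  O[a, s, s2, i] = o(i | s, a, s2)."""
    joint = x[:, None] * P[a] * O[a, :, :, i]   # p(s, s', i | a, current belief)
    new = joint.sum(axis=0)                     # marginalize the previous state
    return new / new.sum()                      # normalize by p(i | a, belief)

# Hypothetical 2-state, 1-action, 2-percept environment.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
O = np.zeros((1, 2, 2, 2))
O[0, :, 0, 0] = 0.8; O[0, :, 0, 1] = 0.2   # percept depends only on s' here
O[0, :, 1, 0] = 0.3; O[0, :, 1, 1] = 0.7

x0 = np.array([0.5, 0.5])
x1 = belief_update(x0, a=0, i=0, P=P, O=O)
print(x1)
```

Iterating this update over the whole history (..., a_{t−1}, i_t) reproduces x_t = X_M(D_t); the normalizer at each step is the predictive percept probability.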
The belief state x_t is the most precise state estimation and it is known to be a sufficient statistic for the optimal policy in POMDPs (Bertsekas 1987). It is also known that the stochastic process {x_t, t ≥ 0} is an MDP in the continuous belief space.

3 BAYESIAN LEARNING OF HISTORY TREE MODELS (BLHT)

In this section, we summarize our RL algorithm for partially observable environments, which we call BLHT (Suematsu et al. 1997).

3.1 HISTORY TREE MODELS

BLHT is Bayesian Learning on a hypothesis space which is composed of predictive models, which we call History Tree Models (HTMs). Given short-term memory, a HTM provides the probability distribution of the next percept and the expected immediate reward for each action. A HTM is represented by a tree structure called a history tree and parameters given for each leaf of the tree. A history tree h associates a history D_t with a leaf as follows. Starting from the root of h, we check the most recent percept i_t and follow the appropriate branch, and then we check the action a_{t−1} and follow the appropriate branch. This procedure is repeated till we reach a leaf. We denote the reached leaf by Λ_h(D_t) and the set of leaves of h by L_h. Each leaf l ∈ L_h has parameters θ_{i|la} and w_{la}. θ_{i|la} denotes the probability of observing i at time t + 1 when Λ_h(D_t) = l and the last action a_t was a. w_{la} denotes the expected immediate reward for performing a when Λ_h(D_t) = l. Let Θ_h = {θ_{i|la} | i ∈ I, l ∈ L_h, a ∈ A}.

Figure 2: (a) A three-state environment, in which the agent receives percept 1 in state 1 and percept 2 in states 2a and 2b. (b) A history tree which can represent the environment.

Fig. 2 shows a three-state environment (a) and a history tree which can represent the environment (b).
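The leaf map Λ_h is a simple alternating traversal of the history tree: branch on the most recent percept, then on the preceding action, and so on. A minimal sketch follows; the dict-based tree encoding and the leaf labels are our own illustrative choices.

```python
def leaf(h, history):
    """Follow history tree h using the history D_t = (..., a_{t-1}, i_t):
    branch on the most recent percept i_t, then on the action a_{t-1},
    alternating until a leaf is reached (the map Lambda_h of the text).
    h is a nested dict; a leaf is any node without a 'children' entry."""
    node = h
    k = len(history) - 1          # history alternates ..., action, percept
    while 'children' in node:
        node = node['children'][history[k]]
        k -= 1
    return node['label']

# Hypothetical tree in the spirit of Fig. 2(b): split on percept i_t;
# for i_t = 2, split further on the previous action a_{t-1}.
h = {'children': {
        1: {'label': 'leaf-i1'},
        2: {'children': {'a': {'label': 'leaf-i2-a'},
                         'b': {'label': 'leaf-i2-b'}}}}}

print(leaf(h, [1, 'a', 2]))   # i_t = 2, a_{t-1} = 'a'
```

Each leaf reached this way would carry the parameters θ_{i|la} and w_{la} of the text.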
We can construct a HTM which is equivalent with the environment by setting appropriate parameters in each leaf of the history tree.

3.2 BAYESIAN LEARNING

BLHT is designed as Bayesian Learning on the hypothesis space H, which is a set of history trees. First we show the posterior probability of a history tree h ∈ H given history D_t. To derive the posterior probability we set the prior density of Θ_h as

p(Θ_h | h) = Π_{l∈L_h} Π_{a∈A} K_{la} Π_{i∈I} θ_{i|la}^{α_{i|la}−1},

where K_{la} is the normalization constant and α_{i|la} is a hyperparameter specifying the prior density. Then we have the posterior probability of h,

P(h | D_t, H) = (1/c_t) P(h | H) Π_{l∈L_h} Π_{a∈A} K_{la} [Π_{i∈I} Γ(N^t_{i|la} + α_{i|la})] / Γ(N^t_{la} + α_{la}),   (1)

where c_t is the normalization constant and Γ(·) is the gamma function. N^t_{i|la} is the number of times i is observed after executing a when Λ_h(D_{t'}) = l in the history D_t, N^t_{la} = Σ_{i∈I} N^t_{i|la}, and α_{la} = Σ_{i∈I} α_{i|la}. Next, we show the estimates of the parameters. We use the average of θ_{i|la} under its posterior density as the estimate θ̂^t_{i|la}, which is expressed as

θ̂^t_{i|la} = (N^t_{i|la} + α_{i|la}) / (N^t_{la} + α_{la}).

ŵ_{la} is estimated just by accumulating the rewards received after executing a when Λ_h(D_t) = l, and dividing by the number of times a was performed when Λ_h(D_t) = l, N^t_{la}. That is,

ŵ_{la} = (1/N^t_{la}) Σ_{k=1}^{N^t_{la}} r_{t_k+1},

where t_k is the k-th occurrence of execution of a when Λ_h(D_t) = l.

3.3 LEARNING ALGORITHM

In principle, by evaluating Eq. (1) for all h ∈ H, we can extract the MAP model. However, it is often impractical, because a proper hypothesis space H is very large when the agent has little prior knowledge concerning the environment.
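The posterior-mean estimate θ̂_{i|la} is just Dirichlet-smoothed counting. A minimal sketch with hypothetical counts and uniform hyperparameters:

```python
import numpy as np

def theta_hat(N_counts, alpha):
    """Posterior-mean percept probabilities at one leaf/action pair:
    theta_{i|la} = (N_{i|la} + alpha_{i|la}) / (N_{la} + alpha_{la})."""
    return (N_counts + alpha) / (N_counts.sum() + alpha.sum())

N = np.array([7.0, 2.0, 1.0])   # hypothetical counts N_{i|la} for 3 percepts
alpha = np.ones(3)              # uniform Dirichlet hyperparameters
print(theta_hat(N, alpha))
```

With all α_{i|la} = 1 this is Laplace smoothing; unseen percepts keep nonzero probability, which is one way the Bayesian treatment tempers overfitting.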
Fortunately, we can design an efficient learning algorithm by assuming that the hypothesis space H is the set of pruned trees of a large history tree h_H, and that the ratio of prior probabilities of a history tree h and an h' obtained by pruning off subtree Δh from h is given by a known function q(Δh).¹ We define the function g(h | D_t, H) by taking the logarithm of the R.H.S. of Eq. (1) without the normalization constant, which can be rewritten as

g(h | D_t, H) = log P(h | H) + Σ_{l∈L_h} λ^t_l,   (2)

where

λ^t_l = Σ_{a∈A} log [ K_{la} Π_{i∈I} Γ(N^t_{i|la} + α_{i|la}) / Γ(N^t_{la} + α_{la}) ].   (3)

Then, we can extract the MAP model by finding the history tree which maximizes g. Eq. (2) shows that g(h | D_t, H) can be evaluated by summing up λ^t_l over L_h. Accordingly, we can implement an efficient algorithm using the tree h_H, each (internal or leaf) node l of which stores λ^t_l, N^t_{i|la}, α_{i|la}, and ŵ_{la}. Suppose that the agent observed i_{t+1} when the last action was a_t. Then, from Eq. (3),

λ^{t+1}_l = λ^t_l + log [ (N^t_{i_{t+1}|la_t} + α_{i_{t+1}|la_t}) / (N^t_{la_t} + α_{la_t}) ]   for l ∈ N_{D_t},
λ^{t+1}_l = λ^t_l   otherwise,   (4)

where N_{D_t} is the set of nodes on the path from the root to leaf Λ_{h_H}(D_t). Thus, h_H is updated just by evaluating Eq. (4), adding 1 to N^t_{i_{t+1}|la_t}, and recalculating ŵ_{la_t} in the nodes of N_{D_t}. After h_H is updated, we can extract the MAP model using the procedure "Find-MAP-Subtree" shown in Fig. 3(a). We show the learning algorithm in Fig. 3(b), in which the MAP model is extracted and policy π is updated only when a given condition is satisfied.

4 LIMIT THEOREMS

In this section, we describe limit theorems for BLHT. Throughout the section, we assume that policy π is used while learning and that the stochastic process {(s_t, a_t, i_{t+1}), t ≥ 0} is ergodic under π. First we show a theorem which ensures that the history tree model learned by BLHT does not miss any relevant memories (see Suematsu et al. (1997) for the proof).

¹The condition is satisfied, for example, when P(h | H) ∝ γ^{|h|}, where 0 < γ ≤ 1 and |h| denotes the size of h.
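The constant-time incremental update (4) can be verified against a direct evaluation of the leaf term (3). The sketch below is ours: a single leaf and single action, with the constant K_{la} omitted since it cancels in the difference; the check rests on the identity Γ(x + 1)/Γ(x) = x.

```python
from math import lgamma, log

def lam(N, alpha):
    """lambda_l^t for one action (cf. Eq. (3)), up to the constant K_{la}:
    sum_i log Gamma(N_i + alpha_i) - log Gamma(N_tot + alpha_tot)."""
    return (sum(lgamma(n + a) for n, a in zip(N, alpha))
            - lgamma(sum(N) + sum(alpha)))

# Hypothetical counts at one node; observe percept j once.
N = [3.0, 1.0, 0.0]
alpha = [1.0, 1.0, 1.0]
j = 0

# Incremental step of Eq. (4): log((N_j + alpha_j) / (N_tot + alpha_tot)).
step = log((N[j] + alpha[j]) / (sum(N) + sum(alpha)))

before = lam(N, alpha)
N[j] += 1
after = lam(N, alpha)
print(after - before, step)
```

The two quantities agree exactly, which is why each observation only requires updating the O(depth) nodes on the path from the root to the reached leaf.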
Find-MAP-Subtree(node l)
1: Δh ← ∅, λ ← 0
2: C ← {all child nodes of node l}
3: if |C| = 0 then return {l, λ_l}
4: for each c ∈ C do
5:   {Δh_c, λ_c} ← Find-MAP-Subtree(c)
6:   Δh ← Δh ∪ Δh_c
7:   λ ← λ + λ_c
8: end
9: Δg ← log q(Δh) + λ − λ_l
10: if Δg > 0 then return {Δh, λ}
11: else return {l, λ_l}

Main-Loop(condition C)
1: t ← 0, D_t ← ()
2: π ← "policy selecting action at random"
3: a_t ← π(D_t) or exploratory action
4: perform a_t and receive i_{t+1} and r_{t+1}
5: update h_H
6: if (condition C is satisfied) do
7:   h ← Find-MAP-Subtree(Root(h_H))
8:   π ← Dynamic-Programming(h)
9: end
10: D_{t+1} ← (D_t, a_t, i_{t+1}), t ← t + 1
11: goto 3

Figure 3: The procedure to find the MAP subtree (a) and the main loop (b).

Theorem 1 For any h ∈ H,

lim_{t→∞} (1/t) g(h | D_t, H) = −H_h(I | L, A),

where H_h(I | L, A) is the conditional entropy of i_{t+1} given l_t = Λ_h(D_t) and a_t, defined by

H_h(I | L, A) ≡ E_π { Σ_{i∈I} −P_π(i_{t+1} = i | l_t, a_t) log P_π(i_{t+1} = i | l_t, a_t) },

where P_π(·) and E_π(·) denote probability and expected value under π, respectively.

Let the history tree shown in Fig. 2(b) be h* and a history tree obtained by pruning a subtree of h* be h⁻. Then, for the environment shown in Fig. 2(a), H_{h⁻}(I | L, A) > H_{h*}(I | L, A), because h⁻ misses some relevant memories and this makes the conditional entropy increase. Since BLHT learns the history tree which maximizes g(h | D_t, H) (minimizes H_h(I | L, A)), the learned history tree does not miss any relevant memory. Next we show a limit theorem concerning the estimates of the parameters. We denote the true POMDP model by M = (S, A, I, P, O, W, x_0) and define the following parameters:

o_{i|sa} ≡ P(i_{t+1} = i | s_t = s, a_t = a) = Σ_{s'∈S} p_{s'|sa} o_{i|sas'},
μ_{sa} ≡ E(r_{t+1} | s_t = s, a_t = a) = Σ_{s'∈S} w_{sas'} p_{s'|sa}.

Then, the following theorem holds.

Theorem 2 For any leaf l ∈ L_h, a ∈ A, i ∈ I,

lim_{t→∞} θ̂^t_{i|la} = Σ_{s∈S} o_{i|sa} y_{s|la},   (5)

lim_{t→∞} ŵ^t_{la} = Σ_{s∈S} μ_{sa} y_{s|la},   (6)

where y_{s|la} ≡ P_π(s_t = s | Λ_h(D_t) = l, a_t = a).
Outline of proof: Using the Ergodic Theorem, we have

lim_{t→∞} θ̂^t_{i|la} = P_π(i_{t+1} = i | Λ_h(D_t) = l, a_t = a).

By expanding the R.H.S. of the above equation using the chain rule, we can derive Eq. (5). Eq. (6) can be derived in a similar way.

To explain what Theorem 2 means clearly, we show the relationship between y_{s|la} and the belief state x_t:

P_π(s_t = s | Λ_h(D_t) = l, a_t = a, x_0)
  = Σ_{D∈D_l} P(s_t = s | D_t = D, a_t = a, x_0) P_π(D_t = D | l_t = l, a_t = a, x_0)
  = ∫_X Σ_{D∈D_l} 1_{D^x}(D) {X_M(D)}_s P_π(D_t = D | l_t = l, a_t = a, x_0) dx
  = ∫_X x_s P_π(x_t = x | l_t = l, a_t = a, x_0) dx,

where D_l ≡ {D_t | Λ_h(D_t) = l}, 1_B(·) is the indicator function of a set B, D^x ≡ {D_t | X_M(D_t) = x}, and dx = dx_1 ··· dx_{|S|−1}. Under the ergodic assumption, by taking lim_{t→∞} of the above equation, we have

y_{la} = ∫_X x φ_{la}(x) dx,   (7)

where y_{la} = (y_{s_1|la}, ..., y_{s_{|S|−1}|la}) and φ_{la}(x) = P_π(x_t = x | Λ_h(D_t) = l, a_t = a). We see from Eq. (7) that y_{la} is the average of the belief state x_t under the conditional density φ_{la}; that is, the belief states distributed according to φ_{la} are represented by y_{la}. When the short-term memory of l gives the dominant information in D_t, φ_{la} is concentrated and y_{la} is a reasonable approximation of the belief states. An extreme case is when φ_{la} is non-zero only at a single point in X: then y_{la} = x_t whenever Λ_h(D_t) = l. Please note that given short-term memory represented by l and a, y_{la} is the most accurate state estimation. Consequently, Theorems 1 and 2 ensure that the learned HTM converges to the model which provides the most accurate predictions of percepts and rewards among H. This fact provides a solid basis for BLHT, and we believe BLHT can be compared favorably with other methods using short-term memory. Of course, Theorems 1 and 2 also say that BLHT will find the optimal policy if the environment is Markovian or semi-Markovian of order small enough for the equivalent model to be contained in H.

5 EXPERIMENT

We made experiments in various environments. In this paper, we show one of them to demonstrate the effectiveness of BLHT. The environment we used is the grid world shown in Fig. 4(a). The agent has four actions to change its location to one of the four neighboring grids; an action fails with probability 0.2. On failure, the agent does not change location with probability 0.1, or goes to one of the two grids perpendicular to the direction the agent is trying to go with probability 0.1. The agent can detect merely the existence of the four surrounding walls. The agent receives a reward of 10 when it reaches the goal, which is the grid marked with "G", and −1 when it tries to go to a grid occupied by an obstacle. At the goal, any action will relocate the agent to one of the starting states, which are marked with "S", at random. In order to achieve high performance in the environment, the agent has to select different actions for an identical immediate percept, because many of the states are aliased (i.e., they look identical by their immediate percepts). The environment has 50 states, which is among the largest problems shown in the literature of model-based RL techniques for partially observable environments. Fig. 4(b) shows the learning curve, obtained by averaging over 10 independent runs. While learning, the agent updated the policy every 10 trials (10 visits to the goal) and the
5 EXPERIMENT We made experiments in various environments. In this paper, we show one of them to demonstrate the effectiveness of BLHT. The environment we used is the grid world shown in Fig.4(a). The agent has four actions to change its location to one of the four neighboring grids, which will fail with probability 0.2. On failure, the agent does not change the location with probability 0.1 or goes to one ofthe two grids which are perpendicular to the direction the agent is trying to go with probability 0.1. The agent can detect merely the existence of the four surrounding walls. The agent receives a reward of 10 when he reaches the goal which is the grid marked with "G" and - 1 when he tries to go to a grid occupied by an obstacle. At the goal, any action will relocate the agent to one of the starting states which are marked with "S" at random. In order to achieve high performance in the environment, the agent has to select different actions for an identical immediate percept, because many of the states are aliased (i.e. they look identical by the immediate percepts). The environment has 50 states, which is among the largest problems shown in the literature of the model based RL techniques for partially observable environments. Fig.4(b) shows the learning curve which is obtained by averaging over 10 independent runs. While learning, the agent updated the policy every 10 trials (10 visits to the goal) and the An RL Algorithm in Partially Observable Environments Using Memory 1065 policy was evaluated through a run of 100,000 steps. Actions were selected using the policy or at random and the probability of selecting at random was decreased exponentially as the time goes. We used the tree which has homogeneous depth of 5 as h1i.. In Fig.4(b), the horizontal broken line indicates the average reward for the MOP model obtained by assuming perfect and complete perception. 
It gives an upper bound for the original problem, and it will be higher than the optimal value for the original problem. The learning curve shown there is close to the upper bound in the later stage.

Figure 4: The grid world (a) and the learning curve (b).

6 SUMMARY

This paper has described a RL algorithm for partially observable environments using short-term memory, which we call BLHT. We have proved that the model learned by BLHT converges to the optimal model in the given hypothesis space H, which provides the most accurate predictions of percepts and rewards, given short-term memory. We believe this fact provides a solid basis for BLHT, and BLHT can be compared favorably with other methods using short-term memory.

References

Abe, N. and M. K. Warmuth (1992). On the computational complexity of approximating distributions by probabilistic automata. Machine Learning, 9:205-260.
Bertsekas, D. P. (1987). Dynamic Programming. Prentice-Hall.
Chrisman, L. (1992). Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In Proc. the 10th National Conference on Artificial Intelligence.
Jaakkola, T., S. P. Singh, and M. I. Jordan (1995). Reinforcement learning algorithm for partially observable Markov decision problems. In Advances in Neural Information Processing Systems 7, pp. 345-352.
McCallum, R. A. (1993). Overcoming incomplete perception with utile distinction memory. In Proc. the 10th International Conference on Machine Learning.
McCallum, R. A. (1995). Instance-based utile distinctions for reinforcement learning with hidden state. In Proc. the 12th International Conference on Machine Learning.
Papadimitriou, C. H. and J. N. Tsitsiklis (1987). The complexity of Markov decision processes. Mathematics of Operations Research, 12(3):441-450.
Ron, D., Y. Singer, and N. Tishby (1994).
Learning probabilistic automata with variable memory length. In Proc. of Computational Learning Theory, pp. 35-46.
Singh, S. P., T. Jaakkola, and M. I. Jordan (1995). Learning without state-estimation in partially observable Markov decision processes. In Proc. the 12th International Conference on Machine Learning, pp. 284-292.
Suematsu, N., A. Hayashi, and S. Li (1997). A Bayesian approach to model learning in non-Markovian environments. In Proc. the 14th International Conference on Machine Learning, pp. 349-357.
|
1998
|
63
|
1,563
|
Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs David A. Nix dnix@lanl.gov Computer Research & Applications CIC-3, MS B265 Los Alamos National Laboratory Los Alamos, NM 87545 John E. Hogden hogden@lanl.gov Computer Research & Applications CIC-3, MS B265 Los Alamos National Laboratory Los Alamos, NM 87545 Abstract We describe Maximum-Likelihood Continuity Mapping (MALCOM), an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automaton architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a more realistic model of the speech production process. To evaluate the extent to which MALCOM captures speech production information, we generated continuous speech continuity maps for three speakers and used the paths through them to predict measured speech articulator data. The median correlation between the MALCOM paths obtained from only the speech acoustics and articulator measurements was 0.77 on an independent test set not used to train MALCOM or the predictor. This unsupervised model achieved correlations over speakers and articulators only 0.02 to 0.15 lower than those obtained using an analogous supervised method which used articulatory measurements as well as acoustics. 1 INTRODUCTION Hidden Markov models (HMMs) are generally considered to be the state of the art in speech recognition (e.g., Young, 1996). The strengths of the HMM framework include a rich mathematical foundation, powerful training and recognition algorithms for large speech corpora, and a probabilistic framework that can incorporate statistical phonology and syntax (Morgan & Bourlard, 1995). However, HMMs are known to be a poor model of the speech production process.
While speech production is a continuous, temporally evolving process, HMMs treat speech production as a discrete, finite-state system where the current state depends only on the immediately preceding state. Furthermore, while HMMs are designed to capture temporal information as state transition probabilities, Bourlard et al. (1995) suggest that when the transition probabilities are replaced by constant values, recognition results do not significantly deteriorate. That is, while transitions are often considered the most perceptually relevant component of speech, the conventional HMM framework is poor at capturing transition information. Given these deficiencies, we are considering alternatives to the HMM approach that maintain its strengths while improving upon its weaknesses. This paper describes one such model called Maximum-Likelihood Continuity Mapping (MALCOM). We first review a general statistical framework for speech recognition so that we can compare the HMM and MALCOM formulations. Then we consider what the abstract hidden state represents in MALCOM, demonstrating empirically that the paths through MALCOM's hidden space are closely related to the movements of the speech production articulators. 2 A GENERAL FRAMEWORK FOR SPEECH RECOGNITION Consider an unknown speech waveform that is converted by a front-end signal-processing module into a sequence of acoustic vectors X. Given a space of possible utterances, W, the task of speech recognition is to return the most likely utterance W* given the observed acoustic sequence X. Using Bayes' rule this corresponds to W* = argmax_W P(W|X) = argmax_W P(X|W) P(W) / P(X). (1) In recognition, P(X) is typically ignored because it is constant over all W, and the posterior P(W|X) is estimated as the product of the prior probability of the word sequence, P(W), and the probability that the observed acoustics were generated by the word sequence, P(X|W).
The prior P(W) is estimated by a language model, while the production probability P(X|W) is estimated by an acoustic model. In continuous speech recognition, the product of these terms must be maximized over W; however, in this paper, we will restrict our attention to the form of the acoustical model only. Every candidate utterance W corresponds to a sequence of word/phone models M_w such that P(X|W) = P(X|M_w), and each M_w considers all possible paths through some "hidden" space. Thus, for each candidate utterance, we must calculate P(X|M_w) = ∫ P(X|Y, M_w) P(Y|M_w) dY, (2) where Y is some path through the hidden space. 2.1 HIDDEN MARKOV MODELS Because HMMs are finite-state machines with a given fixed architecture, the path Y through the hidden space corresponds to a series of discrete states, simplifying the integral of Eq. (2) to a sum. However, to avoid computing the contribution of all possible paths, the Viterbi approximation, considering only the single path that maximizes Eq. (2), is frequently used without much loss in recognition performance (Morgan & Bourlard, 1995). Thus, P(X|M_w) ≈ max_Y P(X|Y, M_w) P(Y|M_w). (3) The first term corresponds to the product of the emission probabilities of the acoustics given the state sequence and is typically estimated by mixtures of high-dimensional Gaussian densities. The second term corresponds to the product of the state transition probabilities. However, because Bourlard et al. (1995) found that this second term contributes little to recognition performance, the modeling power of the conventional HMM must reside in the first term. Training the HMM system involves estimating both the emission and the transition probabilities from real speech data. The Baum-Welch/forward-backward algorithm (e.g., Morgan & Scofield, 1994) is the standard computationally efficient algorithm for iteratively estimating these distributions.
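The Viterbi approximation of Eq. (3) replaces the sum over all hidden state sequences by the single best one. A minimal sketch for a toy discrete HMM, worked in the log domain; the transition, emission, and initial probabilities below are invented purely for illustration:

```python
import numpy as np

def viterbi_score(log_emit, log_trans, log_init):
    """Log-probability of the single best state path, i.e. the Viterbi
    approximation max_Y P(X|Y, M) P(Y|M) of Eq. (3), in the log domain.
    log_emit:  (T, S) log P(x_t | state); log_trans: (S, S); log_init: (S,)."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]        # best log score ending in each state
    for t in range(1, T):
        # maximize over the predecessor state for each current state
        delta = np.max(delta[:, None] + log_trans, axis=0) + log_emit[t]
    return delta.max()

# toy 2-state, 3-observation example (all probabilities invented)
log_init = np.log([0.6, 0.4])
log_trans = np.log([[0.7, 0.3], [0.4, 0.6]])
log_emit = np.log([[0.9, 0.2], [0.8, 0.3], [0.1, 0.7]])
best = viterbi_score(log_emit, log_trans, log_init)
```

For T observations and S states this costs O(T S^2), versus O(S^T) for enumerating every path in Eq. (2).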
2.2 MAXIMUM-LIKELIHOOD CONTINUITY MAPPING (MALCOM) In contrast to HMMs, the multi-dimensional MALCOM hidden space is continuous: there are an infinite number of states and paths through them. While the HMM is constrained by a fixed architecture, MALCOM is constrained by the notion of continuity of the hidden path. That is, the path must be smooth and continuous: it may not carry any energy above a given cutoff frequency. Unlike the discrete path in an HMM, the smooth hidden path in MALCOM attempts to emulate the motion of the speech articulators in what we call a continuity map (CM). Unless we know how to evaluate the integral of Eq. (2) (which we currently do not), we must also make the Viterbi approximation and approximate P(X|M_w) by considering only the single path that maximizes the likelihood of the acoustics X given the utterance model M_w, resulting in Eq. (3) once again. Analogously, the first term, P(X|Y, M_w), corresponds to the acoustic generation probability given the hidden path, and the second term corresponds to the probability of the hidden path given the utterance model. This paper focuses on the first term because this is the term that produces conventional HMM performance.[1] Common to all M_w is a set of N probability density functions (pdfs) Φ that define the CM hidden space, modeling the likelihood of Y given X for an N-code vector quantization (VQ) of the acoustic space. Because these pdfs are defined over the low-dimensional CM space instead of the high-dimensional acoustic space (e.g., 6 vs. 40+), MALCOM requires many fewer parameters to be estimated than the corresponding HMM. 3 THE MALCOM ALGORITHM We now turn to developing an algorithm to estimate both the CM pdfs Φ and the corresponding paths Y that together maximize the likelihood of a given time series of acoustics, L = P(X|Y, Φ). This is an extension of the method first proposed by Hogden (1995), in which he instead maximized P(Y|X, Φ) using vowel data from a single speaker.
Starting with random but smooth Y, the MALCOM training algorithm generates a CM by iterating between the following two steps: (1) Given Y, reestimate Φ to maximize L; and (2) Given Φ, reestimate smooth paths Y to maximize L. 3.1 LOG LIKELIHOOD FUNCTION To specify the log likelihood function L, we make two dependence claims and one independence assumption. First we claim that y_t depends (to at least some small extent) on all other y in the utterance, an expression of the continuity constraint described above. We make another natural claim that x_t depends on y_t, that the path configuration at time t influences the corresponding acoustics. However, we do make the conditional independence assumption that L = P(X|Y, Φ) = Π_{t=1}^n P(x_t|y_t, Φ). (4) Note that Eq. (4) does not assume that each x_t is independent of x_{t-1} (as is often assumed in data modeling); it only assumes that the conditioning of x_t on y_t is independent from t-1 to t. For example, because x_t depends on y_t, y_t depends on all other y (the smoothness constraint), and x_{t-1} depends on y_{t-1}, x_t is not assumed to be independent of all other x's in the utterance. With a log transformation and an invocation of Bayes' rule, we obtain the MALCOM log likelihood function: ln L = Σ_{t=1}^n [ln P(y_t|x_t, Φ) + ln P(x_t) - ln P(y_t|Φ)]. (5) We model each P(y_t|x_t, Φ) by a probability density function (pdf) p[y_t|x_t, Φ_j(x_t)], where the particular model Φ_j depends on which of the N VQ codes x_t is assigned to. [1] However, we are currently developing a model of P(Y|M_w) to replace the corresponding (and useless) term in the conventional HMM formulation as well (Hogden et al., 1998).
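Once the pdfs are fixed, Eq. (5) can be evaluated directly. A sketch under the paper's simplest setting (one radially symmetric Gaussian per VQ code with a shared variance), dropping the ln P(x_t) term, which is constant with respect to both Y and Φ; all array shapes and names here are our own assumptions:

```python
import numpy as np

def log_gauss(y, mu, sigma2):
    # log density of an isotropic Gaussian N(y; mu, sigma2 * I)
    d = y.shape[-1]
    sq = np.sum((y - mu) ** 2, axis=-1)
    return -0.5 * d * np.log(2 * np.pi * sigma2) - sq / (2 * sigma2)

def malcom_log_likelihood(Y, codes, mus, prior, sigma2):
    """ln L of Eq. (5), omitting the ln P(x_t) term (constant in Y and Phi).
    Y: (n, d) hidden path; codes: (n,) VQ code of each acoustic frame;
    mus: (N, d) pdf means; prior: (N,) relative code frequencies P(x_j)."""
    ll_cond = log_gauss(Y, mus[codes], sigma2)                  # ln p(y_t | x_t, Phi)
    all_ll = log_gauss(Y[:, None, :], mus[None, :, :], sigma2)  # (n, N)
    ll_marg = np.log((np.exp(all_ll) * prior).sum(axis=1))      # ln P(y_t | Phi)
    return float((ll_cond - ll_marg).sum())
```

The marginal term uses the same sum over VQ partitions, P(y_t|Φ) ≈ Σ_j p(y_t|x_j, Φ_j) P(x_j), that the paper introduces in the next section.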
Here we use a simple multi-dimensional Gaussian for each pdf, but we are currently exploring the use of multi-modal mixtures of Gaussians to represent the pdfs for sounds such as stop consonants, for which the inverse map from acoustics to articulation may not be unique (Nix, 1998). Next, we need an estimate of P(y_t|Φ), which can be obtained by summing over all VQ partitions: P(y_t|Φ) ≈ Σ_{j=1}^N p(y_t|x_j, Φ_j) P(x_j). We estimate P(x_j) by calculating the relative frequency of each acoustic code in the VQ codebook. 3.2 PDF ESTIMATION For step (1) of training, we use gradient-based optimization to reestimate the means of the Gaussian pdfs for each acoustic partition, where the gradient of Eq. (5) with respect to the mean of pdf i is ∇_{μ_i} ln L = Σ_{t: x_t ∈ X_i} Σ^{-1}(y_t - μ_i) - Σ_{t=1}^n [p(y_t|x_i, Φ_i) P(x_i) Σ^{-1}(y_t - μ_i)] / [Σ_{j=1}^N p(y_t|x_j, Φ_j) P(x_j)], (6) where Σ is the covariance matrix for each pdf. For the results in this paper, we use a common radially symmetric covariance matrix for all pdfs and reestimate the covariance matrix after each path optimization step.[2] In doing the optimization, we employ the following algorithm: 1. Make an initial guess of each μ_i as the mean of the path configurations corresponding to the observed acoustics assigned to code i. 2. Construct ∇_μ ln L by considering Eq. (6) over all N acoustic partitions. 3. Determine a search direction for the optimization using, for example, conjugate gradients, and perform a line search along this direction (Press et al., 1988). 4. Repeat steps [2]-[3] until convergence. To avoid potential degenerate solutions, after each pdf optimization step, the dimensions of the CM are orthogonalized. Furthermore, because the scale of the continuity map is meaningless (only its topological arrangement matters), the N pdf means are scaled to zero mean, unit variance before each path optimization step.
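The gradient of Eq. (6) can be sanity-checked against a finite-difference approximation of Eq. (5). A sketch for a covariance shared by all pdfs, Σ = σ²I; the function and variable names are ours, not the paper's:

```python
import numpy as np

def grad_mu(Y, codes, mus, prior, sigma2, i):
    """Gradient of the Eq. (5) log likelihood with respect to pdf mean mu_i
    (Eq. (6)), assuming a shared isotropic covariance Sigma = sigma2 * I."""
    diff_i = (Y - mus[i]) / sigma2                 # Sigma^{-1} (y_t - mu_i)
    # first term: frames whose acoustic VQ code is i
    g = diff_i[codes == i].sum(axis=0)
    # second term: pull from the marginal P(y_t|Phi), weighted by the
    # posterior responsibility of code i for each frame
    sq = np.sum((Y[:, None, :] - mus[None, :, :]) ** 2, axis=-1)
    dens = np.exp(-sq / (2.0 * sigma2)) * prior    # proportional to p(y_t|x_j) P(x_j)
    resp = dens[:, i] / dens.sum(axis=1)
    return g - (resp[:, None] * diff_i).sum(axis=0)
```

The shared Gaussian normalization constant cancels in the responsibilities, so it is omitted from `dens`.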
3.3 PATH ESTIMATION For step (2) of training, we use gradient-based optimization to reestimate Y, where the gradient of the log likelihood function with respect to a specific y_t is given by ∇_{y_t} ln L = [∇_{y_t} p(y_t|x_t, Φ(x_t))] / p(y_t|x_t, Φ(x_t)) - [∇_{y_t} Σ_{j=1}^N p(y_t|x_j, Φ_j) P(x_j)] / [Σ_{j=1}^N p(y_t|x_j, Φ_j) P(x_j)]. (7) In doing the optimization, we employ the following gradient-based algorithm: 1. Make an initial guess of the path Y^0 as the means of the pdfs corresponding to the observed acoustic sequence X. 2. Low-pass filter Y^0. 3. Construct ∇_Y ln L by considering Eq. (7) over all t. 4. Determine a search direction for the optimization using, for example, conjugate gradients (Press et al., 1988). 5. Low-pass filter this search direction using the same filter as in step [2]. 6. Perform a line search along the filtered direction (Press et al., 1988). 7. Repeat steps [3]-[6] until convergence. Because neither the line search direction nor the initial estimate Y^0 contains energy above the cutoff frequency of the low-pass filter, their linear addition (the next estimate of Y) will not contain energy above the cutoff frequency either. Thus, steps [2] and [5] implement the desired smoothness constraint. 4 COMPARING MALCOM PATHS TO SPEECH ARTICULATION To evaluate our claim that MALCOM paths are topologically related to articulator motions, we construct a regression predictor from Y to measured articulator data using the training data and test the quality of this predictor on an independent test set. Our speech corpus consists of data from two male and one female native speakers of German. This data was obtained from Dr. Igor Zlokarnik and recorded at the Technical University of Munich, Germany using electro-magnetic articulography (EMA) (Perkell et al., 1992). [2] However, we are currently exploring the effects of individual and diagonal covariance matrices.
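Steps [2] and [5] above amount to projecting onto the set of band-limited paths. One simple way to realize such a filter is an FFT-based brick-wall low-pass; the paper does not specify its filter design, so this particular choice is our assumption:

```python
import numpy as np

def lowpass(path, cutoff_hz, fs):
    """Project a (samples, dims) path onto band-limited paths by zeroing
    all FFT components above cutoff_hz (a brick-wall low-pass filter)."""
    n = path.shape[0]
    F = np.fft.rfft(path, axis=0)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    F[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(F, n=n, axis=0)
```

Because this projection is linear and idempotent, a filtered initial path plus any scaled, filtered search direction stays band-limited, which is exactly the property the update in steps [2] and [5] relies on.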
Each speaker's articulatory measurements and acoustics were recorded for the same 108 sentences, where each sentence was about 4 seconds long. The acoustics were recorded using a room-placed microphone and sampled using 16-bit resolution at 16 kHz. Prior to receiving the data from Munich, the data were resampled at 11025 Hz. To represent the acoustic signal in compact vector time-series, we used 256-sample (23.2 msec) Hamming-windowed frames, with a new frame starting every 5.8 msec (75% overlap). We transform each frame into a 13th-order LPC-cepstral coefficient vector a_t (12 cepstral features plus log gain; see Morgan & Scofield, 1994). A full acoustical feature vector x_t consists of a window of seven frames such that x_t is made up of the frames {a_{t-6}, a_{t-4}, a_{t-2}, a_t, a_{t+2}, a_{t+4}, a_{t+6}}. To VQ the acoustic space we used the classical k-means algorithm (e.g., Bishop, 1995), but we used 512 codes to model the vowel data, and 256 codes each to model the stop consonants, the fricatives, the nasals, and the liquids (1536 codes combined).[3] The articulatory data consist of the (x, y) coordinates of 4 coils along the tongue and the y-coordinates of coils on the jaw and lower lip. Figure 1 illustrates the approximate location of each coil. The data were originally sampled at 250 Hz but were resampled to 172.26 Hz to match one articulatory sample for each 75%-overlapping acoustic frame of 256 samples. The articulatory data were subsequently low-pass filtered at 15 Hz to remove measurement noise. Sentences 1-90 were used as a training set, and sentences 91-108 were withheld for evaluation. A separate CM was generated for each speaker using the training data. We used an 8 Hz cutoff frequency because the measured articulatory data had very little energy above 8 Hz, and a 6-dimensional continuity map was used because the first six principal components capture 99% of the variance of the corresponding articulator data (Nix, 1998).
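The VQ step above can be sketched with a plain k-means codebook builder. The paper uses classical k-means; the deterministic farthest-point initialization below is our own choice for reproducibility, not the paper's:

```python
import numpy as np

def kmeans_vq(X, n_codes, n_iter=20):
    """Classical k-means VQ codebook for acoustic frames X of shape
    (frames, features). Initialized by a simple farthest-point heuristic."""
    codebook = [X[0]]
    for _ in range(n_codes - 1):
        # pick the frame farthest from every code chosen so far
        d2 = np.min([((X - c) ** 2).sum(-1) for c in codebook], axis=0)
        codebook.append(X[d2.argmax()])
    codebook = np.array(codebook, dtype=float)
    for _ in range(n_iter):
        # assign each frame to its nearest code
        d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        codes = d2.argmin(axis=1)
        # move each code to the mean of its assigned frames
        for j in range(n_codes):
            if np.any(codes == j):
                codebook[j] = X[codes == j].mean(axis=0)
    return codebook, codes
```

In the paper separate codebooks (512 + 4 x 256 codes) are built per broad phone class; the sketch shows a single codebook only.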
[3] This acoustic representation and VQ scheme were determined to work well for modeling real articulator data (Nix, 1998), so they were used here as well. Figure 1: Approximate positions of EMA coils for speech articulation measurements: tongue tip (T1), tongue middle (T2), tongue dorsum (T3), tongue back (T4), lower lip (LL), and lower jaw (LJ). Because the third term in Eq. (5) is computationally complex, we approximated Eq. (5) by only its first term (the second term is constant during training) until ln L, calculated at the end of each iteration using all terms, started to decrease. At this point we started using both the first and third terms of Eq. (5). In each pdf and path optimization step, our convergence criterion was when the maximum movement of a mean or a path was < 10^-4. Our convergence criterion for the entire algorithm was when the correlation of the paths from one full iteration of pdf and path optimization to another was > 0.99 in all dimensions. This usually took about 30 iterations. To evaluate the extent to which MALCOM hidden paths capture information related to articulation, we used the same training set to estimate a non-linear regression function from the output generated by MALCOM to the corresponding measured articulator data. We used an ensemble of 10 single-hidden-layer, 32-hidden-unit, multi-layer perceptrons trained on different 2/3-training, 1/3-early-stopping partitions of the training set, where the results of the ensemble on the test set were averaged (e.g., Bishop, 1995). A linear regression produced results approximately 10% worse than those we report here.
To contrast with the unsupervised MALCOM method, we also tested a supervised method in which the articulatory data was available for training as well as evaluation. This involved only the pdf optimization step of MALCOM because the paths were fixed as the articulator measurements. The resulting pdfs were then used in the path optimization step to determine paths for the test data acoustics. We could then measure what fraction of this supervised performance the unsupervised MALCOM attained. 5 RESULTS AND CONCLUSIONS The results of this regression on the test set are plotted in Figure 2. The MALCOM paths had a median correlation of 0.77 with the actual articulator data, compared to 0.84 for the comparable supervised method. Thus, using only the speech acoustics, MALCOM generated continuity maps with correlations to real articulator measurements only 0.02 to 0.15 lower than the corresponding supervised model which used articulatory measurements as well as acoustics. Given that (1) MALCOM fits into the same probabilistic framework for speech recognition as HMMs and (2) MALCOM's hidden paths capture considerable information about the speech production process, we believe that MALCOM will prove to be a viable alternative to the HMM for speech processing tasks. Our current work emphasizes developing a word model to complete the MALCOM formulation and test a full speech recognition system. Furthermore, MALCOM is applicable to any other task to which HMMs can be applied, including fraud detection (Hogden, 1997) and text processing. Figure 2: Correlation between estimated and actual articulator trajectories (T1x, T1y, T2x, T2y, T3x, T3y, T4x, T4y, LLy, LJy) on the independent test set, averaged across speakers. Each full bar is the performance of the supervised analogue to MALCOM, and the horizontal line on each bar is the performance of MALCOM itself.
Acknowledgments We would like to thank James Howse and Mike Mozer for their helpful comments on this manuscript and Igor Zlokarnik for sharing his data with us. This work was performed under the auspices of the U.S. Department of Energy. References Bishop, C.M. (1995). Neural Networks for Pattern Recognition, NY: Oxford University Press, Inc. Bourlard, H., Konig, Y., & Morgan, N. (1995). "REMAP: Recursive estimation and maximization of a posteriori probabilities, application to transition-based connectionist speech recognition," International Computer Science Institute Technical Report TR-94-064. Hogden, J. (1995). "Improving on hidden Markov models: an articulatorily constrained, maximum-likelihood approach to speech recognition and speech coding," Los Alamos National Laboratory Technical Report, LA-UR-96-3945. Hogden, J. (1997). "Maximum likelihood continuity mapping for fraud detection," Los Alamos National Laboratory Technical Report, LA-UR-97-992. Hogden, J., Nix, D.A., Gracco, V., & Rubin, P. (1998). "Stochastic word models for articulatorily constrained speech recognition and synthesis," submitted to Acoustical Society of America Conference, 1998. Morgan, N. & Bourlard, H.A. (1995). "Neural Networks for Statistical Recognition of Continuous Speech," Proceedings of the IEEE, 83(5), 742-770. Morgan, D.P., & Scofield, C.L. (1992). Neural Networks and Speech Processing, Boston, MA: Kluwer Academic Publishers. Nix, D.A. (1998). Probabilistic methods for inferring vocal-tract articulation from speech acoustics, Ph.D. Dissertation, U. of CO at Boulder, Dept. of Computer Science, in preparation. Perkell, J.S., Cohen, M.H., Svirsky, M.A., Matthies, M.L., Garabieta, I., & Jackson, M.T.T. (1992). "Electromagnetic midsagittal articulometer systems for transducing speech articulatory movements," Journal of the Acoustical Society of America, 92(6), 3078-3096. Press, W.H., Teukolsky, S.A., Vetterling, W.T., & Flannery, B.P. (1988).
Numerical Recipes in C, Cambridge University Press. Young, S.J. (1996). "A review of large-vocabulary continuous speech recognition," IEEE Signal Processing Magazine, September, 45-57.
|
1998
|
64
|
1,564
|
Semiparametric Support Vector and Linear Programming Machines Alex J. Smola, Thilo T. Frieß, and Bernhard Schölkopf GMD FIRST, Rudower Chaussee 5, 12489 Berlin {smola, friess, bs}@first.gmd.de Abstract Semiparametric models are useful tools in the case where domain knowledge exists about the function to be estimated or emphasis is put onto understandability of the model. We extend two learning algorithms, Support Vector machines and Linear Programming machines, to this case and give experimental results for SV machines. 1 Introduction One of the strengths of Support Vector (SV) machines is that they are nonparametric techniques, where one does not have to e.g. specify the number of basis functions beforehand. In fact, for many of the kernels used (not the polynomial kernels), like Gaussian rbf-kernels, it can be shown [6] that SV machines are universal approximators. While this is advantageous in general, parametric models are useful techniques in their own right. Especially if one happens to have additional knowledge about the problem, it would be unwise not to take advantage of it. For instance it might be the case that the major properties of the data are described by a combination of a small set of linearly independent basis functions {φ_1(·), ..., φ_n(·)}. Or one may want to correct the data for some (e.g. linear) trends. Secondly it also may be the case that the user wants to have an understandable model, without sacrificing accuracy. For instance many people in life sciences tend to have a preference for linear models. This may be some motivation to construct semiparametric models, which are both easy to understand (for the parametric part) and perform well (often due to the nonparametric term). For more advocacy on semiparametric models see [1]. A common approach is to fit the data with the parametric model and train the nonparametric add-on on the errors of the parametric part, i.e. fit the nonparametric part to the errors. We show in Sec.
4 that this is useful only in a very restricted situation. In general it is impossible to find the best model amongst a given class for different cost functions by doing so. The better way is to solve a convex optimization problem like in standard SV machines, however with a different set of admissible functions f(x) = <w, ψ(x)> + Σ_{i=1}^n β_i φ_i(x). (1) Note that this is not so much different from the classical SV setting [10], where one uses functions of the type f(x) = <w, ψ(x)> + b. (2) 2 Semiparametric Support Vector Machines Let us now treat this setting more formally. For the sake of simplicity in the exposition we will restrict ourselves to the case of SV regression and only deal with the ε-insensitive loss function |ξ|_ε = max{0, |ξ| - ε}. Extensions of this setting are straightforward and follow the lines of [7]. Given a training set of size ℓ, X := {(x_1, y_1), ..., (x_ℓ, y_ℓ)}, one tries to find a function f that minimizes the functional of the expected risk[1] R[f] = ∫ c(f(x) - y) p(x, y) dx dy. (3) Here c(ξ) denotes a cost function, i.e. how much deviations between prediction and actual training data should be penalized. Unless stated otherwise we will use c(ξ) = |ξ|_ε. As we do not know p(x, y) we can only compute the empirical risk R_emp[f] (i.e. the training error). Yet, minimizing the latter is not a good idea if the model class is sufficiently rich and will lead to overfitting. Hence one adds a regularization term T[f] and minimizes the regularized risk functional R_reg[f] = Σ_{i=1}^ℓ c(f(x_i) - y_i) + λ T[f] with λ > 0. (4) The standard choice in SV regression is to set T[f] = (1/2)||w||². This is the point of departure from the standard SV approach. While in the latter f is described by (2), we will expand f in terms of (1). Effectively this means that there exist functions φ_1(·), ..., φ_n(·) whose contribution is not regularized at all.
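The pieces of Eq. (4), the ε-insensitive cost and the regularized risk with the standard choice T[f] = (1/2)||w||², are simple to state in code. An illustrative sketch with all names our own:

```python
import numpy as np

def eps_insensitive(xi, eps=0.05):
    # |xi|_eps = max(0, |xi| - eps): deviations inside the eps-tube cost nothing
    return np.maximum(0.0, np.abs(xi) - eps)

def regularized_risk(pred, y, w, lam, eps=0.05):
    """R_reg[f] of Eq. (4) with the standard SV choice T[f] = 0.5 * ||w||^2."""
    return eps_insensitive(pred - y, eps).sum() + lam * 0.5 * np.dot(w, w)
```

Note that only w enters the penalty; any β_i of the parametric expansion (1) would be absent here, which is exactly what "not regularized at all" means.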
If n is sufficiently smaller than ℓ this need not be a major concern, as the VC-dimension of this additional class of linear models is n, hence the overall capacity control will still work, provided the nonparametric part is restricted sufficiently. Figure 1 explains the effect of choosing a different structure in detail. Solving the optimization equations for this particular choice of a regularization term, with expansion (1), the ε-insensitive loss function, and introducing kernels following [2], we arrive at the following primal optimization problem: minimize (λ/2)||w||² + Σ_{i=1}^ℓ (ξ_i + ξ_i*) subject to <w, ψ(x_i)> + Σ_{j=1}^n β_j φ_j(x_i) - y_i ≤ ε + ξ_i, y_i - <w, ψ(x_i)> - Σ_{j=1}^n β_j φ_j(x_i) ≤ ε + ξ_i*, and ξ_i, ξ_i* ≥ 0. (5) Here k(x, x') has been written as <ψ(x), ψ(x')>. Solving (5) for its Wolfe dual yields: maximize -(1/2) Σ_{i,j=1}^ℓ (α_i - α_i*)(α_j - α_j*) k(x_i, x_j) - ε Σ_{i=1}^ℓ (α_i + α_i*) + Σ_{i=1}^ℓ y_i (α_i - α_i*) subject to Σ_{i=1}^ℓ (α_i - α_i*) φ_j(x_i) = 0 for all 1 ≤ j ≤ n, and α_i, α_i* ∈ [0, 1/λ]. (6) Note the similarity to the standard SV regression model. Figure 1: Two different nested subsets (solid and dotted lines) of hypotheses and the optimal model (+) in the realizable case. Observe that the optimal model is already contained in a much smaller subset (in this diagram size corresponds to the capacity of a subset) of the structure with solid lines than in the structure denoted by the dotted lines. Hence prior knowledge in choosing the structure can have a large effect on generalization bounds and performance. [1] More general definitions, mainly in terms of the cost function, do exist, but for the sake of clarity in the exposition we ignored these cases. See [10] or [7] for further details on alternative definitions of risk functionals.
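The dual (6) is a convex quadratic program with n equality constraints and box constraints. A small numerical sketch using a general-purpose solver (SLSQP) rather than the interior-point code [9] used in the paper; recovering the β_j of Eq. (7) from the equality-constraint multipliers is solver-dependent and omitted here:

```python
import numpy as np
from scipy.optimize import minimize

def semiparam_svr_dual(K, Phi, y, lam, eps):
    """Numerically solve the Wolfe dual (6).
    Variables z = [alpha, alpha*] of length 2l; K is the (l, l) kernel
    matrix; Phi is the (l, n) matrix of unregularized basis values
    phi_j(x_i). Returns d_i = alpha_i - alpha_i^*."""
    l = len(y)

    def neg_dual(z):
        d = z[:l] - z[l:]                 # alpha_i - alpha_i^*
        # negate the dual objective because `minimize` minimizes
        return 0.5 * d @ K @ d + eps * z.sum() - y @ d

    cons = [{"type": "eq", "fun": lambda z: Phi.T @ (z[:l] - z[l:])}]
    res = minimize(neg_dual, np.full(2 * l, 1e-3), method="SLSQP",
                   bounds=[(0.0, 1.0 / lam)] * (2 * l), constraints=cons)
    return res.x[:l] - res.x[l:]
```

With a single constant basis function φ_1 ≡ 1 the equality constraint reduces to the familiar Σ_i (α_i - α_i*) = 0 of standard SV regression.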
The objective function and the box constraints on the Lagrange multipliers α_i, α_i* remain unchanged. The only modification comes from the additional unregularized basis functions. Whereas in the standard SV case we only had a single (constant) function b·1, we now have an expansion in the basis β_i φ_i(·). This gives rise to n constraints instead of one. Finally f can be found as f(x) = Σ_{i=1}^ℓ (α_i - α_i*) k(x_i, x) + Σ_{i=1}^n β_i φ_i(x), since w = Σ_{i=1}^ℓ (α_i - α_i*) ψ(x_i). (7) The only difficulty remaining is how to determine β_i. This can be done by exploiting the Karush-Kuhn-Tucker optimality conditions or, much more easily, by using an interior point optimization code [9]. In the latter case the variables β_i can be obtained as the dual variables of the dual (dual dual = primal) optimization problem (6) as a by-product of the optimization process. This is also how these variables have been obtained in the experiments in the current paper. 3 Semiparametric Linear Programming Machines Equation (4) gives rise to the question whether completely different choices of regularization functionals would not also lead to good algorithms. Again we will allow functions as described in (7). Possible choices are T[f] = (1/2)||w||² + Σ_{i=1}^n |β_i| (8) or T[f] = Σ_{i=1}^ℓ |α_i - α_i*| (9) or T[f] = Σ_{i=1}^ℓ |α_i - α_i*| + (1/2) Σ_{i,j=1}^n β_i β_j M_ij (10) for some positive semidefinite matrix M. This is a simple extension of existing methods like Basis Pursuit [3] or Linear Programming Machines for classification (see e.g. [4]). The basic idea in all these approaches is to have two different sets of basis functions that are regularized differently, or where a subset may not be regularized at all. This is an efficient way of encoding prior knowledge or the preference of the user, as the emphasis obviously will be put mainly on the functions with little or no regularization at all. Eq.
(8) is essentially the SV estimation model where an additional linear regularization term has been added for the parametric part. In this case the constraints of the optimization problem (6) change into -1 ≤ Σ_{i=1}^ℓ (α_i - α_i*) φ_j(x_i) ≤ 1 for all 1 ≤ j ≤ n, with α_i, α_i* ∈ [0, 1/λ]. (11) It makes little sense (from a technical viewpoint) to compute Wolfe's dual objective function in (10), as the problem does not get significantly easier by doing so. The best approach is to solve the corresponding optimization problem directly by some linear or quadratic programming code, e.g. [9]. Finally (10) can be reduced to the case of (8) by renaming variables accordingly and a proper choice of M. 4 Why Backfitting is not sufficient One might think that the approach presented above is quite unnecessary and overly complicated for semiparametric modelling. In fact, one could try to fit the data to the parametric model first, and then fit the nonparametric part to the residuals. In most cases, however, this does not lead to finding the minimum of (4). We will show this at a simple example. Take a SV machine with linear kernel (i.e. k(x, x') = <x, x'>) in one dimension and a constant term as parametric part (i.e. f(x) = wx + β). This is one of the simplest semiparametric SV machines possible. Now suppose the data was generated by y_i = x_i where x_i ≥ 1 (12) without noise. Clearly then also y_i ≥ 1 for all i. By construction the best overall fit of the pair (β, w) will be arbitrarily close to (0, 1) if the regularization parameter λ is chosen sufficiently small. For backfitting one first carries out the parametric fit to find a constant β minimizing the term Σ_{i=1}^ℓ c(y_i - β). Depending on the chosen cost function c(·), β will be the mean (L2-error), the median (L1-error), etc., of the set {y_1, ..., y_ℓ}. As all y_i ≥ 1,
Figure 2: Left: Basis functions used in the toy example.
Note the different length scales of sin x and sinc 2πx. For convenience the functions were shifted by an offset of 2 and 4 respectively. Right: Training data denoted by '+', nonparametric (dash-dotted line), semiparametric (solid line), and parametric regression (dots). The regularization constant was set to λ = 2. Observe that the semiparametric model picks up the characteristic wiggles of the original function.
also β ≥ 1, which is surely not the optimal solution of the overall problem, as there β would be close to 0, as seen above. Hence not even in the simplest of all settings does backfitting minimize the regularized risk functional; thus one cannot expect the latter to happen in the more complex case either. There exists only one case in which backfitting would suffice, namely if the function spaces spanned by the kernel expansion {k(x_i, ·)} and {φ_i(·)} were orthogonal. Consequently in general one has to jointly solve for both the parametric and the semiparametric part. 5 Experiments The main goal of the experiments shown is a proof of concept and to display the properties of the new algorithm. We study a modification of the Mexican hat function, namely f(x) = sin x + sinc(2π(x - 5)). (13) Data is generated by an additive noise process, i.e. y_i = f(x_i) + ξ_i, where ξ_i is additive noise. For the experiments we choose Gaussian rbf-kernels with width σ = 1/4, normalized to maximum output 1. The noise is uniform with 0.2 standard deviation, the ε-insensitive cost function |·|_ε with ε = 0.05. Unless stated otherwise averaging is done over 100 datasets with 50 samples each. The x_i are drawn uniformly from the interval [0, 10]. L1 and L2 errors are computed on the interval [0, 10] with uniform measure. Figure 2 shows the function and typical predictions in the nonparametric, semiparametric, and parametric setting.
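The backfitting failure of Sec. 4 is easy to reproduce numerically. The sketch below uses squared loss throughout (so the parametric fit is the mean and both problems have closed forms), which is our simplification of the ε-insensitive setting in the paper:

```python
import numpy as np

# toy data generated by y_i = x_i with x_i >= 1 (Eq. (12)), no noise
x = np.linspace(1.0, 3.0, 21)
y = x.copy()
lam = 1e-6  # small regularization: the best joint fit is close to (beta, w) = (0, 1)

# backfitting: fit the constant beta first (squared loss => the mean of y),
# then fit w on the residuals
beta_bf = y.mean()                                 # >= 1, since all y_i >= 1
w_bf = (x @ (y - beta_bf)) / (x @ x + lam / 2.0)

# joint fit: minimize (lam/2) w^2 + sum_i (w x_i + beta - y_i)^2 over (w, beta)
A = np.stack([x, np.ones_like(x)], axis=1)
w_j, beta_j = np.linalg.solve(A.T @ A + np.diag([lam / 2.0, 0.0]), A.T @ y)
```

Backfitting pins the constant at the mean of y (here 2.0), while the joint problem recovers (β, w) near (0, 1), exactly the discrepancy the section argues for.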
One can observe that the semiparametric model, including sin x, cos x and the constant function as basis functions, generalizes better than the standard SV machine. Fig. 3 shows that the generalization performance is better in the semiparametric case. The length of the weight vector of the kernel expansion ||w|| is displayed in Fig. 4. It is smaller in the semiparametric case for practical values of the regularization strength. To make a more realistic comparison, model selection (how to determine 1/λ) was carried out by 10-fold cross validation for both algorithms independently for all 100 datasets. Table 1 shows generalization performance for a nonparametric model, a correctly chosen and an incorrectly chosen semiparametric model. The experiments indicate that cases in which prior knowledge exists on the type of functions to be used will benefit from semiparametric modelling. Future experiments will show how much can be gained in real world examples. Figure 3: L1 error (left) and L2 error (right) of the nonparametric / semiparametric regression computed on the interval [0, 10] vs. the regularization strength 1/λ. The dotted lines (although hardly visible) denote the variance of the estimate. Note that in both error measures the semiparametric model consistently outperforms the nonparametric one. Figure 4: Length of the weight vector w in feature space, (Σ_{i,j} (α_i - α_i*)(α_j - α_j*) k(x_i, x_j))^{1/2}, vs. regularization strength. Note that ||w||, controlling the capacity of that part of the function belonging to the kernel expansion, is smaller (for practical choices of the regularization term) in the semiparametric than in the nonparametric model.
If this difference is sufficiently large, the overall capacity of the resulting model is smaller in the semiparametric approach. As before, dotted lines indicate the variance.

Figure 5: Estimate of the parameters for sin x (top picture) and cos x (bottom picture) in the semiparametric model vs. regularization strength 1/λ. The dotted lines above and below show the variation of the estimate given by its variance. Training set size was l = 50. Note the small variation of the estimate. Also note that even in the parametric case 1/λ → 0 neither the coefficient for sin x converges to 1, nor does the corresponding term for cos x converge to 0. This is due to the additional frequency contributions of sinc 2πx.

         | Nonparam.            | Semiparam. (sin x, cos x, 1) | Semiparam. (sin 2x, cos 2x, 1)
L1 error | 0.1263 ± 0.0064 (12) | 0.0887 ± 0.0018 (82)         | 0.1267 ± 0.0064 (6)
L2 error | 0.1760 ± 0.0097 (12) | 0.1197 ± 0.0046 (82)         | 0.1864 ± 0.0124 (6)

Table 1: L1 and L2 error for model selection by 10-fold crossvalidation. The correct semiparametric model (sin x, cos x, 1) outperforms the nonparametric model by at least 30%, and has significantly smaller variance. The wrongly chosen semiparametric model (sin 2x, cos 2x, 1), on the other hand, gives performance comparable to the nonparametric one; in fact, no significant performance degradation was noticeable. The number in parentheses denotes the number of trials in which the corresponding model was the best among the three models. Semiparametric Support Vector and Linear Programming Machines 591

6 Discussion and Outlook

Similar models have been proposed and explored in the context of smoothing splines. In fact, expansion (7) is a direct result of the representer theorem, however only in the case of regularization in feature space (aka Reproducing Kernel Hilbert Space, RKHS). One can show [5] that the expansion (7) is optimal in the space spanned by the RKHS and the additional set of basis functions.
Moreover the semiparametric setting arises naturally in the context of conditionally positive definite kernels of order m (see [8]). There, in order to use a set of kernels which do not satisfy Mercer's condition, one has to exclude polynomials up to order m - 1. Hence one has to add polynomials back in 'manually', and our approach presents a way of doing that. Another application of semiparametric models, besides the conventional approach of treating the nonparametric part as nuisance parameters [1], is the domain of hypothesis testing, e.g. to test whether a parametric model fits the data sufficiently well. This can be achieved in the framework of structural risk minimization [10]: given the different models (nonparametric vs. semiparametric vs. parametric), one can evaluate the bounds on the expected risk and then choose the model with the lowest error bound. Future work will tackle the problem of computing good error bounds for compound hypothesis classes. Moreover it should be easily possible to apply the methods proposed in this paper to Gaussian processes.

Acknowledgements This work was supported in part by grants of the DFG Ja 379/51 and ESPRIT Project Nr. 25387-STORM. The authors thank Peter Bartlett, Klaus-Robert Müller, Noboru Murata, Takashi Onoda, and Bob Williamson for helpful discussions and comments.

References
[1] P.J. Bickel, C.A.J. Klaassen, Y. Ritov, and J.A. Wellner. Efficient and adaptive estimation for semiparametric models. J. Hopkins Press, Baltimore, MD, 1994.
[2] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In COLT'92, pages 144-152, Pittsburgh, PA, 1992.
[3] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. Technical Report 479, Department of Statistics, Stanford University, 1995.
[4] T.T. Frieß and R.F. Harrison. Perceptrons in kernel feature spaces. TR RR720, University of Sheffield, Sheffield, UK, 1998.
[5] G.S. Kimeldorf and G. Wahba.
A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Ann. Math. Statist., 2:495-502, 1971.
[6] C.A. Micchelli. Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11-22, 1986.
[7] A. J. Smola and B. Schölkopf. On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica, 22:211-231, 1998.
[8] A.J. Smola, B. Schölkopf, and K.-R. Müller. The connection between regularization operators and support vector kernels. Neural Networks, 11:637-649, 1998.
[9] R.J. Vanderbei. LOQO: An interior point code for quadratic programming. TR SOR-94-15, Statistics and Operations Research, Princeton Univ., NJ, 1994.
[10] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995.
Regularizing AdaBoost

Gunnar Rätsch, Takashi Onoda*, Klaus-R. Müller
GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany
{raetsch, onoda, klaus}@first.gmd.de

Abstract

Boosting methods maximize a hard classification margin and are known as powerful techniques that do not exhibit overfitting for low noise cases. Also for noisy data boosting will try to enforce a hard margin and thereby give too much weight to outliers, which then leads to the dilemma of non-smooth fits and overfitting. Therefore we propose three algorithms to allow for soft margin classification by introducing regularization with slack variables into the boosting concept: (1) AdaBoost_reg and regularized versions of (2) linear and (3) quadratic programming AdaBoost. Experiments show the usefulness of the proposed algorithms in comparison to another soft margin classifier: the support vector machine.

1 Introduction

Boosting and other ensemble methods have been used with success in several applications, e.g. OCR [13, 8]. For low noise cases several lines of explanation have been proposed as candidates for explaining the good performance of boosting methods. (a) Breiman proposed that during boosting also a "bagging effect" takes place [3], which reduces the variance and effectively limits the capacity of the system, and (b) Freund et al. [12] show that boosting classifies with large margins, since the error function of boosting can be written as a function of the margin and every boosting step tries to minimize this function by maximizing the margin [9, 11]. Recently, studies with noisy patterns have shown that boosting does indeed overfit on noisy data; this holds for boosted decision trees [10], RBF nets [11] and also other kinds of classifiers (e.g. [7]). So it is clearly a myth that boosting methods will not overfit.
The fact that boosting is trying to maximize the margin is exactly also the argument that can be used to understand why boosting must necessarily overfit for noisy patterns or overlapping distributions, and we give asymptotic arguments for this statement in section 3. Because the hard margin (smallest margin in the training set) plays a central role in causing overfitting, we propose to relax the hard margin classification and allow for misclassifications by using the soft margin classifier concept that has been applied to support vector machines successfully [5].

*Permanent address: Communication & Information Research Lab. CRIEPI, 2-11-1 Iwado kita, Komae-shi, Tokyo 201-8511, Japan.

Regularizing AdaBoost 565

Our view is that the margin concept is central for the understanding of both support vector machines and boosting methods. So far it is not clear what the optimal margin distribution should be that a learner has to achieve for optimal classification in the noisy case. For data without noise a hard margin might be the best choice. However, for noisy data there is always the trade-off between believing in the data and mistrusting it, as the very data point could be an outlier. In general (e.g. neural network) learning strategies this leads to the introduction of regularization, which reflects the prior that we have about a problem. We will also introduce a regularization strategy (analogous to weight decay) into boosting. This strategy uses slack variables to achieve a soft margin (section 4). Numerical experiments show the validity of our regularization approach in section 5, and finally a brief conclusion is given.

2 AdaBoost Algorithm

Let {h_t(x) : t = 1, ..., T} be an ensemble of T hypotheses defined on an input vector x, and c = [c_1 ... c_T] their weights satisfying c_t > 0 and |c| = Σ_t c_t = 1. In the binary classification case, the output is one of two class labels, i.e. h_t(x) = ±1.
The ensemble generates the label which is the weighted majority of the votes: sgn(Σ_t c_t h_t(x)). In order to train this ensemble of T hypotheses {h_t(x)} and c, several algorithms have been proposed: bagging, where the weighting is simply c_t = 1/T [2], and AdaBoost/Arcing, where the weighting scheme is more complicated [12]. In the following we give a brief description of AdaBoost/Arcing. We use a special form of Arcing, which is equivalent to AdaBoost [4]. In the binary classification case we define the margin for an input-output pair z_i = (x_i, y_i), i = 1, ..., l, by

mg(z_i, c) = y_i Σ_{t=1}^T c_t h_t(x_i),   (1)

which is between -1 and 1, if |c| = 1. The correct class is predicted, if the margin at z is positive. When the positivity of the margin value increases, the decision correctness becomes larger. AdaBoost maximizes the margin by (asymptotically) minimizing a function of the margin mg(z_i, c) [9, 11]

g(b) = Σ_{i=1}^l exp{ -|b| mg(z_i, c)/2 },   (2)

where b = [b_1 ... b_T] and |b| = Σ_t b_t (starting from b = 0). Note that b_t is the unnormalized weighting of the hypothesis h_t, whereas c is simply a normalized version of b, i.e. c = b/|b|. In order to find the hypothesis h_t, the learning examples z_i are weighted in each iteration t with w_t(z_i). Using a bootstrap on this weighted sample we train h_t; alternatively a weighted error function can be used (e.g. weighted MSE). The weights w_t(z_i) are computed according to[1]

w_t(z_i) = exp{ -|b_{t-1}| mg(z_i, c_{t-1})/2 } / Σ_{j=1}^l exp{ -|b_{t-1}| mg(z_j, c_{t-1})/2 },   (3)

and the training error ε_t of h_t is computed as ε_t = Σ_{i=1}^l w_t(z_i) I(y_i ≠ h_t(x_i)), where I(true) = 1 and I(false) = 0. For each given hypothesis h_t we have to find a weight b_t such that g(b) is minimized. One can optimize this parameter by a line search

[1] This direct way of computing the weights is equivalent to the update rule of AdaBoost.

566 G. Rätsch, T. Onoda and K.-R. Müller

or directly by analytic minimization [4], which gives b_t = log(1 - ε_t) - log ε_t.
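The loop just described, weighting the patterns by Eq. (3), fitting a weak learner on the weighted sample, and setting b_t = log(1 - ε_t) - log ε_t, can be sketched as follows. This is an illustrative reimplementation, not the authors' code: `fit_stump` is a hypothetical axis-aligned decision-stump learner, and a small floor on ε_t guards against a perfect hypothesis.

```python
import numpy as np

def fit_stump(X, y, w):
    """Hypothetical weak learner: weighted axis-aligned decision stump
    for labels in {-1, +1} (illustrative only)."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                pred = s * np.sign(X[:, j] - thr + 1e-12)
                err = w @ (pred != y)
                if best is None or err < best[0]:
                    best = (err, j, thr, s)
    _, j, thr, s = best
    return lambda Z, j=j, thr=thr, s=s: s * np.sign(Z[:, j] - thr + 1e-12)

def adaboost(X, y, weak_learner, T=50):
    """Arcing form of AdaBoost (Eqs. 1-3): pattern weights are a normalized
    exponential of minus half the unnormalized margin y_i * sum_t b_t h_t(x_i),
    and b_t = log(1 - eps_t) - log(eps_t) minimizes g analytically."""
    n = len(y)
    hyps, b = [], []
    F = np.zeros(n)                      # unnormalized ensemble output
    for _ in range(T):
        m = -y * F / 2.0                 # exponent of Eq. (3), up to normalization
        w = np.exp(m - m.max())          # numerically stabilized
        w /= w.sum()
        h = weak_learner(X, y, w)
        pred = h(X)
        eps = w @ (pred != y)            # weighted training error eps_t
        if eps >= 0.5:
            break
        eps = max(eps, 1e-10)            # guard against a perfect hypothesis
        bt = np.log(1.0 - eps) - np.log(eps)
        hyps.append(h)
        b.append(bt)
        F += bt * pred
    return hyps, np.asarray(b)

def predict(hyps, b, Z):
    """Weighted majority vote sgn(sum_t b_t h_t(x))."""
    return np.sign(sum(bt * h(Z) for h, bt in zip(hyps, b)))
```

The choice of weak learner is immaterial to the weighting scheme; any learner that accepts pattern weights can be substituted.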
Interestingly, we can write

w_t(z_i) = [∂g(b_{t-1}) / ∂mg(z_i, b_{t-1})] / Σ_{j=1}^l [∂g(b_{t-1}) / ∂mg(z_j, b_{t-1})],   (4)

i.e. as a gradient of g(b_{t-1}) with respect to the margins. The weighted minimization with w_t(z_i) will give a hypothesis h_t which is an approximation to the best possible hypothesis h_t* that would be obtained by minimizing g directly. Note that the weighted minimization (bootstrap, weighted LS) will not necessarily give h_t*, even if ε_t is minimized [11]. AdaBoost is therefore an approximate gradient descent method which minimizes g asymptotically.

3 Hard margins

A decrease of g(c, |b|) := g(b) is predominantly achieved by improvements of the margin mg(z_i, c). If the margin mg(z_i, c) is negative, then the error g(c, |b|) clearly takes a big value, which is additionally amplified by |b|. So, AdaBoost tries to decrease the negative margins efficiently to improve the error g(c, |b|). Now, let us consider the asymptotic case, where the number of iterations and therefore also |b| take large values [9]. In this case, when the values of all mg(z_i, c), i = 1, ..., l, are almost the same but have small differences, these differences are amplified strongly in g(c, |b|). Obviously the function g(c, |b|) is asymptotically very sensitive to small differences between margins. Therefore, the margins mg(z_i, c) of the training patterns from the margin area (boundary area between classes) should asymptotically converge to the same value. From Eq. (3), when |b| takes a very big value, AdaBoost learning becomes a "hard competition" case: only the patterns with the smallest margin get high weights; the other patterns are effectively neglected in the learning process. In order to confirm that the above reasoning is correct, Fig. 1 shows margin distributions after 10^4 AdaBoost iterations for a toy example [9] at different noise levels generated by a uniform distribution U(0.0, σ²) (left).
From this figure, it becomes apparent that the margin distribution asymptotically makes a step at a fixed size of the margin for training patterns which are in the margin area. In previous studies [9, 11] we observed that those patterns exhibit a large overlap with support vectors in support vector machines. The numerical results support our theoretical asymptotic analysis. The property of AdaBoost to produce a big margin area (no pattern in the area, i.e. a hard margin) will not always lead to the best generalization ability (cf. [5, 11]). This is especially true,

Figure 1: Margin distributions for AdaBoost (left) for different noise levels (σ² = 0% dotted, 9% dashed, 16% solid) with a fixed number of RBF centers for the base hypothesis, typical overfitting behaviour in the generalization error as a function of the number of iterations (middle), and a typical decision line (right) generated by AdaBoost using RBF networks in the case with noise (here: 30 centers and σ² = 16%; smoothed).

if the training patterns have classification or input noise. In our experiments with noisy data, we often observed that AdaBoost overfits (for a high number of boosting iterations). Fig. 1 (middle) shows typical overfitting behaviour in the generalization error for AdaBoost: after only 80 boosting iterations the best generalization performance is already achieved. Quinlan [10] and Grove et al. [7] also observed overfitting, and that the generalization performance of AdaBoost is often worse than that of the single classifier, if the data has classification noise. The first reason for overfitting is the increasing value of |b|: noisy patterns (e.g. badly labelled) can asymptotically have an "unlimited" influence on the decision line, leading to overfitting (cf. Eq. (3)).
Another reason is the classification with a hard margin, which also means that all training patterns will asymptotically be correctly classified (without any capacity limitation!). In the presence of noise this will certainly not be the right concept, because the best decision line (e.g. Bayes) usually will not give a training error of zero. So, the achievement of large hard margins for noisy data will produce hypotheses which are too complex for the problem.

4 How to get Soft Margins

Changing AdaBoost's error function: In order to avoid overfitting, we introduce slack variables, similar to those of the support vector algorithm [5, 14], into AdaBoost. We know that all training patterns will get non-negative stabilities after many iterations (see Fig. 1 (left)), i.e. mg(z_i, c) ≥ ρ for all i = 1, ..., l, where ρ is the minimum margin of the patterns. Due to this fact, AdaBoost often produces high weights for the difficult training patterns by enforcing a non-negative margin ρ ≥ 0 (for every pattern, including outliers), and this property will eventually lead to overfitting, as observed in Fig. 1. Therefore, we introduce some variables ξ_i^t - the slack variables - and get

mg(z_i, c) ≥ ρ - C ξ_i^t,   ξ_i^t ≥ 0.   (5)

In these inequalities the ξ_i^t are positive, and if a training pattern has had high weights in the previous iterations, ξ_i^t should be increasing. In this way, for example, we do not force outliers to be classified according to their possibly wrong labels, but we allow for some errors. In this sense we get a trade-off between the margin and the importance of a pattern in the training process (depending on the constant C ≥ 0). If we choose C = 0 in Eq. (5), the original AdaBoost algorithm is retrieved. If C is chosen too high, the data is not taken seriously. We adopt a prior on the weights w_r(z_i) that punishes large weights in analogy to weight decay and choose

ξ_i^t = ( Σ_{r=1}^t c_r w_r(z_i) )²,   (6)

where the inner sum is the cumulative weight of the pattern in the previous iterations (we call it the influence of a pattern - similar to Lagrange multipliers in SVMs). By this ξ_i^t, AdaBoost is not changed for easily classifiable patterns, but is changed for difficult patterns. From Eq. (5), we can derive a new error function:

g_reg(c_t, |b_t|) = Σ_{i=1}^l exp{ -|b_t| (mg(z_i, c_t) + C ξ_i^t)/2 }.   (7)

By this error function, we can control the trade-off between the weights which the pattern had in the last iterations and the achieved margin. The weight w_t(z_i) of a pattern is computed as the derivative of Eq. (7) with respect to mg(z_i, b_{t-1}) (cf. Eq. (4)) and is given by

w_t(z_i) = exp{ -|b_{t-1}| (mg(z_i, c_{t-1}) + C ξ_i^{t-1})/2 } / Σ_{j=1}^l exp{ -|b_{t-1}| (mg(z_j, c_{t-1}) + C ξ_j^{t-1})/2 }.   (8)

Table 1: Pseudocode description of the algorithms LP-AdaBoost(Z, T), LPreg-AdaBoost(Z, T, C) and QPreg-AdaBoost(Z, T, C). First run AdaBoost on dataset Z to get T hypotheses h_t and their weights c, and construct the loss matrix L_{i,t} = -1 if h_t(x_i) ≠ y_i, +1 otherwise. Then solve:

LP-AdaBoost(Z, T):
    minimize  -ρ
    s.t.  Σ_{t=1}^T c_t L_{i,t} ≥ ρ,   c_t ≥ 0,   Σ_t c_t = 1

LPreg-AdaBoost(Z, T, C):
    minimize  -ρ + C Σ_i ξ_i
    s.t.  Σ_{t=1}^T c_t L_{i,t} ≥ ρ - ξ_i,   c_t ≥ 0,   Σ_t c_t = 1,   ξ_i ≥ 0

QPreg-AdaBoost(Z, T, C):
    minimize  ||b||² + C Σ_i ξ_i
    s.t.  Σ_{t=1}^T b_t L_{i,t} ≥ 1 - ξ_i,   b_t ≥ 0,   ξ_i ≥ 0

Thus we can get an update rule for the weight of a training pattern [11]:

w_t(z_i) = w_{t-1}(z_i) exp{ b_{t-1} I(y_i ≠ h_{t-1}(x_i)) + C(ξ_i^{t-2} |b_{t-2}| - ξ_i^{t-1} |b_{t-1}|)/2 }.   (9)

It is more difficult to compute the weight b_t of the t-th hypothesis analytically. However, we can get b_t by a line search procedure over Eq. (7), which has a unique solution because ∂²g_reg/∂b_t² > 0 is satisfied. This line search can be implemented very efficiently. With this line search, we can now also use real-valued outputs of the base hypotheses, while the original AdaBoost algorithm could not (cf. also [6]).

Optimizing a given ensemble: In Grove et al.
[7], it was shown how to use linear programming to maximize the minimum margin for a given ensemble, and LP-AdaBoost was proposed (Table 1, left). This algorithm maximizes the minimum margin on the training patterns. It achieves a hard margin (as AdaBoost does asymptotically) for a small number of iterations. By the reasoning against a hard margin (section 3), this cannot generalize well. If we introduce slack variables into LP-AdaBoost, one gets the algorithm LPreg-AdaBoost (Table 1, middle) [11]. This modification allows that some patterns have lower margins than ρ (especially lower than 0). There is a trade-off: (a) make all margins bigger than ρ, and (b) maximize ρ. This trade-off is controlled by the constant C. Another formulation of an optimization problem can be derived from the support vector algorithm. The optimization objective of a SVM is to find a function h^w which minimizes a functional of the form E = ||w||² + C Σ_i ξ_i, where y_i h(x_i) ≥ 1 - ξ_i, and the norm of the parameter vector w is the measure for the complexity of the hypothesis h^w [14]. For ensemble learning we do not have such a measure of complexity, and so we use the norm of the hypotheses weight vector b. For |b| = 1 this is a small value if the elements are approximately equal (analogy to bagging), and it has high values when there are some strongly emphasized hypotheses (far away from bagging). Experimentally, we found that ||b||_2 is often larger for more complex hypotheses. Thus, we can apply the optimization principles of SVMs to AdaBoost and get the algorithm QPreg-AdaBoost (Table 1, right). We effectively use a linear SVM on top of the results of the base hypotheses.

5 Experiments

In order to evaluate the performance of our new algorithms, we make a comparison among the single RBF classifier, the original AdaBoost algorithm, AdaBoost_reg (with RBF nets), L/QPreg-AdaBoost and a Support Vector Machine (with RBF kernel).
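The LPreg-AdaBoost program of Table 1 (middle) maps directly onto a standard LP solver. The sketch below, an illustrative reconstruction using `scipy.optimize.linprog` rather than the authors' implementation, stacks the variables as x = (c, ρ, ξ) and encodes the soft-margin constraint Σ_t c_t L_{i,t} ≥ ρ - ξ_i:

```python
import numpy as np
from scipy.optimize import linprog

def lp_reg_adaboost(L, C):
    """Re-weight a fixed ensemble via the LPreg-AdaBoost program of Table 1.
    L is the l x T matrix with L[i, t] = +1 if hypothesis t classifies
    pattern i correctly and -1 otherwise, so sum_t c_t L[i, t] is the
    margin mg(z_i, c).  Note: C must be large enough, otherwise the LP
    is unbounded (the slack term no longer outweighs increasing rho)."""
    l, T = L.shape
    # objective: -rho + C * sum_i xi_i  over x = (c_1..c_T, rho, xi_1..xi_l)
    cost = np.concatenate([np.zeros(T), [-1.0], C * np.ones(l)])
    # soft-margin constraints rewritten as rho - xi_i - sum_t c_t L[i,t] <= 0
    A_ub = np.hstack([-L, np.ones((l, 1)), -np.eye(l)])
    b_ub = np.zeros(l)
    # normalization sum_t c_t = 1
    A_eq = np.concatenate([np.ones(T), [0.0], np.zeros(l)])[None, :]
    b_eq = [1.0]
    bounds = [(0, None)] * T + [(None, None)] + [(0, None)] * l
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:T], res.x[T]          # hypothesis weights c and margin rho
```

Setting C very large recovers LP-AdaBoost's hard margin; smaller C lets patterns fall below ρ at a linear cost.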
We use ten artificial and real world datasets from the UCI and DELVE benchmark repositories: banana (toy dataset as in [9, 11]), breast cancer, image segment, ringnorm, flare solar, splice, new-thyroid, titanic, twonorm, waveform. Some of the problems are originally not binary classification problems, hence a (random) partition into two classes was used. At first we generate 20 partitions into training and test set (mostly ~ 60% : 40%). On each partition we train the classifier and get its test set error. The performance is averaged and we get Table 2.

Table 2: Comparison among the six methods: single RBF classifier, AdaBoost (AB), AdaBoost_reg (AB_reg), L/QPreg-AdaBoost (LPR/QPR) and a Support Vector Machine (SVM): estimation of generalization error in % on 10 datasets (best method in bold face). Clearly, AdaBoost_reg gives the best overall performance. For further explanation see text.

           RBF        AB         AB_reg     LPR        QPR        SVM
Banana     10.9±0.5   12.3±0.7   10.1±0.5   10.8±0.4   10.9±0.5   11.5±4.7
Cancer     28.7±5.3   30.5±4.5   26.3±4.3   31.0±4.2   26.2±4.7   26.1±4.8
Image      2.8±0.7    2.5±0.7    2.5±0.7    2.6±0.6    2.4±0.5    2.9±0.7
Ringnorm   1.1±0.3    2.0±0.2    1.1±0.2    2.2±0.4    1.9±0.2    1.1±0.1
FSolar     34.6±2.1   35.6±1.9   33.6±1.7   35.7±4.5   36.2±1.7   32.5±1.1
Splice     10.0±0.3   10.1±0.3   9.5±0.2    10.2±1.6   10.1±0.5   10.9±0.7
Thyroid    4.8±2.4    4.4±1.9    4.4±2.1    4.4±2.0    4.4±2.2    4.8±2.2
Titanic    23.4±1.7   22.7±1.2   22.5±1.0   22.9±1.9   22.7±1.0   22.4±1.0
Twonorm    2.8±0.2    3.1±0.3    2.1±2.1    3.4±0.6    3.0±0.3    3.0±0.2
Waveform   10.7±1.0   10.8±0.4   9.9±0.9    10.6±1.0   10.1±0.5   9.8±0.3
Mean %     6.7        9.6        1.0        11.1       4.7        6.3
Winner %   16.4       8.2        28.5       15.0       15.3       16.6

We used RBF nets with adaptive centers (some conjugate gradient iterations to optimize positions and widths of the centers) as base hypotheses, as described in [1, 11]. In all experiments, we combined 200 hypotheses. Clearly, this number of hypotheses may not be optimal; however, AdaBoost with optimal early stopping is not better than AdaBoost_reg.
The parameter C of the regularized versions of AdaBoost and the parameters (C, σ) of the SVM are optimized on the first five training datasets. On each training set, 5-fold cross validation is used to find the best model for this dataset[2]. Finally, the model parameters are computed as the median of the five estimations. This way of estimating the parameters is surely not possible in practice, but it makes this comparison more robust and the results more reliable. The last but one line in Table 2 shows the line 'Mean %', which is computed as follows: for each dataset the average error rate of each classifier type is divided by the minimum error rate and 1 is subtracted. These resulting numbers are averaged over the 10 datasets. The last line shows the probabilities that a method wins, i.e. gives the smallest generalization error, on the basis of our experiments (averaged over all ten datasets). Our experiments on noisy data show that (a) the results of AdaBoost are in almost all cases worse than those of the single classifier (clear overfitting effect), and (b) the results of AdaBoost_reg are in all cases (much) better than those of AdaBoost and better than those of the single classifier. Furthermore, we see clearly that (c) the single classifier wins as often as the SVM, (d) L/QPreg-AdaBoost improves the results of AdaBoost, and (e) AdaBoost_reg wins most often. L/QPreg-AdaBoost improves the results of AdaBoost in almost all cases due to the established soft margin. But the results are not as good as the results of AdaBoost_reg and the SVM, because the hypotheses generated by AdaBoost (aimed at constructing a hard margin) may not be the appropriate ones to generate a good soft margin. We also observe that quadratic programming gives slightly better results than linear programming. This may be due to the fact that the hypothesis coefficients generated by LPreg-AdaBoost are more sparse (smaller ensemble).
Bigger ensembles may have a better generalization ability (due to the reduction of variance [3]). The worse performance of the SVM compared to AdaBoost_reg and the unexpected tie between SVM and RBF net may be explained by (a) the fixed σ of the RBF kernel (losing multi-scale information), (b) coarse model selection, and (c) the worse error function of the SV algorithm (noise model). Summarizing, AdaBoost is useful for low noise cases, where the classes are separable (as shown for OCR [13, 8]). AdaBoost_reg extends the applicability of boosting to "difficult separable" cases and should be applied if the data is noisy.

[2] The parameters are only near-optimal. Only 10 values for each parameter are tested.

6 Conclusion

We introduced three algorithms to alleviate the overfitting problems of boosting algorithms for high noise data: (1) direct incorporation of the regularization term into the error function (Eq. (7)), and use of (2) linear and (3) quadratic programming with constraints given by the slack variables.
So far we balance our trust in the data and the margin maximization by cross validation. Better would be, if we knew the "optimal" margin distribution that we could achieve for classifying noisy patterns, then we could of course balance the errors and the margin sizes optimally. In future works, we plan to establish more connections between AdaBoost and SVM. Acknowledgements: We thank for valuable discussions with A. Smola, B. Sch6lkopf, T. FrieB and D. Schuurmans. Partial funding from EC STORM project grant number 25387 is greatfully acknowledged. The breast cancer domain was obtained from the University Medical Centre, Inst. of Oncology, Ljubljana, Yugoslavia. Thanks go to M. Zwitter and M. Soklic for providing the data. References [1] C. M. Bishop. Neural Networks for Pattern Recognition. Clarendon, 1995. [2] L. Breiman. Bagging predictors. Machine Learning, 26(2):123- 140, 1996. [3] L. Breiman. Arcing classifiers. Tech.Rep.460, Berkeley Stat.Dept., 1997. [4] L. Breiman. Prediction games and arcing algorithms. Tech.Rep. 504, Berkeley Stat.Dept., 1997. [5] C. Cortes, V. Vapnik. Support vector network. Mach.Learn., 20:273-297,1995. [6] R. Schapire, Y. Singer. Improved Boosting Algorithms Using Confidence-rated Predictions. In Proc. of COLT'98. [7] A.J. Grove, D. Schuurmans. Boosting in the limit: Maximizing the margin of learned ensembles. In Proc. 15th Nat. Conf. on AI, 1998. To appear. [8] Y. LeCun et al. Learning algorithms for classification: A comparism on handwritten digit recognistion. Neural Networks, pages 261-276, 1995. [9] T. Onoda, G. Ratsch, and K.-R. Muller. An asymptotic analysis of adaboost in the binary classification case. In Proc. of ICANN'98, April 1998. [10] J. Quinlan. Boosting first-order learning. In Proc. of the 7th Internat. Workshop on Algorithmic Learning Theory, LNAI, 1160,143-155. Springer. [11] G. Ratsch. Soft Margins for AdaBoost. August 1998. Royal Holloway College, Technical Report NC-TR-1998-021. Submitted to Machine Learning. 
[12] R. Schapire, Y. Freund, P. Bartlett, and W. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Machine Learning, 148-156, 1998.
[13] H. Schwenk and Y. Bengio. Adaboosting neural networks: Application to on-line character recognition. In ICANN'97, LNCS 1327, 967-972, 1997. Springer.
[14] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
The Bias-Variance Tradeoff and the Randomized GACV

Grace Wahba, Xiwu Lin and Fangyu Gao
Dept of Statistics, Univ of Wisconsin, 1210 W Dayton Street, Madison, WI 53706
wahba,xiwu,fgao@stat.wisc.edu

Dong Xiang
SAS Institute, Inc., SAS Campus Drive, Cary, NC 27513
sasdxx@unx.sas.com

Ronald Klein, MD and Barbara Klein, MD
Dept of Ophthalmology, 610 North Walnut Street, Madison, WI 53706
kleinr,kleinb@epi.ophth.wisc.edu

Abstract

We propose a new in-sample cross validation based method (randomized GACV) for choosing smoothing or bandwidth parameters that govern the bias-variance or fit-complexity tradeoff in 'soft' classification. Soft classification refers to a learning procedure which estimates the probability that an example with a given attribute vector is in class 1 vs class 0. The target for optimizing the tradeoff is the Kullback-Leibler distance between the estimated probability distribution and the 'true' probability distribution, representing knowledge of an infinite population. The method uses a randomized estimate of the trace of a Hessian and mimics cross validation at the cost of a single relearning with perturbed outcome data.

1 INTRODUCTION

We propose and test a new in-sample cross-validation based method for optimizing the bias-variance tradeoff in 'soft classification' (Wahba et al. 1994), called ranGACV (randomized Generalized Approximate Cross Validation). Summarizing from Wahba et al. (1994), we are given a training set consisting of n examples, where for each example we have a vector t ∈ T of attribute values and an outcome y, which is either 0 or 1. Based on the training data it is desired to estimate the probability p of the outcome 1 for any new examples in the future. The Bias-Variance Tradeoff and the Randomized GACV 621
In 'soft' classification the estimate of p(t) is of particular interest, and might be used by a physician to tell patients how they might modify their risk p by changing (some component of) t, for example cholesterol as a risk factor for heart attack. Penalized likelihood estimates are obtained for p by assuming that the logit f(t), t ∈ T, which satisfies p(t) = e^{f(t)} / (1 + e^{f(t)}), is in some space H of functions. Technically H is a reproducing kernel Hilbert space, but you don't need to know what that is to read on. Let the training set be {y_i, t_i, i = 1, ..., n}. Letting f_i = f(t_i), the negative log likelihood L{y_i, t_i, f_i} of the observations, given f, is

L{y_i, t_i, f_i} = Σ_{i=1}^n [-y_i f_i + b(f_i)],   (1)

where b(f) = log(1 + e^f). The penalized likelihood estimate of the function f is the solution to: find f ∈ H to minimize I_λ(f):

I_λ(f) = Σ_{i=1}^n [-y_i f_i + b(f_i)] + J_λ(f),   (2)

where J_λ(f) is a quadratic penalty functional depending on parameter(s) λ = (λ_1, ..., λ_q) which govern the so-called bias-variance tradeoff. Equivalently, the components of λ control the tradeoff between the complexity of f and the fit to the training data. In this paper we sketch the derivation of the ranGACV method for choosing λ, and present some preliminary but favorable simulation results demonstrating its efficacy. This method is designed for use with penalized likelihood estimates, but it is clear that it can be used with a variety of other methods which contain bias-variance parameters to be chosen, and for which minimizing the Kullback-Leibler (KL) distance is the target. In the work of which this is a part, we are concerned with λ having multiple components. Thus, it will be highly convenient to have an in-sample method for selecting λ, if one that is accurate and computationally convenient can be found. Let p_λ be the estimate and p be the 'true' but unknown probability function, and let p_i = p(t_i), p_{λi} = p_λ(t_i).
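For concreteness, the pieces of Eq. (2) — the negative log likelihood (1) with b(f) = log(1 + e^f) and a quadratic penalty — can be sketched numerically as follows. This is illustrative code, not the paper's software; `Sigma` is a generic stand-in for a penalty matrix such that J_λ(f) = ½ f'Σf in the fitted values:

```python
import numpy as np

def neg_log_lik(f, y):
    """Eq. (1): sum_i [-y_i f_i + b(f_i)] with b(f) = log(1 + e^f),
    evaluated stably via logaddexp."""
    return float(np.sum(-y * f + np.logaddexp(0.0, f)))

def penalized_obj(f, y, Sigma):
    """Penalized likelihood of Eq. (2), with the penalty written as the
    quadratic form (1/2) f' Sigma f (Sigma is a hypothetical stand-in
    for the penalty matrix)."""
    return neg_log_lik(f, y) + 0.5 * float(f @ Sigma @ f)
```

At f = 0 every p_i is 1/2, so the negative log likelihood reduces to n log 2 regardless of y.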
For in-sample tuning, our criterion for a good choice of λ is the KL distance

KL(p, p_λ) = (1/n) Σ_{i=1}^n [ p_i log(p_i / p_{λi}) + (1 - p_i) log((1 - p_i)/(1 - p_{λi})) ].

We may replace KL(p, p_λ) by the comparative KL distance (CKL), which differs from KL by a quantity which does not depend on λ. Letting f_{λi} = f_λ(t_i), the CKL is given by

CKL(p, p_λ) ≡ CKL(λ) = (1/n) Σ_{i=1}^n [ -p_i f_{λi} + b(f_{λi}) ].   (3)

CKL(λ) depends on the unknown p, and it is desired to have a good estimate or proxy for it, which can then be minimized with respect to λ. It is known (Wong 1992) that no exact unbiased estimate of CKL(λ) exists in this case, so that only approximate methods are possible. A number of authors have tackled this problem, including Utans and Moody (1993), Liu (1993), and Gu (1992). The iterative UBR method of Gu (1992) is included in GRKPACK (Wang 1997), which implements general smoothing spline ANOVA penalized likelihood estimates with multiple smoothing parameters. It has been successfully used in a number of practical problems; see, for example, Wahba et al. (1994, 1995). The present work represents an approach in the spirit of GRKPACK but which employs several approximations, and may be used with any data set, no matter how large, provided that an algorithm for solving the penalized likelihood equations, either exactly or approximately, can be implemented. 622 G. Wahba et al.

2 THE GACV ESTIMATE

In the general penalized likelihood problem the minimizer f_λ(·) of (2) has a representation

f_λ(t) = Σ_{ν=1}^M d_ν φ_ν(t) + Σ_{i=1}^n c_i Q_λ(t_i, t),   (4)

where the φ_ν span the null space of J_λ, Q_λ(s, t) is a reproducing kernel (positive definite function) for the penalized part of H, and c = (c_1, ..., c_n)' satisfies M linear conditions, so that there are (at most) n free parameters in f_λ. Typically the unpenalized functions φ_ν are low degree polynomials. Examples of Q_λ(t_i, ·) include radial basis functions and various kinds of splines; minor modifications include sigmoidal basis functions, tree basis functions and so on.
See, for example, Wahba (1990, 1995) and Girosi, Jones and Poggio (1995). If f_λ(·) is of the form (4) then J_λ(f_λ) is a quadratic form in c. Substituting (4) into (2) results in I_λ a convex functional in c and d, and c and d are obtained numerically via a Newton-Raphson iteration, subject to the conditions on c. For large n, the second sum on the right of (4) may be replaced by Σ_{k=1}^K c_{i_k} Q_λ(t_{i_k}, t), where the t_{i_k} are chosen via one of several principled methods. To obtain the GACV we begin with the ordinary leaving-out-one cross validation function CV(λ) for the CKL:

CV(λ) = (1/n) Σ_{i=1}^n [-y_i f_{λi}^{[-i]} + b(f_{λi})],   (5)

where f^{[-i]} is the solution to the variational problem of (2) with the ith data point left out and f_{λi}^{[-i]} is the value of f^{[-i]} at t_i. Although f_λ(·) is computed by solving for c and d, the GACV is derived in terms of the values (f_1, ..., f_n)' of f at the t_i. Where there is no confusion between functions f(·) and vectors (f_1, ..., f_n)' of values of f at t_1, ..., t_n, we let f = (f_1, ..., f_n)'. For any f(·) of the form (4), J_λ(f) also has a representation as a non-negative definite quadratic form in (f_1, ..., f_n)'. Letting Σ_λ be twice the matrix of this quadratic form, we can rewrite (2) as

I_λ(f, y) = Σ_{i=1}^n [-y_i f_i + b(f_i)] + (1/2) f' Σ_λ f.   (6)

Let W = W(f) be the n × n diagonal matrix with σ_ii ≡ p_i(1 - p_i) in the iith position. Using the fact that σ_ii is the second derivative of b(f_i), we have that H = [W + Σ_λ]^{-1} is the inverse Hessian of the variational problem (6). In Xiang and Wahba (1996), several Taylor series approximations, along with a generalization of the leaving-out-one lemma (see Wahba 1990), are applied to (5) to obtain an approximate cross validation function ACV(λ), which is a second order approximation to CV(λ). Letting h_ii be the iith entry of H, the result is

CV(λ) ≈ ACV(λ) = (1/n) Σ_{i=1}^n [-y_i f_{λi} + b(f_{λi})] + (1/n) Σ_{i=1}^n h_ii y_i (y_i - p_{λi}) / [1 - h_ii σ_ii].   (7)

Then the GACV is obtained from the ACV by replacing h_ii by (1/n) Σ_{i=1}^n h_ii ≡ (1/n) tr(H) and replacing 1 - h_ii σ_ii by (1/n) tr[I - (W^{1/2} H W^{1/2})], giving

GACV(λ) = (1/n) Σ_{i=1}^n [-y_i f_{λi} + b(f_{λi})] + (tr(H)/n) Σ_{i=1}^n y_i(y_i - p_{λi}) / tr[I - (W^{1/2} H W^{1/2})],   (8)

where W is evaluated at f_λ. Numerical results based on an exact calculation of (8) appear in Xiang and Wahba (1996). The exact calculation is limited to small n, however.

The Bias-Variance Tradeoff and the Randomized GACV 623

3 THE RANDOMIZED GACV ESTIMATE

Given any 'black box' which, given λ and a training set {y_i, t_i}, produces f_λ(·) as the minimizer of (2), and thence f_λ = (f_{λ1}, ..., f_{λn})', we can produce randomized estimates of tr H and tr[I - W^{1/2} H W^{1/2}] without having any explicit calculations of these matrices. This is done by running the 'black box' on perturbed data {y_i + δ_i, t_i}. For the y_i Gaussian, randomized trace estimates of the Hessian of the variational problem (the 'influence matrix') have been studied extensively and shown to be essentially as good as exact calculations for large n; see for example Girard (1998). Randomized trace estimates are based on the fact that if A is any square matrix and δ is a zero mean random n-vector with independent components with variance σ_δ², then Eδ'Aδ = σ_δ² tr A. See Gong et al (1998) and references cited there for experimental results with multiple regularization parameters. Returning to the 0-1 data case, it is easy to see that the minimizer f_λ(·) of I_λ is continuous in y, notwithstanding the fact that in our training set the y_i take on only the values 0 or 1. Letting f_λ^y = (f_{λ1}, ..., f_{λn})' be the minimizer of (6) given y = (y_1, ..., y_n)', and f_λ^{y+δ} be the minimizer given data y + δ = (y_1 + δ_1, ..., y_n + δ_n)' (the t_i remain fixed), Xiang and Wahba (1997) show, again using Taylor series expansions, that f_λ^{y+δ} - f_λ^y ≈ [W(f_λ^y) + Σ_λ]^{-1} δ. This suggests that (1/σ_δ²) δ'(f_λ^{y+δ} - f_λ^y) provides an estimate of tr[W(f_λ^y) + Σ_λ]^{-1}.
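For small n, where the matrices can be formed explicitly, the exact GACV of (8) can be evaluated directly. A sketch under the assumption that the fitted values f_λ and the penalty matrix Σ_λ are available (the demo inputs at the bottom are illustrative, not a fitted model):

```python
import numpy as np

def gacv(y, f_lam, Sigma_lam):
    """Exact GACV of equation (8); feasible only for small n since it
    forms H = [W + Sigma_lambda]^{-1} explicitly."""
    n = len(y)
    p = 1.0 / (1.0 + np.exp(-f_lam))      # p_{lambda,i}
    W = np.diag(p * (1.0 - p))            # second derivatives of b(f_i)
    H = np.linalg.inv(W + Sigma_lam)      # inverse Hessian of (6)
    obs = np.mean(-y * f_lam + np.logaddexp(0.0, f_lam))
    num = (np.trace(H) / n) * np.sum(y * (y - p))
    den = n - np.trace(H @ W)             # tr[I - W^{1/2} H W^{1/2}] = n - tr(HW)
    return obs + num / den

rng = np.random.default_rng(0)
n = 10
y = rng.integers(0, 2, size=n).astype(float)
f_demo = rng.standard_normal(n)           # stand-in for fitted f_lambda
val = gacv(y, f_demo, np.eye(n))
```

The cyclic identity tr(W^{1/2} H W^{1/2}) = tr(HW) avoids forming matrix square roots; for positive definite Σ_λ the denominator is strictly positive.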
However, if we take the solution f_λ^y to the nonlinear system for the original data y as the initial value for a Newton-Raphson calculation of f_λ^{y+δ}, things become even simpler. Applying a one-step Newton-Raphson iteration gives

f_λ^{y+δ,1} = f_λ^y + [∂²I_λ/∂f² (f_λ^y, y)]^{-1} δ.   (9)

Since ∂I_λ/∂f (f_λ^y, y + δ) = -δ + ∂I_λ/∂f (f_λ^y, y) = -δ, and [∂²I_λ/∂f² (f_λ^y, y + δ)]^{-1} = [∂²I_λ/∂f² (f_λ^y, y)]^{-1}, we have f_λ^{y+δ,1} - f_λ^y = [W(f_λ^y) + Σ_λ]^{-1} δ. The result is the following ranGACV function:

ranGACV(λ) = (1/n) Σ_{i=1}^n [-y_i f_{λi} + b(f_{λi})] + δ'(f_λ^{y+δ,1} - f_λ^y) Σ_{i=1}^n y_i(y_i - p_{λi}) / (n [δ'δ - δ'W(f_λ^y)(f_λ^{y+δ,1} - f_λ^y)]).   (10)

To reduce the variance in the term after the '+' in (10), we may draw R independent replicate vectors δ_1, ..., δ_R, and replace the term after the '+' in (10) by

(1/R) Σ_{r=1}^R δ_r'(f_λ^{y+δ_r,1} - f_λ^y) Σ_{i=1}^n y_i(y_i - p_{λi}) / (n [δ_r'δ_r - δ_r'W(f_λ^y)(f_λ^{y+δ_r,1} - f_λ^y)])

to obtain an R-replicated ranGACV(λ) function.

4 NUMERICAL RESULTS

In this section we present simulation results which are representative of more extensive simulations to appear elsewhere. In each case, K << n was chosen by a sequential clustering algorithm. In that case, the t_i were grouped into K clusters and one member of each cluster selected at random. The model is fit. Then the number of clusters is doubled and the model is fit again. This procedure continues until the fit does not change. In the randomized trace estimates the random variates were Gaussian. Penalty functionals were (multivariate generalizations of) the cubic spline penalty functional λ ∫_0^1 (f''(x))² dx, and smoothing spline ANOVA models were fit.

624 G. Wahba et al.

4.1 EXPERIMENT 1. SINGLE SMOOTHING PARAMETER

In this experiment t ∈ [0,1], f(t) = 2 sin(10t), t_i = (i - .5)/500, i = 1, ..., 500. A random number generator produced 'observations' y_i = 1 with probability p_i = e^{f_i}/(1 + e^{f_i}), to get the training set. Q_λ is given in Wahba (1990) for this cubic spline case; K = 50. Since the true p is known, the true CKL can be computed. Fig.
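The randomized trace identity Eδ'Aδ = σ_δ² tr A underlying (9)-(10) is easy to check numerically. In the sketch below, A is an arbitrary matrix standing in for the inverse Hessian [W + Σ_λ]^{-1}; averaging δ'Aδ/σ_δ² over replicates approximates the exact trace:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T / n                   # arbitrary square matrix (SPD here)

sigma2 = 0.01                     # perturbation variance sigma_delta^2
R = 2000                          # number of replicate vectors delta_r
ests = np.empty(R)
for r in range(R):
    delta = rng.normal(0.0, np.sqrt(sigma2), size=n)  # zero mean, indep. components
    ests[r] = delta @ A @ delta / sigma2              # E[d'Ad]/sigma^2 = tr A
trace_estimate = ests.mean()
```

As the text notes, in the ranGACV setting the quadratic form δ'Aδ is never computed from A itself; it is obtained from two runs of the fitting 'black box', via δ'(f_λ^{y+δ,1} - f_λ^y).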
1(a) gives a plot of CKL(λ) and 10 replicates of ranGACV(λ). In each replicate R was taken as 1, and δ was generated anew as a Gaussian random vector with σ_δ² = .001. Extensive simulations with different σ_δ² showed that the results were insensitive to σ_δ² from 1.0 to 10^{-6}. The minimizer of CKL is at the filled-in circle and the 10 minimizers of the 10 replicates of ranGACV are the open circles. Any one of these 10 provides a rather good estimate of the λ that goes with the filled-in circle. Fig. 1(b) gives the same experiment, except that this time R = 5. It can be seen that the minimizers of ranGACV become even more reliable estimates of the minimizer of CKL, and the CKL at all of the ranGACV estimates is actually quite close to its minimum value.

4.2 EXPERIMENT 2. ADDITIVE MODEL WITH λ = (λ_1, λ_2)

Here t ∈ [0,1] × [0,1]. n = 500 values of t_i were generated randomly according to a uniform distribution on the unit square and the y_i were generated according to p_i = e^{f_i}/(1 + e^{f_i}) with t = (x_1, x_2) and f(t) = 5 sin 2πx_1 - 3 sin 2πx_2. An additive model, a special case of the smoothing spline ANOVA model (see Wahba et al, 1995), of the form f(t) = μ + f_1(x_1) + f_2(x_2), with cubic spline penalties on f_1 and f_2, was used. K = 50, σ_δ² = .001, R = 5. Figure 1(c) gives a plot of CKL(λ_1, λ_2) and Figure 1(d) gives a plot of ranGACV(λ_1, λ_2). The open circles mark the minimizer of ranGACV in both plots and the filled-in circle marks the minimizer of CKL. The inefficiency, as measured by CKL(λ̂)/min_λ CKL(λ), is 1.01. Inefficiencies near 1 are typical of our other similar simulations.

4.3 EXPERIMENT 3. COMPARISON OF ranGACV AND UBR

This experiment used a model similar to the model fit by GRKPACK for the risk of progression of diabetic retinopathy given t = (x_1, x_2, x_3) = (duration, glycosylated hemoglobin, body mass index) in Wahba et al (1995) as 'truth'.
A training set of 669 examples was generated according to that model, which had the structure f(x_1, x_2, x_3) = μ + f_1(x_1) + f_2(x_2) + f_3(x_3) + f_{1,3}(x_1, x_3). This (synthetic) training set was fit by GRKPACK and also using K = 50 basis functions with ranGACV. Here there are p = 6 smoothing parameters (there are 3 smoothing parameters in f_{13}) and the ranGACV function was searched by a downhill simplex method to find its minimizer. Since the 'truth' is known, the CKL for λ̂ and for the GRKPACK fit using the iterative UBR method were computed. This was repeated 100 times, and the 100 pairs of CKL values appear in Figure 1(e). It can be seen that UBR and ranGACV give similar CKL values about 90% of the time, while ranGACV has lower CKL for most of the remaining cases.

4.4 DATA ANALYSIS: AN APPLICATION

Figure 1(f) represents part of the results of a study of association at baseline of pigmentary abnormalities with various risk factors in 2585 women between the ages of 43 and 86 in the Beaver Dam Eye Study, R. Klein et al (1995). The attributes are: x_1 = age, x_2 = body mass index, x_3 = systolic blood pressure, x_4 = cholesterol. x_5 and x_6 are indicator variables for taking hormones and history of drinking. The smoothing spline ANOVA model fitted was f(t) = μ + d_1 x_1 + d_2 x_2 + f_3(x_3) + f_4(x_4) + f_{34}(x_3, x_4) + d_5 I(x_5) + d_6 I(x_6), where I is the indicator function. Figure 1(f) represents a cross section of the fit for x_5 = no, x_6 = no, x_2, x_3 fixed at their medians and x_1 fixed at the 75th percentile. The dotted lines are the Bayesian confidence intervals, see Wahba et al (1995). There is a suggestion of a borderline inverse association of cholesterol. The reason for this association is uncertain. More details will appear elsewhere. Principled soft classification procedures can now be implemented in much larger data sets than previously possible, and the ranGACV should be applicable in general learning.
References

Girard, D. (1998), 'Asymptotic comparison of (partial) cross-validation, GCV and randomized GCV in nonparametric regression', Ann. Statist. 26, 315-334.

Girosi, F., Jones, M. & Poggio, T. (1995), 'Regularization theory and neural networks architectures', Neural Computation 7, 219-269.

Gong, J., Wahba, G., Johnson, D. & Tribbia, J. (1998), 'Adaptive tuning of numerical weather prediction models: simultaneous estimation of weighting, smoothing and physical parameters', Monthly Weather Review 125, 210-231.

Gu, C. (1992), 'Penalized likelihood regression: a Bayesian analysis', Statistica Sinica 2, 255-264.

Klein, R., Klein, B. & Moss, S. (1995), 'Age-related eye disease and survival. The Beaver Dam Eye Study', Arch. Ophthalmol. 113, 1995.

Liu, Y. (1993), Unbiased estimate of generalization error and model selection in neural network, manuscript, Department of Physics, Institute of Brain and Neural Systems, Brown University.

Utans, J. & Moody, J. (1993), Selecting neural network architectures via the prediction risk: application to corporate bond rating prediction, in 'Proc. First Int'l Conf. on Artificial Intelligence Applications on Wall Street', IEEE Computer Society Press.

Wahba, G. (1990), Spline Models for Observational Data, SIAM. CBMS-NSF Regional Conference Series in Applied Mathematics, v. 59.

Wahba, G. (1995), Generalization and regularization in nonlinear learning systems, in M. Arbib, ed., 'Handbook of Brain Theory and Neural Networks', MIT Press, pp. 426-430.

Wahba, G., Wang, Y., Gu, C., Klein, R. & Klein, B. (1994), Structured machine learning for 'soft' classification with smoothing spline ANOVA and stacked tuning, testing and evaluation, in J. Cowan, G. Tesauro & J. Alspector, eds, 'Advances in Neural Information Processing Systems 6', Morgan Kaufmann, pp. 415-422.

Wahba, G., Wang, Y., Gu, C., Klein, R. & Klein, B.
(1995), 'Smoothing spline ANOVA for exponential families, with application to the Wisconsin Epidemiological Study of Diabetic Retinopathy', Ann. Statist. 23, 1865-1895.

Wang, Y. (1997), 'GRKPACK: Fitting smoothing spline analysis of variance models to data from exponential families', Commun. Statist. Sim. Comp. 26, 765-782.

Wong, W. (1992), Estimation of the loss of an estimate, Technical Report 356, Dept. of Statistics, University of Chicago, Chicago, IL.

Xiang, D. & Wahba, G. (1996), 'A generalized approximate cross validation for smoothing splines with non-Gaussian data', Statistica Sinica 6, 675-692. Preprint TR 930 available via www.stat.wisc.edu/~wahba -> TRLIST.

Xiang, D. & Wahba, G. (1997), Approximate smoothing spline methods for large data sets in the binary case, Technical Report 982, Department of Statistics, University of Wisconsin, Madison WI. To appear in the Proceedings of the 1997 ASA Joint Statistical Meetings, Biometrics Section, pp. 94-98 (1998). Also in TRLIST as above.

Figure 1: (a) and (b): Single smoothing parameter comparison of ranGACV and CKL. (c) and (d): Two smoothing parameter comparison of ranGACV and CKL. (e): Comparison of ranGACV and UBR. (f): Probability estimate from Beaver Dam Study.

Graph Matching for Shape Retrieval

Benoit Huet, Andrew D.J. Cross and Edwin R.
Hancock* Department of Computer Science, University of York, York, YO1 5DD, UK

Abstract

This paper describes a Bayesian graph matching algorithm for data-mining from large structural data-bases. The matching algorithm uses edge-consistency and node attribute similarity to determine the a posteriori probability of a query graph for each of the candidate matches in the data-base. The node feature-vectors are constructed by computing normalised histograms of pairwise geometric attributes. Attribute similarity is assessed by computing the Bhattacharyya distance between the histograms. Recognition is realised by selecting the candidate from the data-base which has the largest a posteriori probability. We illustrate the recognition technique on a data-base containing 2500 line patterns extracted from real-world imagery. Here the recognition technique is shown to significantly outperform a number of alternative algorithms.

1 Introduction

Since Barrow and Popplestone [1] first suggested that relational structures could be used to represent and interpret 2D scenes, there has been considerable interest in the machine vision literature in developing practical graph-matching algorithms [8, 3, 10]. The main computational issues are how to compare relational descriptions when there is significant structural corruption [8, 10] and how to search for the best match [3]. Despite resulting in significant improvements in the available methodology for graph-matching, there has been little progress in applying the resulting algorithms to large-scale object recognition problems. Most of the algorithms developed in the literature are evaluated for the relatively simple problem of matching a model-graph against a scene known to contain the relevant structure. A more realistic problem is that of taking a large number (maybe thousands) of scenes and retrieving the ones that best match the model.
Although this problem is key to data-mining from large libraries of visual information, it has invariably been approached using low-level feature comparison techniques. Very little effort [7, 4] has been devoted to matching higher-level structural primitives such as lines, curves or regions. (* corresponding author: erh@cs.york.ac.uk) Moreover, because of the perceived fragility of the graph matching process, there has been even less effort directed at attempting to retrieve shapes using relational information. Here we aim to fill this gap in the literature by using graph-matching as a means of retrieving the shape from a large data-base that most closely resembles a query shape. Although the indexation of images in large data-bases is a problem of current topicality in the computer vision literature [5, 6, 9], the work presented in this paper is more ambitious. Firstly, we adopt a structural abstraction of the shape recognition problem and match using attributed relational graphs. Each shape in our data-base is a pattern of line-segments. The structural abstraction is a nearest neighbour graph for the centre-points of the line-segments. In addition, we exploit attribute information for the line patterns. Here the geometric arrangement of the line-segments is encapsulated using a histogram of Euclidean invariant pairwise (binary) attributes. For each line-segment in turn we construct a normalised histogram of relative angle and length with the remaining line-segments in the pattern. These histograms capture the global geometric context of each line-segment. Moreover, we interpret the pairwise geometric histograms as measurement densities for the line-segments, which we compare using the Bhattacharyya distance. Once we have established the pattern representation, we realise object recognition using a Bayesian graph-matching algorithm. This is a two-step process.
Firstly, we establish correspondence matches between the individual tokens in the query pattern and each of the patterns in the data-base. The correspondence matches are sought so as to maximise the a posteriori measurement probability. Once the MAP correspondence matches have been established, the second step in our recognition architecture involves selecting the line-pattern from the data-base which has maximum matching probability.

2 MAP Framework

Formally, our recognition problem is posed as follows. Each ARG in the data-base is a triple G = (V_G, E_G, A_G), where V_G is the set of vertices (nodes), E_G is the edge set (E_G ⊆ V_G × V_G), and A_G is the set of node attributes. In our experimental example, the nodes represent line-structures segmented from 2D images. The edges are established by computing the N-nearest neighbour graph for the line-centres. Each node j ∈ V_G is characterised by a vector of attributes x_j, and hence A_G = {x_j ; j ∈ V_G}. In the work reported here the attribute-vector represents the contents of a normalised pairwise attribute histogram. The data-base of line-patterns is represented by the set of ARGs D = {G}. The goal is to retrieve from the data-base D the individual ARG that most closely resembles a query pattern Q = (V_Q, E_Q, A_Q). We pose the retrieval process as one of associating with the query the graph from the data-base that has the largest a posteriori probability. In other words, the class identity of the graph which most closely corresponds to the query is

ω_Q = arg max_{G' ∈ D} P(G'|Q)

However, since we wish to make a detailed structural comparison of the graphs, rather than comparing their overall statistical properties, we must first establish a set of best-match correspondences between each ARG in the data-base and the query Q. The set of correspondences between the query Q and the ARG G is a relation f_G : V_G → V_Q over the vertex sets of the two graphs.
The mapping function consists of a set of Cartesian pairings between the nodes of the two graphs, i.e. f_G = {(a, α); a ∈ V_G, α ∈ V_Q} ⊆ V_G × V_Q. Although this may appear to be a brute force method, it must be stressed that we view this process of correspondence matching as the final step in the filtering of the line-patterns. We provide more details of practical implementation in the experimental section of this paper. With the correspondences to hand we can re-state our maximum a posteriori probability recognition objective as a two-step process. For each graph G in turn, we locate the maximum a posteriori probability mapping function f_G onto the query Q. The second step is to perform recognition by selecting the graph whose mapping function results in the largest matching probability. These two steps are succinctly captured by the following statement of the recognition condition

ω_Q = arg max_{G' ∈ D} max_{f_{G'}} P(f_{G'}|G', Q)

This global MAP condition is developed into a useful local update formula by applying the Bayes formula to the a posteriori matching probability. The simplification is as follows

P(f_G|G, Q) = p(A_G, A_Q|f_G) P(f_G|V_G, E_G, V_Q, E_Q) P(V_G, E_G) P(V_Q, E_Q) / (P(G) P(Q))

The terms on the right-hand side of the Bayes formula convey the following meaning. The conditional measurement density p(A_G, A_Q|f_G) models the measurement similarity of the node-sets of the two graphs. The conditional probability P(f_G|E_G, E_Q) models the structural similarity of the two graphs under the current set of correspondence matches. The assumptions used in developing our simplification of the a posteriori matching probability are as follows. Firstly, we assume that the joint measurements are conditionally independent of the structure of the two graphs provided that the set of correspondences is known, i.e. p(A_G, A_Q|f_G, E_G, V_G, E_Q, V_Q) = p(A_G, A_Q|f_G).
Secondly, we assume that there is conditional independence of the two graphs in the absence of correspondences. In other words, P(V_G, E_G, V_Q, E_Q) = P(V_Q, E_Q) P(V_G, E_G) and P(G, Q) = P(G) P(Q). Finally, the graph priors P(V_G, E_G), P(V_Q, E_Q), P(G) and P(Q) are taken as uniform and are eliminated from the decision making process. To continue our development, we first focus on the conditional measurement density p(A_G, A_Q|f_G), which models the process of comparing attribute similarity on the nodes of the two graphs. Assuming statistical independence of node attributes, the conditional measurement density p(A_G, A_Q|f_G) can be factorised over the Cartesian pairs (a, α) ∈ V_G × V_Q which constitute the correspondence match f_G in the following manner

p(A_G, A_Q|f_G) = Π_{(a,α) ∈ f_G} p(x_a, x_α | f_G(a) = α)

As a result the correspondence matches may be optimised using a simple node-by-node discrete relaxation procedure. The rule for updating the match assigned to the node a of the graph G is

f_G(a) = arg max_{α ∈ V_Q ∪ {φ}} p(x_a, x_α | f(a) = α) P(f_G|E_G, E_Q)

In order to model the structural consistency of the set of assigned matches, we turn to the framework recently reported by Finch, Wilson and Hancock [2]. This work provides a framework for computing graph-matching energies using the weighted Hamming distance between matched cliques. Since we are dealing with a large-scale object recognition system, we would like to minimise the computational overheads associated with establishing correspondence matches. For this reason, rather than
The Sa,Ct are assignment variables which are used to represent the current state of match and convey the following meaning Sa Ct = {I if fa (a) = a , 0 otherwise 3 Histogram-based consistency We now furnish some details of the shape retrieval task used in our experimental evaluation of the recognition method. In particular, we focus on the problem of recognising 2D line patterns in a manner which is invariant to rotation, translation and scale. The raw information available for each line segment are its orientation (angle with respect to the horizontal axis) and its length (see figure 1). To illustrate how the Euclidean invariant pairwise feature attributes are computed, suppose that we denote the line segments associated with the nodes indexed a and b by the vectors Ya and Yb respectively. The vectors are directed away from their point of intersection. The pairwise relative angle attribute is given by (Ja ,b = arccos [I:: 1·1::1] From the relative angle we compute the directed relative angle. This involves giving d ~:.~~: b-! ---------c:----;-~:--~------. o..b ---------------. D;b Figure 1: Geometry for shape representation the relative angle a positive sign if the direction of the angle from the baseline Ya to its partner Yb is clockwise and a negative sign if it is counter-clockwise. This allows us to extend the range of angles describing pairs of segments from [0,7I"J to [-7I",7I"J. The directed relative position {}a,b is represented by the normalised length ratio between the oriented baseline vector Ya and the vector yl joining the end (b) of the baseline segment (ab) to the intersection of the segment pair (cd). 1 {}a,b = D l+~ 2 Dab 900 B. Huet, A. D. 1. Cross and E. R. Hancock The physical range of this attribute is (0, IJ. A relative position of 0 indicates that the two segments are parallel, while a relative position of 1 indicates that the two segments intersect at the middle point of the baseline. 
The Euclidean invariant angle and position attributes θ_{a,b} and ϑ_{a,b} are binned in a histogram. Suppose that S_a(μ, ν) = {(a, b) | θ_{a,b} ∈ A_μ ∧ ϑ_{a,b} ∈ R_ν ∧ b ∈ V_G} is the set of nodes whose pairwise geometric attributes with the node a are spanned by the range of directed relative angles A_μ and the relative position attribute range R_ν. The contents of the histogram bin spanning the two attribute ranges is given by H_a(μ, ν) = |S_a(μ, ν)|. Each histogram contains n_A relative angle bins and n_R length ratio bins. The normalised geometric histogram bin-entries are computed as follows

h_a(μ, ν) = H_a(μ, ν) / Σ_{μ'=1}^{n_A} Σ_{ν'=1}^{n_R} H_a(μ', ν')

The probability of match between the pattern-vectors is computed using the Bhattacharyya distance between the normalised histograms

P(f(a) = α | x_a, x_α) = Σ_{μ=1}^{n_A} Σ_{ν=1}^{n_R} √(h_a(μ, ν) h_α(μ, ν)) / Σ_{α' ∈ V_Q} Σ_{μ'=1}^{n_A} Σ_{ν'=1}^{n_R} √(h_a(μ', ν') h_{α'}(μ', ν')) = exp[-B_{a,α}]

With this modelling ingredient, the condition for recognition is

ω_Q = arg max_{G ∈ D} Σ_{(a,b) ∈ E_G} Σ_{(α,β) ∈ E_Q} {-B_{a,α} - B_{b,β} + ln(1 - P_e) s_{a,α} s_{b,β} + ln P_e (1 - s_{a,α} s_{b,β})}

4 Experiments

The aim in this section is to evaluate the graph-based recognition scheme on a data-base of real-world line-patterns. We have conducted our recognition experiments with a data-base of 2500 line-patterns, each containing over a hundred lines. The line-patterns have been obtained by applying line/edge detection algorithms to the raw grey-scale images, followed by polygonisation. For each line-pattern in the data-base, we construct the six-nearest neighbour graph. The feature extraction process, together with other details of the data used in our study, are described in recent papers where we have focussed on the issues of histogram representation [4] and the optimal choice of the relational structure for the purposes of recognition. In order to prune the set of line-patterns for detailed graph-matching we select about 10% of the data-base using a two-step process.
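The normalised 2D attribute histogram h_a(μ, ν) and the Bhattacharyya coefficient defining the match affinity exp(-B_{a,α}) can be sketched as follows. The bin counts and the randomly generated attribute samples here are illustrative, not the paper's settings:

```python
import numpy as np

def normalised_histogram(angles, positions, n_A=8, n_R=8):
    """2D pairwise-attribute histogram h_a(mu, nu): directed relative angles
    binned over [-pi, pi], relative positions over [0, 1], normalised to sum 1."""
    H, _, _ = np.histogram2d(angles, positions, bins=(n_A, n_R),
                             range=[[-np.pi, np.pi], [0.0, 1.0]])
    return H / H.sum()

def bhattacharyya_affinity(ha, hb):
    """Bhattacharyya coefficient sum_{mu,nu} sqrt(h_a h_b) between two
    normalised histograms; exp(-B) equals this coefficient when B is the
    Bhattacharyya distance, i.e. -log of the coefficient."""
    return np.sum(np.sqrt(ha * hb))

rng = np.random.default_rng(0)
ang = rng.uniform(-np.pi, np.pi, size=100)   # directed relative angles (toy)
pos = rng.uniform(0.0, 1.0, size=100)        # relative positions (toy)
h1 = normalised_histogram(ang, pos)
h2 = normalised_histogram(ang, pos)          # identical attribute set
```

The coefficient lies in [0, 1], reaching 1 only for identical histograms, so -log of it behaves as a distance that the recognition energy above can penalise.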
This consists of first refining the data-base using a global histogram of pairwise attributes [4]. The top quartile of matches selected in this way are then further refined using a variant of the Hausdorff distance to select the set of pairwise attributes that best match against the query. The recognition task is posed as one of recovering the line-pattern which most closely resembles a digital map. The original images from which our line-patterns have been obtained are from a number of diverse sources. However, a subset of the images are aerial infra-red line-scan views of southern England. Two of these infra-red images correspond to different views of the area covered by the digital map. These views are obtained when the line-scan device is flying at different altitudes. The line-scan device used to obtain the aerial images introduces severe barrel distortions, and hence the map and aerial images are not simply related via a Euclidean or affine transformation. The remaining line-patterns in the data-base have been extracted from trademarks and logos. It is important to stress that although the raw images are obtained from different sources, there is nothing salient about their associated line-pattern representations that allows us to distinguish them from one another.

Figure 2: Images from the data-base. (a) Digital Map, (b) Target 1, (c) Target 2.

Moreover, since it is derived from a digital map rather than one of the images in the data-base, the query is not identical to any of the line-patterns in the model library. We aim to assess the importance of different attribute representations on the retrieval process. To this end, we compare the node-based and the histogram-based attribute representations. We also consider the effect of taking the relative angle and relative position attributes both singly and in tandem.
The final aspect of the comparison is to consider the effects of using the attributes purely for initialisation purposes and also in a persistent way during the iteration of the matching process. To this end we consider the following variants of our algorithm.

• Non-Persistent Attributes: Here we ignore the attribute information provided by the node-histograms after the first iteration and attempt to maximise the structural congruence of the graphs.

• Local Attributes: Here we use only the single node attributes rather than an attribute histogram to model the a posteriori matching probabilities.

Table 1: Recognition performance of various recognition strategies averaged over 26 queries in a database of 260 line-patterns

Graph Matching Strategy                                    Retrieval Accuracy   Iterations per recall
Rel. Position Attribute (Initialisation only)                     39%                 5.2
Rel. Angle Attribute (Initialisation only)                        45%                 4.75
Rel. Angle + Position Attributes (Initialisation only)            58%                 4.27
1D Rel. Position Histogram (Initialisation only)                  42%                 4.7
1D Rel. Angle Histogram (Initialisation only)                     59%                 4.2
2D Histogram (Initialisation only)                                68%                 3.9
Rel. Position Attribute (Persistent)                              63%                 3.96
Rel. Angle Attribute (Persistent)                                 89%                 3.59
Rel. Angle + Position Attributes (Persistent)                     98%                 3.31
1D Rel. Position Histogram (Persistent)                           66%                 3.46
1D Rel. Angle Histogram (Persistent)                              92%                 3.23
2D Histogram (Persistent)                                        100%                 3.12

In Table 1 we present the recognition performance for each of the recognition strategies in turn. The table lists the recall performance together with the average number of iterations per recall for each of the recognition strategies in turn. The main features to note from this table are as follows. Firstly, the iterative recall using the full histogram representation outperforms each of the remaining recognition methods in terms of both accuracy and computational overheads.
Secondly, it is interesting to compare the effect of using the histogram in the initialisation-only and iteration-persistent modes. In the latter case the recall performance is some 32% better than in the former case. In the non-persistent mode the best recognition accuracy that can be obtained is 68%. Moreover, the recall is typically achieved in only 3.12 iterations as opposed to 3.9 (averaged over 26 queries on a database of 260 images). Finally, the histogram representation provides better performance and, more significantly, much faster recall than the single-attribute similarity measure. When the attributes are used singly, rather than in tandem, it is the relative angle that appears to be the most powerful.

5 Conclusions

We have presented a practical graph-matching algorithm for data-mining in large structural libraries. The main conclusion to be drawn from this study is that the combined use of structural and histogram information improves both recognition performance and recall speed. There are a number of ways in which the ideas presented in this paper can be extended. Firstly, we intend to explore a more perceptually meaningful representation of the line patterns, using grouping principles derived from Gestalt psychology. Secondly, we are exploring the possibility of formulating the filtering of line-patterns prior to graph matching using Bayes decision trees.

References

[1] H. Barrow and R. Popplestone. Relational descriptions in picture processing. Machine Intelligence, 5:377-396, 1971.

[2] A. Finch, R. Wilson, and E. Hancock. Softening discrete relaxation. Advances in NIPS 9, edited by M. Mozer, M. Jordan and T. Petsche, MIT Press, pages 438-444, 1997.

[3] S. Gold and A. Rangarajan. A graduated assignment algorithm for graph matching. IEEE PAMI, 18:377-388, 1996.

[4] B. Huet and E. Hancock. Relational histograms for shape indexing. IEEE ICCV, pages 563-569, 1998.

[5] W. Niblack et al.
The QBIC project: Querying images by content using color, texture and shape. Image and Vision Storage and Retrieval, 173- 187, 1993. [6] A. P. Pentland, R. W. Picard, and S. Scarloff. Photobook: tools for contentbased manipulation of image databases. Storage and Retrieval for Image and Video Database II, pages 34- 47, February 1994. [7] K. Sengupta and K. Boyer. Organising large structural databases. IEEE PAMI, 17(4):321- 332,1995. [8] 1. Shapiro and R. Haralick. A metric for comparing relational descriptions. IEEE PAMI, 7(1):90- 94, 1985. [9] M. Swain and D. Ballard. Color indexing. International Journal of Computer Vision, 7(1) :11- 32, 1991. [10] R. Wilson and E. R. Hancock. Structural matching by discrete relaxation. IEEE PAMI, 19(6):634- 648, June 1997.
1998
DTs: Dynamic Trees

Christopher K. I. Williams    Nicholas J. Adams
Institute for Adaptive and Neural Computation
Division of Informatics, 5 Forrest Hill
Edinburgh, EH1 2QL, UK. http://www.anc.ed.ac.uk/
ckiw@dai.ed.ac.uk    nicka@dai.ed.ac.uk

Abstract

In this paper we introduce a new class of image models, which we call dynamic trees or DTs. A dynamic tree model specifies a prior over a large number of trees, each one of which is a tree-structured belief net (TSBN). Experiments show that DTs are capable of generating images that are less blocky, and the models have better translation invariance properties than a fixed, "balanced" TSBN. We also show that Simulated Annealing is effective at finding trees which have high posterior probability.

1 Introduction

In this paper we introduce a new class of image models, which we call dynamic trees or DTs. A dynamic tree model specifies a prior over a large number of trees, each one of which is a tree-structured belief net (TSBN). Our aim is to retain the advantages of tree-structured belief networks, namely the hierarchical structure of the model and (in part) the efficient inference algorithms, while avoiding the "blocky" artifacts that derive from a single, fixed TSBN structure. One use for DTs is as prior models over labellings for image segmentation problems. Section 2 of the paper gives the theory of DTs, and experiments are described in section 3.

2 Theory

There are two essential components that make up a dynamic tree network: (i) the tree architecture and (ii) the nodes and conditional probability tables (CPTs) in the given tree. We consider the architecture question first.

Figure 1: (a) "Naked" nodes, (b) the "balanced" tree architecture, (c) a sample from the prior over Z, (d) data generated from the tree in (c).

Consider a number of nodes arranged into layers, as in Figure 1(a).
We wish to construct a tree structure so that any child node in a particular layer will be connected to a parent in the layer above. We also allow there to be a null parent for each layer, so that any child connected to it will become a new root. (Technically we are constructing a forest rather than a tree.) An example of a structure generated using this method is shown in Figure 1(c). There are a number of ways of specifying a prior over trees. If we denote by $z_i$ the indicator vector which shows to which parent node $i$ belongs, then the tree structure is specified by a matrix $Z$ whose columns are the individual $z_i$ vectors (one for each node). The scheme that we have investigated so far is to set $P(Z) = \prod_i P(z_i)$. In our work we have specified $P(z_i)$ as follows. Each child node is considered to have a "natural" parent, namely its parent in the balanced structure shown in Figure 1(b). Each node in the parent layer is assigned an "affinity" for each child node, and the "natural" parent has the highest affinity. Denote the affinity of node $k$ in the parent layer by $a_k$. Then we choose $P(z_i = e_k) = e^{\beta a_k} / \sum_{j \in \mathrm{Pa}_i} e^{\beta a_j}$, where $\beta$ is some positive constant and $e_k$ is the unit vector with a 1 in position $k$. Note that the "null" parent is included in the sum, and has affinity $a_{\mathrm{null}}$ associated with it, which affects the relative probability of "orphans". We have named this prior the "full-time-node-employment" prior as all the nodes participate in the creation of the tree structure to some degree. Having specified the prior over architectures, we now need to translate this into a TSBN. The units in the tree are taken to be $C$-class multinomial random variables. Each layer $l$ of the structure has associated with it a prior probability vector $\pi_l$ and CPT $M_l$.
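As a concrete illustration, this prior over $Z$ can be sampled directly. The sketch below is ours, not the authors' code: each child's parent is drawn from the softmax over affinities, with the default affinity settings taken from the 1-d experiments described later (1 for the natural parent, 0 for its nearest neighbours and for the null parent, $-\infty$ elsewhere).

```python
import numpy as np

def sample_tree_structure(layer_sizes, natural_parent, beta=1.25,
                          a_natural=1.0, a_neighbour=0.0, a_null=0.0,
                          rng=None):
    """Sample a forest structure Z from the 'full-time-node-employment'
    prior: each child independently picks a parent in the layer above
    (or the null parent) with probability proportional to
    exp(beta * affinity).

    natural_parent[l][i] is the index of child i's natural parent in
    layer l-1.  Returns Z[l][i] = chosen parent index, or -1 for a new
    root (null parent).  All names here are illustrative.
    """
    rng = rng or np.random.default_rng()
    Z = []
    for l in range(1, len(layer_sizes)):
        n_par, parents = layer_sizes[l - 1], []
        for i in range(layer_sizes[l]):
            a = np.full(n_par + 1, -np.inf)    # last slot = null parent
            k = natural_parent[l][i]
            a[k] = a_natural
            for nb in (k - 1, k + 1):          # nearest neighbours
                if 0 <= nb < n_par:
                    a[nb] = a_neighbour
            a[-1] = a_null
            p = np.exp(beta * a)
            p /= p.sum()
            choice = rng.choice(n_par + 1, p=p)
            parents.append(-1 if choice == n_par else int(choice))
        Z.append(parents)
    return Z
```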
Given a particular $Z$ matrix which specifies a forest structure, the probability of a particular instantiation of all of the random variables is simply the product of the probabilities of all of the trees, where the appropriate root probabilities and CPTs are picked up from the $\pi_l$'s and $M_l$'s. A sample generated from the tree structure in Figure 1(c) is shown in Figure 1(d). Our intuition as to why DTs may be useful image models is based on the idea that most pixels in an image are derived from a single object. We think of an object as being described by a root of a tree, with the scale of the object being determined by the level in the tree at which the root occurs. In this interpretation the CPTs will have most of their probability mass on the diagonal. Given some data at the bottom layer of units, we can form a posterior over the tree structures and node instantiations of the layers above. This is rather like obtaining a set of parses for a number of sentences using a context-free grammar¹. In the DT model as described above different examples are explained by different trees. This is an important difference with the usual priors over belief networks as used, e.g. in Bayesian averaging over model structures. Also, in the usual case of model averaging, there is normally no restriction to TSBN structures, or to tying the parameters ($\pi_l$'s and $M_l$'s) between different structures.

2.1 Inference in DTs

We now consider the problem of inference in DTs, i.e. obtaining the posterior $P(Z, X_h|X_v)$ where $Z$ denotes the tree structure, $X_v$ the visible units (the image clamped on the lowest level) and $X_h$ the hidden units. In fact, we shall concentrate on obtaining the posterior marginal $P(Z|X_v)$, as we can obtain samples from $P(X_h|X_v, Z)$ using standard techniques for TSBNs.
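Generating data from the TSBN defined by a sampled $Z$ is then plain ancestral sampling. A minimal sketch (names ours; the paper's per-layer $\pi_l$ and $M_l$ are collapsed to a single $\pi$ and $M$ for brevity):

```python
import numpy as np

def sample_states(parent, pi, M, rng=None):
    """Ancestral sampling of node states given a forest structure.

    parent[i] is node i's parent index (or -1 for a root), with nodes
    ordered so that every parent precedes its children; pi is the root
    prior over the C classes and M is the C x C CPT (rows indexed by
    the parent's state).  Illustrative sketch only.
    """
    rng = rng or np.random.default_rng()
    C = len(pi)
    x = np.empty(len(parent), dtype=int)
    for i, pa in enumerate(parent):
        probs = pi if pa < 0 else M[x[pa]]   # root prior or CPT row
        x[i] = rng.choice(C, p=probs)
    return x
```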
There are a very large number of possible structures; in fact for a set of nodes created from a balanced tree with branching factor $b$ and depth $D$ (with the top level indexed by 1) there are $\prod_{d=2}^{D} (b^{d-2} + 1)^{b^{d-1}}$ possible forest structures. Our objective will be to obtain the maximum a posteriori (MAP) state from the posterior $P(Z|X_v) \propto P(Z) P(X_v|Z)$ using Simulated Annealing². This is possible because the two components $P(Z)$ and $P(X_v|Z)$ are readily evaluated. $P(X_v|Z)$ can be computed from $\prod_r \left( \sum_{x_r} \lambda(x_r) \pi(x_r) \right)$, where $\lambda(x_r)$ and $\pi(x_r)$ are the Pearl-style vectors of each root $r$ of the forest. An alternative to sampling from the posterior $P(Z, X_h|X_v)$ is to use approximate inference. One possibility is to use a mean-field-type approximation to the posterior of the form $Q_Z(Z) Q_h(X_h)$ (Zoubin Ghahramani, personal communication, 1998).

2.2 Comparing DTs to other image models

Fixed-structure TSBNs have been used by a number of authors as models of images (Bouman and Shapiro, 1994), (Luettgen and Willsky, 1995). They have an attractive multi-scale structure, but suffer from problems due to the fixed tree structure, which can lead to very "blocky" segmentations. Markov Random Field (MRF) models are also popular image models; however, one of their main limitations is that inference in a MRF is NP-hard. Also, they lack an hierarchical structure. On the other hand, stationarity of the process they define can be easily ensured, which is not the case for fixed-structure TSBNs.

¹CFGs have an $O(n^3)$ algorithm to infer the MAP parse; however, this algorithm depends crucially on the one-dimensional ordering of the inputs. We believe that the possibility of crossed links in the DT architecture means that this kind of algorithm is not applicable to the DT case. Also, the DT model can be applied to 2-d images, where the $O(n^3)$ algorithm is not applicable.
²It is also possible to sample from the posterior using, e.g. Gibbs Sampling.
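The count of possible forest structures is a direct product over layers, since each of the $b^{d-1}$ nodes at level $d$ independently chooses among the $b^{d-2}$ candidate parents plus the null parent; transcribed (function name ours):

```python
def count_forests(b, D):
    """Number of forest structures over the nodes of a balanced tree
    with branching factor b and depth D (top level indexed by 1):
    prod_{d=2}^{D} (b**(d-2) + 1) ** (b**(d-1)).
    """
    n = 1
    for d in range(2, D + 1):
        n *= (b ** (d - 2) + 1) ** (b ** (d - 1))
    return n
```

For the binary tree of depth 5 used in section 3.1 this gives roughly $2.3 \times 10^{23}$ structures, which is why the MAP tree is sought by stochastic search rather than enumeration.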
One strategy to overcome the fixed structure of TSBNs is to break away from the tree structure, and use belief networks with cross connections, e.g. (Dayan et al., 1995). However, this means losing the linear-time belief-propagation algorithms that can be used in trees (Pearl, 1988) and using approximate algorithms. While it is true that inference over DTs is also NP-hard, we do retain a "clean" semantics based on the fact that we expect that each pixel should belong to one object, which may lead to useful approximation schemes.

3 Experiments

In this section we describe two experiments conducted on the DT models. The first has been designed to compare the translation performance of DTs with that of the balanced TSBN structure and is described in section 3.1. In section 3.2 we generate 2-d images from the DT model, find the MAP Dynamic Tree for these images, and contrast their performance relative to the balanced TSBN.

3.1 Comparing DTs with the balanced TSBN

We consider a 5-layer binary tree with 16 leaf nodes, as shown in Figure 1. Each node in the tree is a binary variable, taking on values of white/black. The $\pi_l$'s, $M_l$'s and affinities were set to be equal in each layer. The values used were $\pi = (0.75, 0.25)$ with 0.75 referring to white, and $M$ had values 0.99 on the diagonal and 0.01 off-diagonal. The affinities³ were set as 1 for the natural parent, 0 for the nearest neighbour(s) of the natural parent, $-\infty$ for non-nearest neighbours and $a_{\mathrm{null}} = 0$, with $\beta = 1.25$.

Figure 2: Plots of the unnormalised log posterior vs position of the input pattern for (a) the 5-black-nodes pattern and (b) the 4-black-nodes pattern.

To illustrate the effects of translation, we have taken a stimulus made up of a bar of five black pixels, and moved it across the image.
The unnormalised log posterior for a particular $Z$ configuration is $\log P(Z) + \log P(X_v|Z)$. This is computed for the balanced TSBN architecture, and compared to the highest value that can be found by conducting a search over $Z$. These results are plotted in Figure 2(a). The x-axis denotes the position of the left hand end of the bar (running from 1 to 12), and the y-axis shows the posterior probability. Note that due to symmetries there are in reality fewer than 12 distinct configurations. Figure 2(a) shows clearly that the balanced TSBN is a poor model for this stimulus, and that much better interpretations can be found using DTs, even though the "natural parent" idea ensures that $\log P(Z)$ is always larger for the balanced tree. Notice also how the balanced TSBN displays greater sensitivity of the log posterior with respect to position than the DT model. Figure 2 shows both the "optimal" log posterior (found "by hand", using intuitions as to the best trees), and those of the MAP models discovered by Simulated Annealing. Annealing was conducted from a starting temperature of 1.0, which was exponentially decreased by a factor of 0.9. At each temperature up to 2000 proposals could be made, although transition to the next temperature would occur after 200 accepted steps. The run was deemed to have converged after five successive temperature steps were made without accepting a single step. We also show the log posterior of trees found by Gibbs sampling, from which we report the best configuration found from four separate runs (with different random starting positions), each of which was run for 25,000 sweeps through all of the nodes. In Figure 2(b) we have shown the log posterior for a stimulus made up of four black nodes⁴.

³The affinities are defined up to the addition of an arbitrary constant.
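The annealing schedule just described can be written generically; the following sketch is ours (the log posterior and the proposal over $Z$ are problem-specific, so they are passed in as callables, and the state is assumed immutable):

```python
import math
import random

def anneal(z0, log_post, propose, t0=1.0, cool=0.9,
           max_props=2000, max_accepts=200, stop_after=5):
    """Simulated annealing with the exponential schedule of the paper:
    up to max_props proposals per temperature, advance to the next
    temperature after max_accepts acceptances, and stop once stop_after
    successive temperatures pass without a single accepted step.
    Returns the best state found and its log posterior.
    """
    z, lp, T, idle = z0, log_post(z0), t0, 0
    best, best_lp = z, lp
    while idle < stop_after:
        accepts = 0
        for _ in range(max_props):
            z_new = propose(z)
            lp_new = log_post(z_new)
            # Metropolis acceptance at the current temperature
            if lp_new >= lp or random.random() < math.exp((lp_new - lp) / T):
                z, lp, accepts = z_new, lp_new, accepts + 1
                if lp > best_lp:
                    best, best_lp = z, lp
                if accepts >= max_accepts:
                    break
        idle = idle + 1 if accepts == 0 else 0
        T *= cool
    return best, best_lp
```

The same schedule (start at $T = 1.0$, cool by 0.9, at most 2000 proposals or 200 acceptances per temperature, stop after five idle temperatures) is reused for the 2-d experiments later in the paper.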
In this case the balanced TSBN is even more sensitive to the stimulus location, as the four black nodes fit exactly under one sub-tree when they are in positions 1, 5, 9 or 13. By contrast, the dynamic tree is less sensitive to the alignment, although it does retain a preference for the configuration most favoured by the balanced TSBN. This is due to the concept of a "natural" parent built into the (current) architecture (but see Section 4 for further discussion). Clearly these results are somewhat sensitive to settings of the parameters. One of the most important parameters is the diagonal entry in the CPT. This controls the relative desirability of having a disconnection against a transition in the tree that involves a colour change. For example, if the diagonal entry in the CPT is reduced to 0.95, the gap between the optimal and balanced trees in Figure 2(b) is decreased. We have experimented with CPT entries of 0.90, 0.95 and 0.99, but otherwise have not needed to explore the parameter space to obtain the results shown.

3.2 Generating from the prior and finding the MAP Tree in 2-d

We now turn our attention to 2-d images. Considering a 5-layer quad-tree node arrangement gives a total of 256 leaf nodes or a 16x16 pixel image. A structural plot of such a tree generated from the prior is shown in Figure 3. Each sub-plot is a slice through the tree showing the nodes on successive levels. The boxes each represent a single node on the current level and their shading indicates the tree to which they belong. Nodes in the parent layer above are superimposed as circles and the lines emanating from them show their connectivity. Black circles with a smaller white circle inside are used to indicate root nodes. Thus in the example above we see that the forest consists of five trees, four of whose roots lie at level 3 (which between them account for most of the black in the image, Figure 3(f)), while the root node at level 1 is responsible for the background.
⁴The parameters are the same as above, except that $a_{\mathrm{null}}$ in level 3 was set to 10.0 to encourage disconnections at this level.

Figure 3: Plot of the MAP Dynamic Tree of the accompanying image (f).

Broadly speaking the parameters for the 2-d DTs were set to be similar to the 1-d trees of the previous section, except that the disconnection affinities were set to favour disconnections higher up the tree, and to values for the leaf level such that leaf disconnection probabilities tend to zero. In practice this resulted in all leaves being connected to parent nodes (which is desirable as we believe that single-pixel objects are unlikely). The $\beta$ values increase with tree depth so that lower level nodes choose parents from a tighter neighbourhood. The $\pi_l$ and $M_l$ values were unchanged, and again we consider binary valued nodes. A suite of 600 images were created by sampling DTs from the above prior and then generating 5 images from each. Figure 3(f) shows an example of an image generated by the DT, and it can be seen that the "blockiness" exhibited by balanced TSBNs is not present.

Figure 4: (a) Comparison of the MAP DT log posterior against that of the quad-tree for 600 images, (b) tree generated from the "part-time-node-employment" prior.

The MAP Dynamic Tree for each of these images was found by Simulated Annealing using the same exponential strategy described earlier, and their log posteriors are compared with those of the balanced TSBN in the plot in Figure 4(a). The line denotes the boundary of equal log posterior, and the location of all the points above this clearly shows that in every case the MAP tree found has a higher posterior.

4 Discussion

Above we have demonstrated that DT models have greater translation invariance and do not exhibit the blockiness of the balanced TSBN model.
We also see that Simulated Annealing methods are successful at finding trees that have high posterior probability. We now discuss some extensions to the model. In the work above we have kept the balanced tree arrangement of nodes. However, this could be relaxed, giving rise to roughly equal numbers of nodes at the various levels (cf. stationary wavelets). This would be useful (a) for providing better translation invariance and (b) to avoid slight shortages of hidden units that can occur when patterns that are "misaligned" wrt the balanced tree are presented. In this case the prior over $Z$ would need to be adjusted to ensure a high proportion of tree-like structures, by generating the $z$'s and $x$'s in layers, so that the $z$'s can be contingent on the states of the units in the layer above. We have devised a prior of this nature and called it the "part-time-employment" prior, as the nodes can decide whether or not they wish to be employed in the tree structure or remain redundant and inactive. An example tree generated from this prior is shown in Figure 4(b); we plan to explore this direction further in on-going research. Other research directions include the learning of parameters in the networks (e.g. using EM), and the introduction of additional information at the nodes; for example one might use real-valued variables in addition to the multinomial variables considered above. These additional variables might be used to encode information such as that concerning the instantiation parameters of objects.

Acknowledgements

This work stems from a conversation between CW and Zoubin Ghahramani at the Isaac Newton Institute in October 1997. We thank Zoubin Ghahramani, Geoff Hinton and Peter Dayan for helpful conversations, and the Isaac Newton Institute for Mathematical Sciences (Cambridge, UK) for hospitality during the "Neural Networks and Machine Learning" programme.
NJA is supported by an EPSRC research studentship, and the work of CW is partially supported by EPSRC grant GR/L03088, Combining Spatially Distributed Predictions From Neural Networks.

References

Bouman, C. A. and M. Shapiro (1994). A Multiscale Random Field Model for Bayesian Image Segmentation. IEEE Transactions on Image Processing 3(2), 162-177.
Dayan, P., G. E. Hinton, R. M. Neal, and R. S. Zemel (1995). The Helmholtz Machine. Neural Computation 7(5), 889-904.
Luettgen, M. R. and A. S. Willsky (1995). Likelihood Calculation for a Class of Multiscale Stochastic Models, with Application to Texture Discrimination. IEEE Trans. Image Processing 4(2), 194-207.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann.
Adding Constrained Discontinuities to Gaussian Process Models of Wind Fields

Dan Cornford*    Ian T. Nabney    Christopher K. I. Williams†
Neural Computing Research Group
Aston University, Birmingham, B4 7ET, UK
d.cornford@aston.ac.uk

Abstract

Gaussian Processes provide good prior models for spatial data, but can be too smooth. In many physical situations there are discontinuities along bounding surfaces, for example fronts in near-surface wind fields. We describe a modelling method for such a constrained discontinuity and demonstrate how to infer the model parameters in wind fields with MCMC sampling.

1 INTRODUCTION

We introduce a model for wind fields based on Gaussian Processes (GPs) with 'constrained discontinuities'. GPs provide a flexible framework for modelling various systems. They have been adopted in the neural network community and are interpreted as placing priors over functions. Stationary vector-valued GP models (Daley, 1991) can produce realistic wind fields when run as a generative model; however, the resulting wind fields do not contain some features typical of the atmosphere. The most difficult features to include are surface fronts. Fronts are generated by complex atmospheric dynamics and are marked by large changes in the surface wind direction (see for example Figures 2a and 3b) and temperature. In order to account for such features, which appear discontinuous at our observation scale, we have developed a model for vector-valued GPs with constrained discontinuities which could also be applied to surface reconstruction in computer vision, and geostatistics. In section 2 we illustrate the generative model for wind fields with fronts. Section 3 explains what we mean by GPs with constrained discontinuities and derives the likelihood of data under the model. Results of Bayesian estimation of the model parameters are given,

*To whom correspondence should be addressed.
†Now at: Division of Informatics, University of Edinburgh, 5 Forrest Hill, Edinburgh EH1 2QL, Scotland, UK

using a Markov Chain Monte Carlo (MCMC) procedure. In the final section, the strengths and weaknesses of the model are discussed and improvements suggested.

2 A GENERATIVE WIND FIELD MODEL

We are primarily interested in retrieving wind fields from satellite scatterometer observations of the ocean surface¹. A probabilistic prior model for wind fields will be used in a Bayesian procedure to resolve ambiguities in local predictions of wind direction. The generative model for a wind field including a front is taken to be a combination of two vector-valued GPs with a constrained discontinuity. A common method for representing wind fields is to put GP priors over the velocity potential $\Phi$ and stream function $\Psi$, assuming the processes are uncorrelated (Daley, 1991). The horizontal wind vector $\mathbf{u} = (u, v)$ can then be derived from:

$$u = -\frac{\partial \Psi}{\partial y} + \frac{\partial \Phi}{\partial x}, \qquad v = \frac{\partial \Psi}{\partial x} + \frac{\partial \Phi}{\partial y}. \qquad (1)$$

This produces good prior models for wind fields when a suitable choice of covariance function for $\Phi$ and $\Psi$ is made. We have investigated using a modified Bessel function based covariance² (Handcock and Wallis, 1994) but found, using three years of wind data for the North Atlantic, that the maximum a posteriori value for the smoothness parameter³ in this covariance function was $\approx 2.5$. Thus we used the correlation function:

$$\rho(r) = \left(1 + \frac{r}{L} + \frac{r^2}{3L^2}\right) \exp\left(-\frac{r}{L}\right) \qquad (2)$$

where $L$ is the correlation length scale, which is equivalent to the modified Bessel function and less computationally demanding (Cornford, 1998).

Figure 1: (a) Flowchart describing the generative frontal model (simulate frontal position, orientation and direction; simulate along both sides of the front using GP₁; simulate wind fields either side of the front conditionally on that side's frontal winds using GP₂). See text for full description. (b) A description of the frontal model.
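For concreteness, the correlation function (2), and the covariance matrix it induces over a set of 2-d locations, can be sketched as follows (function names, the variance scaling and the jitter term are our additions; (2) is the Matern correlation of smoothness 5/2 under a rescaled length):

```python
import numpy as np

def correlation(r, L):
    """Correlation function of equation (2):
    rho(r) = (1 + r/L + r^2/(3 L^2)) * exp(-r/L).
    """
    r = np.asarray(r, dtype=float)
    return (1.0 + r / L + r**2 / (3.0 * L**2)) * np.exp(-r / L)

def covariance_matrix(points, L, variance=1.0, jitter=1e-10):
    """Covariance over 2-d locations (n x 2 array) under the stationary,
    isotropic correlation above; the small jitter keeps the matrix
    numerically positive definite.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return variance * correlation(d, L) + jitter * np.eye(len(points))
```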
The generative model has the form outlined in Figure 1a. Initially the frontal position and orientation are simulated. They are defined by the angle clockwise from north ($\phi_f$) that the front makes and a point on the line $(x_f, y_f)$. Having defined the position of the front, the angle of the wind across the front ($\alpha_f$) is simulated from a distribution covering the range $[0, \pi)$. This angle is related to the vertical component of vorticity ($\zeta$) across the front through $\zeta = \mathbf{k} \cdot \nabla \times \mathbf{u} \propto \cos(\alpha_f/2)$, and the constraint $\alpha_f \in [0, \pi)$ ensures cyclonic vorticity at the front. It is assumed that the front bisects $\alpha_f$. The wind speed ($s_f$) is then simulated at the front. Since there is generally little change in wind speed across the front, one value is simulated for both sides of the front. These components $\theta_f = (\phi_f, x_f, y_f, \alpha_f, s_f)$ define the line of the front and the mean wind vectors just ahead of and just behind the front (Figure 1b). A realistic model requires some variability in wind vectors along the front. Thus we use a GP with a non-zero mean ($m_{1a}$ or $m_{1b}$) along the line of the front. In the real atmosphere we observe a smaller variability in the wind vectors along the line of the front compared with regions away from fronts.

¹See http://www.ncrg.aston.ac.uk/Projects/NEUROSAT/NEUROSAT.html for details of the scatterometer work. Technical reports describing, in more detail, methods for generating prior wind field models can also be accessed from the same page.
²The modified Bessel function allows us to control the differentiability of the sample realisations through the 'smoothness parameter', as well as the length scales and variances.
³This varies with season, but is the most temporally stable parameter in the covariance function.
Thus we use different GP parameters along the front (GP₁) from those used in the wind field away from the front (GP₂), although the same GP₁ parameters are used on both sides of the front, just with different means. The winds just ahead of and behind the front are assumed conditionally independent given $m_{1a}$ and $m_{1b}$, and are simulated at a regular 50 km spacing. The final step in the generative model is to simulate wind vectors using GP₂ in both regions either side of the front, conditionally on the values along that side of the front. This model is flexible enough to represent fronts, yet has the required constraints derived from meteorological principles, for example that fronts should always be associated with cyclonic vorticity and that discontinuities at the model scale should be in wind direction but not in wind speed⁴. To make this generative model useful for inference, we need to be able to compute the data likelihood, which is the subject of the next section.

3 GPs WITH CONSTRAINED DISCONTINUITIES

Figure 2: (a) The discontinuity in one of the vector components in a simulation. (b) Framework for GPs with boundary conditions. The curve $D_1$ has $n_1$ sample points with values $Z_1$. The domain $D_2$ has $n_2$ points with values $Z_2$.

⁴The model allows small discontinuities in wind speed, which are consistent with frontal dynamics.

We consider data from two domains $D_1$ and $D_2$ (Figure 2b), where in this case $D_1$ is a curve in the plane which is intended to be the front and $D_2$ is a region of the plane. We obtain $n_1$ variables $Z_1$ at points $X_1$ along the curve, and we assume these are generated under GP₁ (a GP which depends on parameters $\theta_1$ and has mean $m_1$ (= $m_{1a}$ or $m_{1b}$) which will be determined by (3) or (4)).
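The marginalisation of $Z_1$ performed in the remainder of this section is standard Gaussian algebra, and the resulting likelihood can equivalently be computed by propagating the boundary mean and covariance instead of expanding the closed form term by term: with $A = K_{12|2}' K_{11|2}^{-1}$, $Z_2$ is Gaussian with mean $A m_1$ and covariance $S_{22} + A K_{11|1} A'$. A numpy sketch under that reading (all names ours; K11_1 is the $D_1$ covariance under $\theta_1$, and K11_2, K12_2, K22_2 are the blocks of the joint covariance under $\theta_2$):

```python
import numpy as np

def _gauss_logpdf(x, mean, cov):
    """Log density of N(mean, cov), evaluated via a Cholesky factor."""
    d = x - mean
    L = np.linalg.cholesky(cov)
    a = np.linalg.solve(L, d)
    return (-0.5 * (a @ a) - np.log(np.diag(L)).sum()
            - 0.5 * len(x) * np.log(2.0 * np.pi))

def boundary_conditioned_loglik(z2, m1, K11_1, K11_2, K12_2, K22_2):
    """Log likelihood of z2 with the boundary values Z1 marginalised
    out analytically: Z2 | Z1 ~ N(A Z1, S22) and Z1 ~ N(m1, K11_1),
    so Z2 ~ N(A m1, S22 + A K11_1 A').  Illustrative sketch only.
    """
    A = K12_2.T @ np.linalg.inv(K11_2)   # regression of D2 on D1
    S22 = K22_2 - A @ K12_2              # conditional covariance
    return _gauss_logpdf(z2, A @ m1, S22 + A @ K11_1 @ A.T)
```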
We are interested in determining the likelihood of the variables $Z_2$ observed at $n_2$ points $X_2$ under GP₂, which depends on parameters $\theta_2$, conditioned on the 'constrained discontinuities' at the front. We evaluate this by calculating the likelihood of $Z_2$ conditioned on the $n_1$ values of $Z_1$ from GP₁ along the front and marginalising out $Z_1$:

$$p(Z_2|\theta_2, \theta_1) = \int_{-\infty}^{\infty} p(Z_2|Z_1, \theta_2, \theta_1, m_1)\, p(Z_1|\theta_1, m_1)\, \mathrm{d}Z_1. \qquad (5)$$

From the definition of the likelihood of a GP (Cressie, 1993) we find:

$$p(Z_2|Z_1, \theta_2, \theta_1, m_1) = \frac{1}{(2\pi)^{n_2/2} |S_{22}|^{1/2}} \exp\left(-\tfrac{1}{2} Z_2^{*\prime} S_{22}^{-1} Z_2^{*}\right) \qquad (6)$$

where $S_{22} = K_{22|2} - K_{12|2}' K_{11|2}^{-1} K_{12|2}$ and $Z_2^{*} = Z_2 - K_{12|2}' K_{11|2}^{-1} Z_1$. To understand the notation consider the joint distribution of $Z_1, Z_2$ and in particular its covariance matrix:

$$K = \begin{pmatrix} K_{11|2} & K_{12|2} \\ K_{21|2} & K_{22|2} \end{pmatrix} \qquad (7)$$

where $K_{11|2}$ is the $n_1 \times n_1$ covariance matrix between the points in $D_1$ evaluated using $\theta_2$, $K_{12|2} = K_{21|2}'$ the $n_1 \times n_2$ (cross) covariance matrix between the points in $D_1$ and $D_2$ evaluated using $\theta_2$, and $K_{22|2}$ is the usual $n_2 \times n_2$ covariance for points in $D_2$. Thus we can see that $S_{22}$ is the $n_2 \times n_2$ modified covariance for the points in $D_2$ given the points along $D_1$, while $Z_2^{*}$ is the corrected mean that accounts for the values at the points in $D_1$, which have non-zero mean. We remove the dependency on the values $Z_1$ by evaluating the integral in (5). $p(Z_1|\theta_1, m_1)$ is given by:

$$p(Z_1|\theta_1, m_1) = \frac{1}{(2\pi)^{n_1/2} |K_{11|1}|^{1/2}} \exp\left(-\tfrac{1}{2}(Z_1 - m_1)' K_{11|1}^{-1} (Z_1 - m_1)\right) \qquad (8)$$

where $K_{11|1}$ is the $n_1 \times n_1$ covariance matrix between the points in $D_1$ evaluated under the covariance given by $\theta_1$. Completing the square in $Z_1$ in the exponent, the integral (5) can be evaluated to give:

$$p(Z_2|\theta_2, \theta_1, m_1) = \frac{1}{(2\pi)^{n_2/2}} \frac{1}{|S_{22}|^{1/2} |K_{11|1}|^{1/2} |B|^{1/2}} \exp\left(\tfrac{1}{2}\left(C' B^{-1} C - Z_2' S_{22}^{-1} Z_2 - m_1' K_{11|1}^{-1} m_1\right)\right) \qquad (9)$$

where

$$B = (K_{12|2}' K_{11|2}^{-1})' S_{22}^{-1} K_{12|2}' K_{11|2}^{-1} + K_{11|1}^{-1}, \qquad C' = Z_2' S_{22}^{-1} K_{12|2}' K_{11|2}^{-1} + m_1' K_{11|1}^{-1}.$$

The algorithm has been coded in MATLAB and can deal with reasonably large numbers of points quickly. For a two-dimensional vector-valued GP with $n_1 = 12$ and $n_2 = 200$⁵ and

⁵This is equivalent to $n_1 = 24$ and $n_2 = 400$ for a scalar GP.
a covariance function given by (2), computation of the log likelihood takes 4.13 seconds on an SGI Indy R5000. The mean values just ahead of and behind the front define the mean values for the constrained discontinuity (i.e. $m_1$ in (9)). Conditional on the frontal parameters, the wind fields either side (Figure 3a) are assumed independent:

$$p(Z_{2a}, Z_{2b}|\theta_2, \theta_1, \theta_f) = p(Z_{2a}|\theta_2, \theta_1, m_{1a})\, p(m_{1a}|\theta_f) \times p(Z_{2b}|\theta_2, \theta_1, m_{1b})\, p(m_{1b}|\theta_f)$$

where we have performed the integration (5) to remove the dependency on $Z_{1a}$ and $Z_{1b}$. Thus the likelihood of the data $Z_2 = (Z_{2a}, Z_{2b})$ given the model parameters $\theta_2, \theta_1, \theta_f$ is simply the product of the likelihoods of two GPs with a constrained discontinuity, which can be computed using (9).
We will also use (10) to help set (hyper)priors using real data in Zoo MCMC using the Metropolis algorithm (Neal, 1993) is used to sample from (to) using the NETLAB6 library. Convergence of the Markov chain is currently assessed using visual inspection of the univariate sample paths since the generating parameters are known, although other diagnostics could be used (Cowles and Carlin, 1996). We find that the procedure is insensitive to the initial value of the GP parameters, but that the parameters describing the location ofthe front (1/>" d,) need to be initialised 'close' to the correct values if the chain is to converge on a reasonable time-scale. In the application some preliminary analysis of the wind field would be necessary to identify possible fronts and thus set the initial parameters to 'sensible' values. We intend to fit a vector-valued GP without any discontinuities 6Available from http://www.ncrg.aston.ac . uk/netlab/index. html. 866 D. Comjord, I. T. Nabney and C. K. 1. Williams 2 3 4 ' 5 2 3 4 Sample nurrber • In' Sample number w 104 (a) (b) Figure 4: Examples from the Markov chain of the posterior distribution (10). (a) The energy = negative log posterior probability. Note that the energy when the chain was initialised was 2789 and the first 27 values are outside the range of the y-axis. (b) The angle of the front relative to north (¢> I)' and then measure the 'strain' or misfit of the locally predicted winds with the winds fitted by the GP. Lines of large 'strain' will be used to initialise the front parameters. 3000 1000 2 3 500 ~ ~~~-an1.5~uw~2ww~2.~5~~3L-~3. 5 sample number Angle of wind (radians) (a) (b) Figure 5: Examples from the Markov chain of the posterior distribution (10). (a) The angle of the wind across the front (01 ). (b) Histogram of the posterior distribution of 01 allowing a 10000 iteration bum-in period. Examples of samples from the Markov chain from the simulated wind field shown in Figure 3a can be seen in Figures 4 and 5. 
Figure 4a shows that the energy level (= negative log posterior probability) falls very rapidly to near its minimum value from its large starting value of 2789. In these plots the true parameters for the front were φ_f = 0.555, δ_f = 2.125, while the initial values were set at φ_f = 0.89, δ_f = 1.49. Other parameters were also incorrectly set. The Metropolis algorithm seems to be able to find the minimum and then stays in it. Figures 4b and 5a show the Markov chains for φ_f and δ_f. Both converge quickly to apparently stationary distributions which have mean values very close to the 'true' generating parameters. The histogram of the distribution of δ_f is shown in Figure 5b.

4 DISCUSSION AND CONCLUSIONS

Simulations from our model are meteorologically plausible wind fields which contain fronts. It is possible that similar models could usefully be applied to other modelling problems where there are discontinuities with known properties. A method for the computation of the likelihood of data given two GP models, one with non-zero mean on the boundary and another in the domain in which the data is observed, has been given. This allows us to perform inference on the parameters in the frontal model using a Bayesian approach of sampling from the posterior distribution with an MCMC algorithm. There are several weaknesses in the model, specifically for fronts, which could be improved with further work. Real atmospheric fronts are not straight, so the model would be improved by allowing 'curved' fronts. We could represent the position of the front, oriented along the angle defined by φ_f, using either another smooth GP, B-splines or possibly polynomials. Currently the points along the line of the front are simulated at the mean observation spacing in the rest of the wind field (∼ 50 km).
Interesting questions remain about the (in-fill) asymptotics (Cressie, 1993) as the distance between the points along the front tends to zero. Empirical evidence suggests that as long as the spacing along the front is 'much less' than the length scale of the GP along the front (which is typically ∼ 1000 km), then the spacing does not significantly affect the results. Although we currently use a Metropolis algorithm for sampling from the Markov chain, the derivative of (9) with respect to the GP parameters θ_1 and θ_2 could be computed analytically and used in a hybrid Monte Carlo procedure (Neal, 1993). These improvements should lead to a relatively robust procedure for putting priors over wind fields, which will be used with real data when retrieving wind vectors from scatterometer observations over the ocean.

Acknowledgements

This work was partially supported by the European Union funded NEUROSAT programme (grant number ENV 4 CT96-0314) and also EPSRC grant GR/L03088, Combining Spatially Distributed Predictions from Neural Networks.

References

Cornford, D. 1998. Flexible Gaussian Process Wind Field Models. Technical Report NCRG/98/017, Neural Computing Research Group, Aston University, Aston Triangle, Birmingham, UK.

Cowles, M. K. and B. P. Carlin 1996. Markov-Chain Monte-Carlo Convergence Diagnostics: A Comparative Review. Journal of the American Statistical Association 91, 883-904.

Cressie, N. A. C. 1993. Statistics for Spatial Data. New York: John Wiley and Sons.

Daley, R. 1991. Atmospheric Data Analysis. Cambridge: Cambridge University Press.

Handcock, M. S. and J. R. Wallis 1994. An Approach to Statistical Spatio-Temporal Modelling of Meteorological Fields. Journal of the American Statistical Association 89, 368-378.

Neal, R. M. 1993. Probabilistic Inference Using Markov Chain Monte Carlo Methods. Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto. URL: http://www.cs.utoronto.ca/ ... radford.
|
1998
|
69
|
1,569
|
Call-based Fraud Detection in Mobile Communication Networks using a Hierarchical Regime-Switching Model

Jaakko Hollmen, Helsinki University of Technology, Lab. of Computer and Information Science, P.O. Box 5400, 02015 HUT, Finland. Jaakko.Hollmen@hut.fi

Volker Tresp, Siemens AG, Corporate Technology, Dept. Information and Communications, 81730 Munich, Germany. Volker.Tresp@mchp.siemens.de

Abstract

Fraud causes substantial losses to telecommunication carriers. Detection systems which automatically detect illegal use of the network can be used to alleviate the problem. Previous approaches worked on features derived from the call patterns of individual users. In this paper we present a call-based detection system based on a hierarchical regime-switching model. The detection problem is formulated as an inference problem on the regime probabilities. Inference is implemented by applying the junction tree algorithm to the underlying graphical model. The dynamics are learned from data using the EM algorithm and subsequent discriminative training. The methods are assessed using fraud data from a real mobile communication network.

1 INTRODUCTION

Fraud is costly to a network carrier both in terms of lost income and wasted capacity. It has been estimated that the telecommunication industry loses approximately 2-5% of its total revenue to fraud. The true losses are expected to be even higher, since telecommunication companies are reluctant to admit fraud in their systems. A fraudulent attack causes great inconvenience to the victimized subscriber, which might motivate the subscriber to switch to a competing carrier. Furthermore, potential new customers would be very reluctant to switch to a carrier which is troubled with fraud. Mobile communication networks, which are the focus of this work, are particularly appealing to fraudsters, as calling from a mobile terminal is not bound to a physical place and a subscription is easy to get.
This provides the means for an illegal high-profit business requiring minimal investment and relatively low risk of getting caught. Fraud is usually initiated by a mobile phone theft, by cloning the mobile phone card or by acquiring a subscription with false identification. After intrusion the subscription can be used for gaining free services either for the intruder himself or for his illegal customers in the form of call-selling. In the latter case, the fraudster sells calls to customers for reduced rates. The earliest means of detecting fraud was to register overlapping calls originating from one subscription, evidencing card cloning. While this procedure efficiently detects cloning, it misses a large share of other fraud cases. A more advanced system is a velocity trap, which detects card cloning by using an upper speed limit at which a mobile phone user can travel: subsequent calls from distant places provide evidence for card cloning. Although a velocity trap is a powerful method of detecting card cloning, it is ineffective against other types of fraud. Therefore there is great interest in detection systems which detect fraud based on an analysis of behavioral patterns (Barson et al., 1996, Burge et al., 1997, Fawcett and Provost, 1997, Taniguchi et al., 1998). In an absolute analysis, a user is classified as a fraudster based on features derived from daily statistics summarizing the call pattern, such as the average number of calls. In a differential analysis, the detection is based on measures describing the changes in those features, capturing the transition from normal use to fraud. Both approaches have the problem of finding efficient feature representations describing normal and fraudulent behavior. As they usually derive features as summary statistics over one day, they are plagued with a latency time of up to a day to detect fraudulent behavior.
The resulting delay in detection can already lead to unacceptable losses and can be exploited by the fraudster. For these reasons real-time fraud detection is seen as the most important development in fraud detection (Pequeno, 1997). In this paper we present a real-time fraud detection system which is based on a stochastic generative model. In the generative model we assume a variable victimized, which indicates if the account has been victimized by a fraudster, and a second variable fraud, which indicates if the fraudster is currently performing fraud. Both variables are hidden. Furthermore, we have an observed variable call, which indicates whether a call is being performed or not. The transition probabilities from no-call to call and from call to no-call depend on the state of the variable fraud. Overall, we obtain a regime-switching time-series model as described by Hamilton (1994), with the modifications that, first, the variables in the time series are not continuous but binary and, second, the switching variable has a hierarchical structure. The benefit of the hierarchical structure is that it allows us to model the time series at different time scales. At the lowest hierarchical level we model the dynamical behavior of the individual calls, at the next level the transition from normal behavior to fraudulent behavior, and at the highest level the transition to being victimized. The ability to model a time series at different temporal resolutions was also the reason for introducing a hierarchy into a hidden Markov model by Jordan, Ghahramani and Saul (1997). Fortunately, our hidden variables have only a small number of states, such that we do not have to work with the approximation techniques those authors have introduced. Section 2 introduces our hierarchical regime-switching fraud model. The detection problem is formulated as an inference problem on the regime probabilities based on subscriber data.
We derive iterative algorithms for estimating the hidden variables fraud and victimized based on past and present data (filtering) or based on the complete set of observed data (smoothing). We present EM learning rules for learning the parameters in the model from observed data. We develop a gradient-based approach for fine-tuning the emission probabilities in the non-fraud state to enhance the discrimination capability of the model. In Section 3 we present experimental results. We show that a system which is fine-tuned on real data can be used for detecting fraudulent behavior on-line based on the call patterns. In Section 4 we present conclusions and discuss further applications and extensions of our fraud model.

2 THE HIERARCHICAL REGIME-SWITCHING FRAUD MODEL

2.1 THE GENERATIVE MODEL

The hierarchical regime-switching model consists of three variables which evolve in time stochastically according to first-order Markov chains. The first binary variable v_t (victimized) is equal to one if the account is currently being victimized by a fraudster and zero otherwise. The states of this variable evolve according to the state transition probabilities p^v_{ij} = P(v_t = i | v_{t-1} = j); i, j = 0, 1. The second binary variable s_t (fraud) is equal to one if the fraudster currently performs fraud and is equal to zero if the fraudster is inactive. The alternation between actively performing fraud and intermittent silence is typical for a victimized account, as is apparent from Figure 3. Note that this transient bursty behavior of a victimized account would be difficult to capture with a pure feature-based approach. The states of this variable evolve following the state transition probabilities p^s_{ijk} = P(s_t = i | v_t = j, s_{t-1} = k); i, j, k = 0, 1. Finally, the binary variable y_t (call) is equal to one if the mobile phone is being used and zero otherwise, with state transition probabilities p^y_{ijk} = P(y_t = i | s_t = j, y_{t-1} = k); i, j, k = 0, 1.
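As an illustration, the generative process defined by these three transition families can be sketched as below. This is a minimal sketch, not the authors' software; the array layouts (first index = new state) and all parameter values in the usage are placeholder assumptions.

```python
import numpy as np

def sample_chain(T, pv, ps, py, rng=None):
    """Sample (v, s, y) paths from the hierarchical regime-switching model.

    pv[i, j]    = P(v_t = i | v_{t-1} = j)        (victimized chain)
    ps[i, j, k] = P(s_t = i | v_t = j, s_{t-1} = k)  (fraud regime chain)
    py[i, j, k] = P(y_t = i | s_t = j, y_{t-1} = k)  (call/no-call chain)
    """
    rng = np.random.default_rng(rng)
    v = s = y = 0  # start non-victimized, non-fraud, no call
    vs, ss, ys = [], [], []
    for _ in range(T):
        v = int(rng.random() < pv[1, v])      # victimization transition
        s = int(rng.random() < ps[1, v, s])   # fraud regime transition
        y = int(rng.random() < py[1, s, y])   # call transition
        vs.append(v); ss.append(s); ys.append(y)
    return np.array(vs), np.array(ss), np.array(ys)
```

Note that s_t is sampled conditional on the freshly drawn v_t, matching the dependency structure P(s_t | v_t, s_{t-1}).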
Note that this corresponds to the assumption of exponentially distributed call durations. Although not quite realistic, this is the general assumption in telecommunications. Typically, both the frequency of calls and the lengths of the calls increase when fraud is executed. The joint probability of the time series up to time T is then

P(V_T, S_T, Y_T) = P(v_0, s_0, y_0) ∏_{t=1}^{T} P(v_t | v_{t-1}) ∏_{t=1}^{T} P(s_t | v_t, s_{t-1}) ∏_{t=1}^{T} P(y_t | s_t, y_{t-1}),   (1)

where in the experiments we used a sampling time of one minute. Furthermore, V_T = {v_0, ..., v_T}, S_T = {s_0, ..., s_T}, Y_T = {y_0, ..., y_T}, and P(v_0, s_0, y_0) is the prior distribution of the initial states.

Figure 1: Dependency graph of the hierarchical regime-switching fraud model. The square boxes denote hidden variables and the circles observed variables. The hidden variable v_t on the top describes whether the subscriber account is victimized by fraud. The hidden variable s_t indicates if fraud is currently being executed. The state of s_t determines the statistics of the call variable y_t.

2.2 INFERENCE: FILTERING AND SMOOTHING

When using the fraud detection system, we are interested in estimating the probability that an account is victimized or that fraud is currently occurring, based on the call patterns up to the current point in time (filtering). We can calculate the probabilities of the states of the hidden variables by applying the following equations recursively with t = 1, ..., T:

P(v_t = i, s_{t-1} = k | Y_{t-1}) = Σ_l p^v_{il} P(v_{t-1} = l, s_{t-1} = k | Y_{t-1})

P(v_t = i, s_t = j | Y_{t-1}) = Σ_k p^s_{jik} P(v_t = i, s_{t-1} = k | Y_{t-1})

P(v_t = i, s_t = j | Y_t) = c · p^y_{y_t j y_{t-1}} P(v_t = i, s_t = j | Y_{t-1}),

where c is a scaling factor chosen so that the probabilities sum to one. These equations can be derived from the junction tree algorithm for Bayesian networks (Jensen, 1996).
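The filtering recursion can be sketched directly in code. This is an illustrative implementation under the notation above, not the authors' software; the joint over (v_t, s_t) is kept as a 2×2 array and renormalised at every step, which plays the role of the scaling factor c.

```python
import numpy as np

def filter_step(f_prev, y_t, y_prev, pv, ps, py):
    """One filtering update, producing P(v_t, s_t | Y_t).

    f_prev[l, k] = P(v_{t-1} = l, s_{t-1} = k | Y_{t-1})
    pv[i, l]     = P(v_t = i | v_{t-1} = l)
    ps[j, i, k]  = P(s_t = j | v_t = i, s_{t-1} = k)
    py[m, j, n]  = P(y_t = m | s_t = j, y_{t-1} = n)
    """
    # Prediction over v: P(v_t = i, s_{t-1} = k | Y_{t-1})
    pred_v = pv @ f_prev
    # Prediction over s: P(v_t = i, s_t = j | Y_{t-1})
    pred = np.einsum('jik,ik->ij', ps, pred_v)
    # Correction with the observed call variable, then renormalise (factor c)
    post = py[y_t, :, y_prev][None, :] * pred
    return post / post.sum()

def run_filter(y, pv, ps, py, f0):
    """Filtered posteriors P(v_t, s_t | Y_t) for a whole call sequence y[0..T]."""
    f, out = f0, []
    for t in range(1, len(y)):
        f = filter_step(f, y[t], y[t - 1], pv, ps, py)
        out.append(f)
    return np.array(out)
```

The victimization and fraud probabilities of the text are then row and column sums of each filtered 2×2 array.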
We obtain the probability of victimization and fraud by simple marginalization:

P(v_t = i | Y_t) = Σ_j P(v_t = i, s_t = j | Y_t);  P(s_t = j | Y_t) = Σ_i P(v_t = i, s_t = j | Y_t).

In some cases, in particular for the EM learning rules in the next section, we might be interested in estimating the probabilities of the hidden states at some time in the past (smoothing). In this case we can use a variation of the smoothing equations described in Hamilton (1994) and Kim (1994). After performing the forward recursion, we can calculate the probability of the hidden states at time t' given data up to time T > t' by iterating the following equations with t = T, T-1, ..., 1:

P(v_{t+1} = k, s_t = j | Y_T) = Σ_l [ P(v_{t+1} = k, s_{t+1} = l | Y_T) / P(v_{t+1} = k, s_{t+1} = l | Y_t) ] P(v_{t+1} = k, s_t = j | Y_t) p^s_{lkj}

P(v_t = i, s_t = j | Y_T) = Σ_k [ P(v_{t+1} = k, s_t = j | Y_T) / P(v_{t+1} = k, s_t = j | Y_t) ] P(v_t = i, s_t = j | Y_t) p^v_{ki}

2.3 EM LEARNING RULES

Parameter estimation in the regime-switching model is conveniently formulated as an incomplete-data problem, which can be solved using the EM algorithm (Hamilton, 1994). Each iteration of the EM algorithm is guaranteed to increase the value of the marginal log-likelihood function until a fixed point is reached. This fixed point is a local optimum of the marginal log-likelihood function. In the M-step the model parameters are optimized using the estimates of the hidden states obtained with the current parameter estimates. Let θ = {p^v_{ij}, p^s_{ijk}, p^y_{ikj}} denote the current parameter estimates. The new estimates are obtained using

p̂^v_{ij} = Σ_{t=1}^{T} P(v_t = i, v_{t-1} = j | Y_T; θ) / Σ_{t=1}^{T} P(v_{t-1} = j | Y_T; θ)

p̂^s_{ijk} = Σ_{t=1}^{T} P(s_t = i, v_t = j, s_{t-1} = k | Y_T; θ) / Σ_{t=1}^{T} P(v_t = j, s_{t-1} = k | Y_T; θ)

p̂^y_{ikj} = Σ_{t=1, y_t=i, y_{t-1}=j}^{T} P(s_t = k | Y_T; θ) / Σ_{t=1, y_{t-1}=j}^{T} P(s_t = k | Y_T; θ)
These can be determined using the smoothing equations from the previous section directly by marginalizing P(Vt = k, St = l, Vt+l = i, St+1 = jIYT) where the terms on the right side are obtained from the equations in the last Section. 2.4 DISCRIMINATIVE TRAINING In our data setting, it is not known when the fraudulent accounts were victimized by fraud. This is why we use the EM algorithm to learn the two regimes from data in an incomplete data setting. We know, however, which accounts were victimized by fraud. After EM learning the discrimination ability of the model was not satisfactory. We therefore used the labeled sequences to improve the model. The reason for the poor performance was credited to unsuitable call emission probabilities in the normal state. We therefore minimize the error function E = L:i (maXt P(v!i)I}~(i)) - t(i»)2 with regard to the parameter P;=O,j=O,k=O' where the t(i) = {O, I} is the label for the sequence i. The error function was minimized with Quasi-Newton procedure with numerical differentiation. 3 EXPERIMENTS To test our approach we used a data set consisting of 600 accounts which were not affected by fraud and 304 accounts which were affected by fraud. The time period for non-fraud and fraud accounts were 49 and 92 days, respectively. We divided the data equally into training data and test data. From the non-fraud data we estimated the parameters describing the normal calling behavior, i.e. pr,j =O,k' Next, we fixed the probability that an account is victimized from one time step to the next to PY=l,j=O = 10- 5 and the probability that a victimized account becomes de-victimized as pi=O,j=l = 5 X 10- 4 • Leaving those parameters fixed the remaining parameters were trained using the fraudulent accounts and the EM algorithm described in Section 2. We had to do unsupervised training since it was known by velocity check that the accounts were affected but it was not clear when the intrusion occurred. 
After unsupervised training, we further enhanced the discrimination capability of the system, which helped us reduce the number of false alarms. The final model parameters can be found in the Appendix. After training, the system was tested using the test data. Unfortunately, it is not known when the accounts were attacked by fraud, but only, on a per-account basis, whether an account was at some point a victim of fraud. Therefore, we declare an account to be victimized if the victimized variable at some point exceeds the threshold. Also, it is interesting to study the results shown in Figure 3. We show data and posterior time-evolving probabilities for an account which is known to be victimized. From the call pattern it is obvious that there are periods of suspiciously high traffic at which the probability of victimization is recognized to be very high. We also see that the variable fraud s_t follows the bursty behavior of the fraudulent activity correctly. Note that for smoothing, which is important both for a retrospective analysis of call data and for learning, we achieve smoother curves for the victimized variable.

Figure 2: The Receiver Operating Characteristic (ROC) curves are shown for on-line detection (left figure) and for retrospective classification (right figure). In the figures, detection probability is plotted against the false alarm probability. The dash-dotted lines are results before, the solid lines after, discriminative training. We can see that the discriminative training improves the model considerably.

After EM training and discriminative training, we tested the model both in on-line detection mode (filtering) and in retrospective classification with smoothed probabilities. The detection results are shown in Figure 2.
With a fixed false alarm probability of 0.003, the detection probabilities for the training set were found to be 0.974 and 0.934 using the on-line detection mode and the smoothed probabilities, respectively. With the test set and a fixed false alarm probability of 0.020, we obtain detection probabilities of 0.928 and 0.921 for on-line detection and retrospective classification, respectively.

4 CONCLUSIONS

We presented a call-based on-line fraud detection system which is based on a hierarchical regime-switching generative model. The inference rules are obtained from the junction tree algorithm for the underlying graphical model. The model is trained using the EM algorithm in an incomplete-data setting and is further refined with gradient-based discriminative training, which considerably improves the results. A few extensions are in the process of being implemented. First of all, it makes sense to use more than one fraud model for the different fraud scenarios and several user models to account for different user profiles. For these more complex models we might have to rely on approximation techniques such as the ones introduced by Jordan, Ghahramani and Saul (1997).

Appendix

The model parameters after EM training and discriminative training. Note that entering the fraud state without first entering the victimized state is impossible.

p^y_{i, j=0, k} = ( 0.9559  0.3533 )    p^s_{i, j=0, k} = ( 1.0000  0.0000 )
                  ( 0.0441  0.6467 )                      ( 0.0000  1.0000 )

p^y_{i, j=1, k} = ( 0.9292  0.0570 )    p^s_{i, j=1, k} = ( 0.9979  0.0086 )
                  ( 0.0708  0.9430 )                      ( 0.0021  0.9914 )

Figure 3: The first line shows the calling data y_t from a victimized account. The second and third lines show the states of the victimized and fraud variables, respectively. Both are calculated with the filtering equations.
The fourth and fifth lines show the same variables using the smoothing equations. The displayed time window is seventeen days.

References

Barson, P., Field, S., Davey, N., McAskie, G., and Frank, R. (1996). The Detection of Fraud in Mobile Phone Networks. Neural Network World, Vol. 6, No. 4.

Bengio, Y. (1996). Markovian Models for Sequential Data. Technical Report #1049, Universite de Montreal.

Burge, P., Shawe-Taylor, J., Moreau, Y., Verrelst, H., Stormann, C. and Gosset, P. (1997). BRUTUS - A Hybrid Detection Tool. Proc. of ACTS Mobile Telecommunications Summit, Aalborg, Denmark.

Fawcett, T. and Provost, F. (1997). Adaptive Fraud Detection. Journal of Data Mining and Knowledge Discovery, Vol. 1, No. 3, pp. 1-28.

Hamilton, J. D. (1994). Time Series Analysis. Princeton University Press.

Jensen, Finn V. (1996). Introduction to Bayesian Networks. UCL Press.

Jordan, M. I., Ghahramani, Z. and Saul, L. K. (1997). Hidden Markov Decision Trees, in Advances in Neural Information Processing Systems: Proceedings of the 1996 Conference (NIPS 9), MIT Press, pp. 501-507.

Kim, C.-J. (1994). Dynamic linear models with Markov-switching. Journal of Econometrics, Vol. 60, pp. 1-22.

Pequeno, K. A. (1997). Real-Time fraud detection: Telecom's next big step. Telecommunications (America Edition), Vol. 31, No. 5, pp. 59-60.

Taniguchi, M., Haft, M., Hollmen, J. and Tresp, V. (1998). Fraud detection in communications networks using neural and probabilistic methods. Proceedings of the 1998 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP'98), Vol. 2, pp. 1241-1244.
|
1998
|
7
|
1,570
|
The Belief in TAP

Yoshiyuki Kabashima, Dept. of Comput. Intell. & Syst. Sci., Tokyo Institute of Technology, Yokohama 226, Japan

David Saad, Neural Computing Research Group, Aston University, Birmingham B4 7ET, UK

Abstract

We show the similarity between belief propagation and TAP, for decoding corrupted messages encoded by Sourlas's method. The latter is a special case of the Gallager error-correcting code, where the code word comprises products of K bits selected randomly from the original message. We examine the efficacy of solutions obtained by the two methods for various values of K and show that solutions for K ≥ 3 may be sensitive to the choice of initial conditions in the case of unbiased patterns. Good approximations are obtained generally for K = 2 and for biased patterns in the case of K ≥ 3, especially when Nishimori's temperature is being used.

1 Introduction

Belief networks [1] are diagrammatic representations of joint probability distributions over a set of variables. This set is usually represented by the vertices of a graph, while arcs between vertices represent probabilistic dependencies between variables. Belief propagation provides a convenient mathematical tool for calculating iteratively joint probability distributions between variables and has been used in a variety of cases, most recently in the field of error-correcting codes, for decoding corrupted messages [2] (for a review of graphical models and their use in the context of error-correcting codes see [3]). Error-correcting codes provide a mechanism for retrieving the original message after corruption due to noise during transmission. Of particular interest to the current paper is an error-correcting code presented by Sourlas [4], which is a special case of the Gallager codes [5]. The latter have been recently re-discovered by MacKay and Neal [2] and seem to have significant practical potential.
In this paper we will examine the similarities between the belief propagation (BP) and TAP approaches, used to decode corrupted messages encoded by Sourlas's method, and compare the solutions obtained by both approaches to the exact results obtained using the replica method [8]. The statistical mechanics approach will then allow us to draw some conclusions on the efficacy of the TAP/BP approach in the context of error-correcting codes. The paper is arranged in the following manner: In section 2 we will introduce the encoding method and describe the decoding task. The belief propagation approach to the decoding process will be introduced in section 3 and will be compared to the TAP approach for diluted spin systems in section 4. Numerical solutions for various cases will be presented in section 5, and we will summarize our results and discuss their implications in section 6.

2 The decoding problem

In a general scenario, a message represented by an N-dimensional binary vector ξ is encoded by a vector J^0 which is then transmitted through a noisy channel with some flipping probability p per bit. The received message J is then decoded to retrieve the original message. Sourlas's code [4] is based on encoded message bits of the form J^0_{i_1, i_2, ..., i_K} = ξ_{i_1} ξ_{i_2} ··· ξ_{i_K}, taking the product of K different message sites for each code word bit. In the statistical mechanics approach we will attempt to retrieve the original message by exploring the ground state of the following Hamiltonian, which corresponds to the preferred state of the system in terms of 'energy':

H = − Σ_{(i_1, ..., i_K)} A_{(i_1, ..., i_K)} J_{(i_1, ..., i_K)} S_{i_1} ··· S_{i_K} − (F/β) Σ_k S_k,   (1)

where S is an N-dimensional binary vector of dynamical variables and A is a sparse tensor with C unit elements per index (other elements are zero), which determines the components of J^0.
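The construction of the code word J^0 and its corruption by the channel can be sketched as follows. This is a minimal sketch, not the authors' code: the tensor A is represented simply as a list of random K-tuples of site indices, so that each site participates in C = (number of code bits)·K/N checks only on average, a simplification of the exact C-per-index construction.

```python
import numpy as np

def sourlas_encode(xi, n_codebits, K, rng=None):
    # Each code-word bit is the product of K randomly chosen message bits
    # (the non-zero entries of the sparse tensor A).
    rng = np.random.default_rng(rng)
    idx = np.array([rng.choice(len(xi), size=K, replace=False)
                    for _ in range(n_codebits)])
    J0 = np.prod(xi[idx], axis=1)
    return J0, idx

def binary_symmetric_channel(J0, p, rng=None):
    # Flip each transmitted bit independently with probability p.
    rng = np.random.default_rng(rng)
    flips = rng.random(len(J0)) < p
    return np.where(flips, -J0, J0)
```

With ±1-valued message bits, every code bit is itself ±1, and the code rate is R = K/C as in the text.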
The last term on the right is required in the case of sparse (biased) messages and will require assigning a certain value to the additive field F/β, related to the prior belief in the Bayesian framework. The statistical mechanical analysis can be easily linked to the Bayesian framework [4], in which one focuses on the posterior probability using Bayes' theorem, P(S|J) ∝ ∏_μ P(J_μ|S) P_0(S), where μ runs over the message components and P_0(S) represents the prior. Knowing the posterior, one can calculate the typical retrieved message elements and their alignment, which correspond to Bayes-optimal decoding. The logarithms of the likelihood and prior terms are directly related to the first and second components of the Hamiltonian (Eq. 1). One should also note that A_{(i_1, ..., i_K)} J_{(i_1, ..., i_K)} represents a similar encoding scheme to that of Ref. [2], where a sparse matrix with K non-zero elements per row multiplies the original message ξ and the resulting vector, modulo 2, is transmitted. Sourlas analyzed this code in the cases of K = 2 and K → ∞, where the ratio C/K → ∞, by mapping them onto the SK [9] and Random Energy [10] models respectively. However, the ratio R = K/C constitutes the code rate, and the scenarios examined by Sourlas therefore correspond to the limited case of a vanishing code rate. The case of finite code rate, which we will consider here, has only recently been analyzed [8].

3 Decoding by belief propagation

As our goal of calculating the posterior of the system P(S|J) is rather difficult, we resort to the methods of BP, focusing on the calculation of conditional probabilities when some elements of the system are set to specific values or removed.
The approach adopted in this case, which is quite similar to the practical approach employed in the case of Gallager codes [2], assumes a two-layer system corresponding to the elements of the corrupted message J and the dynamical variables S respectively, defining conditional probabilities which relate elements in the two layers:

q^x_{μl} = P(S_l = x | {J_{ν≠μ}}),
r^x_{μl} = P(J_μ | S_l = x, {J_{ν≠μ}}) = Σ_{{S_{k≠l}}} P(J_μ | S_l = x, {S_{k≠l}}) P({S_{k≠l}} | {J_{ν≠μ}}),   (2)

where the index μ represents an element of the received vector message J, constituted by a particular choice of indices i_1, ..., i_K, which is connected to the corresponding index of S (l in the first equation), i.e., for which the corresponding element A_{(i_1, ..., i_K)} is non-zero; the notation {S_{k≠l}} refers to all elements of S, excluding the l-th element, which are connected to the corresponding index of J (μ in this case for the second equation); the index x can take the values ±1. The conditional probabilities q^x_{μl} and r^x_{μl} will enable us, through recursive calculations, to obtain an approximate expression for the posterior. Employing Bayes' rule and the assumption that the dependency of S_l on an element J_ν is factorizable and vice versa, P(S_{l_1}, S_{l_2}, ..., S_{l_K} | {J_{ν≠μ}}) = ∏_{k=1}^{K} P(S_{l_k} | {J_{ν≠μ}}) and P({J_{ν≠μ}} | S_l = x) = ∏_{ν≠μ} P(J_ν | S_l = x, {J_{σ≠ν}}), one can rewrite a set of coupled equations for q^{+1}_{μl}, q^{-1}_{μl}, r^{+1}_{μl} and r^{-1}_{μl} of the form

q^x_{μl} = a_{μl} p^x_l ∏_{ν≠μ} r^x_{νl}  and  r^x_{μl} = Σ_{{S_{k≠l}}} P(J_μ | S_l = x, {S_{k≠l}}) ∏_{k≠l} q^{S_k}_{μk},   (3)

where a_{μl} is a normalizing factor such that q^{+1}_{μl} + q^{-1}_{μl} = 1, and p^x_l = P(S_l = x) are our prior beliefs in the values of the source bits S_l.
At each iteration we can also calculate the pseudo-posterior probabilities qf = alPI nil r~/' where al are normalizing factors, to determine the current estimated value of SI. Two points that are worthwhile noting: Firstly, the iterative solution makes use of the normalization r~/+r;/ = 1, which is not derived from the basic probability rules and makes implicit assumptions about the probabilities of obtaining SI = ±1 for all elements I. Secondly, the iterative solution would have provided the true posterior probabilities qf if the graph connecting the message J and the encoded bits 5 would have been free of cycles, i.e., if the graph would have been a tree with no recurrent dependencies among the variables. The fact that the framework provides adequate practical solutions has only recently been explained [13]. 4 Decoding by TAP We will now show that for this particular problem it is possible to obtain a similar set of equations from the corresponding statistical mechanics framework based on Bethe approximation [11] or the TAP (Thouless-Anderson-Palmer) approach [12] to diluted systems 1 . In the statistical mechanics approach we assign a Boltzmann 1 The terminology in the case of diluted systems is slightly vague. Unlike in the case of fully connected systems, self consistent equations of diluted systems cannot be derived The Beliefin TAP 249 weight to each set comprising an encoded message bit J II. and a dynamical vector S WE (JII.IS) = e-{3 9(1I'IS) , (4) such that the first term of the system's Hamiltonian (Eq.1) can be rewritten as L II. g ( Jil.l S) , where the index J..l runs over all non-zero sites in the multidimensional tensor A. 
We will now employ two straightforward assumptions to write a set of coupled equations for the mean field q~1 == P(511 {Jvtll.})' which may be identified as the same variable as in the belief network framework (Eq.2) , and the effective Boltzmann weight weff (J 11.151, {J vtll.}): 1) we assume a mean field behavior for the dependence of the dynamical variables S on a certain realization of the message sites J, i.e., the dependence is factorizable and may be replaced by a product of mean fields . 2) Boltzmann weights (effective) for site 51 are factorizable with respect to J 11.. The resulting set of equations are of the form weff(J1l. 151, {Jvtll.}) = Tr{Sk;l!z} WE (JII. 1 S) II q~r k;tl qSl 11.1 == CLII.I pfl II Weff(Jv 151, {J"tv}) , vtll. (5) where CLII.I is a normalization factor and pfl is our prior knowledge of the source's bias. Replacing the effective Boltzmann weight by a normalized field, which may be identified as the variable r~1 of Eq.(2), we obtain r~l = P (51 1 JII.' {Jvtll.}) = ali.I weff(J1l. 151, {Jvtll.}) , (6) i.e., a set of equations equivalent to Eq.(3). The explicit expressions of the normalization coefficients, ali.I and CLII.I' are a~l = Tr{s} WE (JII.IS) II q~f k;tl and (7) The somewhat arbitrary use of the differences oqll.l = (5i}q and Dril.l = (5i}r in the BP approach becomes clear form the statistical mechanics description, where they represent the expectation values of the dynamical variables with respect to the fields . The statistical mechanics formulation also provides a partial answer to the successful use of the BP methods to loopy systems, as we consider a finite number of steps on an infinite lattice [14]. However, it does not provide an explanation in the case of small systems which should be examined using other methods. 
The formulation so far has been general; in the case of Sourlas's code, however, we can make use of the explicit expression for g to derive the relation between q^{S_l}_{μl}, r^{S_l}_{μl}, δq_{μl} and δr_{μl}, as well as an explicit expression for W_B(J_μ | S, β):

    q^{S_l}_{μl} = (1/2) (1 + δq_{μl} S_l) ,    r^{S_l}_{μl} = (1/2) (1 + δr_{μl} S_l) ,    (8)

    W_B(J_μ | S, β) = (1/2) cosh βJ_μ ( 1 + tanh βJ_μ ∏_{l∈L(μ)} S_l ) ,    (9)

where L(μ) is the set of all sites of S connected to J_μ, i.e., for which the corresponding element of the tensor A is non-zero. The explicit form of the equations for δq_{μl} and δr_{μl} becomes

    δr_{μl} = tanh βJ_μ ∏_{k∈L(μ)\l} δq_{μk}    and    δq_{μl} = tanh ( Σ_{ν∈M(l)\μ} tanh⁻¹ δr_{νl} + F ) ,    (10)

where M(l)\μ is the set of all indices of the tensor J, excluding μ, which are connected to the vector site l; the external field F, which previously appeared in the last term of Eq. (1), is directly related to our prior belief of the message bias,

    p^{S_l} = (1/2) (1 + tanh F S_l) .    (11)

We have therefore shown that there is a direct relation between the equations derived from the BP approach and from TAP in this particular case. One should note that the TAP approach allows for the use of finite inverse temperatures β, which is not naturally included in the BP approach.

^1 (continued) by the perturbation expansion of the mean field equations with respect to Onsager reaction fields, since these fields are too large in diluted systems. Consequently, the resulting equations are different from those obtained for fully connected systems [12]. We termed our approach TAP, following the convention for the Bethe approximation when applied to disordered systems subject to mean-field-type random interactions.

5 Numerical solutions

To examine the efficacy of TAP/BP decoding we used the method for decoding corrupted messages encoded by the Sourlas scheme [4], for which we have previously obtained analytical solutions using the replica method [8].
We solved Eq. (10) iteratively for specific cases, making use of the differences δq_{μl} and δr_{μl} to obtain the values of q^{S_l}_{μl} and r^{S_l}_{μl} and of the magnetization M. Numerical solutions of 10 individual runs for each value of the flip rate p, starting from different initial conditions, obtained for the case K = 2 and C = 4, different biases (f = 0.1, 0.5, the probability of a +1 bit in the original message ξ) and temperatures (T = 0.26, T_n), are shown in Fig. 1a. For each run, 20000-bit code words J⁰ were generated from a 10000-bit message ξ using a fixed random sparse tensor A. The noise-corrupted code word J was decoded to retrieve the original message ξ. Initial conditions were set to δr_{μl} = 0 and δq_{μl} = tanh F, reflecting the prior belief; whenever the TAP/BP approach was successful in predicting the theoretical values we observed convergence in most runs, corresponding to the ferromagnetic phase, while almost all runs at low temperatures did not converge to a stable solution above the critical flip rate (although the magnetization M did converge, as one may expect). We obtain good agreement between the TAP/BP solutions and the theoretical values calculated using the methods of [8] (diamond symbols and dashed line respectively). The results for biased patterns at T = 0.26, presented in the form of mean values and standard deviations, show a sub-optimal improvement in performance, as expected. Obtaining solutions under similar conditions but at Nishimori's temperature, 1/T_n = (1/2) ln[(1 - p)/p] [7], we see that pattern sparsity is exploited optimally, resulting in a magnetization M ≈ 0.8 for high corruption rates, as T_n simulates accurately the loss of information due to channel noise [6, 7]; results for unbiased patterns (not shown) are not affected significantly by the use of Nishimori's temperature. The replica-based theoretical solutions [8] indicate a profoundly different behaviour for K = 2 in comparison to other K values.
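The iterative scheme of Eq. (10) is straightforward to prototype. The sketch below is our own illustration, not the authors' code: the function name tap_decode, the ring-shaped K = 2 code in the usage example, and all parameter values are assumptions made purely for demonstration.

```python
import numpy as np

def tap_decode(couplings, J, N, beta, F, iters=50):
    """Iterate the TAP/BP update rules of Eq. (10) for a Sourlas-type code.

    couplings : list of index tuples, the sites L(mu) entering each coupling
    J         : the (possibly corrupted) coupling values J_mu
    Returns the decoded bits sign(m_l) from the pseudo-posterior magnetizations.
    """
    # one message per edge (mu, l) of the bipartite graph, started from the prior
    dq = {(mu, l): np.tanh(F) for mu, sites in enumerate(couplings) for l in sites}
    dr = {}
    for _ in range(iters):
        # dr_{mu l} = tanh(beta * J_mu) * prod_{k in L(mu)\l} dq_{mu k}
        for mu, sites in enumerate(couplings):
            for l in sites:
                dr[(mu, l)] = np.tanh(beta * J[mu]) * np.prod(
                    [dq[(mu, k)] for k in sites if k != l])
        # dq_{mu l} = tanh( sum_{nu in M(l)\mu} atanh(dr_{nu l}) + F )
        for mu, sites in enumerate(couplings):
            for l in sites:
                field = sum(np.arctanh(dr[(nu, l)])
                            for nu, other in enumerate(couplings)
                            if l in other and nu != mu)
                dq[(mu, l)] = np.tanh(field + F)
    # pseudo-posterior magnetization m_l: the full sum over M(l), plus the prior field
    m = [np.tanh(sum(np.arctanh(dr[(mu, l)])
                     for mu, sites in enumerate(couplings) if l in sites) + F)
         for l in range(N)]
    return np.sign(m)
```

On a toy ring code (each site in four K = 2 couplings) with a single flipped coupling and a positive prior field F, the iteration settles into the ferromagnetic solution and recovers the all-(+1) message.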
Figure 1: Numerical solutions for M and different flip rates p. (a) For K = 2, different biases (f = 0.1, 0.5) and temperatures (T = 0.26, T_n). Results for the unbiased patterns are shown as raw data (10 runs per flip rate value p, diamonds), while the theoretical solution is marked by the dashed line. Results for biased patterns are presented by their mean and standard deviation, showing a sub-optimal performance, as expected, for T = 0.26 and an optimal one at Nishimori's temperature T_n. The standard deviation is significantly smaller than the symbol size. Figure (b) shows results for the case K = 5 and T = T_n in conditions similar to (a). Also here iterative solutions may generally drift away from the theoretical values where temperatures other than T_n are employed (not shown); using Nishimori's temperature alleviates the problem only in the case of biased messages, and the results are in close agreement with the theoretical solutions (inset: focusing on low p values).

We therefore obtained solutions for K = 5 under similar conditions (which are representative of results obtained in other cases of K ≠ 2). The results, presented in Fig. 1b in terms of means and standard deviations of 10 individual runs per flip rate value p, are less encouraging, as the iterative solutions are sensitive to the choice of initial conditions and tend to
converge to sub-optimal values unless high sparsity and the appropriate choice of temperature (T_n) force them to the correct values, in which case they show good agreement with the theoretical results (solid line, see inset). This phenomenon is indicative of the fact that the ground state of the non-biased system is macroscopically degenerate, with multiple equally good ground states. We conclude that the TAP/BP approach may be highly useful in the case of biased patterns but may lead to errors for unbiased patterns and K ≥ 3, and that the use of the appropriate temperature, i.e., Nishimori's temperature, enables one to obtain improved results, in agreement with results presented elsewhere [4, 6, 7].

6 Summary and discussion

We compared the use of BP to that of TAP for decoding corrupted messages encoded by Sourlas's method, to discover that in this particular case the two methods provide a similar set of equations. We then solved the equations iteratively for specific cases and compared the results to those obtained by the replica method. The solutions indicate that the method is particularly useful in the case of biased messages and that using Nishimori's temperature is highly beneficial; solutions obtained using other temperature values may be sub-optimal. For non-sparse messages and K ≥ 3 we may obtain erroneous solutions using these methods. It would be desirable to explore whether the similarity between the equations derived using TAP and BP is restricted to this particular case or whether there is a more general link between the two methods. Another important question that remains open is the generality of our conclusions on the efficacy of these methods for decoding corrupted messages, as they are currently being applied in a variety of state-of-the-art coding schemes (e.g., [2, 3]).
Understanding the limitations of these methods, and the proper way to use them in general, especially in the context of error-correcting codes, may be highly beneficial to practitioners.

Acknowledgment This work was partially supported by the RFTF program of the JSPS (YK) and by EPSRC grant GR/L19232 (DS).

References
[1] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Morgan Kaufmann, 1988).
[2] D.J.C. MacKay and R.M. Neal, Elect. Lett., 33, 457 and preprint (1997).
[3] B.J. Frey, Graphical Models for Machine Learning and Digital Communication (MIT Press, 1998).
[4] N. Sourlas, Nature, 339, 693 (1989) and Europhys. Lett., 25, 159 (1994).
[5] R.G. Gallager, IRE Trans. Info. Theory, IT-8, 21 (1962).
[6] P. Rujan, Phys. Rev. Lett., 70, 2968 (1993).
[7] H. Nishimori, J. Phys. C, 13, 4071 (1980) and J. Phys. Soc. of Japan, 62, 1169 (1993).
[8] Y. Kabashima and D. Saad, Europhys. Lett., 45, in press (1999).
[9] D. Sherrington and S. Kirkpatrick, Phys. Rev. Lett., 35, 1792 (1975).
[10] B. Derrida, Phys. Rev. B, 24, 2613 (1981).
[11] H. Bethe, Proc. R. Soc. A, 151, 540 (1935).
[12] D. Thouless, P.W. Anderson and R.G. Palmer, Phil. Mag., 35, 593 (1977).
[13] Y. Weiss, MIT preprint CBCL155 (1997).
[14] D. Sherrington and K.Y.M. Wong, J. Phys. A, 20, L785 (1987).
|
1998
|
70
|
1,571
|
Synergy and redundancy among brain cells of behaving monkeys

Itay Gat
Institute of Computer Science and Center for Neural Computation, The Hebrew University, Jerusalem 91904, Israel

Naftali Tishby†
NEC Research Institute, 4 Independence Way, Princeton NJ 08540

{itay,tishby}@cs.huji.ac.il
†Permanent address: Institute of Computer Science and Center for Neural Computation, The Hebrew University, Jerusalem 91904, Israel.

Abstract Determining the relationship between the activity of a single nerve cell and that of an entire population is a fundamental question that bears on the basic neural computation paradigms. In this paper we apply an information-theoretic approach to quantify the level of cooperative activity among cells in a behavioral context. It is possible to discriminate between synergetic activity of the cells and redundant activity, depending on the difference between the information they provide when measured jointly and the information they provide independently. We define a synergy value that is positive in the first case and negative in the second, and show that the synergy value can be measured by detecting the behavioral mode of the animal from simultaneously recorded activity of the cells. We observe that positive synergy can be found among cortical cells, while cells from the basal ganglia, active during the same task, do not exhibit similar synergetic activity.

1 Introduction

Measuring ways by which several neurons in the brain participate in a specific computational task can shed light on fundamental neural information processing mechanisms. While it is unlikely that complete information from any macroscopic neural tissue will ever be available, some interesting insight can be obtained from simultaneously recorded cells in the cortex of behaving animals.
The question we address in this study is the level of synergy, or the level of cooperation, among brain cells, as determined by the information they provide about the observed behavior of the animal.

1.1 The experimental data

We analyze simultaneously recorded units from behaving monkeys during a delayed-response behavioral experiment. The data were collected at the high brain function laboratory of the Hadassah Medical School of the Hebrew University [1, 2]. In this task the monkey had to remember the location of a visual stimulus and respond by touching that location after a delay of 1-32 sec. Correct responses were rewarded by a drop of juice. In one set of recordings six micro-electrodes were inserted simultaneously into the frontal or prefrontal cortex [1, 3]. In another set of experiments the same behavioral paradigm was used and recordings were taken from the striatum, which is the first station in the basal ganglia (a sub-cortical ganglion) [2]. The cells recorded in the striatum were the tonically active neurons [2], which are known to be the cholinergic inter-neurons of the striatum. These cells are known to respond to reward. The monkeys were trained to perform the task in two alternating modes, "Go" and "No-Go" [1]. Both sets of behavioral modes can be detected from the recorded spike trains using several statistical modeling techniques that include Hidden Markov Models (HMM) and Post Stimulus Time Histograms (PSTH). The details of these detection methods are reported elsewhere [4, 5]. For this paper it is important to know that we can detect the correct behavior significantly: for example, "Go" vs. "No-Go" is detected correctly about 90% of the time, where chance is 50% and the monkey's average performance is 95% correct on this task.

2 Theoretical background

Our measure of the synergy level among cells is information theoretic and was recently proposed by Brenner et al. [6] for the analysis of spikes generated by a single neuron.
This is the first application of this measure to quantify cooperativity among neurons.

2.1 Synergy and redundancy

A fundamental quantity in information theory is the mutual information between two random variables X and Y. It is defined as the cross-entropy (Kullback-Leibler divergence) between the joint distribution of the variables, p(x, y), and the product of the marginal distributions p(x)p(y). As such it measures the statistical dependence of the variables X and Y. It is symmetric in X and Y and has the following familiar relations to their entropies [7]:

    I(X; Y) = D_KL[ P(X,Y) || P(X)P(Y) ] = Σ_{x,y} P(x,y) log ( P(x,y) / (P(x)P(y)) )    (1)
            = H(X) + H(Y) - H(X,Y) = H(X) - H(X|Y) = H(Y) - H(Y|X).

When given three random variables X_1, X_2 and Y, one can consider the mutual information between the joint variable (X_1, X_2) and the variable Y, I(X_1, X_2; Y) (notice the position of the semicolon), as well as the mutual informations I(X_1; Y) and I(X_2; Y). Similarly, one can consider the mutual information between X_1 and X_2 conditioned on a given value Y = y, I(X_1; X_2 | y) = D_KL[ P(X_1,X_2|y) || P(X_1|y)P(X_2|y) ], as well as its average, the conditional mutual information, I(X_1; X_2 | Y) = Σ_y P(y) I(X_1; X_2 | y). Following Brenner et al. [6] we define the synergy level of X_1 and X_2 with respect to the variable Y as

    Syn_Y(X_1, X_2) = I(X_1, X_2; Y) - ( I(X_1; Y) + I(X_2; Y) ) ,    (2)

with the natural generalization to more than two variables X. This expression can be rewritten in terms of entropies and conditional information as follows:

    Syn_Y(X_1, X_2) = H(X_1, X_2) - H(X_1, X_2 | Y) - [ (H(X_1) - H(X_1|Y)) + (H(X_2) - H(X_2|Y)) ]    (3)
                    = [ H(X_1|Y) + H(X_2|Y) - H(X_1, X_2 | Y) ] + [ H(X_1, X_2) - (H(X_1) + H(X_2)) ] ,

where the first bracket depends on Y and the second is independent of Y. When the variables exhibit a positive synergy value with respect to the variable Y, they jointly provide more information about Y than when considered independently, as expected in synergetic cases.
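To make Eq. (2) concrete, synergy can be evaluated exactly for small discrete distributions. The sketch below is our own (the paper estimates these quantities from recorded data, not from known tables); mutual_information and synergy are hypothetical helper names.

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits from a joint probability table p(x, y), Eq. (1)."""
    px = pxy.sum(axis=1, keepdims=True)          # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)          # marginal p(y)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

def synergy(pxxy):
    """Syn_Y(X1, X2) = I(X1,X2;Y) - (I(X1;Y) + I(X2;Y)), Eq. (2),
    evaluated from a joint table p(x1, x2, y)."""
    n1, n2, ny = pxxy.shape
    joint = pxxy.reshape(n1 * n2, ny)            # treat (X1, X2) as one variable
    p1y = pxxy.sum(axis=1)                       # p(x1, y)
    p2y = pxxy.sum(axis=0)                       # p(x2, y)
    return (mutual_information(joint)
            - mutual_information(p1y) - mutual_information(p2y))
```

For two independent fair bits with Y = X1 XOR X2, the synergy is +1 bit (purely synergetic: neither cell alone carries any information about Y); for X1 = X2 = Y it is -1 bit (purely redundant), matching the sign convention above.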
Negative synergy values correspond to redundancy: the variables do not provide independent information about Y. A zero synergy value is obtained when the variables are independent of Y or when there is no change in their dependence when conditioned on Y. We claim that this is a useful measure of cooperativity among neurons in a given computational task. It is clear from Eq. (3) that

    I_y(X_1; X_2) = I(X_1; X_2)  ∀ y ∈ Y   ⟹   Syn_Y(X_1, X_2) = 0 ,    (4)

since in that case Σ_y P(y) I_y(X_1; X_2) = I(X_1; X_2). In other words, the synergy value is non-zero only if the statistical dependence, hence the mutual information between the variables, is affected by the value of Y. It is positive when the mutual information increases, on average, when conditioned on Y, and negative if this conditional mutual information decreases. Notice that the value of the synergy can be both positive and negative since information, unlike entropy, is not sub-additive in the X variables.

3 Synergy among neurons

Our measure of synergy among the units is based on the ability to detect the behavioral mode from the recorded activity, as we discuss below. As discussed above, synergy among neurons is possible only if their statistical dependence changes with time. An important case where synergy is not expected is pure "population coding" [8]. In this case the cells are expected to fire independently, each with its own fixed tuning curve. Our synergy value can thus be used to test whether the recorded units are indeed participating in a pure population code of this kind, as hypothesized for certain motor cortical activity. Theoretical models of the cortex that clearly predict nonzero synergy include attractor neural networks (ANN) [9] and synfire chain models (SFC) [3]. Both these models predict changes in the collective activity patterns, as neurons move between attractors in the ANN case, or when different synfire chains of activity are born or disappear in the SFC case.
To the extent that such changes in the collective activity depend on behavior, nonzero synergy values can be detected. It remains an interesting theoretical challenge to estimate the quantitative synergy values for such models and compare them to observed quantities.

3.1 Time-dependent cross-correlations

In our previous studies [4] we demonstrated, using hidden Markov models of the activity, that the pairwise cross-correlations in the same data can change significantly with time, depending on the underlying collective state of activity. These states, revealed by the hidden Markov model, in turn depend on the behavior and enable its prediction. Dramatic and fast changes in the cross-correlation of cells have also been shown by others [10]. This finding indicates directly that the statistical dependence of the neurons can change (rapidly) with time, in a way correlated with behavior. This clearly suggests that nonzero synergy should be observed among these cortical units, relative to this behavior. In the present study this theoretical hypothesis is verified.

3.2 Redundancy cases

If, on the other hand, the conditioned mutual information equals zero for all behavioral modes, i.e. I_y(X_1; X_2) = 0 ∀ y ∈ Y, while I(X_1; X_2) > 0, we expect to get negative synergy, or redundancy among the cells, with respect to the behavior variable Y. We observed clear redundancy in another part of the brain, the basal ganglia, during the same experiment, when the behavior was the pre-reward and post-reward activity. In this case different cells provide exactly the same information, which yields negative synergy values.

4 Experimental results

4.1 Synergy measurement in practice

To evaluate the synergy value among different cells, it is necessary to estimate the conditional distribution p(y|x), where y is the current behavior and x represents a single trial of spike trains of the considered cells.
Estimating this probability, however, requires an underlying statistical model, or a representation of the spike trains; otherwise there is never enough data, since cortical spike trains are never exactly reproducible. In this work we choose the rate representation, which is the simplest to evaluate. The estimation of p(y|x) goes as follows:
• For each of the M behavioral modes (y_1, y_2, ..., y_M), collect spike train samples (the training data set).
• Using the training sample, construct a Post Stimulus Time Histogram (PSTH), i.e. the rate as a function of time, for each behavioral mode.
• Given a spike train outside of the training set, compute its probability under each of the M mode models.
• The spike train is considered correctly classified if the most probable mode is in fact the true behavioral mode, and incorrectly otherwise.
• The fraction of correct classifications, for all spike trains of a given behavioral mode y_i, is taken as the estimate of P(y_i | x), and denoted p_c, where c is the identity of the cells used in the computation.
For the case of only two categories of behavior and a uniform distribution over the categories, the value of the entropy H(Y) is the same for all combinations of cells, and is simply H(Y) = -Σ_y p(y) log_2 p(y) = log_2 2 = 1. The full expression (in bits) for the synergy value can thus be written as follows:

    Syn_Y(c_1, c_2) = -Σ_x p(x) [ -Σ_y p_{c_1 c_2} log_2 p_{c_1 c_2} ] - 1
                      + Σ_x p(x) [ -Σ_y p_{c_1} log_2 p_{c_1} ] + Σ_x p(x) [ -Σ_y p_{c_2} log_2 p_{c_2} ] ,    (5)

where p_c denotes the estimated P(y|x) obtained from the cell set c. If the information provided by the pair exceeds the sum of the informations provided by the single cells, there is (positive) synergy, and vice versa for redundancy. However, there is one very important caveat. As we saw, the computation of the mutual information is not done exactly; what one really computes is only a lower bound.
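Under the two-mode, uniform-prior setting above (H(Y) = 1 bit), Eq. (5) reduces to comparing binary entropies of detection probabilities. The sketch below is a simplification of our own making: it folds the average over p(x) into a single average detection probability per cell set, which the paper does not do explicitly, and the function names are hypothetical.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def synergy_from_detection(p_c1, p_c2, p_c12):
    """Eq. (5) for two equiprobable behavioral modes, so H(Y) = 1 bit.

    Each argument is an average probability of correct classification,
    using cell 1 alone, cell 2 alone, and both cells jointly (the average
    over p(x) is folded into these numbers, a deliberate simplification).
    """
    i1 = 1.0 - h2(p_c1)      # estimated I(cell 1; Y)
    i2 = 1.0 - h2(p_c2)      # estimated I(cell 2; Y)
    i12 = 1.0 - h2(p_c12)    # estimated I(both cells; Y)
    return i12 - (i1 + i2)
```

Chance-level detection (50% everywhere) gives zero synergy; detection rates like those reported in Table 1, where the pair clearly outperforms each cell, give a positive value; identical single-cell and joint rates give a negative (redundant) value.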
If the bound is tighter for the multiple-cell calculation, the method could falsely infer positive synergy, and if the bound is tighter for the single-cell computation, the method could falsely infer negative synergy. In previous work we have shown that the method we use for this estimation is quite reasonable and robust [5]; therefore, we believe that we have a conservative (i.e. less positive) estimate of the synergy.

4.2 Observed synergy values

In the first set of experiments we tried to detect the behavioral mode during the delay period of correct trials. In this case the two types of behavior were the "Go" and "No-Go" modes described in the introduction. An example of this detection problem is given in Figure 1A. In this figure there are 100 examples of multi-electrode recordings of spike trains during the delay period. On the left is the "Go" mode data and on the right the "No-Go" mode, for two cells. On the lower part there is an example of two single spike trains that need to be classified by the mode models.

[Figure 1: raster displays; panel A: "Go" / "No-Go" modes; panel B: "pre-reward" / "post-reward" periods.]
Figure 1: Raster displays of simultaneously recorded cells in the two different areas; in each area there were two behavioral modes.

Table 1 gives some examples of detection results obtained by using two cells independently, and by using their joint combination. It can be seen that the synergy is positive and significant. We examined 19 recording sessions with the same behavioral modes for two different animals and evaluated the synergy value. In 18 out of the 19 sessions there was at least one example of significant positive synergy among the cells. For comparison we analyzed another set of experiments in which the data were recorded from the striatum in the basal ganglia. An example of this detection is shown in Figure 1B. The behavioral modes were the "pre-reward" vs. the "post-reward" periods. Nine recording sessions for the two different monkeys were examined using the same detection technique. Although the detection results improve when the number of cells increases, in none of these recordings was a positive synergy value found. For most of the data the synergy value was close to zero, i.e. the mutual information of two cells jointly was close to the sum of the mutual informations of the independent cells, as expected when the cells exhibit (conditionally) independent activity. The prevailing difference between the synergy measurements in the cortex and in the TANs of the basal ganglia is also strengthened by the different mechanisms underlying those cells. The TANs are assumed to be global mediators of information in the striatum, a relatively simple task, whereas the information processed in the frontal cortex in this task is believed to be much more collective and complicated. Here we suggest a first handle for quantitative detection of such different neuronal activities.
Acknowledgments Special thanks are due to Moshe Abeles for his encouragement and support, and to William Bialek for suggesting the idea to look for synergy among cortical cells. We would also like to thank A. Raz, Hagai Bergman, and Eilon Vaadia for sharing their data with us. The research at the Hebrew University was supported in part by a grant from the United States-Israel Binational Science Foundation (BSF).

Table 1: Examples of synergy among cortical neurons. For each example the mutual information of each cell separately is given together with the mutual information of the pair. In parentheses the matching detection probability (average over p(y|x)) is also given. The last column gives the percentage increase from the mutual information of the single cells to the mutual information of the pair. The table gives only those pairs for which the percentage was larger than 20% and the detection rate higher than 60%.

    Session  Cells  Cell 1          Cell 2          Both cells      Syn (%)
    b116b    5,6    0.068 (64.84)   0.083 (66.80)   0.209 (76.17)   38
    bl21b    1,4    0.201 (73.74)   0.118 (69.70)   0.497 (87.88)   56
    bl21b    3,4    0.082 (66.67)   0.118 (69.70)   0.240 (77.78)   20
    bl26b    0,3    0.062 (62.63)   0.077 (66.16)   0.198 (75.25)   42
    bl26b    1,2    0.030 (60.10)   0.051 (63.13)   0.148 (72.22)   82
    cl77b    2,3    0.054 (62.74)   0.013 (61.50)   0.081 (68.01)   20
    cr38b    0,2    0.074 (65.93)   0.058 (63.19)   0.160 (73.08)   21
    cr38b    0,4    0.074 (65.93)   0.042 (62.09)   0.144 (71.98)   24
    cr38b    3,4    0.051 (62.09)   0.042 (62.09)   0.111 (69.23)   20
    cr43b    0,1    0.070 (65.00)   0.063 (64.44)   0.181 (74.44)   36

References
[1] M. Abeles, E. Vaadia and H. Bergman, Firing patterns of single units in the prefrontal cortex and neural network models, Network, 1 (1990).
[2] A. Raz et al., Neuronal synchronization of tonically active neurons in the striatum of normal and parkinsonian primates, J. Neurophysiol., 76:2083-2088 (1996).
[3] M. Abeles, Corticonics (Cambridge University Press, 1991).
[4] I. Gat, N.
Tishby and M. Abeles, Hidden Markov modeling of simultaneously recorded cells in the associative cortex of behaving monkeys, Network, 8:297-322 (1997).
[5] I. Gat and N. Tishby, Comparative study of different supervised detection methods of simultaneously recorded spike trains, in preparation.
[6] N. Brenner, S.P. Strong, R. Koberle, W. Bialek, and R. de Ruyter van Steveninck, The Economy of Impulses and the Stiffness of Spike Trains, NEC Research Institute Technical Note (1998).
[7] T.M. Cover and J.A. Thomas, Elements of Information Theory (Wiley, NY, 1991).
[8] A.P. Georgopoulos, A.B. Schwartz and R.E. Kettner, Neuronal Population Coding of Movement Direction, Science, 233:1416-1419 (1986).
[9] D.J. Amit, Modeling Brain Function (Cambridge University Press, 1989).
[10] E. Ahissar et al., Dependence of Cortical Plasticity on Correlated Activity of Single Neurons and on Behavioral Context, Science, 257:1412-1415 (1992).
|
1998
|
71
|
1,572
|
Classification on Pairwise Proximity Data

Thore Graepel, Ralf Herbrich, Peter Bollmann-Sdorra, Klaus Obermayer
Technical University of Berlin, Statistics Research Group, Sekr. FR 6-9, and Neural Information Processing Group, Sekr. FR 2-1, Franklinstr. 28/29, 10587 Berlin, Germany

Abstract We investigate the problem of learning a classification task on data represented in terms of their pairwise proximities. This representation does not refer to an explicit feature representation of the data items and is thus more general than the standard approach of using Euclidean feature vectors, from which pairwise proximities can always be calculated. Our first approach is based on a combined linear embedding and classification procedure resulting in an extension of the Optimal Hyperplane algorithm to pseudo-Euclidean data. As an alternative we present another approach based on a linear threshold model in the proximity values themselves, which is optimized using Structural Risk Minimization. We show that prior knowledge about the problem can be incorporated by the choice of distance measures and examine different metrics w.r.t. their generalization. Finally, the algorithms are successfully applied to protein structure data and to data from the cat's cerebral cortex. They show better performance than K-nearest-neighbor classification.

1 Introduction

In most areas of pattern recognition, machine learning, and neural computation it has become common practice to represent data as feature vectors in a Euclidean vector space. This kind of representation is very convenient because the Euclidean vector space offers powerful analytical tools for data analysis not available in other representations. However, such a representation incorporates assumptions about the data that may not hold and of which the practitioner may not even be aware. And, an even more severe restriction, no domain-independent procedures for the construction of features are known [3].
A more general approach to the characterization of a set of data items is to define a proximity or distance measure between data items (not necessarily given as feature vectors) and to provide a learning algorithm with a proximity matrix of a set of training data. Since pairwise proximity measures can be defined on structured objects like graphs, this procedure provides a bridge between the classical and the "structural" approaches to pattern recognition [3]. Additionally, pairwise data occur frequently in empirical sciences like psychology, psychophysics, economics, biochemistry etc., and most of the algorithms developed for this kind of data, predominantly clustering [5, 4] and multidimensional scaling [8, 6], fall into the realm of unsupervised learning. In contrast to nearest-neighbor classification schemes [10], we suggest algorithms which operate on the given proximity data via linear models. After a brief discussion of different kinds of proximity data in terms of possible embeddings, we suggest how the Optimal Hyperplane (OHC) algorithm for classification [2, 9] can be applied to distance data from both Euclidean and pseudo-Euclidean spaces. Subsequently, a more general model is introduced which is formulated as a linear threshold model on the proximities, and is optimized using the principle of Structural Risk Minimization [9]. We demonstrate how the choice of proximity measure influences the generalization behavior of the algorithm and apply both algorithms to real-world data from biochemistry and neuroanatomy.

2 The Nature of Proximity Data

When faced with proximity data in the form of a matrix P = {p_ij} of pairwise proximity values between data items, one idea is to embed the data in a suitable space for visualization and analysis. This is referred to as multidimensional scaling, and Torgerson [8] suggested a procedure for the linear embedding of proximity data.
Interpreting the proximities as Euclidean distances in some unknown Euclidean space, one can calculate an inner product matrix H = X^T X w.r.t. the center of mass of the data from the proximities according to [8]

    (H)_ij = -(1/2) ( p_ij^2 - (1/ℓ) Σ_{m=1}^{ℓ} p_mj^2 - (1/ℓ) Σ_{n=1}^{ℓ} p_in^2 + (1/ℓ^2) Σ_{m,n=1}^{ℓ} p_mn^2 ) .    (1)

Let us perform a spectral decomposition H = U D U^T = X^T X and choose D and U such that their columns are sorted in decreasing order of magnitude of the eigenvalues λ_i of H. The embedding in an n-dimensional space is achieved by calculating the first n rows of X = D^{1/2} U^T. In order to embed a new data item characterized by a vector p consisting of its pairwise proximities p_i w.r.t. the previously known data items, one calculates the corresponding inner product vector h using (1) with (H)_ij, p_ij, and p_mj replaced by h_i, p_i, and p_m respectively, and then obtains the embedding x = D^{-1/2} U^T h. The matrix H has negative eigenvalues if the distance data P were not Euclidean. Then the data can be isometrically embedded only in a pseudo-Euclidean or Minkowski space R^{(n+, n-)}, equipped with a bilinear form Φ which is not positive definite. In this case the distance measure takes the form p(x_i, x_j) = sqrt( Φ(x_i - x_j) ) = sqrt( (x_i - x_j)^T M (x_i - x_j) ), where M is any n × n symmetric matrix assumed to have full rank, but not necessarily positive definite. However, we can always find a basis such that the matrix M assumes the form M = diag(I_{n+}, -I_{n-}) with n = n+ + n-, where the pair (n+, n-) is called the signature of the pseudo-Euclidean space [3]. Also in this case (1) serves to reconstruct the symmetric bilinear form, and the embedding proceeds as above with D replaced by D̃, whose diagonal contains the moduli of the eigenvalues of H. From the eigenvalue spectrum of H the effective dimensionality of the proximity-preserving embedding can be obtained.
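Equation (1) is the classical (Torgerson) double-centering construction, which can be sketched in a few lines; the helper names and the use of numpy.linalg.eigh are our own choices, not the authors' code.

```python
import numpy as np

def torgerson_gram(P):
    """Inner product matrix H = X^T X w.r.t. the centre of mass, Eq. (1)."""
    D2 = P ** 2
    # double centering: subtract row and column means, add back the grand mean
    return -0.5 * (D2 - D2.mean(axis=0) - D2.mean(axis=1, keepdims=True) + D2.mean())

def embed(P, dim):
    """Linear embedding: first `dim` rows of X = D^{1/2} U^T (columns = points)."""
    lam, U = np.linalg.eigh(torgerson_gram(P))
    order = np.argsort(-lam)                       # decreasing eigenvalues
    lam, U = lam[order], U[:, order]
    # clip tiny numerical negatives before the square root (Euclidean case)
    X = np.sqrt(np.clip(lam[:dim], 0.0, None))[:, None] * U[:, :dim].T
    return X
```

For genuinely Euclidean distance data the embedding reproduces the pairwise distances exactly, up to rotation and reflection of the configuration.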
(i) If there is only a small number of large positive eigenvalues, the data items can be reasonably embedded in a Euclidean space. (ii) If there is a small number of positive and negative eigenvalues of large absolute value, then an embedding in a pseudo-Euclidean space is possible. (iii) If the spectrum is continuous and relatively flat, then no linear embedding is possible in less than ℓ - 1 dimensions.

3 Classification in Euclidean and Pseudo-Euclidean Space

Let the training set S be given by an ℓ × ℓ matrix P of pairwise distances of unknown data vectors x in a Euclidean space, and a target class y_i ∈ {-1, +1} for each data item. Assuming that the data are linearly separable, we follow the OHC algorithm [2] and set up a linear model for the classification in data space,

    y(x) = sign(x^T w + b).    (2)

Then we can always find a weight vector w and threshold b such that

    y_i (x_i^T w + b) ≥ 1,  i = 1, ..., ℓ.    (3)

Now the optimal hyperplane with maximal margin is found by minimizing ||w||^2 under the constraints (3). This is equivalent to maximizing the Wolfe dual W(α) w.r.t. α,

    W(α) = α^T 1 - (1/2) α^T Y X^T X Y α,    (4)

with Y = diag(y) and the ℓ-vector 1. The constraints are α_i ≥ 0 ∀i, and 1^T Y α = 0. Since the optimal weight vector w* can be expressed as a linear combination of training examples,

    w* = X Y α*,    (5)

and the optimal threshold b* is obtained by evaluating b* = y_i - x_i^T w* for any training example x_i with α_i* ≠ 0, the decision function (2) can be fully evaluated using inner products between data vectors only. This formulation allows us to learn on the distance data directly. In the Euclidean case we can apply (1) to the distance matrix P of the training data, obtain the inner product matrix H = X^T X, and introduce it - directly, without explicit embedding of the data - into the Wolfe dual (4). The same is true for the test phase, where only the inner products of the test vector with the training examples are needed.
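That the decision function needs only inner products can be sketched as follows (a minimal NumPy illustration; the multipliers and threshold are placeholders for a solved dual, not a trained model):

```python
import numpy as np

def decision_from_inner_products(alpha, y, h, b):
    """y(x) = sign(w*.x + b) with w* = X Y alpha (equation (5)), evaluated using
    only the inner products h_i = x_i . x of x with the training examples."""
    return np.sign(np.sum(alpha * y * h) + b)
```

Given the matrix H from (1) for the training phase and the vector h for a test item, no explicit embedding is ever required.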
In the case of pseudo-Euclidean distance data the inner product matrix H obtained from the distance matrix P via (1) has negative eigenvalues. This means that the corresponding data vectors can only be embedded in a pseudo-Euclidean space R^(n+, n-) as explained in the previous section. Also H cannot serve as the Hessian in the quadratic programming (QP) problem (4). It turns out, however, that the indefiniteness of the bilinear form in pseudo-Euclidean spaces does not forestall linear classification [3]. A decision plane is characterized by the equation x^T M w = 0, as illustrated in Fig. 1. However, Fig. 1 also shows that the same plane can just as well be described by x^T w̃ = 0 - as if the space were Euclidean - where w̃ = M w is simply the mirror image of w w.r.t. the axes of negative signature.

Figure 1: Plot of a decision line (thick) in a 2D pseudo-Euclidean space with signature (1, 1), i.e., M = diag(1, -1). The decision line is described by x^T M w = 0. When interpreted as Euclidean it is at right angles with w̃, which is the mirror image of w w.r.t. the axis x- of negative signature. In physics this plot is referred to as a Minkowski space-time diagram, where x+ corresponds to the space axis and x- to the time axis. The dashed diagonal lines indicate the points x^T M x = 0 of zero length, the light cone.

For the OHC algorithm this means that if we can reconstruct the Euclidean inner product matrix H̃ = X^T X from the distance data, we can proceed with the OHC algorithm as usual. H̃ is calculated by "flipping" the axes of negative signature, i.e., with D̃ = diag(|λ_1|, ..., |λ_ℓ|), we can calculate H̃ according to

    H̃ = U D̃ U^T,    (6)

which serves now as the Hessian matrix for normal OHC classification. Note that H̃ is positive semi-definite, which ensures a unique solution for the QP problem (4).
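A minimal NumPy sketch of the flip-axis construction (6) (function name hypothetical):

```python
import numpy as np

def flip_axes(H):
    """Equation (6): H~ = U |D| U^T. Flips the axes of negative signature so the
    result is positive semi-definite and can serve as the QP Hessian."""
    evals, U = np.linalg.eigh(H)
    return U @ np.diag(np.abs(evals)) @ U.T
```

The eigenvalues of the result are the moduli of those of H, so no directions are discarded, in contrast to a cut-off strategy that simply drops the negative-eigenvalue projections.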
4 Learning a Linear Decision Function in Proximity Space

In order to cope with general proximity data (case (iii) of Section 2) let the training set S be given by an ℓ × ℓ proximity matrix P whose elements p_ij = p(x_i, x_j) are the pairwise proximity values between data items x_i, i = 1, ..., ℓ, and a target class y_i ∈ {-1, +1} for each data item. Let us assume that the proximity values satisfy reflexivity, p_ii = 0 ∀i, and symmetry, p_ij = p_ji ∀i, j. We can make a linear model for the classification of a new data item x represented by a vector of proximities p = (p_1, ..., p_ℓ)^T, where p_i = p(x, x_i) are the proximities of x w.r.t. the items x_i in the training set,

    y(x) = sign(p^T w + b).    (7)

Comparing (7) to (2) we note that this is equivalent to using the vector of proximities p as the feature vector x characterizing data item x. Consequently, the OHC algorithm from the previous section can be used to learn a proximity model when x is replaced by p in (2), X^T X is replaced by P^2 in the Wolfe dual (4), and the columns p_l of P serve as the training data. Note that the formal correspondence does not imply that the columns of the proximity matrix are Euclidean feature vectors as used in the SV setting. We merely consider a linear threshold model on the proximities of a data item to all the training data items. Since the Hessian of the QP problem (4) is the square of the proximity matrix, it is always at least positive semi-definite, which guarantees a unique solution of the QP problem. Once the optimal coefficients α_i* have been found, a test data item can be classified by determining its proximities p_i from the elements x_i of the training set and by using conditions (2) together with (5) for its classification.

5 Metric Proximities

Let us consider two examples in order to see what learning on pairwise metric data amounts to. The first example is the minimalistic 0-1 metric, which for two objects x_i and x_j is defined as follows:
    p_0(x_i, x_j) = 0 if x_i = x_j, and 1 otherwise.    (8)

Figure 2: Decision functions in a simple two-class classification problem for different Minkowski metrics. The algorithm described in Sect. 4 was applied with (a) the city-block metric (r = 1), (b) the Euclidean metric (r = 2), and (c) the maximum metric (r → ∞). The three metrics result in considerably different generalization behavior, and use different Support Vectors (circled).

The corresponding ℓ × ℓ proximity matrix P_0 has full rank, as can be seen from its non-vanishing determinant det(P_0) = (-1)^{ℓ-1}(ℓ - 1). From the definition of the 0-1 metric it is clear that every data item x not contained in the training set is represented by the same proximity vector p = 1, and will be assigned to the same class. For the 0-1 metric the QP problem (4) can be solved analytically by matrix inversion, and using P_0^{-1} = (ℓ - 1)^{-1} 1 1^T - I we obtain the classification in closed form. This result means that each new data item is assigned to the majority class of the training sample, which is - given the available information - the Bayes optimal decision. This example demonstrates how the prior information - in the case of the 0-1 metric the minimal information of identity - is encoded in the chosen distance measure.

As an easy-to-visualize example of metric distance measures on vectors x ∈ R^n, let us consider the Minkowski r-metrics, defined for r ≥ 1 as

    p_r(x_i, x_j) = ( Σ_{k=1}^{n} |x_ik - x_jk|^r )^{1/r}.    (10)

For r = 2 the Minkowski metric is equivalent to the Euclidean distance. The case r = 1 corresponds to the so-called city-block metric, in which the distance is given by the sum of absolute differences for each feature.
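The r-metrics just defined can be sketched as (with np.inf giving the maximum norm):

```python
import numpy as np

def minkowski(x, y, r):
    """Minkowski r-metric of equation (10); r = np.inf yields the maximum norm."""
    d = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    if np.isinf(r):
        return float(d.max())
    return float((d ** r).sum() ** (1.0 / r))
```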
On the other extreme, the maximum norm, r → ∞, takes only the largest absolute difference in feature values as the distance between objects. Note that with increasing r more weight is given to the larger differences in feature values, and that in the literature on multidimensional scaling [1] Minkowski metrics have been used to examine the dominance of features in human perception. Using the Minkowski metrics for classification in a toy example, we observed that different values of r lead to very different generalization behavior on the same set of data points, as can be seen in Fig. 2. Since there is no a priori reason to prefer one metric over the other, using a particular metric is equivalent to incorporating prior knowledge into the solution of the problem.

Table 1: Classification results for Cat Cortex and Protein data. Bold numbers indicate best results.

    Method         | Cat Cortex              | Protein
    OHC-cut-off    | 3.08  4.62  6.15  3.08  | 0.91  4.01  0.45  0.00
    OHC-flip-axis  | 3.08  1.54  4.62  3.08  | 0.91  4.01  0.45  0.00
    OHC-proximity  | 3.08  4.62  3.08  1.54  | 0.45  3.60  0.45  0.00
    1-NN           | 5.82  6.00  6.09  6.74  | 1.65  3.66  0.00  2.01
    2-NN           | 6.09  4.46  7.91  5.09  | 2.01  5.27  0.00  3.44
    3-NN           | 5.29  2.29  4.18  4.71  | 2.14  6.34  0.00  2.68
    4-NN           | 6.45  5.14  3.68  5.17  | 2.46  5.13  0.00  4.87
    5-NN           | 5.55  2.75  2.72  5.29  | 1.65  5.09  0.00  4.11

6 Real-World Proximity Data

In the numerical experiments we focused on two real-world data sets, which are both given in terms of a proximity matrix P and class labels y for each data item. The data set called "cat cortex" consists of a matrix of connection strengths between 65 cortical areas of the cat. The data was collected by Scannell [7] from text and figures of the available anatomical literature, and the connections are assigned proximity values p as follows: self-connection (p = 0), strong and dense connection (p = 1), intermediate connection (p = 2), weak connection (p = 3), and absent or unreported connection (p = 4).
From functional considerations the areas can be assigned to four different regions: auditory (A), visual (V), somatosensory (SS), and frontolimbic (FL). The classification task is to discriminate between these four regions, each time one against the three others. The second data set consists of a proximity matrix from the structural comparison of 224 protein sequences based upon the concept of evolutionary distance. The majority of these proteins can be assigned to one of four classes of globins: hemoglobin-α (H-α), hemoglobin-β (H-β), myoglobin (M), and heterogenous globins (GH). The classification task is to assign proteins to one of these classes, one against the rest.

We compared three different procedures for the described two-class classification problems, performing leave-one-out cross-validation for the "cat cortex" data set and 10-fold cross-validation for the "protein" data set to estimate the generalization error. Table 1 shows the results. OHC-cut-off refers to the simple method of making the inner product matrix H positive semi-definite by neglecting projections to those eigenvectors with negative eigenvalues. OHC-flip-axis flips the axes of negative signature as described in (6) and thus preserves the information contained in those directions for classification. OHC-proximity, finally, refers to the model linear in the proximities as introduced in Section 4. It can be seen that OHC-proximity shows a better generalization than OHC-flip-axis, which in turn performs slightly better than OHC-cut-off. This is especially the case on the cat cortex data set, whose inner product matrix H has negative eigenvalues. For comparison, the lower part of Table 1 shows the corresponding cross-validation results for K-nearest-neighbor, which is a natural choice to use, because it only needs the pairwise proximities to determine the training data to participate in the voting.
The presented algorithms OHC-flip-axis and OHC-proximity perform consistently better than K-nearest-neighbor, even when the value of K is optimally chosen.

7 Conclusion and Future Work

In this contribution we investigated the nature of proximity data and suggested ways for performing classification on them. Due to the generality of the proximity approach we expect that many other problems can be fruitfully cast into this framework. Although we focused on classification problems, regression can be considered on proximity data in an analogous way. Noting that Support Vector kernels and covariance functions for Gaussian processes are similarity measures for vector spaces, we see that this approach has recently gained a lot of popularity. However, one problem with pairwise proximities is that their number scales quadratically with the number of objects under consideration. Hence, for large scale practical applications the problems of missing data and active data selection for proximity data will be of increasing importance.

Acknowledgments

We thank Prof. U. Kockelkorn for fruitful discussions. We also thank S. Gunn for providing his Support Vector implementation. Finally, we are indebted to M. Vingron and T. Hofmann for providing the protein data set. This project was funded by the Technical University of Berlin via the Forschungsinitiativprojekt FIP 13/41.

References

[1] I. Borg and J. Lingoes. Multidimensional Similarity Structure Analysis, volume 13 of Springer Series in Statistics. Springer-Verlag, Berlin, Heidelberg, 1987.
[2] B. Boser, I. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144-152, 1992.
[3] L. Goldfarb. Progress in Pattern Recognition, volume 2, chapter 9: A New Approach To Pattern Recognition, pages 241-402. Elsevier Science Publishers, 1985.
[4] T. Graepel and K.
Obermayer. A stochastic self-organizing map for proximity data. Neural Computation (accepted for publication), 1998.
[5] T. Hofmann and J. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(1):1-14, 1997.
[6] H. Klock and J. M. Buhmann. Multidimensional scaling by deterministic annealing. In M. Pelillo and E. R. Hancock, editors, Energy Minimization Methods in Computer Vision and Pattern Recognition, volume 1223, pages 246-260, Berlin, Heidelberg, 1997. Springer-Verlag.
[7] J. W. Scannell, C. Blakemore, and M. P. Young. Analysis of connectivity in the cat cerebral cortex. The Journal of Neuroscience, 15(2):1463-1483, 1995.
[8] W. S. Torgerson. Theory and Methods of Scaling. Wiley, New York, 1958.
[9] V. Vapnik. The Nature of Statistical Learning. Springer-Verlag, Berlin, Heidelberg, Germany, 1995.
[10] D. Weinshall, D. W. Jacobs, and Y. Gdalyahu. Classification in non-metric space. In Advances in Neural Information Processing Systems, volume 11, 1999. In press.
Using Analytic QP and Sparseness to Speed Training of Support Vector Machines

John C. Platt
Microsoft Research
1 Microsoft Way, Redmond, WA 98052
jplatt@microsoft.com

Abstract

Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) problem. This paper proposes an algorithm for training SVMs: Sequential Minimal Optimization, or SMO. SMO breaks the large QP problem into a series of smallest possible QP problems which are analytically solvable. Thus, SMO does not require a numerical QP library. SMO's computation time is dominated by evaluation of the kernel, hence kernel optimizations substantially quicken SMO. For the MNIST database, SMO is 1.7 times as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be 1500 times faster than the PCG chunking algorithm.

1 INTRODUCTION

In the last few years, there has been a surge of interest in Support Vector Machines (SVMs) [1]. SVMs have empirically been shown to give good generalization performance on a wide variety of problems. However, the use of SVMs is still limited to a small group of researchers. One possible reason is that training algorithms for SVMs are slow, especially for large problems. Another explanation is that SVM training algorithms are complex, subtle, and sometimes difficult to implement. This paper describes a new SVM learning algorithm that is easy to implement, often faster, and has better scaling properties than the standard SVM training algorithm. The new SVM learning algorithm is called Sequential Minimal Optimization (or SMO).

1.1 OVERVIEW OF SUPPORT VECTOR MACHINES

A general non-linear SVM can be expressed as

    u = Σ_i α_i y_i K(x_i, x) - b    (1)

where u is the output of the SVM, K is a kernel function which measures the similarity of a stored training example x_i to the input x, y_i ∈ {-1, +1} is the desired output of the classifier, b is a threshold, and α_i are weights which blend the different kernels [1].
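Equation (1) is a direct sum over stored examples and can be sketched as follows (NumPy; the kernel is passed in as a callable, and the values are placeholders rather than a trained machine):

```python
import numpy as np

def svm_output(alpha, y, X_train, x, b, kernel):
    """u = sum_i alpha_i y_i K(x_i, x) - b   (equation (1))."""
    return sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y, X_train)) - b

def linear_kernel(u, v):
    return float(np.dot(u, v))
```

With the linear kernel this collapses to w · x - b for a single folded weight vector w, which is the form used in equation (2) below.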
For linear SVMs, the kernel function K is linear, hence equation (1) can be expressed as

    u = w · x - b    (2)

where w = Σ_i α_i y_i x_i.

Training of an SVM consists of finding the α_i. The training is expressed as a minimization of a dual quadratic form:

    min_α Ψ(α) = min_α (1/2) Σ_i Σ_j y_i y_j K(x_i, x_j) α_i α_j - Σ_i α_i,    (3)

subject to box constraints,

    0 ≤ α_i ≤ C,  ∀i,    (4)

and one linear equality constraint

    Σ_{i=1}^{N} y_i α_i = 0.    (5)

The α_i are Lagrange multipliers of a primal quadratic programming (QP) problem: there is a one-to-one correspondence between each α_i and each training example x_i. Equations (3)-(5) form a QP problem that the SMO algorithm will solve. The SMO algorithm will terminate when all of the Karush-Kuhn-Tucker (KKT) optimality conditions of the QP problem are fulfilled. These KKT conditions are particularly simple:

    α_i = 0  ⇒  y_i u_i ≥ 1,
    0 < α_i < C  ⇒  y_i u_i = 1,    (6)
    α_i = C  ⇒  y_i u_i ≤ 1,

where u_i is the output of the SVM for the ith training example.

1.2 PREVIOUS METHODS FOR TRAINING SUPPORT VECTOR MACHINES

Due to its immense size, the QP problem that arises from SVMs cannot be easily solved via standard QP techniques. The quadratic form in (3) involves a Hessian matrix of dimension equal to the number of training examples. This matrix cannot be fit into 128 Megabytes if there are more than 4000 training examples.

Vapnik [9] describes a method to solve the SVM QP, which has since been known as "chunking." Chunking relies on the fact that removing training examples with α_i = 0 does not change the solution. Chunking thus breaks down the large QP problem into a series of smaller QP sub-problems, whose object is to identify the training examples with non-zero α_i. Every QP sub-problem updates the subset of the α_i that are associated with the sub-problem, while leaving the rest of the α_i unchanged. The QP sub-problem consists of every non-zero α_i from the previous sub-problem combined with the M worst examples that violate the KKT conditions (6), for some M [1].
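The KKT test (6), used both by chunking to pick its M worst violators and later by SMO to pick eligible examples, can be sketched as (the tolerance handling is illustrative):

```python
def kkt_violated(alpha, y, u, C, tol=1e-3):
    """Check violation of the KKT conditions (6) for one example:
    alpha = 0 => y*u >= 1;  0 < alpha < C => y*u == 1;  alpha = C => y*u <= 1."""
    r = y * u
    if alpha < tol:                 # alpha effectively 0
        return r < 1.0 - tol
    if alpha > C - tol:             # alpha effectively C
        return r > 1.0 + tol
    return abs(r - 1.0) > tol       # non-bound example
```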
At the last step, the entire set of non-zero α_i has been identified, hence the last step solves the entire QP problem. Chunking reduces the dimension of the matrix from the number of training examples to approximately the number of non-zero α_i. If standard QP techniques are used, chunking cannot handle large-scale training problems, because even this reduced matrix cannot fit into memory. Kaufman [3] has described a QP algorithm that does not require the storage of the entire Hessian. The decomposition technique [6] is similar to chunking: decomposition breaks the large QP problem into smaller QP sub-problems. However, Osuna et al. [6] suggest keeping a fixed size matrix for every sub-problem, deleting some examples and adding others which violate the KKT conditions. Using a fixed-size matrix allows SVMs to be trained on very large training sets. Joachims [2] suggests adding and subtracting examples according to heuristics for rapid convergence. However, until SMO, decomposition required the use of a numerical QP library, which can be costly or slow.

Figure 1: The Lagrange multipliers α_1 and α_2 must fulfill all of the constraints of the full problem. The inequality constraints cause the Lagrange multipliers to lie in the box. The linear equality constraint causes them to lie on a diagonal line: y_1 ≠ y_2 ⇒ α_1 - α_2 = k; y_1 = y_2 ⇒ α_1 + α_2 = k.

2 SEQUENTIAL MINIMAL OPTIMIZATION

Sequential Minimal Optimization quickly solves the SVM QP problem without using numerical QP optimization steps at all. SMO decomposes the overall QP problem into fixed-size QP sub-problems, similar to the decomposition method [7]. Unlike previous methods, however, SMO chooses to solve the smallest possible optimization problem at each step. For the standard SVM, the smallest possible optimization problem involves two elements of α, because the α
must obey one linear equality constraint. At each step, SMO chooses two α_i to jointly optimize, finds the optimal values for these α_i, and updates the SVM to reflect these new values. The advantage of SMO lies in the fact that solving for two α_i can be done analytically. Thus, numerical QP optimization is avoided entirely. The inner loop of the algorithm can be expressed in a short amount of C code, rather than invoking an entire QP library routine. By avoiding numerical QP, the computation time is shifted from QP to kernel evaluation. Kernel evaluation time can be dramatically reduced in certain common situations, e.g., when a linear SVM is used, or when the input data is sparse (mostly zero). The result of kernel evaluations can also be cached in memory [1].

There are two components to SMO: an analytic method for solving for the two α_i, and a heuristic for choosing which multipliers to optimize. Pseudo-code for the SMO algorithm can be found in [8, 7], along with the relationship to other optimization and machine learning algorithms.

2.1 SOLVING FOR TWO LAGRANGE MULTIPLIERS

To solve for the two Lagrange multipliers α_1 and α_2, SMO first computes the constraints on these multipliers and then solves for the constrained minimum. For convenience, all quantities that refer to the first multiplier will have a subscript 1, while all quantities that refer to the second multiplier will have a subscript 2. Because there are only two multipliers, the constraints can easily be displayed in two dimensions (see figure 1). The constrained minimum of the objective function must lie on a diagonal line segment. The ends of the diagonal line segment can be expressed quite simply in terms of α_2. Let s = y_1 y_2. The following bounds apply to α_2:

    L = max(0, α_2 + s α_1 - (1/2)(s + 1) C),  H = min(C, α_2 + s α_1 - (1/2)(s - 1) C).
(7)

Under normal circumstances, the objective function is positive definite, and there is a minimum along the direction of the linear equality constraint. In this case, SMO computes the minimum along the direction of the linear equality constraint:

    α_2^new = α_2 + y_2 (E_1 - E_2) / (K(x_1, x_1) + K(x_2, x_2) - 2 K(x_1, x_2)),    (8)

where E_i = u_i - y_i is the error on the ith training example. As a next step, the constrained minimum is found by clipping α_2^new into the interval [L, H]. The value of α_1 is then computed from the new, clipped, α_2:

    α_1^new = α_1 + s (α_2 - α_2^{new,clipped}).    (9)

For both linear and non-linear SVMs, the threshold b is re-computed after each step, so that the KKT conditions are fulfilled for both optimized examples.

2.2 HEURISTICS FOR CHOOSING WHICH MULTIPLIERS TO OPTIMIZE

In order to speed convergence, SMO uses heuristics to choose which two Lagrange multipliers to jointly optimize. There are two separate choice heuristics: one for α_1 and one for α_2. The choice of α_1 provides the outer loop of the SMO algorithm. If an example is found to violate the KKT conditions by the outer loop, it is eligible for optimization. The outer loop alternates single passes through the entire training set with multiple passes through the non-bound α_i (α_i ∉ {0, C}). The multiple passes terminate when all of the non-bound examples obey the KKT conditions within ε. The entire SMO algorithm terminates when the entire training set obeys the KKT conditions within ε. Typically, ε = 10^-3. The first choice heuristic concentrates the CPU time on the examples that are most likely to violate the KKT conditions, i.e., the non-bound subset. As the SMO algorithm progresses, α_i that are at the bounds are likely to stay at the bounds, while α_i that are not at the bounds will move as other examples are optimized. As a further optimization, SMO uses the shrinking heuristic proposed in [2].
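The analytic update of Section 2.1, equations (7)-(9), fits in a few lines (a sketch; it assumes the usual positive-definite case, i.e. a non-zero denominator in (8)):

```python
def smo_step(a1, a2, y1, y2, E1, E2, C, K11, K22, K12):
    """One analytic SMO step for two Lagrange multipliers (equations (7)-(9))."""
    s = y1 * y2
    L = max(0.0, a2 + s * a1 - 0.5 * (s + 1.0) * C)      # equation (7)
    H = min(C,   a2 + s * a1 - 0.5 * (s - 1.0) * C)
    eta = K11 + K22 - 2.0 * K12
    a2_new = a2 + y2 * (E1 - E2) / eta                   # equation (8)
    a2_new = min(max(a2_new, L), H)                      # clip into [L, H]
    a1_new = a1 + s * (a2 - a2_new)                      # equation (9)
    return a1_new, a2_new
```

The last line keeps y_1 α_1 + y_2 α_2 constant, so the equality constraint (5) is preserved by every step.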
After the pass through the entire training set, shrinking finds examples which fulfill the KKT conditions more than the worst example failed the KKT conditions. Further passes through the training set ignore these fulfilled conditions until a final pass at the end of training, which ensures that every example fulfills its KKT condition.

Once an α_1 is chosen, SMO chooses an α_2 to maximize the size of the step taken during joint optimization. SMO approximates the step size by the absolute value of the numerator in equation (8): |E_1 - E_2|. SMO keeps a cached error value E for every non-bound example in the training set and then chooses an error to approximately maximize the step size. If E_1 is positive, SMO chooses an example with minimum error E_2. If E_1 is negative, SMO chooses an example with maximum error E_2.

2.3 KERNEL OPTIMIZATIONS

Because the computation time for SMO is dominated by kernel evaluations, SMO can be accelerated by optimizing these kernel evaluations. Utilizing sparse inputs is a generally applicable kernel optimization.

Table 1: Parameters for various experiments.

    Experiment   | Kernel     | Sparse Inputs Used | Kernel Caching Used | Training Set Size | Support Vectors | C    | % Sparse Inputs
    AdultLin     | Linear     | Y   | mix | 11221 | 4158 | 0.05 | 89
    AdultLinD    | Linear     | N   | mix | 11221 | 4158 | 0.05 | 0
    WebLin       | Linear     | Y   | mix | 49749 | 1723 | 1    | 96
    WebLinD      | Linear     | N   | mix | 49749 | 1723 | 1    | 0
    AdultGaussK  | Gaussian   | Y   | Y   | 11221 | 4206 | 1    | 89
    AdultGauss   | Gaussian   | Y   | N   | 11221 | 4206 | 1    | 89
    AdultGaussKD | Gaussian   | N   | Y   | 11221 | 4206 | 1    | 0
    AdultGaussD  | Gaussian   | N   | N   | 11221 | 4206 | 1    | 0
    WebGaussK    | Gaussian   | Y   | Y   | 49749 | 4484 | 5    | 96
    WebGauss     | Gaussian   | Y   | N   | 49749 | 4484 | 5    | 96
    WebGaussKD   | Gaussian   | N   | Y   | 49749 | 4484 | 5    | 0
    WebGaussD    | Gaussian   | N   | N   | 49749 | 4484 | 5    | 0
    MNIST        | Polynomial | Y   | N   | 60000 | 3450 | 100  | 81

For commonly-used kernels, equations (1) and (2) can be dramatically sped up by exploiting the sparseness of the input.
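One instance of this optimization, sketched below with a hypothetical dict-based sparse format (and a sigma^2 = 10 default mirroring the Gaussian variance used in the experiments), builds a Gaussian kernel value entirely from sparse dot products:

```python
import numpy as np

def sparse_dot(a, b):
    """Dot product of sparse vectors stored as {index: value} dicts."""
    if len(a) > len(b):
        a, b = b, a                     # iterate over the shorter vector
    return sum(v * b.get(i, 0.0) for i, v in a.items())

def gaussian_kernel_sparse(a, b, sigma2=10.0):
    """K(a, b) = exp(-||a - b||^2 / (2 sigma2)), with the squared distance
    expanded into sparse dot products: a.a + b.b - 2 a.b."""
    sq = sparse_dot(a, a) + sparse_dot(b, b) - 2.0 * sparse_dot(a, b)
    return float(np.exp(-sq / (2.0 * sigma2)))
```

The cost per kernel evaluation then scales with the number of non-zero entries rather than the full input dimension.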
For example, a Gaussian kernel can be expressed as an exponential of a linear combination of sparse dot products. Sparsely storing the training set also achieves substantial reduction in memory consumption. To compute a linear SVM, only a single weight vector needs to be stored, rather than all of the training examples that correspond to non-zero α_i. If the QP sub-problem succeeds, the stored weight vector is updated to reflect the new α_i values.

3 BENCHMARKING SMO

The SMO algorithm is tested against the standard chunking algorithm and against the decomposition method on a series of benchmarks. Both SMO and chunking are written in C++, using Microsoft's Visual C++ 6.0 compiler. Joachims' package SVMlight (version 2.01) with a default working set size of 10 is used to test the decomposition method. The CPU time of all algorithms is measured on an unloaded 266 MHz Pentium II processor running Windows NT 4. The chunking algorithm uses the projected conjugate gradient algorithm as its QP solver, as suggested by Burges [1]. All algorithms use sparse dot product code and kernel caching, as appropriate [1, 2]. Both SMO and chunking share folded linear SVM code. The SMO algorithm is tested on three real-world data sets. The results of the experiments are shown in Tables 1 and 2. Further tests on artificial data sets can be found in [8, 7].

The first test set is the UCI Adult data set [5]. The SVM is given 14 attributes of a census form of a household and asked to predict whether that household has an income greater than $50,000. Out of the 14 attributes, eight are categorical and six are continuous. The six continuous attributes are discretized into quintiles, yielding a total of 123 binary attributes. The second test set is text categorization: classifying whether a web page belongs to a category or not. Each web page is represented as 300 sparse binary keyword attributes. The third test set is the MNIST database of handwritten digits, from AT&T Research Labs [4].
One classifier of MNIST, class 8, is trained. The inputs are 784-dimensional non-binary vectors and are stored as sparse vectors. A fifth-order polynomial kernel is used to match the AT&T accuracy results. The Adult set and the Web set are trained both with linear SVMs and Gaussian SVMs with variance of 10. For the Adult and Web data sets, the C parameter is chosen to optimize accuracy on a validation set. Experiments on the Adult and Web sets are performed with and without sparse inputs and with and without kernel caching, in order to determine the effect these kernel optimizations have on computation time. When a kernel cache is used, the cache size for SMO and SVMlight is 40 megabytes. The chunking algorithm always uses kernel caching: matrix values from the previous QP step are re-used. For the linear experiments, SMO does not use kernel caching, while SVMlight does.

Table 2: Timings of algorithms on various data sets.

    Experiment   | SMO Time (sec) | SVMlight Time (sec) | Chunking Time (sec) | SMO Scaling Exp. | SVMlight Scaling Exp. | Chunking Scaling Exp.
    AdultLin     | 13.7    | 217.9   | 20711.3 | 1.8 | 2.1 | 3.1
    AdultLinD    | 21.9    | n/a     | 21141.1 | 1.0 | n/a | 3.0
    WebLin       | 339.9   | 3980.8  | 17164.7 | 1.6 | 2.2 | 2.5
    WebLinD      | 4589.1  | n/a     | 17332.8 | 1.5 | n/a | 2.5
    AdultGaussK  | 442.4   | 284.7   | 11910.6 | 2.0 | 2.0 | 2.9
    AdultGauss   | 523.3   | 737.5   | n/a     | 2.0 | 2.0 | n/a
    AdultGaussKD | 1433.0  | n/a     | 14740.4 | 2.5 | n/a | 2.8
    AdultGaussD  | 1810.2  | n/a     | n/a     | 2.0 | n/a | n/a
    WebGaussK    | 2477.9  | 2949.5  | 23877.6 | 1.6 | 2.0 | 2.0
    WebGauss     | 2538.0  | 6923.5  | n/a     | 1.6 | 1.8 | n/a
    WebGaussKD   | 23365.3 | n/a     | 50371.9 | 2.6 | n/a | 2.0
    WebGaussD    | 24758.0 | n/a     | n/a     | 1.6 | n/a | n/a
    MNIST        | 19387.9 | 38452.3 | 33109.0 | n/a | n/a | n/a

In Table 2, the scaling of each algorithm is measured as a function of the training set size, which is varied by taking random nested subsets of the full training set. A line is fitted to the log of the training time versus the log of the set size. The slope of the line is an empirical scaling exponent.
4 CONCLUSIONS

As can be seen in Table 2, standard PCG chunking is slower than SMO for the data sets shown, even for dense inputs. Decomposition and SMO have the advantage, over standard PCG chunking, of ignoring the examples whose Lagrange multipliers are at C. This advantage is reflected in the scaling exponents for PCG chunking versus SMO and SVMlight. PCG chunking can be altered to have a similar property [3]. Notice that PCG chunking uses the same sparse dot product code and linear SVM folding code as SMO. However, these optimizations do not speed up PCG chunking due to the overhead of numerically solving large QP sub-problems.

SMO and SVMlight are similar: they decompose the large QP problem into very small QP sub-problems. SMO decomposes into even smaller sub-problems: it uses analytical solutions of two-dimensional sub-problems, while SVMlight uses numerical QP to solve 10-dimensional sub-problems. The difference in timings between the two methods is partly due to the numerical QP overhead, but mostly due to the difference in heuristics and kernel optimizations. For example, SMO is faster than SVMlight by an order of magnitude on linear problems, due to linear SVM folding. However, SVMlight can also potentially use linear SVM folding. In these experiments, SMO uses a very simple least-recently-used kernel cache of Hessian rows, while SVMlight uses a more complex kernel cache and modifies its heuristics to utilize the kernel effectively [2]. Therefore, SMO does not benefit from the kernel cache at the largest problem sizes, while SVMlight speeds up by a factor of 2.5. Utilizing sparseness to compute kernels yields a large advantage for SMO due to the lack of heavy numerical QP overhead. For the sparse data sets shown, SMO can speed up by a factor of between 3 and 13, while PCG chunking only obtained a maximum speed up of 2.1 times.
The MNIST experiments were performed without a kernel cache, because the MNIST data set takes up most of the memory of the benchmark machine. Due to sparse inputs, SMO is a factor of 1.7 faster than PCG chunking, even though none of the Lagrange multipliers are at C. On a machine with more memory, SVMlight would be as fast or faster than SMO for MNIST, due to kernel caching. In summary, SMO is a simple method for training support vector machines which does not require a numerical QP library. Because its CPU time is dominated by kernel evaluation, SMO can be dramatically quickened by the use of kernel optimizations, such as linear SVM folding and sparse dot products. SMO can be anywhere from 1.7 to 1500 times faster than the standard PCG chunking algorithm, depending on the data set.

Acknowledgements

Thanks to Chris Burges for running data sets through his projected conjugate gradient code and for various helpful suggestions.

References

[1] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2), 1998.
[2] T. Joachims. Making large-scale SVM learning practical. In B. Scholkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169-184. MIT Press, 1998.
[3] L. Kaufman. Solving the quadratic programming problem arising in support vector classification. In B. Scholkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 147-168. MIT Press, 1998.
[4] Y. LeCun. MNIST handwritten digit database. Available on the web at http://www.research.att.com/~yann/ocr/mnist/.
[5] C. J. Merz and P. M. Murphy. UCI repository of machine learning databases, 1998. [http://www.ics.uci.edu/~mlearn/MLRepository.html]. Irvine, CA: University of California, Department of Information and Computer Science.
[6] E. Osuna, R. Freund, and F. Girosi. Improved training algorithm for support vector machines. In Proc.
IEEE Neural Networks in Signal Processing '97, 1997.
[7] J. C. Platt. Fast training of SVMs using sequential minimal optimization. In B. Scholkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185-208. MIT Press, 1998.
[8] J. C. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Technical Report MSR-TR-98-14, Microsoft Research, 1998. Available at http://www.research.microsoft.com/~jplatt/smo.html.
[9] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.
Dynamically Adapting Kernels in Support Vector Machines

Nello Cristianini
Dept. of Engineering Mathematics
University of Bristol, UK
nello.cristianini@bristol.ac.uk

Colin Campbell
Dept. of Engineering Mathematics
University of Bristol, UK
c.campbell@bristol.ac.uk

John Shawe-Taylor
Dept. of Computer Science
Royal Holloway College
john@dcs.rhbnc.ac.uk

Abstract

The kernel-parameter is one of the few tunable parameters in Support Vector machines, controlling the complexity of the resulting hypothesis. Its choice amounts to model selection and its value is usually found by means of a validation set. We present an algorithm which can automatically perform model selection with little additional computational cost and with no need of a validation set. In this procedure model selection and learning are not separate, but kernels are dynamically adjusted during the learning process to find the kernel parameter which provides the best possible upper bound on the generalisation error. Theoretical results motivating the approach and experimental results confirming its validity are presented.

1 Introduction

Support Vector Machines (SVMs) are learning systems designed to automatically trade-off accuracy and complexity by minimizing an upper bound on the generalisation error provided by VC theory. In practice, however, SVMs still have a few tunable parameters which need to be determined in order to achieve the right balance, and the values of these are usually found by means of a validation set. One of the most important of these is the kernel-parameter, which implicitly defines the structure of the high dimensional feature space where the maximal margin hyperplane is found. Too rich a feature space would cause the system to overfit the data, and conversely the system can be unable to separate the data if the kernels are too poor.
Capacity control can therefore be performed by tuning the kernel parameter subject to the margin being maximized. For noisy datasets, yet another quantity needs to be set, namely the soft-margin parameter C. SVMs therefore display a remarkable dimensionality reduction for model selection. Systems such as neural networks need many different architectures to be tested and decision trees are faced with a similar problem during the pruning phase. On the other hand SVMs can shift from one model complexity to another by simply tuning a continuous parameter. Generally, model selection for SVMs is still performed in the standard way: by learning different SVMs and testing them on a validation set in order to determine the optimal value of the kernel-parameter. This is expensive in terms of computing time and training data. In this paper we propose a different scheme which dynamically adjusts the kernel-parameter to explore the space of possible models at little additional computational cost compared to fixed-kernel learning. Furthermore this approach only makes use of training-set information, so it is more efficient in a sample complexity sense. Before proposing the model selection procedure we first prove a theoretical result, namely that the margin and the structural risk minimization (SRM) bound on the generalization error depend smoothly on the kernel parameter. This can be exploited by an algorithm which keeps the system close to maximal margin while the kernel parameter is changed smoothly. During this phase, the theoretical bound given by SRM theory can be computed. The best kernel-parameter is the one which gives the lowest possible bound. In section 4 we present experimental results showing that model selection can be efficiently performed using the proposed method (though we only consider Gaussian kernels in the simulations outlined).
2 Support Vector Learning

The decision function implemented by SV machines can be written as:

$$f(x) = \mathrm{sign}\Big(\sum_{i \in SV} y_i \alpha_i K(x, x_i) - b\Big)$$

where the $\alpha_i$ are obtained by maximising the following Lagrangian (where $m$ is the number of patterns):

$$L = \sum_{i=1}^{m} \alpha_i - \frac{1}{2}\sum_{i,j=1}^{m} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$$

with respect to the $\alpha_i$, subject to the constraints

$$\sum_{i=1}^{m} \alpha_i y_i = 0$$

and where the functions $K(x, x')$ are called kernels. The kernels provide an expression for dot-products in a high-dimensional feature space [1]:

$$K(x, x') = \langle \Phi(x), \Phi(x') \rangle$$

and also implicitly define the nonlinear mapping $\Phi(x)$ of the training data into feature space, where they may be separated using the maximal margin hyperplane. A number of choices of kernel-function can be made, e.g. Gaussian kernels:

$$K(x, x') = e^{-\|x - x'\|^2 / 2\sigma^2}$$

The following upper bound can be proven from VC theory for the generalisation error using hyperplanes in feature space [7, 9]:

$$\epsilon \leq \frac{R^2}{m \gamma^2}$$

where $R$ is the radius of the smallest ball containing the training set, $m$ the number of training points and $\gamma$ the margin (cf. [2] for a complete survey of the generalization properties of SV machines). The Lagrange multipliers $\alpha_i$ are usually found by means of a Quadratic Programming optimization routine, while the kernel-parameters are found using a validation set. As illustrated in Figure 1 there is a minimum of the generalisation error for that value of the kernel-parameter which has the best trade-off between overfitting and ability to find an efficient solution.

Figure 1: Generalization error (y-axis) as a function of $\sigma$ (x-axis) for the mirror symmetry problem (for Gaussian kernels with zero training error and maximal margin, m = 200, n = 30 and averaged over $10^5$ examples).
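The decision function and Gaussian kernel above are straightforward to evaluate; in this sketch the two support vectors, multipliers, and offset b are hypothetical values chosen only to exercise the formulas (NumPy assumed):

```python
import numpy as np

def gaussian_kernel(x, xp, sigma):
    # K(x, x') = exp(-||x - x'||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - xp) ** 2) / (2.0 * sigma ** 2))

def sv_decision(x, support_x, support_y, alpha, b, sigma):
    # f(x) = sign( sum_{i in SV} y_i alpha_i K(x, x_i) - b )
    s = sum(a * y * gaussian_kernel(x, xi, sigma)
            for a, y, xi in zip(alpha, support_y, support_x))
    return np.sign(s - b)

# Two hypothetical support vectors of opposite class.
sx = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
sy = [1.0, -1.0]
alphas = [1.0, 1.0]
label = sv_decision(np.array([0.1, 0.0]), sx, sy, alphas, b=0.0, sigma=1.0)
```

A query point near the positive support vector is assigned the positive class; moving it near the negative support vector flips the sign.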
3 Automatic Model Order Selection

We now prove a theorem which shows that the margin of the optimal hyperplane is a smooth function of the kernel parameter, as is the upper bound on the generalisation error. First we state the Implicit Function Theorem.

Implicit Function Theorem [10]: Let $F(x, y)$ be a continuously differentiable function, $F : U \subseteq \mathbb{R} \times V \subseteq \mathbb{R}^p \to \mathbb{R}^p$, and let $(a, b) \in U \times V$ be a solution to the equation $F(x, y) = 0$. Let the matrix of partial derivatives $m_{i,j} = (\partial F_i / \partial y_j)$ w.r.t. $y$ be full rank at $(a, b)$. Then, near $(a, b)$, there exists one and only one function $y = g(x)$ such that $F(x, g(x)) = 0$, and such function is continuous.

Theorem: The margin $\gamma$ of SV machines depends smoothly on the kernel parameter $\sigma$.

Proof: Consider the function $g : \Sigma \subseteq \mathbb{R} \to A \subseteq \mathbb{R}^p$, $g : \sigma \mapsto (\alpha^0, \lambda)$, which given the data maps the choice of $\sigma$ to the optimal parameters $\alpha^0$ and Lagrange parameter $\lambda$ of the SV machine with kernel matrix $G_{ij} = y_i y_j K(\sigma; x_i, x_j)$. Let

$$W_\sigma(\alpha) = \sum_{i=1}^{p} \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j K(\sigma; x_i, x_j) + \lambda \Big(\sum_i y_i \alpha_i\Big)$$

be the functional that the SV machine maximizes. Fix a value of $\sigma$ and let $\alpha^0(\sigma)$ be the corresponding solution of $W_\sigma(\alpha)$. Let $I$ be the set of indices for which $\alpha_j^0(\sigma) \neq 0$. We may assume that the submatrix of $G$ indexed by $I$ is non-singular, since otherwise the maximal margin hyperplane could be expressed in terms of a subset of indices. Now choose a maximal set of indices $J$ containing $I$ such that the corresponding submatrix of $G$ is non-singular and all of the points indexed by $J$ have margin 1. Now consider the function

$$F(\sigma, \alpha, \lambda)_i = \Big(\frac{\partial W_\sigma}{\partial \alpha}\Big)_{j_i}, \; i \geq 1, \qquad F(\sigma, \alpha, \lambda)_0 = \sum_j y_j \alpha_j$$

in the neighbourhood of $\sigma$, where $j_i$ is an enumeration of the elements of $J$,

$$\frac{\partial W_\sigma}{\partial \alpha_j} = 1 - y_j \sum_i \alpha_i y_i K(\sigma; x_i, x_j) + \lambda y_j,$$

and $F$ satisfies the equation $F(\sigma, \alpha^0(\sigma), \lambda(\sigma)) = 0$ at the extremal points of $W_\sigma(\alpha)$.
Then the SV function is the implicit function, $(\alpha^0, \lambda) = g(\sigma)$, and is continuous (and unique) iff $F$ is continuously differentiable and the matrix of partial derivatives w.r.t. $\alpha, \lambda$ is full rank. But the partial derivatives matrix $H$ is given by

$$H_{ij} = \frac{\partial F_i}{\partial \alpha_{j_j}} = y_{j_i} y_{j_j} K(\sigma; x_{j_i}, x_{j_j}) = H_{ji}, \quad i, j \geq 1,$$

for $j_i, j_j \in J$, which was non-degenerate by definition of $J$, while

$$H_{00} = \frac{\partial F_0}{\partial \lambda} = 0 \quad \text{and} \quad H_{0j} = \frac{\partial F_0}{\partial \alpha_{j_j}} = y_{j_j} = \frac{\partial F_j}{\partial \lambda} = H_{j0}, \quad j \geq 1.$$

Consider any non-zero $\alpha$ satisfying $\sum_j \alpha_j y_j = 0$, and any $\lambda$. We have

$$(\alpha, \lambda)^T H (\alpha, \lambda) = \alpha^T G \alpha + 2\lambda \alpha^T y = \alpha^T G \alpha > 0.$$

Hence, the matrix $H$ is non-singular for $\alpha$ satisfying the given linear constraint. Hence, by the implicit function theorem $g$ is a continuous function of $\sigma$. The following is proven in [2]:

$$\gamma^2 = \Big(\sum_{i=1}^{m} \alpha_i^0\Big)^{-1}$$

which shows that $\gamma$ is a continuous function of $\sigma$. As the radius of the ball containing the points is also a continuous function of $\sigma$, and the generalization error bound has the form $\epsilon \leq C R(\sigma)^2 \|\alpha^0(\sigma)\|_1$ for some constant $C$, we have the following corollary.

Corollary: The bound on the generalization error is smooth in $\sigma$.

This means that, when the margin is optimal, small variations in the kernel parameter will produce small variations in the margin (and in the bound on the generalisation error). Thus $\gamma_\sigma \approx \gamma_{\sigma + \delta\sigma}$ and, after updating $\sigma$, the system will still be in a sub-optimal position. This suggests the following strategy for Gaussian kernels, for instance:

Kernel Selection Procedure
1. Initialize $\sigma$ to a very small value.
2. Maximize the margin, then
   - Compute the SRM bound (or observe the validation error)
   - Increase the kernel parameter: $\sigma \leftarrow \sigma + \delta\sigma$
3. Stop when a predetermined value of $\sigma$ is reached, else repeat step 2.
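The selection procedure above can be sketched end-to-end. The sketch below pairs it with a Kernel-Adatron-style margin maximiser (the update rule the paper adopts in the next section) and scores each σ by the sum of the multipliers, proportional to the bound $C R^2 \|\alpha^0\|_1$ with $R$ treated as constant; the toy data, learning rate, and iteration budget below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gaussian_gram(X, sigma):
    # Gram matrix of the Gaussian kernel K(x, x') = exp(-||x - x'||^2 / 2 sigma^2).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def maximise_margin(K, y, alpha0, eta=0.1, n_iter=2000):
    # Kernel-Adatron-style updates: alpha_i <- max(alpha_i + eta(1 - y_i z_i), 0),
    # warm-started at alpha0 so each new sigma needs only a few extra iterations.
    alpha = alpha0.copy()
    for _ in range(n_iter):
        z = K @ (alpha * y)
        alpha = np.maximum(alpha + eta * (1.0 - y * z), 0.0)
        if abs(0.5 * (z[y > 0].min() - z[y < 0].max()) - 1.0) < 1e-3:
            break  # margin has reached 1
    return alpha

def select_sigma(X, y, sigmas):
    # Sweep sigma upward, re-maximising the margin at each step, and keep the
    # sigma whose bound (here sum(alpha), R treated as constant) is lowest.
    alpha = np.ones(len(y))
    best_bound, best_sigma = np.inf, None
    for sigma in sigmas:
        alpha = maximise_margin(gaussian_gram(X, sigma), y, alpha)
        bound = alpha.sum()
        if bound < best_bound:
            best_bound, best_sigma = bound, sigma
    return best_sigma
```

The warm start is the point of the scheme: since the margin varies smoothly with σ, the multipliers from the previous σ are already close to optimal for the next one.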
This procedure takes advantage of the fact that for very small $\sigma$ convergence is generally very rapid (overfitting the data, of course), and that once the system is near the equilibrium, few iterations will always be sufficient to move it back to the maximal margin situation. In other words, the system is brought to a maximal margin state in the beginning, when this is computationally very cheap, and then it is actively kept in that situation by continuously adjusting the $\alpha$ while the kernel-parameter is gradually increased. In the next section we will experimentally investigate this procedure for real-life datasets. In the numerical simulations we have used the Kernel-Adatron (KA) algorithm recently developed by two of the authors [4] which can be used to train SV machines. We have chosen this algorithm because it can be regarded as a gradient ascent procedure for maximising the Kuhn-Tucker Lagrangian $L$. Thus the $\alpha_i$ for a sub-optimal state are close to those for the optimum, and so little computational effort will be needed to bring the system back to a maximal margin position:

The Kernel-Adatron Algorithm
1. $\alpha_i = 1$.
2. FOR $i = 1$ TO $m$:
   - $\gamma_i = y_i z_i$
   - $\delta\alpha_i = \eta(1 - \gamma_i)$
   - IF $(\alpha_i + \delta\alpha_i) \leq 0$ THEN $\alpha_i = 0$ ELSE $\alpha_i \leftarrow \alpha_i + \delta\alpha_i$
   - margin $= \frac{1}{2}(\min(z_i^+) - \max(z_i^-))$ ($z_i^+$ ($z_i^-$) = positively (negatively) labelled patterns)
3. IF (margin $= 1$) THEN stop, ELSE go to step 2.

4 Experimental Results

In this section we implement the above algorithm for real-life datasets and plot the upper bound given by VC theory and the generalization error as functions of $\sigma$. In order to compute the bound, $\epsilon \leq R^2 / m\gamma^2$, we need to estimate the radius of the ball in feature space. In general this can be done explicitly by maximising the following Lagrangian w.r.t. the $\lambda_i$ using convex quadratic programming routines:

$$L = \sum_i \lambda_i K(x_i, x_i) - \sum_{i,j} \lambda_i \lambda_j K(x_i, x_j)$$

subject to the constraints $\sum_i \lambda_i = 1$ and $\lambda_i \geq 0$.
The radius is then found from [3] as the optimal value of this Lagrangian:

$$R^2 = \sum_i \lambda_i K(x_i, x_i) - \sum_{i,j} \lambda_i \lambda_j K(x_i, x_j)$$

However, we can also get an upper bound for this quantity by noting that Gaussian kernels always map training points to the surface of a sphere of radius 1 centered on the origin of the feature space. This can be easily seen by noting that the distance of a point from the origin is its norm:

$$\|\Phi(x)\| = \sqrt{\langle \Phi(x), \Phi(x) \rangle} = \sqrt{K(x, x)} = \sqrt{e^{-\|x - x\|^2 / 2\sigma^2}} = 1$$

In Figure 2 we give both these bounds (the upper bound is $\sum_i \alpha_i / m$) and the generalisation error (on a test set) for two standard datasets: the aspect-angle dependent sonar classification dataset of Gorman and Sejnowski [5] and the Wisconsin breast cancer dataset [8]. As we see from these plots there is little need for the additional computational cost of determining $R$ from the above quadratic programming problem, at least for Gaussian kernels. In Fig. 3 we plot the bound $\sum_i \alpha_i / m$ and generalisation error for two digits from a United States Postal Service dataset of handwritten digits [6]. In these, and other instances we have investigated, the minimum of the bound approximately coincides with the minimum of the generalisation error. This gives a good criterion for the most suitable choice for $\sigma$. Furthermore, this estimate for the best $\sigma$ is derived solely from training data without the need for an additional validation set.

Figure 2: Generalisation error (solid curves) for the sonar classification (left Fig.) and Wisconsin breast cancer datasets (right Fig.). The upper curves (dotted) show the upper bounds from VC theory (for the top curves R=1).

Starting with a small $\sigma$-value we have observed that the margin can be maximised rapidly. Furthermore, the margin remains close to 1 if $\sigma$ is incremented by a small amount.
Consequently, we can study the performance of the system by traversing a range of $\sigma$-values, alternately incrementing $\sigma$ then maximising the margin using the previous optimal set of $\alpha$-values as a starting point. We have found that this procedure does not add a significant computational cost in general. For example, for the sonar classification dataset mentioned above and starting at $\sigma = 0.1$ with increments $\delta\sigma = 0.1$, it took 186 iterations to reach $\sigma = 1.0$ and 4895 to reach $\sigma = 2.0$, as against 110 and 2624 iterations for learning at both these $\sigma$-values. For a rough doubling of the learning time it is possible to determine a reasonable value for $\sigma$ for good generalisation without use of a validation set.

Figure 3: Generalisation error (solid curve) and upper bound from VC theory (dashed curve with R=1) for digits 0 and 3 from the USPS dataset of handwritten digits.

5 Conclusion

We have presented an algorithm which automatically learns the kernel parameter with little additional cost, both in a computational and sample-complexity sense. Model selection takes place during the learning process itself, and experimental results are provided showing that this strategy provides a good estimate of the correct model complexity.

References

[1] Aizerman, M., Braverman, E., and Rozonoer, L. (1964). Theoretical Foundations of the Potential Function Method in Pattern Recognition Learning, Automations and Remote Control, 25:821-837.
[2] Bartlett, P., Shawe-Taylor, J. (1998). Generalization Performance of Support Vector Machines and Other Pattern Classifiers. In 'Advances in Kernel Methods - Support Vector Learning', Bernhard Scholkopf, Christopher J. C. Burges, and Alexander J. Smola (eds.), MIT Press, Cambridge, USA.
[3] Burges, C. (1998). A tutorial on support vector machines for pattern recognition.
Data Mining and Knowledge Discovery, 2(2).
[4] Friess, T., Cristianini, N., Campbell, C. (1998). The Kernel-Adatron Algorithm: a Fast and Simple Learning Procedure for Support Vector Machines, in Shavlik, J., ed., Machine Learning: Proceedings of the Fifteenth International Conference, Morgan Kaufmann Publishers, San Francisco, CA.
[5] Gorman, R. P. & Sejnowski, T. J. (1988). Neural Networks 1:75-89.
[6] LeCun, Y., Jackel, L. D., Bottou, L., Brunot, A., Cortes, C., Denker, J. S., Drucker, H., Guyon, I., Muller, U. A., Sackinger, E., Simard, P. and Vapnik, V. (1995). Comparison of learning algorithms for handwritten digit recognition, International Conference on Artificial Neural Networks, Fogelman, F. and Gallinari, P. (Ed.), pp. 53-60.
[7] Shawe-Taylor, J., Bartlett, P., Williamson, R. & Anthony, M. (1996). Structural Risk Minimization over Data-Dependent Hierarchies. NeuroCOLT Technical Report NC-TR-96-053 (ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports).
[8] Ster, B., & Dobnikar, A. (1996). Neural networks in medical diagnosis: comparison with other methods. In A. Bulsari et al. (ed.) Proceedings of the International Conference EANN '96, pp. 427-430.
[9] Vapnik, V. (1995). The Nature of Statistical Learning Theory, Springer Verlag.
[10] James, Robert C. (1966). Advanced Calculus. Belmont, Calif.: Wadsworth.
Temporally Asymmetric Hebbian Learning, Spike Timing and Neuronal Response Variability L.F. Abbott and Sen Song Volen Center and Department of Biology Brandeis University Waltham MA 02454 Abstract Recent experimental data indicate that the strengthening or weakening of synaptic connections between neurons depends on the relative timing of pre- and postsynaptic action potentials. A Hebbian synaptic modification rule based on these data leads to a stable state in which the excitatory and inhibitory inputs to a neuron are balanced, producing an irregular pattern of firing. It has been proposed that neurons in vivo operate in such a mode. 1 Introduction Hebbian modification of network interconnections plays a central role in the study of learning in neural networks (Rumelhart and McClelland, 1986; Hertz et al., 1991). Most work on Hebbian learning involves network models in which the activities of the individual units are represented by continuous variables. A Hebbian learning rule, in this context, is specified by describing how network weights change as a function of the activities of the units that transmit and receive signals across a given network connection. While analyses of Hebbian learning along these lines have provided important results, direct application of these ideas to neuroscience is hindered by the fact that real neurons cannot be adequately described by continuous activity variables such as firing rates. Instead, the inputs and outputs of neurons are sequences of action potentials or spikes. All the information conveyed by one neuron to another over any appreciable distance is carried by the temporal patterns of action potential sequences. Rules by which synaptic connections between real neurons are modified in a Hebbian manner should properly be expressed as functions of the relative timing of the action potentials fired by the input (presynaptic) and output (postsynaptic) neurons. 
Until recently, little information has been available about the exact dependence of synaptic modification on pre- and postsynaptic spike timing (see however, Levy and Steward, 1983; Gustafsson et al., 1987). New experimental results (Markram et al., 1997; Bell et al., 1997; Debanne et al., 1998; Zhang et al., 1998; Bi and Poo, 1999) have changed this situation dramatically, and these allow us to study Hebbian learning in a manner that is much more realistic and relevant to biological neural networks. The results may find application in artificial neural networks as well.

2 Temporally Asymmetric LTP and LTD

The biological substrate for Hebbian learning in neuroscience is provided by long-term potentiation (LTP) and long-term depression (LTD) of the synaptic connections between neurons (see for example, Malenka and Nicoll, 1993). LTP is a long-lasting strengthening of synaptic efficacy associated with paired pre- and postsynaptic activity. LTD is a long-lasting weakening of synaptic strength. In recent experiments on neocortical slices (Markram et al., 1997), hippocampal cells in culture (Bi and Poo, 1999), and in vivo studies of tadpole tectum (Zhang et al., 1998), induction of LTP required that presynaptic action potentials preceded postsynaptic firing by no more than about 20 ms. Maximal LTP occurred when presynaptic spikes preceded postsynaptic action potentials by less than a few milliseconds. If presynaptic spikes followed postsynaptic action potentials, long-term depression rather than potentiation resulted. These results are summarized schematically in Figure 1.

Figure 1: A model of the change in synaptic strength Δg produced by paired pre- and postsynaptic spikes occurring at times t_pre and t_post respectively. Positive changes correspond to LTP and negative to LTD. There is an abrupt transition at t_pre - t_post = 0.
The units for Δg are arbitrary in this figure, but data indicate a maximum change of approximately 0.5% per spike pair.

The curve in Figure 1 is a caricature used to model the weight changes arising from pairings of pre- and postsynaptic action potentials separated by various intervals of time. This curve resembles the data from all three preparations discussed above, but a couple of assumptions have been made in its construction. The data indicate that there is a rapid transition from LTP to LTD depending on whether the time difference between pre- and postsynaptic spiking is positive or negative, but the existing data cannot resolve exactly what happens at the transition point. We have assumed that there is a discontinuous jump from LTP to LTD at this point. In addition, we assume that the area under the LTP side of the curve is slightly less than the area under the LTD side. In Figure 1, this difference is imposed by making the magnitude of LTD slightly greater than the magnitude of LTP, while both sides of the curve have equal exponential fall-offs away from zero time difference. Alternately, we could have given the LTD side a slower exponential fall-off and equal amplitude. The data do not support either assumption unambiguously, nor do they indicate which area is larger. The assumption that the area under the LTD side of the curve is larger than that under the LTP side is critical if the resulting synaptic modification rule is to be stable against uncontrolled growth of synaptic strengths. Hebb (1949) postulated that a synapse should be strengthened when the presynaptic neuron is frequently involved in making the postsynaptic neuron fire an action potential. Causality is an important element in Hebb's statement; synaptic potentiation should occur only if there is a causal relationship between the pre- and postsynaptic spiking. The LTP/LTD rule summarized in Figure 1 imposes causality through a tight timing requirement.
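The caricature of Figure 1 can be written down directly. The time constant and amplitudes below are assumptions consistent with the text (windows of order 20 ms, a maximum change of about 0.5% per pair, and an LTD area slightly larger than the LTP area), not measured values:

```python
import numpy as np

# Assumed parameters: equal exponential fall-offs on both sides, with the
# LTD amplitude 5% larger than the LTP amplitude so that the LTD area wins
# (the stability requirement discussed in the text).
TAU = 20.0          # ms, fall-off time constant of both sides
A_LTP = 0.005       # ~0.5% maximum change per spike pair
A_LTD = 0.00525     # slightly larger, so A_LTD * TAU > A_LTP * TAU

def delta_g(t_pre, t_post):
    # Weight change for one pre/post spike pair, in the shape of Figure 1:
    # pre before post -> LTP, pre after post -> LTD, abrupt jump at zero.
    dt = t_pre - t_post
    if dt < 0:                          # causal pairing: potentiate
        return A_LTP * np.exp(dt / TAU)
    return -A_LTD * np.exp(-dt / TAU)   # acausal pairing: depress
```

With random pre/post pairings the net drift is depressing, which is exactly what pushes the neuron out of the regular firing mode in the analysis that follows.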
The narrow windows for LTP and LTD seen in the data, and the abrupt transition from potentiation to depression near zero separation between pre- and postsynaptic spike times, impose a strict causality condition for LTP induction.

3 Response Variability

What are the implications of the synaptic modification rule summarized in Figure 1? To address this question, we introduce another topic that has been discussed extensively within the computational neuroscience community in recent years, the origin of response variability (Softky and Koch, 1992 & 1994; Shadlen and Newsome, 1994 & 1998; Tsodyks and Sejnowski, 1995; Amit and Brunel, 1997; Troyer and Miller, 1997a & b; Bugmann et al., 1997; van Vreeswijk and Sompolinsky, 1996 & 1998). Neurons can respond to multiple synaptic inputs in two different modes of operation. Figure 2 shows membrane potentials of a model neuron receiving 1000 excitatory and 200 inhibitory synaptic inputs. Each input consists of an independent Poisson spike train driving a synaptic conductance. The integrate-and-fire model neuron used in this example integrates these synaptic conductances as a simple capacitor-resistor circuit. To generate action potentials in this model, we monitor the membrane potential and compare it to a threshold voltage. Whenever the membrane potential reaches the threshold an action potential is "pasted" onto the membrane potential trace and the membrane potential is reset to a prescribed value.

Figure 2: Regular and irregular firing modes of a model integrate-and-fire neuron. Upper panels show the model with action potentials deactivated, and the dashed lines show the action potential threshold. The lower figures show the model with action potentials activated.
A) In the regular firing mode, the average membrane potential without spikes is above threshold and the firing pattern is fast and regular (note the different time scale in the lower panel). B) In the irregular firing mode, the average membrane potential without spikes is below threshold and the firing pattern is slower and irregular.

Figures 2A and 2B illustrate the two modes of operation. The upper panels of Figure 2 show the membrane potential with the action potential generation mechanism of the model turned off, and the lower panels show the membrane potential and spike sequences that result when the action potential generation is turned on. In Figure 2A, the effect of the excitatory inputs is strong enough relative to that of the inhibitory inputs so that the average membrane potential, when action potential generation is blocked, is above the spike threshold of the model. When the action potential mechanism is turned back on (lower panel of Figure 2A), this produces a fairly regular pattern of action potentials at a relatively high rate. The total synaptic input attempts to charge the neuron above the threshold, but every time the potential reaches the threshold it gets reset and starts charging again. In this regular firing mode of operation, the timing of the action potentials is determined primarily by the charging rate of the cell, which is controlled by its membrane time constant. Since this does not vary as a function of time, the firing pattern is regular despite the fact that the synaptic input is varying. Figure 2B shows the other mode of operation, which produces an irregular firing pattern. In the irregular firing mode, the average membrane potential is more hyperpolarized than the threshold for action potential generation (upper panel of Figure 2B). In this mode, action potentials are only generated when there is a fluctuation in the total synaptic current strong enough to make the membrane potential cross the threshold.
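A deliberately simplified integrate-and-fire sketch reproduces the two modes. The membrane parameters and input weights below are illustrative, not the paper's (synaptic conductances are approximated here by instantaneous voltage kicks); raising the excitatory weight moves the neuron from the irregular to the regular mode:

```python
import numpy as np

def lif_rate(g_exc, g_inh, rate_in=10.0, t_max=1000.0, dt=0.1, seed=0):
    # Integrate-and-fire neuron driven by 1000 excitatory and 200 inhibitory
    # Poisson inputs; each input spike kicks the voltage by +/- g (in mV).
    rng = np.random.default_rng(seed)
    tau_m, v_rest, v_thresh, v_reset = 20.0, -70.0, -54.0, -60.0
    v, n_spikes = v_rest, 0
    lam_e = 1000 * rate_in * dt / 1000.0   # expected excitatory events per step
    lam_i = 200 * rate_in * dt / 1000.0    # expected inhibitory events per step
    for _ in range(int(t_max / dt)):
        drive = g_exc * rng.poisson(lam_e) - g_inh * rng.poisson(lam_i)
        v += dt * (-(v - v_rest) / tau_m) + drive
        if v >= v_thresh:                  # "paste" a spike and reset
            v = v_reset
            n_spikes += 1
    return n_spikes * 1000.0 / t_max       # output firing rate in Hz

rate_regular = lif_rate(0.13, 0.15)    # mean drive holds v above threshold
rate_irregular = lif_rate(0.107, 0.15) # mean v just below threshold
```

With these illustrative weights the mean no-spike voltage sits near -50 mV in the first case (regular mode) and near -54.6 mV, just below the -54 mV threshold, in the second, so only fluctuations trigger spikes.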
This results in slower and more irregular firing (lower panel of Figure 2B). The irregular firing mode has a number of interesting features (Shadlen and Newsome, 1994 & 1998; Tsodyks and Sejnowski, 1995; Amit and Brunel, 1997; Troyer and Miller, 1997a & b; Bugmann et al., 1997; van Vreeswijk and Sompolinsky, 1996 & 1998). First, it generates irregular firing patterns that are far closer to the firing patterns seen in vivo than the patterns produced in the regular firing mode. Second, responses to changes in the synaptic input are much more rapid in this mode, being limited only by the synaptic rise time rather than the membrane time constant. Finally, the timing of action potentials in the irregular firing mode is related to the timing of fluctuations in the synaptic input rather than being determined primarily by the membrane time constant of the cell.

Figure 3: Histograms indicating the relative probability of finding pre- and postsynaptic spikes separated by the indicated time interval. A) Regular firing mode. The probability is essentially flat and at the chance level of one. B) Irregular firing mode. There is an excess of presynaptic spikes shortly before a postsynaptic spike.

An important difference between the regular and irregular firing modes is illustrated in the cross-correlograms shown in Figure 3 (Troyer and Miller, 1997b; Bugmann et al., 1997). These indicate the probability that an action potential fired by the postsynaptic neuron is preceded or followed by a presynaptic spike separated by various intervals. The histogram has been normalized so its value for pairings that are due solely to chance is one. The histogram when the model is in the regular firing mode (Figure 3A) takes a value close to one for almost all input-output spike time differences.
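The normalised correlogram of Figure 3 can be computed directly from spike times; a sketch assuming NumPy, where the chance level is the expected pair count per bin for independent trains:

```python
import numpy as np

def spike_crosscorrelogram(pre, post, t_max, window=20.0, bin_ms=1.0):
    # Histogram of t_pre - t_post over all spike pairs within +/- window (ms),
    # normalised so that chance-level pairings give a value of one.
    pre, post = np.asarray(pre), np.asarray(post)
    diffs = (pre[:, None] - post[None, :]).ravel()
    diffs = diffs[np.abs(diffs) < window]
    bins = np.arange(-window, window + bin_ms, bin_ms)
    counts, _ = np.histogram(diffs, bins)
    chance = len(pre) * len(post) * bin_ms / t_max  # expected count per bin
    return counts / chance, bins
```

For two independent Poisson trains the normalised histogram hovers around one across all lags (Figure 3A); for an irregular-mode neuron the bins just left of zero rise above one (Figure 3B).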
This is a reflection of the fact that the timing of individual action potentials in the regular firing mode is relatively independent of the timing of the presynaptic inputs. In contrast, the histogram for a model neuron in the irregular firing mode (Figure 3B) shows a much larger excess of presynaptic spikes occurring shortly before the postsynaptic neuron fires. This excess reflects the fluctuations in the total synaptic input that push the membrane potential up to the threshold and produce a spike in the irregular firing mode. It indicates that, in this mode, there is a tight temporal correlation between the timing of such fluctuations and output spikes. For a neuron to operate in the irregular firing mode, there must be an appropriate balance between the strength of its excitatory and inhibitory inputs. The excitatory input must be weak enough, relative to the inhibitory input, so that the average membrane potential in the absence of spikes is below the action potential threshold to avoid regular firing. However, excitatory input must be sufficiently strong to keep the average potential close enough to the threshold so that fluctuations can reach it and cause the cell to fire. How is this balance achieved?

4 Asymmetric LTP/LTD Leads to an Irregular Firing State

A comparison of the LTP/LTD synaptic modification rule illustrated in Figure 1, and the presynaptic/postsynaptic timing histogram shown in Figure 3, reveals that a temporally asymmetric synaptic modification rule based on the curve in Figure 1 can automatically generate the balance of excitation and inhibition needed to produce an irregular firing state. Suppose that we start a neuron model in a regular firing mode by giving it relatively strong excitatory synaptic strengths. We then apply the LTP/LTD rule of Figure 1 to the excitatory synapses while holding the inhibitory synapses at constant values.
Recall that Figure 1 has been adjusted so that the area under the LTD part of the curve is greater than that under the LTP part. This means that if a presynaptic spike is equally likely to precede or follow a postsynaptic spike, the net effect will be a weakening of the excitatory synapses. This is exactly what happens in the regular firing mode, where the relationship between the timing of pre- and postsynaptic spikes is approximately random (Figure 3A). As the LTP/LTD rule weakens the excitatory synapses, the average membrane potential drops and the neuron enters the irregular firing mode. In the irregular firing mode, there is a higher probability for a presynaptic spike to precede than to follow a postsynaptic spike (Figure 3B). This compensates for the fact that the rule we use produces more LTD than LTP. Equilibrium will be reached when the asymmetry of the LTP/LTD modification curve of Figure 1 is matched by the asymmetry of the presynaptic/postsynaptic timing histogram of Figure 3B. The equilibrium state corresponds to a balanced, irregular firing mode of operation, and it is automatically produced by the temporally asymmetric learning rule. Figure 4A shows a transition from a regular to an irregular firing state mediated by the temporally asymmetric LTP/LTD modification rule. The irregularity of the postsynaptic spike train has been quantified by plotting the coefficient of variation (CV), the standard deviation over the mean of the interspike intervals, of the model neuron as a function of time. Initially, the neuron was in a regular firing state with a low CV value. After the synaptic modification rule reached an equilibrium state, the CV took a value near one, indicating that the neuron had been transformed into an irregular firing mode. The solid curve in Figure 4B shows that temporally asymmetric LTP/LTD can robustly generate irregular output firing for a wide range of input firing rates.
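The CV statistic used in Figure 4 is simply the standard deviation of the interspike intervals divided by their mean; a minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def cv_of_spike_train(spike_times):
    """Coefficient of variation of the interspike intervals (std/mean).

    Near 0 for clock-like regular firing; near 1 for Poisson-like
    irregular firing."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()

rng = np.random.default_rng(0)
regular = np.arange(0.0, 10.0, 0.05)                # clock-like 20 Hz train
irregular = np.cumsum(rng.exponential(0.05, 2000))  # Poisson train, same rate
assert cv_of_spike_train(regular) < 0.01
assert 0.85 < cv_of_spike_train(irregular) < 1.15
```

A perfectly periodic train gives CV near zero, while a Poisson train has CV near one, which is why a CV close to one in Figure 4A is taken as the signature of the irregular firing mode.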
[Figure 4 plots: coefficient of variation vs. time steps (A) and vs. input rate, Hz (B).] Figure 4: Coefficient of variation (CV) of the output spike train of the model neuron. A) Transition from a regular to an irregular firing state as temporally asymmetric LTP/LTD modifies synaptic strengths. The units of time in this plot are arbitrary because they depend on the magnitude of LTP and LTD used in the model. B) Equilibrium CV values as a function of the firing rates of excitatory inputs to the model neuron. The solid curve gives the results when temporally asymmetric LTP/LTD is active. The dashed curve shows the results if the synaptic strengths that arose for 5 Hz inputs are left unmodified.

5 Discussion

Temporally asymmetric LTP/LTD provides a Hebbian-type learning rule with interesting properties (Kempter et al., 1998). In the past, temporally asymmetric Hebbian learning rules have been studied and applied to problems of temporal sequence generation (Minai and Levy, 1993), navigation (Blum and Abbott, 1996; Gerstner and Abbott, 1997), motor learning (Abbott and Blum, 1996), and detection of spike synchrony (Gerstner et al., 1996). In these studies, two different LTP/LTD window sizes were assumed: either of order 100 ms (Minai and Levy, 1993; Blum and Abbott, 1996; Gerstner and Abbott, 1997; Abbott and Blum, 1996) or around 1 ms (Gerstner et al., 1996). The new data (Markram et al., 1997; Bell et al., 1997; Zhang et al., 1998; Bi and Poo, 1999) give a window size of order 10 ms. For a 1 ms window size, temporally asymmetric LTP/LTD is sensitive to precise spike timing. When the window size is of order 100 ms, changes in stimuli or motor actions at a behavioral level become relevant for LTP and LTD.
A window size of 10 ms, as supported by the recent data, suggests that LTP and LTD are sensitive to firing correlations relevant to neuronal circuitry, such as input-output correlations, which vary over this time scale. Temporally asymmetric LTP/LTD has some interesting properties that distinguish it from Hebbian learning rules based on correlations or covariances in pre- and postsynaptic rates. We have found that the rule used here is not sensitive to input firing rates or to variability in input rates. If we split the excitatory inputs of the model into two groups and give these two input sets different rates, we see no difference in the distribution of synaptic strengths arising from the learning rule. Similarly, if one group is given a steady firing rate and the other group has firing rates that vary in time, no difference in synaptic strengths is apparent. The most effective way to induce LTP in a set of inputs is to synchronize some of their spikes. Inputs with synchronized spikes are slightly more effective at firing the neuron than unsynchronized inputs. This means that such inputs will precede postsynaptic spikes more frequently and thus will get stronger. This suggests that spike synchrony may be a signal that marks a set of inputs for learning. Even when this synchrony has no particular functional effect, so that it has little impact on the firing pattern of the postsynaptic neuron, it can lead to dramatic shifts in synaptic strength. Thus, spike synchronization may be a mechanism for inducing LTP and LTD.

Acknowledgments

Research supported by the National Science Foundation (DMS-9503261), the Sloan Center for Theoretical Neurobiology at Brandeis University, a Howard Hughes Predoctoral Fellowship, and the W.M. Keck Foundation.

References

Abbott, LF & Blum, KI (1996) Functional significance of long-term potentiation for sequence learning and prediction. Cerebral Cortex 6:406-416.
Amit, DJ & Brunel, N (1997) Global spontaneous activity and local structured (learned) delay activity in cortex. Cerebral Cortex 7:237-252. Bell, CC, Han, VZ, Sugawara, Y & Grant, K (1997) Synaptic plasticity in a cerebellum-like structure depends on temporal order. Nature 387:278-281. Bi, G-q & Poo, M-m (1999) Activity-induced synaptic modifications in hippocampal culture: dependence on spike timing, synaptic strength and cell type. J. Neurophysiol. (in press). Blum, KI & Abbott, LF (1996) A model of spatial map formation in the hippocampus of the rat. Neural Comp. 8:85-93. Bugmann, G, Christodoulou, C & Taylor, JG (1997) Role of temporal integration and fluctuation detection in the highly irregular firing of a leaky integrator neuron model with partial reset. Neural Comput. 9:985-1000. Debanne, D, Gahwiler, BH & Thompson, SM (1998) Long-term synaptic plasticity between pairs of individual CA3 pyramidal cells in rat hippocampal slices. J. Physiol. 507:237-247. Gerstner, W & Abbott, LF (1997) Learning navigational maps through potentiation and modulation of hippocampal place cells. J. Computational Neurosci. 4:79-94. Gerstner, W, Kempter, R, van Hemmen, JL & Wagner, H (1996) A neural learning rule for sub-millisecond temporal coding. Nature 383:76-78. Gustafsson, B, Wigstrom, H, Abraham, WC & Huang, Y-Y (1987) Long-term potentiation in the hippocampus using depolarizing current pulses as the conditioning stimulus to single volley synaptic potentials. J. Neurosci. 7:774-780. Hebb, DO (1949) The Organization of Behavior: A Neuropsychological Theory. New York: Wiley. Hertz, JA, Palmer, RG & Krogh, A (1991) Introduction to the Theory of Neural Computation. New York: Addison-Wesley. Kempter, R, Gerstner, W & van Hemmen, JL (1999) Hebbian learning and spiking neurons. (submitted). Levy, WB & Steward, O (1983) Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neurosci. 8:791-797.
Malenka, RC & Nicoll, RA (1993) NMDA-receptor-dependent synaptic plasticity: multiple forms and mechanisms. Trends Neurosci. 16:521-527. Minai, AA & Levy, WB (1993) Sequence learning in a single trial. INNS World Congress on Neural Networks II:505-508. Markram, H, Lubke, J, Frotscher, M & Sakmann, B (1997) Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275:213-215. Rumelhart, DE & McClelland, JL, editors (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volumes I & II. Cambridge, MA: MIT Press. Shadlen, MN & Newsome, WT (1994) Noise, neural codes and cortical organization. Current Opinion in Neurobiology 4:569-579. Shadlen, MN & Newsome, WT (1998) The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. Journal of Neuroscience 18:3870-3896. Softky, WR & Koch, C (1992) Cortical cells should spike regularly but do not. Neural Computation 4:643-646. Softky, WR & Koch, C (1994) The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. Journal of Neuroscience 13:334-350. Troyer, TW & Miller, KD (1997a) Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell. Neural Comp. 9:971-983. Troyer, TW & Miller, KD (1997b) Integrate-and-fire neurons matched to physiological F-I curves yield high input sensitivity and wide dynamic range. In Computational Neuroscience: Trends in Research, JM Bower, ed. New York: Plenum, pp. 197-201. Tsodyks, M & Sejnowski, TJ (1995) Rapid switching in balanced cortical network models. Network 6:1-14. van Vreeswijk, C & Sompolinsky, H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274:1724-1726. van Vreeswijk, C & Sompolinsky, H (1998) Chaotic balanced state in a model of cortical circuits. Neural Comp. 10:1321-1327.
Zhang, LI, Tao, HW, Holt, CE, Harris, WA & Poo, M-m (1998) A critical window for cooperation and competition among developing retinotectal synapses. Nature 395:37-44.
Learning Mixture Hierarchies

Nuno Vasconcelos Andrew Lippman MIT Media Laboratory, 20 Ames St., E15-320M, Cambridge, MA 02139, {nuno,lip}@media.mit.edu, http://www.media.mit.edu/~nuno

Abstract

The hierarchical representation of data has various applications in domains such as data mining, machine vision, or information retrieval. In this paper we introduce an extension of the Expectation-Maximization (EM) algorithm that learns mixture hierarchies in a computationally efficient manner. Efficiency is achieved by progressing in a bottom-up fashion, i.e. by clustering the mixture components of a given level in the hierarchy to obtain those of the level above. This clustering requires only knowledge of the mixture parameters, there being no need to resort to intermediate samples. In addition to practical applications, the algorithm allows a new interpretation of EM that makes clear the relationship with non-parametric kernel-based estimation methods, provides explicit control over the trade-off between the bias and variance of EM estimates, and offers new insights about the behavior of deterministic annealing methods commonly used with EM to escape local minima of the likelihood.

1 Introduction

There are many practical applications of statistical learning where it is useful to characterize data hierarchically. Such characterization can be done according to either top-down or bottom-up strategies. The former start by generating a coarse model that roughly describes the entire space, and then successively refine the description by partitioning the space and generating sub-models for each of the regions in the partition; the latter start from a fine description, and successively agglomerate sub-models to generate the coarser descriptions at the higher levels in the hierarchy. Bottom-up strategies are particularly useful when not all the data is available at once, or when the dataset is so big that processing it as a whole is computationally infeasible.
This is the case in machine vision tasks such as object recognition, or the indexing of video databases. In object recognition, it is often convenient to determine not only which object is present in the scene but also its pose [2], a goal that can be attained by a hierarchical description where at the lowest level a model is learned for each object pose, and all pose models are then combined into a generic model at the top level of the hierarchy. Similarly, for video indexing, one may be interested in learning a description for each frame and then combining these into shot descriptions or descriptions for some other sort of high-level temporal unit [6]. In this paper we present an extension of the EM algorithm [1] for the estimation of hierarchical mixture models in a bottom-up fashion. It turns out that the attainment of this goal has consequences reaching far beyond the practical applications above. In particular, because a kernel density estimate can be seen as a limiting case of a mixture model (where a mixture component is superimposed on each sample), this extension establishes a direct connection between so-called parametric and non-parametric density estimation methods, making it possible to exploit results from the vast non-parametric smoothing literature [4] to improve the accuracy of parametric estimates. Furthermore, the original EM algorithm becomes a particular case of the one now presented, and a new intuitive interpretation becomes available for an important variation of EM (known as deterministic annealing) that had previously been derived from statistical physics. With regard to practical applications, the algorithm leads to computationally efficient methods for estimating density hierarchies capable of describing data at different resolutions.
2 Hierarchical mixture density estimation

Our model consists of a hierarchy of mixture densities, where the data at a given level $l$ is described by

$$P(X) = \sum_{k=1}^{C^l} \pi_k^l P(X \mid z_k^l = 1, \mathcal{M}_l), \qquad (1)$$

where $l$ is the level in the hierarchy ($l = 0$ providing the coarsest characterization of the data), $\mathcal{M}_l$ the mixture model at this level, $C^l$ the number of mixture components that compose it, $\pi_k^l$ the prior probability of the $k$th component, and $z_k^l$ a binary variable that takes the value 1 if and only if the sample $X$ was drawn from this component. The only restriction on the model is that if node $j$ of level $l+1$ is a child of node $k$ of level $l$, then

$$\pi_j^{l+1} = \pi_{j|k}^{l+1} \pi_k^l, \qquad (2)$$

where $k$ is the parent of $j$ in the hierarchy of hidden variables. The basic problem is to compute the mixture parameters of the description at level $l$ given the knowledge of the parameters at level $l+1$. This can also be seen as a problem of clustering mixture components. A straightforward solution would be to draw a sample from the mixture density at level $l+1$ and simply run EM with the number of classes of level $l$ to estimate the corresponding parameters. Such a solution would have at least two major limitations. First, there would be no guarantee that the constraint of equation (2) would be enforced, i.e. there would be no guarantee of structure in the resulting mixture hierarchy; and second, it would be computationally expensive, as all the models in the hierarchy would have to be learned from a large sample. In the next section, we show that this is really not necessary.

3 Estimating mixture hierarchies

The basic idea behind our approach is, instead of generating a real sample from the mixture model at level $l+1$, to consider a virtual sample generated from the same model, use EM to find the expressions for the parameters of the mixture model of level $l$ that best explain this virtual sample, and establish a closed-form relationship between these parameters and those of the model at level $l+1$.
For this, we start by considering a virtual sample $X = \{X_1, \ldots, X_{C^{l+1}}\}$ from $\mathcal{M}_{l+1}$, where each $X_i$ is a virtual sample from one of the $C^{l+1}$ components of this model, with size $M_i = \pi_i^{l+1} N$, where $N$ is the total number of virtual points. We next establish the likelihood for the virtual sample under the model $\mathcal{M}_l$. For this, as is usual in the EM literature, we assume that samples from different blocks are independent, i.e.

$$P(X \mid \mathcal{M}_l) = \prod_{i=1}^{C^{l+1}} P(X_i \mid \mathcal{M}_l), \qquad (3)$$

but, to ensure that the constraint of equation (2) is enforced, samples within the same block are assigned to the same component of $\mathcal{M}_l$. Assuming further that, given the knowledge of the assignment, the samples are drawn independently from the corresponding mixture component, the likelihood of each block is given by

$$P(X_i \mid \mathcal{M}_l) = \sum_{j=1}^{C^l} \pi_j^l P(X_i \mid z_{ij} = 1, \mathcal{M}_l) = \sum_{j=1}^{C^l} \pi_j^l \prod_{m=1}^{M_i} P(x_i^m \mid z_{ij} = 1, \mathcal{M}_l), \qquad (4)$$

where $z_{ij} = z_i^{l+1} z_j^l$ is a binary variable with value one if and only if the block $X_i$ is assigned to the $j$th component of $\mathcal{M}_l$, and $x_i^m$ is the $m$th data point in $X_i$. Combining equations (3) and (4) we obtain the incomplete-data likelihood, under $\mathcal{M}_l$, for the whole sample

$$P(X \mid \mathcal{M}_l) = \prod_{i=1}^{C^{l+1}} \sum_{j=1}^{C^l} \pi_j^l \prod_{m=1}^{M_i} P(x_i^m \mid z_{ij} = 1, \mathcal{M}_l). \qquad (5)$$

This equation is similar to the incomplete-data likelihood of standard EM, the main difference being that instead of having a hidden variable for each sample point, we now have one for each sample block. The likelihood of the complete data is given by

$$P(X, Z \mid \mathcal{M}_l) = \prod_{i=1}^{C^{l+1}} \prod_{j=1}^{C^l} \left[ \pi_j^l P(X_i \mid z_{ij} = 1, \mathcal{M}_l) \right]^{z_{ij}}, \qquad (6)$$

where $Z$ is a vector containing all the $z_{ij}$, and the log-likelihood becomes

$$\log P(X, Z \mid \mathcal{M}_l) = \sum_{i=1}^{C^{l+1}} \sum_{j=1}^{C^l} z_{ij} \log\left( \pi_j^l P(X_i \mid z_{ij} = 1, \mathcal{M}_l) \right). \qquad (7)$$

Relying on EM to estimate the parameters of $\mathcal{M}_l$ leads to the following E-step

$$h_{ij} = E[z_{ij} \mid X_i, \mathcal{M}_l] = P(z_{ij} = 1 \mid X_i, \mathcal{M}_l) = \frac{P(X_i \mid z_{ij} = 1, \mathcal{M}_l)\, \pi_j^l}{\sum_k P(X_i \mid z_{ik} = 1, \mathcal{M}_l)\, \pi_k^l}. \qquad (8)$$

The key quantity to compute is therefore $P(X_i \mid z_{ij} = 1, \mathcal{M}_l)$.
Taking its logarithm,

$$\log P(X_i \mid z_{ij} = 1, \mathcal{M}_l) = M_i \left[ \frac{1}{M_i} \sum_{m=1}^{M_i} \log P(x_i^m \mid z_{ij} = 1, \mathcal{M}_l) \right] \simeq M_i\, E_{\mathcal{M}_{l+1}, i}\!\left[ \log P(x \mid z_{ij} = 1, \mathcal{M}_l) \right], \qquad (9)$$

where we have used the law of large numbers, and $E_{\mathcal{M}_{l+1}, i}[x]$ is the expected value of $x$ according to the $i$th mixture component of $\mathcal{M}_{l+1}$ (the one from which $X_i$ was drawn). This is an easy computation for most densities commonly used in mixture modeling. It can be shown [5] that for the Gaussian case it leads to

$$P(X_i \mid z_{ij} = 1, \mathcal{M}_l) = \left[ g(\mu_i^{l+1}, \mu_j^l, \Sigma_j^l)\, e^{-\frac{1}{2} \mathrm{trace}\left[ (\Sigma_j^l)^{-1} \Sigma_i^{l+1} \right]} \right]^{M_i}, \qquad (10)$$

where $g(x, \mu, \Sigma)$ is the expression for a Gaussian with mean $\mu$ and covariance $\Sigma$. The M-step consists of maximizing

$$Q = \sum_{i=1}^{C^{l+1}} \sum_{j=1}^{C^l} h_{ij} \log\left( \pi_j^l P(X_i \mid z_{ij} = 1, \mathcal{M}_l) \right) \qquad (11)$$

subject to the constraint $\sum_j \pi_j^l = 1$. Once again, this is a relatively simple task for common mixture models, and in [5] we show that for the Gaussian case it leads to the following parameter update equations

$$\pi_j^l = \frac{\sum_i h_{ij} M_i}{\sum_i M_i}, \qquad (12)$$

$$\mu_j^l = \frac{\sum_i h_{ij} M_i\, \mu_i^{l+1}}{\sum_i h_{ij} M_i}, \qquad (13)$$

$$\Sigma_j^l = \frac{1}{\sum_i h_{ij} M_i} \left[ \sum_i h_{ij} M_i\, \Sigma_i^{l+1} + \sum_i h_{ij} M_i\, (\mu_i^{l+1} - \mu_j^l)(\mu_i^{l+1} - \mu_j^l)^T \right]. \qquad (14)$$

Notice that neither equation (10) nor equations (12) to (14) depend explicitly on the underlying sample $X_i$; all can be computed directly from the parameters of $\mathcal{M}_{l+1}$. The algorithm is thus very efficient from a computational standpoint, as the number of mixture components in $\mathcal{M}_{l+1}$ is typically much smaller than the size of the sample at the bottom of the hierarchy.

4 Relationships with standard EM

There are interesting relationships between the algorithm derived above and the standard EM procedure. The first thing to notice is that by making $M_i = 1$ and $\Sigma_i^{l+1} = 0$, the E- and M-steps become those obtained by applying standard EM to the sample composed of the points $\mu_i^{l+1}$. Thus, standard EM can be seen as a particular case of the new algorithm, one that learns a two-level mixture hierarchy.
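As a concrete sketch, the E-step of equations (8) and (10) and the M-step of equations (12)-(14), as we have reconstructed them, can be implemented directly on the Gaussian parameters, with no samples ever drawn. This is our own illustrative code; the function names and toy parameters are not from the paper.

```python
import numpy as np

def hierarchical_em_step(mu_c, cov_c, M_i, pi_p, mu_p, cov_p):
    """One EM iteration clustering C^{l+1} child Gaussians (mu_c, cov_c),
    with virtual block sizes M_i, into C^l parent Gaussians
    (pi_p, mu_p, cov_p), using only the child parameters."""
    n_c, d = mu_c.shape
    n_p = mu_p.shape[0]
    # E-step: log of pi_j * [g(mu_i; mu_j, S_j) exp(-tr(S_j^-1 S_i)/2)]^{M_i},
    # per equations (8) and (10)
    logh = np.zeros((n_c, n_p))
    for j in range(n_p):
        inv = np.linalg.inv(cov_p[j])
        _, logdet = np.linalg.slogdet(cov_p[j])
        for i in range(n_c):
            diff = mu_c[i] - mu_p[j]
            log_g = -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ inv @ diff)
            logh[i, j] = np.log(pi_p[j]) + M_i[i] * (
                log_g - 0.5 * np.trace(inv @ cov_c[i]))
    logh -= logh.max(axis=1, keepdims=True)  # stabilize before exponentiating
    h = np.exp(logh)
    h /= h.sum(axis=1, keepdims=True)        # responsibilities h_ij, eq. (8)
    # M-step, equations (12)-(14)
    w = h * M_i[:, None]                     # h_ij * M_i
    pi_new = w.sum(axis=0) / M_i.sum()
    mu_new = (w.T @ mu_c) / w.sum(axis=0)[:, None]
    cov_new = np.empty_like(cov_p)
    for j in range(n_p):
        diff = mu_c - mu_new[j]
        cov_new[j] = (np.einsum('i,ijk->jk', w[:, j], cov_c)
                      + (w[:, j, None] * diff).T @ diff) / w[:, j].sum()
    return pi_new, mu_new, cov_new

# Toy example: four child Gaussians forming two well-separated pairs
mu_c = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
cov_c = np.array([np.eye(2) * 0.1] * 4)
M_i = np.full(4, 25.0)
pi_p = np.array([0.5, 0.5])
mu_p = np.array([[0.5, 0.5], [4.5, 4.5]])
cov_p = np.array([np.eye(2)] * 2)
pi_n, mu_n, cov_n = hierarchical_em_step(mu_c, cov_c, M_i, pi_p, mu_p, cov_p)
assert np.isclose(pi_n.sum(), 1.0)
assert np.allclose(mu_n[0], [0.1, 0.0], atol=1e-3)
assert np.allclose(mu_n[1], [5.1, 5.0], atol=1e-3)
```

Because the block likelihood is raised to the power $M_i$, the assignments in the toy example are essentially hard, and each parent Gaussian absorbs the mean and covariance of its pair of children.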
An initial estimate is first obtained at the bottom of this hierarchy by placing a Gaussian with zero covariance on top of each data point, the model at the second level then being computed from this estimate. The fact that the estimate at the bottom level is nothing more than a kernel estimate with zero bandwidth suggests that other choices of the kernel bandwidth may lead to better overall EM estimates. Under this interpretation, the $\Sigma_i^{l+1}$ become free parameters that can be used to control the smoothness of the density estimates, and the whole procedure is equivalent to the composition of three steps: 1) find the kernel density estimate that best fits the sample under analysis, 2) draw a larger virtual sample from that density, and 3) compute EM estimates from this larger sample. In section 5, we show that this can lead to significant improvements in estimation accuracy, particularly when the initial sample is small, the free parameters allowing explicit control over the trade-off between the bias and variance of the estimator. Another interesting relationship between the hierarchical method and standard EM can be derived by investigating the role of the size of the underlying virtual sample (which determines $M_i$) on the estimates. Assuming $M_i$ constant, $M_i = M, \forall i$, it factors out of all summations in equations (12) to (14), the contributions of numerator and denominator canceling each other. In this case, the only significance of the choice of $M$ is its impact on the E-step. Assuming, as before, that $\Sigma_i^{l+1} = 0$, we once again have the EM algorithm, but where the class-conditional likelihoods of the E-step are now raised to the $M$th power. If $M$ is seen as the inverse of a temperature, both the E- and M-steps become those of standard EM under deterministic annealing (DA)¹ [3].
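The role of $M$ as an inverse temperature can be made concrete with a toy E-step (our own sketch, not code from the paper): raising the class-conditional likelihoods to the power $M$ interpolates between prior-only assignments ($M = 0$), standard EM ($M = 1$), and hard assignments ($M \to \infty$).

```python
import numpy as np

def annealed_responsibilities(log_lik, log_prior, M):
    """E-step assignments with the class-conditional likelihoods raised to
    the power M (an inverse temperature). log_lik: (n_points, n_components)."""
    logits = log_prior + M * log_lik
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

log_lik = np.log(np.array([[0.6, 0.4]]))
log_prior = np.log(np.array([0.5, 0.5]))
r0 = annealed_responsibilities(log_lik, log_prior, 0.0)      # prior proportions
r1 = annealed_responsibilities(log_lik, log_prior, 1.0)      # standard EM
r100 = annealed_responsibilities(log_lik, log_prior, 100.0)  # nearly hard
assert np.allclose(r0, [0.5, 0.5])
assert np.allclose(r1, [0.6, 0.4])
assert r100[0, 0] > 0.999
```

Low $M$ lets points switch easily between mixtures (the exploratory phase), while large $M$ freezes the assignments, which is exactly the annealing schedule described in the text.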
The DA process is therefore naturally derived from our hierarchical formulation, which gives it a new interpretation that is significantly simpler and more intuitive than those derived from statistical physics. At the start of the process $M$ is set to zero, i.e. no virtual samples are drawn from the Gaussians superimposed on the real dataset, and there is no virtual data. Thus, the assignments $h_{ij}$ of the E-step simply become the prior mixing proportions $\pi_j$, and the M-step simply sets the parameters of all Gaussians in the model to the sample mean and sample covariance of the real sample. As $M$ increases, the number of virtual points drawn from each Gaussian also increases, and for $M = 1$ we have a single point that coincides with the point in the real training sample. We therefore obtain the standard EM equations. Increasing $M$ further will make the E-step assignments harder (in the limit $M = \infty$ each point is assigned to a single mixture component) because a larger virtual probability mass is attached to each real point, leading to much higher certainty with regard to the reliability of the assignment. Overall, while in the beginning of the process the reduced size of the virtual sample allows the points in the real sample to switch from mixture to mixture easily, as $M$ is increased the switching becomes much less likely. The "exploratory" nature of the initial iterations drives the process towards solutions that are globally good, therefore allowing it to escape local minima.

5 Experimental results

In this section, we present experimental results that illustrate the properties of the hierarchical EM algorithm now proposed. We start with a simple example that illustrates how the algorithm can be used to estimate hierarchical mixtures. Figure 1: Mixture hierarchy derived from the model shown on the left.
The plot relative to each level of the hierarchy is superimposed on a sample drawn from this model. Only the one-standard-deviation contours are shown for each Gaussian. The plot on the left of Figure 1 presents a Gaussian mixture with 16 uniformly weighted components. A sample with 1000 points was drawn from this model, and the algorithm was used to find the best descriptions for it at three resolutions (mixtures with 16, 4, and 2 Gaussians). These descriptions are shown in the figure. Notice how the mixture hierarchy naturally captures the various levels of structure exhibited by the data. This example suggests how the algorithm could be useful for applications such as object recognition or image retrieval.

¹DA is a technique drawn from analogies with statistical physics that avoids local maxima of the likelihood function (in which standard EM can get trapped) by performing a succession of optimizations at various temperatures [3].

Figure 2: Object recognition task. Left: 8 of the 100 objects in the database. Right: computational savings achieved with hierarchical recognition vs. full search.

Suppose that each of the Gaussians in the leftmost plot of
The advantage, for recognition or retrieval, of relying on a hierarchal structure is that the search can be perfonned first at the highest resolution, where it is much less expensive, only the best matches being considered at the subsequent levels. Figure 2 illustrates the application of hierarchical mixture modeling to a real object recognition task. Shown on the left side of the figure are 8 objects from the 100 contained in the Columbia object database [2]. The database consists of 72 views (obtained by positioning the camera in 5° intervals along a circle on the viewing sphere), which were evenly separated into a training and a test set. A set of features was computed for each image, and a hierarchical model was then learned for each object in the resulting feature space. While the process could be extended to any number of levels, here we only report on the case of a two-level hierarchy: at the bottom each image is described by a mixture of 8 Gaussians, and at the top each mixture (also with 8 Gaussians) describes 3 consecutive views. Thus, the entire training set is described by 3600 mixtures at the bottom resolution and 1200 at the top. Given an image of an object to recognize, recognition takes place by computing its projection into the feature space, measuring the likelihood of the resulting sample according to each of the models in the database, and choosing the most likely. The complexity of the process is proportional to the database size. The plot on the left of Figure 2 presents the recognition accuracy achieved with the hierarchical representation vs the corresponding complexity, shown as a percent of the complexity required by full search. The full-search accuracy is in this case 90%, and is also shown as a straight line in the graph. As can be seen from the figure, the hierarchical search achieves the full search accuracy with less than 40% of its complexity. 
We are now repeating these experiments with deeper trees, where we expect the gains to be even more impressive. We finish by reporting on the impact of smoothing on the quality of EM estimates. For this, we conducted the following Monte Carlo experiment: 1) draw 200 datasets $S_i$, $i = 1, \ldots, 200$, from the model shown on the left of Figure 1; 2) fit each dataset with EM; 3) measure the correlation coefficient $\rho_i$, $i = 1, \ldots, 200$, between each of the EM fits and the original model; and 4) compute the sample mean $\bar{\rho}$ and variance $\sigma_\rho$. The correlation coefficient is defined by $\rho_i = \int f(x) f_i(x)\, dx \,/\, \left( \int f^2(x)\, dx \int f_i^2(x)\, dx \right)^{1/2}$, where $f(x)$ is the true model and $f_i(x)$ the $i$th estimate, and it can be computed in closed form for Gaussian mixtures. The experiment was repeated with various dataset sizes and various degrees of smoothing (by setting the bandwidth of the underlying Gaussian kernel to $\sigma_k I$ for various values of $\sigma_k$). Figure 3 presents the results of this experiment. It is clear, from the graph on the left, that smoothing can have a significant impact on the quality of the EM estimates. This impact is largest for small samples, where smoothing can provide up to a two-fold improvement in estimation accuracy, but can be found even for large samples. The kernel bandwidth allows control over the trade-off between the bias and variance of the estimates. When $\sigma_k$ is zero (standard EM), bias is small but variance can be large, as illustrated by the graph on the right of the figure.

Figure 3: Results of the Monte Carlo experiment described in the text. Left: $\bar{\rho}$ as a function of $\sigma_k$. Right: $\sigma_\rho$ as a function of $\sigma_k$. The various curves in each graph correspond to different sample sizes (50, 100, 200, 300, 400, 500, and 1000).
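The text states that $\rho_i$ can be computed in closed form for Gaussian mixtures. Assuming $\rho$ is the $L_2$ correlation $\langle f, f_i \rangle / (\langle f, f \rangle \langle f_i, f_i \rangle)^{1/2}$ (our reading of the garbled formula), the closed form follows from the Gaussian product integral $\int N(x; \mu_1, \Sigma_1)\, N(x; \mu_2, \Sigma_2)\, dx = N(\mu_1; \mu_2, \Sigma_1 + \Sigma_2)$; a sketch with names of our choosing:

```python
import numpy as np

def gauss_overlap(mu1, cov1, mu2, cov2):
    """Closed-form Gaussian product integral:
    integral of N(x; mu1, cov1) N(x; mu2, cov2) dx = N(mu1; mu2, cov1+cov2)."""
    d = len(mu1)
    c = cov1 + cov2
    diff = np.asarray(mu1, dtype=float) - np.asarray(mu2, dtype=float)
    return np.exp(-0.5 * diff @ np.linalg.solve(c, diff)) / np.sqrt(
        (2.0 * np.pi) ** d * np.linalg.det(c))

def mixture_correlation(pi_f, mu_f, cov_f, pi_g, mu_g, cov_g):
    """L2 correlation rho = <f,g> / sqrt(<f,f><g,g>) between two Gaussian
    mixtures, computed exactly via pairwise component overlaps."""
    def inner(pa, ma, ca, pb, mb, cb):
        return sum(pa[i] * pb[j] * gauss_overlap(ma[i], ca[i], mb[j], cb[j])
                   for i in range(len(pa)) for j in range(len(pb)))
    fg = inner(pi_f, mu_f, cov_f, pi_g, mu_g, cov_g)
    return fg / np.sqrt(inner(pi_f, mu_f, cov_f, pi_f, mu_f, cov_f)
                        * inner(pi_g, mu_g, cov_g, pi_g, mu_g, cov_g))

pi1 = [0.5, 0.5]
mu1 = [np.zeros(2), np.full(2, 3.0)]
cov1 = [np.eye(2), np.eye(2)]
assert np.isclose(mixture_correlation(pi1, mu1, cov1, pi1, mu1, cov1), 1.0)
mu2 = [np.full(2, 50.0), np.full(2, 60.0)]  # far from the first mixture
assert mixture_correlation(pi1, mu1, cov1, pi1, mu2, cov1) < 1e-6
```

Identical mixtures give $\rho = 1$ exactly, and mixtures with negligible overlap give $\rho$ near zero, which is the behavior the Monte Carlo experiment relies on.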
As $\sigma_k$ is increased, variance decreases at the cost of an increase in bias (the reason why, for large $\sigma_k$, all the lines in the graph on the left meet at the same point regardless of the sample size). The point where $\bar{\rho}$ is highest is the point at which the bias-variance trade-off is optimal. Operating at this point leads to a much smaller dependence of the accuracy of the estimates on the sample size or, conversely, the need for much smaller samples to achieve a given degree of accuracy.

References

[1] A. Dempster, N. Laird, and D. Rubin. Maximum-likelihood from Incomplete Data via the EM Algorithm. J. of the Royal Statistical Society, B-39, 1977.
[2] H. Murase and S. Nayar. Visual Learning and Recognition of 3-D Objects from Appearance. International Journal of Computer Vision, 14:5-24, 1995.
[3] K. Rose, E. Gurewitz, and G. Fox. Vector Quantization by Deterministic Annealing. IEEE Trans. on Information Theory, Vol. 38, July 1992.
[4] J. Simonoff. Smoothing Methods in Statistics. Springer-Verlag, 1996.
[5] N. Vasconcelos and A. Lippman. Learning Mixture Hierarchies. Technical report, MIT Media Laboratory, 1998. Available from ftp://ftp.media.mit.edu/pub/nuno/HierMix.ps.gz.
[6] N. Vasconcelos and A. Lippman. Content-based Pre-Indexed Video. In Proc. Int. Conf. Image Processing, Santa Barbara, California, 1997.
On the optimality of incremental neural network algorithms

Ron Meir* Department of Electrical Engineering, Technion, Haifa 32000, Israel rmeir@dumbo.technion.ac.il

Vitaly Maiorov† Department of Mathematics, Technion, Haifa 32000, Israel maiorov@tx.technion.ac.il

Abstract

We study the approximation of functions by two-layer feedforward neural networks, focusing on incremental algorithms which greedily add units, estimating single unit parameters at each stage. As opposed to standard algorithms for fixed architectures, the optimization at each stage is performed over a small number of parameters, mitigating many of the difficult numerical problems inherent in high-dimensional non-linear optimization. We establish upper bounds on the error incurred by the algorithm when approximating functions from the Sobolev class, thereby extending previous results which only provided rates of convergence for functions in certain convex hulls of functional spaces. By comparing our results to recently derived lower bounds, we show that the greedy algorithms are nearly optimal. Combined with estimation error results for greedy algorithms, a strong case can be made for this type of approach.

1 Introduction and background

A major problem in the application of neural networks to real world problems is the excessively long time required for training large networks of a fixed architecture. Moreover, theoretical results establish the intractability of such training in the worst case [9][4]. Additionally, the problem of determining the architecture and size of the network required to solve a certain task is left open. Due to these problems, several authors have considered incremental algorithms for constructing the network by adding hidden units and estimating each unit's parameters incrementally.
These approaches possess two desirable attributes: first, the optimization is done step-wise, so that only a small number of parameters need to be optimized at each stage; and second, the structure of the network is established concomitantly with the learning, rather than being specified in advance. (*This work was supported in part by a grant from the Israel Science Foundation. †The author was partially supported by the Center for Absorption in Science, Ministry of Immigrant Absorption, State of Israel.) However, until recently these algorithms have been rather heuristic in nature, as no guaranteed performance bounds had been established. Note that while there has been a recent surge of interest in these types of algorithms, they in fact date back to work done in the early seventies (see [3] for a historical survey). The first theoretical result establishing performance bounds for incremental approximations in Hilbert space was given by Jones [8]. This work was later extended by Barron [2], and applied to neural network approximation of functions characterized by certain conditions on their Fourier coefficients. The work of Barron has been extended in two main directions. First, Lee et al. [10] have considered approximating general functions using Hilbert space techniques, while Donahue et al. [7] have provided powerful extensions of Jones' and Barron's results to general Banach spaces. One of the most impressive results of the latter work is the demonstration that iterative algorithms can, in many cases, achieve nearly optimal rates of convergence when approximating convex hulls. While this paper is concerned mainly with issues of approximation, we comment that it is highly relevant to the statistical problem of learning from data in neural networks. First, Lee et al. [10] give estimation error bounds for algorithms performing incremental optimization with respect to the training error.
Under certain regularity conditions, they are able to achieve rates of convergence comparable to those obtained by the much more computationally demanding algorithm of empirical error minimization. Moreover, it is well known that upper bounds on the approximation error are needed in order to obtain performance bounds, both for parametric and nonparametric estimation, where the latter is achieved using the method of complexity regularization. Finally, as pointed out by Donahue et al. [7], lower bounds on the approximation error are crucial in establishing worst-case speed limitations for learning. The main contribution of this paper is as follows. For functions belonging to the Sobolev class (see definition below), we establish, under appropriate conditions, near-optimal rates of convergence for the incremental approach, and obtain explicit bounds on the parameter values of the network. The latter bounds are often crucial for establishing estimation error rates. In contrast to the work in [10] and [7], we characterize approximation rates for functions belonging to standard smoothness classes, such as the Sobolev class. The former work establishes rates of convergence with respect to the convex hulls of certain subsets of functions, which do not relate in any simple way to standard functional classes (such as Lipschitz, Sobolev, Hölder, etc.). As far as we are aware, the results reported here are the first such bounds for incremental neural network procedures. A detailed version of this work, complete with detailed proofs, is available in [13].

2 Problem statement

We make use of the nomenclature and definitions from [7]. Let $\mathcal{H}$ be a Banach space of functions with norm $\|\cdot\|$. For concreteness we assume henceforth that the norm is given by the $L_q$ norm, $1 < q < \infty$, denoted by $\|\cdot\|_q$. Let $\mathrm{lin}_n \mathcal{H}$ consist of all sums of the form $\sum_{i=1}^n a_i g_i$, with $g_i \in \mathcal{H}$ and arbitrary $a_i$, and let $\mathrm{co}_n \mathcal{H}$ be the set of such sums with $a_i \in [0,1]$ and $\sum_{i=1}^n a_i = 1$.
The distances, measured in the $L_q$ norm, from a function $f$ are given by $\mathrm{dist}(\mathrm{lin}_n\mathcal{H}, f) = \inf\{\|h - f\|_q : h \in \mathrm{lin}_n\mathcal{H}\}$ and $\mathrm{dist}(\mathrm{co}_n\mathcal{H}, f) = \inf\{\|h - f\|_q : h \in \mathrm{co}_n\mathcal{H}\}$. The linear span of $\mathcal{H}$ is given by $\mathrm{lin}\,\mathcal{H} = \cup_n \mathrm{lin}_n\mathcal{H}$, while the convex hull of $\mathcal{H}$ is $\mathrm{co}\,\mathcal{H} = \cup_n \mathrm{co}_n\mathcal{H}$. We follow standard notation and denote closures of sets by a bar, e.g. $\overline{\mathrm{co}\,\mathcal{H}}$ is the closure of the convex hull of $\mathcal{H}$. In this work we focus on the special case where $\mathcal{H} = \mathcal{H}_\eta \triangleq \{g : g(x) = c\,\sigma(a^T x + b),\ |c| \le \eta,\ \|\sigma(\cdot)\|_q \le 1\}$, (1) corresponding to the basic building blocks of multilayer neural networks. The restriction $\|\sigma(\cdot)\|_q \le 1$ is not very demanding, as many sigmoidal functions can be expressed as a sum of functions of bounded norm. It should be obvious that $\mathrm{lin}_n \mathcal{H}_\eta$ corresponds to a two-layer neural network with a linear output unit and $\sigma$-activation functions in the single hidden layer, while $\mathrm{co}_n \mathcal{H}_\eta$ is equivalent to a restricted form of such a network, where restrictions are placed on the hidden-to-output weights. In terms of the definitions introduced above, the by now well-known property of universal function approximation over compacta can be stated as $\overline{\mathrm{lin}\,\mathcal{H}} = C(M)$, where $C(M)$ is the class of continuous real-valued functions defined over $M$, a compact subset of $\mathbf{R}^d$. A necessary and sufficient condition for this has been established by Leshno et al. [11], and essentially requires that $\sigma(\cdot)$ be locally integrable and non-polynomial. We comment that if $\eta = \infty$ in (1), and $c$ is unrestricted in sign, then $\overline{\mathrm{co}\,\mathcal{H}_\infty} = \overline{\mathrm{lin}\,\mathcal{H}_\infty}$. The distinction becomes important only if $\eta < \infty$, in which case $\mathrm{co}\,\mathcal{H}_\eta \subset \mathrm{lin}\,\mathcal{H}_\eta$. For the purpose of incremental approximation, it turns out to be useful to consider the convex hull $\mathrm{co}\,\mathcal{H}$, rather than the usual linear span, as powerful algorithms and performance bounds can be developed in this case.
In this context several authors have considered bounds for the approximation of a function $f$ belonging to $\overline{\mathrm{co}\,\mathcal{H}}$ by sequences of functions belonging to $\mathrm{co}_n\mathcal{H}$. However, it is not clear in general how well convex hulls of bounded functions approximate general functions. One contribution of this work is to show how one may control the rate of growth of the bound $\eta$ in (1), so that general functions, belonging to certain smoothness classes (e.g. Sobolev), may be well approximated. In fact, we show that the incremental approximation scheme described below achieves nearly optimal approximation error for functions in the Sobolev space. Following Donahue et al. [7], we consider $\varepsilon$-greedy algorithms. Let $\varepsilon = (\varepsilon_1, \varepsilon_2, \ldots)$ be a positive sequence, and similarly for $(\alpha_1, \alpha_2, \ldots)$, $0 < \alpha_n < 1$. A sequence of functions $h_1, h_2, \ldots$ is $\varepsilon$-greedy with respect to $f$ if for $n = 0, 1, 2, \ldots$, $\|h_{n+1} - f\|_q < \inf\{\|\alpha_n h_n + (1 - \alpha_n) g - f\|_q : g \in \mathcal{H}_\eta\} + \varepsilon_n$, (2) where we set $h_0 = 0$. For simplicity we set $\alpha_n = (n-1)/n$, although other schemes are also possible. It should be clear that at each stage $n$, the function $h_n$ belongs to $\mathrm{co}_n\mathcal{H}_\eta$. Observe also that at each step the infimum is taken with respect to $g \in \mathcal{H}_\eta$, the function $h_n$ being fixed. In terms of neural networks, this implies that the optimization over each hidden unit's parameters $(a, b, c)$ is performed independently of the others. We note in passing that, while this greatly facilitates the optimization process in practice, no theoretical guarantee can be made as to the convexity of the single-node error function (see [1] for counter-examples). The variables $\varepsilon_n$ are slack variables, allowing the extra freedom of only approximate minimization. In this paper we do not optimize over $\alpha_n$, but rather fix a sequence in advance, forfeiting some generality at the price of a simpler presentation. In any event, the rates we obtain are unchanged by such a restriction.
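As a concrete illustration of the $\varepsilon$-greedy scheme in (2), the sketch below incrementally approximates a one-dimensional target with tanh ridge units. The per-stage infimum over $g \in \mathcal{H}_\eta$ is replaced by a search over a random candidate pool, whose gap to the true infimum plays the role of the slack $\varepsilon_n$. The target function, pool size, parameter ranges, and the value of $\eta$ are all illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function and evaluation grid (the L2 norm is approximated on the grid).
x = np.linspace(-1.0, 1.0, 200)
f = np.sin(3 * x)

def unit(a, b, c):
    # One element of H_eta: a bounded ridge function c * sigma(a x + b).
    return c * np.tanh(a * x + b)

# eps-greedy incremental approximation, eq. (2), with alpha_n = (n-1)/n.
eta = 2.0       # assumed bound on |c|
n_units = 30
pool = 500      # random candidates standing in for the infimum over H_eta
h = np.zeros_like(x)
errors = []
for n in range(1, n_units + 1):
    alpha = (n - 1) / n
    best, best_err = None, np.inf
    for _ in range(pool):
        a, b = rng.uniform(-5, 5, size=2)
        c = rng.uniform(-eta, eta)
        # Only the new unit's parameters are optimized; h is held fixed.
        cand = alpha * h + (1 - alpha) * unit(a, b, c)
        err = np.sqrt(np.mean((cand - f) ** 2))
        if err < best_err:
            best_err, best = err, cand
    h = best
    errors.append(best_err)

print(f"error after 1 unit:  {errors[0]:.3f}")
print(f"error after {n_units} units: {errors[-1]:.3f}")
```

Note that each stage solves only a 3-parameter problem, which is the computational attraction of the greedy approach discussed above.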
In the sequel we consider $\varepsilon$-greedy approximations of smooth functions belonging to the Sobolev class of functions, $W_2^r = \{f : \max_{0 \le |k| \le r} \|D^k f\|_2 \le 1\}$, where $k = (k_1, \ldots, k_d)$, $k_i \ge 0$ and $|k| = k_1 + \cdots + k_d$. Here $D^k$ is the partial derivative operator of order $k$. All functions are defined over a compact domain $K \subset \mathbf{R}^d$.

3 Upper bound for the $L_2$ norm

First, we consider the approximation of functions from $W_2^r$ using the $L_2$ norm. In distinction with other $L_q$ norms, there exists an inner product in this case, defined through $(f, f) = \|f\|_2^2$. This simplification is essential to the proof in this case. We begin by recalling a result from [12], demonstrating that any function in $L_2$ may be exactly expressed as a convex integral representation of the form $f(x) = Q \int h(x, \theta) w(\theta)\, d\theta$, (3) where $0 < Q < \infty$ depends on $f$, and $w(\theta)$ is a probability density function (pdf) with respect to the multi-dimensional variable $\theta$. Thus, we may write $f(x) = Q\, \mathrm{E}_w\{h(x, \Theta)\}$, where $\mathrm{E}_w$ denotes the expectation operator with respect to the pdf $w$. Moreover, it was shown in [12], using the Radon and wavelet transforms, that the function $h(x, \theta)$ can be taken to be a ridge function with $\theta = (a, b, c)$ and $h(x, \theta) = c\,\sigma(a^T x + b)$. In the case of neural networks, this type of convex representation was first exploited by Barron in [2], assuming $f$ belongs to a class of functions characterized by certain moment conditions on their Fourier transforms. Later, Delyon et al. [6] and Maiorov and Meir [12] extended Barron's results to the case of wavelets and neural networks, respectively, obtaining rates of convergence for functions in the Sobolev class. The basic idea at this point is to generate an approximation, $h_n(x)$, based on $n$ draws of random variables $\Theta^n = \{\Theta_1, \Theta_2, \ldots, \Theta_n\}$, $\Theta_i \sim w(\cdot)$, resulting in the random function $h_n(x; \theta^n) = \frac{Q}{n} \sum_{i=1}^n h(x, \theta_i)$. (4)
Throughout the paper we conform to standard notation, and denote random variables by uppercase letters, as in $\Theta$, and their realizations by lowercase letters, as in $\theta$. Let $w^n = \prod_{i=1}^n w_i$ represent the product pdf for $\{\Theta_1, \ldots, \Theta_n\}$. Our first result demonstrates that, on average, the above procedure leads to good approximation of functions belonging to $W_2^r$.

Theorem 3.1 Let $K \subset \mathbf{R}^d$ be a compact set. Then for any $f \in W_2^r$, $n > 0$ and $\varepsilon > 0$ there exists a constant $c > 0$ such that $\mathrm{E}_{w^n} \|f - h_n(x; \Theta^n)\|_2 \le c\, n^{-r/d + \varepsilon}$, (5) where $Q \le c\, n^{(1/2 - r/d)_+}$ and $(x)_+ = \max(0, x)$.

The implication of the upper bound on the expected value is that there exists a set of values $\theta^{*,n} = \{\theta_1^*, \ldots, \theta_n^*\}$ for which the rate (5) can be achieved. Moreover, as long as the functions $h(x, \theta_i)$ in (4) are bounded in the $L_2$ norm, a bound on $Q$ implies a bound on the size of the function $h_n$ itself. Proof sketch The proof proceeds by expressing $f$ as the sum of two functions, $f_1$ and $f_2$. The function $f_1$ is the best approximation to $f$ from the class of multivariate splines of degree $r$. From [12] we know that there exist parameters $\theta^n$ such that $\|f_1(\cdot) - h_n(\cdot, \theta^n)\|_2 \le c\, n^{-r/d}$. Moreover, using the results of [5] it can be shown that $\|f_2\|_2 \le c\, n^{-r/d}$. Using these two observations, together with the triangle inequality $\|f - h_n\|_2 \le \|f_1 - h_n\|_2 + \|f_2\|_2$, yields the desired result. Next, we show that given the approximation rates attained in Theorem 3.1, the same rates may be obtained using an $\varepsilon$-greedy algorithm. Moreover, since in [12] we have established the optimality of the upper bound (up to a logarithmic factor in $n$), we conclude that greedy approximations can indeed yield near-optimal performance, while at the same time being much more attractive computationally. In fact, in this section we use a weaker algorithm, which does not perform a full minimization at each stage.
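The sampling procedure behind (3) and (4) can be sketched directly: if $f = Q\,\mathrm{E}_w\{h(x,\Theta)\}$, then averaging $n$ i.i.d. draws from $w$ gives the estimator $h_n$, whose error shrinks as $n$ grows. The parameter density $w$ and ridge function below are toy assumptions chosen so that $f$ is known (as a very large sample average), not the Radon/wavelet construction of [12], so the observed decay is the plain Monte Carlo rate rather than the refined rate of Theorem 3.1.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 100)
Q = 1.0

def h_n(n, seed):
    """Monte Carlo estimator of eq. (4): (Q/n) * sum_i h(x, theta_i),
    with theta = (a, b) drawn from a toy density w (standard normals)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    return Q * np.tanh(np.outer(x, a) + b).mean(axis=1)

# Proxy for f = Q * E_w[h(x, Theta)]: a very large sample average.
f = h_n(50000, seed=0)

# L2 error of the n-term random approximation for increasing n.
errs = {n: float(np.sqrt(np.mean((h_n(n, seed=1) - f) ** 2)))
        for n in (10, 100, 1000)}
print(errs)
```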
Incremental algorithm ($q = 2$): Let $\alpha_n = 1 - 1/n$, $\bar\alpha_n = 1 - \alpha_n = 1/n$.

1. Let $\theta_1^*$ be chosen to satisfy $\|f(x) - Q\, h(x, \theta_1^*)\|_2^2 \le \mathrm{E}_{w_1}\{\|f(x) - Q\, h(x, \Theta_1)\|_2^2\}$.

2. Assume that $\theta_1^*, \theta_2^*, \ldots, \theta_{n-1}^*$ have been generated. Select $\theta_n^*$ to obey $\big\| f(x) - \frac{Q\alpha_n}{n-1}\sum_{i=1}^{n-1} h(x,\theta_i^*) - \bar\alpha_n Q\, h(x,\theta_n^*) \big\|_2^2 \le \mathrm{E}_{w_n}\big\{ \big\| f(x) - \frac{Q\alpha_n}{n-1}\sum_{i=1}^{n-1} h(x,\theta_i^*) - \bar\alpha_n Q\, h(x,\Theta_n) \big\|_2^2 \big\}$.

Define $e_n = \big\| f - \frac{Q}{n}\sum_{i=1}^{n} h(\cdot,\theta_i^*) \big\|_2$, which measures the error incurred at the $n$-th stage by this incremental procedure. The main result of this section then follows.

Theorem 3.2 For any $f \in W_2^r$ and $\varepsilon > 0$, the error of the incremental algorithm above is bounded as $e_n \le c\, n^{-r/d + \varepsilon}$ for some finite constant $c$.

Proof sketch The claim will be established upon showing that $e_n^2 \le \mathrm{E}_{w^n}\{\|f - h_n(\cdot; \Theta^n)\|_2^2\}$, (6) namely, the error incurred by the incremental procedure is no larger than that of the non-incremental one described preceding Theorem 3.1. The result will then follow upon using Hölder's inequality and the upper bound (5) for the r.h.s. of (6). The remaining details are straightforward, but tedious, and can be found in the full paper [13].

4 Upper bound for general $L_q$ norms

Having established rates of convergence for incremental approximation of $W_2^r$ in the $L_2$ norm, we move now to general $L_q$ norms. First, note that the proof of Theorem 3.2 relies heavily on the existence of an inner product. This useful tool is no longer available in the case of general Banach spaces such as $L_q$. In order to extend the results to the latter norm, we need to use more advanced ideas from the theory of the geometry of Banach spaces. In particular, we will make use of recent results from the work of Donahue et al. [7]. Second, we must keep in mind that the approximation of the Sobolev space $W_2^r$ using the $L_q$ norm only makes sense if the embedding condition $r/d > (1/2 - 1/q)_+$ holds, since otherwise the $L_q$ norm may be infinite (the embedding condition guarantees its finiteness; see [14] for details). We first present the main result of this section, followed by a sketch of the proof.
The full details of the rather technical proof can be found in [13]. Note that in this case we need to use the greedy algorithm (2) rather than the algorithm of Section 3.

Theorem 4.1 Let the embedding condition $r/d > (1/2 - 1/q)_+$ hold for $1 < q < \infty$, $0 < r < r^*$, $r^* = \frac{d}{2} + d\left(\frac{1}{2} - \frac{1}{q}\right)_+$, and assume that $\|h(\cdot, \theta)\|_q \le 1$ for all $\theta$. Then for any $f \in W_2^r$ and $\varepsilon > 0$, $\|f - h_n(\cdot; \theta^n)\|_q \le c\, n^{-\gamma + \varepsilon}$, (7) where $\gamma = r/d$ for $q \le 2$, while for $q > 2$ the exponent $\gamma$ is strictly smaller than $r/d$ and is given explicitly in [13]; here $c = c(r, d, K)$, and $h_n(\cdot, \theta^n)$ is obtained via the incremental greedy algorithm (2) with $\varepsilon_n = 0$.

Proof sketch The main idea in the proof of Theorem 4.1 is a two-part approximation scheme. First, based on [13], we show that any $f \in W_2^r$ may be well approximated by functions in the convex class $\mathrm{co}_n(\mathcal{H}_\eta)$ for an appropriate value of $\eta$ (see Lemma 5.2 in [13]), where $\mathcal{H}_\eta$ is defined in (1). Then it is argued, making use of results from [7] (in particular, Corollary 3.6), that an incremental greedy algorithm can be used to approximate the closure of the class $\mathrm{co}(\mathcal{H}_\eta)$ by the class $\mathrm{co}_n(\mathcal{H}_\eta)$. The proof is completed by using the triangle inequality. The proof along the above lines is done for the case $q > 2$. In the case $q \le 2$, a simple use of the Hölder inequality in the form $\|f\|_q \le |K|^{1/q - 1/2}\|f\|_2$, where $|K|$ is the volume of the region $K$, yields the desired result, which, given the lower bounds in [12], is nearly optimal.

5 Discussion

We have presented a theoretical analysis of an increasingly popular approach to incremental learning in neural networks. Extending previous results, we have shown that near-optimal rates of convergence may be obtained for approximating functions in the Sobolev class $W_2^r$. These results extend and clarify previous work dealing solely with the approximation of functions belonging to the closure of convex hulls of certain sets of functions. Moreover, we have given explicit bounds on the parameters used in the algorithm, and shown that the restriction to $\mathrm{co}_n \mathcal{H}_\eta$ is not too stringent.
In the case $q \le 2$ the rates obtained are as good (up to logarithmic factors) as the rates obtained for general spline functions, which are known to be optimal for approximating Sobolev spaces. The rates obtained in the case $q > 2$ are sub-optimal compared to spline functions, but can be shown to be provably better than any linear approach. In any event, we have shown that the rates obtained are equal, up to logarithmic factors, to approximation from $\mathrm{lin}_n \mathcal{H}_\eta$ when the size of $\eta$ is chosen appropriately, implying that positive hidden-to-output weights suffice for approximation. An open problem remaining at this point is to demonstrate whether incremental algorithms for neural network construction can be shown to be optimal for every value of $q$. In fact, this is not even known at this stage for neural network approximation in general.

References

[1] P. Auer, M. Herbster, and M. Warmuth. Exponentially many local minima for single neurons. In D.S. Touretzky, M.C. Mozer, and M.E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 316-322. MIT Press, 1996. [2] A.R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inf. Th., 39:930-945, 1993. [3] A.R. Barron and R.L. Barron. Statistical learning networks: a unifying view. In E. Wegman, editor, Computing Science and Statistics: Proceedings 20th Symposium Interface, pages 192-203, Washington D.C., 1988. Amer. Statis. Assoc. [4] A. Blum and R. Rivest. Training a 3-node neural net is NP-complete. In D.S. Touretzky, editor, Advances in Neural Information Processing Systems 1, pages 494-501. Morgan Kaufmann, 1989. [5] C. de Boor and G. Fix. Spline approximation by quasi-interpolation. J. Approx. Theory, 7:19-45, 1973. [6] B. Delyon, A. Juditsky, and A. Benveniste. Accuracy analysis for wavelet approximations. IEEE Transactions on Neural Networks, 6:332-348, 1995. [7] M.J. Donahue, L. Gurvits, C.
Darken, and E. Sontag. Rates of convex approximation in non-Hilbert spaces. Constructive Approx., 13:187-220, 1997. [8] L. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rate for projection pursuit regression and neural network training. Ann. Statis., 20:608-613, 1992. [9] S. Judd. Neural Network Design and the Complexity of Learning. MIT Press, Boston, USA, 1990. [10] W.S. Lee, P.L. Bartlett, and R.C. Williamson. Efficient agnostic learning of neural networks with bounded fan-in. IEEE Trans. Inf. Theory, 42(6):2118-2132, 1996. [11] M. Leshno, V. Lin, A. Pinkus, and S. Schocken. Multilayer Feedforward Networks with a Nonpolynomial Activation Function Can Approximate any Function. Neural Networks, 6:861-867, 1993. [12] V.E. Maiorov and R. Meir. On the near optimality of the stochastic approximation of smooth functions by neural networks. Technical Report CC-223, Technion, Department of Electrical Engineering, November 1997. Submitted to Advances in Computational Mathematics. [13] R. Meir and V. Maiorov. On the optimality of neural network approximation using incremental algorithms. Submitted for publication, October 1998. ftp://dumbo.technion.ac.il/pub/PAPERS/incremental.pdf. [14] H. Triebel. Theory of Function Spaces. Birkhauser, Basel, 1983.
|
1998
|
77
|
1,578
|
Optimizing admission control while ensuring quality of service in multimedia networks via reinforcement learning* Timothy X Brown†, Hui Tong†, Satinder Singh‡ †Electrical and Computer Engineering ‡Computer Science University of Colorado Boulder, CO 80309-0425 {timxb, tongh, baveja}@colorado.edu

Abstract

This paper examines the application of reinforcement learning to a telecommunications networking problem. The problem requires that revenue be maximized while simultaneously meeting a quality of service constraint that forbids entry into certain states. We present a general solution to this multi-criteria problem that is able to earn significantly higher revenues than alternatives.

1 Introduction

A number of researchers have recently explored the application of reinforcement learning (RL) to resource allocation and admission control problems in telecommunications, e.g., channel allocation in wireless systems, network routing, and admission control in telecommunication networks [1, 6, 7, 8]. Telecom problems are attractive applications for RL research because good, simple-to-implement simulation models exist for them in the engineering literature that are widely used and whose results are trusted, because there are existing solutions to compare with, because small improvements over existing methods can lead to significant savings in the long run, because they have discrete states, and because there are many potential commercial applications. However, existing RL applications have ignored an issue of great practical importance to telecom engineers: that of ensuring quality of service (QoS) while simultaneously optimizing whatever resource allocation performance criterion is of interest. This paper will focus on admission control for broadband multimedia communication networks.
These networks are unlike the current internet in that voice, video, and data calls arrive and depart over time and, in exchange for giving QoS guarantees to customers, the network collects revenue for calls that it accepts into the network. In this environment, admission control decides what calls to accept into the network so as to maximize the earned revenue while meeting the QoS guarantees of all carried customers. (*Timothy Brown and Hui Tong were funded by NSF CAREER Award NCR-9624791. Satinder Singh was funded by NSF grant IIS-9711753.) Meeting QoS requires a decision function that decides when adding a new call will violate QoS guarantees. Given the diverse nature of voice, video, and data traffic, and their often complex underlying statistics, finding good QoS decision functions has been the subject of intense research [2, 5]. Recent results have emphasized that robust and efficient QoS decision functions require on-line adaptive methods [3]. Given a QoS decision function, deciding which of the heterogeneous arriving calls to accept and which to reject in order to maximize revenue can be framed as a dynamic programming problem. The rapid growth in the number of states with problem complexity has led to reinforcement learning approaches to the problem [6]. In this paper we consider the problem of finding a control policy that simultaneously meets QoS guarantees and maximizes the network's earned revenue. We show that the straightforward approach of mixing positive rewards for revenue with negative rewards for violating QoS leads to sub-optimal policies. Ideally we would like to find the optimal policy from the subset of policies that never violate the QoS constraint. But there is no a priori useful way to characterize the space of policies that don't violate the QoS constraint. We present a general approach to meeting such multi-criteria that solves this problem and potentially many other applications.
Experiments show that incorporating QoS and RL yields significant gains over some alternative heuristics.

2 Problem Description

This section describes the admission control problem model that will be used. To emphasize the main features of the problem, networking issues such as queueing that are not essential have been simplified or eliminated. It should be emphasized that these aspects can readily be incorporated back into the problem. We focus on a single network link. Users attempt to access the link over time and the network immediately chooses to accept or reject each call. If accepted, the call generates traffic in terms of bandwidth as a function of time. At a later time, the call terminates and departs from the network. For each call accepted, the network receives revenue at a fixed rate over the duration of the call. The network measures QoS metrics such as transmission delays or packet loss rates and compares them against the guarantees given to the calls. Thus, the problem is described by the call arrival, traffic, and departure processes; the revenue rates; QoS metrics; QoS constraints; and the link model. The choices used in this paper are given in the next paragraph. Calls are divided into discrete classes indexed by $i$. The calls are generated via a Poisson arrival process (arrival rate $\lambda_i$) with exponential holding times (mean holding time $1/\mu_i$). Within a call, the bandwidth is an ON/OFF process where the traffic is either ON at rate $r_i$ or OFF at rate zero, with mean holding times $1/\nu_i^{ON}$ and $1/\nu_i^{OFF}$. The immediate revenue rate is $c_i$. The link has a fixed bandwidth $B$. The total bandwidth used by accepted calls varies over time. The QoS metric is the fraction of time that the total bandwidth exceeds the link bandwidth (i.e. the overload probability, $p$). The QoS guarantee is an upper limit, $p^*$. In previous work each call had a constant bandwidth over time, so that the effect on QoS was predictable.
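To make the overload-probability metric concrete, the sketch below estimates it for a set of admitted ON/OFF calls by sampling each call's stationary ON/OFF state. This ignores temporal correlation (the paper measures the overload fraction over time along a sample path), so it is only a stationary-state approximation; the parameters are taken from the Type I column of Table 1, and the call counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def overload_probability(n_calls, rate, p_on, link_bw, n_samples=20000):
    """Estimate P(total bandwidth > link capacity) for n_calls i.i.d.
    ON/OFF sources by sampling each call's stationary state.
    p_on = mean_on / (mean_on + mean_off) is the stationary ON probability."""
    on = rng.random((n_samples, n_calls)) < p_on
    total = on.sum(axis=1) * rate
    return float((total > link_bw).mean())

# Type I parameters: rate 0.08, mean ON 5, mean OFF 15 -> p_on = 0.25.
p_on = 5 / (5 + 15)

# Peak rate allocation would admit at most 12 such calls (12 * 0.08 <= 1.0).
p_few = overload_probability(5, 0.08, p_on, link_bw=1.0)    # can never overload
p_many = overload_probability(30, 0.08, p_on, link_bw=1.0)  # peak rate 2.4 > 1.0
print(p_few, p_many)
```

This illustrates the statistical multiplexing at stake: far more than 12 calls can be carried with a small but nonzero overload probability, which is exactly the quantity the QoS constraint $p < p^*$ must bound.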
Variable rate traffic is safely approximated by assuming that it always transmits at its maximum or peak rate. Such so-called peak rate allocation under-utilizes the network, in some cases by orders of magnitude relative to what is possible. Stochastic traffic rates in real traffic, the desire for high network utilization/revenue, and the resulting potential for QoS violations distinguish the problem in this paper.

3 Semi-Markov Decision Processes

At any given point in time, the system is in a particular configuration, $x$, defined by the number of ongoing calls of each type. At random times a call arrival or a call termination event, $e$, can occur. The configuration and event together determine the state of the system, $s = (x, e)$. When an event occurs, the learner has to choose an action feasible for that event. The choice of action, the event, and the configuration deterministically define the next configuration and the payoff received by the learner. Then after an interval the next event occurs, and this cycle repeats. The task of the learner is to determine a policy that maximizes the discounted sum of payoffs over an infinite horizon. Such a system constitutes a finite state, finite action, semi-Markov decision process (SMDP).

3.1 Multi-criteria Objective

The admission control objective is to learn a policy that assigns an accept or reject decision to each possible state of the system so as to maximize $J = \mathrm{E}\left\{\int_0^\infty \gamma^t c(t)\, dt\right\}$, where $\mathrm{E}\{\cdot\}$ is the expectation operator, $c(t)$ is the total revenue rate of ongoing calls at time $t$, and $\gamma \in (0, 1)$ is a discount factor that makes immediate profit more valuable than future profit.¹ In this paper we restrict the maximization to policies that never enter states that violate QoS guarantees. In general SMDPs, due to stochastic state transitions, meeting such constraints may not be possible (e.g.
from any state, no matter what actions are taken, there is a possibility of entering restricted states). In this problem service quality decreases with more calls in the system, and adding calls is strictly controlled by the admission controller, so that meeting this QoS constraint is possible.

3.2 Q-learning

RL methods solve SMDP problems by learning good approximations to the optimal value function, $J^*$, given by the solution to the Bellman optimality equation, which takes the following form for the dynamic call admission problem: $J^*(s) = \max_{a \in A(s)} \mathrm{E}_{\Delta t,\, s'}\left\{c(s, a, \Delta t) + \gamma(\Delta t)\, J^*(s')\right\}$, (1) where $A(s)$ is the set of actions available in the current state $s$, $\Delta t$ is the random time until the next event, $c(s, a, \Delta t)$ is the effective immediate payoff with the discounting, and $\gamma(\Delta t)$ is the effective discount for the next state $s'$. We learn an approximation to $J^*$ using Watkins' Q-learning algorithm. To focus on the dynamics of this paper's problem and not on the confounding dynamics of function approximation, the problem state space is kept small enough that table lookup can be used. Bellman's equation can be rewritten in Q-values as $J^*(s) = \max_{a \in A(s)} Q^*(s, a)$. (2) Call Arrival: When a call arrives, the Q-value of accepting the call and the Q-value of rejecting the call are determined. If rejection has the higher value, we drop the call. Else, if acceptance has the higher value, we accept the call. Call Termination: No action needs to be taken. Whatever our decision, we update our value function as follows: on a transition from state $s$ to $s'$ on action $a$ in time $\Delta t$, $Q(s, a) \leftarrow (1 - \alpha)\, Q(s, a) + \alpha \left( c(s, a, \Delta t) + \gamma(\Delta t) \max_{b \in A(s')} Q(s', b) \right)$, (3) ¹Since we will compare policies based on total reward rather than discounted sum of reward, we can use the Tauberian approximation [4], i.e., $\gamma$ is chosen to be sufficiently close to 1.
In order for Q-Iearning to perform well, all potentially important state-action pairs (s, a) must be explored. At each state, with probability E we apply an action that will lead to a less visited configuration, instead of the action recommended by the Q-value. However, to update Q-values we still use the action b recommended by the Q-Iearning. 4 Combining Revenue and Quality of Service The primary question addressed in this paper is how to combine the QoS constraint with the objective of maximizing revenue within this constraint. Let p(s, a, ~t) and q(s, a, ~t) be the revenue and measured QoS components of the reward, c(s, a, ~t). Ideally c(s, a, ~t) = p(s, a, ~t) when the QoS constraint is met and c(s, a, ~t) = -Large (where -Large is any large negative value) when QoS is not met. If the QoS parameters could be accurately measured between each state transition then this approach would be a valid solution to the problem. In network systems, the QoS metrics contain a high-degree of variability. For example, overload probabilities can be much smaller than 10-3 while the interarrival periods can be only a few ON/OFF cycles so that except for states with the most egregious QoS violations, most interarrival periods will have no overloads. If the reward is a general function of revenue and QoS: c(s, a, ~t) = f(p(s, a, ~t), q(s, a, ~t)), (4) sufficient and necessary condition for inducing optimal policy with the QoS constraint is given by: E{J(p(s,a,~t),q(s,a,~t))} = { ~t~~~a,~t)} ifE{q(s,a,~t)} <p* otherwise (5) For fe) satisfying this condition, states that violate QoS will be highly penalized and never visited. The actions for states that are visited will be based solely on revenue. The Appendix gives a simple example showing that finding a fO that yields the optimal policy is unlikely without significant prior knowledge about each state. Several attempts at using (4) to combine QoS and revenue into the reward either violated QoS or had significantly lower reward. 
A straightforward alternative exists for meeting the multi-criteria, formulated as follows. For each criterion $j$, we estimate a separate set of Q-factors, $Q^j(s, a)$. Each is updated via on-line Q-learning. These are then combined post facto at the time of decision via some function $\mathcal{Q}(\cdot)$, so that $Q(s, a) = \mathcal{Q}(\{Q^j(s, a)\})$. (6) For example, in this paper the two criteria are estimated separately as $Q^p$ and $Q^q$, and $Q(s, a) = \mathcal{Q}(Q^p(s, a), Q^q(s, a)) = Q^p(s, a)$ if $Q^q(s, a) < p^*$, and $= -\mathrm{Large}$ otherwise. (7) The structure of this problem allows us to estimate $Q^q$ without using (3). As stated, the QoS is an intrinsic property of a state and not of future states, so it is independent of the policy. This allows us to collect QoS statistics about each state and treat them in a principled way (e.g. computing confidence intervals on the estimates). Using these QoS estimates, the set of allowable states contracts monotonically over time, eventually converging to a fixed set of allowable states. Since the QoS constraint is guaranteed to reach a fixed point asymptotically, the Q-learned policy also approaches a fixed point at the optimal policy via standard Q-learning proofs. A related scheme is analyzed in [4], suggesting that similar cases will also converge to optimal policies. Many other QoS criteria do depend on the policy and require using (3). A constraint on the expected overload probability under a given policy is an example.
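The decision-time combination (7) needs no learning machinery of its own: with two look-up tables it is a masked argmax over actions. The dictionary encoding keyed by (state, action) and the default values below are assumptions for illustration.

```python
def admit(Qp, Qq, s, p_star=1e-3, large=1e9):
    """Decision rule of eq. (7): revenue Q-values Q^p are consulted only
    for actions whose estimated overload probability Q^q meets the QoS
    bound p*; otherwise the action is effectively vetoed (-Large)."""
    def q(a):
        if Qq.get((s, a), 0.0) < p_star:
            return Qp.get((s, a), 0.0)
        return -large
    # Action 1 = accept, 0 = reject. Rejecting never adds load, so in
    # practice it is never vetoed by the QoS estimate.
    return max((1, 0), key=q)

Qp = {("s", 1): 10.0, ("s", 0): 2.0}       # revenue favors accepting
accept_ok = admit(Qp, {("s", 1): 1e-4, ("s", 0): 0.0}, "s")  # QoS met
vetoed = admit(Qp, {("s", 1): 5e-3, ("s", 0): 0.0}, "s")     # QoS violated
print(accept_ok, vetoed)  # 1 0
```

The point of the separation is visible here: the revenue table alone would always accept, while the QoS table alone says nothing about revenue; only their combination at decision time yields the constrained-optimal choice.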
The Pareto distribution is considered to be a more accurate representation of data traffic.

Table 1: Experimental parameters

Parameter                      Type I   Type II
ON rate, r                     0.08     0.2
Mean ON period, 1/ν_ON         5        5
Mean OFF period, 1/ν_OFF       15       45
Hyperbolic exponent, α + 1     2.08     2.12
Call arrival rate, λ           0.067    0.2
Call holding time, 1/μ         60       60
Immediate payoff, c            5        1

In the experiments, for each state-action pair (s, a), Q^p(s, a) is updated using (3). As stated, in this case the update of Q^q(s, a) does not need to use (3). Since random exploration is employed to ensure that all potentially important state-action pairs are tried, it naturally enables us to collect statistics that can be used to estimate QoS at these state-action pairs, Q^q(s, a). As the number of visits to each state-action pair increases, the estimated Q^q(s, a) becomes more and more accurate and, with confidence, we can gradually eliminate those state-action pairs that would violate the QoS requirement. As a consequence, Q^p(s, a) is updated in a gradually correct subset of the state-action space, in the sense that QoS is met for any action within this subspace. Initial Q-values for RL are artificially set so that Q-learning starts with the greedy policy (the greedy policy always accepts). After training is completed, we apply a test data set to compare the policy obtained through RL with alternative heuristic policies. The final QoS measurements obtained at the end of RL training while learning QoS are used for testing the different policies. To test the RL policies, when there is a new call arrival, the algorithm first determines whether accepting this call will violate QoS. If it will, the call is rejected; otherwise the action is chosen according to a = argmax_{a∈A(s)} Q(s, a), where A(s) = {1 = accept, 0 = reject}. For the QoS constraint we use three cases: peak rate allocation; statistical multiplexing function learned on-line, denoted QoS learned; and statistical multiplexing function given a priori, denoted QoS given.
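One way to "learn QoS" per state-action pair is a running mean with a confidence bound, pruning a pair only once it confidently violates p*. Welford's algorithm and the normal-approximation bound below are illustrative choices, not the paper's exact procedure:

```python
import math

class QoSEstimate:
    """Running QoS estimate for one (s, a) pair, using Welford's algorithm
    for the mean/variance and a crude normal-approximation confidence bound
    (illustrative choices, not the paper's exact interval)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, q_sample):
        self.n += 1
        delta = q_sample - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (q_sample - self.mean)

    def lower_bound(self, z=2.0):
        """Lower confidence bound on the true QoS of this pair."""
        if self.n < 2:
            return float("-inf")   # too little data to rule anything out
        std = math.sqrt(self.m2 / (self.n - 1))
        return self.mean - z * std / math.sqrt(self.n)

def confidently_violates(est, p_star=1e-3):
    """Eliminate a pair only once even its lower bound exceeds p*;
    the allowed set therefore shrinks monotonically as estimates tighten."""
    return est.lower_bound() > p_star
```

Because a pair is only ever removed (never re-admitted) once the bound is exceeded, the allowable set contracts monotonically, matching the convergence argument in the text.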
We examine six different cases: (1) RL: QoS given; (2) RL: QoS learned; (3) RL: peak rate; (4) a heuristic that only accepts calls from the most valuable class, i.e. type I, with given QoS; (5) Greedy: QoS given; (6) Greedy: peak rate. From the results shown in Fig. 1, it is clear that simultaneously doing Q-learning and QoS learning converges correctly to the RL policy obtained by giving the QoS a priori and doing standard Q-learning only. We see significant gains (about 15%) due to statistical multiplexing: (6) vs (5), and (3) vs (1). The gains due to RL are about 25%: (6) vs (3), and (5) vs (2). Together they yield about a 45% increase in revenue over conservative peak rate allocation in this example. It is also clear from the figure that the RL policies perform better than the heuristic policies. Fig. 2 shows the rejection ratios for the different policies.

Optimizing Admission Control via RL

Figure 1: Comparison of total rewards of RL while learning QoS, RL with given QoS measurements, RL with peak rate, greedy policies, and peak rate allocation, normalized by the greedy total reward (exponential ON/OFF).

Figure 2: Comparison of rejection ratios for the policies learned in Fig. 1.

We repeat the above experiments with Pareto distributed ON and OFF periods, using the same parameters listed in Table 1. The results are shown in Figs. 3-4. Clearly, the different ON/OFF distributions yield similar gains for RL.
6 Conclusion

This paper shows that a QoS constraint can be incorporated into an RL solution to maximizing a network's revenue, using a vector-valued Q-learning function. The formulation is quite general and can be applied to many possible constraints. The approach, when applied to a simple networking problem, increases revenue by up to 45%. Future research includes: using neural networks or other function approximators to deal with more complex problems for which lookup tables are infeasible; and extending admission control to multi-link routing.

7 Appendix: Simple One-State Example

A simple example will show that a function with property (5) is unlikely. Consider a link that can accept only one type of call and can accept no more than one call. With no actions possible when carrying a call, there is only one state. Only two rewards are possible, c(R) for reject and c(A) for accept. To fix the value function let c(R) = 0, and let p and q be the random revenue and QoS experienced. Analysis of (1) and (2) shows that the accept action will be chosen if and only if E{f(p, q)} > 0. In this example, the revenues are random and possibly negative (e.g., if they are net after the cost of billing and transport). The call should be accepted if E{p} > 0 and E{q} < p*. Therefore the correct reward function has the property:

E{f(p, q)} > 0 if E{p} > 0 and E{q} < p*.   (8)

The point of the example is that an f(·) satisfying (8) requires prior knowledge about the distributions of the revenue and the QoS as a function of the state. Even if it were possible for this example, setting up constraints such as (8) for a real problem with a huge state space would be non-trivial because p and q are functions of the many state and action pairs.

Figure 3: Comparison of total rewards of RL while learning QoS, greedy policy and peak rate allocation, normalized by the greedy total reward (Pareto ON/OFF).

Figure 4: Comparison of rejection ratios for the policies learned in Fig. 3.

References

[1] Boyan, J.A., Littman, M.L., "Packet routing in dynamically changing networks: a reinforcement learning approach," in Cowan, J.D., et al., eds., Advances in NIPS 6, Morgan Kaufmann, SF, 1994, pp. 671-678.
[2] Brown, T.X, "Adaptive Access Control Applied to Ethernet Data," in Advances in NIPS 9, ed. M. Mozer et al., MIT Press, 1997, pp. 932-938.
[3] Brown, T.X, "Adaptive Statistical Multiplexing for Broadband Communications," Invited Tutorial, Fifth IFIP Workshop on Performance Modeling & Evaluation of ATM Networks, Ilkley, U.K., July 1997.
[4] Gabor, Z., Kalmar, Z., Szepesvari, C., "Multi-criteria Reinforcement Learning," to appear in International Conference on Machine Learning, Madison, WI, July 1998.
[5] Hiramatsu, A., "ATM Communications Network Control by Neural Networks," IEEE Transactions on Neural Networks, v. 1, n. 1, pp. 122-130, 1990.
[6] Marbach, P., Mihatsch, O., Schulte, M., Tsitsiklis, J.N., "Reinforcement learning for call admission control and routing in integrated service networks," in Jordan, M., et al., eds., Advances in NIPS 10, MIT Press, 1998.
[7] Nie, J., Haykin, S., "A Q-learning based dynamic channel assignment technique for mobile communication systems," to appear in IEEE Transactions on Vehicular Technology.
[8] Singh, S.P., Bertsekas, D.P., "Reinforcement learning for dynamic channel allocation in cellular telephone systems," in Advances in NIPS 9, ed. Mozer, M., et al., MIT Press, 1997, pp. 974-980.
|
1998
|
78
|
1,579
|
Coding time-varying signals using sparse, shift-invariant representations

Michael S. Lewicki* lewicki@salk.edu
Terrence J. Sejnowski terry@salk.edu
Howard Hughes Medical Institute, Computational Neurobiology Laboratory, The Salk Institute, 10010 N. Torrey Pines Rd., La Jolla, CA 92037

Abstract

A common way to represent a time series is to divide it into short-duration blocks, each of which is then represented by a set of basis functions. A limitation of this approach, however, is that the temporal alignment of the basis functions with the underlying structure in the time series is arbitrary. We present an algorithm for encoding a time series that does not require blocking the data. The algorithm finds an efficient representation by inferring the best temporal positions for functions in a kernel basis. These can have arbitrary temporal extent and are not constrained to be orthogonal. This allows the model to capture structure in the signal that may occur at arbitrary temporal positions and preserves the relative temporal structure of underlying events. The model is shown to be equivalent to a very sparse and highly overcomplete basis. Under this model, the mapping from the data to the representation is nonlinear, but can be computed efficiently. This form also allows the use of existing methods for adapting the basis itself to data. This approach is applied to speech data and results in a shift-invariant, spike-like representation that resembles coding in the cochlear nerve.

1 Introduction

Time series are often encoded by first dividing the signal into a sequence of blocks. The data within each block are then fit with a standard basis such as a Fourier or wavelet basis. This has the limitation that the components of the bases are arbitrarily aligned with respect to structure in the time series. Figure 1 shows a short segment of speech data and the boundaries of the blocks.
Although the structure in the signal is largely periodic, each large oscillation appears in a different position within the blocks and is sometimes split across blocks. This problem is particularly present for acoustic events with sharp onset, such as plosives in speech. It also presents difficulties for encoding the signal efficiently, because any basis that is adapted to the underlying structure must represent all possible phases. This can be somewhat circumvented by techniques such as windowing or averaging sliding blocks, but it would be more desirable if the representation were shift invariant.

*To whom correspondence should be addressed.

Figure 1: Blocking results in arbitrary phase alignment with the underlying structure.

2 The Model

Our goal is to model a signal by using a small set of kernel functions that can be placed at arbitrary time points. Ultimately, we want to find the minimal set of functions and time points that fit the signal within a given noise level. We expect this type of model to work well for signals composed of events whose onset can occur at arbitrary temporal positions. Examples include musical instrument sounds with sharp attack or plosive sounds in speech. We assume the time series x(t) is modeled by

x(t) = Σ_i s_i φ_{m[i]}(t − τ_i) + ε(t),   (1)

where τ_i indicates the temporal position of the ith kernel function, φ_{m[i]}, which is scaled by s_i. The notation m[i] represents an index function that specifies which of the M kernel functions is present at time τ_i. A single kernel function can occur at multiple times during the time series. Additive noise at time t is given by ε(t). A more general way to express (1) is to assume that the kernel functions exist at all time points during the signal, and let the non-zero coefficients determine the positions of the kernel functions.
In this case, the model can be expressed in convolutional form:

x(t) = Σ_m ∫ s_m(τ) φ_m(t − τ) dτ + ε(t)   (2)
     = Σ_m s_m(t) * φ_m(t) + ε(t),          (3)

where s_m(τ) is the coefficient at time τ for kernel function φ_m. It is also helpful to express the model in matrix form using a discrete sampling of the continuous time series:

x = As + ε.   (4)

The basis matrix, A, is defined by

A = [C(φ_1) C(φ_2) ··· C(φ_M)],   (5)

where C(a) is an N-by-N circulant matrix parameterized by the vector a. This matrix is constructed by replicating the kernel functions at each sample position:

       [ a_0      a_{N-1}  ···  a_2   a_1     ]
       [ a_1      a_0      ···  a_3   a_2     ]
C(a) = [  ⋮                             ⋮     ]   (6)
       [ a_{N-2}  a_{N-3}  ···  a_0   a_{N-1} ]
       [ a_{N-1}  a_{N-2}  ···  a_1   a_0     ]

The kernels are zero-padded to be of length N. The length of each kernel is typically much less than the length of the signal, making A very sparse. This can be viewed as a special case of a Toeplitz matrix. Note that the size of A is N-by-MN, so it is an example of an overcomplete basis, i.e. a basis with more basis functions than dimensions in the data space (Simoncelli et al., 1992; Coifman and Wickerhauser, 1992; Mallat and Zhang, 1993; Lewicki and Sejnowski, 1998).

3 A probabilistic formulation

The optimal coefficient values for a signal are found by maximizing the posterior distribution

ŝ = argmax_s P(s|x, A) = argmax_s P(x|A, s) P(s),   (7)

where ŝ is the most probable representation of the signal. Note that omission of the normalizing constant P(x|A) does not change the location of the maximum. This formulation of the problem offers the advantage that the model can fit more general types of distributions and naturally "denoises" the signal. Note that the mapping from x to s is nonlinear with non-zero additive noise and an overcomplete basis (Chen et al., 1996; Lewicki and Sejnowski, 1998). Optimizing (7) essentially selects out the subset of basis functions that best account for the data.
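Equations (2)-(6) can be checked numerically: placing a kernel at sparse positions by circular convolution gives the same signal as multiplying the coefficient vector by the circulant matrix C(φ). The kernel shape and sizes below are arbitrary illustrations:

```python
import numpy as np

N = 16                                    # signal length (arbitrary)
phi = np.zeros(N)
phi[:4] = [1.0, 0.5, -0.5, 0.25]          # one short kernel, zero-padded to N

def circulant(a):
    """C(a)[i, j] = a[(i - j) mod N], as in (6)."""
    n = len(a)
    return np.array([[a[(i - j) % n] for j in range(n)] for i in range(n)])

# Sparse coefficient function s_m(t): two events, at t = 2 and t = 9
s = np.zeros(N)
s[2], s[9] = 1.0, -2.0

# Model (2)-(3): circular convolution of coefficients with the kernel
x_conv = np.real(np.fft.ifft(np.fft.fft(phi) * np.fft.fft(s)))

# Model (4)-(6): matrix form x = C(phi) @ s
x_mat = circulant(phi) @ s
```

The two computations agree, and each event reproduces the kernel scaled by its coefficient at the chosen position.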
To define a probabilistic model, we follow previous conventions for linear generative models with additive noise (Cardoso, 1997; Lewicki and Sejnowski, 1998). We assume the noise, ε, to have a Gaussian distribution, which yields a data likelihood for a given representation of

log P(x|A, s) ∝ − (1/(2σ²)) (x − As)².   (8)

The function P(s) describes the a priori distribution of the coefficients. Under the assumption that P(s) is sparse (highly peaked around zero), maximizing (7) results in very few nonzero coefficients. A compact representation of s is to describe the values of the non-zero coefficients and their temporal positions:

P(s) = Π_m P(u_m, τ_m) = Π_{m=1}^{M} Π_{i=1}^{n_m} P(u_{m,i}) P(τ_{m,i}),   (9)

where the prior for the non-zero coefficient values, u_{m,i}, is assumed to be Laplacian, and the prior for the temporal positions (or intervals), τ_{m,i}, is assumed to be a gamma distribution.

4 Finding the best encoding

A difficult challenge presented by the proposed model is finding a computationally tractable method for fitting it to the data. The brute-force approach of generating the basis matrix A produces an intractable number of basis functions for signals of any reasonable length, so we need to look for ways of making the optimization of (7) more efficient. The gradient of the log posterior is given by

(∂/∂s) log P(s|A, x) ∝ Aᵀ(x − As) + z(s),   (10)

where z(s) = (log P(s))′. A basic operation required is v = Aᵀu. We saw that x = As can be computed efficiently using convolution (2). Because Aᵀ is also block circulant,

      [ C(φ′_1) ]
Aᵀ =  [    ⋮    ]   (11)
      [ C(φ′_M) ]

where φ′(1:N) = φ(N:−1:1).
Thus, terms involving Aᵀ can also be computed efficiently using convolution:

          [ φ_1(−t) * u(t) ]
v = Aᵀu = [       ⋮        ]   (12)
          [ φ_M(−t) * u(t) ]

Obtaining an initial representation. An alternative approach to optimizing (7) is to make use of the fact that if the kernel functions are short enough in length, direct multiplication is faster than convolution, and that, for this highly overcomplete basis, most of the coefficients will be zero after being fit to the data. The central problem in encoding the signal then is to determine which coefficients are non-zero, ideally finding a description of the time series with the minimal number of non-zero coefficients. This is equivalent to determining the best set of temporal positions for each of the kernel functions (1). A crucial step in this approach is to obtain a good initial estimate of the coefficients. One way to do this is to consider the projection of the signal onto each of the basis functions, i.e. Aᵀx. This estimate will be exact (i.e. zero residual error) in the case of zero noise and A orthogonal. For the non-orthogonal, overcomplete case the solution will be approximate, but for certain choices of the basis matrix, an exact representation can still be obtained efficiently (Daubechies, 1990; Simoncelli et al., 1992). Figure 2 shows examples of convolving two different kernel functions with data. One disadvantage of this initial solution is that the coefficient functions s′_m(t) are not sparse. For example, even though the signal in figure 2a is composed of only three instances of the kernel function, the convolution is mostly non-zero. A simple procedure for obtaining a better initial estimate of the most probable coefficients is to select the time locations of the maxima (or extrema) in the convolutions. These are positions where the kernel functions capture the greatest amount of signal structure and where the optimal coefficients are likely to be non-zero.
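Both steps, computing a block of Aᵀx as the correlation φ(−t) * x(t) as in (12) and seeding the representation from thresholded extrema of that correlation, can be sketched as follows; the kernel, event positions, and threshold are illustrative choices:

```python
import numpy as np

N = 64
phi = np.sin(2 * np.pi * np.arange(8) / 8)       # one-cycle sine kernel (8 samples)
phi_full = np.concatenate([phi, np.zeros(N - 8)])  # zero-padded to length N

# Signal: the kernel placed at two known positions, plus small noise
rng = np.random.default_rng(2)
x = np.zeros(N)
for t0, amp in [(10, 1.0), (40, 0.8)]:
    x[t0:t0 + 8] += amp * phi
x += 0.01 * rng.normal(size=N)

# (12): the kernel's block of A^T x is phi(-t) * x(t), a circular correlation,
# computed here with the FFT
corr = np.real(np.fft.ifft(np.conj(np.fft.fft(phi_full)) * np.fft.fft(x)))

# Initial estimate: local extrema of the correlation above a power threshold
thresh = 0.5 * np.abs(corr).max()
candidates = [t for t in range(N)
              if np.abs(corr[t]) > thresh
              and np.abs(corr[t]) >= np.abs(corr[(t - 1) % N])
              and np.abs(corr[t]) >= np.abs(corr[(t + 1) % N])]
```

The candidate list recovers the two true event positions; in the full algorithm these seeds define a small basis that is then refit by conjugate gradient.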
This generates a large number of positions, but their number can be reduced further by selecting only those that contribute significantly, i.e. where the average power is greater than some fraction of the noise level. From these, a basis for the entire signal is constructed by replicating the kernel functions at the appropriate time positions.

Figure 2: Convolution using the fast Fourier transform is an efficient way to select an initial solution for the temporal positions of the kernel functions. (a) The convolution of a sawtooth-shaped kernel function, φ(t), with a sawtooth waveform, x(t). (b) A single-period sine-wave kernel function convolved with a speech segment.

Once an initial estimate and basis are formed, the most probable coefficient values are estimated using a modified conjugate gradient procedure. The size of the generated basis does not pose a problem for optimization, because it has very few non-zero elements (the number of which is roughly constant per unit time). This arises because each column is non-zero only around the position of the kernel function, which is typically much shorter in duration than the data waveform. This structure affords the use of sparse matrix routines for all the key computations in the conjugate gradient routine. After the initial fit, there typically are a large number of basis functions that give a very small contribution. These can be pruned to yield, after refitting, a more probable representation that has significantly fewer coefficients.

5 Properties of the representation

Figure 3 shows the results of fitting a segment of speech with a sine-wave kernel basis. The 64 kernel functions were constructed using a single period of a sine function whose log frequencies were evenly distributed between 0 and Nyquist (4 kHz), which yielded kernel functions that were minimally correlated (they are not orthogonal because each has only one cycle and is zero elsewhere).
The kernel function lengths varied between 2 and 64 samples. The plots show the positions of the non-zero coefficients superimposed on the waveform. The residual error curves from the fitted waveforms are shown offset, below each waveform. The right axes indicate the kernel function number, which increases with frequency. The dots show the starting positions of the kernels with non-zero coefficients, with the dot size scaled according to the mean power contribution. This plot is essentially a time/frequency analysis, similar to a wavelet decomposition, but on a finer temporal scale. Figure 3a shows that the structure in the coefficients repeats for each oscillation in the waveform. Adding a delay leaves the relative temporal structure of the non-zero coefficients mostly unchanged (figure 3b). The small variations between the two sets of coefficients are due to variations in the fitting of the small-magnitude coefficients. Representing the signal in figure 3b with a standard complete basis would result in a very different representation.

Figure 3: Fitting a shift-invariant model to a segment of speech, x(t). Dots indicate positions of kernels (right axis) with size scaled by the mean power contribution. Fitting error is plotted below the speech signal.

6 Discussion

The model presented here can be viewed as an extension of the shiftable transforms of Simoncelli et al. (1992). One difference is that here no constraints are placed on the kernel functions. Furthermore, this model accounts for additive noise, which yields automatic signal denoising and provides sensible criteria for selecting significant coefficients. An important unresolved issue is how well the algorithm works for increasingly non-orthogonal kernels.
One interesting property of this representation is that it results in a spike-like representation. In the resulting set of non-zero coefficients, not only are their values important for representing the signal, but also their relative temporal positions, which indicate when an underlying event has occurred. This shares many properties with cochlear models. The model described here also has the capacity to have an overcomplete representation at any given timepoint, e.g. a kernel basis with an arbitrarily large number of frequencies. These properties make this model potentially useful for binaural signal processing applications. The effectiveness of this method for efficient coding remains to be proved. A trivial example of a shift-invariant basis is a delta-function model. For a model to encode information efficiently, the representation should be non-redundant. Each basis function should "grab" as much structure in the data as possible and achieve the same level of coding efficiency for arbitrary shifts of the data. The matrix form of the model (4) suggests that it is possible to achieve this optimum by adapting the kernel functions themselves using the methods of Lewicki and Sejnowski (1998). Initial results suggest that this approach is promising. Beyond this, it is evident that modeling the higher-order structure in the coefficients themselves will be necessary both to achieve an efficient representation and to capture structure that is relevant to such tasks as speech recognition or auditory stream segmentation. These results are a step toward these goals.

Acknowledgments. We thank Tony Bell, Bruno Olshausen, and David Donoho for helpful discussions.

References

Cardoso, J.-F. (1997). Infomax and maximum likelihood for blind source separation. IEEE Signal Processing Letters, 4:109-111.
Chen, S., Donoho, D. L., and Saunders, M. A. (1996). Atomic decomposition by basis pursuit. Technical report, Dept. Stat., Stanford Univ., Stanford, CA.
Coifman, R. R.
and Wickerhauser, M. V. (1992). Entropy-based algorithms for best basis selection. IEEE Transactions on Information Theory, 38(2):713-718.
Daubechies, I. (1990). The wavelet transform, time-frequency localization, and signal analysis. IEEE Transactions on Information Theory, 36(5):961-1004.
Lewicki, M. S. and Sejnowski, T. J. (1998). Learning overcomplete representations. Neural Computation, submitted.
Mallat, S. G. and Zhang, Z. F. (1993). Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397-3415.
Simoncelli, E. P., Freeman, W. T., Adelson, E. H., and Heeger, D. J. (1992). Shiftable multiscale transforms. IEEE Transactions on Information Theory, 38:587-607.
|
1998
|
79
|
1,580
|
Analyzing and Visualizing Single-Trial Event-Related Potentials

Tzyy-Ping Jung (1,2), Scott Makeig (2,3), Marissa Westerfield (2), Jeanne Townsend (2), Eric Courchesne (2), Terrence J. Sejnowski (1,2)

1 Howard Hughes Medical Institute and Computational Neurobiology Laboratory, The Salk Institute, P.O. Box 85800, San Diego, CA 92186-5800 {jung,scott,terry}@salk.edu
2 University of California, San Diego, La Jolla, CA 92093
3 Naval Health Research Center, P.O. Box 85122, San Diego, CA 92186-5122

Abstract

Event-related potentials (ERPs) are portions of electroencephalographic (EEG) recordings that are both time- and phase-locked to experimental events. ERPs are usually averaged to increase their signal/noise ratio relative to non-phase-locked EEG activity, regardless of the fact that response activity in single epochs may vary widely in time course and scalp distribution. This study applies a linear decomposition tool, Independent Component Analysis (ICA) [1], to multichannel single-trial EEG records to derive spatial filters that decompose single-trial EEG epochs into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain networks. Our results on normal and autistic subjects show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, allowing (1) removal of pervasive artifacts of all types from single-trial EEG records, and (2) identification of both stimulus- and response-locked EEG components. Second, this study proposes a new visualization tool, the 'ERP image', for investigating variability in latencies and amplitudes of event-evoked responses in spontaneous EEG or MEG records. We show that sorting single-trial ERP epochs in order of reaction time and plotting the potentials in 2-D clearly reveals underlying patterns of response variability linked to performance.
These analysis and visualization tools appear broadly applicable to electrophysiological research on both normal and clinical populations.

1 Introduction

Scalp-recorded event-related potentials (ERPs) are voltage changes in the ongoing electroencephalogram (EEG) that are both time- and phase-locked to some experimental events. These field potentials are usually averaged to increase their signal/noise ratio relative to artifacts and other non-phase-locked EEG activity. The averaging method disregards the fact that in single epochs response activity may vary widely in both time course and scalp distribution. These differences are in part attributed to different strategies employed by subjects for processing different stimuli, to differences in expectation, attention, and arousal occurring in different trials, and/or to variations in alertness and fatigue [2, 3]. Single-trial analysis, on the other hand, can avoid problems due to time and/or phase shifts and can potentially reveal much richer information about event-related brain dynamics in endogenous ERPs, but suffers from pervasive artifacts associated with blinks, eye movements, and muscle noise, and poor signal-to-noise ratio arising from the fact that non-phase-locked background EEG activities often are larger than phase-locked response components. We present here new methods for analyzing and visualizing multichannel unaveraged single-trial ERP records that alleviate these problems. First, multi-channel EEG epochs were analyzed using Independent Component Analysis (ICA), a signal processing technique that can decompose multichannel complex data into spatially fixed and temporally independent components. Next, a new visualization tool, the 'ERP image', is introduced for visualizing relations between single-trial ERP records and their contributions to the ERP average.
To form an ERP image, the recorded potentials at one channel are plotted as parallel lines and single-trial ERP epochs are sorted in order of reaction time. ICA, applied to the single-trial EEG records from normal and autistic subjects in a visual selective attention experiment, derived components whose dynamics were affected by stimulus presentations and/or subject responses in distinct ways. We demonstrate, through analysis of two sample data sets, the power of the proposed analysis and visualization tools for increasing the amount and quality of information about event-related brain dynamics that can be derived from single-trial EEG data.

2 Independent Component Analysis of EEG data

Bell and Sejnowski [5] have proposed a simple neural network algorithm that blindly separates mixtures, x, of independent sources, s, using infomax. They showed that maximizing the joint entropy, H(y), of the output of a neural processor minimizes the mutual information among the output components, y_i = g(u_i), where g(u_i) is an invertible bounded nonlinearity and u = Wx is a version of the original sources, s, identical save for scaling and permutation. Lee et al. [1] generalized the infomax algorithm to perform blind source separation on linear mixtures of sources with either sub- or super-Gaussian distributions. Please see [5, 1] for details regarding the algorithms. ICA is suitable for performing blind source separation on EEG data because: (1) it is plausible that EEG data recorded at multiple scalp sensors are linear sums of temporally independent components arising from spatially fixed, distinct or overlapping brain or extra-brain networks, and (2) spatial smearing of EEG data by volume conduction does not involve significant time delays.¹ In single-trial EEG analysis, the rows of the input matrix x are the EEG signals recorded at different electrodes, while the columns are measurements recorded at different time points.
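The infomax rule can be sketched in a few lines. The batch natural-gradient form of the update and the two-source toy mixture below are illustrative (the logistic nonlinearity matches the super-Gaussian case described above), and the artifact-removal step x' = W⁻¹u' is included at the end:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
s = rng.laplace(size=(2, T))          # two independent super-Gaussian sources
A_mix = np.array([[1.0, 0.6],
                  [0.4, 1.0]])        # mixing matrix (invented for the demo)
x = A_mix @ s                         # "recordings": rows = channels, cols = time

def infomax_ica(x, lr=0.02, epochs=1000):
    """Batch natural-gradient infomax (Bell & Sejnowski) with the logistic
    nonlinearity y = g(u), appropriate for super-Gaussian sources."""
    n, T = x.shape
    W = np.eye(n)
    for _ in range(epochs):
        u = W @ x
        y = 1.0 / (1.0 + np.exp(-u))                          # y = g(u)
        W += lr * (np.eye(n) + (1.0 - 2.0 * y) @ u.T / T) @ W  # natural gradient
    return W

W = infomax_ica(x)
u = W @ x                             # rows: activation time courses of components

# Artifact removal by back-projection: zero one component's activation,
# then project back onto the "scalp" with x' = W^{-1} u'
u_clean = u.copy()
u_clean[0, :] = 0.0
x_clean = np.linalg.inv(W) @ u_clean
```

After training, each row of W @ A_mix is dominated by a single source, i.e. the sources are recovered up to scaling and permutation, and zeroing a row of u before back-projection removes that component's contribution from every channel.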
¹See [4] for details regarding ICA assumptions underlying EEG analysis.

Figure 1: ERP images. (left panel) Single-trial ERPs recorded at a central electrode (Cz) and time-locked to onsets of visual target stimuli (vertical left line), plotted with subject reaction times (thick black line). (middle panel) The 390 single trials were then sorted (bottom to top) in order of increasing reaction time. (right panel) To increase signal-to-noise ratio and minimize EEG signals not both time- and phase-locked to the experimental events, the trials were averaged vertically using a 30-trial moving window advanced in one-trial increments.

The rows of the independent output data matrix u = Wx are time courses of activation of the ICA components, and the columns of the inverse matrix, W⁻¹, give the projection strengths of the respective components onto the scalp sensors. The scalp topographies of the components provide evidence as to their physiological origin (e.g., eye activity should project mainly to frontal sites). EEG signals of interest (e.g., event-related brain signals) can then be obtained by projecting selected ICA components back onto the scalp as x' = W⁻¹u', where u' is the matrix of activation waveforms, u, with rows representing activations of "irrelevant" sources set to zero.

3 Methods and Materials

EEG data were recorded at 29 scalp electrodes and 2 EOG placements from 2 normal and 1 autistic subjects who participated in a 2-hr visual selective attention task in which they were instructed to attend to circles flashed in random order at one of five locations laterally arrayed 0.8 cm above a central fixation point. Locations were outlined by five evenly spaced 1.6-cm blue squares displayed on a black background at visual angles of ±2.7 deg and ±5.5 deg from fixation.
Attended locations were highlighted through entire 90-sec experimental blocks. Subjects were instructed to maintain fixation on the central cross and press a button each time they saw a circle in the attended location (see [6] for details).

4 Results

The ICA algorithm was applied separately to concatenated 31-channel single-trial EEG records from two normal and one autistic subjects. The derived independent components had a variety of distinct relations to task events. Some were clearly time-locked to stimulus presentations, while others were time-locked to subject responses. Still others captured spontaneous EEG activity together with blinks, eye movements, and muscle artifacts, while others accounted for oscillatory and other background EEG phenomena.

4.1 ERP image

To investigate variability in the latencies and amplitudes of event-evoked responses in spontaneous EEG, we here introduce a new visualization tool, the ERP image. An example shown in Figure 1 (left panel) plots 390 single-trial ERP epochs time-locked to onsets of target stimuli (vertical left line) and recorded at a central electrode (Cz) from a normal subject. Each horizontal trace represents a 1-sec single-trial ERP record whose potential variations are plotted in different colors. The thick line plots the subject reaction times (RT) in successive trials. Note the trial-to-trial fluctuations in ERP latency and reaction time. The ERP average of these trials is plotted at the bottom of the panel. Next, the single trials were sorted in order of increasing reaction time (Fig. 1, middle panel), and were then smoothed with a 30-trial moving average (right panel). Note that, in all but the longest-RT trials, the early positive feature (P2) is time-locked to stimulus onset (i.e., is stimulus-locked), and that the P3 feature follows RT in nearly all trials (i.e., is response-locked).
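The ERP-image construction just described (sort epochs by reaction time, then average with a moving window across trials) can be sketched on synthetic data; the epochs, reaction times, and response-bump shape below are illustrative stand-ins for real recordings:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samples = 390, 100
rts = rng.uniform(30, 80, size=n_trials)    # synthetic reaction times (in samples)

# Synthetic single-trial epochs: a "P3-like" bump time-locked to each trial's RT
t = np.arange(n_samples)
epochs = np.exp(-0.5 * ((t[None, :] - rts[:, None]) / 5.0) ** 2)
epochs += 0.3 * rng.normal(size=epochs.shape)   # background "EEG" noise

# ERP image: sort trials (rows) in order of increasing reaction time...
order = np.argsort(rts)
erp_image = epochs[order]

# ...then smooth vertically with a 30-trial moving window in one-trial steps
win = 30
kernel = np.ones(win) / win
smoothed = np.apply_along_axis(
    lambda col: np.convolve(col, kernel, mode="valid"), 0, erp_image)
```

In the smoothed image the response peak shifts systematically from early (fast-RT trials at the bottom) to late (slow-RT trials at the top), which is the diagonal "response-locked" pattern the ERP image is designed to reveal.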
ERP image plots allow visualization of relations between event-related EEG trials and single-trial contributions to their ERP averages. They disclose a tight link between the amplitudes and latencies of individual event-related responses and subject behavior. 4.2 Removing blink and eye-movement artifacts from EEG records Autistic subjects tend to blink more frequently than normal subjects [8]. ICA, applied to this data set in which about 50% of the trials were contaminated by blinks, successfully isolated blink artifacts to a single component (Fig. 2A, left) whose contributions could be removed from the EEG records by subtracting out the component projection [7]. Though the subjects were instructed to fixate during each 90-sec block, it has been suspected, though poorly documented, that their eyes tended to drift towards target stimuli presented at peripheral locations. Here, a second ICA component accounted for these small horizontal eye movements (Fig. 2B, right). Fig. 2B (5 traces) also shows separate ERP averages (at periocular site EOG2) of responses to targets presented at the five different attended locations. The size of the prominent eye movement-related component is proportional to the angle between the stimulus location and the fixation point. Figure 2C shows the averaged ERPs at the same site in response to stimuli presented at the five different attended locations, before (faint traces) and after (solid traces) artifact removal. After artifact correction, the averaged ERPs to stimuli presented at the five different locations were independent of stimulus location. 4.3 Extracting event-related brain activity from EEG records In these data, ICA also separated stimulus-locked, response-locked, and non-phase locked background EEG activities into different independent components. Numbers of components in each class varied across subjects.
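The artifact-removal step above (zero the artifact components' activations, then back-project with W^-1) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: W here is a random invertible matrix rather than a trained ICA unmixing matrix.

```python
import numpy as np

# Sketch of ICA-based artifact removal: u = W x gives component activations;
# zeroing the rows of u for "irrelevant" components (e.g., blinks) before
# applying W^-1 reconstructs the EEG without them (x' = W^-1 u').
rng = np.random.default_rng(0)
n_channels, n_samples = 4, 100
x = rng.standard_normal((n_channels, n_samples))   # channels x time EEG data
W = rng.standard_normal((n_channels, n_channels))  # stand-in unmixing matrix

u = W @ x                                # component activation time courses
u_clean = u.copy()
u_clean[0, :] = 0.0                      # suppress one artifact component
x_clean = np.linalg.inv(W) @ u_clean     # x' = W^-1 u'
```

Because the decomposition is linear, subtracting a component's projection and zeroing its activation row before back-projection are equivalent.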
Figure 3A shows the projections of the subgroups of ICA components accounting primarily for (left) stimulus-locked, (middle) response-locked, and (right) remaining non-phase locked background EEG activity at site P03. Notice that (1) both the response latencies and active durations of the early stimulus-locked P1 and N1 components were very stable in nearly all trials, (2) the peak of the later P3 component covaried with reaction time, and (3) the projections of ICA components accounting for non-phase locked background EEG activity contributed very little to the averaged ERP (right panel, bottom trace). Figure 2: (A) (left) Scalp topography and 5 consecutive 1-sec epochs of the activation time course of an ICA component accounting for blink artifacts in 641 single trials recorded from an adult autistic subject. (B) The scalp topography of a second eye-movement component and its averaged activation time courses in response to target stimuli presented at the five different attended locations. (C) Averaged ERPs at site EOG2 to targets presented at each of five attended locations, before (faint traces) and after (solid traces) artifact removal. These results indicate that ICA makes possible the extraction and separation of event-related brain phenomena of all types from single-trial EEG records. 4.4 Re-aligning single-trial event-related potentials Figure 3B (left panel) shows the raw artifact-corrected single-trial ERP epochs (the sum of the data in Fig. 3A). Response latency fluctuations resulted in temporal smearing of the P3 feature in the averaged ERP (bottom left).
Realigning the single-trial ERP epochs to the median reaction time sharpened the averaged P3 (center panel, P3'), but unfortunately put the early stimulus-locked activity out of phase, so the early averaged ERP was absent in the first 200 msec. Because ICA separated stimulus-locked and response-locked activity into different independent components, we could realign the time courses of the response-locked P3' component to the median reaction time and project the adjusted data, along with the unaligned time courses of the stimulus-locked components (P1/N1), back onto the scalp sensors (right panel). This realignment preserved the early stimulus-locked P1/N1 while sharpening the response-locked P3. The method minimized temporal smearing in the averaged ERP arising from performance fluctuations (left & right panels). 4.5 Event-related oscillatory EEG activity ICA, applied to multichannel single-trial EEG records, can also separate multiple oscillatory components even within a single frequency band. For example, Figure 3C plots scalp topographies and ERP images of activations of two ICA components accounting for alpha activity in target-response epochs from a normal subject. Note that the activity of the first component (left panel) was augmented following stimulation, while the activity of the second component (middle panel) was blocked by the subject response.
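The response-locked realignment of Section 4.4 amounts to shifting each trial so its reaction time lands on the median RT. A minimal sketch, under our own simplifying assumptions (a fixed sampling rate and a circular shift; function name is ours):

```python
import numpy as np

# Sketch of response-locked realignment: shift each trial's response-locked
# component by (median RT - its RT), converted to samples.
def realign_to_median_rt(trials, rts, sfreq=250.0):
    """trials: (n_trials, n_times) array; rts: reaction times in seconds."""
    med = np.median(rts)
    out = np.empty_like(trials)
    for i, (trial, rt) in enumerate(zip(trials, rts)):
        shift = int(round((med - rt) * sfreq))   # samples to move this trial
        out[i] = np.roll(trial, shift)
    return out
```

In the paper, only the response-locked components are shifted this way; the stimulus-locked components are left in place and the two are summed after back-projection.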
When the same spatial filter was applied to EEG records from another session in which the subject was instructed to attend to but not to respond to target stimuli, this alpha activity was not blocked (right panel). Figure 3: (A) Projections of ICA components accounting, respectively, for stimulus-locked (left), response-locked (middle), and non-phase locked background EEG activity (right) at one posterior site, P03. (B) (left) Artifact-corrected single-trial ERP records time-locked to stimulus onsets (left) and subject responses (center). Note that the early ERP features (P1, N1) are not in phase in the response-locked trials, and do not appear in the response-locked average (center bottom). (right) Projections of the response-locked components were aligned to median reaction time (355 ms) and summed with stimulus-aligned component projections, forming an enhanced stimulus-aligned ERP (right bottom). (C) ERP-image plots of activations of ICA components accounting for alpha activity in EEG recorded from a normal subject. The alpha activity extracted by these components was either augmented (left) or blocked (middle) by subject responses. When the spatial filter for the second alpha component (middle) was applied to EEG records from another session in which the subject was asked only to 'mentally note' the occurrence of target stimuli, blocking was replaced by continued phase-locking.
ICA identifies spatially-overlapping patterns of coherent activity over the entire scalp rather than focusing on single scalp channels or channel pairs. 5 Conclusions We have developed analytic and visualization tools for the analysis of multichannel single-trial EEG records. Single-trial ERP analysis based on Independent Component Analysis allows blind separation of multichannel complex EEG data into a sum of temporally independent and spatially fixed components. ICA can effectively remove eye and muscle artifacts without altering the underlying brain activity in the EEG records. ICA can also be used to extract event-related brain phenomena of all types from EEG records. It can identify spatially-overlapping patterns of coherent activity over the entire scalp, and can be used to realign the time courses of response-locked components to prevent temporal smearing in the average arising from performance fluctuations. ERP images make visible systematic relations between single-trial EEG or MEG records and experimental events, and their relations to averaged ERPs. ERP images can also be used to display relationships between the phase, amplitude, and timing of event-related EEG components time-locked to either stimuli or subject responses. The analysis and visualization tools proposed in this study dramatically increase the amount and quality of information on event- or response-related brain signals that can be extracted from ERP data. Both tools appear applicable to electrophysiological research on normal and clinical populations. References [1] T.W. Lee, M. Girolami and T.J. Sejnowski (1999) Independent Component Analysis using an Extended Infomax Algorithm for Mixed Sub-Gaussian and Super-Gaussian Sources, Neural Computation, 11(2):606-33. [2] H. Yabe, F. Satio & Y. Fukushima (1993) Median Method for Detecting Endogenous Event-related Brain Potentials, Electroencephalogr. clin. Neurophysiol. 87(6):403-7. [3] H. Yabe, F. Satio & Y.
Fukushima (1995) Classification of Single-trial ERP Sub-types: Application of Globally Optimal Vector Quantization Using Simulated Annealing, Electroencephalogr. clin. Neurophysiol. 94(4):288-97. [4] S. Makeig, T-P Jung, A.J. Bell, D. Ghahremani, and T.J. Sejnowski (1997) Blind Separation of Event-related Brain Responses into Independent Components, Proc. Natl. Acad. Sci. USA, 94:10979-84. [5] A.J. Bell & T.J. Sejnowski (1995) An information-maximization approach to blind separation and blind deconvolution, Neural Computation 7:1129-1159. [6] S. Makeig, M. Westerfield, J. Covington, T-P Jung, J. Townsend, T.J. Sejnowski, and E. Courchesne (in press) Functionally independent components of the late positive event-related potential in a visual spatial attention paradigm, J. Neuroscience. [7] T-P Jung, C. Humphries, T.W. Lee, S. Makeig, M.J. McKeown, V. Iragui, and T.J. Sejnowski (1998) Extended ICA Removes Artifacts from Electroencephalographic Data, In: Advances in Neural Information Processing Systems 10, 894-900. [8] J.G. Small (1971) Sensory Evoked Responses of Autistic Children, In: Infantile Autism, 224-39.
1998
Evidence for a Forward Dynamics Model in Human Adaptive Motor Control Nikhil Bhushan and Reza Shadmehr Dept. of Biomedical Engineering Johns Hopkins University, Baltimore, MD 21205 Email: nbhushan@bme.jhu.edu, reza@bme.jhu.edu Abstract Based on computational principles, the concept of an internal model for adaptive control has been divided into a forward and an inverse model. However, there is as yet little evidence that learning control by the CNS is through adaptation of one or the other. Here we examine two adaptive control architectures, one based only on the inverse model and the other based on a combination of forward and inverse models. We then show that for reaching movements of the hand in novel force fields, only the learning of the forward model results in key characteristics of performance that match the kinematics of human subjects. In contrast, the adaptive control system that relies only on the inverse model fails to produce the kinematic patterns observed in the subjects, despite the fact that it is more stable. Our results provide evidence that learning control of novel dynamics is via formation of a forward model. 1 Introduction The concept of an internal model, a system for predicting the behavior of a controlled process, is central to current theories of motor control (Wolpert et al. 1995) and learning (Shadmehr and Mussa-Ivaldi 1994). Theoretical studies have proposed that internal models may be divided into two varieties: forward models, which simulate the causal flow of a process by predicting its state transition given a motor command, and inverse models, which estimate the motor commands appropriate for a desired state transition (Miall and Wolpert, 1996).
This classification is relevant for adaptive control because, based on computational principles, it has been proposed that learning control of a nonlinear system might be facilitated if a forward model of the plant is learned initially, and then during an off-line period is used to train an inverse model (Jordan and Rumelhart, 1992). While there is no experimental evidence for this idea in the central nervous system, there is substantial evidence that learning control of arm movements involves formation of an internal model. For example, practicing arm movements while holding a novel dynamical system initiates an adaptation process which results in the formation of an internal model: upon sudden removal of the force field, after-effects are observed which match the expected behavior of a system that has learned to predict and compensate for the dynamics of the imposed field (Shadmehr and Brashers-Krug, 1997). However, the computational nature of this internal model, whether it be a forward or an inverse model, or a combination of both, is not known. Here we use a computational approach to examine two adaptive control architectures: adaptive inverse model feedforward control and adaptive forward-inverse model feedback control. We show that the two systems predict different behaviors when applied to the control of arm movements. While adaptation to a force field is possible with either approach, the second system, with feedback control through an adaptive forward model, is far less stable and is accompanied by distinct kinematic signatures, termed "near path-discontinuities". We observe remarkably similar instability and near path-discontinuities in the kinematics of 16 subjects that learned force fields. This is behavioral evidence that learning control of novel dynamics is accomplished with an adaptive forward model of the system.
2 Adaptive Control using Internal Models Adaptive control of a nonlinear system which has large sensory feedback delays, such as the human arm, can be accomplished by using two different internal model architectures. The first method uses only an adaptive inverse dynamics model to control the system (Shadmehr and Mussa-Ivaldi, 1994). The adaptive controller is feedforward in nature and ignores delayed feedback during the movement. The control system is stable because it relies on the equilibrium properties of the muscle and the spinal reflexes to correct for any deviations from the desired trajectory. The second method uses a rapidly adapting forward dynamics model and delayed sensory feedback, in addition to an inverse dynamics model, to control arm movements (Miall and Wolpert, 1996). In this case, the corrections to deviations from the desired trajectory result from a combination of supraspinal feedback as well as spinal/muscular feedback. Since the two methods rely on different internal model and feedback structures, they are expected to behave differently when the dynamics of the system are altered. The Mechanical Model of the Human Arm For the purpose of simulating arm movements with the two different control architectures, a reasonably accurate model of the human arm is required. We model the arm as a two-joint revolute arm attached to six muscles that act in pairs around the two joints. The three muscle pairs correspond to elbow joint, shoulder joint, and two-joint muscles, and are assumed to have constant moment arms. Each muscle is modeled using a Hill parametric model with nonlinear stiffness and viscosity (Soechting and Flanders, 1997). The dynamics of the muscle can be represented by a nonlinear state function f_M, such that

F_t = f_M(N, x_m, \dot{x}_m)   (1)

where F_t is the force developed by the muscle, N is the neural activation to the muscle, and x_m, \dot{x}_m are the muscle length and velocity.
The passive dynamics related to the mechanics of the two-joint revolute arm can be represented by f_D, such that

\ddot{x} = f_D(T, x, \dot{x}) = D^{-1}(x)[T - C(x, \dot{x})\dot{x} + J^T F_x]   (2)

where \ddot{x} is the hand acceleration, T is the joint torque generated by the muscles, x, \dot{x} are the hand position and velocity, D and C are the inertia and Coriolis matrices of the arm, J is the Jacobian for hand position and joint angle, and F_x is the external dynamic interaction force on the hand. Under the force field environment, the external force F_x acting on the hand is equal to B\dot{x}, where B is a 2x2 rotational viscosity matrix. The effect of the force field is to push the hand perpendicular to the direction of movement with a force proportional to the speed of the hand. The overall forward plant dynamics of the arm is a combination of f_M and f_D and can be represented by the function f_P:

\ddot{x} = f_P(N, x, \dot{x}, F_x)   (3)

Adaptive Inverse Model Feedforward Control The first control architecture uses a feedforward controller with only an adaptive inverse model. The inverse model computes the neural activation to the muscles for achieving a desired acceleration, velocity, and position of the hand. It can be represented as the estimated inverse, \hat{f}_P^{-1}, of the forward plant dynamics, and maps the desired position x_d, velocity \dot{x}_d, and acceleration \ddot{x}_d of the hand into descending neural commands N_c:

N_c = \hat{f}_P^{-1}(x_d, \dot{x}_d, \ddot{x}_d)   (4)

Adaptation to novel external dynamics occurs by learning a new inverse model of the altered external environment. The error between the desired and actual hand trajectory can be used for training the inverse model. When the inverse model is an exact inverse of the forward plant dynamics, the gain of the feedforward path is unity and the arm exactly tracks the desired trajectory. Deviations from the desired trajectory occur when the inverse model does not exactly model the external dynamics.
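A toy numerical illustration (ours, not the paper's arm simulation) makes the feedforward logic concrete: for a trivial linear plant, an exact inverse model yields perfect tracking, while a mismatched inverse model (as when the force field changes unexpectedly) produces a trajectory deviation.

```python
# Sketch of inverse-model feedforward control on a point mass: the plant is
# a = u / m_true, so the exact inverse model is u = m_model * a_desired.
# When m_model != m_true, the commanded accelerations are wrong and the
# trajectory deviates from the desired one.
def simulate(m_true, m_model, desired_accels, dt=0.01):
    x = v = 0.0
    for a_d in desired_accels:
        u = m_model * a_d       # feedforward command from the inverse model
        a = u / m_true          # plant responds with its true dynamics
        v += a * dt
        x += v * dt
    return x

desired_accels = [1.0] * 100           # constant desired acceleration, 1 s
x_exact = simulate(1.0, 1.0, desired_accels)   # correct inverse model
x_wrong = simulate(2.0, 1.0, desired_accels)   # plant heavier than modeled
```

The gap between `x_exact` and `x_wrong` plays the role of the after-effects and field-induced deviations discussed in the text; in the paper the mismatch comes from the imposed force field rather than a mass error.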
Under that situation, the spinal reflex corrects for errors in desired (x_md, \dot{x}_md) and actual (x_m, \dot{x}_m) muscle state by producing a corrective neural signal N_R based on a linear feedback controller with constants K_1 and K_2:

N_R = K_1 (x_md - x_m) + K_2 (\dot{x}_md - \dot{x}_m)   (5)

Adaptive Forward-Inverse Model Feedback Control The second architecture provides feedback control of arm movements in addition to the feedforward control described above. Delays in feedback cause instability; therefore, the system relies on a forward model to generate updated state estimates of the arm. An estimated error in hand trajectory is given by the difference between the desired and estimated state, and can be used by the brain to issue corrective neural signals to the muscles while a movement is being made. Figure 1: The adaptive inverse model feedforward control system. The forward model, written
The current estimate of the hand position and velocity can be computed by assuming initial conditions x(t - T)=X(t - T) and ±(t - T)=X(t - T), and then solving Eq. 6. For the simulations, T has value of 200 msec, and is composed of 120 msec feedback delay, 60 msec descending neural path delay, and 20 msec muscle activation delay. Based on the current state estimate and the estimated error in trajectory, the desired acceleration is corrected using a linear feedback controller with constants Kp and Kv. The inverse model maps the hand acceleration to appropriate neural signal for the muscles Nc. The spinal reflex provides additional corrective feedback N R , when there is an error in the estimated and actual muscle state. Xd + Xc = Xd + Kp(Xd - x) + Kv(Xd - ±) 1;1 (xnew , x, ±) K 1(xm - xm) + K 2(±md - xm) (7) (8) (9) When the forward model is an exact copy of the forward plant dynamics jp= jp, and the inverse model is correct j;l = 1;1, the hand exactly tracks the desired trajectory. Errors due to an incorrect inverse model are corrected through the feedback loop. However, errors in the forward model cause deviations from the desired behavior and instability in the system due to inappropriate feedback action. 3 Simulations results and comparison to human behavior To test the two control architectures, we compared simulations of arm movements for the two methods to experimental human results under a novel force field environment. Sixteen human subjects were trained to make rapid point-to-point reaching Evidence for a Forward Dynamics Model in Human Adaptive Motor Control Inverse Model (1) Feedforward Control (2) ...~. ('.::::)(:::?). _.i .')-'" ...... Typical Subject .. "'.""".. !A i.. .. :; . .. : .... , c. .... :.~ ...... ) ",A ·o .. v ""' .... 021N\l ~:~ 0.5 1 1.5 O'IT[] 0.3 0.2 0, ~ffi1TI O.5 sec 1 15 Forward·lnverse Model Feedback Control .,~ r<:. .~ t>4J(: : ::~ ., . ./ .) •. / ~ .. o:lJIffl:J Q2~ O.S 1 1.5 04[m:J 0.3 0.' 
Ql o 0.5 1 1.5 ~w o O.5 sec 1 1.5 7 Figure 3: Performance in field B2 after a typical subject (middle column) and each of the controllers (left and right columns) had adapted to field B 1 . (1) hand paths for 8 movement directions, (2-5) hand velocity, speed, derivative of velocity direction, and segmented hand path for the -900 downward movement. The segmentation in hand trajectory that is observed in our subjects is almost precisely reproduced by the controller that uses a forward model. movements with their hand while an external force field, Fx = Bx, pushed on the hand. The task was to move the hand to a target position 10 cm away in 0.5 sec. The movement could be directed in any of eight equally spaced directions. The subjects made straight-path minimum-jerk movements to the targets in the absence of any force fields. The subjects were initially trained in force field Bl with B=[O 13;-130]' until they had completely adapted to this field and converged to the straight-path minimum-jerk movement observed before the force field was applied. Subsequently, the force field was switched to B2 with B=[O -13;13 0] (the new field pushed anticlockwise, instead of clockwise), and the first three movements in each direction were used for data analysis. The movements of the subjects in field B2 showed huge deviations from the desired straight path behavior because the subjects expected clockwise force field B 1 • The hand trajectories for the first movement in each of the eight directions are shown for a typical subject in Fig. 3 (middle column). Simulations were performed for the two methods under the same conditions as the human experiment. The movements were made in force field B 2 , while the internal models were assumed to be adapted to field B 1 . Complete adaptation to the force field Bl was found to occur for the two methods only when both 8 N. Bhushan and R. 
Shadmehr Expenmental Forward • data from • Model (a) 16 subjects Control :[[[1 &' I~ ~ III Q = A1(") dl(m) An t,(s) A,(") cJ(m/s' ) Ns Figure 4: The mean and standard deviation for segmentation parameters for each type of controller as compared to the data from our subjects. Parameters are defined in Fig. 3: Ai is angle about a seg. point, di is the distance to the i-th seg. point, ti is time to reach the i-th seg. point, Cj is cumulative squared jerk for the entire movement, Ns is number of seg. point in the movement. Up until the first segmentation point (AI and dd, behavior of the controllers are similar and both agree with the performance of our subjects. However, as the movement progresses, only the controller that utilizes a forward model continues to agree with the movement characteristics of the subjects. the inverse and forward models expected field B I . Fig. 3 (left column) shows the simulation of the adaptive inverse model feedforward control for movements in field B2 with the inverse model incorrectly expecting B I . Fig. 3 (right column) shows the simulation of the adaptive forward-inverse model feedback control for movements in field B2 with both the forward and the inverse model incorrectly expecting B I . Simulations with the two methods show clear differences in stability and corrective behavior for all eight directions of movement. The simulations with the inverse model feedforward control seem to be stable, and converge to the target along a straight line after the initial deviation. The simulations with the forward-inverse model feedback control are more unstable and have a curious kinematic pattern with discontinuities in the hand path. This is especially marked for the downward movement. The subject's hand paths show the same kinematic pattern of near discontinuities and segmentation of movement as found with the forward-inverse model feedback control. 
To quantify the segmentation pattern in the hand path, we identified the "near path-discontinuities" as points on the trajectory where there was a sudden change in both the derivative of hand speed and the direction of hand velocity. The hand path was segmented on the basis of these near discontinuities. Based on the first three segments in the hand trajectory we defined the following parameters: AI, angle between the first segment and the straight path to the target; dl , the distance covered during the first segment; A2, angle between the second segment and straight path to the target from the first segmentation point; t2, time duration of the second Evidence for a Forward Dynamics Model in Human Adaptive Motor Control 9 segment; A3, angle between the second and third segments; Ns, the number of segmentation points in the movement. We also calculated the cumulative jerk CJ in the movements to get a measure of the instability in the system. The results of the movement segmentation are presented in Fig. 4 for 16 human subjects, 25 simulations of the inverse model and 20 simulations of the forward model control for three movement directions (a) -900 downward, (b) 900 upward and (c) 1350 upward. We performed the different simulations for the two methods by systematically varying various model parameters over a reasonable physiological range. This was done because the parameters are only approximately known and also vary from subject to subject. The parameters of the second and third segment, as represented by A2, t2 and A3, clearly show that the forward model feedback control performs very differently from inverse model feedforward control and the behavior of human subjects is very well predicted by the former. Furthermore, this characteristic behavior could be produced by the forward-inverse model feedback control only when the forward model expected field B 1 . 
This could be accomplished only by adaptation of the forward model during initial practice in field B 1 • This provides evidence for an adaptive forward model in the control of human arm movements in novel dynamic environments. We further tried to fit adaptation curves of simulated movement parameters (using forward-inverse model feedback control) to real data as subjects trained in field B 1 . We found that the best fit was obtained for a rapidly adapting forward and inverse model (Bhushan and Shadmehr, 1999). This eliminated the possibility that the inverse model was trained offline after practice. The data, however, suggested that during learning of a force field, the rate of learning of the forward model was faster than the inverse model. This finding could be paricularly relevant if it is proven that a forward model is easier to learn than an inverse model (Narendra, 1990), and could provide a computational rationale for the existence of forward model in adaptive motor control. References Bhushan N, Shadmehr R (1999) Computational architecture of the adaptive controller during learning of reaching movements in force fields. Biol Cybern, in press. Jordan MI, Flash T, Arnon Y (1994) A model of learning arm trajectories from spatial deviations Journal of Cog Neur 6:359-376. Jordan MI, Rumelhart DE (1992) Forward model: supervised learning with a distal teacher. Cog Sc 16:307-354. Miall RC, Wolpert DM (1996) Forward models for phySiological motor control. Neural Networks 9:1265-1279. Narendra KS (1990) Identification and control of dynamical systems using neural networks. Neural Networks 1:4-27. Shadmehr R, Brashers-Krug T (1997) Functional stages in the formation of human longterm memory. J Neurosci 17:409-19. Shadmehr R, Mussa-Ivaldi FA (1994) Adaptive representation of dynamics during learning of a motor task. The Journal of Neuroscience 14:3208-3224. 
Soechting JF, Flanders M (1997) Evaluating an integrated musculoskeletal model of the human arm. J Biomech Eng 9:93-102. Wolpert DM, Ghahramani Z, Jordan MI (1995) An internal model for sensorimotor integration. Science 269:1880-82.
1998
Restructuring Sparse High Dimensional Data for Effective Retrieval Charles Lee Isbell, Jr. AT&T Labs 180 Park Avenue Room A255 Florham Park, NJ 07932-0971 Paul Viola Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139 Abstract The task in text retrieval is to find the subset of a collection of documents relevant to a user's information request, usually expressed as a set of words. Classically, documents and queries are represented as vectors of word counts. In its simplest form, relevance is defined to be the dot product between a document and a query vector-a measure of the number of common terms. A central difficulty in text retrieval is that the presence or absence of a word is not sufficient to determine relevance to a query. Linear dimensionality reduction has been proposed as a technique for extracting underlying structure from the document collection. In some domains (such as vision) dimensionality reduction reduces computational complexity. In text retrieval it is more often used to improve retrieval performance. We propose an alternative and novel technique that produces sparse representations constructed from sets of highly-related words. Documents and queries are represented by their distance to these sets, and relevance is measured by the number of common clusters. This technique significantly improves retrieval performance, is efficient to compute and shares properties with the optimal linear projection operator and the independent components of documents. 1 Introduction The task in text retrieval is to find the subset of a collection of documents relevant to a user's information request, usually expressed as a set of words. Naturally, we would like to apply techniques from natural language understanding to this problem. Unfortunately, the sheer size of the data to be represented makes this difficult. 
We wish to process tens or hundreds of thousands of documents, each of which may contain hundreds of thousands of different words. It is clear that any useful approach must be time and space efficient. Following (Salton, 1971), we adopt a modified Vector Space Model (VSM) for document representation. A document is a vector where each dimension is a count of occurrences for a different word¹. ¹In practice, suffixes are removed and counts are re-weighted by some function of their natural frequency Figure 1: A Model of Word Generation. Independent topics give rise to specific words according to an unknown probability distribution (line thickness indicates the likelihood of generating a word). A collection of documents is a matrix, D, where each column is a document vector d_i. Queries are similarly represented. We propose a topic-based model for the generation of words in documents. Each document is generated by the interaction of a set of independent hidden random variables called topics. When a topic is active it causes words to appear in documents. Some words are very likely to be generated by a topic and others less so. Different topics may give rise to some of the same words. The final set of observed words results from a linear combination of topics. See Figure 1 for an example. In this view of word generation, individual words are only weak indicators of underlying topics. Our task is to discover from data those collections of words that best predict the (unknown) underlying topics. The assumption that words are neither independent of one another nor conditionally independent of topics motivates our belief that this is possible. Our approach is to construct a set of linear operators which extract the independent topic structure of documents.
We have explored different algorithms for discovering these operators, including independent components analysis (Bell and Sejnowski, 1995). The inferred topics are then used to represent and compare documents. Below we describe our approach and contrast it with Latent Semantic Indexing (LSI), a technique that also attempts to linearly transform the documents from "word space" into one more appropriate for comparison (Hull, 1994; Deerwester et al., 1990). We show that the LSI transformation has very different properties than the optimal linear transformation. We characterize some of these properties and derive an unsupervised method that searches for them. Finally, we present experiments demonstrating the robustness of this method and describe several computational and space advantages.

2 The Vector Space Model and Latent Semantic Indexing

The similarity between two documents using the VSM model is their inner product, d_i^T d_j. Queries are just short documents, so the relevance of documents to a query, q, is D^T q. There are several advantages to this approach beyond its mathematical simplicity. Above all, it is efficient to compute and store the word counts. While the word-document matrix has a very large number of potential entries, most documents do not contain very many of the possible words, so it is sparsely populated. Thus, algorithms for manipulating the matrix only require space and time proportional to the average number of different words that appear in a document, a number likely to be much smaller than the full dimensionality of the document matrix (in practice, non-zero elements represent about 2% of the total number of elements). Nevertheless, VSM makes an important tradeoff by sacrificing a great deal of document structure, losing context that may disambiguate meaning. Any text retrieval system must overcome the fundamental difficulty that the presence or absence of a word is insufficient to determine relevance.
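As a concrete illustration of the inner-product relevance score D^T q described above, here is a minimal sketch; the toy documents, the query, and all names are illustrative, not from the paper:

```python
# Sketch of VSM relevance: the dot product of word-count vectors
# counts the occurrences of shared terms between document and query.
from collections import Counter

def vsm_score(doc_words, query_words):
    """Inner product of the count vectors of a document and a query."""
    d, q = Counter(doc_words), Counter(query_words)
    return sum(d[w] * q[w] for w in q)

docs = [["apple", "fruit", "pie"], ["apple", "computer", "company"]]
query = ["apple", "fruit"]
scores = [vsm_score(doc, query) for doc in docs]  # -> [2, 1]
```

Because most entries of a real word-document matrix are zero, storing counts sparsely as above (rather than as dense vectors) yields the space and time savings noted in the text.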
1 (continued) (Frakes and Baeza-Yates, 1992). We incorporate these methods; however, such details are unimportant for this discussion.

This is due to two intrinsic problems of natural language: synonymy and polysemy. Synonymy refers to the fact that a single underlying concept can be represented by many different words (e.g., "car" and "automobile" refer to the same class of objects). Polysemy refers to the fact that a single word can refer to more than one underlying concept (e.g., "apple" is both a fruit and a computer company). Synonymy results in false negatives and polysemy results in false positives. Latent semantic indexing is one proposal for addressing this problem. LSI constructs a smaller document matrix that retains only the most important information from the original, by using the Singular Value Decomposition (SVD). Briefly, the SVD of a matrix D is: D = U S V^T, where U and V contain orthogonal vectors and S is diagonal (see (Golub and Van Loan, 1993) for further properties and algorithms). Note that the co-occurrence matrix, D D^T, can be written as U S^2 U^T; U contains the eigenvectors of the co-occurrence matrix while the diagonal elements of S (referred to as singular values) contain the square roots of their corresponding eigenvalues. The eigenvectors with the largest eigenvalues capture the axes of largest variation in the data. In LSI, each document is projected into a lower dimensional space: D_hat = S_k^{-1} U_k^T D, where S_k and U_k contain only the largest k singular values and the corresponding eigenvectors, respectively. The resulting document matrix is of smaller size but still provably represents the most variation in the original matrix. Thus, LSI represents documents as linear combinations of orthogonal features. It is hoped that these features represent meaningful underlying "topics" present in the collection. Queries are also projected into this space, so the relevance of documents to a query is D^T U_k S_k^{-2} U_k^T q.
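A hedged numpy sketch of the LSI projection just described (each document is mapped to S_k^{-1} U_k^T d); the random matrix below is a placeholder for a real word-document matrix:

```python
import numpy as np

def lsi_project(D, k):
    """Project the columns of a word-by-document matrix D onto the
    top-k singular directions: D_hat = S_k^{-1} U_k^T D."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Uk, sk = U[:, :k], s[:k]
    return (Uk.T @ D) / sk[:, None]

rng = np.random.default_rng(0)
D = rng.random((50, 10))      # 50 words, 10 documents (placeholder data)
D_hat = lsi_project(D, k=3)   # 3-dimensional document representations
```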
This type of dimensionality reduction is very similar to principal components analysis (PCA), which has been used in other domains, including visual object recognition (Turk and Pentland, 1991). In practice, there is some evidence to suggest that LSI can improve retrieval performance; however, it is often the case that LSI improves text retrieval performance by only a small amount or not at all (see (Hull, 1994) and (Deerwester et al., 1990) for a discussion).

3 Do Optimal Projections for Retrieval Exist?

Hypotheses abound for the success of LSI, including: i) LSI removes noise from the document set; ii) LSI finds words that are synonyms; iii) LSI finds clusters of documents. Whatever it does, LSI operates without knowledge of the queries that will be presented to the system. We could instead attempt a supervised approach, searching for a matrix P such that D^T P P^T q results in large values for documents in D that are known to be relevant for a particular query, q. The choice for the structure of P embodies assumptions about the structure of D and q and what it means for documents and queries to be related. For example, imagine that we are given a collection of documents, D, and queries, Q. For each query we are told which documents are relevant. We can use this information to construct an optimal P such that D^T P P^T Q ≈ R, where R_ij equals 1 if document i is relevant to query j, and 0 otherwise. We find P in two steps. First we find an X minimizing ||D^T X Q - R||_F, where ||.||_F denotes the Frobenius norm of a matrix.2 Second, we find P by decomposing X into P P^T. Unfortunately, this may not be simple. The matrix P P^T has properties that are not necessarily shared by X. In particular, while P P^T is symmetric, there is no guarantee that X will be (in our experiments X is far from symmetric). We can however take the SVD of X = U_X S_X V_X^T, using matrix U_X to project the documents and V_X to project the queries.
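The two-step construction above (fit X by least squares, then split it with an SVD because X need not be symmetric) might be sketched as follows; the matrix sizes and the random relevance matrix are illustrative placeholders:

```python
import numpy as np

def fit_projection(D, Q, R):
    """Two-step fit: first M minimizing ||D^T M - R||_F, then X
    minimizing ||X Q - M||_F, then the SVD of X."""
    # Step 1: least-squares solve D^T M ~ R for M.
    M, *_ = np.linalg.lstsq(D.T, R, rcond=None)
    # Step 2: least-squares solve X Q ~ M for X (via Q^T X^T ~ M^T).
    Xt, *_ = np.linalg.lstsq(Q.T, M.T, rcond=None)
    X = Xt.T
    # X need not be symmetric, so split it with an SVD: X = Ux Sx Vx^T.
    Ux, Sx, Vxt = np.linalg.svd(X)
    return X, Ux, Vxt.T

rng = np.random.default_rng(1)
D = rng.random((30, 8))    # 30 words, 8 documents (placeholder data)
Q = rng.random((30, 5))    # 5 queries
R = rng.integers(0, 2, size=(8, 5)).astype(float)  # relevance labels
X, Ux, Vx = fit_projection(D, Q, R)
```

Documents would then be projected with U_X and queries with V_X, as in the text.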
We can now compare LSI's projection axes, U, with the optimal U_X computed as above. One measure of comparison is the distribution of documents as projected onto these axes. Figure 2a shows the distribution of Medline documents3 projected onto the first axis of U_X. Notice that there is a large spike near zero, and a well-separated outlier spike. The kurtosis of this distribution is 44. Subsequent axes of U_X result in similar distributions. We might hope that these axes each represent a topic shared by a few documents. Figure 2b shows the distribution of documents projected onto the first LSI axis. This axis yields a distribution with a much lower kurtosis of 6.9 (a normal distribution has kurtosis 3). This induces a distribution that looks nothing like a cluster: there is a smooth continuum of values. Similar distributions result for many of the first 100 axes. These results suggest that LSI-like approaches may well be searching for projections that are suboptimal. In the next section, we describe an algorithm designed to find projections that look more like those in Figure 2a than in Figure 2b.

2 First find M that minimizes ||D^T M - R||_F; X is the matrix that minimizes ||X Q - M||_F.
3 Medline is a small test collection, consisting of 1033 documents and about 8500 distinct words. We have found similar results for other, larger collections.

Figure 2: (A) The distribution of Medline documents projected onto one of the "optimal" axes. The kurtosis of this distribution is 44. (B) The distribution of Medline documents projected onto one of the LSI axes. The kurtosis of this distribution is 6.9. (C) The distribution of Medline documents projected onto one of the ICA axes. The kurtosis of this distribution is 60.

4 Topic Centered Representations

There are several problems with the "optimal" approach described in the previous section.
Aside from its completely supervised nature, there may be a problem of over-fitting: the number of parameters in X (the number of words squared) can be large compared to the number of documents and queries. It is not clear how to move towards a solution that will likely have low generalization error, our ultimate goal. Further, computing X is expensive, involving several full-rank singular value decompositions. On the other hand, while we may not be able to take advantage of supervision, it seems reasonable to search for projections like those in Figure 2a. There are several unsupervised techniques we might use. We begin with independent component analysis (Bell and Sejnowski, 1995), a technique that has recently gained popularity. Extensions such as (Amari, Cichocki and Yang, 1996) have made the algorithm more efficient and robust.

4.1 What are the Independent Components of Documents?

Figure 2C shows the distribution of Medline documents along one of the ICA axes (kurtosis 60). It is representative of other axes found for that collection, and for other, larger collections. Like the optimal axes found earlier, this axis also separates documents. This is desirable because it means that the axes are distinguishing groups of (presumably related) documents. Still, we can ask a more interesting question; namely, how do these axes group words? Rather than project our documents onto the ICA space, we can project individual words (this amounts to projecting the identity matrix onto that space) and observe how ICA redistributes them. Figure 3 shows a typical distribution of all the words along one of the axes found by ICA on the

Figure 3: The distribution of words with large magnitude along an ICA axis from the White House collection. Labeled words in the figure include "africa," "apartheid," "anc," "transition," "mandela," "continent," "elite," "ethiopia," and "saharan."
White House collection.4 ICA induces a highly kurtotic distribution over the words. It is also quite sparse: most words have a value very close to zero. The histogram shows only the words with large values, both positive and negative. One group of words is made up of highly-related words; namely, "africa," "apartheid," and "mandela." The other is made up of words that have no obvious relationship to one another. In fact, these words are not directly related, but each co-occurs with different individual words in the first group. For example, "saharan" and "africa" occur together many times, but not in the context of apartheid and South Africa; rather, in documents concerning US policy toward Africa in general. As it so happens, "saharan" acts as a discriminating word for these subtopics.

4.2 Topic Centered Representations

It appears that ICA is finding a set of words, S, that selects for related documents, H, along with another set of words, T, whose elements do not select for H, but co-occur with elements of S. Intuitively, S selects for documents in a general subject area, and T removes a specific subset of those documents, leaving a small set of highly related documents. This suggests a straightforward algorithm to achieve the same goal directly:

foreach topic, C_k, you wish to define:
- Choose a source document d_c from D
- Let b be the documents of D sorted by similarity to d_c
- Divide b into three groups: those assumed to be relevant, those assumed to be completely irrelevant, and those assumed to be weakly relevant
- Let G_k, B_k, and M_k be the centroid of each respective group
- Let C_k = f(G_k - B_k) - f(M_k - G_k), where f(x) = max(x, 0)

The three groups of documents are used to drive the discovery of two sets of words. One set selects for documents in a general topic area by finding the set of words that distinguish the relevant documents from documents in general, a form of global clustering.
The other set of words distinguishes the weakly-related documents from the relevant documents. Assigning them negative weight results in their removal. This leaves only a set of closely related documents. This local clustering approach is similar to an unsupervised version of Rocchio with Query Zoning (Singhal, 1997).

4 The White House collection contains transcripts of press releases and press conferences from 1993. There are 1585 documents and 18675 distinct words.

Figure 4: A comparison of different algorithms on the Wall Street Journal (precision vs. recall for Baseline, LSI, Documents as Clusters, Relevant Documents as Clusters, and ICA Topic Clustering).

5 Experiments

In this section, we show results of experiments with the Wall Street Journal collection. It contains 42,652 documents and 89,757 words. Following convention, we measure the success of a text retrieval system using precision-recall curves.5 Figure 4 illustrates the performance of several algorithms:
1. Baseline: the standard inner product measure, D^T q.
2. LSI: Latent Semantic Indexing.
3. Documents as Clusters: each document is a projection axis. This is equivalent to a modified inner product measure, D^T D D^T q.
4. Relevant Documents as Clusters: in order to simulate pseudo-relevance feedback, we use the centroid of the top few documents returned by the D^T q similarity measure.
5. ICA: Independent Component Analysis.
6. Topic Clustering: the algorithm described in Section 4.2.

In this graph, we restrict queries to those that have at least fifty relevant documents. The topic clustering approach and ICA perform best, maintaining higher average precision over all ranges. Unlike smaller collections such as Medline, documents from this collection do not tend to cluster around the queries naturally.
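A sketch of the Section 4.2 topic construction, under the assumption that similarity to the source document is measured by the inner product; the group sizes and random data are illustrative (G_k, B_k, M_k are the centroids of the relevant, irrelevant, and weakly relevant groups):

```python
import numpy as np

def make_topic(D, source_idx, n_rel=5, n_weak=20):
    """Build one topic vector C_k = f(G_k - B_k) - f(M_k - G_k),
    with f(x) = max(x, 0), from a word-by-document matrix D."""
    sims = D.T @ D[:, source_idx]        # similarity to the source doc
    order = np.argsort(-sims)            # documents sorted by similarity
    rel, weak, irrel = order[:n_rel], order[n_rel:n_weak], order[n_weak:]
    G = D[:, rel].mean(axis=1)           # centroid of relevant docs
    M = D[:, weak].mean(axis=1)          # centroid of weakly relevant docs
    B = D[:, irrel].mean(axis=1)         # centroid of irrelevant docs
    f = lambda x: np.maximum(x, 0.0)
    return f(G - B) - f(M - G)

rng = np.random.default_rng(2)
D = rng.random((40, 60))                 # 40 words, 60 documents (toy data)
c = make_topic(D, source_idx=0)          # one topic vector over the words
```

Documents and queries would then be compared by how many such topic clusters they share.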
As a result, the baseline inner product measure performs poorly. Other clustering techniques that tend to work well on collections such as Medline perform even worse. Finally, LSI does not perform well. Figure 5 illustrates different approaches on subsets of Wall Street Journal queries. In general, as each query has more and more relevant documents, overall performance improves. In particular, the simple clustering scheme using only relevant documents performs very well. Nonetheless, our approach improves upon this standard technique with minimal additional computation.

5 When asked to return n documents, precision is the percentage of those which are relevant. Recall is the percentage of the total relevant documents which are returned.

Figure 5: (A) Performance of various clustering techniques for those queries with more than 75 relevant documents. (B) Performance for those queries with more than 100 relevant documents.

6 Discussion

We have described typical dimension reduction techniques used in text retrieval and shown that these techniques make strong assumptions about the form of projection axes. We have characterized another set of assumptions and derived an algorithm that enjoys significant computational and space advantages. Further, we have described experiments that suggest that this approach is robust. Finally, much of what we have described here is not specific to text retrieval. Hopefully, similar characterizations will apply to other sparse high-dimensional domains.

References

Amari, S., Cichocki, A., and Yang, H. (1996). A new learning algorithm for blind source separation. In Advances in Neural Information Processing Systems.
Bell, A. and Sejnowski, T. (1995). An information-maximization approach to blind source separation and blind deconvolution. Neural Computation, 7:1129-1159.
Deerwester, S., Dumais, S. T., Landauer, T. K., Furnas, G.
W., and Harshman, R. A. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407.
Frakes, W. B. and Baeza-Yates, R., editors (1992). Information Retrieval: Data Structures and Algorithms. Prentice-Hall.
Golub, G. H. and Van Loan, C. F. (1993). Matrix Computations. The Johns Hopkins University Press.
Hull, D. (1994). Improving text retrieval for the routing problem using latent semantic indexing. In Proceedings of the 17th ACM/SIGIR Conference, pages 282-290.
Kwok, K. L. (1996). A new method of weighting query terms for ad-hoc retrieval. In Proceedings of the 19th ACM/SIGIR Conference, pages 187-195.
O'Brien, G. W. (1994). Information management tools for updating an SVD-encoded indexing scheme. Technical Report UT-CS-94-259, University of Tennessee.
Sahami, M., Hearst, M., and Saund, E. (1996). Applying the multiple cause mixture model to text categorization. In Proceedings of the 13th International Machine Learning Conference.
Salton, G., editor (1971). The SMART Retrieval System: Experiments in Automatic Document Processing. Prentice-Hall.
Singhal, A. (1997). Learning routing queries in a query zone. In Proceedings of the 20th International Conference on Research and Development in Information Retrieval.
Turk, M. A. and Pentland, A. P. (1991). Face recognition using eigenfaces. In IEEE Conference on Computer Vision and Pattern Recognition, pages 586-591.
|
1998
|
81
|
1,583
|
Classification in Non-Metric Spaces

Daphna Weinshall 1,2  David W. Jacobs 1  Yoram Gdalyahu 2
1 NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA
2 Inst. of Computer Science, Hebrew University of Jerusalem, Jerusalem 91904, Israel

Abstract

A key question in vision is how to represent our knowledge of previously encountered objects to classify new ones. The answer depends on how we determine the similarity of two objects. Similarity tells us how relevant each previously seen object is in determining the category to which a new object belongs. Here a dichotomy emerges. Complex notions of similarity appear necessary for cognitive models and applications, while simple notions of similarity form a tractable basis for current computational approaches to classification. We explore the nature of this dichotomy and why it calls for new approaches to well-studied problems in learning. We begin this process by demonstrating new computational methods for supervised learning that can handle complex notions of similarity. (1) We discuss how to implement parametric methods that represent a class by its mean when using non-metric similarity functions; and (2) We review non-parametric methods that we have developed using nearest neighbor classification in non-metric spaces. Point (2), and some of the background of our work, have been described in more detail in [8].

1 Supervised Learning and Non-Metric Distances

How can one represent one's knowledge of previously encountered objects in order to classify new objects? We study this question within the framework of supervised learning: it is assumed that one is given a number of training objects, each labeled as belonging to a category; one wishes to use this experience to label new test instances of objects. This problem emerges both in the modeling of cognitive processes and in many practical applications.
For example, one might want to identify risky applicants for credit based on past experience with clients who have proven to be good or bad credit risks. Our work is motivated by computer vision applications. Most current computational approaches to supervised learning suppose that objects can be thought of as vectors of numbers, or equivalently as points lying in an n-dimensional space. They further suppose that the similarity between objects can be determined from the Euclidean distance between these vectors, or from some other simple metric. This classic notion of similarity as Euclidean or metric distance leads to considerable mathematical and computational simplification. However, work in cognitive psychology has challenged such simple notions of similarity as models of human judgment, while applications frequently employ non-Euclidean distances to measure object similarity. We consider the need for similarity measures that are not only non-Euclidean, but that are non-metric. We focus on proposed similarities that violate one requirement of a metric distance, the triangle inequality. This states that if we denote the distance between objects A and B by d(A, B), then:

for all A, B, C:  d(A, B) + d(B, C) >= d(A, C).

Distances violating the triangle inequality must also be non-Euclidean. Data from cognitive psychology has demonstrated that similarity judgments may not be well modeled by Euclidean distances. Tversky [12] has demonstrated instances in which similarity judgments may violate the triangle inequality. For example, close similarity between Jamaica and Cuba and between Cuba and Russia does not imply close similarity between Jamaica and Russia (see also [10]). Non-metric similarity measures are frequently employed for practical reasons, too (cf. [5]).
In part, work in robust statistics [7] has shown that methods that will survive the presence of outliers, which are extraneous pieces of information or information containing extreme errors, must employ non-Euclidean distances that in fact violate the triangle inequality; related insights have spurred the widespread use of robust methods in computer vision (reviewed in [5] and [9]). We are interested in handling a wide range of non-metric distance functions, including those that are so complex that they must be treated as a black box. However, to be concrete, we will focus here on two simple examples of such distances:

median distance: This distance assumes that objects are representable as a set of features whose individual differences can be measured, so that the difference between two objects is representable as a vector: d = (d_1, d_2, ..., d_n). The median distance between the two objects is just the median value in this vector. Similarly, one can define a k-median distance by choosing the k'th lowest element in this list. k-median distances are often used in applications (cf. [9]), because they are unaffected by the exact values of the most extreme differences between the objects. Only the features that are most similar determine the distance's value. The k-median distance can violate the triangle inequality to an arbitrary degree (i.e., there are no constraints on the pairwise distances between three points).

robust non-metric L^p distances: Given a difference vector d, an L^p distance has the form:

    (sum_{i=1}^n d_i^p)^{1/p}    (1)

and is non-metric for p < 1. Figure 1 illustrates why these distances present significant new challenges in supervised learning. Suppose that given some datapoints (two in Fig. 1), we wish to classify each new point as coming from the same category as its nearest neighbor. Then we need to determine the Voronoi diagram generated by our data: a division of the plane into regions in which the points all have the same nearest neighbor. Fig.
1 shows how the Voronoi diagram changes with the function used to compute the distance between datapoints; the non-metric diagrams (rightmost three pictures in Fig. 1) are more complex and more likely to make non-intuitive predictions. In fact, very little is known about the computation of non-metric Voronoi diagrams. We now describe new parametric methods for supervised learning with non-metric distances, and review non-parametric methods that we described in [8].

Figure 1: The Voronoi diagram for two points using, from left to right, p-distances with p = 2 (Euclidean), p = 1 (Manhattan, which is still metric), the non-metric distances arising from p = 0.5, p = 0.2, and the min (1-median) distance. The min distance in 2-D illustrates the behavior of the other median distances in higher dimensions. The region of the plane closer to one point is shown in black, and closer to the other in white.

2 Parametric methods: what should replace the mean

Parametric methods typically represent objects as vectors in a high-dimensional space, and represent classes and the boundaries between them in this space using geometric constructions or probability distributions with a limited number of parameters. One can attempt to extend these techniques to specific non-metric distances, such as the median distance, or non-metric L^p distances. We discuss the example of the mean of a class below. One can also redefine geometric objects such as linear separators for specific non-metric distances. However, existing algorithms for finding such objects in Euclidean spaces will no longer be directly suitable, nor will theoretical results about such representations hold. Many problems are therefore open in determining how to best apply parametric supervised learning techniques to specific non-metric distances.
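The two distances above can be sketched directly; the min (1-median) example below also exhibits the triangle-inequality violation discussed earlier (the point coordinates are illustrative):

```python
import numpy as np

def k_median_dist(x, y, k):
    """k-th lowest per-feature absolute difference between x and y."""
    diffs = np.sort(np.abs(np.asarray(x) - np.asarray(y)))
    return diffs[k - 1]

def lp_dist(x, y, p):
    """Robust L^p distance of Eq. (1); non-metric for p < 1."""
    d = np.abs(np.asarray(x) - np.asarray(y))
    return np.sum(d ** p) ** (1.0 / p)

# The min (1-median) distance violates the triangle inequality:
# d(a,b) = 0 and d(b,c) = 0, yet d(a,c) = 5.
a, b, c = [0.0, 0.0], [0.0, 5.0], [5.0, 5.0]
violated = k_median_dist(a, c, 1) > k_median_dist(a, b, 1) + k_median_dist(b, c, 1)
```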
We analyze k-means clustering where each class is represented by its average member; new elements are then classified according to which of these prototypical examples is nearest. In Euclidean space, the mean is the point q_bar whose sum of squared distances to all the class members {q_i}_{i=1}^n - (sum_{i=1}^n d(q_bar, q_i)^2)^{1/2} - is minimized. Suppose now that our data come from a vector space where the correct distance is the L^p distance from (1). Using the natural extension of the above definition, we should represent each class by the point q_bar whose sum of distances to all the class members - (sum_{i=1}^n d(q_bar, q_i)^p)^{1/p} - is minimal. It is now possible to show (proof is omitted) that for p < 1 (the non-metric cases), the exact value of every feature of the representative point q_bar must have already appeared in at least one element in the class. Moreover, the value of these features can be determined separately with complexity O(n^2), and total complexity of O(dn^2) given d features. q_bar is therefore determined by a mixture of up to d exemplars, where d is the dimension of the vector space. Thus there are efficient algorithms for finding the "mean" element of a class, even using certain non-metric distances. We will illustrate these results with a concrete example using the Corel database, a commercial database of images pre-labeled by categories (such as "lions"), where non-metric distance functions have proven effective in determining the similarity of images [1]. The Corel database is very large, making the use of prototypes desirable. We represent each image using a vector of 11 numbers describing general image properties, such as color histograms, as described in [1]. We consider the Euclidean and L^{0.5} distances, and their corresponding prototypes: the mean and the L^{0.5}-prototype computed according to the result above.
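The coordinate-wise search implied by the result above can be sketched as follows: since each feature of the L^p prototype must equal a value observed in some class member, an O(n^2) scan per feature suffices (the data below are toy values):

```python
import numpy as np

def lp_prototype(Q, p=0.5):
    """Q: n x d array of class members. Returns the d-vector q_bar
    minimizing sum_i |v - Q[i, f]|^p independently in each feature f,
    restricting candidates v to the observed values (O(n^2) per feature)."""
    n, d = Q.shape
    proto = np.empty(d)
    for f in range(d):
        vals = Q[:, f]
        costs = [np.sum(np.abs(v - vals) ** p) for v in vals]
        proto[f] = vals[int(np.argmin(costs))]
    return proto

Q = np.array([[0.0, 1.0],
              [0.0, 2.0],
              [10.0, 2.0]])       # 3 class members, 2 features (toy data)
proto = lp_prototype(Q, p=0.5)    # -> [0.0, 2.0]
```

Note how each coordinate of the prototype is an observed value, so the prototype is a mixture of up to d exemplars, as claimed in the text.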
Given the first 45 classes, each containing 100 images, we found their corresponding prototypes; we then computed the percentage of images in each class that are closest to their own prototype, using either the Euclidean or the L^{0.5} distance and one of the two prototypes. The results are the following (entries missing in the source are left blank):

                           mean     existing features
    L^{0.5} distance                25%
    Euclidean distance     20%

In the first column, the prototype is computed using the Euclidean mean. In the second column the prototype is computed using an L^{0.5} distance. In each row, a different function is used to compute the distance from each item to the cluster prototype. Best results are indeed obtained with the non-metric L^{0.5} distance and the correct prototype for this particular distance. While performance in absolute terms depends on how well this data clusters using distances derived from a simple feature vector, relative performance of different methods reveals the advantage of using a prototype computed with a non-metric distance. Another important distance function is the generalized Hamming distance: given two vectors of features, their distance is the number of features which are different in the two vectors. This distance was assumed in psychophysical experiments which used artificial objects (Fribbles) to investigate human categorization and object recognition [13]. In agreement with experimental results, the prototype q_bar for this distance computed according to the definition above is the vector of "modal" features - the most common feature value computed independently at each feature.

3 Non-Parametric Methods: Nearest Neighbors

Non-parametric classification methods typically represent a class directly by its exemplars. Specifically, nearest-neighbor techniques classify new objects using only their distance to labeled exemplars. Such methods can be applied using any non-metric distance function, treating the function as a black box.
However, nearest-neighbor techniques must also be modified to apply well to non-metric distances. The insights we gain below from doing this can form the basis of more efficient and effective computer algorithms, and of cognitive models for which examples of a class are worth remembering. This section summarizes work described in [8]. Current efficient algorithms for finding the nearest neighbor of a class work only for metric distances [3]. The alternative of a brute-force approach, in which a new object is explicitly compared to every previously seen object, is desirable neither computationally nor as a cognitive model. A natural approach to handling this problem is to represent each class by a subset of its labeled examples. Such methods are called condensing algorithms. Below we develop condensing methods for selecting a subset of the training set which minimizes errors in the classification of new datapoints, taking into account the non-metric nature of the distance. In designing a condensing method, one needs to answer the question: when is one object a good substitute for another? Earlier methods (e.g., [6, 2]) make use of the fact that the triangle inequality guarantees that when two points are similar to each other, their patterns of similarities to other points are not very different. Thus, in a metric space, there is no reason to store two similar datapoints; one can easily substitute for the other. Things are different in non-metric spaces.

Figure 2: a) Two clusters of labeled points (left) and their Voronoi diagram (right) computed using the 1-median (min) distance. Cluster P consists of four points (black squares) all close together both according to the median distance and the Euclidean distance. Cluster Q consists of five points (black crosses) all having the same x coordinate, and so all are separated by zero distance using the median (but not Euclidean) distance.
We wish to select a subset of points to represent each class, while changing this Voronoi diagram as little as possible. b) All points in class Q have zero distance to each other, using the min distance. So distance provides no clue as to which are interchangeable. However, the top points (q_1, q_2) have distances to the points in class P that are highly correlated with each other, and poorly correlated with the bottom points (q_3, q_4, q_5). Without using correlation as a clue, we might represent Q with two points from the bottom (which are nearer the boundary with P, a factor preferred in existing approaches). This changes the Voronoi diagram drastically, as shown on the left. Using correlation as a clue, we select points from the top and bottom, changing the Voronoi diagram much less, as shown on the right.

Specifically, what we really need to know is when two objects will have similar distances to other objects, yet unseen. We estimate this quantity using the correlation between two vectors: the vector of distances from one datapoint to all the other training data, and the vector of distances from the second datapoint to all the remaining training data.1 It can be shown (proof is omitted) that in a Euclidean space the similarity between two points is the best measure of how well one can substitute for the other, whereas in a non-metric space the aforementioned vector correlation is a substantially better measure. Fig. 2 illustrates this result. We now draw on these insights to produce concrete methods for representing classes in non-metric spaces, for nearest neighbor classification. We compare three algorithms. The first two algorithms, random selection (cf. [6]) and boundary detection (e.g., [11]), represent old condensing ideas: in the first we pick a random selection of class representatives, in the second we use points close to class boundaries as representatives.
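The substitutability measure just described can be sketched as the correlation between rows of a pairwise distance matrix; the use of the min distance and the random points below are illustrative assumptions:

```python
import numpy as np

def dist_corr(i, j, dist_matrix):
    """Correlation of the distance vectors of exemplars i and j,
    excluding their distances to i and j themselves."""
    mask = np.ones(dist_matrix.shape[0], dtype=bool)
    mask[[i, j]] = False
    x, y = dist_matrix[i, mask], dist_matrix[j, mask]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(3)
pts = rng.random((10, 4))                # 10 training points, 4 features
# Pairwise min (1-median) distance matrix.
Dm = np.abs(pts[:, None, :] - pts[None, :, :]).min(axis=2)
r = dist_corr(0, 1, Dm)                  # high r => good substitutes
```

Two exemplars with highly correlated distance vectors are interchangeable representatives, even when their direct distance is uninformative (as for cluster Q in Figure 2).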
The last algorithm uses new ideas: correlation selection includes in the representative set points which are least correlated with the other class members and representatives. To be fair in our comparison, all algorithms were constrained to select the same number of representative points for each class. During the simulation, each of 1000 test datapoints was classified based on: (1) all the data, (2) the representatives computed by each of the three algorithms. For each algorithm, the test is successful if the two methods (classification based on all the data and based on the chosen representatives) give the same results. Fig. 3a-c summarizes representative results of our simulations. See [8] for details.

¹Given two datapoints X, Y and x, y ∈ Rⁿ, where x is the vector of distances from X to all the other training points and y is the corresponding vector for Y, we measure the correlation between the datapoints using the statistical correlation coefficient between x and y: corr(X, Y) = corr(x, y) = E[(x − μx)(y − μy)] / (σx σy), where μx, μy denote the means of x, y respectively, and σx, σy denote the standard deviations of x, y respectively.

Classification in Non-Metric Spaces 843

[Figure 3 plots: percent correct (with error bars) for the correlation, boundary, and random algorithms, over the median, L0.2, L0.5, and Euclidean distances in panels a-c, and for 5 and 7 representatives in panel d.]

Figure 3: Results: values of percent correct scores, as well as error bars giving the standard deviation calculated over 20 repetitions of each test block when appropriate.
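The correlation measure defined in the footnote above can be sketched in a few lines. The function name and the pairwise-distance-matrix interface below are our own illustration, not code from the paper:

```python
import numpy as np

def distance_correlation(D, i, j):
    """Correlation between the distance vectors of datapoints i and j,
    as in the footnote: each point is described by its vector of (possibly
    non-metric) distances to all the remaining training data, and a high
    correlation suggests one point can substitute for the other.
    D is an (n, n) matrix of pairwise distances."""
    others = [k for k in range(D.shape[0]) if k not in (i, j)]
    x = D[i, others]  # distances from point i to the remaining training data
    y = D[j, others]  # distances from point j to the same points
    x_c, y_c = x - x.mean(), y - y.mean()
    return float((x_c @ y_c) / (np.linalg.norm(x_c) * np.linalg.norm(y_c)))
```

Correlation selection would then prefer representatives whose distance vectors are least correlated with those of the points already chosen, so that each representative covers a distinct part of the class.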
Each graph contains 3 plots, giving the percent correct score for each of the three algorithms described above: random (selection), boundary (detection), and (selection based on) correlation. (a-c) Simulation results: data is chosen from R25. 30 clusters were randomly chosen, each with 30 datapoints. The distribution of points in each class was: (a) normal; (b) normal, where in half the datapoints one random coordinate was modified (thus the points cluster around a prototype, but many class members vary widely in one random dimension); (c) union of 2 concentric normal distributions, one spherical and one elongated elliptical (thus the points cluster around a prototype, but may vary significantly in a few non-defining dimensions). Each plot gives 4 values, for each of the different distance functions used here: median, L0.2, L0.5 and L2. (d) Real data: the number of representatives chosen by the algorithm was limited to 5 (first column) and 7 (second column). To test our method with real images, we used the local curve matching algorithm described in [4]. This non-metric curve matching algorithm was specifically designed to compare curves which may be quite different, and return the distance between them. The training and test data are shown in Fig. 4. Results are given in Fig. 3d. The simulations and the real data demonstrate a significant advantage to our new method. Almost as important, in metric spaces (4th column in Fig. 3a-c) or when the classes lack any "interesting" structure (Fig. 3a), our method is not worse than existing methods. Thus it should be used to guarantee good performance when the nature of the data and the distance function is not known a priori.

References

[1] Cox, I., Miller, M., Omohundro, S., and Yianilos, P., 1996, "PicHunter: Bayesian Relevance Feedback for Image Retrieval," Proc. of ICPR, C:361-369.
Figure 4: Real data used to test the three algorithms, including 2 classes with 30 images each: a) 12 examples from the first class of 30 cow contours, obtained from different viewpoints of the same cow. b) 12 examples from the second class of 30 car contours, obtained from different viewpoints of 2 similar cars. c) 12 examples from the set of 30 test cow contours, obtained from different viewpoints of the same cow with possibly additional occlusion. d) 2 examples of the real images from which the contours in a) are obtained.

[2] Dasarathy, B., 1994, "Minimal Consistent Set (MCS) Identification for Optimal Nearest Neighbor Decision Systems Design," IEEE Trans. on Systems, Man and Cybernetics, 24(3):511-517.
[3] Friedman, J., Bentley, J., Finkel, R., 1977, "An Algorithm for Finding Best Matches in Logarithmic Expected Time," ACM Trans. on Math. Software, 3(3):209-226.
[4] Gdalyahu, Y. and D. Weinshall, 1997, "Local Curve Matching for Object Recognition without Prior Knowledge," Proc. DARPA Image Understanding Workshop, 1997.
[5] Haralick, R. and L. Shapiro, 1993, Computer and Robot Vision, Vol. 2, Addison-Wesley Publishing.
[6] Hart, P., 1968, "The Condensed Nearest Neighbor Rule," IEEE Trans. on Information Theory, 14(3):515-516.
[7] Huber, P., 1981, Robust Statistics, John Wiley and Sons.
[8] Jacobs, D., Weinshall, D., and Gdalyahu, Y., 1998, "Condensing Image Databases when Retrieval is based on Non-Metric Distances," Int. Conf. on Computer Vision:596-601.
[9] Meer, P., D. Mintz, D. Kim and A. Rosenfeld, 1991, "Robust Regression Methods for Computer Vision: A Review," Int. J. of Comp. Vis., 6(1):59-70.
[10] Rosch, E., 1975, "Cognitive Reference Points," Cognitive Psychology, 7:532-547.
[11] Tomek, I., 1976, "Two modifications of CNN," IEEE Trans. Syst., Man, Cyber., SMC-6(11):769-772.
[12] Tversky, A., 1977, "Features of Similarity," Psychological Review, 84(4):327-352.
[13] Williams, P., "Prototypes, Exemplars, and Object Recognition," submitted.

PART VIII APPLICATIONS
|
1998
|
82
|
1,584
|
Approximate Learning of Dynamic Models

Xavier Boyen, Computer Science Dept., Stanford University, Stanford, CA 94305-9010, xb@cs.stanford.edu
Daphne Koller, Computer Science Dept., Stanford University, Stanford, CA 94305-9010, koller@cs.stanford.edu

Abstract

Inference is a key component in learning probabilistic models from partially observable data. When learning temporal models, each of the many inference phases requires a traversal over an entire long data sequence; furthermore, the data structures manipulated are exponentially large, making this process computationally expensive. In [2], we describe an approximate inference algorithm for monitoring stochastic processes, and prove bounds on its approximation error. In this paper, we apply this algorithm as an approximate forward propagation step in an EM algorithm for learning temporal Bayesian networks. We provide a related approximation for the backward step, and prove error bounds for the combined algorithm. We show empirically that, for a real-life domain, EM using our inference algorithm is much faster than EM using exact inference, with almost no degradation in quality of the learned model. We extend our analysis to the online learning task, showing a bound on the error resulting from restricting attention to a small window of observations. We present an online EM learning algorithm for dynamic systems, and show that it learns much faster than standard offline EM.

1 Introduction

In many real-life situations, we are faced with the task of inducing the dynamics of a complex stochastic process from limited observations about its state over time. Until now, hidden Markov models (HMMs) [12] have played the largest role as a representation for learning models of stochastic processes. Recently, however, there has been increasing use of more structured models of stochastic processes, such as factorial HMMs [8] or dynamic Bayesian networks (DBNs) [4].
Such structured decomposed representations allow complex processes over a large number of states to be encoded using a much smaller number of parameters, thereby allowing better generalization from limited data [8, 7, 13]. Furthermore, the natural structure of such processes makes it easier for a human expert to incorporate prior knowledge about the domain structure into the model, thereby improving its inductive bias.

Approximate Learning of Dynamic Models 397

Both parameter and structure learning algorithms for dynamic models [12, 7] use probabilistic inference as a crucial component. An inference routine is called multiple times in order to "fill in" missing data with its expected value according to the current hypothesis; the resulting expected sufficient statistics are then used to construct a new hypothesis. The inference step is used many times, each of which iterates over the entire sequence. This behavior is problematic in two important respects. First, in many settings, we may not have access to the entire sequence in advance. Second, the various structured representations of stochastic processes do not admit an effective inference procedure. The messages propagated by exact inference algorithms include an entry for each possible state of the system; the number of states is exponential in the size of our model, rendering this type of computation infeasible in all but the smallest of problems. In this paper, we describe and analyze an approach that helps us address both of these problems. In [2], we proposed a new approach to approximate inference in stochastic processes, where approximate distributions that admit compact representation are maintained and propagated. Our approach can achieve exponential savings over exact inference for DBNs. We showed empirically that, for a practical DBN [6], our approach results in a factor 15-20 reduction in running time at only a small cost in accuracy.
We also proved that the accumulated error arising from the repeated approximations remains bounded indefinitely over time. This result relied on an analysis showing that transition through a stochastic process is a contraction for relative entropy (KL-divergence) [3]. Here, we apply this approach to the parameter learning task. This application is not completely straightforward, since our algorithm of [2] and the associated analysis only applied to the forward propagation of messages, whereas the inference used in learning algorithms requires propagation of information from the entire sequence. In this paper, we provide an analysis of the error accumulated by an approximate inference process in the backward propagation phase of inference. This analysis is quite different from the contraction analysis for the forward phase. We combine these two results to prove bounds on the error of the expected sufficient statistics relayed to the learning algorithm at each stage. We then present empirical results for a practical DBN, illustrating the performance of this approximate learning algorithm. We show that speedups of 15-20 can be obtained easily, with no discernible loss in the quality of the learned hypothesis. Our theoretical analysis also suggests a way of dealing with the problematic need to reason about the entire sequence of temporal observations at once. Our contraction results show that it is legitimate to ignore observations that are very far in the future. Thus, we can compute a very accurate approximation to the backward message by considering only a small window of observations in the future. This idea leads to an efficient online learning algorithm. We show that it converges to a good hypothesis much faster than the standard offline EM algorithm, even in settings favorable to the latter.

2 Preliminaries

A model for a dynamic system is specified as a tuple (B, Θ) where B represents the qualitative structure of the model, and Θ the appropriate parameterization.
In a DBN, the instantaneous state of a process is specified in terms of a set of variables X1, ..., Xn. Here, B encodes a network fragment which specifies, for each time-t variable Xk(t), the set of parents Parents(Xk(t)); an example fragment is shown in Figure 1(a). The parameters Θ define for each Xk(t) a conditional probability table P[Xk(t) | Parents(Xk(t))]. For simplicity, we assume that the variables are partitioned into state variables, which are never observed, and observation variables, which are always observed. We also assume that the observation variables at time t depend only on state variables at time t. We use T to denote the transition matrix over the state variables in the stochastic process; i.e., Ti,j is the transition probability from state si to state sj. Note that this concept is well-defined even for a DBN, although in that case, the matrix is represented implicitly via the other parameters. We use O to denote the observation matrix; i.e., Oi,j is the probability of observing response rj in state si. Our goal is to learn the model for the stochastic process from partially observable data. To simplify our discussion, we focus on the problem of learning parameters for a known structure using the EM (Expectation Maximization) algorithm [5]; most of our discussion applies equally to other contexts (e.g., [7]). EM is an iterative procedure that searches over the space of parameter vectors for one which is a local maximum of the likelihood function: the probability of the observed data D given Θ. We describe the EM algorithm for the task of learning HMMs; the extension to DBNs is straightforward. The EM algorithm starts with some initial (often random) parameter vector Θ, which specifies current estimates T̃ and Õ of the transition and observation matrices of the process. The EM algorithm computes the expected sufficient statistics (ESS) for D, using T̃ and Õ to compute the expectation.
In the case of HMMs, the ESS are an average, over t, of the joint distributions ψ(t) over the variables at time t−1 and the variables at time t. A new parameter vector Θ' can then be computed from the ESS by a simple maximum-likelihood step. These two steps are iterated until an appropriate stopping condition is met. The ψ(t) for the entire sequence can be computed by a simple forward-backward algorithm. Let r(t) be the response observed at time t, and let O_r(t) be its likelihood vector (O_r(t)(i) ≜ O_{i,r(t)}). The forward messages α(t) are propagated as follows: α(t) ∝ (α(t−1) · T) × O_r(t), where × is the outer product. The backward messages β(t) are propagated as β(t) ∝ (T · (β(t+1) × O_r(t+1))')'. The estimated belief at time t is now simply α(t) × β(t) (suitably renormalized); similarly, the joint belief ψ(t) is proportional to α(t−1) × β(t) × T × O_r(t). This message-passing algorithm has an obvious extension to DBNs. Unfortunately, it is feasible only for very small DBNs. Essentially, the messages passed in this algorithm have an entry for every possible state at time t; in a DBN, the number of states is exponential in the number of state variables, rendering such an explicit representation infeasible in most cases. Furthermore, even highly structured processes do not admit a more compact representation of these messages [8, 2].

3 Belief state approximation

In [2], we described a new approach to approximate inference in dynamic systems, which avoids the problem of explicitly maintaining distributions over large spaces. We maintain our belief state (distribution over the current state) using some computationally tractable representation of a distribution. We propagate the time t approximate belief state through the transition model and condition it on our evidence at time t+1. We then approximate the resulting time t+1 distribution using one that admits a compact representation, allowing the algorithm to continue.
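The forward-backward recursions above can be sketched for a plain HMM as follows. This is a minimal illustration (the handling of the initial prior is our assumption), not the paper's DBN implementation:

```python
import numpy as np

def forward_backward(T, O, obs, prior):
    """Exact HMM forward-backward smoothing.
    T: (S, S) transition matrix, O: (S, R) observation matrix,
    obs: sequence of observed responses r(t), prior: initial state distribution.
    Returns forward messages alpha, backward messages beta, and the
    smoothed per-slice beliefs (alpha x beta, renormalized)."""
    L, S = len(obs), T.shape[0]
    alpha = np.zeros((L, S))
    beta = np.ones((L, S))
    # forward pass: alpha(t) proportional to (alpha(t-1) . T) * O[:, r(t)]
    a = prior * O[:, obs[0]]
    alpha[0] = a / a.sum()
    for t in range(1, L):
        a = (alpha[t - 1] @ T) * O[:, obs[t]]
        alpha[t] = a / a.sum()
    # backward pass: beta(t) proportional to T . (beta(t+1) * O[:, r(t+1)])
    for t in range(L - 2, -1, -1):
        b = T @ (beta[t + 1] * O[:, obs[t + 1]])
        beta[t] = b / b.sum()
    gamma = alpha * beta
    return alpha, beta, gamma / gamma.sum(axis=1, keepdims=True)
```

As the text notes, the messages here have one entry per state; for a DBN with n binary state variables S = 2^n, which is exactly why the factored approximation of Section 3 is needed.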
We also showed that the errors arising from the repeated approximation do not accumulate unboundedly, as the stochasticity of the process attenuates their effect. In particular, for DBNs we considered belief state approximations where certain subsets of less correlated variables are grouped into distinct clusters which are approximated as being independent. In this case, the approximation at each step consists of a simple projection onto the relevant marginals, which are used as a factored representation of the time t+1 approximate belief state. This algorithm can be implemented efficiently using the clique tree algorithm [10]. To compute α(t+1) from α(t), we generate a clique tree over these two time slices of the DBN, ensuring that both the time t and time t+1 clusters appear as a subset of some clique. We then incorporate α(t) into the time t cliques; α(t+1) is obtained by calibrating the tree (doing inference) and reading off the relevant marginals from the tree (α(t+1) is implicitly defined as their product). These results are directly applicable to the learning task, as the belief state is the forward message in the forward-backward algorithm. Thus, we can apply this approach to the forward step, with the guarantee that the approximation will not lead to a big difference in the ESS. However, this technique does not resolve our computational problems, as the backward propagation phase is as expensive as the forward phase. We can apply the same idea to the backward propagation, i.e., we maintain and propagate a compactly represented approximate backward message β̃(t). The implementation of this idea is a simple extension of our algorithm for forward messages: to compute β̃(t) from β̃(t+1), we simply incorporate β̃(t+1) into our clique tree over these two time slices, then read off the relevant marginals for computing β̃(t). However, extending the analysis is not as straightforward.
It is not completely straightforward to apply the techniques of [2] to get relative error bounds for the backward message. Furthermore, even if we have bounds on the relative entropy error of both the forward and backward messages, bounds for the error of the ψ(t) do not follow. The solution turns out to use an alternative notion of distance, which combines additively under Bayesian updating, albeit at the cost of weaker contraction rates.

Definition 1 Let p and p̃ be two positive vectors of the same dimension. Their projective distance is defined as D_Proj[p, p̃] ≜ max_{i,i'} ln[(p_i · p̃_{i'}) / (p_{i'} · p̃_i)].

We note that the projective distance is a (weak) upper bound on the relative entropy. Based on the results of [1], we show that projective distance contracts when messages are propagated through the stochastic transition matrix, in either direction. Of course, the rate of contraction depends on ergodicity properties of the matrix:

Lemma 2 Let k = min_{i,j,i',j' : T_{i,j}·T_{i',j'} ≠ 0} sqrt((T_{i,j'} · T_{i',j}) / (T_{i,j} · T_{i',j'})), and define κ_T ≜ 2k / (1 + k). Then D_Proj[α(t), α̃(t)] ≤ (1 − κ_T) · D_Proj[α(t−1), α̃(t−1)], and D_Proj[β(t), β̃(t)] ≤ (1 − κ_T) · D_Proj[β(t+1), β̃(t+1)].

We can now show that, if our approximations do not introduce too large an error, then the expected sufficient statistics will remain close to their correct value.

Theorem 3 Let S be the ESS computed via exact inference, and let S̃ be its approximation. If the forward (backward) approximation step is guaranteed to introduce at most ε (δ) projective error, then D_Proj[S, S̃] ≤ (ε + δ) / κ_T. Therefore D_KL[S ‖ S̃] ≤ (ε + δ) / κ_T.

Note that even small fluctuations in the sufficient statistics can cause the EM algorithm to reach a different local maximum. Thus, we cannot analytically compare the quality of the resulting algorithms. However, as our experimental results show, there is no divergence between exact EM and approximate EM in practice.
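Definition 1 and the contraction rate of Lemma 2 can be computed directly. This sketch assumes a transition matrix with all entries strictly positive, and the function names are ours:

```python
import numpy as np

def proj_dist(p, q):
    """Projective distance of Definition 1: max over i, i' of
    ln[(p_i * q_i') / (p_i' * q_i)], i.e. the spread of the log-ratios."""
    r = np.log(p) - np.log(q)
    return float(r.max() - r.min())

def contraction_rate(T):
    """kappa_T from Lemma 2, assuming every entry of T is strictly positive."""
    S = T.shape[0]
    k = min(
        np.sqrt(T[i, jp] * T[ip, j] / (T[i, j] * T[ip, jp]))
        for i in range(S) for ip in range(S)
        for j in range(S) for jp in range(S)
    )
    return 2.0 * k / (1.0 + k)
```

Propagating two positive message vectors through T shrinks their projective distance by at least a factor of (1 − κ_T), which is the contraction that keeps the accumulated approximation error bounded.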
We tested our algorithms on the task of learning the parameters for the BAT network shown in Figure 1(a), used for traffic monitoring [6]. The training set was a fixed sequence of 1000 slices, generated from the correct network distribution. Our test metric was the average log-likelihood (per slice) of a fixed test sequence of 50 slices. All experiments were conducted using three different random starting points for the parameters (the same in all the experiments).

Figure 1: (a) The BAT DBN. (b) Structural approximations for batch EM.

We ran EM with different types of structural approximations, and evaluated the quality of the model after each iteration of the algorithm. We used four different structural approximations: (i) exact propagation; (ii) a 5+5 clustering of the ten state variables; (iii) a 3+2+4+1 clustering; (iv) each variable in a separate cluster. The results for one random starting point are shown in Figure 1(b). As we can see, the impact of (even severe) structural approximation on learning accuracy is negligible. In all of the runs, the approximate algorithm tracked the exact one very closely, and the largest difference in the peak log-likelihood was at most 0.04. This phenomenon is rather remarkable, especially in view of the substantial savings caused by the approximations: on a Sun Ultra II, the computational cost of learning was 138 min/iteration in the exact case, vs. 6 min/iteration for the 5+5 clustering, and less than 5 min/iteration for the other two.

4 Online learning

Our analysis also gives us the tools to address another important problem with learning dynamic models: the need to reason about the entire temporal sequence at once.
One consequence of our contraction result is that the effect of approximations done far away in the sequence decays exponentially with the time difference. In particular, the effect of an approximation which ignores observations that are far in the future is also limited. Therefore, if we do inference for a time slice based on a small window of observations into the future, the result should still be fairly accurate. More precisely, assume that we are at time t and are considering a window of size w. We can view the uniform message as a very bad approximation to β(t+w). But as we propagate this approximate backward message from t+w to t, the error will decay exponentially with w. Based on these insights, we experimented with various online algorithms that use a small window approximation. Our online algorithms are based on the approach of [11], in which ESS are updated with an exponential decay every few data cases; the parameters are then updated correspondingly. The main problem with frequent parameter updates in the online setting is that they require a recomputation of the messages computed using the old parameters. For long sequences, the computational cost of such a scheme would be prohibitive. In our algorithms, we simply leave the forward messages unchanged, under the assumption that the most recent time slices used parameters that are very close to the new ones. Our contraction result tells us that the use of old parameters far back in the sequence has a negligible effect on the message. We tried several schemes for the update of the backward messages. In the dynamic-1000 approach, we use a backward message computed over 1000 slices, with the closer messages recomputed very frequently as the parameters are changed, based on cached messages that used older parameters. The 8
closest messages are updated every parameter update, the next 16 every other update, etc. This approach is the closest realistic alternative to a full update of backward messages. In the static-1000 approach, we use a very long window (1000 slices), but do not recompute messages; when the window ends, we use the current parameters to compute the messages for the entire next window. In the static-4 approach, we do the same, but use a very short window of 4 slices. Finally, in the static-0 approach, there is no lookahead at all; only the past and present evidence is used to compute the joint beliefs. The latter case is often used (e.g., in the context of Kalman filters [9]) for online learning of the process parameters. To minimize the computational burden, all tests were conducted using the 5+5 structural approximation. The running times for the various algorithms are: 0.4 sec/slice for batch EM; 1.4 for dynamic-1000; 0.5 for static-1000 and for static-4; and 0.3 for static-0. We evaluated these temporal approximations both in an online and in a batch setting. In the batch experiments, we used the same 1000-step sequence used above. The results are shown in Figure 2(a).

Figure 2: Temporal approximations for (a) batch setting; (b) online setting.

We see that the dynamic-1000 algorithm reaches the same quality model as standard batch EM, but converges sooner. As in [11], the difference is due to the frequent update of the sufficient statistics based on more accurate parameters. More interestingly, we see that the static-4 algorithm, which uses a lookahead of only 4, also reaches the same accuracy. Thus, our approximation (ignoring evidence far in the future) is a good one, even for a very weak notion of "far".
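The exponentially decayed ESS update borrowed from [11] can be sketched as follows. The decay constant, function name, and interface are illustrative assumptions, not the paper's exact schedule:

```python
import numpy as np

def online_em_step(stats, psi_hat, m_step, decay=0.99):
    """One online EM update: fold the joint-belief estimate psi_hat from a
    short-window forward-backward pass into exponentially decayed running ESS,
    then re-estimate the parameters immediately (as in Neal & Hinton's
    incremental EM).
    stats: running expected sufficient statistics (array),
    psi_hat: new joint-belief estimate for the current slice(s),
    m_step: callable mapping ESS to new parameters."""
    stats = decay * stats + (1.0 - decay) * psi_hat
    return stats, m_step(stats)
```

With a static-w scheme, psi_hat would come from a forward-backward pass whose backward message is initialized to uniform only w slices ahead; the contraction result is what justifies that truncation.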
By contrast, we see that the quality reached by the static-0 approach is significantly lower: the sufficient statistics used by the EM algorithm in this case are consistently worse, as they ignore all future evidence. Thus, in this network, a window of size 4 is as good as full forward-backward, whereas one of size 0 is clearly worse. Our online learning experiments, shown in Figure 2(b), used a single long sequence of 40,000 slices. Again, we see that the static-4 approach is almost indistinguishable in terms of accuracy from the dynamic-1000 approach, and that both converge more rapidly than the static-1000 algorithm. Thus, frequent updates over short windows are better than infrequent updates over longer ones. Finally, we see again that the static-0 algorithm converges to a hypothesis of much lower quality. Thus, even a very short window allows rapid convergence to the "best possible" answer, but a window of size 0 does not.

5 Conclusion and extensions

In this paper, we suggested the use of simple structural approximations in the inference algorithm used in an E-step. Our results suggest that even severe structural approximations have almost negligible effects on the accuracy of learning. The advantages of approximate inference in the learning setting are even more pronounced than in the inference task [2], as the small errors caused by approximation are negligible compared to the larger ones induced by the learning process. Our techniques provide a new and simple approach for learning structured models of complex dynamic systems, with the resulting advantages of generalization and the ability to incorporate prior knowledge. We also presented a new algorithm for the online learning task, showing that we can learn high-quality models using a very small time window of future observations. The work most comparable to ours is the variational approach to approximate inference applied to learning factorial HMMs [8].
While we have not done a direct empirical comparison, it seems likely that the variational approach would work better for densely connected models, whereas our approach would dominate for structured models such as the one in our experiments. Indeed, for this model, our algorithms track exact EM so closely that any significant improvement in accuracy is unlikely. Our algorithm is also simpler and easier to implement. Most importantly, it is applicable to the task of online learning. The most obvious extension to our results is an integration of our ideas with structure learning algorithms for DBNs [7]. We believe that the resulting algorithm will be able to learn structured models for real-life complex systems.

Acknowledgements. We thank Tim Huang for providing us with the BAT network, and Nir Friedman and Leonid Gurvits for useful discussions. This research was supported by ARO under the MURI program "Integrated Approach to Intelligent Systems", and by DARPA contract DACA 76-93-C-0025 under subcontract to IET, Inc.

References

[1] M. Artzrouni and X. Li. A note on the coefficient of ergodicity of a column-allowable nonnegative matrix. Linear Algebra and its Applications, 214:93-101, 1995.
[2] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. UAI, pages 33-42, 1998.
[3] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 1991.
[4] T. Dean and K. Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5(3), 1989.
[5] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum-likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B39:1-38, 1977.
[6] J. Forbes, T. Huang, K. Kanazawa, and S.J. Russell. The BATmobile: Towards a Bayesian automated taxi. In Proc. IJCAI, 1995.
[7] N. Friedman, K. Murphy, and S.J. Russell. Learning the structure of dynamic probabilistic networks. In Proc. UAI, pages 139-147, 1998.
[8] Z. Ghahramani and M.I. Jordan. Factorial hidden Markov models.
In NIPS 8, 1996.
[9] R.E. Kalman. A new approach to linear filtering and prediction problems. J. of Basic Engineering, 82:34-45, 1960.
[10] S.L. Lauritzen and D.J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. J. Roy. Stat. Soc., B 50, 1988.
[11] R.M. Neal and G.E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M.I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.
[12] L. Rabiner and B. Juang. An introduction to hidden Markov models. IEEE Acoustics, Speech & Signal Processing, 1986.
[13] G. Zweig and S.J. Russell. Speech recognition with dynamic Bayesian networks. In Proc. AAAI, pages 173-180, 1998.
|
1998
|
83
|
1,585
|
Independent Component Analysis of Intracellular Calcium Spike Data

Klaus Prank, Julia Börger, Alexander von zur Mühlen, Georg Brabant, Christof Schöfl
Department of Clinical Endocrinology, Medical School Hannover, D-30625 Hannover, Germany

Abstract

Calcium (Ca2+) is an ubiquitous intracellular messenger which regulates cellular processes, such as secretion, contraction, and cell proliferation. A number of different cell types respond to hormonal stimuli with periodic oscillations of the intracellular free calcium concentration ([Ca2+]i). These Ca2+ signals are often organized in complex temporal and spatial patterns even under conditions of sustained stimulation. Here we study the spatio-temporal aspects of intracellular calcium ([Ca2+]i) oscillations in clonal β-cells (hamster insulin secreting cells, HIT) under pharmacological stimulation (Schöfl et al., 1996). We use a novel fast fixed-point algorithm (Hyvärinen and Oja, 1997) for Independent Component Analysis (ICA) for blind source separation of the spatio-temporal dynamics of [Ca2+]i in a HIT cell. Using this approach we find two significant independent components out of five differently mixed input signals: one [Ca2+]i signal with a mean oscillatory period of 68 s, and a high-frequency signal with a broadband power spectrum with considerable spectral density. This result is in good agreement with a study on high-frequency [Ca2+]i oscillations (Paluš et al., 1998). Further theoretical and experimental studies have to be performed to resolve the question of the functional impact on intracellular signaling of these independent [Ca2+]i signals.

932 K. Prank et al.

1 INTRODUCTION

Independent component analysis (ICA) (Comon, 1994; Jutten and Herault, 1991) has recently received much attention as a signal processing method which has been successfully applied to blind source separation and feature extraction. The goal of ICA is to find independent sources in an unknown linear mixture of measured sensory data.
This goal is obtained by reducing 2nd-order and higher-order statistical dependencies to make the signals as independent as possible. Mainly three different approaches for ICA exist. The first approach is based on batch computations minimizing or maximizing some relevant criterion functions (Cardoso, 1992; Comon, 1994). The second category contains adaptive algorithms often based on stochastic gradient methods, which may have implementations in neural networks (Amari et al., 1996; Bell and Sejnowski, 1995; Delfosse and Loubaton, 1995; Hyvärinen and Oja, 1996; Jutten and Herault, 1991; Moreau and Macchi, 1993; Oja and Karhunen, 1995). The third class of algorithms is based on a fixed-point iteration scheme for finding the local extrema of the kurtosis of a linear combination of the observed variables, which is equivalent to estimating the non-Gaussian independent components (Hyvärinen and Oja, 1997). Here we use the fast fixed-point algorithm for independent component analysis proposed by Hyvärinen and Oja (1997) to analyze the spatio-temporal dynamics of intracellular free calcium ([Ca2+]i) in a hamster insulin secreting cell (HIT). Oscillations of [Ca2+]i have been reported in a number of electrically excitable and non-excitable cells, and the hypothesis of frequency coding was proposed a decade ago (Berridge and Galione, 1988). Recent experimental results clearly demonstrate that [Ca2+]i oscillations and their frequency can be specific for gene activation concerning the efficiency as well as the selectivity (Dolmetsch et al., 1998). Cells are highly compartmentalized structures which cannot be regarded as homogeneous entities. Thus, [Ca2+]i oscillations do not occur uniformly throughout the cell but are initiated at specific sites which are distributed in a functional and nonuniform manner. These [Ca2+]i oscillations spread across individual cells in the form of Ca2+ waves. 
[Ca2+]i gradients within cells have been proposed to initiate cell migration, exocytosis, lymphocyte killer cell activity, acid secretion, transcellular ion transport, neurotransmitter release, gap junction regulation, and numerous other functions (Tsien and Tsien, 1990). Due to this fact it is of major importance to study the spatio-temporal aspects of [Ca2+]i signaling in small subcompartments using calcium-specific fluorescent reporter dyes and digital videomicroscopy rather than studying the cell as a uniform entity. The aim of this study was to define the independent components of the spatio-temporal [Ca2+]i signal. 2 METHODS 2.1 FAST FIXED-POINT ALGORITHM USING KURTOSIS FOR INDEPENDENT COMPONENT ANALYSIS In Independent Component Analysis (ICA) the original independent sources are unknown. In this study we have recorded the [Ca2+]i signal in single HIT-cells under pharmacological stimulation at different subcellular regions (m = 5) in parallel. The [Ca2+]i signals (mixtures of sources) are denoted as x1, x2, ..., xm. Each xi is expressed as the weighted sum of n unknown statistically independent components (ICs), denoted as s1, s2, ..., sn. The components are assumed to be mutually statistically independent and zero-mean. The measured signals xi as well as the independent component variables can be arranged into vectors x = (x1, x2, ..., xm) and s = (s1, s2, ..., sn) respectively. The linear relationship is given by: x = As (1) Here A is a constant mixing matrix whose elements aij are the unknown coefficients of the mixtures. The basic problem of ICA is to estimate both the mixing matrix A and the realizations of the si using only observations of the mixtures xj. In order to perform ICA, it is necessary to have at least as many mixtures as there are independent sources (m ≥ n). 
The assumption of zero mean of the ICs is no restriction, as this can always be accomplished by subtracting the mean from the random vector x. The ICs and the columns of A can only be estimated up to a multiplicative constant, because any constant multiplying an IC in eq. 1 could be cancelled by dividing the corresponding column of the mixing matrix A by the same constant. For mathematical convenience, the ICs are defined to have unit variance, making the (non-Gaussian) ICs unique up to their signs (Comon, 1994). Here we use a novel fixed-point algorithm for ICA estimation which is based on 'contrast' functions whose extrema are closely connected to the estimation of ICs (Hyvärinen and Oja, 1997). This method, denoted as the fast fixed-point algorithm, has a number of desirable properties. First, it is easy to use, since there are no user-defined parameters. Furthermore, the convergence is fast, conventionally in less than 15 steps, and for an appropriate contrast function the fixed-point algorithm is much more robust against outliers than most ICA algorithms. Most solutions to the ICA problem use the fourth-order cumulant or kurtosis of the signals, defined for a zero-mean random variable x as: kurt(x) = E{x^4} - 3(E{x^2})^2 (2) where E{·} denotes the mathematical expectation. The kurtosis is negative for source signals whose amplitude has a sub-Gaussian probability density (distribution flatter than Gaussian), positive for super-Gaussian densities (sharper than Gaussian), and zero for Gaussian densities. Kurtosis is a contrast function for ICA in the following sense. Consider a linear combination of the measured mixtures x, say w^T x, where the vector w is constrained so that E{(w^T x)^2} = 1. When w^T x = ±si for some i, i.e. when the linear combination equals, up to the sign, one of the ICs, the kurtosis of w^T x is locally minimized or maximized. 
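The behavior of eq. (2) for the three density classes is easy to check numerically. A minimal sketch (not from the paper; it uses sample moments in place of the true expectations, assuming numpy):

```python
import numpy as np

def kurt(x):
    # Sample version of eq. (2): kurt(x) = E{x^4} - 3 (E{x^2})^2
    # for a zero-mean random variable x.
    x = x - x.mean()
    return np.mean(x ** 4) - 3.0 * np.mean(x ** 2) ** 2

rng = np.random.default_rng(0)
print(kurt(rng.normal(size=200_000)))     # Gaussian: close to zero
print(kurt(rng.uniform(-1, 1, 200_000)))  # sub-Gaussian (flat): negative
print(kurt(rng.laplace(size=200_000)))    # super-Gaussian (peaked): positive
```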
This property is widely used in ICA algorithms and forms the basis of the fixed-point algorithm used in this study, which finds the relevant extrema of kurtosis also for non-whitened data. Based on this fact, Hyvärinen and Oja (1997) introduced a very simple and highly efficient fixed-point algorithm for computing ICA, calculated over sphered zero-mean vectors v, that is able to find the rows of the separation matrix (denoted as w) and so identify one independent source at a time. The algorithm, which computes a gradient descent over the kurtosis, is defined as follows: 1. Take a random initial vector w0 of unit norm. Let l = 1. 2. Let w_l = E{v (w_{l-1}^T v)^3} - 3 w_{l-1}. The expectation can be estimated using a large sample of v_k vectors. 3. Divide w_l by its norm (e.g. the Euclidean norm ||w|| = sqrt(Σ_i w_i^2)). 4. If |w_l^T w_{l-1}| is not close enough to 1, let l = l + 1 and go back to step 2. Otherwise, output the vector w_l. To calculate more than one solution, the algorithm may be run as many times as required. It is, nevertheless, necessary to remove the information contained in the solutions already found, to estimate each time a different independent component. This can be achieved, after the fourth step of the algorithm, by simply subtracting the estimated solution ŝ = w^T v from the unsphered data x. In the first step of the analysis we determined the eigenvalues of the covariance matrix of the measured [Ca2+]i signals to reduce the dimensionality of the system. Then the fast fixed-point algorithm was run using the experimental [Ca2+]i data to determine the ICs. The resulting ICs were analyzed with respect to their frequency content by computing the Fourier power spectrum. 
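The four steps above, with a deflation pass, can be sketched as follows. This is not the authors' code: the data matrix V is assumed already sphered (zero-mean, identity covariance), and for brevity the sketch deflates by orthogonalizing each new w against the previously found ones rather than by subtracting the estimated source from the unsphered data as the paper describes.

```python
import numpy as np

def fixed_point_ica(V, n_components, tol=1e-6, max_iter=200, seed=0):
    """Kurtosis-based fixed-point ICA on sphered data V (samples x dims)."""
    rng = np.random.default_rng(seed)
    W = []
    for _ in range(n_components):
        w = rng.normal(size=V.shape[1])      # step 1: random unit-norm start
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            y = V @ w
            # step 2: w_l = E{v (w_{l-1}^T v)^3} - 3 w_{l-1}
            w_new = (V * (y ** 3)[:, None]).mean(axis=0) - 3.0 * w
            for u in W:                      # deflation (Gram-Schmidt variant)
                w_new -= (w_new @ u) * u
            w_new /= np.linalg.norm(w_new)   # step 3: renormalize
            done = abs(w_new @ w) > 1.0 - tol  # step 4: convergence test
            w = w_new
            if done:
                break
        W.append(w)
    return np.array(W)  # rows of the estimated separation matrix
```

Each returned row w then gives one estimated IC as w^T v, up to sign and ordering.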
2.2 MEASUREMENT OF INTRACELLULAR CALCIUM IN HIT-CELLS To measure [Ca2+]i, HIT (hamster insulin secreting tumor) cells were loaded with the fluorescent indicator Fura-2/AM and Fura-2 fluorescence was recorded at five different subcellular regions in parallel using a dual excitation spectrofluorometer videoimaging system. The emission wavelength was 510 nm and the excitation wavelengths were 340 nm and 380 nm respectively. The ratio between the excitation wavelengths (F340nm/F380nm), which correlates with [Ca2+]i, was sampled at a rate of 1 Hz over 360 s. [Ca2+]i spikes in this cell were induced by the administration of 1 nM arginine vasopressin (AVP). 3 RESULTS From the five experimental [Ca2+]i signals (Fig. 1) we determined two significant eigenvalues of the covariance matrix. The fixed-point algorithm converged in less than 15 steps and yielded two different ICs, one slowly oscillating component with a mean period of 68 s and one component with fast irregular oscillations with a flat broadband power spectrum (Fig. 2). The spectral density of the second component was considerably larger than that of the high-frequency content of the first, slowly oscillating component. 4 CONCLUSIONS Changes in [Ca2+]i associated with Ca2+ oscillations generally do not occur uniformly throughout the cell but are initiated at specific sites and are able to spread across individual cells in the form of intracellular Ca2+ waves. Furthermore, Ca2+ signaling is not limited to single cells but occurs between adjacent cells in the form of intercellular Ca2+ waves. The reasons for these spatio-temporal patterns of [Ca2+]i are not yet fully understood. It has been suggested that information is encoded in the frequency, rather than the amplitude, of Ca2+ oscillations, which has the advantage of avoiding prolonged exposures to high [Ca2+]i. 
Figure 1: Experimental time series of [Ca2+]i in a β-cell (insulin secreting cell from a hamster, HIT-cell) determined in five subcellular regions. The data are given as the ratio between both excitation wavelengths of 340 nm and 380 nm respectively, which corresponds to [Ca2+]i; [Ca2+]i can be calculated from this ratio. The plotted time series are whitened. Another advantage of frequency-modulated signaling is its high signal-to-noise ratio. In the spatial domain, the spreading of a Ca2+ oscillation as a Ca2+ wave provides a mechanism by which the regulatory signal can be distributed throughout the cell. The extension of Ca2+ waves to adjacent cells by intercellular communication provides one mechanism by which multicellular systems can effect coordinated and cooperative cell responses to localized stimuli. In this study we demonstrated that the [Ca2+]i signal in clonal β-cells (HIT cells) is composed of two independent components, using spatio-temporal [Ca2+]i data for analysis. One component can be described as large-amplitude slow-frequency oscillations, whereas the other one is a high-frequency component which exhibits a broadband power spectrum. These results are in good agreement with a previous study where only the temporal dynamics of [Ca2+]i in HIT cells was studied. Using coarse-grained entropy rates computed from information-theoretic functionals we could demonstrate in that study that a fast oscillatory component of the [Ca2+]i signal can be modulated pharmacologically, suggesting deterministic structure in the temporal dynamics (Paluš et al., 1998). 
Since Ca2+ is central to the stimulation of insulin secretion from pancreatic β-cells, future experimental and theoretical studies should evaluate the impact of the different oscillatory components of [Ca2+]i on the secretory process as well as gene transcription. One possibility to resolve that question is to use a recently proposed mathematical model which allows for the on-line decoding of [Ca2+]i into the cellular response, represented by the activation (phosphorylation) of target proteins (Prank et al., 1998). Figure 2: Results from the independent component analysis by the fast fixed-point algorithm. Two independent components of [Ca2+]i were found. A: slowly oscillating [Ca2+]i signal; B: fast oscillating [Ca2+]i signal. Fourier power spectra of the independent components. C: the major [Ca2+]i oscillatory period is 68 s; D: flat broadband power spectrum. Very recent experimental data clearly demonstrate that specificity is encoded in the frequency of [Ca2+]i oscillations. Rapid oscillations of [Ca2+]i are able to stimulate a set of transcription factors in T-lymphocytes whereas slow oscillations activate only one transcription factor (Dolmetsch et al., 1998). Frequency-dependent gene expression is likely to be a widespread phenomenon, and oscillations of [Ca2+]i can occur with periods of seconds to hours. The technique of independent component analysis should be able to extract the spatio-temporal features of the [Ca2+]i signal in a variety of cells and should help to understand the differential regulation of [Ca2+]i-dependent intracellular processes such as gene transcription or secretion. Acknowledgements This study was supported by Deutsche Forschungsgemeinschaft under grants Scho 466/1-3 and Br 915/4-4. 
References Amari, S., Cichocki, A. & Yang, H. (1996) A new learning algorithm for blind source separation. In Touretzky, D. S., Mozer, M. C. & Hasselmo, M. E. (eds.), Advances in Neural Information Processing Systems 8, pp. 757-763. Cambridge, MA: MIT Press. Bell, A. & Sejnowski, T. (1995) An information-maximization approach to blind separation and blind deconvolution. Neural Computation 7:1129-1159. Berridge, M. & Galione, A. (1988) Cytosolic calcium oscillators. FASEB J. 2:3074-3082. Cardoso, J. F. (1992) Iterative techniques for blind source separation using only fourth-order cumulants. In Proc. EUSIPCO, pp. 739-742, Brussels. Comon, P. (1994) Independent component analysis - a new concept? Signal Processing 36:287-314. Delfosse, N. & Loubaton, P. (1995) Adaptive blind separation of independent sources: a deflation approach. Signal Processing 45:59-83. Dolmetsch, R. E., Xu, K. & Lewis, R. S. (1998) Calcium oscillations increase the efficiency and specificity of gene expression. Nature 392:933-936. Hyvärinen, A. & Oja, E. (1996) A neuron that learns to separate one independent component from linear mixtures. In Proc. IEEE Int. Conf. on Neural Networks, pp. 62-67, Washington, D.C. Hyvärinen, A. & Oja, E. (1997) A fast fixed-point algorithm for independent component analysis. Neural Computation 9:1483-1492. Jutten, C. & Herault, J. (1991) Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing 24:1-10. Moreau, E. & Macchi, O. (1993) New self-adaptive algorithms for source separation based on contrast functions. In Proc. IEEE Signal Processing Workshop on Higher Order Statistics, pp. 215-219, Lake Tahoe, USA. Oja, E. & Karhunen, J. (1995) Signal separation by nonlinear Hebbian learning. In Palaniswami, M., Attikiouzel, Y., Marks, R., Fogel, D. & Fukuda, T. (eds.) Computational Intelligence - a Dynamic System Perspective, pp. 83-97. 
IEEE Press, New York. Paluš, M., Schöfl, C., von zur Mühlen, A., Brabant, G. & Prank, K. (1998) Coarse-grained entropy rates quantify fast Ca2+ dynamics modulated by pharmacological stimulation. Pacific Symposium on Biocomputing 1998:645-656. Prank, K., Laer, L., Wagner, M., von zur Mühlen, A., Brabant, G. & Schöfl, C. (1998) Decoding of intracellular calcium spike trains. Europhys. Lett. 42:143-147. Schöfl, C., Rössig, L., Leitolf, H., Mader, T., von zur Mühlen, A. & Brabant, G. (1996) Generation of repetitive Ca2+ transients by bombesin requires intracellular release and influx of Ca2+ through voltage-dependent and voltage-independent channels in single HIT cells. Cell Calcium 19(6):485-493. Tsien, R. W. & Tsien, R. Y. (1990) Calcium channels, stores, and oscillations. Annu. Rev. Cell Biol. 6:715-760.
|
1998
|
84
|
1,586
|
A Model for Associative Multiplication G. Bjorn Christianson* Department of Psychology McMaster University Hamilton, Ont. L8S 4K1 bjorn@caltech.edu Suzanna Becker Department of Psychology McMaster University Hamilton, Ont. L8S 4K1 becker@mcmaster.ca Abstract Despite the fact that mental arithmetic is based on only a few hundred basic facts and some simple algorithms, humans have a difficult time mastering the subject, and even experienced individuals make mistakes. Associative multiplication, the process of doing multiplication by memory without the use of rules or algorithms, is especially problematic. Humans exhibit certain characteristic phenomena in performing associative multiplications, both in the type of error and in the error frequency. We propose a model for the process of associative multiplication, and compare its performance in both these phenomena with data from normal humans and from the model proposed by Anderson et al (1994). 1 INTRODUCTION Associative multiplication is defined as multiplication done without recourse to computational algorithms, and as such is mainly concerned with recalling the basic times table. Learning up to the ten times table requires learning at most 121 facts; in fact, if we assume that normal humans use only four simple rules, the number of facts to be learned reduces to 39. In theory, associative multiplication is therefore a simple problem. In reality, school children find it difficult to learn, and even trained adults have a relatively high rate of error, especially in comparison to performance on associative addition, which is superficially a similar problem. There has been surprisingly little work done on the methods by which humans perform basic multiplication problems; an excellent review of the current literature is provided by McCloskey et al (1991). If a model is to be considered plausible, it must have error characteristics similar to those of humans at the same task. (* Author to whom correspondence should be addressed. Current address: Computation and Neural Systems, California Institute of Technology 139-74, Pasadena, CA 91125.) In arithmetic, this entails accounting for, at a minimum, two phenomena. The first is the problem size effect, as noted in various studies (e.g. Stazyk et al, 1982), where response times and error rates increase for problems with larger operands. Secondly, humans have a characteristic distribution in the types of errors made. Specifically, errors can be classified as one of the following five types, as suggested by Campbell and Graham (1985), Siegler (1988), McCloskey et al (1991), and Girelli et al (1996): operand, where the given answer is correct with one of the operands replaced (e.g. 4 x 7 = 21; this category accounts for 66.4% of all errors made by normal adults); close-miss, where the result is within ten percent of the correct response (4 x 7 = 29; 20.0%); table, where the result is correct for a problem with both operands replaced (4 x 7 = 25; 3.9%); non-table, where the result is not on the times table (4 x 7 = 17; 6.7%); or operation, where the answer would have been correct for a different arithmetic operation, such as addition (4 x 7 = 11; 3.0%).1 It is reasonable to assume that humans use at least two distinct representations when dealing with numbers. The work by Mandler and Shebo (1982) on modeling the performance of various species (including humans, monkeys, and pigeons) on numerosity judgment tasks suggests that in such cases a coarse coding is used. On the other hand, humans are capable of dealing with numbers as abstract symbolic concepts, suggesting the use of a precise localist coding. Previous work has either used only one of these coding ideas (for example, Sokol et al, 1991) or a single representation which combined aspects of both (Anderson et al, 1994). Warrington (1982) documented DRC, a patient who suffered dyscalculia following a stroke. 
DRC retained normal intelligence and a grasp of numerical and arithmetic concepts. When presented with an arithmetic problem, DRC was capable of rapidly providing an approximate answer. However, when pressed for a precise answer, he was incapable of doing so without resorting to an explicit computational algorithm such as counting. One possible interpretation of this case study is that DRC retained the ability to work with numbers in a magnitude-related fashion, but had lost the ability to treat numbers as symbolic concepts. This suggests the hypothesis that humans may use two separate, concurrent representations for numbers: both a coarse coding and a more symbolic, precise coding in the course of doing associative arithmetic in general, and multiplication in particular, and switch between the codings at various points in the process. This hypothesis will form the basis of our modeling work. To guide the placement of these transitions between representations, we assume the further constraint that the coarse coding is the preferred coding (as it is conserved across a wide variety of species) and will tend to be expressed before the precise coding. Figure 1: The coarse coding for digits. Numbers along the left are the digit; numbers along the bottom are position numbers. Blank regions in the grid represent zero activity. (1 Data taken from Girelli et al (1996).) 2 METHODOLOGY Following the work of Mandler and Shebo (1982), our coarse coding consists of a 54-dimensional vector, with a sliding "bump" of ones corresponding to the magnitude of the digit represented. 
The size of the bump decreases and the degree of overlap increases as the magnitude of the digit increases (Figure 1). Noise in this representation is simulated by the probability that a given bit will be in the wrong state. The precise representation, intended for symbolic manipulation of numbers, consists of a 10-dimensional vector with the value of the coded digit given by the dimension of greatest activity. Both of these representations are digit-based: each vector codes only for a number between 0 and 9, with concatenations of vectors used for numbers greater than 9. Figure 2: Schematic of the network architecture. (A) The coarse coding. (B) The winner-take-all network. (C) The precise coding. (D) The feed-forward look-up table. See text for details. The model is trained in three distinct phases. A simple one-layer perceptron trained by a winner-take-all competitive learning algorithm is used to map the input operands from the original coarse coding into the precise representation. The network was trained for 10 epochs, each with a different set of 5 samples of noisy coarse-coded digits. At the end of training, the winner-take-all network performed at near-perfect levels. The translated operands are then presented to a two-layer feed-forward network with a logistic activation function trained by backpropagation. The number of hidden units was equal to the number of problems in the training set (in this case, 32) to force look-up table behaviour. The look-up table was trained independently for varying numbers of iterations, using a learning rate constant of 0.01. The output of the look-up table is coarse coded as in Figure 1. In the final phase, the table output is translated by the winner-take-all network to provide the final answer in the precise coding. A schematic of the network architecture is given in Figure 2. 
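For illustration, the sliding-bump representation can be sketched as follows. The exact bump widths and positions in Figure 1 are not given numerically in the text, so the width/start formulas below are assumptions, as is the helper name; only the general scheme (a contiguous run of ones that slides and shrinks with the digit's magnitude, plus random bit flips for noise) follows the paper.

```python
import numpy as np

def coarse_code(digit, dim=54, noise=0.0, rng=None):
    """Coarse-code a digit 0-9 as a sliding 'bump' of ones in a dim-vector.

    Assumed layout (illustrative only): the bump narrows and slides
    right as the digit grows; each bit is flipped with probability
    `noise`, modeling a bit being in the wrong state.
    """
    width = 14 - digit          # assumption: larger digit -> narrower bump
    start = 4 * digit           # assumption: bump slides right with magnitude
    v = np.zeros(dim)
    v[start:start + width] = 1.0
    if noise > 0.0 and rng is not None:
        flips = rng.random(dim) < noise
        v = np.abs(v - flips)   # flip the selected bits
    return v
```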
The operand vectors used for training of both networks had a noise parameter of 5%, while the vectors used in the analysis had 7.5% noise. Both the training and the testing problem set consisted of ten copies of each of the problems listed in Table 2, which are the problems used in Anderson et al (1994). Simulations were done in MATLAB v5.1 (Mathworks, Inc., 24 Prime Park Way, Natick MA, 01760-1500). 3 RESULTS Figure 3: Error distributions for human data (Girelli et al 1996), the model of Anderson et al (1994), and our model trained for 200, 400, and 600 iterations, by error category (operand, close-miss, table, non-table, operation). Once a model has been trained, its errors on the training data can be categorized according to the error types listed in the Introduction section; a summary of the performance of our model is presented in Table 1. For comparison, we plot data generated by our model, the model of Anderson et al (1994), and human data from Girelli et al (1996) in Figure 3. In no case did the model generate an operation error. This is to be expected: as the model was only trained on multiplication, it should permit no way in which to make an operation error, other than by coincidence. A full set of results obtained from the model with 400 training iterations is presented in Table 2.2

Table 1: Error rates generated by our model. A column for operation errors is not included, as in no instance did our model generate an operation error.

Iterations | Errors in 320 trials | Operand (%) | Close-miss (%) | Table (%) | Non-table (%)
200 | 114 | 61.4 | 21.0 | 8.8 | 8.8
400 | 85 | 65.9 | 20.0 | 7.1 | 7.1
600 | 65 | 63.7 | 16.9 | 9.2 | 10.8

(2 As in Anderson et al (1994), we have set 8 x 9 = 67 deliberately so that it is not the only problem with an answer greater than 70.)
Table 2: Results from ten trials run with the model after 400 training iterations. Errors are marked in boldface.

Problem | Trials 1-10
2 x 2 | 4 4 4 4 4 4 4 4 4 4
2 x 4 | 8 8 8 8 8 8 8 8 8 8
2 x 5 | 10 10 10 10 10 10 10 10 10 10
3 x 7 | 21 21 21 21 21 21 21 21 21 21
3 x 8 | 24 24 24 64 24 24 21 24 24 21
3 x 9 | 27 27 27 27 27 27 21 27 27 27
4 x 2 | 8 8 8 8 8 8 8 10 8 8
4 x 5 | 20 20 20 20 30 20 20 20 20 20
4 x 6 | 24 24 24 20 20 24 24 20 24 35
4 x 8 | 32 32 32 32 22 32 32 32 32 32
4 x 9 | 36 36 36 36 21 36 36 30 36 36
5 x 2 | 10 10 30 10 10 10 10 10 10 10
5 x 7 | 30 42 30 35 35 35 30 30 35 35
5 x 8 | 30 30 30 35 30 34 30 30 40 34
6 x 3 | 24 18 18 24 28 12 18 18 24 24
6 x 4 | 24 24 24 18 24 24 24 24 18 18
6 x 5 | 30 30 30 30 30 30 30 30 30 30
6 x 6 | 36 42 36 36 36 36 36 36 36 36
6 x 7 | 42 32 49 42 42 42 42 42 42 42
6 x 8 | 64 49 42 49 44 44 64 48 40 44
7 x 3 | 24 21 21 21 21 21 21 21 21 24
7 x 4 | 22 28 28 28 28 28 28 28 28 32
7 x 5 | 35 35 35 35 35 30 35 35 35 35
7 x 6 | 42 42 42 42 42 42 42 42 49 42
7 x 7 | 29 49 49 49 49 52 49 42 49 42
7 x 8 | 64 64 56 64 56 64 56 56 64 56
8 x 3 | 24 24 21 24 34 24 24 24 24 24
8 x 4 | 32 32 32 32 32 32 64 32 32 32
8 x 6 | 44 49 49 44 44 46 42 49 44 56
8 x 7 | 56 52 56 49 62 46 64 64 49 56
8 x 8 | 64 64 64 64 54 64 64 64 64 64
8 x 9 | 67 67 67 67 67 67 67 67 67 67

The convention in the current arithmetic literature is to test for the existence of a problem-size effect by fitting a line to the errors made versus the sum of operands in the problem. A positive slope for such a fit demonstrates the existence of a problem-size effect. The results of this analysis are shown in Figure 4. The model had a problem-size effect in all instances. Note that no claims are made of the appropriateness of a linear model for the given data, nor should any conclusions be drawn from the specific parameters of the fit, especially given the sparsity of the data. The sole point of this analysis is to highlight a generally increasing trend. 
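That fitting procedure is ordinary least squares on (sum of operands, error count) pairs. A sketch with made-up illustrative numbers (the paper does not tabulate per-problem error counts in this form); a positive slope is what signals a problem-size effect:

```python
import numpy as np

# Hypothetical (sum-of-operands, error-count) pairs, for illustration only.
operand_sums = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 17.0])
error_counts = np.array([0.0, 2.0, 5.0, 9.0, 20.0, 35.0, 55.0, 67.0])

# Degree-1 least-squares fit: errors ~ slope * sum + intercept
slope, intercept = np.polyfit(operand_sums, error_counts, 1)
print(f"errors ~ {slope:.1f} * sum {intercept:+.1f}")
```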
4 DISCUSSION As noted in the Results section above, our model demonstrates the problem-size effect in the number of errors made (see Figure 4), though the chosen architecture does not permit a response time effect. The presence of this effect is hardly surprising, as all models which use a representation similar to our coarse coding (Mandler & Shebo, 1982; Anderson et al, 1994) display a problem-size effect. Figure 4: Demonstration of the problem size effect: errors plotted against the sum of operands, with best-fit line y = 3.6x - 13. The data plotted here is for the model trained for 400 iterations, as it proved the best fit to the distribution of errors in humans (Figure 3); a similar analysis gives a best-fit slope of 1.9 for 200 training iterations and 1.1 for 600 training iterations. It has been suggested by a few researchers (e.g. Campbell & Graham, 1985) that the problem-size effect is simply a frequency effect, as humans encounter problems involving smaller operands more often in real life. While there is some evidence to the contrary (Hamman and Ashcraft, 1986), it remains a possibility. It is immediately apparent from Figure 3 that our model has much the same distribution of errors as seen in normal humans, and is superior to the model of Anderson et al (1994) in this regard. That model, implemented as an auto-associative network using a Brain State in a Box (BSB) architecture (Anderson et al, 1994; Anderson 1995), generates too many operand errors, and no table, non-table or operation errors. These deficiencies can be predicted from the attractor nature of an auto-associative network. It is the process of translating between representations for digits, and the possibility for error in doing so, which we believe allows our model to produce its various categories of errors. An interesting aspect of our model is revealed by Figure 3 and Table 1. 
While increased training of the look-up table improves the overall performance of the model, the error distribution remains relatively constant across the lengths of training studied. This suggests that in this model the error distribution is an inherent feature of the architecture, and not a training artifact. This corresponds with data from normal humans, in which the error distribution remains relatively constant across individuals (Girelli et al, 1996). As noted above, the design of our model should permit the occurrence of all the various error types, save for operation errors. However, at this point we do not have a clear understanding of the exact architectural features that generate the error distribution itself. Defining a model for associative multiplication is only a single step towards the goal of understanding how humans perform general arithmetic. Rumelhart et al (1986) proposed a mechanism for multi-digit arithmetic operations given a mechanism for single-digit operations, which addresses part of the issue; this model has been implemented for addition by Cottrell and T'sung (1991). The fact that humans make operation errors suggests that there might be interactions between the mechanisms of associative multiplication and associative addition; conversely, errors on these tasks may occur on different processing levels entirely. In summary, this model, despite several outstanding questions, shows great potential as a description of the associative multiplication process. Eventually, we expect it to form the basis for a more complete model of arithmetic in human cognition. Acknowledgements The first author acknowledges financial support from McMaster University and Industry Canada. The second author acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada. We would like to thank J. Linden, D. Meeker, J. Pezaris, and M. 
Sahani for their feedback and comments on this work. References Anderson J.A. et al. (1994) In Neural Networks for Knowledge Representation and Inference, Levine D.S. & Aparicio M., Eds. (Lawrence Erlbaum Associates, Hillsdale NJ) pp. 311-335. Anderson J.A. (1995) An Introduction to Neural Networks. (MIT Press/Bradford, Cambridge MA) pp. 493-544. Campbell J.I.D. & Graham D.J. (1985) Canadian Journal of Psychology 39 338. Cottrell G.W. & T'sung F.S. (1991) In Advances in Connectionist and Neural Computation Theory, Barnden J.A. & Pollack J.B., Eds. (Ablex Publishing Co., Norwood NJ) pp. 305-321. Girelli L. et al. (1996) Cortex 32 49. Hamman M.S. & Ashcraft M.H. (1986) Cognition and Instruction 3 173. Mandler G. & Shebo B.J. (1982) Journal of Experimental Psychology: General 111 1. McCloskey M. et al. (1991) Journal of Experimental Psychology: Learning, Memory, and Cognition 17 377. Rumelhart D.E. et al. (1986) In Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 2: Psychological and biological models, McClelland J.L., Rumelhart D.E., & the PDP Research Group, Eds. (MIT Press/Bradford, Cambridge MA) pp. 7-57. Siegler R. (1988) Journal of Experimental Psychology: General 117 258. Stazyk E.H. et al. (1982) Journal of Experimental Psychology: Learning, Memory, and Cognition 8 355. Warrington E.K. (1982) Quarterly Journal of Experimental Psychology 34A 31.
|
1998
|
85
|
1,587
|
Batch and On-line Parameter Estimation of Gaussian Mixtures Based on the Joint Entropy Yoram Singer AT&T Labs singer@research.att.com Manfred K. Warmuth University of California, Santa Cruz manfred@cse.ucsc.edu Abstract We describe a new iterative method for parameter estimation of Gaussian mixtures. The new method is based on a framework developed by Kivinen and Warmuth for supervised on-line learning. In contrast to gradient descent and EM, which estimate the mixture's covariance matrices, the proposed method estimates the inverses of the covariance matrices. Furthermore, the new parameter estimation procedure can be applied in both on-line and batch settings. We show experimentally that it is typically faster than EM, and usually requires about half as many iterations as EM. 1 Introduction Mixture models, in particular mixtures of Gaussians, have been a popular tool for density estimation, clustering, and unsupervised learning with a wide range of applications (see for instance [5, 2] and the references therein). Mixture models are one of the most useful tools for handling incomplete data, in particular hidden variables. For Gaussian mixtures the hidden variables indicate for each data point the index of the Gaussian that generated it. Thus, the model is specified by a joint density between the observed and hidden variables. The common technique used for estimating the parameters of a stochastic source with hidden variables is the EM algorithm. In this paper we describe a new technique for estimating the parameters of Gaussian mixtures. The new parameter estimation method is based on a framework developed by Kivinen and Warmuth [8] for supervised on-line learning. This framework was successfully used in a large number of supervised and unsupervised problems (see for instance [7, 6, 9, 1]). Our goal is to find a local minimum of a loss function which, in our case, is the negative log likelihood induced by a mixture of Gaussians.
However, rather than minimizing the loss directly we add a term measuring the distance of the new parameters to the old ones. This distance is useful for iterative parameter estimation procedures. Its purpose is to keep the new parameters close to the old ones. The method for deriving iterative parameter estimation can be used in batch settings as well as on-line settings where the parameters are updated after each observation. The distance used for deriving the parameter estimation method in this paper is the relative entropy between the old and new joint density of the observed and hidden variables. For brevity we term the new iterative parameter estimation method the joint-entropy (JE) update. The JE update shares a common characteristic with the Expectation Maximization [4, 10] algorithm as it first calculates the same expectations. However, it replaces the maximization step with a different update of the parameters. For instance, it updates the inverse of the covariance matrix of each Gaussian in the mixture, rather than the covariance matrices themselves. We found in our experiments that the JE update often requires half as many iterations as EM. It is also straightforward to modify the proposed parameter estimation method for an on-line setting where the parameters are updated after each new observation. As we demonstrate in our experiments with digit recognition, the on-line version of the JE update is especially useful in situations where the observations are generated by a nonstationary stochastic source. 2 Notation and preliminaries Let $S$ be a sequence of training examples $(x_1, x_2, \ldots, x_N)$ where each $x_i$ is a $d$-dimensional vector in $\mathbb{R}^d$. To model the distribution of the examples we use $m$ $d$-dimensional Gaussians.
The parameters of the $i$-th Gaussian are denoted by $\Theta_i$; they include the mean vector $\mu_i$ and the covariance matrix $C_i$. The density function of the $i$-th Gaussian, denoted $P(x|\Theta_i)$, is

$$P(x|\Theta_i) = (2\pi)^{-d/2}\,|C_i|^{-1/2}\exp\!\big(-\tfrac{1}{2}(x-\mu_i)^T C_i^{-1}(x-\mu_i)\big)\,.$$

We denote the entire set of parameters of a Gaussian mixture by $\Theta = \{\Theta_i\}_{i=1}^m = \{w_i, \mu_i, C_i\}_{i=1}^m$, where $w = (w_1, \ldots, w_m)$ is a non-negative vector of mixture coefficients such that $\sum_{i=1}^m w_i = 1$. We denote by $P(x|\Theta) = \sum_{i=1}^m w_i P(x|\Theta_i)$ the likelihood of an observation $x$ according to a Gaussian mixture with parameters $\Theta$. Let $\tilde{\Theta}_i$ and $\Theta_j$ be two Gaussian distributions. For brevity, we denote by $\tilde{E}_i(Z)$ and $E_j(Z)$ the expectation of a random variable $Z$ with respect to $\tilde{\Theta}_i$ and $\Theta_j$. Let $f$ be a parametric function whose parameters constitute a matrix $A = (a_{ij})$. We denote by $\partial f/\partial A$ the matrix of partial derivatives of $f$ with respect to the elements in $A$; that is, the $ij$ element of $\partial f/\partial A$ is $\partial f/\partial a_{ij}$. Similarly, let $B = (b_{ij}(x))$ be a matrix whose elements are functions of a scalar $x$. Then we denote by $dB/dx$ the matrix of derivatives of the elements in $B$ with respect to $x$; namely, the $ij$ element of $dB/dx$ is $db_{ij}(x)/dx$. 3 The framework for deriving updates Kivinen and Warmuth [8] introduced a general framework for deriving on-line parameter updates. In this section we describe how to apply their framework for the problem of parameter estimation of Gaussian mixtures in a batch setting. We later discuss how a simple modification gives the on-line updates. Given a set of data points $S$ in $\mathbb{R}^d$ and a number $m$, the goal is to find a set of $m$ Gaussians that minimize the loss on the data, denoted as $\mathrm{loss}(S|\Theta)$. For density estimation the natural loss function is the negative log-likelihood of the data, $\mathrm{loss}(S|\Theta) = -(1/|S|)\ln P(S|\Theta) \stackrel{\mathrm{def}}{=} -(1/|S|)\sum_{x\in S}\ln P(x|\Theta)$. The best parameters which minimize the above loss cannot be found analytically. The common approach is to use iterative methods such as EM [4, 10] to find a local minimizer of the loss.
In an iterative parameter estimation framework we are given the old set of parameters $\Theta_t$ and we need to find a set of new parameters $\Theta_{t+1}$ that induces a smaller loss. The framework introduced by Kivinen and Warmuth [8] deviates from the common approaches in that it also requires the new parameter vector to stay "close" to the old set of parameters, which incorporates all that was learned in the previous iterations. The distance of the new parameter setting $\Theta_{t+1}$ from the old setting $\Theta_t$ is measured by a non-negative distance function $\Delta(\Theta_{t+1}, \Theta_t)$. We now search for a new set of parameters $\Theta_{t+1}$ that minimizes the distance summed with the loss multiplied by $\eta$. Here $\eta$ is a non-negative number measuring the relative importance of the distance versus the loss; this parameter $\eta$ will become the learning rate of the update. More formally, the update is found by setting $\Theta_{t+1} = \arg\min_{\tilde{\Theta}} U_t(\tilde{\Theta})$, where $U_t(\tilde{\Theta}) = \Delta(\tilde{\Theta}, \Theta_t) + \eta\,\mathrm{loss}(S|\tilde{\Theta}) + \lambda(\sum_{i=1}^m \tilde{w}_i - 1)$. (We use a Lagrange multiplier $\lambda$ to enforce the constraint that the mixture coefficients sum to one.) By choosing the appropriate distance function and $\eta = 1$ one can show that EM becomes the above update. For most distance functions and learning rates the minimizer of the function $U_t(\tilde{\Theta})$ cannot be found analytically, as both the distance function and the log-likelihood are usually non-linear in $\tilde{\Theta}$. Instead, we expand the log-likelihood using a first order Taylor expansion around the old parameter setting. This approximation degrades the further the new parameter values are from the old ones, which further motivates the use of the distance function $\Delta(\tilde{\Theta}, \Theta_t)$ (see also the discussion in [7]). We now seek a new set of parameters $\Theta_{t+1} = \arg\min_{\tilde{\Theta}} V_t(\tilde{\Theta})$, where

$$V_t(\tilde{\Theta}) = \Delta(\tilde{\Theta}, \Theta_t) + \eta\big(\mathrm{loss}(S|\Theta_t) + (\tilde{\Theta} - \Theta_t)\cdot\nabla_{\Theta}\,\mathrm{loss}(S|\Theta_t)\big) + \lambda\Big(\sum_{i=1}^m \tilde{w}_i - 1\Big)\,. \quad (1)$$

Here $\nabla_{\Theta}\,\mathrm{loss}(S|\Theta_t)$ denotes the gradient of the loss at $\Theta_t$. We use the above method, Eq. (1), to derive the updates of this paper.
For density estimation, it is natural to use the relative entropy between the new and old density as a distance. In this paper we use the joint density between the observed (data points) and hidden variables (the indices of the Gaussians). This motivates the name joint-entropy update. 4 Entropy based distance functions We first consider the relative entropy between the new and old parameters of a single Gaussian. Using the notation introduced in Sec. 2, the relative entropy between two Gaussian distributions denoted by $\tilde{\Theta}_i$, $\Theta_i$ is

$$\Delta(\tilde{\Theta}_i, \Theta_i) \stackrel{\mathrm{def}}{=} \int_{x\in\mathbb{R}^d} P(x|\tilde{\Theta}_i)\ln\frac{P(x|\tilde{\Theta}_i)}{P(x|\Theta_i)}\,dx = \frac{1}{2}\ln\frac{|C_i|}{|\tilde{C}_i|} - \frac{1}{2}\tilde{E}_i\big((x-\tilde{\mu}_i)^T\tilde{C}_i^{-1}(x-\tilde{\mu}_i)\big) + \frac{1}{2}\tilde{E}_i\big((x-\mu_i)^T C_i^{-1}(x-\mu_i)\big)\,.$$

Using standard (though tedious) algebra we can rewrite the expectations as follows:

$$\Delta(\tilde{\Theta}_i, \Theta_i) = \frac{1}{2}\ln\frac{|C_i|}{|\tilde{C}_i|} - \frac{d}{2} + \frac{1}{2}\mathrm{tr}\big(C_i^{-1}\tilde{C}_i\big) + \frac{1}{2}(\tilde{\mu}_i - \mu_i)^T C_i^{-1}(\tilde{\mu}_i - \mu_i)\,. \quad (2)$$

The relative entropy between the new and the old mixture models is the following:

$$\Delta(\tilde{\Theta}, \Theta) \stackrel{\mathrm{def}}{=} \int_x P(x|\tilde{\Theta})\ln\frac{P(x|\tilde{\Theta})}{P(x|\Theta)}\,dx = \int_x \sum_{i=1}^m \tilde{w}_i P(x|\tilde{\Theta}_i)\,\ln\frac{\sum_{i=1}^m \tilde{w}_i P(x|\tilde{\Theta}_i)}{\sum_{i=1}^m w_i P(x|\Theta_i)}\,dx\,. \quad (3)$$

Ideally, we would like to use the above distance function in $V_t$ to give us an update of $\tilde{\Theta}$ in terms of $\Theta$. However, there isn't a closed form expression for Eq. (3). Although the relative entropy between two Gaussians is a convex function in their parameters, the relative entropy between two Gaussian mixtures is non-convex. Thus, the loss function $V_t(\tilde{\Theta})$ may have multiple minima, making the problem of finding $\arg\min_{\tilde{\Theta}} V_t(\tilde{\Theta})$ difficult. In order to sidestep this problem we use the log-sum inequality [3] to obtain an upper bound for the distance function $\Delta(\tilde{\Theta}, \Theta)$. We denote this upper bound as $\hat{\Delta}(\tilde{\Theta}, \Theta)$:

$$\Delta(\tilde{\Theta}, \Theta) \le \sum_{i=1}^m \tilde{w}_i\ln\frac{\tilde{w}_i}{w_i} + \sum_{i=1}^m \tilde{w}_i \int_x P(x|\tilde{\Theta}_i)\ln\frac{P(x|\tilde{\Theta}_i)}{P(x|\Theta_i)}\,dx = \sum_{i=1}^m \tilde{w}_i\ln\frac{\tilde{w}_i}{w_i} + \sum_{i=1}^m \tilde{w}_i\,\Delta(\tilde{\Theta}_i, \Theta_i) \stackrel{\mathrm{def}}{=} \hat{\Delta}(\tilde{\Theta}, \Theta)\,. \quad (4)$$

We call the new distance function $\hat{\Delta}(\tilde{\Theta}, \Theta)$ the joint-entropy distance.
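As a side illustration (our sketch, not part of the paper), the closed form in Eq. (2) for the relative entropy between two Gaussians can be computed directly with numpy:

```python
import numpy as np

def gaussian_kl(mu_new, C_new, mu_old, C_old):
    """Relative entropy KL(N(mu_new, C_new) || N(mu_old, C_old)), as in Eq. (2)."""
    d = len(mu_new)
    C_old_inv = np.linalg.inv(C_old)
    diff = mu_new - mu_old
    return 0.5 * (np.log(np.linalg.det(C_old) / np.linalg.det(C_new))
                  - d
                  + np.trace(C_old_inv @ C_new)
                  + diff @ C_old_inv @ diff)
```

For identical parameters the distance is zero, and for two unit-variance one-dimensional Gaussians with means 0 and 1 it evaluates to 1/2, which is a convenient sanity check of the formula.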
Note that in this distance the parameters $\tilde{w}_i$ and $w_i$ are "coupled" in the sense that it is a convex combination of the distances $\Delta(\tilde{\Theta}_i, \Theta_i)$. In particular, $\hat{\Delta}(\tilde{\Theta}, \Theta)$ as a function of the parameters $\tilde{w}_i, \tilde{\mu}_i, \tilde{C}_i$ no longer remains constant when the parameters of the individual Gaussians are permuted. Furthermore, $\hat{\Delta}(\tilde{\Theta}, \Theta)$ is also sufficiently convex so that finding the minimizer of $V_t$ is possible (see below). 5 The updates We are now ready to derive the new parameter estimation scheme. This is done by setting the partial derivatives of $V_t$, with respect to $\tilde{\Theta}$, to 0. That is, our problem consists of solving the following equations:

$$\frac{\partial\hat{\Delta}(\tilde{\Theta},\Theta)}{\partial\tilde{w}_i} - \frac{\eta}{|S|}\frac{\partial\ln P(S|\tilde{\Theta})}{\partial\tilde{w}_i} + \lambda = 0\,,\qquad \frac{\partial\hat{\Delta}(\tilde{\Theta},\Theta)}{\partial\tilde{\mu}_i} - \frac{\eta}{|S|}\frac{\partial\ln P(S|\tilde{\Theta})}{\partial\tilde{\mu}_i} = 0\,,\qquad \frac{\partial\hat{\Delta}(\tilde{\Theta},\Theta)}{\partial\tilde{C}_i} - \frac{\eta}{|S|}\frac{\partial\ln P(S|\tilde{\Theta})}{\partial\tilde{C}_i} = 0\,.$$

We now use the fact that $\tilde{C}_i$ and thus $\tilde{C}_i^{-1}$ is symmetric. The derivatives of $\hat{\Delta}(\tilde{\Theta},\Theta)$, as defined by Eq. (4) and Eq. (2), with respect to $\tilde{w}_i$, $\tilde{\mu}_i$ and $\tilde{C}_i$, are

$$\frac{\partial\hat{\Delta}(\tilde{\Theta},\Theta)}{\partial\tilde{w}_i} = \ln\frac{\tilde{w}_i}{w_i} + 1 + \frac{1}{2}\ln\frac{|C_i|}{|\tilde{C}_i|} - \frac{d}{2} + \frac{1}{2}\mathrm{tr}\big(C_i^{-1}\tilde{C}_i\big) + \frac{1}{2}(\tilde{\mu}_i-\mu_i)^T C_i^{-1}(\tilde{\mu}_i-\mu_i) \quad (5)$$

$$\frac{\partial\hat{\Delta}(\tilde{\Theta},\Theta)}{\partial\tilde{\mu}_i} = \tilde{w}_i\, C_i^{-1}(\tilde{\mu}_i-\mu_i) \quad (6)$$

$$\frac{\partial\hat{\Delta}(\tilde{\Theta},\Theta)}{\partial\tilde{C}_i} = \frac{1}{2}\tilde{w}_i\big(C_i^{-1} - \tilde{C}_i^{-1}\big)\,. \quad (7)$$

To simplify the notation throughout the rest of the paper we define the following variables:

$$\beta_i(x) \stackrel{\mathrm{def}}{=} \frac{P(x|\Theta_i)}{P(x|\Theta)} \qquad\text{and}\qquad \alpha_i(x) \stackrel{\mathrm{def}}{=} \frac{w_i P(x|\Theta_i)}{P(x|\Theta)} = P(i|x,\Theta) = w_i\beta_i(x)\,.$$

The partial derivatives of the log-likelihood are computed similarly:

$$\frac{\partial\ln P(S|\Theta)}{\partial w_i} = \sum_{x\in S}\frac{P(x|\Theta_i)}{P(x|\Theta)} = \sum_{x\in S}\beta_i(x) \quad (8)$$

$$\frac{\partial\ln P(S|\Theta)}{\partial\mu_i} = \sum_{x\in S}\frac{w_i P(x|\Theta_i)}{P(x|\Theta)}\,C_i^{-1}(x-\mu_i) = \sum_{x\in S}\alpha_i(x)\,C_i^{-1}(x-\mu_i) \quad (9)$$

$$\frac{\partial\ln P(S|\Theta)}{\partial C_i} = -\frac{1}{2}\sum_{x\in S}\frac{w_i P(x|\Theta_i)}{P(x|\Theta)}\big(C_i^{-1} - C_i^{-1}(x-\mu_i)(x-\mu_i)^T C_i^{-1}\big) = -\frac{1}{2}\sum_{x\in S}\alpha_i(x)\big(C_i^{-1} - C_i^{-1}(x-\mu_i)(x-\mu_i)^T C_i^{-1}\big)\,. \quad (10)$$

We now need to decide on an order for updating the parameter classes $w_i$, $\mu_i$, and $C_i$. We use the same order that EM uses, namely $w_i$, then $\mu_i$, and finally $C_i$. (After doing one pass over all three groups we start again using the same order.)
Using this order results in a simplified set of equations, as several terms in Eq. (5) cancel out. Denote the size of the sample by $N = |S|$. We now need to sum the derivatives from Eq. (5) and Eq. (8), using the fact that the Lagrange multiplier $\lambda$ simply assures that the new weights $\tilde{w}_i$ sum to one. By setting the result to zero, we get that

$$w_i \leftarrow \frac{w_i\exp\big(\frac{\eta}{N}\sum_{x\in S}\beta_i(x)\big)}{\sum_{j=1}^m w_j\exp\big(\frac{\eta}{N}\sum_{x\in S}\beta_j(x)\big)}\,. \quad (11)$$

Similarly, we sum Eq. (6) and Eq. (9), set the result to zero, and get that

$$\mu_i \leftarrow \mu_i + \frac{\eta}{N}\sum_{x\in S}\beta_i(x)\,(x-\mu_i)\,. \quad (12)$$

Finally, we do the same for $C_i$. We sum Eq. (7) and Eq. (10) using the newly obtained $\mu_i$:

$$C_i^{-1} \leftarrow C_i^{-1} + \frac{\eta}{N}\sum_{x\in S}\beta_i(x)\big(C_i^{-1} - C_i^{-1}(x-\mu_i)(x-\mu_i)^T C_i^{-1}\big)\,. \quad (13)$$

We call the new iterative parameter estimation procedure the joint-entropy (JE) update. To summarize, the JE update is composed of the following alternating steps: we first calculate for each observation $x$ the value $\beta_i(x) = P(x|\Theta_i)/P(x|\Theta)$ and then update the parameters as given by Eq. (11), Eq. (12), and Eq. (13). The JE update and EM differ in several aspects. First, EM uses a simple update for the mixture weights $w$. Second, EM uses the expectations (with respect to the current parameters) of the sufficient statistics [4] for $\mu_i$ and $C_i$ to find new sets of mean vectors and covariance matrices. The JE update uses a (slightly different) weighted average of the observations and, in addition, it adds the old parameters. The learning rate $\eta$ determines the proportion to be used in summing the old parameters and the newly estimated parameters. Last, EM estimates the covariance matrices $C_i$ whereas the new update estimates the inverses, $C_i^{-1}$, of these matrices. Thus, it is potentially more stable numerically in cases where the covariance matrices are poorly conditioned. To obtain an on-line procedure we need to update the parameters after each new observation, one observation at a time.
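Before turning to the on-line case, one batch JE iteration (Eqs. 11-13) can be sketched in numpy. This is our illustrative reading of the update rules, not the authors' implementation:

```python
import numpy as np

def je_batch_step(X, w, mu, C_inv, eta):
    """One batch joint-entropy (JE) iteration (Eqs. 11-13).

    X: (N, d) data, w: (m,) mixture weights, mu: (m, d) means,
    C_inv: (m, d, d) inverse covariance matrices, eta: learning rate.
    Illustrative sketch of the update rules, not the authors' code."""
    N, d = X.shape
    m = len(w)
    # beta_i(x) = P(x | Theta_i) / P(x | Theta)
    dens = np.empty((N, m))
    for i in range(m):
        diff = X - mu[i]
        quad = np.einsum('nd,de,ne->n', diff, C_inv[i], diff)
        dens[:, i] = (np.sqrt(np.linalg.det(C_inv[i]))
                      * (2 * np.pi) ** (-d / 2) * np.exp(-0.5 * quad))
    beta = dens / (dens @ w)[:, None]
    # Eq. (11): exponentiated update of the mixture weights, then normalize.
    w_new = w * np.exp((eta / N) * beta.sum(axis=0))
    w_new = w_new / w_new.sum()
    mu_new = np.empty_like(mu)
    C_inv_new = np.empty_like(C_inv)
    for i in range(m):
        # Eq. (12): mean update.
        diff = X - mu[i]
        mu_new[i] = mu[i] + (eta / N) * (beta[:, i, None] * diff).sum(axis=0)
        # Eq. (13): inverse-covariance update, using the newly obtained mean.
        diff = X - mu_new[i]
        P = C_inv[i]
        outer = np.einsum('n,nd,ne->de', beta[:, i], diff, diff)
        C_inv_new[i] = P + (eta / N) * (beta[:, i].sum() * P - P @ outer @ P)
    return w_new, mu_new, C_inv_new
```

Note that the function updates the parameter classes in the same order as the text: weights, then means, then inverse covariances, with each later update using the newly obtained earlier ones.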
That is, rather than summing over all $x\in S$, for a new observation $x_t$ we update the parameters and get a new set of parameters $\Theta_{t+1}$ using the current parameters $\Theta_t$. The new parameters are then used for inducing the likelihood of the next observation $x_{t+1}$. The on-line parameter estimation procedure is composed of the following steps:

1. Set $\beta_i(x_t) = P(x_t|\Theta_i)/P(x_t|\Theta)$.
2. Parameter updates:
   (a) $w_i \leftarrow w_i\exp\big(\eta_t\beta_i(x_t)\big)\big/\sum_{j=1}^m w_j\exp\big(\eta_t\beta_j(x_t)\big)$
   (b) $\mu_i \leftarrow \mu_i + \eta_t\beta_i(x_t)\,(x_t-\mu_i)$
   (c) $C_i^{-1} \leftarrow C_i^{-1} + \eta_t\beta_i(x_t)\big(C_i^{-1} - C_i^{-1}(x_t-\mu_i)(x_t-\mu_i)^T C_i^{-1}\big)$.

To guarantee convergence of the on-line update one should use a diminishing learning rate, that is, $\eta_t \to 0$ as $t \to \infty$ (for further motivation see [11]).

Figure 1: Left: comparison of the convergence rate of EM and the JE update with different learning rates. Right: example of a case where EM initially increases the likelihood faster than the JE update.

6 Experiments We conducted numerous experiments with the new update. Due to the lack of space we describe here only two. In the first experiment we compared the JE update and EM in batch settings. We generated data from Gaussian mixture distributions with varying numbers of components ($m = 2$ to $100$) and dimensions ($d = 2$ to $20$). Due to the lack of space we describe here results obtained from only one setting. In this setting the examples were generated by a mixture of 5 components with $w = (0.4, 0.3, 0.2, 0.05, 0.05)$.
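The on-line steps above can be sketched the same way; again this is an illustration rather than the authors' code, with `eta_t` standing for the diminishing learning rate at time $t$:

```python
import numpy as np

def je_online_step(x, w, mu, C_inv, eta_t):
    """On-line JE update for a single observation x (steps 1 and 2 above).

    w: (m,) weights, mu: (m, d) means, C_inv: (m, d, d) inverse
    covariances, eta_t: learning rate at time t.
    Illustrative sketch, not the authors' code."""
    d = len(x)
    m = len(w)
    # Step 1: beta_i(x_t) = P(x_t | Theta_i) / P(x_t | Theta).
    dens = np.empty(m)
    for i in range(m):
        diff = x - mu[i]
        dens[i] = (np.sqrt(np.linalg.det(C_inv[i]))
                   * (2 * np.pi) ** (-d / 2)
                   * np.exp(-0.5 * diff @ C_inv[i] @ diff))
    beta = dens / (w @ dens)
    # Step 2(a): exponentiated weight update, then normalize.
    w = w * np.exp(eta_t * beta)
    w = w / w.sum()
    mu = mu.copy()
    C_inv = C_inv.copy()
    for i in range(m):
        # Step 2(b): mean update.
        mu[i] = mu[i] + eta_t * beta[i] * (x - mu[i])
        # Step 2(c): inverse-covariance update with the new mean.
        diff = x - mu[i]
        P = C_inv[i]
        C_inv[i] = P + eta_t * beta[i] * (P - P @ np.outer(diff, diff) @ P)
    return w, mu, C_inv
```

Calling this once per incoming observation, with a schedule such as `eta_t = 1.0 / (t + 10)`, gives the diminishing-learning-rate behavior the convergence remark asks for.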
The mean vectors were the 5 standard unit vectors in the Euclidean space $\mathbb{R}^5$ and we set all of the covariance matrices to the identity matrix. We generated 1000 examples. We then ran EM and the JE update with different learning rates ($\eta = 1.9, 1.5, 1.1, 1.05$). To make sure that all the runs would end in the same local maximum we first performed three EM iterations. The results are shown on the left hand side of Figure 1. In this setting, the JE update with high learning rates achieves much faster convergence than EM. We would like to note that this behavior is by no means esoteric; most of our experiments yielded similar results. We found a different behavior in low dimensional settings. On the right hand side of Figure 1 we show convergence rate results for a mixture containing two components, each of which is a one-dimensional Gaussian. The means of the two components were located at 1 and $-1$ with the same variance of 2. Thus, there is a significant "overlap" between the two Gaussians constituting the mixture. The mixture weight vector was $(0.5, 0.5)$. We generated 50 examples according to this distribution and initialized the parameters as follows: $\mu_1 = 0.01$, $\mu_2 = -0.01$, $\sigma_1 = \sigma_2 = 2$, $w_1 = w_2 = 0.5$. We see that initially EM increases the likelihood much faster than the JE update. Eventually, the JE update converges faster than EM when using a small learning rate (in the example appearing in Figure 1 we set $\eta = 1.05$). However, in this setting, the JE update diverges when learning rates larger than $\eta = 1.1$ are used. This behavior underscores the advantages of both methods. EM uses a fixed learning rate and is guaranteed to converge to a local maximum of the likelihood, under conditions that typically hold for mixtures of Gaussians [4, 12]. The JE update, on the other hand, encompasses a learning rate and in many settings it converges much faster than EM.
However, the superior performance in high dimensional cases comes at a price in low dimensional "dense" cases; namely, a very conservative learning rate, which is hard to tune, needs to be used. In these cases, EM is a better alternative, offering almost the same convergence rate without the need to tune any parameters. Acknowledgments Thanks to Duncan Herring for careful proof reading and providing us with interesting data sets. References [1] E. Bauer, D. Koller, and Y. Singer. Update rules for parameter estimation in Bayesian networks. In Proc. of the 13th Annual Conf. on Uncertainty in AI, pages 3-13, 1997. [2] C.M. Bishop. Neural Networks for Pattern Recognition. Oxford Univ. Press, 1995. [3] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley, 1991. [4] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum-likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B39:1-38, 1977. [5] R.O. Duda and P.E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973. [6] D.P. Helmbold, J. Kivinen, and M.K. Warmuth. Worst-case loss bounds for sigmoided neurons. In Advances in Neural Information Processing Systems 7, pages 309-315, 1995. [7] D.P. Helmbold, R.E. Schapire, Y. Singer, and M.K. Warmuth. A comparison of new and old algorithms for a mixture estimation problem. Machine Learning, Vol. 7, 1997. [8] J. Kivinen and M.K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1-64, January 1997. [9] J. Kivinen and M.K. Warmuth. Relative loss bounds for multidimensional regression problems. In Advances in Neural Information Processing Systems 10, 1997. [10] R.A. Redner and H.F. Walker. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26(2), 1984. [11] D.M. Titterington, A.F.M. Smith, and U.E. Makov. Statistical Analysis of Finite Mixture Distributions. Wiley, 1985. [12] C.F. Wu. On the convergence properties of the EM algorithm.
Annals of Statistics, 11:95-103, 1983.
|
1998
|
86
|
1,588
|
Learning Macro-Actions in Reinforcement Learning Jette Randløv Niels Bohr Inst., Blegdamsvej 17, University of Copenhagen, DK-2100 Copenhagen Ø, Denmark randlov@nbi.dk Abstract We present a method for automatically constructing macro-actions from scratch from primitive actions during the reinforcement learning process. The overall idea is to reinforce the tendency to perform action b after action a if such a pattern of actions has been rewarded. We test the method on a bicycle task, the car-on-the-hill task, the race-track task and some grid-world tasks. For the bicycle and race-track tasks the use of macro-actions approximately halves the learning time, while for one of the grid-world tasks the learning time is reduced by a factor of 5. The method did not work for the car-on-the-hill task for reasons we discuss in the conclusion. 1 INTRODUCTION A macro-action is a sequence of actions chosen from the primitive actions of the problem.¹ Lumping actions together as macros can be of great help for solving large problems (Korf, 1985a,b; Gullapalli, 1992) and can sometimes greatly speed up learning (Iba, 1989; McGovern, Sutton & Fagg, 1997; McGovern & Sutton, 1998; Sutton, Precup & Singh, 1998; Sutton, Singh, Precup & Ravindran, 1999). Macro-actions might be essential for scaling up reinforcement learning to very large problems. Construction of macro-actions by hand requires insight into the problem at hand. It would be more elegant and useful if the agent itself could decide what actions to lump together (Iba, 1989; McGovern & Sutton, 1998; Sutton, Precup & Singh, 1998; Hauskrecht et al., 1998). ¹This is a special case of definitions of macro-actions seen elsewhere.
Some researchers take macro-actions to consist of a policy, terminal conditions and an input set (Precup & Sutton, 1998; Sutton, Precup & Singh, 1998; Sutton, Singh, Precup & Ravindran, 1999) while others define them as a local policy (Hauskrecht et al., 1998). 2 ACTION-TO-ACTION MAPPING In reinforcement learning we want to learn a mapping from states to actions, s → a, that maximizes the total expected reward (Sutton & Barto, 1998). Sometimes it might be of use to learn a mapping from actions to actions as well. We believe that acting according to an action-to-action mapping can be useful for three reasons: 1. During the early stages of learning the agent will enter areas of the state space it has never visited before. If the agent acts according to an action-to-action mapping it might be guided through such areas where there is yet no clear choice of action otherwise. In other words it is much more likely that an action-to-action mapping could guide the agent to perform almost optimally in states never visited than a random policy. 2. In some situations, for instance in an emergency, it can be useful to perform a certain open-loop sequence of actions, without being guided by state information. Consider for instance an agent learning to balance on a bicycle (Randløv & Alstrøm, 1998). If the bicycle is in an unbalanced state, the agent must forget about the position of the bicycle and carry out a sequence of actions to balance the bicycle again. Some of the state information (the position of the bicycle relative to some goal) does not matter, and might actually distract the agent, while the history of the most recent actions might contain just the needed information to pick the next action. 3. An action-to-action mapping might lead the agent to explore the relevant areas of the state space in an efficient way instead of just hitting them by chance.
We therefore expect that learning an action-to-action mapping in addition to a state-to-action mapping can lead to faster overall learning. Even though the system has the Markov property, it may be useful to remember a bit of the action history. We want the agent to perform a sequence of actions while being aware of the development of the states, but not only being controlled by the states. Many people have tried to deal with imperfect state information by adding memory of previous states and actions to the information the agent receives (Andreae & Cashin, 1969; McCallum, 1995; Hansen, Barto & Zilberstein, 1997; Burgard et al., 1998). In this work we are not specially concerned with non-Markov problems. However, the results in this paper suggest that some methods for partially observable MDPs could be applied to MDPs and result in faster learning. The difficult part is how to combine the suggestion made by the action-to-action mapping with the conventional state-to-action mapping. Obviously we do not want to learn the mapping $(s_t, a_{t-1}) \to a_t$ in tabular form, since that would destroy the possibility of generalisation over the state space by the action-to-action mapping. In our approach we decided to learn two value mappings. The mapping $Q^s$ is the conventional Q-value normally used for the state-to-action mapping, while the mapping $Q^a$ represents the value belonging to the action-to-action mapping. When making a choice, we add the Q-values of the suggestions made by the two mappings, normalize, and use the new values to pick an action in the usual way:

$$Q(s_t, a_t) = Q^s(s_t, a_t) + \beta\, Q^a(a_{t-1}, a_t)\,.$$

Here $Q$ is the Q-value that we actually use to pick the next action. The parameter $\beta$ determines the influence of the action-to-action mapping. For $\beta = 0$ we are back with the usual Q-values. The idea is to reinforce the tendency to perform action b after action a if such a pattern of actions is rewarded. In this way the agent forms habits or macro-actions and it will sometimes act according to them.
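A minimal sketch of this combination might look as follows. The tables are hypothetical, and the normalization used here (dividing by the largest magnitude) is one simple choice, since the text does not spell out the exact scheme; this is an illustration, not the author's code:

```python
import numpy as np

def select_action(Qs, Qa, s, a_prev, beta):
    """Greedy action choice from the combined Q-values.

    Qs: (n_states, n_actions) state-action table.
    Qa: (n_actions, n_actions) action-action table.
    beta weighs the action-to-action "habit"; beta = 0 recovers
    the usual greedy choice over Qs alone."""
    q = Qs[s] + beta * Qa[a_prev]       # combined score per action
    q = q / (np.abs(q).max() + 1e-12)   # normalize (one simple choice)
    return int(np.argmax(q))
```

With `beta = 0` the choice depends only on the state-action values; as `beta` grows, a strongly reinforced action pair can override the state-based preference, which is exactly the habit-forming behavior described above.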
3 RESULTS How do we implement an action-to-action mapping and the Q-values? Many algorithms have been developed to find near optimal state-to-action mappings on a trial-and-error basis. An example of such an algorithm is Sarsa(λ), developed by Rummery and Niranjan (Rummery & Niranjan, 1994; Rummery, 1995). We use Sarsa(λ) with replacing eligibility traces (Singh & Sutton, 1996) and table look-up. Eligibility traces are attached to the $Q^a$-values, one for each action-action pair.² During learning the $Q^s$- and $Q^a$-values are both adjusted according to the overall TD error $\delta_t = r_{t+1} + \gamma Q_t(s_{t+1}, a_{t+1}) - Q_t(s_t, a_t)$. The update for the $Q^a$-values has the form $\Delta Q^a(a_{t-1}, a_t) = \beta\,\delta\,e(a_{t-1}, a_t)$. For a description of the Sarsa(λ) algorithm see Rummery (1995) or Sutton & Barto (1998).

Figure 1: One can think of the action-to-action mapping in terms of weights between output neurons in a network calculating the Q-value.

Figure 1 shows the idea in terms of a neural network with no hidden layers. The new $Q^a$-values correspond to weights from output neurons to output neurons. 3.1 THE BICYCLE We first tested the new Q-values on a bicycle system. To solve this problem the agent has to learn to balance a bicycle for 1000 seconds and thereby ride 2.8 km. At each time step the agent receives information about the state of the bicycle: the angle and angular velocity of the handlebars, and the angle, angular velocity and angular acceleration of the angle of the bicycle from vertical. The agent chooses two basic actions: the torque that should be applied to the handlebars, and how much the centre of mass should be displaced from the bicycle's plane, giving a total of 9 possible actions (Randløv & Alstrøm, 1998). The reward at each time step is 0 unless the bicycle has fallen, in which case it is −1. The agent uses α = 0.5, γ = 0.99 and λ = 0.95. For further description and the equations for the system we refer the reader to the original paper.
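The bookkeeping for the $Q^a$ table with replacing eligibility traces might be sketched as below. The `step` scaling passed in is an assumption on our part, since the text ties the update to the Sarsa(λ) setup without fixing every constant; the trace handling follows the replacing-traces convention described above:

```python
import numpy as np

def update_action_traces(Qa, e_a, a_prev, a, delta, step, gamma, lam):
    """Adjust action-to-action values by the overall TD error delta.

    Qa, e_a: (n_actions, n_actions) value and eligibility-trace tables.
    Replacing traces: all traces decay by gamma * lam, and the trace of
    the pair (a_prev, a) that was just visited is reset to 1.
    `step` is the step size for the Qa update (an assumption here)."""
    e_a *= gamma * lam            # decay all traces
    e_a[a_prev, a] = 1.0          # replacing trace for the visited pair
    Qa += step * delta * e_a      # TD update scaled by eligibility
    return Qa, e_a
```

Note that, per the footnote below, traces for the pairs not visited simply continue decaying rather than being cut to zero.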
Figure 2 shows how the learning time varies with the value of β. The error bars show the standard error in all graphs. For small values of β (≲ 0.03) the agent learns the task faster than with usual Sarsa(λ) (β = 0). As expected, large values of β slow down learning.

Figure 2: Learning time as a function of the parameter β for the bicycle experiment. Each point is an average of 200 runs.

3.2 THE CAR ON THE HILL The second example is Boyan and Moore's mountain-car task (Boyan & Moore, 1995; Singh & Sutton, 1996; Sutton, 1996). Consider driving an under-powered car up a steep mountain road. The problem is that gravity is stronger than the car's engine, and the car cannot accelerate up the slope. The agent must first move the car away from the goal and up the opposite slope, and then apply full throttle and build up enough momentum to reach the goal. The reward at each time step is −1 until the agent reaches the goal, where it receives reward 0. The agent must choose one of three possible actions at each time step: full thrust forward, no thrust, or full thrust backwards. Refer to Singh & Sutton (1996) for the equations of the task. We used one of the Sarsa agents with five 9 × 9 CMAC tilings that have been thoroughly examined by Singh & Sutton (1996). The agent's parameters are λ = 0.9, α = 0.7, γ = 1, and a greedy selection of actions. (These are the best values found by Singh and Sutton.) As in Singh and Sutton's treatment of the problem, all agents were tried for 20 trials, where a trial is one run from a randomly selected starting state to the goal. All the agents used the same set of starting states. The performance measure is the average trial time over the first 20 trials.

²If one action is taken in a state, we allow the traces for the other actions to continue decaying instead of cutting them to 0, contrary to Singh and Sutton (Singh & Sutton, 1996).
Figure 3 shows results for two of our simulations. Obviously the action-to-action weights are of no use to the agent, since the lowest point is at β = 0. 3.3 THE RACE TRACK PROBLEM In the race track problem, which was originally presented by Barto, Bradtke & Singh (1995), the agent controls a car on a race track. The agent must guide the car from the start line to the finish line in the least number of steps possible. The exact position on the start line is randomly selected. The state is given by the position and velocity $(p_x, p_y, v_x, v_y)$ (all integer values). The total number of reachable states is 9115 for the track shown in Fig. 4.

Figure 3: Average trial time of the 20 trials as a function of the parameter β for the car on the hill. Each point is an average of 200 runs.

Figure 4: An example of a near-optimal path for the race-track problem. Starting line to the left and finish line at the upper right.

At each step, the car can accelerate with $a \in \{-1, 0, +1\}$ in both dimensions. Thus, the agent has 9 possible combinations of actions to choose from. Figure 4 shows positions on a near-optimal path. The agent receives a reward of −1 for each step it makes without reaching the goal, and −2 for hitting the boundary of the track. Besides the punishment for hitting the boundary of the track, and the fact that the agent's choice of action is always carried out, the problem is as stated in Barto, Bradtke & Singh (1995) and Rummery (1995). The agent's parameters are α = 0.5, λ = 0.8 and γ = 0.98. The learning process is divided into epochs consisting of 10 trials each. We consider the task learned if the agent has navigated the car from start to goal in an average of less than 20 time steps for one full epoch. The learning time is defined as the number of the first epoch for which the criterion is met.
This learning criterion emphasizes stable learning: the agent needs to be able to solve the problem several times in a row.

Figure 5: Learning time as a function of the parameter β for the race track. Each point is an average of 200 runs.

Figure 6: Learning time as a function of the parameter β for grid-world tasks: a 3-dimensional grid-world with 216 states (left) and a 4-dimensional grid-world with 256 states (right). All points are averages of 50 runs.

Figure 5 shows how the learning time varies with the value of β. For a large range of small values of β we see a considerable reduction in learning time from 11.5 epochs to 4.2 epochs. As before, large values of β slow down learning. 3.4 GRID-WORLD TASKS We tried the new method on a set of grid-world problems in 3, 4 and 5 dimensions. In all the problems the starting point is located at (1, 1, ...). For 3 dimensions the goal is located at (4, 6, 4), in 4 dimensions at (2, 4, 2, 4) and in 5 dimensions at (2, 4, 2, 4, 2). For a d-dimensional problem, the agent has 2d actions to choose from. Action 2i − 1 is to move by −1 in the ith dimension, and action 2i is to move by +1 in the ith dimension. The agent receives a reward of −0.1 for each step it makes without reaching the goal, and +1 for reaching the goal. If the agent tries to step outside the boundary of the world it maintains its position. The 3-dimensional problem takes place in a 6 × 6 × 6 grid-world, while the 4- and 5-dimensional worlds have each dimension of size 4.

Figure 7: Learning time as a function of the parameter β for a 5-dimensional grid-world with 1024 states. All points are averages of 50 runs.

Again, the learning process is divided into epochs consisting of 10 trials each.
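The grid-world action encoding just described (action 2i − 1 moves by −1 in dimension i, action 2i by +1, with the position unchanged at the boundary) can be sketched as a small transition function; this is our illustration of the task description, not the author's code:

```python
def grid_step(pos, action, size):
    """One step in a d-dimensional grid-world.

    pos: tuple of 1-based coordinates; action in 1..2d, where action
    2i-1 moves by -1 in dimension i and action 2i moves by +1.
    An attempt to step outside the boundary leaves the position
    unchanged, matching the task description."""
    dim = (action - 1) // 2             # which dimension to move in (0-based)
    delta = -1 if action % 2 == 1 else +1
    new = list(pos)
    new[dim] += delta
    if not (1 <= new[dim] <= size[dim]):
        return tuple(pos)               # hit the boundary: stay put
    return tuple(new)
```

For the 3-dimensional task, for example, a route to the goal (4, 6, 4) from (1, 1, 1) can be traced by repeatedly applying actions 2, 4 and 6 within the 6 × 6 × 6 world.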
The task is considered learned if the agent has navigated from start to goal in an average of less than some fixed number of steps (15 for 3 dimensions, 19 for 4 and 50 for 5 dimensions) for one full epoch. The agent uses α = 0.5, λ = 0.9 and γ = 0.98. Figures 6 and 7 show our results for the grid-world tasks. The learning time is substantially reduced. The usefulness of our new method seems to improve with the number of actions: the more actions, the better it works. Figure 8 shows one of the clearer (but not untypical) sets of values for the action-to-action weights for the 3-dimensional problem. Recommended actions are marked with a white 'X'.

[Figure 8: The values of the action-to-action weights (a_{t-1} versus a_t); the darker the square the stronger the relationship.]

The agent has learned two macro-actions. If the agent has performed action number 4 it will continue to perform action 4, all other things being equal. The other macro-action consists of cycling between actions 2 and 6. This is a reasonable choice, as one route to the goal consists of performing the actions (44444) and then (262626).

3.5 A TASK WITH MANY ACTIONS

Finally we tried a problem with a large number of actions. The world is a 10 x 10 meter square. Instead of picking a dimension to advance in, the agent chooses a direction. The angular space consists of 36 parts of 10°. The exact position of the agent is discretized in boxes of 0.1 x 0.1 meter. The goal is a square centered at (9.5, 7.5) with sides measuring 0.4 m. The agent moves 0.3 m per time step, and receives a reward of +1 for reaching the goal and -0.1 otherwise. The task is considered learned if the agent has navigated from start to goal in an average of less than 200 time steps for one full epoch (10 trials).

[Figure 9: Learning time as a function of the parameter β. All points are averages of 50 runs. Note the logarithmic scale.]
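The selection rule underlying all of these experiments, a mix of state-to-action Q-values with action-to-action values weighted by β, can be sketched for the greedy case; the table names and the greedy tie-breaking are our assumptions:

```python
# Hedged sketch of the mixed value: each action a is scored by
# Q[s][a] + beta * W[a_prev][a], where W holds the action-to-action
# values of Fig. 8 and beta is the mixing parameter studied above.

def select_action(Q, W, s, a_prev, beta):
    """Greedy choice over the mixed score; ties break to the lowest index."""
    n_actions = len(Q[s])
    scores = [Q[s][a] + beta * W[a_prev][a] for a in range(n_actions)]
    return max(range(n_actions), key=lambda a: scores[a])
```

With β = 0 the rule reduces to ordinary greedy Q-value selection; a positive β lets a previously taken action bias the next choice, which is how the learned macro-actions of Fig. 8 express themselves.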
Figure 9 shows the learning curve. The learning time is reduced by a factor of 147, from 397 (±7) to 2.7 (±0.2) epochs. The only real difference compared to the grid-world problems is the number of actions. The results therefore indicate that the larger the number of actions, the better the method works.

4 CONCLUSION AND DISCUSSION

We presented a new method for calculating Q-values that mixes the conventional Q-values for the state-to-action mapping with Q-values for an action-to-action mapping. We tested the method on a number of problems and found that for all problems except one, the method reduces the total learning time. Furthermore, the agent found macros and learned them. A value function based on values from both state-action and action-action pairs is not guaranteed to converge. Indeed, for large values of β the method seems unstable, with large variances in the learning time. A good strategy could be to start with a high initial β and gradually decrease the value. The empirical results indicate that the usefulness of the method depends on the number of actions: the more actions, the better it works. This is also intuitively reasonable, as the information content of the knowledge that a particular action was performed is higher if the agent has more actions to choose from.

Acknowledgment

The author wishes to thank Andrew G. Barto, Preben Alstrøm, Doina Precup and Amy McGovern for useful comments and suggestions on earlier drafts of this paper and Richard Sutton and Matthew Schlesinger for helpful discussion. Also a lot of thanks to David Cohen for his patience with later-than-last-minute corrections.

References

Andreae, J. H. & Cashin, P. M. (1969). A learning machine with monologue. International Journal of Man-Machine Studies, 1, 1-20.
Barto, A. G., Bradtke, S. J. & Singh, S. (1995). Learning to act using real-time dynamic programming. Artificial Intelligence, 72, 81-138.
Boyan, J. A. & Moore, A. W. (1995).
Generalization in reinforcement learning: Safely approximating the value function. In NIPS 7 (pp. 369-376). The MIT Press.
Burgard, W., Cremers, A. B., Fox, D., Haehnel, D., Lakemeyer, G., Schulz, D., Steiner, W. & Thrun, S. (1998). The interactive museum tour-guide robot. In Fifteenth National Conference on Artificial Intelligence.
Gullapalli, V. (1992). Reinforcement Learning and Its Application to Control. PhD thesis, University of Massachusetts. COINS Technical Report 92-10.
Hansen, E., Barto, A. & Zilberstein, S. (1997). Reinforcement learning for mixed open-loop and closed-loop control. In NIPS 9. The MIT Press.
Hauskrecht, M., Meuleau, N., Boutilier, C., Kaelbling, L. P. & Dean, T. (1998). Hierarchical solution of Markov decision processes using macro-actions. In Proceedings of the Fourteenth International Conference on Uncertainty in Artificial Intelligence.
Iba, G. A. (1989). A heuristic approach to the discovery of macro-operators. Machine Learning, 3.
Korf, R. E. (1985a). Learning to solve problems by searching for macro-operators. Research Notes in Artificial Intelligence, 5.
Korf, R. E. (1985b). Macro-operators: A weak method for learning. Artificial Intelligence, 26, 35-77.
McCallum, R. A. (1995). Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, University of Rochester.
McGovern, A. & Sutton, R. S. (1998). Macro-actions in reinforcement learning: An empirical analysis. Technical Report 98-70, University of Massachusetts.
McGovern, A., Sutton, R. S. & Fagg, A. H. (1997). Roles of macro-actions in accelerating reinforcement learning. In 1997 Grace Hopper Celebration of Women in Computing.
Precup, D. & Sutton, R. S. (1998). Multi-time models for temporally abstract planning. In NIPS 10. The MIT Press.
Randløv, J. & Alstrøm, P. (1998). Learning to drive a bicycle using reinforcement learning and shaping. In Proceedings of the 15th International Conference on Machine Learning.
Rummery, G. A. (1995).
Problem Solving with Reinforcement Learning. PhD thesis, Cambridge University Engineering Department.
Rummery, G. A. & Niranjan, M. (1994). On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Engineering Department, Cambridge University.
Singh, S. P. & Sutton, R. S. (1996). Reinforcement learning with replacing eligibility traces. Machine Learning, 22, 123-158.
Sutton, R. S. (1996). Generalization in reinforcement learning: Successful examples using sparse coarse coding. In NIPS 8 (pp. 1038-1044). The MIT Press.
Sutton, R. S. & Barto, A. G. (1998). Introduction to Reinforcement Learning. MIT Press/Bradford Books.
Sutton, R. S., Precup, D. & Singh, S. (1998). Between MDPs and semi-MDPs: Learning, planning, and representing knowledge at multiple temporal scales. Technical Report UM-CS-1998-074, Department of Computer Science, UMass.
Sutton, R. S., Singh, S., Precup, D. & Ravindran, B. (1999). Improved switching among temporally abstract actions. In NIPS 11. The MIT Press.
| 1998 | 87 |
1,589 |
Fast Neural Network Emulation of Dynamical Systems for Computer Animation

Radek Grzeszczuk¹ Demetri Terzopoulos² Geoffrey Hinton²

¹ Intel Corporation, Microcomputer Research Lab, 2200 Mission College Blvd., Santa Clara, CA 95052, USA
² University of Toronto, Department of Computer Science, 10 King's College Road, Toronto, ON M5S 3H5, Canada

Abstract

Computer animation through the numerical simulation of physics-based graphics models offers unsurpassed realism, but it can be computationally demanding. This paper demonstrates the possibility of replacing the numerical simulation of nontrivial dynamic models with a dramatically more efficient "NeuroAnimator" that exploits neural networks. NeuroAnimators are automatically trained off-line to emulate physical dynamics through the observation of physics-based models in action. Depending on the model, its neural network emulator can yield physically realistic animation one or two orders of magnitude faster than conventional numerical simulation. We demonstrate NeuroAnimators for a variety of physics-based models.

1 Introduction

Animation based on physical principles has been an influential trend in computer graphics for over a decade (see, e.g., [1, 2, 3]). This is not only due to the unsurpassed realism that physics-based techniques offer. In conjunction with suitable control and constraint mechanisms, physical models also facilitate the production of copious quantities of realistic animation in a highly automated fashion. Physics-based animation techniques are beginning to find their way into high-end commercial systems. However, a well-known drawback has retarded their broader penetration: compared to geometric models, physical models typically entail formidable numerical simulation costs. This paper proposes a new approach to creating physically realistic animation that differs radically from the conventional approach of numerically simulating the equations of motion of physics-based models.
We replace physics-based models by fast emulators which automatically learn to produce similar motions by observing the models in action. Our emulators have a neural network structure, hence we dub them NeuroAnimators. Our work is inspired in part by that of Nguyen and Widrow [4]. Their "truck backer-upper" demonstrated the neural network based approximation and control of a nonlinear kinematic system. We introduce several generalizations that enable us to tackle a variety of complex, fully dynamic models in the context of computer animation. Connectionist approximations of dynamical systems have also been applied to robot control (see, e.g., [5, 6]).

2 The NeuroAnimator Approach

Our approach is motivated by the following considerations: Whether we are dealing with rigid [2], articulated [3], or nonrigid [1] dynamic animation models, the numerical simulation of the associated equations of motion leads to the computation of a discrete-time dynamical system of the form s_{t+δt} = Φ[s_t, u_t, f_t]. These (generally nonlinear) equations express the vector s_{t+δt} of state variables of the system (values of the system's degrees of freedom and their velocities) at time t + δt in the future as a function Φ of the state vector s_t, the vector u_t of control inputs, and the vector f_t of external forces acting on the system at time t. Physics-based animation through the numerical simulation of a dynamical system requires the evaluation of the map Φ at every timestep, which usually involves a non-trivial computation. Evaluating Φ using explicit time-integration methods incurs a computational cost of O(N) operations per timestep, where N is proportional to the dimensionality of the state space. Unfortunately, for many dynamic models of interest, explicit methods are plagued by instability, necessitating numerous tiny timesteps δt per unit simulation time.
Alternatively, implicit time-integration methods usually permit larger timesteps, but they compute Φ by solving a system of N algebraic equations, generally incurring a cost of O(N³) per timestep. Is it possible to replace the conventional numerical simulator by a significantly cheaper alternative? A crucial realization is that the substitute, or emulator, need not compute the map Φ exactly, but merely approximate it to a degree of precision that preserves the perceived faithfulness of the resulting animation to the simulated dynamics of the physical model. Neural networks offer a general mechanism for approximating complex maps in higher dimensional spaces [7].¹ Our premise is that, to a sufficient degree of accuracy and at significant computational savings, trained neural networks can approximate maps Φ not just for simple dynamical systems, but also for those associated with dynamic models that are among the most complex reported in the graphics literature to date.

The NeuroAnimator, which uses neural networks to emulate physics-based animation, learns an approximation to the dynamic model by observing instances of state transitions, as well as control inputs and/or external forces that cause these transitions. By generalizing from the sparse examples presented to it, a trained NeuroAnimator can emulate an infinite variety of continuous animations that it has never actually seen. Each emulation step costs only O(N²) operations, but it is possible to gain additional efficiency relative to a numerical simulator by training neural networks to approximate a lengthy chain of evaluations of the discrete-time dynamical system. Thus, the emulator network can perform "super timesteps" Δt = nδt, typically one or two orders of magnitude larger than δt for the competing implicit time-integration scheme, thereby achieving outstanding efficiency without serious loss of accuracy.

¹ Note that Φ is in general a high-dimensional map from R^{s+u+f} to R^s, where s, u, and f denote the dimensionalities of the state, control, and external force vectors.

3 From Physics-Based Models to NeuroAnimators

Our task is to construct neural networks that approximate Φ in the dynamical system. We propose to employ backpropagation to train feedforward networks N_Φ, with a single layer of sigmoidal hidden units, to predict future states using super timesteps Δt = nδt while containing the approximation error so as not to appreciably degrade the physical realism of the resulting animation. The basic emulation step is s_{t+Δt} = N_Φ[s_t, u_t, f_t]. The trained emulator network N_Φ takes as input the state of the model, its control inputs, and the external forces acting on it at time t, and produces as output the state of the model at time t + Δt by evaluating the network. The emulation process is a sequence of these evaluations. After each evaluation, the network control and force inputs receive new values, and the network state inputs receive the emulator outputs from the previous evaluation. Since the emulation step is large compared with the numerical simulation step, we resample the motion trajectory at the animation frame rate, computing intermediate states through linear interpolation of states obtained from the emulation.

3.1 Network Input/Output Structure

Fig. 1(a) illustrates different emulator input/output structures. The emulator network has a single set of output variables specifying s_{t+Δt}. In general, for a so-called active model, which includes control inputs, under the influence of unpredictable applied forces, we employ a full network with three sets of input variables: s_t, u_t, and f_t, as shown in the figure. For passive models, the control u_t = 0 and the network simplifies to one with two sets of inputs, s_t and f_t.
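The emulation process just described, iterating the emulator at super timesteps and linearly interpolating states at the animation frame rate, can be sketched generically; `emulator` is any callable standing in for the trained network:

```python
# Generic sketch of the emulation loop: `emulator` plays the role of
# the trained network, mapping (state, control, force) at time t to the
# state one super timestep later; frames between emulator outputs are
# filled in by linear interpolation, as in the text.

def emulate(emulator, s0, controls, forces, frames_per_step):
    """Returns states resampled at the animation frame rate."""
    states = [s0]
    for u, f in zip(controls, forces):
        states.append(emulator(states[-1], u, f))   # one super timestep
    frames = []
    for a, b in zip(states, states[1:]):
        for k in range(frames_per_step):
            t = k / frames_per_step                 # interpolation weight
            frames.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    frames.append(states[-1])
    return frames
```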
In the special case when the forces f_t are completely determined by the state of the system s_t, we can suppress the f_t inputs, allowing the network to learn the effects of these forces from the state transition training data, thus yielding a simpler emulator with two input sets, s_t and u_t. The simplest type of emulator has only a single set of inputs s_t. This emulator suffices to approximate passive models acted upon by deterministic external forces.

3.2 Input and Output Transformations

The accurate approximation of complex functional mappings using neural networks can be challenging. We have observed that a simple feedforward neural network with a single layer of sigmoid units has difficulty producing an accurate approximation to the dynamics of physical models. In practice, we often must transform the emulator to ensure a good approximation of the map Φ. A fundamental problem is that the state variables of a dynamical system can have a large dynamic range (in principle, from -∞ to +∞). To approximate a nonlinear map Φ accurately over a large domain, we would need to use a neural network with many sigmoid units, each shifted and scaled so that their nonlinear segments cover different parts of the domain. The direct approximation of Φ is therefore impractical. A successful strategy is to train networks to emulate changes in state variables rather than their actual values, since state changes over small timesteps will have a significantly smaller dynamic range. Hence, in Fig. 1(b) (top) we restructure our simple network N_Φ as a network N′_Φ which is trained
to emulate the change in the state vector Δs_t for given state, external force, and control inputs, followed by an operator that computes s_{t+Δt} = s_t + Δs_t to recover the next state.

[Figure 1: (a) Different types of emulators. (b) Transforming a simple feedforward neural network N_Φ into a practical emulator network that is easily trained to emulate physics-based models. The operators T perform the appropriate pre- and post-processing: transforming inputs to local coordinates, normalizing inputs, unnormalizing outputs, transforming outputs to global coordinates, and converting from a state change to the next state (see text and [8] for the details).]

We can further improve the approximation power of the emulator network by exploiting natural invariances. In particular, since the map Φ is invariant under rotation and translation, we replace N′_Φ with an operator T that converts the inputs from the world coordinate system to the local coordinate system of the model, a network N″_Φ that is trained to emulate state changes represented in the local coordinate system, and an operator T that converts the output of N″_Φ back to world coordinates (Fig. 1(b) (center)). Since the values of state, force, and control variables can deviate significantly, their effect on the network outputs is uneven, causing problems when large inputs must have a small influence on outputs. To make inputs contribute more evenly to the network outputs, we normalize groups of variables so that they have zero means and unit variances. With normalization, we can furthermore expect the weights of the trained network to be of order unity and they can be given a simple random initialization prior to training. Hence, in Fig.
1(b) (bottom) we replace N″_Φ with an operator T that normalizes its inputs, a network N‴_Φ that assumes zero mean, unit variance inputs and outputs, and an operator T that unnormalizes the outputs to recover their original distributions. Although the final emulator in Fig. 1(b) is structurally more complex than the standard feedforward neural network N_Φ that it replaces, the operators denoted by T are completely determined by the state of the model and the distribution of the training data, and the emulator network N‴_Φ is much easier to train.

3.3 Hierarchical Networks

As a universal function approximator, a neural network should in principle be able to approximate the map Φ for any dynamical system, given enough sigmoid hidden units and training data. In practice, however, the number of hidden layer neurons needed and the training data requirements grow quickly with the size of the network, often making the training of large networks impractical. To overcome the "curse of dimensionality," we have found it prudent to structure NeuroAnimators for all but the simplest physics-based models as hierarchies of smaller networks rather than as large, monolithic networks. The strategy behind a hierarchical representation is to group state variables according to their dependencies and approximate each tightly coupled group with a subnet that takes part of its input from a parent network.

3.4 Training NeuroAnimators

To arrive at a NeuroAnimator for a given physics-based model, we train the constituent neural network(s) through backpropagation on training examples generated by simulating the model. Training requires the generation and processing of many examples, hence it is typically slow, often requiring several CPU hours. However, once a NeuroAnimator is trained offline, it can be reused online to produce an infinite variety of fast animations.
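The transform chain of Section 3.2 can be sketched in one dimension; all names are ours, and `core` stands in for the trained network that predicts a normalized state change in local coordinates. (In 1-D the conversion of the predicted change back to world coordinates is trivial; in general it would undo the rotation applied on the input side.)

```python
# Minimal 1-D sketch of the emulator transform chain of Fig. 1(b):
# world -> local, normalize, core network, unnormalize, change -> state.

def make_emulator(core, origin, mean_in, std_in, mean_out, std_out):
    def emulator(s):
        local = s - origin                 # world -> local coordinates
        z = (local - mean_in) / std_in     # normalize the input
        dz = core(z)                       # network output: normalized change
        d_local = dz * std_out + mean_out  # unnormalize the predicted change
        return s + d_local                 # change -> next state (world frame)
    return emulator
```

The point of the wrapping is that `core` only ever sees roughly zero-mean, unit-variance quantities, which is what makes it easy to train.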
The important point is that by generalizing from the sparse training examples, a trained NeuroAnimator will produce an infinite variety of extended, continuous animations that it has never "seen". More specifically, each training example consists of an input vector x and an output vector y. In the general case, the input vector x = [s_0^T, f_0^T, u_0^T]^T comprises the state of the model, the external forces, and the control inputs at time t = 0. The output vector y = s_{Δt} is the state of the model at time t = Δt, where Δt is the duration of the super timestep. To generate each training example, we could start the numerical simulator of the physics-based model with the initial conditions s_0, f_0, and u_0, and run the dynamic simulation for n numerical time steps δt such that Δt = nδt. In principle, we could generate an arbitrarily large set of training examples {x^τ, y^τ}, τ = 1, 2, ..., by repeating this process with different initial conditions. To learn a good neural network approximation N_Φ of the map Φ, we would like ideally to sample Φ as uniformly as possible over its domain, with randomly chosen initial conditions among all valid state, external force, and control combinations. However, we can make better use of computational resources by sampling those state, force, and control inputs that typically occur as a physics-based model is used in practice. We employ a neural network simulator called Xerion which was developed at the University of Toronto. We begin the off-line training process by initializing the weights of N‴_Φ to random values from a uniform distribution in the range [0, 1] (due to the normalization of inputs and outputs). Xerion automatically terminates the backpropagation learning algorithm when it can no longer reduce the network approximation error significantly. We use the conjugate gradient method to train networks of small and moderate size. For large networks, we use gradient descent with momentum.
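Generating one training example as described, running the numerical simulator for n small steps δt from sampled initial conditions to obtain one (input, output) pair per super timestep, can be sketched as follows; `simulate_step` is a placeholder for one δt step of the simulator:

```python
# Sketch of training-pair generation for the emulator: each example
# maps (s0, u0, f0) to the state n small simulator steps later, i.e.
# one super timestep Delta-t = n * delta-t.

def make_example(simulate_step, s0, u0, f0, n):
    s = s0
    for _ in range(n):                 # n numerical steps = one super timestep
        s = simulate_step(s, u0, f0)
    return (s0, u0, f0), s             # input pieces, target state
```

Repeating this with freshly sampled initial conditions yields the example set {x^τ, y^τ} of the text.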
We divide the training examples into mini-batches, each consisting of approximately 30 uncorrelated examples, and update the network weights after processing each mini-batch.

4 Results

We have successfully constructed and trained several NeuroAnimators to emulate a variety of physics-based models (Fig. 2). We used SD/FAST (a rigid body dynamics simulator marketed by Symbolic Dynamics, Inc.) to simulate the dynamics of the rigid-body and articulated models, and we employ the simulator developed in [10] to simulate the deformable-body dynamics of the dolphin.

[Figure 2: NeuroAnimators used in our experiments. (a) Emulator of a physics-based model of a planar multi-link pendulum suspended in gravity, subject to joint friction forces, external forces applied on the links, and controlled by independent motor torques at each of the three joints. (b) Emulator of a physics-based model of a truck implemented as a rigid body, subject to friction forces where the tires contact the ground, controlled by rear-wheel drive (forward and reverse) and steerable front wheels. (c) Emulator of a physics-based model of a lunar lander, implemented as a rigid body subject to gravitational forces and controlled by a main rocket thruster and three independent attitude jets. (d) Emulator of a biomechanical (mass-spring-damper) model of a dolphin capable of swimming in simulated water via the coordinated contraction of 6 independently controlled muscle actuators which deform its body, producing hydrodynamic propulsion forces.]

In our experiments we have not attempted to minimize the number of network weights required for successful training. We have also not tried to minimize the number of sigmoidal hidden units, but rather used enough units to obtain networks that generalize well while not overfitting the training data. We can always expect to be able to satisfy these guidelines in view of our ability to generate sufficient training data.
An important advantage of using neural networks to emulate dynamical systems is the speed at which they can be iterated to produce animation. Since the emulator for a dynamical system with a state vector of size N never uses more than O(N) hidden units, it can be evaluated using only O(N²) operations. By comparison, a single simulation timestep using an implicit time-integration scheme requires O(N³) operations. Moreover, a forward pass through the neural network is often equivalent to as many as 50 physical simulation steps, so the efficiency is even more dramatic, yielding performance improvements up to two orders of magnitude faster than the physical simulator. A NeuroAnimator that predicts 100 physical simulation steps offers a speedup of anywhere between 50 and 100 times depending on the type of physical model.

5 Control Learning

An additional benefit of the NeuroAnimator is that it enables a novel, highly efficient approach to the difficult problem of controlling physics-based models to synthesize motions that satisfy prescribed animation goals. The neural network approximation to the physical model is differentiable; hence, it can be used to discover the causal effects that control force inputs have on the actions of the models. Outstanding efficiency stems from exploiting the trained NeuroAnimator to compute partial derivatives of output states with respect to control inputs. The efficient computation of the approximate gradient enables the utilization of fast gradient-based optimization for controller synthesis. Nguyen and Widrow's [4] "truck backer-upper" demonstrated the neural network based approximation and control of a nonlinear kinematic system. Our technique offers a new controller synthesis algorithm that works well in dynamic environments with changing control objectives. See [8, 9] for the details.
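The controller-synthesis idea, following the gradient of an animation objective with respect to the control inputs through the emulator, can be sketched with a toy differentiable emulator. Finite differences stand in here for the backpropagated derivatives the paper actually uses; all names are ours:

```python
# Toy sketch of gradient-based control synthesis through an emulator:
# roll the emulator forward under controls u, measure how far the final
# state lands from a goal, and descend the gradient of that cost.
# (The paper obtains the gradient by backpropagation, not differencing.)

def optimize_controls(emulator, s0, u, goal, steps=200, lr=0.1, eps=1e-4):
    def cost(us):
        s = s0
        for ui in us:                      # forward pass through the emulator
            s = emulator(s, ui)
        return (s - goal) ** 2             # terminal-state objective
    u = list(u)
    for _ in range(steps):
        grad = []
        for i in range(len(u)):            # finite-difference gradient in u
            up = list(u)
            up[i] += eps
            grad.append((cost(up) - cost(u)) / eps)
        u = [ui - lr * g for ui, g in zip(u, grad)]
    return u, cost(u)
```

Because the emulator is orders of magnitude cheaper than the simulator, many such forward/gradient passes remain affordable.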
6 Conclusion

We have introduced an efficient alternative to the conventional approach of producing physically realistic animation through numerical simulation. Our approach involves the learning of neural network emulators of physics-based models by observing the dynamic state transitions produced by such models in action. The emulators approximate physical dynamics with dramatic efficiency, yet without serious loss of apparent fidelity. Our performance benchmarks indicate that the neural network emulators can yield physically realistic animation one or two orders of magnitude faster than conventional numerical simulation of the associated physics-based models. Our new control learning algorithm, which exploits fast emulation and the differentiability of the network approximation, is orders of magnitude faster than competing controller synthesis algorithms for computer animation.

Acknowledgements

We thank Zoubin Ghahramani for valuable discussions leading to the idea of the rotation and translation invariant emulator, which was crucial to the success of this work. We are indebted to Steve Hunt, John Funge, Alexander Reshetov, Sonja Jeter and Mike Gendimenico at Intel, and Mike Revow, Drew van Camp and Michiel van de Panne at the University of Toronto for their assistance.

References

[1] D. Terzopoulos, J. Platt, A. Barr, K. Fleischer. Elastically deformable models. In M.C. Stone, ed., Computer Graphics (SIGGRAPH '87 Proceedings), 21, 205-214, July 1987.
[2] J.K. Hahn. Realistic animation of rigid bodies. In J. Dill, ed., Computer Graphics (SIGGRAPH '88 Proceedings), 22, 299-308, August 1988.
[3] J.K. Hodgins, W.L. Wooten, D.C. Brogan, J.F. O'Brien. Animating human athletics. In R. Cook, ed., Proc. of ACM SIGGRAPH 95 Conf., 71-78, August 1995.
[4] D. Nguyen, B. Widrow. The truck backer-upper: An example of self-learning in neural networks. In Proc. Inter. Joint Conf. on Neural Networks, 357-363. IEEE Press, 1989.
[5] M. I. Jordan.
Supervised learning and systems with excess degrees of freedom. Technical Report 88-27, Univ. of Massachusetts, Comp. & Info. Sci., Amherst, MA, 1988.
[6] K. S. Narendra, K. Parthasarathy. Gradient methods for the optimization of dynamical systems containing neural networks. IEEE Trans. on Neural Networks, 2(2):252-262, 1991.
[7] G. Cybenko. Approximation by superpositions of a sigmoidal function. Math. of Control, Signals, and Systems, 2(4):303-314, 1989.
[8] R. Grzeszczuk. NeuroAnimator: Fast Neural Network Emulation and Control of Physics-Based Models. PhD thesis, Dept. of Comp. Sci., Univ. of Toronto, May 1998.
[9] R. Grzeszczuk, D. Terzopoulos, G. Hinton. NeuroAnimator: Fast neural network emulation and control of physics-based models. In M. Cohen, ed., Proc. of ACM SIGGRAPH 98 Conf., 9-20, July 1998.
[10] X. Tu, D. Terzopoulos. Artificial fishes: Physics, locomotion, perception, behavior. In A. Glassner, ed., Proc. of ACM SIGGRAPH 94 Conf., 43-50, July 1994.
| 1998 | 88 |
1,590 |
Neural Networks for Density Estimation

Malik Magdon-Ismail* (magdon@cco.caltech.edu) and Amir Atiya (amir@deep.caltech.edu)
Caltech Learning Systems Group, Department of Electrical Engineering, California Institute of Technology, 136-93, Pasadena, CA 91125

Abstract

We introduce two new techniques for density estimation. Our approach poses the problem as a supervised learning task which can be performed using Neural Networks. We introduce a stochastic method for learning the cumulative distribution and an analogous deterministic technique. We demonstrate convergence of our methods both theoretically and experimentally, and provide comparisons with the Parzen estimate. Our theoretical results demonstrate better convergence properties than the Parzen estimate.

1 Introduction and Background

A majority of problems in science and engineering have to be modeled in a probabilistic manner. Even if the underlying phenomena are inherently deterministic, the complexity of these phenomena often makes a probabilistic formulation the only feasible approach from the computational point of view. Although quantities such as the mean, the variance, and possibly higher order moments of a random variable have often been sufficient to characterize a particular problem, the quest for higher modeling accuracy, and for more realistic assumptions, drives us towards modeling the available random variables using their probability density. This of course leads us to the problem of density estimation (see [6]). The most common approach for density estimation is the nonparametric approach, where the density is determined according to a formula involving the data points available. The most common nonparametric methods are the kernel density estimator, also known as the Parzen window estimator [4], and the k-nearest neighbor technique [1].
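For reference, the Parzen window estimate that the paper compares against has the form ĝ(x) = (1/Nh) Σ_n K((x - x_n)/h) for a kernel K and smoothing parameter h; a minimal sketch with a Gaussian kernel (the kernel choice here is ours):

```python
import math

# Minimal Parzen (kernel) density estimate with a Gaussian kernel;
# h is the smoothing parameter whose sensitivity the text discusses.

def parzen(data, x, h):
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - xn) / h) for xn in data) / (len(data) * h)
```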
Nonparametric density estimation belongs to the class of ill-posed problems in the sense that small changes in the data can lead to large changes in the estimated density. Therefore it is important to have methods that are robust to slight changes in the data. For this reason some amount of regularization is needed [7]. This regularization is embedded in the choice of the smoothing parameter (kernel width or k). The problem with these nonparametric techniques is their extreme sensitivity to the choice of the smoothing parameter. A wrong choice can lead to either undersmoothing or oversmoothing.

*To whom correspondence should be addressed.

In spite of the importance of the density estimation problem, proposed methods using neural networks have been very sporadic. We propose two new methods for density estimation which can be implemented using multilayer networks. In addition to being able to approximate any function to any given precision, multilayer networks give us the flexibility to choose an error function to suit our application. The methods developed here are based on approximating the distribution function, in contrast to most previous works, which focus on approximating the density itself. Straightforward differentiation gives us the estimate of the density function. The distribution function is often useful in its own right: one can directly evaluate quantiles or the probability that the random variable occurs in a particular interval. One of the techniques is a stochastic algorithm (SLC), and the second is a deterministic technique based on learning the cumulative (SIC). The stochastic technique will generally be smoother on smaller numbers of data points; however, the deterministic technique is faster and applies to more than one dimension. We will present a result on the consistency and the convergence rate of the estimation error for our methods in the univariate case.
When the unknown density is bounded and has bounded derivatives up to order K, we find that the estimation error is O((loglog(N)/N)^(1-1/K)), where N is the number of data points. As a comparison, for the kernel density estimator (with non-negative kernels), the estimation error is O(N^(-4/5)), under the assumptions that the unknown density has a square integrable second derivative (see [6]), and that the optimal kernel width is used, which is not possible in practice because computing the optimal kernel width requires knowledge of the true density. One can see that for smooth density functions with bounded derivatives, our methods achieve an error rate that approaches O(N^(-1)). 2 New Density Estimation Techniques To illustrate our methods, we will use neural networks, but stress that any sufficiently general learning model will do just as well. The network's output will represent an estimate of the distribution function, and its derivative will be an estimate of the density. We will now proceed to a description of the two methods. 2.1 SLC (Stochastic Learning of the Cumulative) Let x_n ∈ R, n = 1, ..., N be the data points. Let the underlying density be g(x) and its distribution function G(x) = ∫_{-∞}^{x} g(t) dt. Let the neural network output be H(x, w), where w represents the set of weights of the network. Ideally, after training the neural network, we would like to have H(x, w) = G(x). It can easily be shown that the density of the random variable G(x) (x being generated according to g(x)) is uniform in [0,1]. Thus, if H(x, w) is to be as close as possible to G(x), then the network output should have a density that is close to uniform in [0,1]. This is what our goal will be. We will attempt to train the network such that its output density is uniform; then the network mapping should represent the distribution function G(x). The basic idea behind the proposed algorithm is to use the N data points as inputs to the network.
For every training cycle, we generate a different set of N network targets randomly from a uniform distribution in [0, 1], and adjust the weights to map the data points (sorted in ascending order) to these generated targets (also sorted in ascending order). Thus we are training the network to map the data to a uniform distribution. Before describing the steps of the algorithm, we note that the resulting network has to represent a monotonically nondecreasing mapping, otherwise it will not represent a legitimate distribution function. In our simulations, we used a hint penalty to enforce monotonicity [5]. The algorithm is as follows. 1. Let x_1 ≤ x_2 ≤ ... ≤ x_N be the data points. Set t = 1, where t is the training cycle number. Initialize the weights (usually randomly) to w(1). 2. Generate randomly from a uniform distribution in [0,1] N points (and sort them): u_1 ≤ u_2 ≤ ... ≤ u_N. The point u_n is the target output for x_n. 3. Adjust the network weights according to the backpropagation scheme: w(t + 1) = w(t) - η(t) ∂E/∂w, (1) where E is the objective function that includes the error term and the monotonicity hint penalty term [5]: E = Σ_{n=1}^{N} [H(x_n) - u_n]^2 + λ Σ_{k=1}^{N_h} θ(H(y_k) - H(y_k + Δ)) [H(y_k) - H(y_k + Δ)]^2, (2) where we have suppressed the w dependence. The second term is the monotonicity penalty term, λ is a positive weighting constant, Δ is a small positive number, θ(x) is the familiar unit step function, and the y_k's are any set of points where we wish to enforce the monotonicity. 4. Set t = t + 1, and go to step 2. Repeat until the error is small enough. Upon convergence, the density estimate is the derivative of H. Note that as presented, the randomly generated targets are different for every cycle, which will have a smoothing effect that will allow convergence to a truly uniform distribution.
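The SLC loop above can be sketched in miniature, substituting a tiny monotone model H(x) = sigmoid(ax + b) for the multilayer network and finite-difference gradient descent for backpropagation; the objective mirrors Eq. (2) with its monotonicity hint penalty, and all constants and names are illustrative.

```python
import math, random

def H(x, a, b):
    return 1.0 / (1.0 + math.exp(-(a * x + b)))

def slc_objective(params, xs, us, ys, lam=1.0, delta=0.1):
    a, b = params
    err = sum((H(x, a, b) - u) ** 2 for x, u in zip(xs, us))
    pen = 0.0
    for y in ys:                      # monotonicity hint penalty, as in Eq. (2)
        d = H(y, a, b) - H(y + delta, a, b)
        if d > 0:
            pen += d * d
    return err + lam * pen

random.seed(0)
xs = sorted(random.gauss(0, 1) for _ in range(100))   # sorted data points
us = sorted(random.random() for _ in range(100))      # sorted uniform targets
ys = [-2.0 + 0.5 * k for k in range(9)]               # hint points y_k
params = [0.1, 0.0]
before = slc_objective(params, xs, us, ys)
for _ in range(200):                                  # crude finite-difference descent
    base = slc_objective(params, xs, us, ys)
    grad = []
    for i in range(2):
        p = list(params)
        p[i] += 1e-5
        grad.append((slc_objective(p, xs, us, ys) - base) / 1e-5)
    params = [p - 0.05 * g for p, g in zip(params, grad)]
after = slc_objective(params, xs, us, ys)
```

Because this toy H is monotone whenever a > 0, the hint penalty rarely fires here; with a real multilayer network it is what keeps the learned cumulative legitimate.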
One other version, that we have implemented in our simulation studies, is to generate new targets after every fixed number L of cycles, rather than every cycle. This will generally improve the speed of convergence as there is more "continuity" in the learning process. Also note that it is preferable to choose the activation function for the output node to be in the range of 0 to 1, to ensure that the estimate of the distribution function is in this range. SLC is only applicable to estimating univariate densities, because, for the multivariate case, the nonlinear mapping y = G(x) will not necessarily result in a uniformly distributed output y. Fortunately, many, if not the majority of problems encountered in practice are univariate. This is because multivariate problems, with even a modest number of dimensions, need a huge amount of data to obtain statistically accurate results. The next method is applicable to the multivariate case as well. 2.2 SIC (Smooth Interpolation of the Cumulative) Again, we have a multilayer network, to which we input the point x, and the network outputs the estimate of the distribution function. Let g(x) be the true density function, and let G(x) be the corresponding distribution function. Let x = (x^1, ..., x^d)^T. The distribution function is given by G(x) = ∫_{-∞}^{x^1} ... ∫_{-∞}^{x^d} g(t) dt^1 ... dt^d, (3) and a straightforward estimate of G(x) could be the fraction of data points falling in the area of integration: Ĝ(x) = (1/N) Σ_{n=1}^{N} Θ(x - x_n), where Θ is defined as Θ(x) = 1 if x^i ≥ 0 for all i = 1, ..., d, and 0 otherwise. (4) The method we propose uses such an estimate for the target outputs of the neural network. The estimate given by (4) is discontinuous. The neural network method developed here provides a smooth, and hence more realistic, estimate of the distribution function. The density can be obtained by differentiating the output of the network with respect to its inputs.
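The target construction of Eq. (4) can be sketched directly: the target at a query point is the fraction of data points that it dominates in every coordinate. The data and query points below are illustrative.

```python
import numpy as np

def empirical_cdf(queries, data):
    """Eq. (4): fraction of data points <= each query in every coordinate."""
    dominated = (data[None, :, :] <= queries[:, None, :]).all(axis=2)
    return dominated.mean(axis=1)

data = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0]])
queries = np.array([[1.5, 1.5], [3.0, 3.0], [-1.0, -1.0]])
targets = empirical_cdf(queries, data)   # fractions in [0, 1], one per query
```

For the first query (1.5, 1.5), only (0, 0) and (1, 1) are dominated, giving a target of 0.5; the network is then trained to interpolate such targets smoothly.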
For the low-dimensional case, we can uniformly sample (4) using a grid, to obtain the examples for the network. Beyond two or three dimensions, this becomes computationally intensive. Alternatively, one could sample the input space randomly (using say a uniform distribution over the approximate range of the x_n's), and for every point determine the network target according to (4). Another option is to use the data points themselves as examples. The target for a point x_m would then be Ĝ(x_m) = (1/(N-1)) Σ_{n=1, n≠m}^{N} Θ(x_m - x_n). (5) We also use monotonicity as a hint to guide the training. Once training is performed, and H(x, w) approximates G(x), the density estimate can be obtained as ĝ(x) = ∂^d H(x, w) / (∂x^1 ... ∂x^d). (6) 3 Simulation Results Figure 1: Comparison of optimal Parzen windows, with neural network estimators. Plotted are the true density and the estimates (SLC, SIC, Parzen window with optimal kernel width [6, pg 40]). Notice that even the optimal Parzen window is bumpy as compared to the neural network. We tested our techniques for density estimation on data drawn from a mixture of two Gaussians: (7) Data points were randomly generated and the density estimates using SLC or SIC (for 100 and 200 data points) were compared to the Parzen technique. Learning was performed with a standard 1 hidden layer neural network with 3 hidden units. The hidden unit activation function used was tanh and the output unit was an erf function. A set of typical density estimates are shown in figure 1. 4 Convergence of the Density Estimation Techniques Figure 2: Convergence of the density estimation error for SIC. A five hidden unit two layer neural network was used to perform the mapping x_i → i/(N + 1), trained according to SIC. For various N, the resulting density estimation error was computed for over 100 runs.
Plotted are the results on a log-log scale. For comparison, also shown is the best 1/N fit. Using techniques from stochastic approximation theory, it can be shown that SLC converges to a similar solution to SIC [3], so we focus our attention on the convergence of SIC. Figure 2 shows an empirical study of the convergence behavior. The optimal linear fit between log(E) and log(N) has a slope of -0.97. This indicates that the convergence rate is about 1/N. The theoretically derived convergence rate is loglog(N)/N, as we will shortly discuss. To analyze SIC, we introduce so-called approximate generalized distribution functions. We will assume that the true distribution function has bounded derivatives. Therefore the cumulative will be "approximately" implementable by generalized distributions with bounded derivatives (in the asymptotic limit, with probability 1). We will then obtain the convergence to the true density. Let G be the space of distribution functions on the real line that possess continuous densities, i.e., X ∈ G if X : R → [0,1]; X'(t) exists everywhere, is continuous and X'(t) ≥ 0; X(-∞) = 0 and X(∞) = 1. This is the class of functions that we will be interested in. We define a metric with respect to G as follows: ||f||_X^2 = ∫_{-∞}^{∞} f(t)^2 X'(t) dt. (8) ||f||_X^2 is the expectation of the squared value of f with respect to the distribution X ∈ G. Let us name this the L2 X-norm of f. Let the data set (D) be {x_1 ≤ x_2 ≤ ... ≤ x_N}, and corresponding to each x_i, let the target be y_i = i/(N + 1). We will assume that the true distribution function has bounded derivatives up to order K. We define the set of approximate sample distribution functions H_D^ν as follows. Definition 4.1 Fix ν > 0. A ν-approximate sample distribution function, H, satisfies the following two conditions. We will denote the set of all ν-approximate sample distribution functions for a data set, D, and a given ν by H_D^ν. Let A_i = sup_x |G^(i)|, i = 1, ..., K, where we use the notation f^(i) to denote the ith derivative. Define B_i^ν(D) by B_i^ν(D) = inf_{Q ∈ H_D^ν} sup_x |Q^(i)|, (9) for fixed ν > 0. Note that by definition, for all ε > 0, ∃ H ∈ H_D^ν such that sup_x |H^(i)(x)| ≤ B_i^ν + ε. B_i^ν(D) is the lowest possible bound on the ith derivative for the ν-approximate sample distribution functions given a particular data set. In a sense, the "smoothest" approximating sample distribution function with respect to the ith derivative has an ith derivative bounded by B_i^ν(D). One expects that B_i^ν ≤ A_i, at least in the limit N → ∞. In the next theorem, we present the main theoretical result of the paper, namely a bound on the estimation error for the density estimator obtained by using the approximate sample distribution functions. It is embedded in a large amount of technical machinery, but its essential content is that if the true distribution function has bounded derivatives to order K, then, picking the approximate distribution function obeying certain bounds, we obtain a convergence rate for the estimation error of O((loglog(N)/N)^(1-1/K)). Theorem 4.2 (L2 convergence to the true density) Let N data points, x_i, be drawn i.i.d. from the distribution G ∈ G. Let sup_x |G^(i)| = A_i for i = 0, ..., K, where K ≥ 2. Fix ν > 2 and ε > 0. Let B_K^ν(D) = inf_{Q ∈ H_D^ν} sup_x |Q^(K)|. Let H ∈ H_D^ν be a ν-approximate distribution function with B_K = sup_x |H^(K)| ≤ B_K^ν + ε (by the definition of B_K^ν, such a ν-approximate sample distribution function must exist). Then, for any F ∈ G, the inequality ||H' - G'||_F^2 ≤ 2^{2(K-1)} (2A_K + ε)^{2/K} F(N), (10) where F(N) = [(1 + ν) (loglog(N)/N)^{1/2} + 1/(N + 1)]^{2(1-1/K)}, (11) holds with probability 1, as N → ∞. We present the proof elsewhere [3]. Note 1: The theorem applies uniformly to any interpolator H ∈ H_D^ν. In particular, a large enough neural network will be one such monotonic interpolator, provided that the network can be trained to small enough error.
This is possible by the universal approximation results for multilayer networks [2]. Note 2: This theorem holds for any ε > 0 and ν > 1. For smooth density functions, with bounded higher derivatives, the convergence rate approaches O(loglog(N)/N), which is faster convergence than the kernel density estimator (for which the optimal rate is O(N^(-4/5))). Note 3: No smoothing parameter needs to be determined. Note 4: One should try to find an approximate distribution function with the smallest possible derivatives. Specifically, of all the sample distribution functions, pick the one that "minimizes" B_K, the bound on the Kth derivative. This could be done by introducing penalty terms, penalizing the magnitudes of the derivatives (for example Tikhonov type regularizers [7]). 5 Comments We developed two techniques for density estimation based on the idea of learning the cumulative by mapping the data points to a uniform density. Two techniques were presented, a stochastic technique (SLC), which is expected to inherit the characteristics of most stochastic iterative algorithms, and a deterministic technique (SIC). SLC tends to be slow in practice; however, because each set of targets is drawn from the uniform distribution, this is anticipated to have a smoothing/regularizing effect - this can be seen by comparing SLC and SIC in figure 1(a). We presented experimental comparison of our techniques with the Parzen technique. We presented a theoretical result that demonstrated the consistency of our techniques as well as giving a convergence rate of O(loglog(N)/N), which is better than the optimal Parzen technique. No smoothing parameter needs to be chosen; smoothing occurs naturally by picking the interpolator with the lowest bound for a certain derivative. For our methods, the majority of time is spent in the learning phase, but once learning is done, evaluating the density is fast.
6 Acknowledgments We would like to acknowledge Yaser Abu-Mostafa and the Caltech Learning Systems Group for their useful input. References [1] K. Fukunaga and L. D. Hostetler. Optimization of k-nearest neighbor density estimates. IEEE Transactions on Information Theory, 19(3):320-326, 1973. [2] K. Hornik, M. Stinchcombe, and H. White. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks, 3:551-560, 1990. [3] M. Magdon-Ismail and A. Atiya. Consistent density estimation from the sample distribution function. Manuscript in preparation for submission, 1998. [4] E. Parzen. On the estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1065-1076, 1962. [5] J. Sill and Y. S. Abu-Mostafa. Monotonicity hints. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems (NIPS), volume 9, pages 634-640. Morgan Kaufmann, 1997. [6] B. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, London, UK, 1993. [7] A. N. Tikhonov and V. I. Arsenin. Solutions of Ill-Posed Problems. Scripta Series in Mathematics. Distributed solely by Halsted Press, Winston; New York, 1977. Translation editor: Fritz John.
|
1998
|
89
|
1,591
|
Learning Nonlinear Dynamical Systems using an EM Algorithm Zoubin Ghahramani and Sam T. Roweis Gatsby Computational Neuroscience Unit University College London London WC1N 3AR, U.K. http://www.gatsby.ucl.ac.uk/ Abstract The Expectation-Maximization (EM) algorithm is an iterative procedure for maximum likelihood parameter estimation from data sets with missing or hidden variables [2]. It has been applied to system identification in linear stochastic state-space models, where the state variables are hidden from the observer and both the state and the parameters of the model have to be estimated simultaneously [9]. We present a generalization of the EM algorithm for parameter estimation in nonlinear dynamical systems. The "expectation" step makes use of Extended Kalman Smoothing to estimate the state, while the "maximization" step re-estimates the parameters using these uncertain state estimates. In general, the nonlinear maximization step is difficult because it requires integrating out the uncertainty in the states. However, if Gaussian radial basis function (RBF) approximators are used to model the nonlinearities, the integrals become tractable and the maximization step can be solved via systems of linear equations. 1 Stochastic Nonlinear Dynamical Systems We examine inference and learning in discrete-time dynamical systems with hidden state x_t, inputs u_t, and outputs y_t.¹ The state evolves according to stationary nonlinear dynamics driven by the inputs and by additive noise: x_{t+1} = f(x_t, u_t) + w, (1) where w is zero-mean Gaussian noise with covariance Q.² The outputs are nonlinearly related to the states and inputs by y_t = g(x_t, u_t) + v, (2) where v is zero-mean Gaussian noise with covariance R. The vector-valued nonlinearities f and g are assumed to be differentiable, but otherwise arbitrary. ¹All lowercase characters (except indices) denote vectors. Matrices are represented by uppercase characters.
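As a concrete illustration of Eqs. (1) and (2), the generative model can be simulated in one dimension, borrowing the tanh dynamics used in the paper's experiments; the coefficients, noise variances, and the linear choice of g here are illustrative.

```python
import math, random

def simulate(T, q=0.1, r=0.1, seed=0):
    """Simulate T steps of a 1-D nonlinear state-space model."""
    rng = random.Random(seed)
    x = 0.0
    us, xs, ys = [], [], []
    for _ in range(T):
        u = rng.gauss(0, 1)                                      # white-noise input u_t
        x = math.tanh(2.0 * x + u) + rng.gauss(0, math.sqrt(q))  # Eq. (1): x_{t+1} = f(x_t, u_t) + w
        y = x + rng.gauss(0, math.sqrt(r))                       # Eq. (2) with a linear g
        us.append(u)
        xs.append(x)
        ys.append(y)
    return us, xs, ys

us, xs, ys = simulate(200)
```

Only (u_t, y_t) pairs would be visible to the learning algorithm; the sequence xs is hidden and is exactly what the E-step must infer.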
Models of this kind have been examined for decades in various communities. Most notably, nonlinear state-space models form one of the cornerstones of modern systems and control engineering. In this paper, we examine these models within the framework of probabilistic graphical models and derive a novel learning algorithm for them based on EM. With one exception,³ this is to the best of our knowledge the first paper addressing learning of stochastic nonlinear dynamical systems of the kind we have described within the framework of the EM algorithm. The classical approach to system identification treats the parameters as hidden variables, and applies the Extended Kalman Filtering algorithm (described in section 2) to the nonlinear system with the state vector augmented by the parameters [5].⁴ This approach is inherently on-line, which may be important in certain applications. Furthermore, it provides an estimate of the covariance of the parameters at each time step. In contrast, the EM algorithm we present is a batch algorithm and does not attempt to estimate the covariance of the parameters. There are three important advantages the EM algorithm has over the classical approach. First, the EM algorithm provides a straightforward and principled method for handling missing inputs or outputs. Second, EM generalizes readily to more complex models with combinations of discrete and real-valued hidden variables. For example, one can formulate EM for a mixture of nonlinear dynamical systems. Third, whereas it is often very difficult to prove or analyze stability within the classical on-line approach, the EM algorithm is always attempting to maximize the likelihood, which acts as a Lyapunov function for stable learning. In the next sections we will describe the basic components of the learning algorithm. For the expectation step of the algorithm, we infer the conditional distribution of the hidden states using Extended Kalman Smoothing (section 2).
For the maximization step we first discuss the general case (section 3) and then describe the particular case where the nonlinearities are represented using Gaussian radial basis function (RBF; [6]) networks (section 4). 2 Extended Kalman Smoothing Given a system described by equations (1) and (2), we need to infer the hidden states from a history of observed inputs and outputs. The quantity at the heart of this inference problem is the conditional density P(x_t | u_1, ..., u_T, y_1, ..., y_T), for 1 ≤ t ≤ T, which captures the fact that the system is stochastic and therefore our inferences about x will be uncertain. ²The Gaussian noise assumption is less restrictive for nonlinear systems than for linear systems since the nonlinearity can be used to generate non-Gaussian state noise. ³The authors have just become aware that Briegel and Tresp (this volume) have applied EM to essentially the same model. Briegel and Tresp's method uses multilayer perceptrons (MLP) to approximate the nonlinearities, and requires sampling from the hidden states to fit the MLP. We use Gaussian radial basis functions (RBFs) to model the nonlinearities, which can be fit analytically without sampling (see section 4). ⁴It is important not to confuse this use of the Extended Kalman algorithm, to simultaneously estimate parameters and hidden states, with our use of EKS, to estimate just the hidden state as part of the E step of EM. For linear dynamical systems with Gaussian state evolution and observation noises, this conditional density is Gaussian and the recursive algorithm for computing its mean and covariance is known as Kalman smoothing [4, 8].
Kalman smoothing is directly analogous to the forward-backward algorithm for computing the conditional hidden state distribution in a hidden Markov model, and is also a special case of the belief propagation algorithm.⁵ For nonlinear systems this conditional density is in general non-Gaussian and can in fact be quite complex. Multiple approaches exist for inferring the hidden state distribution of such nonlinear systems, including sampling methods [7] and variational approximations [3]. We focus instead in this paper on a classic approach from engineering, Extended Kalman Smoothing (EKS). Extended Kalman Smoothing simply applies Kalman smoothing to a local linearization of the nonlinear system. At every point x̂ in x-space, the derivatives of the vector-valued functions f and g define the matrices A_x̂ ≡ ∂f/∂x |_{x=x̂} and C_x̂ ≡ ∂g/∂x |_{x=x̂}, respectively. The dynamics are linearized about x̂_t, the mean of the Kalman filter state estimate at time t: f(x_t, u_t) ≈ f(x̂_t, u_t) + A_{x̂_t} (x_t - x̂_t). (3) The output equation (2) can be similarly linearized. If the prior distribution of the hidden state at t = 1 was Gaussian, then, in this linearized system, the conditional distribution of the hidden state at any time t given the history of inputs and outputs will also be Gaussian. Thus, Kalman smoothing can be used on the linearized system to infer this conditional distribution (see figure 1, left panel). 3 Learning The M step of the EM algorithm re-estimates the parameters given the observed inputs, outputs, and the conditional distributions over the hidden states. For the model we have described, the parameters define the nonlinearities f and g, and the noise covariances Q and R. Two complications arise in the M step. First, it may not be computationally feasible to fully re-estimate f and g. For example, if they are represented by neural network regressors, a single full M step would be a lengthy training procedure using backpropagation, conjugate gradients, or some other optimization method.
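Returning to the E-step for a moment, the linearization idea can be illustrated with a single Extended Kalman Filter step (the forward half of the smoother) for a scalar tanh system: the Jacobian A is evaluated at the current estimate, then the standard predict/update equations are applied. The system, noise values, and linear observation model are illustrative.

```python
import math

def ekf_step(x_hat, P, u, y, Q=0.1, R=0.1):
    """One EKF predict/update step for x' = tanh(2x + u) + w, y = x + v."""
    f = lambda x: math.tanh(2.0 * x + u)                 # dynamics f(x, u)
    A = (f(x_hat + 1e-6) - f(x_hat - 1e-6)) / 2e-6       # Jacobian df/dx at x_hat
    x_pred = f(x_hat)                                    # predict mean
    P_pred = A * P * A + Q                               # predict variance
    K = P_pred / (P_pred + R)                            # Kalman gain (C = 1)
    x_new = x_pred + K * (y - x_pred)                    # update with observation y
    P_new = (1 - K) * P_pred
    return x_new, P_new

x_hat, P = ekf_step(0.0, 1.0, u=0.5, y=0.4)
```

The full smoother adds a backward pass that also conditions each x_t on future observations, producing the Gaussian posteriors over (x_t, x_{t+1}) used by the M-step.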
Alternatively, one could use partial M steps, for example, each consisting of one or a few gradient steps. The second complication is that f and g have to be trained using the uncertain state estimates output by the EKS algorithm. Consider fitting f, which takes as inputs x_t and u_t and outputs x_{t+1}. For each t, the conditional density estimated by EKS is a full-covariance Gaussian in (x_t, x_{t+1})-space. So f has to be fit not to a set of data points but instead to a mixture of full-covariance Gaussians in input-output space (Gaussian "clouds" of data). Integrating over this type of noise is non-trivial for almost any form of f. One simple but inefficient approach to bypass this problem is to draw a large sample from these Gaussian clouds of uncertain data and then fit f to these samples in the usual way. A similar situation occurs with g. In the next section we show how, by choosing Gaussian radial basis functions to model f and g, both of these complications vanish. ⁵The forward part of the Kalman smoother is the Kalman filter. 4 Fitting Radial Basis Functions to Gaussian Clouds We will present a general formulation of an RBF network from which it should be clear how to fit special forms for f and g. Consider the following nonlinear mapping from input vectors x and u to an output vector z: z = Σ_{i=1}^{I} h_i ρ_i(x) + Ax + Bu + b + w, (4) where w is a zero-mean Gaussian noise variable with covariance Q. For example, one form of f can be represented using (4) with the substitutions x ← x_t, u ← u_t, and z ← x_{t+1}; another with x ← (x_t, u_t), u ← 0, and z ← x_{t+1}. The parameters are: the coefficients of the I RBFs, h_i; the matrices A and B multiplying inputs x and u, respectively; and an output bias vector b. Each RBF is assumed to be a Gaussian in x-space, with center c_i and width given by the covariance matrix S_i: ρ_i(x) = |2πS_i|^{-1/2} exp{-½ (x - c_i)^T S_i^{-1} (x - c_i)}. (5) The goal is to fit this model to data (u, x, z).
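A minimal forward pass of the RBF model of Eqs. (4) and (5), specialised to one dimension so each width S_i is a scalar; the centres, widths, and coefficients below are illustrative.

```python
import numpy as np

def rbf_forward(x, u, h, c, S, A, B, b):
    """Noise-free mean of Eq. (4): z = sum_i h_i rho_i(x) + A x + B u + b."""
    d = x[:, None] - c[None, :]                                  # (M points, I centres)
    rho = np.exp(-0.5 * d ** 2 / S) / np.sqrt(2.0 * np.pi * S)   # Eq. (5) in 1-D
    return rho @ h + A * x + B * u + b

x = np.linspace(-2.0, 2.0, 5)
u = np.zeros(5)
c = np.array([-1.0, 0.0, 1.0])        # centres c_i
h = np.array([0.5, -0.2, 0.5])        # coefficients h_i
z = rbf_forward(x, u, h, c, S=0.25, A=0.1, B=0.0, b=0.0)
```

With symmetric centres and coefficients, the RBF part of z is an even function of x and the A term contributes the only asymmetry, which makes the model easy to inspect.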
The complication is that the data set comes in the form of a mixture of Gaussian distributions. Here we show how to analytically integrate over this mixture distribution to fit the RBF model. Assume the data set is: P(x, z, u) = (1/J) Σ_j N_j(x, z) δ(u - u_j). (6) That is, we observe samples from the u variables, each paired with a Gaussian "cloud" of data, N_j, over (x, z). The Gaussian N_j has mean μ_j and covariance matrix C_j. Let ẑ_θ(x, u) = Σ_{i=1}^{I} h_i ρ_i(x) + Ax + Bu + b, where θ is the set of parameters θ = {h_1, ..., h_I, A, B, b}. The log likelihood of a single data point under the model is: -½ [z - ẑ_θ(x, u)]^T Q^{-1} [z - ẑ_θ(x, u)] - ½ ln|Q| + const. The maximum likelihood RBF fit to the mixture of Gaussian data is obtained by minimizing the following integrated quadratic form: min_{θ,Q} { Σ_j ∫_x ∫_z N_j(x, z) [z - ẑ_θ(x, u_j)]^T Q^{-1} [z - ẑ_θ(x, u_j)] dx dz + J ln|Q| }. (7) We rewrite this in a slightly different notation, using angled brackets ⟨·⟩_j to denote expectation over N_j, and defining θ ≡ [h_1 h_2 ... h_I A B b] and Φ ≡ [ρ_1(x) ρ_2(x) ... ρ_I(x) x^T u^T 1]^T. Then, the objective can be written min_{θ,Q} { Σ_j ⟨(z - θΦ)^T Q^{-1} (z - θΦ)⟩_j + J ln|Q| }. (8) Taking derivatives with respect to θ, premultiplying by -Q^{-1}, and setting to zero gives the linear equations Σ_j ⟨(z - θΦ)Φ^T⟩_j = 0, which we can solve for θ and Q: θ = (Σ_j ⟨zΦ^T⟩_j)(Σ_j ⟨ΦΦ^T⟩_j)^{-1}, Q = (1/J) Σ_j ⟨(z - θΦ)(z - θΦ)^T⟩_j. (9) In other words, given the expectations in the angled brackets, the optimal parameters can be solved for via a set of linear equations. In appendix A we show that these expectations can be computed analytically. The derivation is somewhat laborious, but the intuition is very simple: the Gaussian RBFs multiply with the Gaussian densities N_j to form new unnormalized Gaussians in (x, z)-space. Expectations under these new Gaussians are easy to compute. This fitting algorithm is illustrated in the right panel of figure 1.
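The linear solve above can be sketched in a stripped-down case: a purely linear model z = θΦ with Φ = [x, 1]^T and no RBF terms, for which the required cloud expectations are just first and second moments of each Gaussian N_j. The cloud parameters below are illustrative.

```python
import numpy as np

# Gaussian clouds N_j over (x, z): means mu_j = (mx, mz) and covariances C_j
mus = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
Cs = [np.diag([0.1, 0.1]) for _ in range(3)]

# accumulate sum_j <z Phi^T>_j and sum_j <Phi Phi^T>_j with Phi = [x, 1]^T
ZPhi = np.zeros((1, 2))
PhiPhi = np.zeros((2, 2))
for (mx, mz), C in zip(mus, Cs):
    ZPhi += np.array([[mz * mx + C[1, 0], mz]])           # <z x>, <z>
    PhiPhi += np.array([[mx * mx + C[0, 0], mx],          # <x^2>, <x>
                        [mx, 1.0]])                       # <x>,   <1>
theta = ZPhi @ np.linalg.inv(PhiPhi)   # solves sum_j <(z - theta Phi) Phi^T>_j = 0
```

Because the clouds carry variance in x, the fitted slope is slightly attenuated relative to the line through the cloud centres, which is exactly the effect of integrating over the state uncertainty rather than fitting point estimates.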
Figure 1: Illustrations of the E and M steps of the algorithm. The left panel shows the information used in Extended Kalman Smoothing (EKS), which infers the hidden state distribution during the E-step. The right panel illustrates the regression technique employed during the M-step. A fit to a mixture of Gaussian densities is required; if Gaussian RBF networks are used then this fit can be solved analytically. The dashed line shows a regular RBF fit to the centres of the four Gaussian densities while the solid line shows the analytic RBF fit using the covariance information. The dotted lines below show the support of the RBF kernels. 5 Results We tested how well our algorithm could learn the dynamics of a nonlinear system by observing only its inputs and outputs. The system consisted of a single input, state and output variable at each time, where the relation of the state from one time step to the next was given by a tanh nonlinearity. Sample outputs of this system in response to white noise are shown in figure 2 (left panel). We initialized the nonlinear model with a linear dynamical model trained with EM, which in turn we initialized with a variant of factor analysis. The model was given 11 RBFs in x_t-space, which were uniformly spaced within a range which was automatically determined from the density of points in x_t-space. After the initialization was over, the algorithm discovered the sigmoid nonlinearity in the dynamics within less than 10 iterations of EM (figure 2, middle and right panels). Further experiments need to be done to determine how practical this method will be in real domains. Figure 2: (left): Data set used for training (first half) and testing (rest), which consists of a time series of inputs, u_t (a), and outputs y_t (b).
(middle): Representative plots of log likelihood vs iterations of EM for linear dynamical systems (dashed line) and nonlinear dynamical systems trained as described in this paper (solid line). Note that the actual likelihood for nonlinear dynamical systems cannot generally be computed analytically; what is shown here is the approximate likelihood computed by EKS. The kink in the solid curve comes when initialization with linear dynamics ends and the nonlinearity starts to be learned. (right): Means of (x_t, x_{t+1}) Gaussian posteriors computed by EKS (dots), along with the sigmoid nonlinearity (dashed line) and the RBF nonlinearity learned by the algorithm. At no point does the algorithm actually observe (x_t, x_{t+1}) pairs; these are inferred from inputs, outputs, and the current model parameters. 6 Discussion This paper brings together two classic algorithms, one from statistics and another from systems engineering, to address the learning of stochastic nonlinear dynamical systems. We have shown that by pairing the Extended Kalman Smoothing algorithm for state estimation in the E-step with a radial basis function learning model that permits analytic solution of the M-step, the EM algorithm is capable of learning a nonlinear dynamical model from data. As a side effect we have derived an algorithm for training a radial basis function network to fit data in the form of a mixture of Gaussians. Our initial approach has three potential limitations. First, the M-step presented does not modify the centres or widths of the RBF kernels. It is possible to compute the expectations required to change the centres and widths, but it requires resorting to a partial M-step. For low dimensional state spaces, filling the space with pre-fixed kernels is feasible, but this strategy needs exponentially many RBFs in high dimensions. Second, EM training can be slow, especially if initialized poorly.
Understanding how different hidden variable models are related can help devise sensible initialization heuristics. For example, for this model we used a nested initialization which first learned a simple linear dynamical system, which in turn was initialized with a variant of factor analysis. Third, the method presented here learns from batches of data and assumes stationary dynamics. We have recently extended it to handle online learning of nonstationary dynamics. The belief network literature has recently been dominated by two methods for approximate inference, Markov chain Monte Carlo [7] and variational approximations [3]. To our knowledge this paper is the first instance where extended Kalman smoothing has been used to perform approximate inference in the E step of EM. While EKS does not have the theoretical guarantees of variational methods, its simplicity has gained it wide acceptance in the estimation and control literatures as a method for doing inference in nonlinear dynamical systems. We are now exploring generalizations of this method to learning nonlinear multilayer belief networks. Acknowledgements ZG would like to acknowledge the support of the CITO (Ontario) and the Gatsby Charitable Fund. STR was supported in part by the NSF Center for Neuromorphic Systems Engineering and by an NSERC of Canada 1967 Award. A Expectations Required to Fit the RBFs The expectations we need to compute for equation 9 are ⟨x⟩_j, ⟨z⟩_j, ⟨xx^T⟩_j, ⟨xz^T⟩_j, ⟨zz^T⟩_j, ⟨ρ_i(x)⟩_j, ⟨x ρ_i(x)⟩_j, ⟨z ρ_i(x)⟩_j, and ⟨ρ_i(x) ρ_l(x)⟩_j.
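The first nontrivial expectation in this list, ⟨ρ_i(x)⟩_j, can be sanity-checked in one dimension, where the integral of the product of the two Gaussians collapses to a single Gaussian evaluated at the difference of the means with summed variances; the parameters below are illustrative.

```python
import math

def gauss(x, m, v):
    """Normalised 1-D Gaussian density with mean m and variance v."""
    return math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2.0 * math.pi * v)

c_i, S_i = 0.5, 0.2      # RBF centre and width
mu_j, C_j = 1.0, 0.3     # cloud mean and variance

beta_analytic = gauss(c_i, mu_j, S_i + C_j)    # closed-form <rho_i(x)>_j

# brute-force midpoint integration of rho_i(x) * N_j(x) for comparison
lo, hi, n = -10.0, 10.0, 200000
dx = (hi - lo) / n
beta_numeric = sum(gauss(lo + (k + 0.5) * dx, c_i, S_i)
                   * gauss(lo + (k + 0.5) * dx, mu_j, C_j)
                   for k in range(n)) * dx
```

The closed form is what makes the M-step a linear solve: every expectation reduces to evaluating such Gaussian constants rather than doing numerical integration.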
Starting with some of the easier ones that do not depend on the RBF kernel ρ: ⟨x⟩_j = μ_j^x, ⟨z⟩_j = μ_j^z, ⟨xx^T⟩_j = μ_j^x μ_j^{x,T} + C_j^{xx}, ⟨xz^T⟩_j = μ_j^x μ_j^{z,T} + C_j^{xz}, ⟨zz^T⟩_j = μ_j^z μ_j^{z,T} + C_j^{zz}. Observe that when we multiply the Gaussian RBF kernel ρ_i(x) (equation 5) and N_j we get a Gaussian density over (x, z) with mean and covariance μ_ij = C_ij (C_j^{-1} μ_j + [S_i^{-1} c_i; 0]), C_ij = (C_j^{-1} + [S_i^{-1} 0; 0 0])^{-1}, and an extra constant (due to lack of normalization), β_ij = (2π)^{-d_x/2} |S_i|^{-1/2} |C_j|^{-1/2} |C_ij|^{1/2} exp{-Δ_ij/2}, where Δ_ij = c_i^T S_i^{-1} c_i + μ_j^T C_j^{-1} μ_j - μ_ij^T C_ij^{-1} μ_ij. Using β_ij and μ_ij, we can evaluate the other expectations: ⟨ρ_i(x)⟩_j = β_ij, ⟨x ρ_i(x)⟩_j = β_ij μ_ij^x, and ⟨z ρ_i(x)⟩_j = β_ij μ_ij^z. Finally, ⟨ρ_i(x) ρ_l(x)⟩_j = (2π)^{-d_x} |C_j|^{-1/2} |S_i|^{-1/2} |S_l|^{-1/2} |C_ilj|^{1/2} exp{-γ_ilj/2}, where C_ilj = (C_j^{-1} + [S_i^{-1} + S_l^{-1} 0; 0 0])^{-1}, μ_ilj = C_ilj (C_j^{-1} μ_j + [S_i^{-1} c_i + S_l^{-1} c_l; 0]), and γ_ilj = c_i^T S_i^{-1} c_i + c_l^T S_l^{-1} c_l + μ_j^T C_j^{-1} μ_j - μ_ilj^T C_ilj^{-1} μ_ilj. References [1] T. Briegel and V. Tresp. Fisher scoring and a mixture of modes approach for approximate inference and learning in nonlinear state space models. In this volume. MIT Press, 1999. [2] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society Series B, 39:1-38, 1977. [3] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods in graphical models. Machine Learning, 1999. [4] R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction. Journal of Basic Engineering (ASME), 83D:95-108, 1961. [5] L. Ljung and T. Soderstrom. Theory and Practice of Recursive Identification. MIT Press, Cambridge, MA, 1983. [6] J. Moody and C. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1(2):281-294, 1989. [7] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, 1993. [8] H. E. Rauch. Solutions to the linear smoothing problem.
IEEE Transactions on Automatic Control, 8:371-372, 1963.
[9] R. H. Shumway and D. S. Stoffer. An approach to time series smoothing and forecasting using the EM algorithm. J. Time Series Analysis, 3(4):253-264, 1982.
Improved Switching among Temporally Abstract Actions Richard S. Sutton Satinder Singh AT&T Labs Florham Park, NJ 07932 { sutton,baveja}@research.att.com Doina Precup Balaraman Ravindran University of Massachusetts Amherst, MA 01003-4610 { dprecup,ravi}@cs.umass.edu Abstract In robotics and other control applications it is commonplace to have a preexisting set of controllers for solving subtasks, perhaps hand-crafted or previously learned or planned, and still face a difficult problem of how to choose and switch among the controllers to solve an overall task as well as possible. In this paper we present a framework based on Markov decision processes and semi-Markov decision processes for phrasing this problem, a basic theorem regarding the improvement in performance that can be obtained by switching flexibly between given controllers, and example applications of the theorem. In particular, we show how an agent can plan with these high-level controllers and then use the results of such planning to find an even better plan, by modifying the existing controllers, with negligible additional cost and no re-planning. In one of our examples, the complexity of the problem is reduced from 24 billion state-action pairs to less than a million state-controller pairs. In many applications, solutions to parts of a task are known, either because they were handcrafted by people or because they were previously learned or planned. For example, in robotics applications, there may exist controllers for moving joints to positions, picking up objects, controlling eye movements, or navigating along hallways. More generally, an intelligent system may have available to it several temporally extended courses of action to choose from. In such cases, a key challenge is to take full advantage of the existing temporally extended actions, to choose or switch among them effectively, and to plan at their level rather than at the level of individual actions. 
Recently, several researchers have begun to address these challenges within the framework of reinforcement learning and Markov decision processes (e.g., Singh, 1992; Kaelbling, 1993; Dayan & Hinton, 1993; Thrun & Schwartz, 1995; Sutton, 1995; Dietterich, 1998; Parr & Russell, 1998; McGovern, Sutton & Fagg, 1997). Common to much of this recent work is the modeling of a temporally extended action as a policy (controller) and a condition for terminating, which we together refer to as an option (Sutton, Precup & Singh, 1998). In this paper we consider the problem of effectively combining given options into one overall policy, generalizing prior work by Kaelbling (1993). Sections 1-3 introduce the framework; our new results are in Sections 4 and 5.

1 Reinforcement Learning (MDP) Framework

In a Markov decision process (MDP), an agent interacts with an environment at some discrete, lowest-level time scale $t = 0, 1, 2, \ldots$ On each time step, the agent perceives the state of the environment, $s_t \in S$, and on that basis chooses a primitive action, $a_t \in A_{s_t}$. In response to each action, $a_t$, the environment produces one step later a numerical reward, $r_{t+1}$, and a next state, $s_{t+1}$. The one-step model of the environment consists of the one-step state-transition probabilities and the one-step expected rewards,
$$p^a_{ss'} = \Pr\{s_{t+1} = s' \mid s_t = s,\, a_t = a\} \quad\text{and}\quad r^a_s = E\{r_{t+1} \mid s_t = s,\, a_t = a\},$$
for all $s, s' \in S$ and $a \in A_s$. The agent's objective is to learn an optimal Markov policy, a mapping from states to probabilities of taking each available primitive action, $\pi : S \times A \to [0,1]$, that maximizes the expected discounted future reward from each state $s$:
$$V^\pi(s) = E\{r_{t+1} + \gamma r_{t+2} + \cdots \mid s_t = s, \pi\} = \sum_{a \in A_s} \pi(s,a)\Big[r^a_s + \gamma\sum_{s'} p^a_{ss'} V^\pi(s')\Big],$$
where $\pi(s,a)$ is the probability with which the policy $\pi$ chooses action $a \in A_s$ in state $s$, and $\gamma \in [0,1]$ is a discount-rate parameter.
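The Bellman optimality backup implicit in this framework can be exercised on a toy problem. The sketch below (all transition numbers are randomly generated, purely illustrative) runs synchronous value iteration, $V(s) \leftarrow \max_a [r^a_s + \gamma\sum_{s'} p^a_{ss'} V(s')]$, to a fixed point:

```python
import numpy as np

# Toy 3-state, 2-action MDP (hypothetical numbers, just to exercise the update)
n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # p^a_{ss'}, rows sum to 1
R = rng.uniform(0, 1, size=(n_s, n_a))             # r^a_s

# V*(s) = max_a [ r^a_s + gamma * sum_s' p^a_{ss'} V*(s') ]
V = np.zeros(n_s)
for _ in range(1000):
    Q = R + gamma * P @ V          # shape (n_s, n_a); P @ V sums over s'
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new

pi = Q.argmax(axis=1)              # greedy policy achieving the max
print(V, pi)
```

Because the backup is a $\gamma$-contraction, the loop converges to the unique fixed point $V^*$ regardless of initialization.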
$V^\pi(s)$ is called the value of state $s$ under policy $\pi$, and $V^\pi$ is called the state-value function for $\pi$. The optimal state-value function gives the value of a state under an optimal policy: $V^*(s) = \max_\pi V^\pi(s) = \max_{a\in A_s}\big[r^a_s + \gamma\sum_{s'} p^a_{ss'} V^*(s')\big]$. Given $V^*$, an optimal policy is easily formed by choosing in each state $s$ any action that achieves the maximum in this equation. A parallel set of value functions, denoted $Q^\pi$ and $Q^*$, and Bellman equations can be defined for state-action pairs, rather than for states. Planning in reinforcement learning refers to the use of models of the environment to compute value functions and thereby to optimize or improve policies.

2 Options

We use the term options for our generalization of primitive actions to include temporally extended courses of action. Let $h_{t,T} = s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, \ldots, r_T, s_T$ be the history sequence from time $t \le T$ to time $T$, and let $\Omega$ denote the set of all possible histories in the given MDP. Options consist of three components: an initiation set $I \subseteq S$, a policy $\pi : \Omega \times A \to [0,1]$, and a termination condition $\beta : \Omega \to [0,1]$. An option $o = (I, \pi, \beta)$ can be taken in state $s$ if and only if $s \in I$. If $o$ is taken in state $s_t$, the next action $a_t$ is selected according to $\pi(s_t, \cdot)$. The environment then makes a transition to $s_{t+1}$, where $o$ terminates with probability $\beta(h_{t,t+1})$, or else continues, determining $a_{t+1}$ according to $\pi(h_{t,t+1}, \cdot)$, and transitioning to state $s_{t+2}$, where $o$ terminates with probability $\beta(h_{t,t+2})$, etc. We call the general options defined above semi-Markov because $\pi$ and $\beta$ depend on the history sequence; in Markov options $\pi$ and $\beta$ depend only on the current state. Semi-Markov options allow "timeouts", i.e., termination after some period of time has elapsed, and other extensions which cannot be handled by Markov options. The initiation set and termination condition of an option together limit the states over which the option's policy must be defined.
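The three-part structure of a Markov option can be sketched directly in code. The names and the corridor environment below are illustrative, not from the paper: an option carries its initiation set $I$, policy $\pi$, and termination condition $\beta$, and an executor runs it until $\beta$ fires:

```python
import random
from dataclasses import dataclass
from typing import Callable

# Minimal Markov option: initiation set I, policy pi, termination beta.
@dataclass
class Option:
    I: set                               # states where the option may start
    pi: Callable[[int], int]             # state -> primitive action
    beta: Callable[[int], float]         # state -> termination probability

def run_option(opt, s, step, rng, max_steps=100):
    """Execute opt from state s in environment `step` until beta fires."""
    assert s in opt.I, "option taken outside its initiation set"
    history = [s]
    for _ in range(max_steps):
        s = step(s, opt.pi(s))
        history.append(s)
        if rng.random() < opt.beta(s):
            break
    return history

# Corridor of states 0..9; actions move +1/-1; option "go right until state 7"
step = lambda s, a: max(0, min(9, s + a))
go_right = Option(I=set(range(7)), pi=lambda s: +1,
                  beta=lambda s: 1.0 if s >= 7 else 0.0)
rng = random.Random(0)
print(run_option(go_right, 2, step, rng))   # [2, 3, 4, 5, 6, 7]
```

A semi-Markov option would instead pass the accumulated history to `pi` and `beta`, which is what allows timeouts.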
For example, a hand-crafted policy $\pi$ for a mobile robot to dock with its battery charger might be defined only for states $I$ in which the battery charger is within sight. The termination condition $\beta$ would be defined to be 1 outside of $I$ and when the robot is successfully docked. We can now define policies over options. Let the set of options available in state $s$ be denoted $O_s$; the set of all options is denoted $O = \bigcup_{s\in S} O_s$. When initiated in a state $s_t$, the Markov policy over options $\mu : S \times O \to [0,1]$ selects an option $o \in O_{s_t}$ according to the probability distribution $\mu(s_t, \cdot)$. The option $o$ is then taken in $s_t$, determining actions until it terminates in $s_{t+k}$, at which point a new option is selected, according to $\mu(s_{t+k}, \cdot)$, and so on. In this way a policy over options, $\mu$, determines a (non-stationary) policy over actions, or flat policy, $\pi = f(\mu)$. We define the value of a state $s$ under a general flat policy $\pi$ as the expected return if the policy is started in $s$: $V^\pi(s) \stackrel{\mathrm{def}}{=} E\{r_{t+1} + \gamma r_{t+2} + \cdots \mid \mathcal{E}(\pi, s, t)\}$, where $\mathcal{E}(\pi, s, t)$ denotes the event of $\pi$ being initiated in $s$ at time $t$. The value of a state under a general policy (i.e., a policy over options) $\mu$ can then be defined as the value of the state under the corresponding flat policy: $V^\mu(s) \stackrel{\mathrm{def}}{=} V^{f(\mu)}(s)$. An analogous definition can be used for the option-value function, $Q^\mu(s,o)$. For semi-Markov options it is useful to define $Q^\mu(h,o)$ as the expected discounted future reward after having followed option $o$ through history $h$.

3 SMDP Planning

Options are closely related to the actions in a special kind of decision problem known as a semi-Markov decision process, or SMDP (Puterman, 1994; see also Singh, 1992; Bradtke & Duff, 1995; Mahadevan et al., 1997; Parr & Russell, 1998). In fact, any MDP with a fixed set of options is an SMDP. Accordingly, the theory of SMDPs provides an important basis for a theory of options.
In this section, we review the standard SMDP framework for planning, which will provide the basis for our extension. Planning with options requires a model of their consequences. The form of this model is given by prior work with SMDPs. The reward part of the model of $o$ for state $s \in S$ is the total reward received along the way: $r^o_s = E\{r_{t+1} + \gamma r_{t+2} + \cdots + \gamma^{k-1} r_{t+k} \mid \mathcal{E}(o, s, t)\}$, where $\mathcal{E}(o, s, t)$ denotes the event of $o$ being initiated in state $s$ at time $t$. The state-prediction part of the model is
$$p^o_{ss'} = \sum_{k=1}^{\infty} p(s', k)\,\gamma^k,$$
for all $s' \in S$, where $p(s', k)$ is the probability that the option terminates in $s'$ after $k$ steps. We call this kind of model a multi-time model because it describes the outcome of an option not at a single time but at potentially many different times, appropriately combined. Using multi-time models we can write Bellman equations for general policies and options. For any general Markov policy $\mu$, its value functions satisfy the equations:
$$V^\mu(s) = \sum_{o\in O_s} \mu(s,o)\Big[r^o_s + \sum_{s'} p^o_{ss'} V^\mu(s')\Big] \quad\text{and}\quad Q^\mu(s,o) = r^o_s + \sum_{s'} p^o_{ss'} V^\mu(s').$$
Let us denote a restricted set of options by $O$ and the set of all policies selecting only from options in $O$ by $\Pi(O)$. Then the optimal value function given that we can select only from $O$ is $V^*_O(s) = \max_{o\in O_s}\big[r^o_s + \sum_{s'} p^o_{ss'} V^*_O(s')\big]$. A corresponding optimal policy, denoted $\mu^*_O$, is any policy that achieves $V^*_O$, i.e., for which $V^{\mu^*_O}(s) = V^*_O(s)$ in all states $s \in S$. If $V^*_O$ and the models of the options are known, then $\mu^*_O$ can be formed by choosing in any proportion among the maximizing options in the equation above for $V^*_O$. It is straightforward to extend MDP planning methods to SMDPs. For example, synchronous value iteration with options initializes an approximate value function $V_0(s)$ arbitrarily and then updates it by:
$$V_{k+1}(s) \leftarrow \max_{o\in O_s}\Big[r^o_s + \sum_{s'\in S} p^o_{ss'} V_k(s')\Big], \quad \forall s \in S.$$
Note that this algorithm reduces to conventional value iteration in the special case in which $O = A$.
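SMDP value iteration with multi-time models can be sketched on a small example of my own construction (not from the paper): a corridor of 10 states with goal state 9, reward $-1$ per primitive step, and deterministic "walk to landmark $m$" options. For a deterministic option taking $k$ steps, the multi-time model is simply $r^o_s = -(1 + \gamma + \cdots + \gamma^{k-1})$ and $p^o_{sm} = \gamma^k$:

```python
import numpy as np

gamma = 0.9
N = 10                     # states 0..9, goal = 9; primitive step moves by 1
landmarks = [3, 6, 9]      # each option walks directly to a landmark

def option_model(s, m):
    """Multi-time model of 'walk from s to landmark m' (deterministic)."""
    k = abs(m - s) or 1                      # at least one step
    r = -sum(gamma ** i for i in range(k))   # -1 reward per primitive step
    return r, gamma ** k, m                  # r^o_s, discounted prob., s'

# SMDP value iteration: V(s) <- max_o [ r^o_s + p^o_{ss'} V(s') ]
V = np.zeros(N)
for _ in range(200):
    V_new = V.copy()
    for s in range(N - 1):                   # state 9 is terminal, V = 0
        vals = []
        for m in landmarks:
            r, p, s2 = option_model(s, m)
            vals.append(r + p * V[s2])
        V_new[s] = max(vals)
    V = V_new

print(V)   # V[0] equals the value of walking 9 primitive steps to the goal
```

Since the paths are deterministic, planning via landmarks loses nothing here: $V_O(0) = -(1-\gamma^9)/(1-\gamma)$, exactly the value of walking straight to the goal.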
Standard results from SMDP theory guarantee that such processes converge for general semi-Markov options: $\lim_{k\to\infty} V_k(s) = V^*_O(s)$ for all $s \in S$, $o \in O$, and for all $O$. The policies found using temporally abstract options are approximate in the sense that they achieve only $V^*_O$, which is typically less than the maximum possible, $V^*$.

4 Interrupting Options

We are now ready to present the main new insight and result of this paper. SMDP methods apply to options, but only when they are treated as opaque indivisible units. Once an option has been selected, such methods require that its policy be followed until the option terminates. More interesting and potentially more powerful methods are possible by looking inside options and by altering their internal structure (e.g., Sutton, Precup & Singh, 1998). In particular, suppose we have determined the option-value function $Q^\mu(s,o)$ for some policy $\mu$ and for all state-option pairs $s,o$ that could be encountered while following $\mu$. This function tells us how well we do while following $\mu$, committing irrevocably to each option, but it can also be used to re-evaluate our commitment on each step. Suppose at time $t$ we are in the midst of executing option $o$. If $o$ is Markov in $s$, then we can compare the value of continuing with $o$, which is $Q^\mu(s_t, o)$, to the value of interrupting $o$ and selecting a new option according to $\mu$, which is $V^\mu(s) = \sum_{o'}\mu(s,o')\,Q^\mu(s,o')$. If the latter is more highly valued, then why not interrupt $o$ and allow the switch? This new way of behaving is indeed better, as shown below. We can characterize the new way of behaving as following a policy $\mu'$ that is the same as the original one, but over new options, i.e. $\mu'(s,o') = \mu(s,o)$, for all $s \in S$. Each new option $o'$ is the same as the corresponding old option $o$ except that it terminates whenever switching seems better than continuing according to $Q^\mu$.
We call such a $\mu'$ an interrupted policy of $\mu$. We will now state a general theorem, which extends the case described above, in that options may be semi-Markov (instead of Markov) and interruption is optional at each state where it could be done. The latter extension lifts the requirement that $Q^\mu$ be completely known, since the interruption can be restricted to states for which this information is available.

Theorem 1 (Interruption) For any MDP, any set of options $O$, and any Markov policy $\mu : S \times O \to [0,1]$, define a new set of options, $O'$, with a one-to-one mapping between the two option sets as follows: for every $o = (I, \pi, \beta) \in O$ we define a corresponding $o' = (I, \pi, \beta') \in O'$, where $\beta' = \beta$ except that for any history $h$ in which $Q^\mu(h, o) < V^\mu(s)$, where $s$ is the final state of $h$, we may choose to set $\beta'(h) = 1$. Any histories whose termination conditions are changed in this way are called interrupted histories. Let $\mu'$ be the policy over $O'$ corresponding to $\mu$: $\mu'(s, o') = \mu(s, o)$, where $o$ is the option in $O$ corresponding to $o'$, for all $s \in S$. Then
1. $V^{\mu'}(s) \ge V^\mu(s)$ for all $s \in S$.
2. If from state $s \in S$ there is a non-zero probability of encountering an interrupted history upon initiating $\mu'$ in $s$, then $V^{\mu'}(s) > V^\mu(s)$.

Proof: The idea is to show that, for an arbitrary start state $s$, executing the option given by the termination-improved policy $\mu'$ and then following policy $\mu$ thereafter is no worse than always following policy $\mu$. In other words, we show that the following inequality holds:
$$\sum_{o'} \mu'(s,o')\Big[r^{o'}_s + \sum_{s'} p^{o'}_{ss'} V^\mu(s')\Big] \;\ge\; V^\mu(s) = \sum_{o} \mu(s,o)\Big[r^o_s + \sum_{s'} p^o_{ss'} V^\mu(s')\Big]. \qquad (1)$$
If this is true, then we can use it to expand the left-hand side, repeatedly replacing every occurrence of $V^\mu(x)$ on the left by the corresponding $\sum_{o'}\mu'(x,o')\big[r^{o'}_x + \sum_{x'} p^{o'}_{xx'} V^\mu(x')\big]$. In the limit, the left-hand side becomes $V^{\mu'}$, proving that $V^{\mu'} \ge V^\mu$. Since $\mu'(s,o') = \mu(s,o)$ for all $s \in S$, we need to show that
$$r^{o'}_s + \sum_{s'} p^{o'}_{ss'} V^\mu(s') \;\ge\; r^o_s + \sum_{s'} p^o_{ss'} V^\mu(s'). \qquad (2)$$
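The benefit the theorem guarantees can be made concrete on a tiny example of my own (not from the paper): a deterministic chain $0 \to 1 \to 2$ where option A commits to a slow move at state 1 while a faster option B is also available there. Applying the interruption rule, set $\beta' = 1$ at state 1 because $Q^\mu(1, \text{continue A}) < V^\mu(1)$:

```python
gamma = 0.9

# Deterministic chain 0 -> 1 -> 2 (terminal). Moving 0 -> 1 costs -1;
# at state 1 option A uses a slow move (-2) while option B uses a fast one (-1).
R_SLOW, R_FAST = -2.0, -1.0

def ret(reward_at_1):
    """Discounted return from state 0 given the reward collected at state 1."""
    rewards = [-1.0, reward_at_1]          # 0 -> 1, then 1 -> 2
    return sum(gamma ** k * r for k, r in enumerate(rewards))

# Option values at state 1 (both remaining moves terminate in state 2):
Q1 = {"continue_A": R_SLOW, "switch_B": R_FAST}
V1 = Q1["switch_B"]                        # mu picks B when (re)choosing at 1

v_committed = ret(R_SLOW)                  # follow option A to termination
best = "switch_B" if Q1["continue_A"] < V1 else "continue_A"
v_interrupted = ret(Q1[best])              # beta' = 1 where continuing is worse

print(v_committed, v_interrupted)          # -2.8 vs -1.9
```

The interrupted return strictly improves, matching case 2 of the theorem: an interrupted history (state 1) is reached with probability one.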
Let $\Gamma$ denote the set of all interrupted histories: $\Gamma = \{h \in \Omega : \beta(h) \ne \beta'(h)\}$. Then the left-hand side of (2) can be re-written as
$$E\{r + \gamma^k V^\mu(s') \mid \mathcal{E}(o', s),\, h_{ss'} \notin \Gamma\} + E\{r + \gamma^k V^\mu(s') \mid \mathcal{E}(o', s),\, h_{ss'} \in \Gamma\},$$
where $s'$, $r$, and $k$ are the next state, cumulative reward, and number of elapsed steps following option $o$ from $s$ ($h_{ss'}$ is the history from $s$ to $s'$). Trajectories that end because of encountering a history $h_{ss'} \notin \Gamma$ never encounter a history in $\Gamma$, and therefore also occur with the same probability and expected reward upon executing option $o$ in state $s$. Therefore, we can re-write the right-hand side of (2) as
$$E\{r + \gamma^k V^\mu(s') \mid \mathcal{E}(o', s),\, h_{ss'} \notin \Gamma\} + E\{\beta(s')[r + \gamma^k V^\mu(s')] + (1-\beta(s'))[r + \gamma^k Q^\mu(h_{ss'}, o)] \mid \mathcal{E}(o', s),\, h_{ss'} \in \Gamma\}.$$
This proves (1) because for all $h_{ss'} \in \Gamma$, $Q^\mu(h_{ss'}, o) \le V^\mu(s')$. Note that strict inequality holds in (2) if $Q^\mu(h_{ss'}, o) < V^\mu(s')$ for at least one history $h_{ss'} \in \Gamma$ that ends a trajectory generated by $o'$ with non-zero probability.¹ ◇

As one application of this result, consider the case in which $\mu$ is an optimal policy for a given set of Markov options $O$. The interruption theorem gives us a way of improving over $\mu^*_O$ with just the cost of checking (on each time step) if a better option exists, which is negligible compared to the combinatorial process of computing $Q^*_O$ or $V^*_O$. Kaelbling (1993) and Dietterich (1998) demonstrated a similar performance improvement by interrupting temporally extended actions in a different setting.

5 Illustration

Figure 1 shows a simple example of the gain that can be obtained by interrupting options. The task is to navigate from a start location to a goal location within a continuous two-dimensional state space. The actions are movements of length 0.01 in any direction from the current state. Rather than work with these low-level actions, infinite in number, we introduce seven landmark locations in the space.
For each landmark we define a controller that takes us to the landmark in a direct path. Each controller is only applicable within a limited range of states, in this case within a certain distance of the corresponding landmark. Each controller then defines an option: the circular region around the controller's landmark is the option's initiation set, the controller itself is the policy, and the arrival at the target landmark is the termination condition. We denote the set of seven landmark options by $O$. Any action within 0.01 of the goal location transitions to the terminal state, $\gamma = 1$, and the reward is $-1$ on all transitions, which makes this a minimum-time task. One of the landmarks coincides with the goal, so it is possible to reach the goal while picking only from $O$. The optimal policy within $\Pi(O)$ runs from landmark to landmark, as shown by the thin line in Figure 1. This is the optimal solution to the SMDP defined by $O$ and is indeed the best that one can do while picking only from these options. But of course one can do better if the options are not followed all the way to each landmark. The trajectory shown by the thick line in Figure 1 cuts the corners and is shorter. This is the interrupted policy with respect to the SMDP-optimal policy. The interrupted policy takes 474 steps from start to goal which, while not as good as the optimal policy (425 steps), is much better than the SMDP-optimal policy, which takes 600 steps. The state-value functions, $V^{\mu^*_O}$ and $V^{\mu'}$, for the two policies are also shown in Figure 1. Figure 2 presents a more complex, mission planning task. A mission is a flight from base to observe as many of a given set of sites as possible and to return to base without running out of fuel. The local weather at each site flips from cloudy to clear according to independent

¹We note that the same proof would also apply for switching to other options (not selected by $\mu$) if they improved over continuing with $o$.
That result would be more general and closer to conventional policy improvement. We prefer the result given here because it emphasizes its primary application.

Figure 1: Using interruption to improve navigation with landmark-directed controllers. The task (left) is to navigate from S to G in minimum time using options based on controllers that run each to one of seven landmarks (the black dots). The circles show the region around each landmark within which the controllers operate. The thin line shows the optimal behavior that uses only these controllers run to termination (the SMDP solution, 600 steps), and the thick line shows the corresponding interrupted behavior (474 steps), which cuts the corners. The right panels show the state-value functions for the SMDP-optimal and interrupted policies.

Poisson processes. If the sky at a given site is cloudy when the plane gets there, no observation is made and the reward is 0. If the sky is clear, the plane gets a reward, according to the importance of the site. The positions, rewards, and mean time between two weather changes for each site are given in Figure 2. The plane has a limited amount of fuel, and it consumes one unit of fuel during each time tick. If the fuel runs out before reaching the base, the plane crashes and receives a reward of $-100$. The primitive actions are tiny movements in any direction (there is no inertia).
The state of the system is described by several variables: the current position of the plane, the fuel level, the sites that have been observed so far, and the current weather at each of the remaining sites. The state-action space has approximately 24.3 billion elements (assuming 100 discretization levels of the continuous variables) and is intractable by normal dynamic programming methods. We introduced options that can take the plane to each of the sites (including the base), from any position in the state space. The resulting SMDP has only 874,800 elements and it is feasible to exactly determine $V^*_O(s')$ for all states $s'$. From this solution and the model of the options, we can determine $Q^*_O(s,o) = r^o_s + \sum_{s'} p^o_{ss'} V^*_O(s')$ for any option $o$ and any state $s$ in the whole space. We performed asynchronous value iteration using the options in order to compute the optimal option-value function, and then used the interruption approach based on the values computed. The policies obtained by both approaches were compared to the results of a static planner, which exhaustively searches for the best tour assuming the weather does not change, and then re-plans whenever the weather does change. The graph in Figure 2 shows the reward obtained by each of these methods, averaged over 100 independent simulated missions. The policy obtained by interruption performs significantly better than the SMDP policy, which in turn is significantly better than the static planner.²

6 Closing

This paper has developed a natural, even obvious, observation: that one can do better by continually re-evaluating one's commitment to courses of action than one can by committing irrevocably to them. Our contribution has been to formulate this observation precisely enough to prove it and to demonstrate it empirically. Our final example suggests that this technique can be used in applications far too large to be solved at the level of primitive actions.
Note that this was achieved using exact methods, without function approximators to represent the value function. With function approximators and other reinforcement learning techniques, it should be possible to address problems that are substantially larger still.

²In preliminary experiments, we also used interruption on a crudely learned estimate of $Q^*_O$. The performance of the interrupted solution was very close to the result reported here.

Figure 2: The mission planning task and the performance of policies constructed by SMDP methods, interruption of the SMDP policy, and an optimal static re-planner that does not take into account possible changes in weather conditions. (The left panel annotates each site with its reward and the mean time between weather changes; the right panel plots expected reward per mission under high and low fuel.)

Acknowledgments

The authors gratefully acknowledge the substantial help they have received from many colleagues, including especially Amy McGovern, Andrew Barto, Ron Parr, Tom Dietterich, Andrew Fagg, Leo Zelevinsky and Manfred Huber. We also thank Paul Cohen, Robbie Moll, Mance Harmon, Sascha Engelbrecht, and Ted Perkins for helpful reactions and constructive criticism. This work was supported by NSF grant ECS-9511805 and grant AFOSR-F49620-96-1-0254, both to Andrew Barto and Richard Sutton. Satinder Singh was supported by NSF grant IIS-9711753.

References

Bradtke, S. J. & Duff, M. O. (1995). Reinforcement learning methods for continuous-time Markov decision problems. In NIPS 7 (393-500). MIT Press.
Dayan, P. & Hinton, G. E. (1993). Feudal reinforcement learning. In NIPS 5 (271-278). MIT Press.
Dietterich, T. G. (1998). The MAXQ method for hierarchical reinforcement learning. In Proceedings of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann.
Kaelbling, L. P. (1993).
Hierarchical learning in stochastic domains: Preliminary results. In Proceedings of the Tenth International Conference on Machine Learning (167-173). Morgan Kaufmann.
Mahadevan, S., Marchallek, N., Das, T. K. & Gosavi, A. (1997). Self-improving factory simulation using continuous-time average-reward reinforcement learning. In Proceedings of the Fourteenth International Conference on Machine Learning (202-210). Morgan Kaufmann.
McGovern, A., Sutton, R. S., & Fagg, A. H. (1997). Roles of macro-actions in accelerating reinforcement learning. In Grace Hopper Celebration of Women in Computing (13-17).
Parr, R. & Russell, S. (1998). Reinforcement learning with hierarchies of machines. In NIPS 10. MIT Press.
Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley.
Singh, S. P. (1992). Reinforcement learning with a hierarchy of abstract models. In Proceedings of the Tenth National Conference on Artificial Intelligence (202-207). MIT/AAAI Press.
Sutton, R. S. (1995). TD models: Modeling the world as a mixture of time scales. In Proceedings of the Twelfth International Conference on Machine Learning (531-539). Morgan Kaufmann.
Sutton, R. S., Precup, D. & Singh, S. (1998). Intra-option learning about temporally abstract actions. In Proceedings of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann.
Sutton, R. S., Precup, D. & Singh, S. (1998). Between MDPs and semi-MDPs: Learning, planning, and representing knowledge at multiple temporal scales. TR 98-74, Department of Computer Science, University of Massachusetts, Amherst.
Thrun, S. & Schwartz, A. (1995). Finding structure in reinforcement learning. In NIPS 7 (385-392). MIT Press.
Dynamics of Supervised Learning with Restricted Training Sets

A. C. C. Coolen, Dept of Mathematics, King's College London, Strand, London WC2R 2LS, UK. tcoolen@mth.kcl.ac.uk
D. Saad, Neural Computing Research Group, Aston University, Birmingham B4 7ET, UK. saadd@aston.ac.uk

Abstract

We study the dynamics of supervised learning in layered neural networks, in the regime where the size $p$ of the training set is proportional to the number $N$ of inputs. Here the local fields are no longer described by Gaussian distributions. We use dynamical replica theory to predict the evolution of macroscopic observables, including the relevant error measures, incorporating the old formalism in the limit $p/N \to \infty$.

1 INTRODUCTION

Much progress has been made in solving the dynamics of supervised learning in layered neural networks, using the strategy of statistical mechanics: by deriving closed laws for the evolution of suitably chosen macroscopic observables (order parameters) in the limit of an infinite system size [1, 2, 3, 4]. For a recent review and guide to references see e.g. [5]. The main successful procedure developed so far is built on the following cornerstones:
• The task to be learned is defined by a 'teacher', which is itself a neural network. This induces a natural set of order parameters (mutual weight vector overlaps between the teacher and the trained, 'student', network).
• The number of network inputs is infinitely large. This ensures that fluctuations in the order parameters will vanish, and enables usage of the central limit theorem.
• The number of 'hidden' neurons is finite, in both teacher and student, ensuring a finite number of order parameters and an insignificant cumulative impact of the fluctuations.
• The size of the training set is much larger than the number of updates. Each example presented is now different from the previous ones, so that the local fields will have Gaussian distributions, leading to closure of the dynamic equations.
In this paper we study the dynamics of learning in layered networks with restricted training sets, where the number $p$ of examples scales linearly with the number $N$ of inputs. Individual examples will now re-appear during the learning process as soon as the number of weight updates made is of the order of $p$. Correlations will develop between the weights and the training set examples, and the student's local fields (activations) will be described by non-Gaussian distributions (see e.g. Figure 1).

Figure 1: Student and teacher fields $(x, y)$ (see text) observed during numerical simulations of on-line learning (learning rate $\eta = 1$) in a perceptron of size $N = 10{,}000$ at $t = 50$, using examples from a training set of size $p = \frac{1}{2}N$. Left: Hebbian learning. Right: AdaTron learning [5]. Both distributions are clearly non-Gaussian.

This leads to a breakdown of the standard formalism: the field distributions are no longer characterized by a few moments, and the macroscopic laws must now be averaged over realizations of the training set. The first rigorous study of the dynamics of learning with restricted training sets in non-linear networks, via generating functionals [6], was carried out for networks with binary weights. Here we use dynamical replica theory (see e.g. [7]) to predict the evolution of macroscopic observables for finite $\alpha$, incorporating the old formalism as a special case ($\alpha = p/N \to \infty$). For simplicity we restrict ourselves to single-layer systems and noise-free teachers.

2 FROM MICROSCOPIC TO MACROSCOPIC LAWS

A 'student' perceptron operates a rule which is parametrised by the weight vector $J \in \mathbb{R}^N$:
$$S: \{-1,1\}^N \to \{-1,1\}, \qquad S(\xi) = \mathrm{sgn}[J\cdot\xi] \equiv \mathrm{sgn}[x] \qquad (1)$$
It tries to emulate a teacher perceptron which operates a similar rule, characterized by a (fixed) weight vector $B \in \mathbb{R}^N$. The student modifies its weight vector $J$ iteratively, using examples of input vectors $\xi$ which are drawn at random from a fixed (randomly composed) training set $\tilde D = \{\xi^1, \ldots, \xi^p\} \subset D = \{-1,1\}^N$, of size $p = \alpha N$ with $\alpha > 0$, and the corresponding values of the teacher outputs $T(\xi) = \mathrm{sgn}[B\cdot\xi] \equiv \mathrm{sgn}[y]$. Averages over the training set $\tilde D$ and over the full set $D$ will be denoted as $\langle\varphi(\xi)\rangle_{\tilde D}$ and $\langle\varphi(\xi)\rangle_D$, respectively. We will analyze the following two classes of learning rules:
$$\text{on-line:}\quad J(m+1) = J(m) + \frac{\eta}{N}\,\xi(m)\,\mathcal{G}[J(m)\cdot\xi(m),\, B\cdot\xi(m)]$$
$$\text{batch:}\quad J(m+1) = J(m) + \frac{\eta}{N}\,\big\langle\,\xi\,\mathcal{G}[J\cdot\xi,\, B\cdot\xi]\,\big\rangle_{\tilde D} \qquad (2)$$
In on-line learning one draws at each step $m$ a question $\xi(m)$ at random from the training set; the dynamics is a stochastic process. In batch learning one iterates a deterministic map. Our key dynamical observables are the training- and generalization errors, defined as
$$E_t(J) = \langle\,\theta[-(J\cdot\xi)(B\cdot\xi)]\,\rangle_{\tilde D}, \qquad E_g(J) = \langle\,\theta[-(J\cdot\xi)(B\cdot\xi)]\,\rangle_D \qquad (3)$$
Only if the training set $\tilde D$ is sufficiently large, and if there are no correlations between $J$ and the training set examples, will these two errors be identical. We now turn to macroscopic observables $\Omega[J] = (\Omega_1[J], \ldots, \Omega_k[J])$. For $N \to \infty$ (with finite times $t = m/N$ and with finite $k$), and if our observables are of a so-called mean-field type, their associated macroscopic distribution $P_t(\Omega)$ is found to obey a Fokker-Planck type equation, with flow and diffusion terms that depend on whether on-line or batch learning is used.
We now choose a specific set of observables $\Omega[J]$, tailored to the present problem:
$$Q[J] = J^2, \qquad R[J] = J\cdot B, \qquad P[x,y;J] = \langle\,\delta[x-J\cdot\xi]\,\delta[y-B\cdot\xi]\,\rangle_{\tilde D} \qquad (4)$$
This choice is motivated as follows: (i) in order to incorporate the old formalism we need $Q[J]$ and $R[J]$, (ii) the training error involves field statistics calculated over the training set, as given by $P[x,y;J]$, and (iii) for $\alpha < \infty$ one cannot expect closed equations for a finite number of order parameters; the present choice effectively represents an infinite number. We will assume the number of arguments $(x,y)$ for which $P[x,y;J]$ is evaluated to go to infinity after the limit $N \to \infty$ has been taken. This eliminates technical subtleties and allows us to show that in the Fokker-Planck equation all diffusion terms vanish as $N \to \infty$. The latter thereby reduces to a Liouville equation, describing deterministic evolution of our macroscopic observables. For on-line learning one arrives at
$$\frac{d}{dt}Q = 2\eta\int dx\,dy\, P[x,y]\,x\,\mathcal{G}[x;y] + \eta^2\int dx\,dy\, P[x,y]\,\mathcal{G}^2[x;y] \qquad (5)$$
$$\frac{d}{dt}R = \eta\int dx\,dy\, P[x,y]\,y\,\mathcal{G}[x;y] \qquad (6)$$
$$\frac{\partial}{\partial t}P[x,y] = \frac{1}{\alpha}\Big[\int dx'\, P[x',y]\,\delta[x-x'-\eta\mathcal{G}[x',y]] - P[x,y]\Big] - \eta\frac{\partial}{\partial x}\int dx'dy'\,\mathcal{G}[x',y']\,A[x,y;x',y']$$
$$\qquad + \frac{1}{2}\eta^2\int dx'dy'\, P[x',y']\,\mathcal{G}^2[x',y']\,\frac{\partial^2}{\partial x^2}P[x,y] \qquad (7)$$
Expansion of these equations in powers of $\eta$, and retaining only the terms linear in $\eta$, gives the corresponding equations describing batch learning. The complexity of the problem is fully concentrated in a Green's function $A[x,y;x',y']$, which is defined as
$$A[x,y;x',y'] = \lim_{N\to\infty}\Big\langle\big\langle\big\langle\,(1-\delta_{\xi\xi'})\,\delta[x-J\cdot\xi]\,\delta[y-B\cdot\xi]\,(\xi\cdot\xi')\,\delta[x'-J\cdot\xi']\,\delta[y'-B\cdot\xi']\,\big\rangle_{\tilde D}\big\rangle_{\tilde D}\Big\rangle^{\Omega}_{t}$$
It involves a sub-shell average, in which $p_t(J)$ is the weight probability density at time $t$:
$$\langle K[J]\rangle^{\Omega}_{t} = \frac{\int dJ\, K[J]\,p_t(J)\,\delta[Q-Q[J]]\,\delta[R-R[J]]\,\prod_{xy}\delta[P[x,y]-P[x,y;J]]}{\int dJ\, p_t(J)\,\delta[Q-Q[J]]\,\delta[R-R[J]]\,\prod_{xy}\delta[P[x,y]-P[x,y;J]]}$$
where the sub-shells are defined with respect to the order parameters. The solution of (5,6,7) can be used to generate the errors of (3):
$$E_t = \int dx\,dy\, P[x,y]\,\theta[-xy], \qquad E_g = \frac{1}{\pi}\arccos[R/\sqrt{Q}] \qquad (8)$$

3 CLOSURE VIA DYNAMICAL REPLICA THEORY

So far our analysis is still exact. We now close the macroscopic laws (5,6,7) by making, for $N \to \infty$, the two key assumptions underlying dynamical replica theory [7]:
(i) Our macroscopic observables $\{Q, R, P\}$ obey closed dynamic equations.
(ii) These equations are self-averaging with respect to the realisation of $\tilde D$.
(i) implies that probability variations within the $\{Q, R, P\}$ subshells are either absent or irrelevant to the evolution of $\{Q, R, P\}$. We may thus make the simplest choice for $p_t(J)$:
$$p_t(J) \to p(J) \sim \delta[Q-Q[J]]\,\delta[R-R[J]]\,\prod_{xy}\delta[P[x,y]-P[x,y;J]] \qquad (9)$$
The solution of (5,6,7) can be used to generate the errors of (3):

E_t = ∫dx dy P[x, y] θ[−xy]    E_g = (1/π) arccos[R/√Q]        (8)

3 CLOSURE VIA DYNAMICAL REPLICA THEORY

So far our analysis is still exact. We now close the macroscopic laws (5,6,7) by making, for N → ∞, the two key assumptions underlying dynamical replica theory [7]: (i) our macroscopic observables {Q, R, P} obey closed dynamic equations; (ii) these equations are self-averaging with respect to the realisation of D̃. Assumption (i) implies that probability variations within the {Q, R, P} sub-shells are either absent or irrelevant to the evolution of {Q, R, P}. We may thus make the simplest choice for p_t(J):

p_t(J) → p(J) ∼ δ[Q − Q[J]] δ[R − R[J]] Π_{xy} δ[P[x, y] − P[x, y; J]]        (9)

p(J) depends on time implicitly, via the order parameters {Q, R, P}. The procedure (9) leads to exact laws if our observables {Q, R, P} indeed obey closed equations for N → ∞; it gives an approximation if they do not. Assumption (ii) allows us to average the macroscopic laws over all training sets; it is observed in numerical simulations, and can probably be proven using the formalism of [6]. Our assumptions result in the closure of (5,6,7), since now A[...] is expressed fully in terms of {Q, R, P}. The final ingredient of dynamical replica theory is the realization that averaging fractions is simplified with the replica identity [8]:

⟨∫dJ W[J, z] G[J, z] / ∫dJ W[J, z]⟩_z = lim_{n→0} ∫dJ¹ ... dJⁿ ⟨G[J¹, z] Π_{α=1..n} W[J^α, z]⟩_z

What remains is to perform the integrations.
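Given a student vector J, both errors in (3) and (8) are one-liners: E_t is measured directly on the training set, and E_g follows from the order parameters. A sketch (the helper name is ours, and the E_g formula assumes the teacher is normalized, B² = 1):

```python
import numpy as np

def errors(J, B, xi):
    """Training error E_t = <theta[-xy]>_D~ and generalization error
    E_g = arccos(R/sqrt(Q))/pi, as in eqs. (3) and (8)."""
    x, y = xi @ J, xi @ B
    E_t = float(np.mean(x * y < 0.0))            # misclassified fraction of D~
    Q, R = float(J @ J), float(J @ B)
    E_g = float(np.arccos(np.clip(R / np.sqrt(Q), -1.0, 1.0)) / np.pi)
    return E_t, E_g
```

With J = B both errors vanish; with J orthogonal to B the student is at chance level, E_g = 1/2.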
One finds that P[x, y] = P[x|y] P[y], with P[y] = (2π)^{−1/2} e^{−y²/2}. Upon introducing the short-hands Dy = (2π)^{−1/2} e^{−y²/2} dy and ⟨f(x, y)⟩ = ∫Dy dx P[x|y] f(x, y), we can write the resulting macroscopic laws as follows:

d/dt Q = 2ηV + η²Z    d/dt R = ηW        (10)

∂/∂t P[x|y] = (1/α) ∫dx′ P[x′|y] {δ[x − x′ − ηG[x′, y]] − δ[x − x′]} + (1/2) η² Z (∂²/∂x²) P[x|y]
             − η (∂/∂x) {P[x|y] [U(x − Ry) + Wy + [V − RW − (Q − R²)U] Φ[x, y]]}        (11)

with U = ⟨Φ[x, y] G[x, y]⟩, V = ⟨x G[x, y]⟩, W = ⟨y G[x, y]⟩, Z = ⟨G²[x, y]⟩. As before, the batch equations follow upon expanding in η and retaining only the linear terms. Finding the function Φ[x, y] (in replica-symmetric ansatz) requires solving a saddle-point problem for a scalar observable q and a function M[x|y]. Upon introducing

B = √(qQ − R²)/[Q(1 − q)]    ⟨f[x, y, z]⟩⋆ = ∫dx M[x|y] e^{Bxz} f[x, y, z] / ∫dx M[x|y] e^{Bxz}

(with ∫dx M[x|y] = 1 for all y), the saddle-point equations acquire the form

for all X, y:  P[X|y] = ∫Dz ⟨δ[X − x]⟩⋆
⟨(x − Ry)²⟩ + (qQ − R²)[1 − 2/α] = [Q(1 + q) − 2R²] ⟨x Φ[x, y]⟩

The solution M[x|y] of the functional saddle-point equation, given a value for q in the physical range q ∈ [R²/Q, 1], is unique [9]. The function Φ[x, y] is then given by

Φ[X, y] = {√(qQ − R²) P[X|y]}^{−1} ∫Dz z ⟨δ[X − x]⟩⋆        (12)

4 THE LIMIT α → ∞

For consistency we show that our theory reduces to the simple (Q, R) formalism of infinite training sets in the limit α → ∞. Upon making the ansatz

P[x|y] = [2π(Q − R²)]^{−1/2} e^{−(x−Ry)²/[2(Q−R²)]}

one finds that the saddle-point equations are simultaneously and uniquely solved by M[x|y] = P[x|y] and q = R²/Q, and Φ[x, y] reduces to Φ[x, y] = (x − Ry)/(Q − R²). Insertion of our ansatz into equation (11), followed by rearranging of terms and usage of the above expression for Φ[x, y], shows that this equation is satisfied. Thus from our general theory we indeed recover for α → ∞ the standard theory for infinite training sets.
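The α → ∞ statement can be probed numerically: if the fields (x, y) are jointly Gaussian with ⟨x²⟩ = Q, ⟨xy⟩ = R, ⟨y²⟩ = 1, the probability that student and teacher disagree must equal arccos(R/√Q)/π. A self-contained Monte Carlo check of this formula (the parameter values are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
Q, R = 2.0, 0.8
cov = np.array([[Q, R], [R, 1.0]])              # joint (x, y) field covariance
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T
E_g_mc = float(np.mean(x * y < 0.0))            # sampled P[sgn(x) != sgn(y)]
E_g_th = float(np.arccos(R / np.sqrt(Q)) / np.pi)
```

With 2·10⁵ samples the Monte Carlo estimate agrees with the arccos formula to about three decimal places.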
Figure 2: Simulation results for on-line Hebbian learning (system size N = 10,000) versus an approximate solution of the equations generated by dynamical replica theory (see main text), for α ∈ {0.25, 0.5, 1.0, 2.0, 4.0}. Upper five curves: E_g as functions of time. Lower five curves: E_t as functions of time. Circles: simulation results for E_g; diamonds: simulation results for E_t. Solid lines: the corresponding theoretical predictions.

5 BENCHMARK TESTS: HEBBIAN LEARNING

Batch Hebbian Learning

For the Hebbian rule, where G[x, y] = sgn(y), one can calculate our order parameters exactly at any time, even for α < ∞ [10], which provides an excellent benchmark for general theories such as ours. For batch execution all integrations in our present theory can be done and all equations solved explicitly, and our theory is found to predict the following:

R = R₀ + ηt √(2/π)    Q = Q₀ + 2ηtR₀ √(2/π) + η²t² [2/π + 1/α]        (13)

P[x|y] = [2π(Q − R²)]^{−1/2} e^{−[x − Ry − (ηt/α) sgn(y)]²/[2(Q−R²)]}        (14)

E_g = (1/π) arccos[R/√Q]    E_t = 1/2 − (1/2) ∫Dy erf[(|y|R + ηt/α)/√(2(Q − R²))]        (15)

Comparison with the exact solution, calculated along the lines of [10] (where this was done for on-line Hebbian learning), shows that the above expressions are all rigorously exact.

On-Line Hebbian Learning

For on-line execution we cannot (yet) solve the functional saddle-point equation analytically. However, some explicit analytical predictions can still be extracted [9]:

R = R₀ + ηt √(2/π)    Q = Q₀ + 2ηtR₀ √(2/π) + η²t + η²t² [2/π + 1/α]        (16)

∫dx x P[x|y] = Ry + (ηt/α) sgn(y)        (17)

P[x|y] ∼ [α/(2πη²t²)]^{1/2} exp[−α(x − Ry − (ηt/α) sgn(y))²/(2η²t²)]    (t → ∞)        (18)
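The closed-form batch predictions (13)-(15) are easy to evaluate numerically; a sketch (the function name is ours), using Gauss-Hermite quadrature for the Gaussian average ∫Dy appearing in E_t:

```python
import numpy as np
from math import erf, pi, sqrt, acos

def batch_hebbian_theory(t, alpha, eta=1.0, Q0=1.0, R0=0.0):
    """Evaluate the batch-Hebbian predictions (13)-(15) at time t."""
    R = R0 + eta * t * sqrt(2.0 / pi)
    Q = Q0 + 2.0 * eta * t * R0 * sqrt(2.0 / pi) \
        + (eta * t) ** 2 * (2.0 / pi + 1.0 / alpha)
    E_g = acos(R / sqrt(Q)) / pi
    # Gaussian measure Dy via Gauss-Hermite nodes: y = sqrt(2)*u, weight w/sqrt(pi)
    u, w = np.polynomial.hermite.hermgauss(80)
    y = sqrt(2.0) * u
    vals = np.array([erf((abs(yi) * R + eta * t / alpha)
                         / sqrt(2.0 * (Q - R * R))) for yi in y])
    E_t = 0.5 - 0.5 * float(w @ vals) / sqrt(pi)
    return R, Q, E_g, E_t
```

At t = 0 (with R₀ = 0) both errors equal 1/2; for t > 0 the training error lies below the generalization error, as in Figure 2.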
Figure 3: Simulation results for on-line Hebbian learning (N = 10,000) versus dynamical replica theory, for α ∈ {2.0, 1.0, 0.5}. Dots: local fields (x, y) = (J·ξ, B·ξ) (calculated for examples in the training set), at time t = 50. Dashed lines: conditional average of student field x as a function of y, as predicted by the theory, x̄(y) = Ry + (ηt/α) sgn(y).

Figure 4: Simulations of Hebbian on-line learning with N = 10,000. Histograms: student field distributions measured at t = 10 and t = 20. Lines: theoretical predictions for student field distributions (using the approximate solution of the diffusion equation, see main text), for α = 4 (left), α = 1 (middle), α = 0.25 (right).

Comparison with the exact result of [10] shows that the above expressions (16,17,18), and therefore also that of E_g at any time, are all rigorously exact. At intermediate times it turns out that a good approximation of the solution of our dynamic equations for on-line Hebbian learning (exact for t ≪ α and for t → ∞) is given by

P[x|y] = [2π(Q − R² + η²t/α)]^{−1/2} e^{−½[x − Ry − (ηt/α) sgn(y)]²/(Q − R² + η²t/α)}        (19)

E_g = (1/π) arccos[R/√Q]    E_t = 1/2 − (1/2) ∫Dy erf[(|y|R + ηt/α)/√(2(Q − R² + η²t/α))]        (20)

In Figure 2 we compare the approximate predictions (20) with the results obtained from numerical simulations (N = 10,000, Q₀ = 1, R₀ = 0, η = 1). All curves show excellent agreement between theory and experiment. We also compare the theoretical predictions for the distribution P[x|y] with the results of numerical simulations.
This is done in Figure 3, where we show the fields as observed at t = 50 in simulations (same parameters as in Figure 2) of on-line Hebbian learning, for three different values of α. In the same figure we draw (dashed lines) the theoretical prediction for the y-dependent average (17) of the conditional x-distribution P[x|y]. Finally we compare the student field distribution P[x] = ∫Dy P[x|y] according to (19) with that observed in numerical simulations; see Figure 4. The agreement is again excellent (note: here the learning process has almost equilibrated).

6 DISCUSSION

In this paper we have shown how the formalism of dynamical replica theory [7] can be used successfully to build a general theory with which to predict the evolution of the relevant macroscopic performance measures, including the training and generalisation errors, for supervised (on-line and batch) learning in layered neural networks with randomly composed but restricted training sets (i.e. for finite α = p/N). Here the student fields are no longer described by Gaussian distributions, and the more familiar statistical mechanical formalism breaks down. For simplicity and transparency we have restricted ourselves to single-layer systems and realizable tasks. In our approach the joint distribution P[x, y] for student and teacher fields is itself taken to be a dynamical order parameter, in addition to the conventional observables Q and R. From the order parameter set {Q, R, P}, in turn, we derive both the generalization error E_g and the training error E_t. Following the prescriptions of dynamical replica theory, one finds a diffusion equation for P[x, y], which we have evaluated by making the replica-symmetric ansatz in the saddle-point equations.
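The student-field marginal P[x] = ∫Dy P[x|y] used in Figure 4 follows from the approximate conditional law (19) by a single numerical y-average. A sketch (function name, grids, and the choice Q₀ = 1, R₀ = 0 are ours, matching the simulation parameters quoted above):

```python
import numpy as np

def student_field_density(x, t, alpha, eta=1.0):
    """P[x] = int Dy P[x|y], with P[x|y] the Gaussian approximation (19)
    for on-line Hebbian learning (Q0 = 1, R0 = 0)."""
    R = eta * t * np.sqrt(2.0 / np.pi)
    Q = 1.0 + eta**2 * t + (eta * t)**2 * (2.0 / np.pi + 1.0 / alpha)
    var = Q - R**2 + eta**2 * t / alpha          # conditional variance in (19)
    y = np.linspace(-8.0, 8.0, 801)              # quadrature grid for int Dy
    Dy = np.exp(-0.5 * y**2) / np.sqrt(2.0 * np.pi) * (y[1] - y[0])
    mean = R * y + (eta * t / alpha) * np.sign(y)
    P_xy = np.exp(-0.5 * (x[:, None] - mean[None, :])**2 / var) \
        / np.sqrt(2.0 * np.pi * var)
    return P_xy @ Dy
```

For small α the (ηt/α) sgn(y) shift pushes the two halves of the mixture apart, so P[x] visibly departs from a single Gaussian; this is the structure seen in the histograms of Figure 4.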
This equation has Gaussian solutions only for α → ∞; in the latter case we indeed recover correctly from our theory the more familiar formalism of infinite training sets, with closed equations for Q and R only. For finite α our theory is by construction exact if for N → ∞ the dynamical order parameters {Q, R, P} obey closed, deterministic equations, which are self-averaging (i.e. independent of the microscopic realization of the training set). If this is not the case, our theory is an approximation. We have worked out our general equations explicitly for the special case of Hebbian learning, where the existence of an exact solution [10], derived from the microscopic equations (for finite α), allows us to perform a critical test of our theory. Our theory is found to be fully exact for batch Hebbian learning. For on-line Hebbian learning full exactness is difficult to determine, but exactness can be established at least for (i) t → ∞, and (ii) the predictions for Q, R, E_g and x̄(y) = ∫dx x P[x|y] at any time. A simple approximate solution of our equations already shows excellent agreement between theory and experiment. The present study clearly represents only a first step, and many extensions, applications and generalizations are currently under way. More specifically, we study alternative learning rules as well as the extension of this work to the case of noisy data and of soft committee machines.

References
[1] Kinzel W. and Rujan P. (1990), Europhys. Lett. 13, 473
[2] Kinouchi O. and Caticha N. (1992), J. Phys. A: Math. Gen. 25, 6243
[3] Biehl M. and Schwarze H. (1992), Europhys. Lett. 20, 733; Biehl M. and Schwarze H. (1995), J. Phys. A: Math. Gen. 28, 643
[4] Saad D. and Solla S. (1995), Phys. Rev. Lett. 74, 4337
[5] Mace C.W.H. and Coolen A.C.C. (1998), Statistics and Computing 8, 55
[6] Horner H. (1992a), Z. Phys. B 86, 291; Horner H. (1992b), Z. Phys. B 87, 371
[7] Coolen A.C.C., Laughton S.N. and Sherrington D. (1996), Phys. Rev. B 53, 8184
[8] Mezard M., Parisi G. and Virasoro M.A. (1987), Spin-Glass Theory and Beyond (Singapore: World Scientific)
[9] Coolen A.C.C. and Saad D. (1998), in preparation
[10] Rae H.C., Sollich P. and Coolen A.C.C. (1998), these proceedings