Reinforcement Learning with Hierarchies of Machines*

Ronald Parr and Stuart Russell
Computer Science Division, UC Berkeley, CA 94720
{parr,russell}@cs.berkeley.edu

Abstract

We present a new approach to reinforcement learning in which the policies considered by the learning process are constrained by hierarchies of partially specified machines. This allows for the use of prior knowledge to reduce the search space and provides a framework in which knowledge can be transferred across problems and in which component solutions can be recombined to solve larger and more complicated problems. Our approach can be seen as providing a link between reinforcement learning and "behavior-based" or "teleo-reactive" approaches to control. We present provably convergent algorithms for problem-solving and learning with hierarchical machines and demonstrate their effectiveness on a problem with several thousand states.

1 Introduction

Optimal decision making in virtually all spheres of human activity is rendered intractable by the complexity of the task environment. Generally speaking, the only way around intractability has been to provide a hierarchical organization for complex activities. Although it can yield suboptimal policies, top-down hierarchical control often reduces the complexity of decision making from exponential to linear in the size of the problem. For example, hierarchical task network (HTN) planners can generate solutions containing tens of thousands of steps [5], whereas "flat" planners can manage only tens of steps. HTN planners are successful because they use a plan library that describes the decomposition of high-level activities into lower-level activities. This paper describes an approach to learning and decision making in uncertain environments (Markov decision processes) that uses a roughly analogous form of prior knowledge. We use hierarchical abstract machines (HAMs), which impose constraints on the policies considered by our learning algorithms.
HAMs consist of nondeterministic finite state machines whose transitions may invoke lower-level machines. Nondeterminism is represented by choice states where the optimal action is yet to be decided or learned. The language allows a variety of prior constraints to be expressed, ranging from no constraint all the way to a fully specified solution. One useful intermediate point is the specification of just the general organization of behavior into a layered hierarchy, leaving it up to the learning algorithm to discover exactly which lower-level activities should be invoked by higher levels at each point.

*This research was supported in part by ARO under the MURI program "Integrated Approach to Intelligent Systems," grant number DAAH04-96-1-0341.

Figure 1: (a) An MDP with approximately 3600 states. The initial state is in the top left. (b) Closeup showing a typical obstacle. (c) Nondeterministic finite-state controller for negotiating obstacles.

The paper begins with a brief review of Markov decision processes (MDPs) and a description of hierarchical abstract machines. We then present, in abbreviated form, the following results: 1) Given any HAM and any MDP, there exists a new MDP such that the optimal policy in the new MDP is optimal in the original MDP among those policies that satisfy the constraints specified by the HAM. This means that even with complex machine specifications we can still apply standard decision-making and learning methods. 2) An algorithm exists that determines this optimal policy, given an MDP and a HAM. 3) On an illustrative problem with 3600 states, this algorithm yields dramatic performance improvements over standard algorithms applied to the original MDP. 4) A reinforcement learning algorithm exists that converges to the optimal policy, subject to the HAM constraints, with no need to construct explicitly a new MDP.
5) On the sample problem, this algorithm learns dramatically faster than standard RL algorithms. We conclude with a discussion of related approaches and ongoing work.

2 Markov Decision Processes

We assume the reader is familiar with the basic concepts of MDPs. To review, an MDP is a 4-tuple, (S, A, T, R), where S is a set of states, A is a set of actions, T is a transition model mapping S × A × S into probabilities in [0, 1], and R is a reward function mapping S × A × S into real-valued rewards. Algorithms for solving MDPs can return a policy π that maps from S to A, a real-valued value function V on states, or a real-valued Q-function on state-action pairs. In this paper, we focus on infinite-horizon MDPs with a discount factor β. The aim is to find an optimal policy π* (or, equivalently, V* or Q*) that maximizes the expected discounted total reward of the agent. Throughout the paper, we will use as an example the MDP shown in Figure 1(a). Here A contains four primitive actions (up, down, left, right). The transition model, T, specifies that each action succeeds 80% of the time, while 20% of the time the agent moves in an unintended perpendicular direction. The agent begins in a start state in the upper left corner. A reward of 5.0 is given for reaching the goal state, and the discount factor β is 0.999.

3 Hierarchical abstract machines

A HAM is a program which, when executed by an agent in an environment, constrains the actions that the agent can take in each state. For example, a very simple machine might dictate, "repeatedly choose right or down," which would eliminate from consideration all policies that go up or left. HAMs extend this simple idea of constraining policies by providing a hierarchical means of expressing constraints at varying levels of detail and specificity.
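For concreteness, the gridworld MDP described in Section 2 can be sketched in code. This is a minimal illustration on a tiny 3×3 grid with the stated action noise (80% intended, 10% each unintended perpendicular) and discount β = 0.999; the grid size and layout here are assumptions, not the ~3600-state maze of Figure 1.

```python
# Illustrative sketch: the gridworld MDP of Section 2 on a tiny 3x3 grid
# (the paper's maze has ~3600 states; the layout here is an assumption).
ROWS, COLS = 3, 3
GOAL = (2, 2)
BETA = 0.999  # discount factor from the paper
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
PERP = {"up": ("left", "right"), "down": ("left", "right"),
        "left": ("up", "down"), "right": ("up", "down")}

def move(state, direction):
    """Deterministic motion; bumping into the boundary leaves the state unchanged."""
    r, c = state[0] + ACTIONS[direction][0], state[1] + ACTIONS[direction][1]
    return (r, c) if 0 <= r < ROWS and 0 <= c < COLS else state

def transitions(s, a):
    """T(s, a, .): 80% intended direction, 10% each unintended perpendicular."""
    probs = {}
    for d, p in [(a, 0.8), (PERP[a][0], 0.1), (PERP[a][1], 0.1)]:
        t = move(s, d)
        probs[t] = probs.get(t, 0.0) + p
    return probs

def reward(s, a, t):
    """R(s, a, t): 5.0 for reaching the goal, as in the paper's example."""
    return 5.0 if t == GOAL else 0.0

def value_iteration(sweeps=2000):
    V = {(r, c): 0.0 for r in range(ROWS) for c in range(COLS)}
    for _ in range(sweeps):
        for s in V:
            if s == GOAL:
                continue  # goal treated as absorbing
            V[s] = max(sum(p * (reward(s, a, t) + BETA * V[t])
                           for t, p in transitions(s, a).items())
                       for a in ACTIONS)
    return V

V = value_iteration()
```

States closer to the goal receive higher values, and with β so close to 1 all values approach the one-time goal reward of 5.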
Machines for HAMs are defined by a set of states, a transition function, and a start function that determines the initial state of the machine. Machine states are of four types: Action states execute an action in the environment. Call states execute another machine as a subroutine. Choice states nondeterministically select a next machine state. Stop states halt execution of the machine and return control to the previous call state. The transition function determines the next machine state after an action or call state as a stochastic function of the current machine state and some features of the resulting environment state. Machines will typically use a partial description of the environment to determine the next state. Although machines can function in partially observable domains, for the purposes of this paper we make the standard assumption that the agent has access to a complete description as well. A HAM is defined by an initial machine in which execution begins and the closure of all machines reachable from the initial machine.

Figure 1(c) shows a simplified version of one element of the HAM we used for the MDP in Figure 1. This element is used for traversing a hallway while negotiating obstacles of the kind shown in Figure 1(b). It runs until the end of the hallway or an intersection is reached. When it encounters an obstacle, a choice point is created to choose between two possible next machine states. One calls the backoff machine to back away from the obstacle and then move forward until the next one. The other calls the follow-wall machine to try to get around the obstacle. The follow-wall machine is very simple and will be tricked by obstacles that are concave in the direction of intended movement; the backoff machine, on the other hand, can move around any obstacle in this world but could waste time backing away from some obstacles unnecessarily and should be used sparingly.
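The four machine-state types can be rendered as a tiny interpreter. This is a hypothetical sketch for illustration only: machine states here simply run in sequence, whereas the paper's transition function is a stochastic function of the machine state and features of the environment state.

```python
# Hypothetical sketch of the four HAM machine-state types (action, call,
# choice, stop) and a toy interpreter. For simplicity, states run in
# sequence; the paper's transition function is stochastic and depends on
# features of the environment state.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:      # executes a primitive action in the environment
    name: str

@dataclass
class Call:        # executes another machine as a subroutine
    machine: "Machine"

@dataclass
class Choice:      # nondeterministic: the learner decides among options
    options: List[object]

@dataclass
class Stop:        # halts this machine, returning control to the caller
    pass

@dataclass
class Machine:
    states: List[object]

def run(machine: "Machine", act: Callable, choose: Callable, trace=None):
    trace = [] if trace is None else trace
    for s in machine.states:
        if isinstance(s, Action):
            act(s.name)
            trace.append(s.name)
        elif isinstance(s, Call):
            run(s.machine, act, choose, trace)   # subroutine call
        elif isinstance(s, Choice):
            run(Machine([choose(s.options)]), act, choose, trace)
        elif isinstance(s, Stop):
            break
    return trace

# "Choose right or down", resolved here by a fixed chooser picking the first option.
m = Machine([Choice([Action("right"), Action("down")]), Action("down"), Stop()])
log = run(m, act=lambda a: None, choose=lambda opts: opts[0])
```

The `choose` callback is exactly the hook that learning fills in: a learned policy over choice states replaces the fixed chooser.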
Our complete "navigation HAM" involves a three-level hierarchy, somewhat reminiscent of a Brooks-style architecture but with hard-wired decisions replaced by choice states. The top level of the hierarchy is basically just a choice state for choosing a hallway navigation direction from the four coordinate directions. This machine has control initially and regains control at intersections or corners. The second level of the hierarchy contains four machines for moving along hallways, one for each direction. Each machine at this level has a choice state with four basic strategies for handling obstacles. Two back away from obstacles and two attempt to follow walls to get around obstacles. The third level of the hierarchy implements these strategies using the primitive actions. The transition function for this HAM assumes that an agent executing the HAM has access to a short-range, low-directed sonar that detects obstacles in any of the four axis-parallel adjacent squares and a long-range, high-directed sonar that detects larger objects such as the intersections and the ends of hallways. The HAM uses these partial state descriptions to identify feasible choices. For example, the machine to traverse a hallway northwards would not be called from the start state because the high-directed sonar would detect a wall to the north. Our navigation HAM represents an abstract plan to move about the environment by repeatedly selecting a direction and pursuing this direction until an intersection is reached. Each machine for navigating in the chosen direction represents an abstract plan for moving in a particular direction while avoiding obstacles. The next section defines how a HAM interacts with a specific MDP and how to find an optimal policy that respects the HAM constraints.

4 Defining and solving the HAM-induced MDP

A policy for a model, M, that is HAM-consistent with HAM H is a scheme for making choices whenever an agent executing H in M enters a choice state.
To find the optimal HAM-consistent policy, we apply H to M to yield an induced MDP, HoM. A somewhat simplified description of the construction of HoM is as follows: 1) The set of states in HoM is the cross-product of the states of H with the states of M. 2) For each state in HoM where the machine component is an action state, the model and machine transition functions are combined. 3) For each state where the machine component is a choice state, actions that change only the machine component of the state are introduced. 4) The reward is taken from M for primitive actions; otherwise it is zero.

Figure 2: Experimental results showing policy value (at the initial state) as a function of runtime on the domain shown in Figure 1. (a) Policy iteration with and without the HAM. (b) Q-learning with and without the HAM (averaged over 10 runs).

With this construction, we have the following (proof omitted):

Lemma 1 For any Markov decision process M and any¹ HAM H, the induced process HoM is a Markov decision process.

Lemma 2 If π is an optimal policy for HoM, then the primitive actions specified by π constitute the optimal policy for M that is HAM-consistent with H.

Of course, HoM may be quite large. Fortunately, there are two things that will make the problem much easier in most cases. The first is that not all pairs of HAM states and environment states will be possible, i.e., reachable from an initial state. The second is that the actual complexity of the induced MDP is determined by the number of choice points, i.e., states of HoM in which the HAM component is a choice state. This leads to the following:

Theorem 1 For any MDP, M, and HAM, H, let C be the set of choice points in HoM.
There exists a decision process, reduce(HoM), with states C such that the optimal policy for reduce(HoM) corresponds to the optimal policy for M that is HAM-consistent with H.

Proof sketch We begin by applying Lemma 1 and then observing that in states of HoM where the HAM component is not a choice state, only one action is permitted. These states can be removed to produce an equivalent semi-Markov decision process (SMDP). (SMDPs are a generalization of Markov decision processes that permit different discount rates for different transitions.) The optimal policy for this SMDP will be the same as the optimal policy for HoM and, by Lemma 2, this will be the optimal policy for M that is HAM-consistent with H. □

¹To preserve the Markov property, we require that if a machine has more than one possible caller in the hierarchy, each appearance is treated as a distinct machine. This is equivalent to requiring that the call graph for the HAM is a tree. It follows from this that circular calling sequences are also forbidden.

This theorem formally establishes the mechanism by which the constraints embodied in a HAM can be used to simplify an MDP. As an example of the power of this theorem, and to demonstrate that this transformation can be done efficiently, we applied our navigation HAM to the problem described in the previous section. Figure 2(a) shows the results of applying policy iteration to the original model and to the transformed model. Even when we add in the cost of transformation (which, with our rather underoptimized code, takes 866 seconds), the HAM method produces a good policy in less than a quarter of the time required to find the optimal policy in the original model. The actual solution time is 185 seconds versus 4544 seconds. An important property of the HAM approach is that model transformation produces an MDP that is an accurate model of the application of the HAM to the original MDP.
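The cross-product construction behind Lemma 1 and Theorem 1 can be sketched for a toy case. The machine and environment state names below are invented for illustration; only step 1 of the construction and the identification of the choice points C are shown.

```python
# Toy sketch of the HoM construction: states are pairs of machine state and
# environment state (step 1), and choice points are the pairs whose machine
# component is a choice state. All state names here are hypothetical.
from itertools import product

def induced_states(machine_states, env_states):
    return list(product(machine_states, env_states))

def choice_points(hom_states, is_choice):
    return [(m, s) for (m, s) in hom_states if is_choice(m)]

M_STATES = ["move_forward", "choose_strategy", "stop"]   # hypothetical machine
E_STATES = ["hall", "obstacle"]                          # hypothetical environment
hom = induced_states(M_STATES, E_STATES)
cps = choice_points(hom, is_choice=lambda m: m == "choose_strategy")
```

Only `cps` survives into reduce(HoM), which is why the effective problem size scales with the number of choice points rather than with the full cross-product.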
Unlike typical approximation methods for MDPs, the HAM method can give strict performance guarantees. The solution to the transformed model reduce(HoM) is the optimal solution from within a well-defined class of policies, and the value assigned to this solution is the true expected value of applying the concrete HAM policy to the original MDP.

5 Reinforcement learning with HAMs

HAMs can be of even greater advantage in a reinforcement learning context, where the effort required to obtain a solution typically scales very badly with the size of the problem. HAM constraints can focus exploration of the state space, reducing the "blind search" phase that reinforcement learning agents must endure while learning about a new environment. Learning will also be faster for the same reason policy iteration is faster in the HAM-induced model; the agent is effectively operating in a reduced state space. We now introduce a variation of Q-learning called HAMQ-learning that learns directly in the reduced state space without performing the model transformation described in the previous section. This is significant because the environment model is not usually known a priori in reinforcement learning contexts. A HAMQ-learning agent keeps track of the following quantities: t, the current environment state; n, the current machine state; s_c and m_c, the environment state and machine state at the previous choice point; a, the choice made at the previous choice point; and r_c and β_c, the total accumulated reward and discount since the previous choice point. It also maintains an extended Q-table, Q([s, m], a), which is indexed by an environment-state/machine-state pair and by an action taken at a choice point. For every environment transition from state s to state t with observed reward r and discount β, the HAMQ-learning agent updates: r_c ← r_c + β_c r and β_c ← β β_c.
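These accumulator updates, together with the choice-point backup that follows, can be sketched as a small class. This is a minimal illustration: greedy action selection stands in for the Boltzmann exploration used in the experiments, and the HAM/environment machinery is abstracted away.

```python
# Minimal sketch of HAMQ-learning bookkeeping. Greedy selection stands in
# for the Boltzmann exploration used in the paper's experiments.
from collections import defaultdict

class HAMQ:
    def __init__(self, alpha=0.1, beta=0.999):
        self.Q = defaultdict(float)        # Q([s, m], a), keyed ((s, m), a)
        self.alpha, self.beta = alpha, beta
        self.r_c, self.beta_c = 0.0, 1.0   # reward/discount since last choice
        self.prev = None                   # (s_c, m_c, a) at last choice point

    def observe(self, r):
        """Per environment transition: r_c += beta_c * r; beta_c *= beta."""
        self.r_c += self.beta_c * r
        self.beta_c *= self.beta

    def at_choice_point(self, t, n, actions):
        """Back up the previous choice, reset accumulators, pick the next choice."""
        V = max(self.Q[((t, n), a)] for a in actions)
        if self.prev is not None:
            s_c, m_c, a = self.prev
            key = ((s_c, m_c), a)
            self.Q[key] += self.alpha * (self.r_c + self.beta_c * V - self.Q[key])
        self.r_c, self.beta_c = 0.0, 1.0
        choice = max(actions, key=lambda x: self.Q[((t, n), x)])  # greedy stand-in
        self.prev = (t, n, choice)
        return choice

agent = HAMQ(alpha=0.1)
agent.at_choice_point("s0", "m0", ["a", "b"])  # first choice point
agent.observe(5.0)                             # one rewarded transition
agent.at_choice_point("s1", "m1", ["a", "b"])  # triggers the Q backup
```

Note that Q-values are stored only for choice points, which is exactly the reduced state space of reduce(HoM).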
For each transition to a choice point, the agent does

Q([s_c, m_c], a) ← Q([s_c, m_c], a) + α[r_c + β_c V([t, n]) − Q([s_c, m_c], a)],

and then r_c ← 0, β_c ← 1.

Theorem 2 For any finite-state MDP, M, and any HAM, H, HAMQ-learning will converge to the optimal choice for every choice point in reduce(HoM) with probability 1.

Proof sketch We note that the expected reinforcement signal in HAMQ-learning is the same as the expected reinforcement signal that would be received if the agent were acting directly in the transformed model of Theorem 1 above. Thus, Theorem 1 of [11] can be applied to prove the convergence of the HAMQ-learning agent, provided that we enforce suitable constraints on the exploration strategy and the update parameter decay rate. □

We ran some experiments to measure the performance of HAMQ-learning on our sample problem. Exploration was achieved by selecting actions according to the Boltzmann distribution with a temperature parameter for each state. We also used an inverse decay for the update parameter α. Figure 2(b) compares the learning curves for Q-learning and HAMQ-learning. HAMQ-learning appears to learn much faster: Q-learning required 9,000,000 iterations to reach the level achieved by HAMQ-learning after 270,000 iterations. Even after 20,000,000 iterations, Q-learning did not do as well as HAMQ-learning.²

²Speedup techniques such as eligibility traces could be applied to get better Q-learning results; such methods apply equally well to HAMQ-learning.

6 Related work

State aggregation (see, e.g., [18] and [7]) clusters "similar" states together and assigns them the same value, effectively reducing the state space. This is orthogonal to our approach and could be combined with HAMs. However, aggregation should be used with caution, as it treats distinct states as a single state and can violate the Markov property, leading to the loss of performance guarantees and oscillation or divergence in reinforcement learning.
Moreover, state aggregation may be hard to apply effectively in many cases. Dean and Lin [8] and Bertsekas and Tsitsiklis [2] showed that some MDPs are loosely coupled and hence amenable to divide-and-conquer algorithms. A machine-like language was used in [13] to partition an MDP into decoupled subproblems. In problems that are amenable to decoupling, these approaches could be used in combination with HAMs. Dayan and Hinton [6] have proposed feudal RL, which specifies an explicit subgoal structure, with fixed values for each subgoal achieved, in order to achieve a hierarchical decomposition of the state space. Dietterich extends and generalizes this approach in [9]. Singh has investigated a number of approaches to subgoal-based decomposition in reinforcement learning (e.g., [17] and [16]). Subgoals seem natural in some domains, but they may require a significant amount of outside knowledge about the domain, and establishing the relationship between the values of subgoals and the overall problem can be difficult. Bradtke and Duff [3] proposed an RL algorithm for SMDPs. Sutton [19] proposes temporal abstractions, which concatenate sequences of state transitions together to permit reasoning about temporally extended events, and which can thereby form a behavioral hierarchy as in [14] and [15]. Lin's somewhat informal scheme [12] also allows agents to treat entire policies as single actions. These approaches can be encompassed within our framework by encoding the events or behaviors as machines. The design of hierarchically organized, "layered" controllers was popularized by Brooks [4]. His designs use a somewhat different means of passing control, but our analysis and theorems apply equally well to his machine description language. The "teleo-reactive" agent designs of Benson and Nilsson [1] are even closer to our HAM language. Both of these approaches assume that the agent is completely specified, albeit self-modifiable.
The idea of partial behavior descriptions can be traced at least to Hsu's partial programs [10], which were used with a deterministic logical planner.

7 Conclusions and future work

We have presented HAMs as a principled means of constraining the set of policies that are considered for a Markov decision process, and we have demonstrated the efficacy of this approach in a simple example for both policy iteration and reinforcement learning. Our results show very significant speedup for decision-making and learning, although of course this reflects the provision of knowledge in the form of the HAM. The HAM language provides a very general method of transferring knowledge to an agent, and we have only scratched the surface of what can be done with this approach. We believe that, if desired, subgoal information can be incorporated into the HAM structure, unifying subgoal-based approaches with the HAM approach. Moreover, the HAM structure provides a natural decomposition of the HAM-induced model, making it amenable to the divide-and-conquer approaches of [8] and [2]. There are opportunities for generalization across all levels of the HAM paradigm. Value function approximation can be used for the HAM-induced model, and inductive learning methods can be used to produce HAMs or to generalize their effects upon different regions of the state space. Gradient-following methods can also be used to adjust the transition probabilities of a stochastic HAM. HAMs also lend themselves naturally to partially observable domains. They can be applied directly when the choice points induced by the HAM are states where no confusion about the true state of the environment is possible. The application of HAMs to more general partially observable domains is more complicated and is a topic of ongoing research. We also believe that the HAM approach can be extended to cover the average-reward optimality criterion.
We expect that successful pursuit of these lines of research will provide a formal basis for understanding and unifying several seemingly disparate approaches to control, including behavior-based methods. It should also enable the use of the MDP framework in real-world applications of much greater complexity than hitherto attacked, much as HTN planning has extended the reach of classical planning methods.

References

[1] S. Benson and N. Nilsson. Reacting, planning and learning in an autonomous agent. In K. Furukawa, D. Michie, and S. Muggleton, editors, Machine Intelligence 14. Oxford University Press, Oxford, 1995.
[2] D. C. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Englewood Cliffs, New Jersey, 1989.
[3] S. J. Bradtke and M. O. Duff. Reinforcement learning methods for continuous-time Markov decision problems. In Advances in Neural Information Processing Systems 7: Proc. of the 1994 Conference, Denver, Colorado, December 1995. MIT Press.
[4] R. A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2, 1986.
[5] K. W. Currie and A. Tate. O-Plan: the Open Planning Architecture. Artificial Intelligence, 52(1), November 1991.
[6] P. Dayan and G. E. Hinton. Feudal reinforcement learning. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Neural Information Processing Systems 5, San Mateo, California, 1993. Morgan Kaufmann.
[7] T. Dean, R. Givan, and S. Leach. Model reduction techniques for computing approximately optimal solutions for Markov decision processes. In Proc. of the Thirteenth Conference on Uncertainty in Artificial Intelligence, Providence, Rhode Island, August 1997. Morgan Kaufmann.
[8] T. Dean and S.-H. Lin. Decomposition techniques for planning in stochastic domains. In Proc. of the Fourteenth Int. Joint Conference on Artificial Intelligence, Montreal, Canada, August 1995. Morgan Kaufmann.
[9] T. G. Dietterich.
Hierarchical reinforcement learning with the MAXQ value function decomposition. Technical report, Department of Computer Science, Oregon State University, Corvallis, Oregon, 1997.
[10] Y.-J. Hsu. Synthesizing efficient agents from partial programs. In Methodologies for Intelligent Systems: 6th Int. Symposium, ISMIS '91, Proc., Charlotte, North Carolina, October 1991. Springer-Verlag.
[11] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 6(6), 1994.
[12] L.-J. Lin. Reinforcement Learning for Robots Using Neural Networks. PhD thesis, Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, 1993.
[13] S.-H. Lin. Exploiting Structure for Planning and Control. PhD thesis, Computer Science Department, Brown University, Providence, Rhode Island, 1997.
[14] A. McGovern, R. S. Sutton, and A. H. Fagg. Roles of macro-actions in accelerating reinforcement learning. In 1997 Grace Hopper Celebration of Women in Computing, 1997.
[15] D. Precup and R. S. Sutton. Multi-time models for temporally abstract planning. In This Volume.
[16] S. P. Singh. Scaling reinforcement learning algorithms by learning variable temporal resolution models. In Proceedings of the Ninth International Conference on Machine Learning, Aberdeen, July 1992. Morgan Kaufmann.
[17] S. P. Singh. Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning, 8(3), May 1992.
[18] S. P. Singh, T. Jaakkola, and M. I. Jordan. Reinforcement learning with soft state aggregation. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Neural Information Processing Systems 7, Cambridge, Massachusetts, 1995. MIT Press.
[19] R. S. Sutton. Temporal abstraction in reinforcement learning. In Proc. of the Twelfth Int. Conference on Machine Learning, Tahoe City, CA, July 1995. Morgan Kaufmann.
Multiplicative Updating Rule for Blind Separation Derived from the Method of Scoring

Howard Hua Yang
Department of Computer Science, Oregon Graduate Institute
PO Box 91000, Portland, OR 97291, USA
hyang@cse.ogi.edu

Abstract

For blind source separation, when the Fisher information matrix is used as the Riemannian metric tensor for the parameter space, the steepest descent algorithm to maximize the likelihood function in this Riemannian parameter space becomes a serial updating rule with the equivariant property. This algorithm can be further simplified by using the asymptotic form of the Fisher information matrix around the equilibrium.

1 Introduction

The relative gradient was introduced by (Cardoso and Laheld, 1996) to design multiplicative updating algorithms with the equivariant property for blind separation problems. The idea is to calculate differentials by using a relative increment instead of an absolute increment in the parameter space. This idea has been extended to compute the relative Hessian by (Pham, 1996). For a matrix function f = f(W), the relative gradient is defined by

∇̃f = (∂f/∂W) W^T. (1)

From the differential of f(W) based on the relative gradient, the following learning rule is given by (Cardoso and Laheld, 1996) to maximize the function f:

dW/dt = η ∇̃f W = η (∂f/∂W) W^T W. (2)

Also motivated by designing blind separation algorithms with the equivariant property, the natural gradient,

∇̂f = (∂f/∂W) W^T W, (3)

was introduced in (Amari et al., 1996), which yields the same learning rule (2). The geometrical meaning of the natural gradient is given by (Amari, 1996). More details about the natural gradient can be found in (Yang and Amari, 1997) and (Amari, 1997). The framework of natural gradient learning was proposed by (Amari, 1997). In this framework, the ordinary gradient descent learning algorithm in the Euclidean space is not optimal in minimizing a function defined in a Riemannian space.
The ordinary gradient should be replaced by the natural gradient, which is defined by operating the inverse of the metric tensor in the Riemannian space on the ordinary gradient. Let w denote a parameter vector. It is proved by (Amari, 1997) that if C(w) is a loss function defined on a Riemannian space {w} with a metric tensor G, the negative natural gradient of C(w), namely −G^{-1} ∂C/∂w, is the steepest descent direction to decrease this function in the Riemannian space. Therefore, the steepest descent algorithm in this Riemannian space has the following form:

dw/dt = −η G^{-1} ∂C/∂w.

If the Fisher information matrix is used as the metric tensor for the Riemannian space and C(w) is replaced by the negative log-likelihood function, the above learning rule becomes the method of scoring (Kay, 1993), which is the focus of this paper. Both the relative gradient ∇̃ and the natural gradient ∇̂ were proposed in order to design multiplicative updating algorithms with the equivariant property. The former is due to a multiplicative increment in calculating the differential, while the latter is due to an increment based on a nonholonomic basis (Amari, 1997). Neither ∇̃ nor ∇̂ depends on the data model. The Fisher information matrix is a special and important choice for the Riemannian metric tensor for statistical estimation problems. It depends on the data model. Operating the inverse of the Fisher information matrix on the ordinary gradient, we have another gradient operator. It is called a natural gradient induced by the Fisher information matrix. In this paper, we show how to derive a multiplicative updating algorithm from the method of scoring. This approach is different from those based on the relative gradient and the natural gradient defined by (3).

2 Fisher Information Matrix For Blind Separation

Consider a linear mixing system:

x = A s

where A ∈ R^{n×n}, x = (x_1, ..., x_n)^T, and s = (s_1, ..., s_n)^T. Assume that the sources are independent with a factorized joint pdf:

r(s) = ∏_{i=1}^{n} r_i(s_i).
The likelihood function is

p(x; A) = r(A^{-1} x) / |A|

where |A| = |det(A)|. Let W = A^{-1} and y = W x (a demixing system); then we have the log-likelihood function

L(W) = Σ_{i=1}^{n} log r_i(y_i) + log |W|.

It is easy to obtain

∂L/∂W_ij = (r_i'(y_i) / r_i(y_i)) x_j + W^{-T}_{ij} (4)

where W^{-T}_{ij} is the (i, j) entry in W^{-T} = (W^{-1})^T. Writing (4) in matrix form, we have

∂L/∂W = W^{-T} − φ(y) x^T = (I − φ(y) y^T) W^{-T} = F(y) W^{-T} (5)

where φ(y) = (φ_1(y_1), ..., φ_n(y_n))^T, φ_i(y_i) = −r_i'(y_i)/r_i(y_i), and F(y) = I − φ(y) y^T. The maximum likelihood algorithm based on the ordinary gradient ∂L/∂W is

dW/dt = η (I − φ(y) y^T) W^{-T} = η F(y) W^{-T},

which has high computational complexity due to the matrix inverse W^{-1}. The maximum likelihood algorithm based on the natural gradient of matrix functions is

dW/dt = η ∇̂L = η (I − φ(y) y^T) W. (6)

The same algorithm is obtained from dW/dt = η ∇̃L W by using the relative gradient. An apparent reason for using this algorithm is to avoid the matrix inverse W^{-1}. Another good reason for using it is the fact that the matrix W driven by (6) never becomes singular if the initial matrix W is not singular. This is proved by (Yang and Amari, 1997). In fact, this property holds for any learning rule of the following type:

dW/dt = H(y) W. (7)

Let ⟨U, V⟩ = Tr(U^T V) denote the inner product of U and V ∈ R^{n×n}. When W(t) is driven by the equation (7), we have

d|W|/dt = ⟨∂|W|/∂W, dW/dt⟩ = ⟨|W| (W^{-1})^T, dW/dt⟩ = Tr(|W| W^{-1} H(y) W) = Tr(H(y)) |W|.

Therefore,

|W(t)| = |W(0)| exp{∫_0^t Tr(H(y(τ))) dτ} (8)

which is non-singular when the initial matrix W(0) is non-singular. The matrix function F(y) is also called an estimating function. At the equilibrium of the system (6), it satisfies the zero condition E[F(y)] = 0, i.e.,

E[φ_i(y_i) y_j] = δ_ij (9)

where δ_ij = 1 if i = j and 0 otherwise. To calculate the Fisher information matrix, we need a vector form of the equation (5).
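A discrete-time sketch of rule (6) on a toy two-source mixture illustrates both the update and the equivariant property. Here φ(y) = tanh(y) is a common hypothetical choice for super-Gaussian sources; the mixing matrix and the decaying step size are likewise assumptions, not taken from the text.

```python
import numpy as np

# Discrete-time sketch of the equivariant rule (6): W <- W + eta (I - phi(y) y^T) W.
# phi = tanh and the mixing matrix below are illustrative assumptions.
rng = np.random.default_rng(0)
n, T = 2, 20000
S = np.sign(rng.standard_normal((n, T))) * rng.exponential(1.0, (n, T))  # Laplacian-like sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                                   # unknown mixing matrix
X = A @ S                                                                # observed mixtures

W = np.eye(n)
for t in range(T):
    eta = 0.01 / (1.0 + t / 5000.0)                        # decaying step size (assumption)
    y = W @ X[:, t]
    W += eta * (np.eye(n) - np.outer(np.tanh(y), y)) @ W   # rule (6)

P = W @ A   # should approach a scaled permutation matrix
```

Because the update multiplies W on the right, the trajectory of the combined system W A depends on A only through W A itself, which is the equivariant property.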
Let Vec(·) denote an operator on a matrix which cascades the columns of the matrix from left to right and forms a column vector. This operator has the following property:

Vec(A B C) = (C^T ⊗ A) Vec(B) (10)

where ⊗ denotes the Kronecker product. Applying this property, we first rewrite (5) as

∂L/∂Vec(W) = Vec(∂L/∂W) = (W^{-1} ⊗ I) Vec(F(y)), (11)

and then obtain the Fisher information matrix

G = E[(∂L/∂Vec(W)) (∂L/∂Vec(W))^T] = (W^{-1} ⊗ I) E[Vec(F(y)) Vec^T(F(y))] (W^{-T} ⊗ I). (12)

The inverse of G is

G^{-1} = (W^T ⊗ I) D^{-1} (W ⊗ I) where D = E[Vec(F(y)) Vec^T(F(y))]. (13)

3 Natural Gradient Induced By Fisher Information Matrix

Define a Riemannian space V = {Vec(W); W ∈ Gl(n)} in which the Fisher information matrix G is used as its metric. Here, Gl(n) is the space of all the n × n invertible matrices. Let C(W) be a matrix function to be minimized. It is shown by (Amari, 1997) that the steepest descent direction in the Riemannian space V is −G^{-1} ∂C(W)/∂Vec(W). Let us define the natural gradient in V by

∇C(W) = (W^T ⊗ I) D^{-1} (W ⊗ I) ∂C/∂Vec(W), (14)

which is called the natural gradient induced by the Fisher information matrix. The time complexity of computing the natural gradient in the space V is high, since inverting the n² × n² matrix D is needed. Using the natural gradient in V to maximize the likelihood function L(W), i.e., the method of scoring, from (11) and (14) we have the following learning rule:

Vec(dW/dt) = η (W^T ⊗ I) D^{-1} Vec(F(y)). (15)

We shall prove that the above learning rule has the equivariant property. Denote by Vec^{-1} the inverse of the operator Vec. Let matrices B and A be of size n² × n² and n × n, respectively. Denote by B(i, ·) the i-th row of B and let B_i = Vec^{-1}(B(i, ·)), i = 1, ..., n². Define an operator * as a mapping from R^{n²×n²} × R^{n×n} to R^{n×n}: B * A is the n × n matrix whose (i, j) entry is ⟨B_{(j−1)n+i}, A⟩, i.e., its first column is (⟨B_1, A⟩, ..., ⟨B_n, A⟩)^T and its last column is (⟨B_{n²−n+1}, A⟩, ..., ⟨B_{n²}, A⟩)^T, where ⟨·, ·⟩ is the inner product in R^{n×n}.
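The Vec/Kronecker identity (10) is easy to check numerically. Since Vec cascades columns, it corresponds to column-major (Fortran-order) flattening:

```python
import numpy as np

# Numerical check of identity (10): Vec(ABC) = (C^T kron A) Vec(B),
# where Vec stacks the columns of a matrix (column-major order).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((2, 5))

def vec(M):
    return M.flatten(order="F")   # cascade columns from left to right

lhs = vec(A @ B @ C)              # Vec(ABC), length 15
rhs = np.kron(C.T, A) @ vec(B)    # (C^T kron A) Vec(B)
```

The non-square shapes are deliberate: they make it visible that (C^T ⊗ A) maps a length-8 vector Vec(B) to the length-15 vector Vec(ABC).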
With the operation $*$, we have
$$B\, \mathrm{Vec}(A) = \begin{bmatrix} \langle B_1, A \rangle \\ \vdots \\ \langle B_{n^2}, A \rangle \end{bmatrix} = \mathrm{Vec}\left(\mathrm{Vec}^{-1}\left(\begin{bmatrix} \langle B_1, A \rangle \\ \vdots \\ \langle B_{n^2}, A \rangle \end{bmatrix}\right)\right) = \mathrm{Vec}(B * A),$$
i.e., $B\, \mathrm{Vec}(A) = \mathrm{Vec}(B * A)$. Applying the above relation, we first rewrite the equation (15) as
$$\mathrm{Vec}\left(\frac{dW}{dt}\right) = \eta\, (W^T \otimes I)\, \mathrm{Vec}(D^{-1} * F(\mathbf{y})),$$
then applying (10) to the above equation we obtain
$$\frac{dW}{dt} = \eta\, (D^{-1} * F(\mathbf{y}))\, W. \qquad (16)$$

Theorem 1 For the blind separation problem, the maximum likelihood algorithm based on the natural gradient induced by the Fisher information matrix, or the method of scoring, has the form (16), which is a multiplicative updating rule with the equivariant property.

To implement the algorithm (16), we estimate $D$ by sample average. Let $f_{ij}(\mathbf{y})$ be the $(i,j)$ entry in $F(\mathbf{y})$. A general form for the entries in $D$ is $d_{ij,kl} = E[f_{ij}(\mathbf{y})\, f_{kl}(\mathbf{y})]$, which depends on the source pdfs $r_i(s_i)$. When the source pdfs are unknown, in practice we choose $r_i(s_i)$ as our prior assumptions about the source pdfs. To simplify the algorithm (16), we replace $D$ by its asymptotic form at the solution points $\mathbf{a} = (c_1 s_{\sigma(1)}, \ldots, c_n s_{\sigma(n)})^T$, where $(\sigma(1), \ldots, \sigma(n))$ is a permutation of $(1, \ldots, n)$. Regarding the structure of the asymptotic $D$, we have the following theorem:

Theorem 2 Assume that the pdfs of the sources $s_i$ are even functions. Then at the solution point $\mathbf{a} = (c_1 s_{\sigma(1)}, \ldots, c_n s_{\sigma(n)})^T$, $D$ is a diagonal matrix and its $n^2$ diagonal entries have two forms, namely, $E[f_{ij}(\mathbf{a})\, f_{ij}(\mathbf{a})] = \mu_i \lambda_j$ for $i \neq j$, and $E[(f_{ii}(\mathbf{a}))^2] = \nu_i$, where $\mu_i = E[\phi_i^2(a_i)]$, $\lambda_i = E[a_i^2]$ and $\nu_i = E[\phi_i^2(a_i)\, a_i^2] - 1$. More concisely, we have
$$D = \mathrm{diag}(\mathrm{Vec}(H)) \quad \text{where} \quad H = (\mu_i \lambda_j)_{n \times n} - \mathrm{diag}(\mu_1 \lambda_1, \ldots, \mu_n \lambda_n) + \mathrm{diag}(\nu_1, \ldots, \nu_n). \qquad (17)$$
The proof of Theorem 2 is given in Appendix 1.

Let $H = (h_{ij})_{n \times n}$. Since all $\mu_i$, $\lambda_i$, and $\nu_i$ are positive, so are all $h_{ij}$. We define $\bar{H} = \left(\frac{1}{h_{ij}}\right)_{n \times n}$. Then from (17), we have $D^{-1} = \mathrm{diag}(\mathrm{Vec}(\bar{H}))$. The results in Theorem 2 enable us to simplify the algorithm (16) to obtain a low-complexity learning rule.
Since $D^{-1}$ is a diagonal matrix, for any $n \times n$ matrix $A$ we have
$$D^{-1}\, \mathrm{Vec}(A) = \mathrm{Vec}(\bar{H} \circ A) \qquad (18)$$
where $\circ$ denotes the componentwise multiplication of two matrices of the same dimension. Applying (18) to the learning rule (15), we obtain the following learning rule:
$$\mathrm{Vec}\left(\frac{dW}{dt}\right) = \eta\, (W^T \otimes I)\, \mathrm{Vec}(\bar{H} \circ F(\mathbf{y})).$$
Again, applying (10) to the above equation we have the following learning rule:
$$\frac{dW}{dt} = \eta\, (\bar{H} \circ F(\mathbf{y}))\, W. \qquad (19)$$
Like the learning rule (16), the algorithm (19) is also multiplicative; but unlike (16), there is no need to invert the $n^2 \times n^2$ matrix in (19). The computation of $\bar{H}$ is straightforward, by computing the reciprocals of the entries in $H$.

$(\mu_i, \lambda_i, \nu_i)$ are $3n$ unknowns in $G$. Let us impose the following constraint:
$$\nu_i = \mu_i \lambda_i. \qquad (20)$$
Under this constraint, the number of unknowns in $G$ is $2n$, and $D$ can be written as
$$D = D_\lambda \otimes D_\mu \qquad (21)$$
where $D_\lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ and $D_\mu = \mathrm{diag}(\mu_1, \ldots, \mu_n)$. From (14), using (21) we have the natural gradient descent rule in the Riemannian space $\mathcal{V}$:
$$\frac{d\, \mathrm{Vec}(W)}{dt} = -\eta \left( W^T D_\lambda^{-1} W \otimes D_\mu^{-1} \right) \frac{\partial e}{\partial \mathrm{Vec}(W)}. \qquad (22)$$
Applying the property (10), we rewrite the above equation in a matrix form:
$$\frac{dW}{dt} = -\eta\, D_\mu^{-1}\, \frac{\partial e}{\partial W}\, W^T D_\lambda^{-1} W. \qquad (23)$$
Since $\mu_i$ and $\lambda_i$ are unknown, $D_\mu$ and $D_\lambda$ are replaced by the identity matrix in practice. Therefore, the algorithm (2) is an approximation of the algorithm (23). Taking $e = -L(W)$ as the negative likelihood function and applying the expression (5), we have the following maximum likelihood algorithm based on the natural gradient in $\mathcal{V}$:
$$\frac{dW}{dt} = \eta\, D_\mu^{-1}\, (I - \Phi(\mathbf{y})\mathbf{y}^T)\, D_\lambda^{-1}\, W. \qquad (24)$$
Again, replacing $D_\mu$ and $D_\lambda$ by the identity matrix, we obtain the maximum likelihood algorithm (6) based on the relative gradient or natural gradient of matrix functions. In the context of blind separation, the source pdfs are unknown. The prior assumption $r_i(s_i)$ used to define the functions $\phi_i(y_i)$ may not match the true pdfs of the sources.
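The low-complexity rule (19) replaces the $n^2 \times n^2$ inversion of (16) with a componentwise division by $H$. A minimal sketch of one Euler step, assuming a tanh score function and taking the moments $\mu_i$, $\lambda_i$, $\nu_i$ as given (in practice they would be estimated by sample averages):

```python
import numpy as np

# Hedged sketch of one Euler step of rule (19),
#   dW/dt = eta * (Hbar o F(y)) W,
# where Hbar is the componentwise reciprocal of H from Theorem 2:
#   h_ij = mu_i * lambda_j for i != j,  h_ii = nu_i.
# tanh is an assumed stand-in for the score functions phi_i.
def simplified_step(W, x, mu, lam, nu, eta=0.01):
    y = W @ x
    phi = np.tanh(y)
    F = np.eye(len(y)) - np.outer(phi, y)      # estimating function F(y)
    H = np.outer(mu, lam)                      # off-diagonal moments mu_i * lambda_j
    np.fill_diagonal(H, nu)                    # diagonal moments nu_i
    return W + eta * (F / H) @ W               # componentwise 1/H: no n^2 x n^2 inverse
```

With all moments set to one, the step reduces to the plain rule (6).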
However, the algorithm (24) is generally robust to the mismatch between the true pdfs and the pdfs employed by the algorithm if the mismatch is not too large; see (Cardoso, 1997) and (Pham, 1996) for example.

4 Conclusion

In the context of blind separation, when the Fisher information matrix is used as the Riemannian metric tensor for the parameter space, maximizing the likelihood function in this Riemannian space based on the steepest descent method is the method of scoring. This method yields a multiplicative updating rule with the equivariant property. It is further simplified by using the asymptotic form of the Fisher information matrix around the equilibrium.

5 Appendix

Appendix 1. Proof of Theorem 2: By definition, $f_{ij}(\mathbf{y}) = \delta_{ij} - \phi_i(y_i)\, y_j$. At the equilibrium $\mathbf{a} = (c_1 s_{\sigma(1)}, \ldots, c_n s_{\sigma(n)})^T$, we have $E[\phi_i(a_i)\, a_j] = 0$ for $i \neq j$ and $E[\phi_i(a_i)\, a_i] = 1$, so $E[f_{ij}(\mathbf{a})] = 0$. Since the source pdfs are even functions, we have $E[a_i] = 0$ and $E[\phi_i(a_i)] = 0$. Applying these equalities, it is not difficult to verify that $E[f_{ij}(\mathbf{a})\, f_{kl}(\mathbf{a})] = 0$ for $(i,j) \neq (k,l)$. So $D$ is a diagonal matrix, and
$$E[f_{ii}(\mathbf{a})\, f_{ii}(\mathbf{a})] = E[(1 - \phi_i(a_i)\, a_i)^2] = E[\phi_i^2(a_i)\, a_i^2] - 1, \qquad E[f_{ij}(\mathbf{a})\, f_{ij}(\mathbf{a})] = E[\phi_i^2(a_i)\, a_j^2] = \mu_i \lambda_j \quad \text{for } i \neq j. \qquad (25)$$
Q.E.D.

References

[1] S. Amari. Natural gradient works efficiently in learning. Accepted by Neural Computation, 1997.
[2] S. Amari. Neural learning in structured parameter spaces - natural Riemannian gradient. In Advances in Neural Information Processing Systems, 9, ed. M. C. Mozer, M. I. Jordan and T. Petsche, The MIT Press: Cambridge, MA, pages 127-133, 1997.
[3] S. Amari, A. Cichocki, and H. H. Yang. A new learning algorithm for blind signal separation. In Advances in Neural Information Processing Systems, 8, eds. David S. Touretzky, Michael C. Mozer and Michael E. Hasselmo, MIT Press: Cambridge, MA, pages 757-763, 1996.
[4] J.-F. Cardoso. Infomax and maximum likelihood for blind source separation. IEEE Signal Processing Letters, April 1997.
[5] J.-F. Cardoso and B.
Laheld. Equivariant adaptive source separation. IEEE Trans. on Signal Processing, 44(12):3017-3030, December 1996.
[6] S. M. Kay. Fundamentals of Statistical Signal Processing: Estimation Theory. PTR Prentice Hall, Englewood Cliffs, 1993.
[7] D. T. Pham. Blind separation of instantaneous mixture of sources via an ICA. IEEE Trans. on Signal Processing, 44(11):2768-2779, November 1996.
[8] H. H. Yang and S. Amari. Adaptive on-line learning algorithms for blind separation: Maximum entropy and minimum mutual information. Neural Computation, 9(7):1457-1482, 1997.
On the Separation of Signals from Neighboring Cells in Tetrode Recordings

Maneesh Sahani, John S. Pezaris and Richard A. Andersen
maneesh@caltech.edu, pz@caltech.edu, andersen@vis.caltech.edu
Computation and Neural Systems, California Institute of Technology
216-76 Caltech, Pasadena, CA 91125 USA

Abstract

We discuss a solution to the problem of separating waveforms produced by multiple cells in an extracellular neural recording. We take an explicitly probabilistic approach, using latent-variable models of varying sophistication to describe the distribution of waveforms produced by a single cell. The models range from a single Gaussian distribution of waveforms for each cell to a mixture of hidden Markov models. We stress the overall statistical structure of the approach, allowing the details of the generative model chosen to depend on the specific neural preparation.

1 INTRODUCTION

Much of our empirical understanding of the systems-level functioning of the brain has come from a procedure called extracellular recording. The electrophysiologist inserts an insulated electrode with exposed tip into the extracellular space near one or more neuron cell bodies. Transient currents due to action potentials across nearby cell membranes are then recorded as deflections in potential, called spikes, at the electrode tip. At an arbitrary location in gray matter, an extracellular probe is likely to see perturbations due to firing in many nearby cells, each cell exhibiting a distinct waveform due to the differences in current path between the cells and the electrode tip. Commonly, the electrode is maneuvered until all the recorded deflections have almost the same shape; the spikes are then all presumed to have arisen from a single isolated cell. This process of cell isolation is time-consuming, and it permits recording from only one cell at a time.
If differences in spike waveform can be exploited to sort recorded events by cell, the experimental cost of extracellular recording can be reduced, and data on interactions between simultaneously recorded cells can be obtained. Many ad hoc solutions to spike sorting have been proposed and implemented, but thus far an explicit statistical foundation, with its accompanying benefits, has mostly been lacking. Lewicki (1994) is the exception to this rule and provides a well-founded probabilistic approach, but uses assumptions (such as isotropic Gaussian variability) that are not well supported in many data sets (see Fee et al. (1996)). A first step in the construction of a solution to the spike-sorting problem is the specification of a model by which the data are taken to be generated. The model has to be powerful enough to account for most of the variability observed in the data, while being simple enough to allow tractable and robust inference. In this paper we will discuss a number of models, of varying sophistication, that fall into a general framework. We will focus on the assumptions and inferential components that are common to these models and consider the specific models only briefly. In particular, we will state the inference algorithms for each model without derivation or proof; the derivations, as well as measures of performance, will appear elsewhere.

2 DATA COLLECTION

The algorithms that appear in this paper are likely to be of general applicability. They have been developed, however, with reference to data collected from the parietal cortex of adult rhesus macaques using tetrodes (Pezaris et al. 1997). The tetrode is a bundle of four individually insulated 13 μm-diameter wires twisted together and cut so that the exposed ends lie close together.
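The threshold rule used below for finding candidate events (a sample where a channel exceeds three times its RMS amplitude on two consecutive samples) might be sketched as follows; the high-pass filtering, upsampling, and peak-alignment steps are omitted, and taking the absolute value is an assumption, since spikes may deflect in either direction:

```python
import numpy as np

# Simplified sketch of the event-detection step: flag sample indices
# where any channel exceeds k times its RMS amplitude on two
# consecutive samples.  Filtering, upsampling, and peak alignment
# from the paper are omitted.
def detect_events(signal, k=3.0):
    """signal: (n_channels, n_samples) array; returns candidate indices."""
    rms = np.sqrt(np.mean(signal ** 2, axis=1, keepdims=True))
    above = np.abs(signal) > k * rms
    consec = above[:, :-1] & above[:, 1:]      # two consecutive supra-threshold samples
    return np.flatnonzero(consec.any(axis=0))
```

Nearby indices returned for the same deflection would then be merged into a single event during peak alignment.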
The potential on each wire is amplified (custom electronics), low-pass filtered (9-pole Bessel filter, $f_c = 6.4$ kHz) to prevent aliasing, and digitized ($f_s$ between 12.8 and 20 kHz) (filters and A/D converter from Tucker Davis Technologies). This data stream is recorded to digital media; subsequent operations are currently performed off-line. In preparation for inference, candidate events (where at least one cell fired) are identified in the data stream. The signal is digitally high-pass filtered ($f_c = 0.05 f_s$) and the root-mean-square (RMS) amplitude on each channel is calculated. This value is an upper bound on the noise power, and approaches the actual value when the firing rates of resolvable cells are low. Epochs where the signal rises above three times the RMS amplitude for two consecutive samples are taken to be spike events. The signal is upsampled in the region of each such threshold crossing, and the time of the maximal subsequent peak across all channels is determined to within one-tenth of a sample. A short section is then extracted at the original $f_s$ such that this peak time falls at a fixed position in the extracted segment. One such waveform is extracted for each threshold crossing.

3 GENERATIVE FRAMEWORK

Our basic model is as follows. The recorded potential trace $V(t)$ is the sum of influences that are due to resolvable foreground cells (which have a relatively large effect) and a background noise process. We write
$$V(t) = \sum_m \sum_\tau c_m^\tau\, S_m^\tau(t - \tau) + \eta(t). \qquad (1)$$
Here, $c_m^\tau$ is an indicator variable that takes the value 1 if the $m$th cell fires at time $\tau$ and 0 otherwise. If cell $m$ fires at $\tau$ it adds a deflection of shape $S_m^\tau(t - \tau)$ to the recorded potential. The effect of all background neural sources, and any electrical noise, is gathered into a single term $\eta(t)$. For a multichannel probe, such as a tetrode, all of $V(t)$, $\eta(t)$ and $S_m^\tau(t)$ are vector-valued.

Figure 1: Schematic graph of the general framework.

Note that we have indexed
the spike shapes from the $m$th cell by time; this allows us to model changes in the spike waveform due to intrinsic biophysical processes (such as sodium inactivation during a burst of spikes) as separate from the additive background process. We will discuss models where the choice of $S_m^\tau$ is purely stochastic, as well as models in which both the probability of firing and the shape of the action potential depend on the recent history of the cell. It will be useful to rewrite (1) in terms of the event waveforms described in section 2. At times $\tau$ when no foreground cell fires, all the $c_m^\tau$ are zero. We index the remaining times (when at least one cell fired) by $i$ and write $c_m^i$ for $c_m^\tau$ at $\tau_i$ (similarly for $S_m^i$) to obtain
$$V(t) = \sum_i \sum_m c_m^i\, S_m^i(t - \tau_i) + \eta(t). \qquad (2)$$
This basic model is sketched, for the case of two cells, in figure 1. Circles represent stochastic variables and squares deterministic functions, while arrows indicate conditional or functional dependence. We have not drawn nodes for $\theta_\eta$ and $\theta$. The representation chosen is similar to, and motivated by, a directed acyclic graph (DAG) model of the generative distribution. For clarity, we have not drawn edges that represent dependencies across time steps; the measurement $V(t)$ depends on many nearby values of $S_m^\tau$ and $c_m^\tau$, and $\eta(t)$ may be autocorrelated. We will continue to omit these edges, even when we later show connections in time between $c_m^\tau$ and $S_m^\tau$.

4 INFERENCE

We have two statistical objectives. The first is model selection, which includes the choice of the number of cells in the foreground. The second is inference: finding good estimates for the $c_m^\tau$ given the measured $V(t)$. We will have little to say on the subject of model selection in this paper, besides making the observation that standard techniques such as cross-validation, penalized likelihood or approximation of the marginal likelihood (or "evidence") are all plausible approaches. We will instead focus on the inference of the spike times.
Rather than calculating the marginalized posterior for the $c_m^\tau$, we will find the distribution conditioned on the most probable values of the other variables. This is a common approximation to the true posterior (compare Lewicki (1994)). A simple property of the data allows us to estimate the most probable values of the parameters in stages; times at which at least one foreground cell fires can be identified by a threshold, as described in section 2. We can then estimate the noise parameters $\theta_\eta$ by looking at segments of the signal with no foreground spikes, the waveform distribution and firing time parameters $\theta$ from the collection of spike events, and finally the spike times $c_m^\tau$ and the waveforms $S_m^\tau$ by a filtering process applied to the complete data $V(t)$ given these model parameters.

4.1 NOISE

We study the noise distribution as follows. We extract 1 ms segments from a band-passed recording sampled at 16 kHz from a four-channel electrode, avoiding the foreground spikes identified as in section 2. Each segment is thus a 64-dimensional object. We find the principal components of the ensemble of such vectors, and construct histograms of the projections of the vectors in these directions. A few of these histograms are shown on a log scale in figure 2 (points), as well as a zero-mean Gaussian fit to the distribution projected along the same axes (lines). It is clear that the Gaussian is a reasonable description, although a slight excess in kurtosis is visible in the higher principal components.

Figure 2: Distribution of background noise (histograms of projections onto principal components 1, 12, 24, 36 and 48; amplitudes in μV).

The noise parameters are now seen to be the covariance of the noise, $\Sigma_\eta$ (we represent it as a covariance matrix taken over the length of a spike). In general, we can fit an autoregressive process description to the background and apply a filter that will whiten the noise.
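One standard construction of such a whitening transform (a sketch; the autoregressive-fit route mentioned above is an alternative) takes the inverse square root of the covariance estimated from spike-free segments:

```python
import numpy as np

# Sketch of covariance-based noise whitening: estimate Sigma_eta from
# spike-free segments, then form Sigma_eta^{-1/2} by eigendecomposition.
# This is one common construction, not necessarily the one used in the
# paper (which mentions an autoregressive fit).
def whitening_transform(noise_segments):
    """noise_segments: (n_segments, dim) array of spike-free epochs."""
    X = noise_segments - noise_segments.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)                      # Sigma_eta estimate
    evals, evecs = np.linalg.eigh(cov)
    return evecs @ np.diag(evals ** -0.5) @ evecs.T   # Sigma_eta^{-1/2}
```

Applying the transform to the (centered) data yields noise with approximately identity covariance, which simplifies the later filtering stages.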
Whitening will prove to be quite useful during the filtering stages.

4.2 WAVEFORM PARAMETERS

We can make some general remarks about the process of inferring the parameters of the models for $S_m^i$ and $c_m^i$. Specific models and their inference algorithms will appear in section 5. The models will, in general, be fit to the collection of segments extracted and aligned as described in section 2. At other times they have no influence on the waveform recorded. We will represent these segments by $V^i$, implying a connection to the firing events $\tau_i$ used in (2). It should be borne in mind that the threshold-based trigger scheme will not identify all of the true $\tau_i$ exactly or correctly. We will assume that each segment represents a single $S_m$; that is, that no two cells fire at times close enough for their spike waveforms to overlap. This is an unreasonable assumption; we can shore it up partially by eliminating from our collection of $V^i$ segments that appear heuristically to contain overlaps (for example, double-peaked waveforms). Ultimately, however, we will need to make our inference procedure robust enough that the parameters describing the model are well estimated despite the errors in the data.

Figure 3: The mixture model for $V^i$.

The advantage to making this assumption is that the overall model for the distribution of the $V^i$ becomes a mixture: a single control variable $c^i$ sets exactly one of the $c_m^i$ to 1. $V^i$ is then drawn from the distribution of waveforms for the selected cell, convolved with the noise. This is a formal statement of the "clustering" approach to spike-sorting. Mixture models such as these are easy to fit using the Expectation-Maximization (EM) algorithm (Dempster et al. 1977). We will also consider models with additional latent state variables, which are used to describe the distributions of the $S_m$ and $c_m$, where again EM will be of considerable utility. The measured ensemble $V^i$ will be incorrect on a number of counts.
The threshold may make either false positive or false negative errors in selecting the $\tau_i$, and some of the identified $V^i$ will represent overlaps. We can use heuristics to minimize such errors, but need to account for any remaining outliers in our models. We do so by introducing additional mixture components. Segments of noise that are incorrectly identified as foreground events are handled by an explicit zero mixture component whose variability is entirely due to the background noise. Overlaps are handled by providing very broad low-probability components spanning large areas in waveform space; clusters of overlap waveforms are likely to be diffuse and sparse. The mixture model is sketched in figure 3. In the basic model the variables are chosen independently for each cross-threshold event. The dynamic models discussed below will introduce dependencies in time.

4.3 SPIKE TIMES

In our final stage of inference, we make estimates of the $c_m^\tau$ given the $V(t)$ and the most probable parameters fit in the previous two stages. This is exactly the signal detection problem of identifying pulses (perhaps with random or else adapting parameters) in Gaussian noise of known covariance. Solutions to this are well known (McDonough and Whalen 1995) and easily adapted to the problem at hand (Sahani et al. 1998).

5 SPECIFIC MODELS

Finally, we describe examples of models that may be used within this framework. As stated before, in this brief catalog we summarize the motivation for each, and state without derivation or proof the algorithms for inference. The details of these algorithms, as well as tests of performance, will appear elsewhere.

5.1 CONSTANT WAVEFORM

The simplest model is one in which we take the waveform of the $m$th cell to remain unchanged and the firing probability of each cell to be constant. In this case we drop the index $\tau$ or $i$ on the waveform shape and just write $S_m(t - \tau_i)$.
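One EM iteration for such a shared-covariance Gaussian mixture (the E-step responsibilities of eq. 3 and the mixing-fraction update of eq. 4 below) could be sketched as follows; the responsibility-weighted mean update is a standard M-step choice included for completeness:

```python
import numpy as np

# Hedged sketch of one EM iteration for a Gaussian mixture with a
# shared covariance (the constant-waveform model): E-step
# responsibilities, then re-estimation of mixing fractions and means.
def em_step(V, means, priors, cov):
    """V: (N, d) events; means: (M, d); priors: (M,); cov: (d, d)."""
    P = np.linalg.inv(cov)
    # squared Mahalanobis distances; the shared Gaussian normalizing
    # constant cancels in the responsibility ratio
    d2 = np.array([np.sum(((V - m) @ P) * (V - m), axis=1) for m in means])
    logw = np.log(priors)[:, None] - 0.5 * d2          # (M, N) log weights
    r = np.exp(logw - logw.max(axis=0))
    r /= r.sum(axis=0)                                 # responsibilities r_m^i
    new_priors = r.sum(axis=1) / len(V)                # p_m = sum_i r_m^i / N
    new_means = (r @ V) / r.sum(axis=1)[:, None]       # weighted mean update
    return r, new_means, new_priors
```

Iterating `em_step` to convergence fits the constant-waveform mixture; the refractory variant below changes only the responsibilities.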
We write $p_m$ for the probability that a given event is due to the $m$th cell firing. The mixture model is then a mixture of multivariate Gaussian distributions, each with covariance $\Sigma_\eta$, mean $S_m$ and mixture fraction $p_m$. The EM algorithm for such a mixture is well known (Nowlan 1990). Given parameters $\theta^{(n)} = \{S_m^{(n)}, p_m^{(n)}\}$ from the $n$th iteration, we find the expected values of the $c_m^i$ (called the responsibilities),
$$r_m^{i\,(n)} = \mathcal{E}[c_m^i \mid \{V^i\}, \theta^{(n)}] = \frac{p_m^{(n)}\, \mathcal{N}(V^i; S_m^{(n)}, \Sigma_\eta)}{\sum_{m'} p_{m'}^{(n)}\, \mathcal{N}(V^i; S_{m'}^{(n)}, \Sigma_\eta)}, \qquad (3)$$
and then re-estimate the parameters from the data weighted by the responsibilities:
$$p_m^{(n+1)} = \frac{\sum_i r_m^i}{N}. \qquad (4)$$

5.2 REFRACTORY FIRING

A simple modification to this scheme can be used to account for the refractory period between spikes from the same cell (Sahani et al. 1998). The model is similar to the Gaussian mixture above, except that the choice of mixture component is no longer independent for each waveform. If the waveforms arrive within a refractory period they cannot have come from the same cell. This leads to the altered responsibilities:
$$s_m^i = \frac{r_m^i}{Z^i} \prod_{j\,:\,(i,j)\ \text{refractory}} \left(1 - s_m^j\right) \qquad (5)$$
where $Z^i$ is a normalizing constant. The M step here is identical to (4), with the responsibilities $s_m^i$ replacing the $r_m^i$.

5.3 STATIC MIXTURE

As we have suggested above, the waveform of the $m$th cell is not, in fact, unchanged each time the cell fires. Variability in excess of the additive background noise is introduced by changes in the biophysical properties of the cell (due to recent firing patterns, or external modulators) as well as by background activity that may be correlated with foreground events. We can attempt to model this variability as giving rise to a discrete set of distinct waveforms, which are then convolved with the previously measured noise covariance to obtain the distribution of measurements. In effect, we are tiling an irregularly shaped distribution with a mixture of Gaussians of fixed shape, $\Sigma_\eta$. We obtain a hierarchical mixture distribution in which each component corresponding to a cell is itself a mixture of Gaussians. Given a particular hierarchical arrangement the parameters can be fit exactly as above. While this approach seems attractive, it suffers from the flaw that model selection is not well defined. In particular, the hierarchical mixture is equivalent in terms of likelihood and parameters to a single-layer, flat, mixture. To avoid this problem we may introduce a prior requiring that the Gaussian components from a single cell overlap, or otherwise lie close together. It is, however, difficult to avoid excessive sensitivity to such a prior.

5.4 DYNAMICAL MIXTURE

An alternative approach is to replace the independent transitions between the components of the mixture distribution of a single cell with a dynamical process that reflects the manner in which both firing probability and waveform shape depend on the recent history of the cell. In this view we may construct a mixture of hidden Markov models (HMMs), one for each cell. Our earlier mixture assumption now means that the models must be coupled so that on any one time step at most one makes a transition to a state corresponding to firing. This structure may be thought of as a special case of the factorial HMM discussed by Ghahramani and Jordan (1997). The general model is known to be intractable. In this special case, however, the standard forward-backward procedure for a single HMM can be modified to operate on responsibility-weighted data, where the responsibilities are themselves calculated during the forward phase. This is empirically found to provide an effective E step. The M step is then straightforward.

Acknowledgements

This work has benefited considerably from important discussions with both Bill Bialek and Sam Roweis. John Hopfield has provided invaluable advice and mentoring to MS.
We thank Jennifer Linden and Philip Sabes for useful comments on an earlier version of the manuscript. Funding for various components of the work has been provided by the Keck Foundation, the Sloan Center for Theoretical Neuroscience at Caltech, the Center for Neuromorphic Systems Engineering at Caltech, and the National Institutes of Health.

References

Dempster, A. P., N. M. Laird, and D. B. Rubin (1977). J. Royal Stat. Soc. B 39, 1-38.
Fee, M. S., P. P. Mitra, and D. Kleinfeld (1996). J. Neurophys. 76(3), 3823-3833.
Ghahramani, Z. and M. I. Jordan (1997). Machine Learning 29, 245-275.
Lewicki, M. S. (1994). Neural Comp. 6(5), 1005-1030.
McDonough, R. N. and A. D. Whalen (1995). Detection of Signals in Noise (2nd ed.). San Diego: Academic Press.
Nowlan, S. J. (1990). In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2, San Mateo, CA: Morgan Kaufmann.
Pezaris, J. S., M. Sahani, and R. A. Andersen (1997). In J. M. Bower (Ed.), Computational Neuroscience: Trends in Research, 1997.
Sahani, M., J. S. Pezaris, and R. A. Andersen (1998). In J. M. Bower (Ed.), Computational Neuroscience: Trends in Research, 1998.
Task and Spatial Frequency Effects on Face Specialization

Matthew N. Dailey and Garrison W. Cottrell
Department of Computer Science and Engineering
U.C. San Diego, La Jolla, CA 92093-0114
{mdailey,gary}@cs.ucsd.edu

Abstract

There is strong evidence that face processing is localized in the brain. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent mechanisms in the brain. Is neural specialization innate or learned? We suggest that this specialization could be the result of a competitive learning mechanism that, during development, devotes neural resources to the tasks they are best at performing. Further, we suggest that the specialization arises as an interaction between task requirements and developmental constraints. In this paper, we present a feed-forward computational model of visual processing, in which two modules compete to classify input stimuli. When one module receives low spatial frequency information and the other receives high spatial frequency information, and the task is to identify the faces while simply classifying the objects, the low frequency network shows a strong specialization for faces. No other combination of tasks and inputs shows this strong specialization. We take these results as support for the idea that an innately-specified face processing module is unnecessary.

1 Background

Studies of the preserved and impaired abilities in brain-damaged patients provide important clues on how the brain is organized. Cases of prosopagnosia, a face recognition deficit often sparing recognition of non-face objects, and visual object agnosia, an object recognition deficit that can occur without appreciable impairment of face recognition, provide evidence that face recognition is served by a "special" mechanism.
(For a recent review of this evidence, see Moscovitch, Winocur, and Behrmann (1997).) In this study, we begin to provide a computational account of the double dissociation. Evidence indicates that face recognition is based primarily on holistic, configural information, whereas non-face object recognition relies more heavily on local features and analysis of the parts of an object (Farah, 1991; Tanaka and Sengco, 1997). For instance, the distance between the tip of the nose and an eye in a face is an important factor in face recognition, but such subtle measurements are rarely as critical for distinguishing, say, two buildings. There is also evidence that configural information is highly relevant when a human becomes an "expert" at identifying individuals within other visually homogeneous object classes (Gauthier and Tarr, 1997). What role might configural information play in the development of a specialization for face recognition? de Schonen and Mancini (1995) have proposed that several factors, including different rates of maturation in different areas of cortex, an infant's tendency to track the faces in its environment, and the gradual increase in visual acuity as an infant develops, all combine to force an early specialization for face recognition. If this scenario is correct, the infant begins to form configural face representations very soon after birth, based primarily on the low spatial frequency information present in face stimuli. Indeed, Costen, Parker, and Craw (1996) showed that although both high-pass and low-pass image filtering decrease face recognition accuracy, high-pass filtering degrades identification accuracy more quickly than low-pass filtering.
Furthermore, Schyns and Oliva (1997) have shown that when asked to recognize the identity of the "face" in a briefly-presented hybrid image containing a low-pass filtered image of one individual's face and a high-pass filtered image of another individual's face, subjects consistently use the low-frequency component of the image for the task. This work indicates that low spatial frequency information may be more important for face identification than high spatial frequency information. Jacobs and Kosslyn (1994) showed how differential availability of large and small receptive field sizes in a mixture of experts network (Jacobs, Jordan, Nowlan, and Hinton, 1991) can lead to experts that specialize for "what" and "where" tasks. In previous work, we proposed that a neural mechanism allocating resources according to their ability to perform a given task could explain the apparent specialization for face recognition evidenced by prosopagnosia (Dailey, Cottrell, and Padgett, 1997). We showed that a model based on the mixture of experts architecture, in which a gating network implements competitive learning between two simple homogeneous modules, could develop a specialization such that damage to one module disproportionately impaired face recognition compared to non-face object recognition. In the current study, we consider how the availability of spatial frequency information affects face recognition specialization given this hypothesis of neural resource allocation by competitive learning. We find that when high and low frequency information is "split" between the two modules in our system, and the task is to identify the faces while simply classifying the objects, the low-frequency module consistently specializes for face recognition. After describing the study, we discuss its results and their implications.
2 Experimental Methods

We presented a modular feed-forward neural network with preprocessed images of 12 different faces, 12 different books, 12 different cups, and 12 different soda cans. We gave the network two types of tasks:

1. Learning to recognize the superordinate classes of all four object types (hereafter referred to as classification).
2. Learning to distinguish the individual members of one class (hereafter referred to as identification) while simply classifying objects of the other three types.

For each task, we investigated the effects of high and low spatial frequency information on identification and classification in a visual processing system with two competing modules. We observed how splitting the range of spatial frequency information between the two modules affected the specializations developed by the network.

2.1 Image Data

We acquired face images from the Cottrell and Metcalfe facial expression database (1991) and captured multiple images of several books, cups, and soda cans with a CCD camera and video frame grabber. For the face images, we chose five grayscale images of each of 12 individuals. The images were photographed under controlled lighting and pose conditions; the subjects portrayed a different facial expression in each image. For each of the non-face object classes, we captured five different grayscale images of each of 12 books, 12 cups, and 12 cans. These images were also captured under controlled lighting conditions, with small variations in position and orientation between photos. The entire image set contained 240 images, each of which we cropped and scaled to a size of 64x64 pixels.
2.2 Image Preprocessing To convert the raw grayscale images to a biologically plausible representation more suitable for network learning and generalization, and to experiment with the effect of high and low spatial frequency information available in a stimulus, we extracted Gabor jet features from the images at multiple spatial frequency scales, then performed a separate principal components analysis on the data from each filter scale to reduce input pattern dimensionality. 2.2.1 Gabor jet features The basic two-dimensional Gabor wavelet resembles a sinusoid grating restricted by a two-dimensional Gaussian, and may be tuned to a particular orientation and sinusoidal frequency scale. The wavelet can be used to model simple cell receptive fields in cat primary visual cortex (Jones and Palmer, 1987). Buhmann, Lades, and von der Malsburg (1990) describe the Gabor "jet," a vector consisting of filter responses at multiple orientations and scales. We convolved each of the 240 images in the input data set with two-dimensional Gabor filters at five scales in eight orientations (0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8) and subsampled an 8x8 grid of the responses to each filter. The process resulted in 2560 complex numbers describing each image. 2.2.2 Principal components analysis To reduce the dimensionality of the Gabor jet representation while maintaining a segregation of the responses from each filter scale, we performed a separate PCA on each spatial frequency component of the pattern vector described above. For each of the 5 filter scales in the jet, we extracted the subvectors corresponding to that scale from each pattern in the training set, computed the eigenvectors of their covariance matrix, projected the subvectors from each of the patterns onto these eigenvectors, and retained the eight most significant coefficients. Reassembling the pattern set resulted in 240 40-dimensional vectors. Figure 1: Modular network architecture. The gating network units mix the outputs of the hidden layers multiplicatively. 2.3 The Model The model is a simple modular feed-forward network inspired by the mixture of experts architecture (Jordan and Jacobs, 1995); however, it contains hidden layers and is trained by backpropagation of error rather than maximum likelihood estimation or expectation maximization. The connections to the output units come from two separate input/hidden layer pairs; these connections are gated multiplicatively by a simple linear network with softmax outputs. Figure 1 illustrates the model's architecture. During training, the network's weights are adjusted by backpropagation of error. The connections from the softmax units in the gating network to the connections between the hidden layers and output layer can be thought of as multiplicative connections with a constant weight of 1. The resulting learning rules gate the amount of error feedback received by a module according to the gating network's current estimate of its ability to process the current training pattern. Thus the model implements a form of competitive learning in which the gating network learns which module is better able to process a given pattern and rewards the "winner" with more error feedback. 2.4 Training Procedure Preprocessing the images resulted in 240 40-dimensional vectors; four examples of each face and object composed a 192-element training set, and one example of each face and object composed a 48-element test set. We held out one example of each individual in the training set for use in determining when to stop network training. We set the learning rate for all network weights to 0.1 and their momentum to 0.5. Both of the hidden layers contained 15 units in all experiments. 
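The multiplicative gating described above can be sketched in a few lines. This is an illustrative reconstruction rather than the authors' code; the output dimensionality (16), weight scales, and random seed are assumptions, while the 40-dimensional inputs and 15-unit hidden layers follow the text:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, params):
    """Forward pass of a two-module network whose output contributions
    are mixed multiplicatively by a softmax gating network."""
    g = softmax(params["Wg"] @ x)              # gate sees the full input
    outs = []
    for W_hid, W_out in params["modules"]:
        h = np.tanh(W_hid @ x)                 # module's private hidden layer
        outs.append(W_out @ h)                 # module's output contribution
    return g[0] * outs[0] + g[1] * outs[1], g  # gated mixture

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 40, 15, 16                # d_out is a hypothetical choice
params = {
    "Wg": rng.normal(scale=0.1, size=(2, d_in)),
    "modules": [
        (rng.normal(scale=0.1, size=(d_hid, d_in)),
         rng.normal(scale=0.1, size=(d_out, d_hid)))
        for _ in range(2)
    ],
}
y, g = forward(rng.normal(size=d_in), params)
print(y.shape, g.shape)
```

Because each module's contribution enters the output scaled by its gate value, backpropagating the output error through this product scales each module's error signal by the same gate value, which is what implements the competition described in the text.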
For the identification tasks, we determined that a mean squared error (MSE) threshold of 0.02 provided adequate classification performance on the hold-out set without overtraining and allowed the gate network to settle to stable values. For the four-way classification task, we found that an MSE threshold of 0.002 was necessary to give the gate network time to stabilize and did not result in overtraining. On all runs reported in the results section, we simply trained the network until it reached the relevant MSE threshold. For each of the tasks reported in the results section (four-way classification, book identification, and face identification), we performed two experiments. In the first, as a control, both modules and the gating network were trained and tested with the full 40-dimensional pattern vector. In the second, the gating network received the full 40-dimensional vector, but module 1 received a vector in which the elements corresponding to the largest two Gabor filter scales were set to 0, and the elements corresponding to the middle filter scale were reduced by 0.5. Module 2, on the other hand, received a vector in which the elements corresponding to the smallest two filter scales were set to 0 and the elements corresponding to the middle filter were reduced by 0.5. Thus module 1 received mostly high-frequency information, whereas module 2 received mostly low-frequency information, with de-emphasized overlap in the middle range. For each of these six experiments, we trained the network using 20 different initial random weight sets and recorded the softmax outputs learned by the gating network on each training pattern. 3 Results Figure 2 displays the resulting degree of specialization of each module on each stimulus class. 
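The split-frequency input manipulation can be sketched as a masking of the 40-dimensional pattern (5 scales x 8 PCA coefficients per scale). This is a hypothetical reconstruction: it assumes the coefficients are grouped coarsest scale first and reads "reduced by 0.5" as scaling by 0.5:

```python
import numpy as np

def split_frequency(pattern, n_scales=5, per_scale=8):
    """Derive the high-frequency (module 1) and low-frequency (module 2)
    inputs from one 40-dim pattern. Assumes the per-scale coefficient
    blocks are ordered largest (coarsest) scale first."""
    v = np.asarray(pattern, dtype=float).reshape(n_scales, per_scale)
    high, low = v.copy(), v.copy()
    high[:2] = 0.0   # module 1: zero the two largest (lowest-frequency) scales
    high[2] *= 0.5   # ...and attenuate the middle scale
    low[3:] = 0.0    # module 2: zero the two smallest (highest-frequency) scales
    low[2] *= 0.5
    return high.ravel(), low.ravel()

high, low = split_frequency(np.ones(40))
print(high.sum(), low.sum())
```

The gating network, by contrast, would receive the unmasked 40-dimensional vector, so its competence estimates are based on the full pattern.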
Each chart plots the average weight the gating network assigns to each module for the training patterns from each stimulus class, averaged over 20 training runs with different initial random weights. The error bars denote standard error. For each of the three reported tasks (four-way classification, book identification, and face identification), one chart shows division of labor between the two modules in the control situation, in which both modules receive the same patterns, and the other chart shows division of labor between the two modules when one module receives low-frequency information and the other receives high-frequency information. When required to identify faces on the basis of high- or low-frequency information, compared with the four-way-classification and same-pattern controls, the low-frequency module wins the competition for face patterns extremely consistently (lower right graph). Book identification specialization, however, shows considerably less sensitivity to spatial frequency. We have also performed the equivalent experiments with a cup discrimination and a can discrimination task. Both of these tasks show a low-frequency sensitivity lower than that for face identification but higher than that for book identification. Due to space limitations, these results are not presented here. The specialized face identification networks also provide good models of prosopagnosia and visual object agnosia: when the face-specialized module's output is "damaged" by removing connections from its hidden layer to the output layer, the overall network's generalization performance on face identification drops dramatically, while its generalization performance on object recognition drops much more slowly. When the non-face-specialized (high-frequency) module's outputs are damaged, the opposite effect occurs: the overall network's performance on each of the object recognition tasks drops, whereas its performance on face identification remains high. 
4 Discussion The results in Figure 2 show a strong preference for low-frequency information in the face identification task, empirically demonstrating that, given a choice, a competitive mechanism will choose a module receiving low-frequency, large receptive field information for this task. This result concurs with the psychological evidence for configural face representations based upon low spatial frequency information, and suggests how the developing brain could be biased toward a specialization for face recognition by the infant's initially low visual acuity. Figure 2: Average weight assigned to each module broken down by stimulus class. For each task, in the control experiment, each module receives the same pattern; the split-frequency charts summarize the specialization resulting when module 1 receives high-frequency Gabor filter information and module 2 receives low-frequency Gabor filter information. On the basis of these results, we predict that human subjects performing face and object identification tasks will show more degradation of performance in high-pass filtered images of faces than in high-pass filtered images of other objects. To our knowledge, this has not been empirically tested, although Costen et al. (1996) have investigated the effect of high-pass and low-pass filtering on face images in isolation, and Parker, Lishman, and Hughes (1996) have investigated the effect of high-pass and low-pass filtering of face and object images used as 100 ms cues for a same/different task. Their results indicate that relevant high-pass filtered images cue object processing better than low-pass filtered images, but the two types of filtering cue face processing equally well. Similarly, Schyns & Oliva's (1997) results described earlier suggest that the human face identification network preferentially responds to low spatial frequency inputs. Our results suggest that simple data-driven competitive learning combined with constraints and biases known or thought to exist during visual system development can account for some of the effects observed in normal and brain-damaged humans. The study lends support to the claim that there is no need for an innately-specified face processing module; face recognition is only "special" insofar as faces form a remarkably homogeneous category of stimuli for which within-category discrimination is ecologically beneficial.
References
Buhmann, J., Lades, M., and von der Malsburg, C. (1990). Size and distortion invariant object recognition by hierarchical graph matching. In Proceedings of the IJCNN International Joint Conference on Neural Networks, volume II, pages 411-416.
Costen, N., Parker, D., and Craw, I. (1996). Effects of high-pass and low-pass spatial filtering on face identification. Perception & Psychophysics, 38(4):602-612.
Cottrell, G. and Metcalfe, J. (1991). Empath: Face, gender and emotion recognition using holons. In Lippman, R., Moody, J., and Touretzky, D., editors, Advances in Neural Information Processing Systems 3, pages 564-571.
Dailey, M., Cottrell, G., and Padgett, C. (1997). A mixture of experts model exhibiting prosopagnosia. In Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society, pages 155-160. Stanford, CA. Mahwah, NJ: Lawrence Erlbaum.
de Schonen, S. and Mancini, J. (1995). About functional brain specialization: The development of face recognition. TR 95.1, MRC Cognitive Development Unit, London, UK.
Farah, M. (1991). Patterns of co-occurrence among the associative agnosias: Implications for visual object representation. Cognitive Neuropsychology, 8:1-19.
Gauthier, I. and Tarr, M. (1997). Becoming a "greeble" expert: Exploring mechanisms for face recognition. Vision Research. In press.
Jacobs, R. and Kosslyn, S. (1994). Encoding shape and spatial relations: The role of receptive field size in coordinating complementary representations. Cognitive Science, 18(3):361-386.
Jacobs, R., Jordan, M., Nowlan, S., and Hinton, G. (1991). Adaptive mixtures of local experts. Neural Computation, 3:79-87.
Jones, J. and Palmer, L. (1987). An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. J. Neurophys., 58(6):1233-1258.
Moscovitch, M., Winocur, G., and Behrmann, M. (1997). What is special about face recognition? Nineteen experiments on a person with visual object agnosia and dyslexia but normal face recognition. Journal of Cognitive Neuroscience, 9(5):555-604.
Parker, D., Lishman, J., and Hughes, J. (1996). Role of coarse and fine spatial information in face and object processing. Journal of Experimental Psychology: Human Perception and Performance, 22(6):1445-1466.
Schyns, P. and Oliva, A. (1997). Dr. Angry and Mr. Smile: The multiple faces of perceptual categorizations. Submitted for publication.
Tanaka, J. and Sengco, J. (1997). Features and their configuration in face recognition. Memory and Cognition. In press.
1997
56
1,404
Extended ICA Removes Artifacts from Electroencephalographic Recordings Tzyy-Ping Jung1, Colin Humphries1, Te-Won Lee1, Scott Makeig2,3, Martin J. McKeown1, Vicente Iragui3, Terrence J. Sejnowski1 1Howard Hughes Medical Institute and Computational Neurobiology Lab, The Salk Institute, P.O. Box 85800, San Diego, CA 92186-5800 {jung,colin,tewon,scott,martin,terry}@salk.edu 2Naval Health Research Center, P.O. Box 85122, San Diego, CA 92186-5122 3Department of Neurosciences, University of California San Diego, La Jolla, CA 92093 Abstract Severe contamination of electroencephalographic (EEG) activity by eye movements, blinks, muscle, heart and line noise is a serious problem for EEG interpretation and analysis. Rejecting contaminated EEG segments results in a considerable loss of information and may be impractical for clinical data. Many methods have been proposed to remove eye movement and blink artifacts from EEG recordings. Often regression in the time or frequency domain is performed on simultaneous EEG and electrooculographic (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels. However, EOG records also contain brain signals [1, 2], so regressing out EOG activity inevitably involves subtracting a portion of the relevant EEG signal from each recording as well. Regression cannot be used to remove muscle noise or line noise, since these have no reference channels. Here, we propose a new and generally applicable method for removing a wide variety of artifacts from EEG records. The method is based on an extended version of a previous Independent Component Analysis (ICA) algorithm [3, 4] for performing blind source separation on linear mixtures of independent source signals with either sub-Gaussian or super-Gaussian distributions. 
Our results show that ICA can effectively detect, separate and remove activity in EEG records from a wide variety of artifactual sources, with results comparing favorably to those obtained using regression-based methods. 1 Introduction Eye movements, muscle noise, heart signals, and line noise often produce large and distracting artifacts in EEG recordings. Rejecting EEG segments with artifacts larger than an arbitrarily preset value is the most commonly used method for eliminating artifacts. However, when limited data are available, or blinks and muscle movements occur too frequently, as in some patient groups, the amount of data lost to artifact rejection may be unacceptable. Methods are needed for removing artifacts while preserving the essential EEG signals. Berg & Scherg [5] have proposed a spatio-temporal dipole model for eye-artifact removal that requires a priori assumptions about the number of dipoles for saccade, blink, and other eye movements, and assumes they have a simple dipolar structure. Several other proposed methods for removing eye-movement artifacts are based on regression in the time domain [6, 7] or frequency domain [8, 9]. However, simple time-domain regression tends to overcompensate for blink artifacts and may introduce new artifacts into EEG records [10]. The cause of this overcompensation is the difference between the spatial EOG-to-EEG transfer functions for blinks and saccades. Saccade artifacts arise from changes in orientation of the retinocorneal dipole, while blink artifacts arise from alterations in ocular conductance produced by contact of the eyelid with the cornea [11]. 
The transfer of blink artifacts to the recording electrodes decreases rapidly with distance from the eyes, while the transfer of saccade artifacts decreases more slowly, so that at the vertex the effect of saccades on the EEG is about double that of blinks [11], while at frontal sites the two effects may be near-equal. Regression in the frequency domain [8, 9] can account for frequency-dependent spatial transfer function differences from EOG to EEG, but is acausal and thus unsuitable for real-time applications. Both time and frequency domain regression methods depend on having a good regressor (e.g., an EOG), and share an inherent weakness: the spread of excitation between eye movements and EEG signals is bidirectional. This means that whenever regression-based artifact removal is performed, a portion of relevant EEG signals also contained in the EOG data will be cancelled out along with the eye movement artifacts. Further, since the spatial transfer functions for various EEG phenomena present in the EOG differ from the regression transfer function, their spatial distributions after artifact removal may differ from the raw record. Similar problems complicate removal of other types of EEG artifacts. Relatively little work has been done on removing muscle activity, cardiac signals and electrode noise from EEG data. Regressing out muscle noise is impractical, since regressing out signals from multiple muscle groups requires multiple reference channels. Line noise is most commonly filtered out in the frequency domain. However, current interest in EEG phenomena in the 40-80 Hz gamma band may make this approach undesirable as well. We present here a new and generally applicable method for isolating and removing a wide variety of EEG artifacts by linear decomposition using a new Independent Component Analysis (ICA) algorithm [4] related to a previous algorithm [3, 12]. The ICA method is based on spatial filtering and does not rely on having a "clean" reference channel. 
It effectively decomposes multiple-channel EEG data into spatially fixed and temporally independent components. Clean EEG signals can then be derived by eliminating the contributions of artifactual sources, since their time courses are generally temporally independent from, and differently distributed than, sources of EEG activity. 2 Independent Component Analysis Bell and Sejnowski [3] have proposed a simple neural network algorithm that blindly separates mixtures, x, of independent sources, s, using infomax. They show that maximizing the joint entropy, H(y), of the output of a neural processor minimizes the mutual information among the output components, y_i = g(u_i), where g(u_i) is an invertible bounded nonlinearity and u = Wx. This implies that the distribution of the output y_i approximates a uniform density. Independence is achieved through the nonlinear squashing function, which provides the necessary higher-order statistics through its Taylor series expansion. The learning rule can be derived by maximizing the output joint entropy, H(y), with respect to W [3], giving ΔW ∝ (∂H(y)/∂W) W^T W = [I + p̂ u^T] W (1) where p̂_i = (∂/∂u_i) ln(∂y_i/∂u_i). The 'natural gradient' W^T W term [13] avoids matrix inversions and speeds convergence. The form of the nonlinearity g(u) plays an essential role in the success of the algorithm. The ideal form for g(·) is the cumulative density function (cdf) of the distributions of the independent sources. In practice, if we choose g(·) to be a sigmoid function (as in [3]), the algorithm is then limited to separating sources with super-Gaussian distributions. An elegant way of generalizing the learning rule to sources with either sub- or super-Gaussian distributions is to approximate the estimated probability density function (pdf) in the form of a 4th-order Edgeworth approximation, as derived by Girolami and Fyfe [14]. 
For sub-Gaussians, the following approximation is possible: p̂_i = +tanh(u_i) − u_i. For super-Gaussians, the same approximation becomes p̂_i = −tanh(u_i) − u_i. The sign can be chosen for each component using its normalized kurtosis, k4(u_i), giving ΔW ∝ (∂H(y)/∂W) W^T W = [I − sign(k4) tanh(u) u^T − u u^T] W (2) Intuitively, for super-Gaussians the −tanh(u)u^T term is an anti-Hebbian rule that tends to minimize the variance of u, whereas for sub-Gaussians the corresponding term is a Hebbian rule that tends to maximize its variance. 2.1 Applying ICA to artifact correction The ICA algorithm is effective in performing source separation in domains where (1) the mixing medium is linear and propagation delays are negligible, (2) the time courses of the sources are independent, and (3) the number of sources is the same as the number of sensors, meaning that if we employ N sensors the ICA algorithm can separate N sources [3, 4, 12]. In the case of EEG signals [12], volume conduction is thought to be linear and instantaneous, hence assumption (1) is satisfied. Assumption (2) is also reasonable because the sources of eye and muscle activity, line noise, and cardiac signals are not generally time-locked to the sources of EEG activity, which is thought to reflect activity of cortical neurons. Assumption (3) is questionable, since we do not know the effective number of statistically independent signals contributing to the scalp EEG. However, numerical simulations have confirmed that the ICA algorithm can accurately identify the time courses of activation and the scalp topographies of relatively large and temporally independent sources from simulated scalp recordings, even in the presence of a large number of low-level and temporally independent source activities [16]. 
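The extended learning rule (2) can be sketched as a simple batch update that estimates each output component's kurtosis sign from the data. This is an illustrative toy implementation rather than the authors' code; the learning rate, iteration count, and two-source test mixture are assumptions:

```python
import numpy as np

def extended_infomax_step(W, X, lr=0.01):
    """One batch update of Eq. (2):
    dW ∝ [I - sign(k4) tanh(u) u^T - u u^T] W, averaged over the batch."""
    u = W @ X                                    # (n_sources, n_samples)
    B = X.shape[1]
    m2 = (u ** 2).mean(axis=1)
    k4 = (u ** 4).mean(axis=1) / m2 ** 2 - 3.0   # normalized kurtosis per output
    K = np.diag(np.sign(k4))                     # +1 super-Gaussian, -1 sub-Gaussian
    grad = np.eye(len(W)) - (K @ np.tanh(u) @ u.T + u @ u.T) / B
    return W + lr * grad @ W

rng = np.random.default_rng(0)
# one super-Gaussian (Laplacian) and one sub-Gaussian (uniform) source
S = np.vstack([rng.laplace(size=5000), rng.uniform(-1, 1, size=5000)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # toy mixing matrix
W = np.eye(2)
for _ in range(300):
    W = extended_infomax_step(W, A @ S)
# W @ A should tend toward a scaled permutation (one dominant entry per row)
print(np.round(W @ A, 2))
```

The kurtosis-sign switch is what lets a single rule handle both source types; with the sign fixed at +1 the update reduces to the original super-Gaussian-only infomax rule of Eq. (1) with a logistic nonlinearity.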
For EEG analysis, the rows of the input matrix x are the EEG signals recorded at different electrodes, the rows of the output data matrix u = Wx are the time courses of activation of the ICA components, and the columns of the inverse matrix, W^-1, give the projection strengths of the respective components onto the scalp sensors. The scalp topographies of the components provide evidence for their biological origin (e.g., eye activity should project mainly to frontal sites). In general, and unlike PCA, the component time courses of activation will be nonorthogonal. 'Corrected' EEG signals can then be derived as x' = W^-1 u', where u' is the matrix of activation waveforms, u, with the rows representing artifactual sources set to zero. 3 Methods and Materials One EEG data set used in the analysis was collected from 20 scalp electrodes placed according to the International 10-20 System and from 2 EOG placements, all referred to the left mastoid. A second EEG data set contained 19 EEG channels (no EOG channel). Data were recorded with a sampling rate of 256 Hz. ICA decomposition was performed on 10-sec EEG epochs from each data set using Matlab 4.2c on a DEC 2100A 5/300 processor. The learning batch size was 90, and the initial learning rate was 0.001. The learning rate was gradually reduced to 5 × 10^-6 during 80 training iterations requiring 6.6 min of computer time. To evaluate the relative effectiveness of ICA for artifact removal, the multiple-lag regression method of Kenemans et al. [17] was performed on the same data. 4 Results 4.1 Eye movement artifacts Figure 1 shows a 3-sec portion of the recorded EEG time series and its ICA component activations, the scalp topographies of four selected components, and the 'corrected' EEG signals obtained by removing four selected EOG and muscle noise components from the data. The eye movement artifact at 1.8 sec in the EEG data (left) is isolated to ICA components 1 and 2 (left middle). 
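The correction step x' = W^-1 u' amounts to zeroing the activation rows of the chosen components and back-projecting. A minimal sketch, with a random invertible matrix standing in for a trained unmixing matrix W:

```python
import numpy as np

def remove_components(X, W, artifact_rows):
    """Zero the activation rows of artifactual ICA components and
    back-project to obtain 'corrected' channel data."""
    U = W @ X                        # component activations u = W x
    U[list(artifact_rows)] = 0.0     # artifactual rows of u' set to zero
    return np.linalg.inv(W) @ U      # x' = W^-1 u'

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 1000))       # 4 channels, 1000 samples (toy data)
W = rng.normal(size=(4, 4))          # stand-in for a trained unmixing matrix
X_clean = remove_components(X, W, artifact_rows=[0])
print(X_clean.shape)
```

Note that removing no components reconstructs the original data exactly (up to floating-point error), since W^-1 (W x) = x; the cleaning consists entirely of which rows are zeroed.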
The scalp maps (right middle) indicate that these two components account for the spread of EOG activity to frontal sites. After eliminating these two components and projecting the remaining components onto the scalp channels, the 'corrected' EEG data (right) are free of these artifacts. Removing EOG activity from frontal channels reveals alpha activity near 8 Hz that occurred during the eye movement but was obscured by the eye movement artifact in the original EEG traces. Close inspection of the EEG records (Fig. 1b) confirms its presence in the raw data. ICA also reveals the EEG 'contamination' appearing in the EOG electrodes (right). By contrast, the 'corrected' EEG resulting from multiple-lag regression on this data shows no sign of 8 Hz activity at Fp1 (Fig. 1b). Here, regression was performed only when the artifact was detected (1 sec surrounding the EOG peak), since otherwise a large amount of EEG activity would also have been regressed out during periods without eye movements. 4.2 Muscle artifacts Left and right temporal muscle activity in the data are concentrated in ICA components 14 and 15 (Fig. 1a, right middle). Removing them from the data (right) reveals underlying EEG activity at temporal sites T3 and T4 that had been masked by muscle activity in the raw data (left). The signal at T3 (Fig. 1c, left) sums muscle activity from component 14 (center) and underlying EEG activity. Spectral analysis of the two records (right) shows a large amount of overlap between their power spectra, so bandpass filtering cannot separate them. ICA component 13 (Fig. 1a, left middle) reveals the presence of small periodic muscle spiking (in right frontal channels, map not shown) that is highly obscured in the original data (left). Figure 1: A 3-sec portion of an EEG time series (left), corresponding ICA component activations (left middle), scalp maps of four selected components (right middle), and EEG signals corrected for artifacts according to: (a) ICA with the four selected components removed (right), or (b) multiple-lag regression on the two EOG channels. ICA cancels multiple artifacts in all the EEG and EOG channels simultaneously. (c) The EEG record at T3 (left) is the sum of EEG activity recorded over the left temporal region and muscle activity occurring near the electrode (center). Below 20 Hz, the spectra of remaining EEG (dashed line) and muscle artifact (dotted line) overlap strongly, whereas ICA separates them by spatial filtering. 4.3 Cardiac contamination and line noise Figure 2 shows a 5-sec portion of a second EEG time series, five ICA components that represent artifactual sources, and 'corrected' EEG signals obtained by removing these components. Eye blink artifacts at 0.5, 2.0 and 4.7 sec (left) are detected and isolated to ICA component 1 (middle left), even though the training data contains no EOG reference channel. The scalp map of the component captures the spread of EOG activity to frontal sites. 
Component 5 represents horizontal eye movements, while component 2 reveals the presence of small periodic muscle spiking in left frontal channels which is hard to see in the raw data. Line noise has a sub-Gaussian distribution and so could not be clearly isolated by earlier versions of the algorithm [3, 12]. By contrast, the new algorithm effectively concentrates the line noise present in nearly all the channels into ICA component 3. The widespread cardiac contamination in the EEG data (left) is concentrated in ICA component 4. After eliminating these five artifactual components, the 'corrected' EEG data (right) are largely free of these artifacts. 5 Discussion and Conclusions ICA appears to be an effective and generally applicable method for removing known artifacts from EEG records. There are several advantages of the method: (1) ICA is computationally efficient. Although it requires more computation than the algorithm used in [15, 12], the extended ICA algorithm is effective even on large EEG data sets. (2) ICA is generally applicable to removal of a wide variety of EEG artifacts. (3) A simple analysis simultaneously separates both the EEG and its artifacts into independent components based on the statistics of the data, without relying on the availability of 'clean' reference channels. This avoids the problem of mutual contamination between regressing and regressed channels. (4) No arbitrary thresholds (variable across sessions) are needed to determine when regression should be performed. (5) Once the training is complete, artifact-free EEG records can be derived by eliminating the contributions of the artifactual sources. However, the results of ICA are meaningful only when the amount of data and number of channels are large enough. Future work should determine the minimum data length and number of channels needed to remove artifacts of various types. Acknowledgements This report was supported in part by grants from the Office of Naval Research. 
The views expressed in this article are those of the authors and do not reflect the official policy or position of the Department of the Navy, Department of Defense, or the U.S. Government. Dr. McKeown is supported by a grant from the Heart & Stroke Foundation of Ontario. Figure 2: (left) A 5-sec portion of an EEG time series. (center) ICA components accounting for eye movements, cardiac signals, and line noise sources. (right) The same EEG signals 'corrected' for artifacts by removing the five selected components.
References
[1] J.F. Peters (1967). Surface electrical fields generated by eye movement and eye blink potentials over the scalp, J. EEG Technol., 7:27-40.
[2] P.J. Oster & J.A. Stern (1980). Measurement of eye movement: electrooculography. In: Techniques in Psychophysiology, Wiley, Chichester, 275-309.
[3] A.J. Bell & T.J. Sejnowski (1995). An information-maximization approach to blind separation and blind deconvolution, Neural Computation 7:1129-1159.
[4] T.W. Lee and T. Sejnowski (1997). Independent Component Analysis for Sub-Gaussian and Super-Gaussian Mixtures, Proc. 4th Joint Symp. Neural Computation 7:132-9.
[5] P. Berg & M. Scherg (1991). Dipole models of eye movements and blinks, Electroencephalog. clin. Neurophysiol. 79:36-44.
[6] S.A. Hillyard & R. Galambos (1970). Eye-movement artifact in the CNV, Electroencephalog. clin. Neurophysiol. 28:173-182.
[7] R. Verleger, T. Gasser & J. Mocks (1982). Correction of EOG artifacts in event-related potentials of EEG: Aspects of reliability and validity, Psychoph., 19(4):472-80.
[8] J.J. Whitton, F. Lue & H. Moldofsky (1978). A spectral method for removing eye-movement artifacts from the EEG, Electroencephalog. clin. Neurophysiolog. 44:735-41. [9] J.C. Woestenburg, M.N. Verbaten & J.L. Slangen (1983). The removal of the eye-movement artifact from the EEG by regression analysis in the frequency domain, Biological Psychology 16:127-47. [10] T.C. Weerts & P.J. Lang (1973). The effects of eye fixation and stimulus and response location on the contingent negative variation (CNV), Biological Psychology 1(1):1-19. [11] D.A. Overton & C. Shagass (1969). Distribution of eye movement and eye blink potentials over the scalp, Electroencephalog. clin. Neurophysiolog. 27:546. [12] S. Makeig, A.J. Bell, T-P. Jung & T.J. Sejnowski (1996). Independent Component Analysis of Electroencephalographic Data, In: Advances in Neural Information Processing Systems 8:145-51. [13] S. Amari, A. Cichocki & H. Yang (1996). A new learning algorithm for blind signal separation, In: Advances in Neural Information Processing Systems 8:757-63. [14] M. Girolami & C. Fyfe (1997). Generalized Independent Component Analysis through Unsupervised Learning with Emergent Bussgang Properties, In: Proc. IEEE International Conference on Neural Networks, 1788-91. [15] A.J. Bell & T.J. Sejnowski (1995). Fast blind separation based on information theory, In: Proc. Intern. Symp. on Nonlinear Theory and Applications (NOLTA) 1:43-7. [16] S. Makeig, T-P. Jung, D. Ghahremani & T.J. Sejnowski (1996). Independent Component Analysis of Simulated ERP Data, Tech. Rep. INC-9606, Institute for Neural Computation, San Diego, CA. [17] J.L. Kenemans, P. Molenaar, M.N. Verbaten & J.L. Slangen (1991). Removal of the ocular artifact from the EEG: a comparison of time and frequency domain methods with simulated and real data, Psychoph., 28(1):114-21.
1997
Radial Basis Functions: a Bayesian treatment David Barber* Bernhard Schottky Neural Computing Research Group Department of Applied Mathematics and Computer Science Aston University, Birmingham B4 7ET, U.K. http://www.ncrg.aston.ac.uk/ {D.Barber,B.Schottky}@aston.ac.uk Abstract Bayesian methods have been successfully applied to regression and classification problems in multi-layer perceptrons. We present a novel application of Bayesian techniques to Radial Basis Function networks by developing a Gaussian approximation to the posterior distribution which, for fixed basis function widths, is analytic in the parameters. The setting of regularization constants by cross-validation is wasteful as only a single optimal parameter estimate is retained. We treat this issue by assigning prior distributions to these constants, which are then adapted in light of the data under a simple re-estimation formula. 1 Introduction Radial Basis Function networks are popular regression and classification tools [10]. For fixed basis function centers, RBFs are linear in their parameters and can therefore be trained with simple one-shot linear algebra techniques [10]. The use of unsupervised techniques to fix the basis function centers is, however, not generally optimal since setting the basis function centers using density estimation on the input data alone takes no account of the target values associated with that data. Ideally, therefore, we should include the target values in the training procedure [7, 3, 9]. Unfortunately, allowing centers to adapt to the training targets leads to the RBF being a nonlinear function of its parameters, and training becomes more problematic. Most methods that perform supervised training of RBF parameters minimize the *Present address: SNN, University of Nijmegen, Geert Grooteplein 21, Nijmegen, The Netherlands. 
http://www.mbfys.kun.nl/snn/ email: davidb@mbfys.kun.nl training error, or penalized training error in the case of regularized networks [7, 3, 9]. The setting of the associated regularization constants is often achieved by computationally expensive approaches such as cross-validation, which search through a set of regularization constants chosen a priori. Furthermore, much of the information contained in such computation is discarded in favour of keeping only a single regularization constant. A single set of RBF parameters is subsequently found by minimizing the penalized training error with the determined regularization constant. In this work, we assign prior distributions over these regularization constants, both for the hidden-to-output weights and the basis function centers. Together with a noise model, this defines an ideal Bayesian procedure in which the beliefs expressed in the distribution of regularization constants are combined with the information in the data to yield a posterior distribution of network parameters [6]. The beauty of this approach is that none of the information is discarded, in contrast to cross-validation type procedures. Bayesian techniques applied to such non-linear, non-parametric models, however, can also be computationally extremely expensive, as predictions require averaging over the high-dimensional posterior parameter distribution. One approach is to use Markov chain Monte Carlo techniques to draw samples from the posterior [8]. A simpler approach is the Laplace approximation, which fits a Gaussian distribution with mean set to a mode of the posterior, and covariance set to the inverse Hessian evaluated at that mode. This can be viewed as a local posterior approximation, as the form of the posterior away from the mode does not affect the Gaussian fit. 
A third approach, called ensemble learning, also fits a Gaussian, but is based on a less local fit criterion, the Kullback-Leibler divergence [4, 5]. As shown in [1], this method can be applied successfully to multi-layer perceptrons, whereby the KL divergence is an almost analytic quantity in the adaptable parameters. For fixed basis function widths, the KL divergence for RBF networks is completely analytic in the adaptable parameters, leading to a relatively fast optimization procedure. 2 Bayesian Radial Basis Function Networks For an N-dimensional input vector x, we consider RBFs that compute a linear combination of K Gaussian basis functions, f(x, m) = \sum_{l=1}^{K} w_l \exp\{-\lambda_l \|x - c_l\|^2\} (1) where we denote collectively the centers c_1, ..., c_K and weights w_1, ..., w_K by the parameter vector m = [c_1, ..., c_K, w_1, ..., w_K]. We consider the basis function widths \lambda_1, ..., \lambda_K to be fixed although, in principle, they can also be adapted by a similar technique to the one presented below. The data set that we wish to regress is a set of P input-output pairs D = \{x^\mu, y^\mu, \mu = 1...P\}. Assuming that the target outputs y have been corrupted with additive Gaussian noise of variance \beta^{-1}, the likelihood of the data is^1 p(D|m, \beta) = \exp(-\beta E_D)/Z_D, (2) where the training error is defined E_D = \frac{1}{2} \sum_{\mu=1}^{P} (f(x^\mu, m) - y^\mu)^2. (3) To discourage overfitting, we choose a prior regularizing distribution for m, p(m|\alpha) = \exp(-E_m(m))/Z_p (4) where we take E_m(m) = \frac{1}{2} m^T A m for a matrix A of hyperparameters. More complicated regularization terms, such as those that penalize centers that move away from specified points, are easily incorporated in our formalism. For expositional clarity, we deal here with only the simple case of a diagonal regularizer matrix A = \alpha I. ^1 In the following, Z_D, Z_p and Z_F are normalising constants. 
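To make the model concrete, the RBF output (1), training error (3), and prior energy for the diagonal case A = \alpha I can be sketched as follows. This is our own minimal illustration; the function names and the toy data are ours, not the paper's:

```python
import numpy as np

def rbf_forward(x, centers, weights, lambdas):
    """f(x, m) = sum_l w_l * exp(-lambda_l * ||x - c_l||^2), equation (1)."""
    sq_dists = np.sum((centers - x) ** 2, axis=1)   # ||x - c_l||^2 for each l
    return np.dot(weights, np.exp(-lambdas * sq_dists))

def training_error(X, y, centers, weights, lambdas):
    """E_D = (1/2) sum_mu (f(x^mu, m) - y^mu)^2, equation (3)."""
    preds = np.array([rbf_forward(x, centers, weights, lambdas) for x in X])
    return 0.5 * np.sum((preds - y) ** 2)

def prior_energy(centers, weights, alpha):
    """E_m = (alpha/2) * ||m||^2 for the diagonal regularizer A = alpha * I."""
    m = np.concatenate([centers.ravel(), weights])
    return 0.5 * alpha * np.dot(m, m)

# Toy usage: K = 3 Gaussian basis functions in N = 2 dimensions.
rng = np.random.default_rng(1)
centers = rng.standard_normal((3, 2))
weights = rng.standard_normal(3)
lambdas = np.ones(3)
X = rng.standard_normal((10, 2))
y = rng.standard_normal(10)
E_D = training_error(X, y, centers, weights, lambdas)
E_m = prior_energy(centers, weights, alpha=0.5)
```

The negative log of the (unnormalized) posterior (5) is then simply beta * E_D + E_m.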
The conditional distribution p(m|D, \alpha, \beta) is then given by p(m|D, \alpha, \beta) = \exp(-\beta E_D(m) - E_m(m))/Z_F. (5) We choose to model the hyperparameters \alpha and \beta by Gamma distributions, p(\alpha) \propto \alpha^{a-1} e^{-\alpha/b}, p(\beta) \propto \beta^{c-1} e^{-\beta/d}, (6) where a, b, c, d are chosen constants. This completely specifies the joint posterior, p(m, \alpha, \beta|D) = p(m|D, \alpha, \beta) p(\alpha) p(\beta). (7) A Bayesian prediction for a new test point x is then given by the posterior average \langle f(x, m) \rangle_{p(m,\alpha,\beta|D)}. If the centers are fixed, p(w|D, \alpha, \beta) is Gaussian and computing the posterior average is trivial. However, with adaptive centers, the posterior distribution is typically highly complex and computing this average is difficult^2. We describe below approaches that approximate the posterior by a simpler distribution which can then be used to find the Bayesian predictions and error bars analytically. 3 Approximating the posterior 3.1 Laplace's method Laplace's method is an approximation to the Bayesian procedure that fits a Gaussian to the mode m_0 of p(m|D, \alpha, \beta) by extremizing the exponent in (5), T = \frac{\alpha}{2}\|m\|^2 + \beta E_D(m), (8) with respect to m. The mean of the approximating distribution is then set to the mode m_0, and the covariance is taken to be the inverse Hessian around m_0; this is then used to approximately compute the posterior average. This is a local method, as no account is taken of the fit of the Gaussian away from the mode. 3.2 Kullback-Leibler method The Kullback-Leibler divergence between the posterior p(m, \alpha, \beta|D) and an approximating distribution q(m, \alpha, \beta) is defined by KL[q] = \int q(m, \alpha, \beta) \ln \frac{q(m, \alpha, \beta)}{p(m, \alpha, \beta|D)}. (9) KL[q] is zero only if p and q are identical, and is greater than zero otherwise. Since in (5) Z_F is unknown, we can compute the KL divergence only up to an additive constant, L[q] = KL[q] - \ln Z_F. We seek then a posterior approximation of the form q(m, \alpha, \beta) = Q(m) R(\alpha) S(\beta), where Q(m) is Gaussian and the distributions R and S are determined by minimization of the functional L[q] [5]. 
We first consider optimizing L with respect to the mean \bar{m} and covariance C of the Gaussian distribution Q(m) \propto \exp\{-\frac{1}{2}(m - \bar{m})^T C^{-1} (m - \bar{m})\}. Omitting all constant terms and integrating out \alpha and \beta, the Q(m) dependency in L is L[Q(m)] = -\int Q(m) [-\bar{\beta} E_D(m) - \frac{1}{2}\bar{\alpha}\|m\|^2 - \ln Q(m)] dm + const. (10) where \bar{\alpha} = \int \alpha R(\alpha) d\alpha, \bar{\beta} = \int \beta S(\beta) d\beta (11) are the mean values of the hyperparameters. For Gaussian basis functions, the remaining integration in (10) over Q(m) can be evaluated analytically, giving^3 L[Q(m)] = \frac{1}{2}\bar{\alpha}\{tr(C) + \|\bar{m}\|^2\} + \bar{\beta}\langle E_D(m)\rangle_Q - \frac{1}{2}\ln(\det C) + const. (12) where \langle E_D(m)\rangle_Q = \frac{1}{2}\sum_{\mu=1}^{P} \left( (y^\mu)^2 - 2 y^\mu \sum_{j=1}^{K} s_j^\mu + \sum_{k,l=1}^{K} s_{kl}^\mu \right). (13) The analytical formulae for s_j^\mu = \langle w_j \exp\{-\lambda_j \|x^\mu - c_j\|^2\} \rangle_Q (14) and s_{kl}^\mu = \langle w_k w_l \exp\{-\lambda_k \|x^\mu - c_k\|^2\} \exp\{-\lambda_l \|x^\mu - c_l\|^2\} \rangle_Q (15) are straightforward to compute, requiring only Gaussian integration [2]. The values for C and \bar{m} can then be found by optimizing (12). We now turn to the functional optimisation of (9) with respect to R. Integrating out m and \beta leaves, up to a constant, L[R] = \int R(\alpha) \left\{ \alpha \left[ \frac{\|\bar{m}\|^2}{2} + \frac{tr(C)}{2} + \frac{1}{b} \right] - \left[ \frac{K(N+1)}{2} + a - 1 \right] \ln\alpha + \ln R(\alpha) \right\} d\alpha. (16) As the first two terms in (16) constitute the log of a Gamma distribution (6), the functional (16) is optimized by choosing a Gamma distribution for \alpha, R(\alpha) \propto \alpha^{r-1} e^{-\alpha/s}, (17) with \frac{1}{s} = \frac{\|\bar{m}\|^2}{2} + \frac{tr(C)}{2} + \frac{1}{b}, r = \frac{K(N+1)}{2} + a, \bar{\alpha} = r s. (18) The same procedure for S(\beta) yields S(\beta) \propto \beta^{u-1} e^{-\beta/v} (19) with u = \frac{P}{2} + c, \frac{1}{v} = \langle E_D(m)\rangle_Q + \frac{1}{d}, \bar{\beta} = u v, (20) where the averaged training error is given by (13). The optimization of the approximating distribution Q(m) R(\alpha) S(\beta) can then be performed using an iterative procedure in which we first optimize (12) with respect to \bar{m} and C for fixed \bar{\alpha}, \bar{\beta}, and then update \bar{\alpha} and \bar{\beta} according to the re-estimation formulae (18, 20). ^2 The fixed and adaptive center Bayesian approaches are contrasted more fully in [2]. 
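The re-estimation formulae (18) and (20) reduce to two one-line updates, to be alternated with the optimization of (12) over the Gaussian mean and covariance. The following is our own sketch of that step; function and argument names are ours:

```python
def update_alpha(m_bar_sq, trC, K, N, a, b):
    """Re-estimation formula (18): alpha_bar = r * s for the Gamma factor R(alpha).

    m_bar_sq = ||m_bar||^2, trC = tr(C); a, b are the hyperprior constants of (6).
    """
    inv_s = 0.5 * m_bar_sq + 0.5 * trC + 1.0 / b
    r = 0.5 * K * (N + 1) + a
    return r / inv_s

def update_beta(avg_train_err, P, c, d):
    """Re-estimation formula (20): beta_bar = u * v for the Gamma factor S(beta).

    avg_train_err = <E_D(m)>_Q from (13); c, d are the hyperprior constants of (6).
    """
    u = 0.5 * P + c
    inv_v = avg_train_err + 1.0 / d
    return u / inv_v

# Illustrative call with the demonstration's constants (K=6, N=1, P=40,
# a=2, b=1/4, c=4, d=50); the moment values here are made up.
alpha_bar = update_alpha(m_bar_sq=1.2, trC=0.8, K=6, N=1, a=2, b=0.25)
beta_bar = update_beta(avg_train_err=0.05, P=40, c=4, d=50)
```

As a sanity check, when the data-dependent terms vanish the updates return the prior means a*b and c*d, which for the paper's constants are 0.5 and 200.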
After this iterative procedure has converged, we have an approximating distribution of parameters, both for the hidden-to-output weights and the center positions (figure 1(a)). The actual predictions are then given by the posterior average over this distribution of networks. The model averaging effect inherent in the Bayesian procedure produces a final function potentially much more complex than that achievable by a single network. A significant advantage of our procedure over the Laplace procedure is that we can lower bound the model likelihood, \ln p(D|model) \geq -(L + \ln Z_D + \ln Z_p). Hence, decreasing L increases p(D|model). We can use this bound to rank different models, leading to principled Bayesian model selection. ^3 \langle ... \rangle_Q denotes \int Q(m) ... dm. Figure 1: Regressing a surface from 40 noisy training examples. (a) The KL approximate Bayesian treatment fits 6 basis functions to the data. The posterior distribution for the parameters gives rise to a posterior weighted average of a distribution of the 6 Gaussians. We plot here the posterior standard deviation of the centers (center fluctuations) and the mean centers. The widths were fixed a priori using Maximum Likelihood. (b) Fixing a basis function on each training point with fixed widths. The hidden-output weights were determined by cross-validation of the penalised training error. 4 Relation to non-Bayesian treatments One non-Bayesian approach to training RBFs is to minimize the training error (3) plus a regularizing term of the form (8) for fixed centers [7, 3, 9]. In figure 1(b) we fix a center on each training input. For fixed hyperparameters \alpha and \beta, the optimal hidden-to-output weights can then be found by minimizing (8). To set the hyperparameters, we iterate this procedure using cross-validation. This results in a single estimate for the parameters m_0 which is then used for predictions f(x, m_0). 
In figure 1, both the Bayesian adaptive center and the fixed center methods have similar performance in terms of test error on this problem. However, the parsimonious representation of the data by the Bayesian adaptive center method may be advantageous if interpreting the data is important. In principle, in the Bayesian approach, there is no need to carry out a cross-validation type procedure for the regularization parameters \alpha, \beta. After deciding on a particular Bayesian model with suitable hyperprior constants (here a, b, c, d), our procedure will combine these beliefs about the regularity of the RBF with the dataset in a principled manner, returning a-posteriori probabilities for the values of the regularization constants. Error bars on the predictions are easily calculated, as the posterior distribution quantifies our uncertainty in the parameter estimates. One way of viewing the connection between the CV and Bayesian approaches is to identify the a-priori choice of CV regularization coefficients \alpha_i that one wishes to examine as a uniform prior over the set \{\alpha_i\}. The posterior regularizer distribution is then a delta peak centred at the \alpha^* with minimal CV error. This delta peak represents a loss of information regarding the performance of all the other networks trained with \alpha_i \neq \alpha^*. In contrast, in our Bayesian approach we assign a continuous prior distribution on \alpha, which is updated according to the evidence in the data. Any loss of information then occurs in approximating the resulting posterior distribution. Figure 2: Minimal KL Gaussian fit (a), Laplace Gaussian (b), and a non-Bayesian regularized procedure (c) on regressing with 6 Gaussian basis functions. The training points are labelled by crosses and the target function g is given by the solid lines. 
For both (a) and (b), the mean prediction is given by the dashed lines, and standard errors are given by the dots. (a) Approximate Bayesian solution based on the Kullback-Leibler divergence. The regularization constant \alpha and inverse noise level \beta are adapted as described in the text. (b) Laplace method based on equation (8). Both \alpha and \beta are set to the mean of the hyperparameter distributions (6). The mean prediction is given by averaging over the locally approximated posterior. Note that the error bars are somewhat large, suggesting that the local posterior mass has been underestimated. (c) The broken line is the Laplace solution without averaging over the posterior, showing much greater variation than the averaged prediction in (b). The dashed line corresponds to fixing the basis function centers at each data point, and estimating the regularization constants \alpha by cross-validation. 5 Demonstration We apply the above outlined Bayesian framework to a simple one-dimensional regression problem. The function to be learned is given by (21) and is plotted in figure 2. The training patterns are sampled uniformly between [-4, 4] and the output is corrupted with additive Gaussian noise of variance \sigma^2 = 0.005. The number of basis functions is K = 6, giving a reasonably flexible model for this problem. In figure 2, we compare the Bayesian approaches (a), (b) to the non-Bayesian approach (c). In this demonstration, the basis function widths were chosen by penalised training error minimization and fixed throughout all experiments. For the Bayesian procedures, we chose hyperprior constants a = 2, b = 1/4, c = 4, d = 50, corresponding to mean values \bar{\alpha} = 0.5 and \bar{\beta} = 200. In (c), we plot a more conventional approach using cross-validation to set the regularization constant. A useful feature of the Bayesian approaches lies in the principled theory for the error bars. 
In (c), although we know the test error for each regularization constant in the set of constants we choose to examine, we do not know any principled procedure for using these values for error bar assessment. 6 Conclusions We have incorporated Radial Basis Functions within a Bayesian framework, arguing that the selection of regularization constants by non-Bayesian methods such as cross-validation is wasteful of the information contained in our prior beliefs and the data set. Our framework encompasses flexible priors such as hard-assigning a basis function center to each data point or penalizing centers that wander far from pre-assigned points. We have developed an approximation to the ideal Bayesian procedure by fitting a Gaussian distribution to the posterior based on minimizing the Kullback-Leibler divergence. This is an objectively better and more controlled approximation to the Bayesian procedure than the Laplace method. Furthermore, the KL divergence is an analytic quantity for fixed basis function widths. This framework also includes the automatic adaptation of regularization constants under the influence of data and provides a rigorous lower bound on the likelihood of the model. Acknowledgements We would like to thank Chris Bishop and Chris Williams for useful discussions. BS thanks the Leverhulme Trust for support (F/250/K). References [1] D. Barber and C. M. Bishop. On computing the KL divergence for Bayesian Neural Networks. Technical report, Neural Computing Research Group, Aston University, Birmingham, 1998. See also D. Barber and C. M. Bishop, these proceedings. [2] D. Barber and B. Schottky. Bayesian Radial Basis Functions. Technical report, Neural Computing Research Group, Aston University, Birmingham, 1998. [3] C. M. Bishop. Improving the Generalization Properties of Radial Basis Function Networks. Neural Computation, 4(3):579-588, 1991. [4] G. E. Hinton and D. van Camp. 
Keeping neural networks simple by minimizing the description length of the weights. In Proceedings of the Annual ACM Workshop on Computational Learning Theory (COLT '93), 1993. [5] D. J. C. MacKay. Developments in probabilistic modelling with neural networks: ensemble learning. In Neural Networks: Artificial Intelligence and Industrial Applications. Proceedings of the 3rd Annual Symposium on Neural Networks, Nijmegen, Netherlands, 14-15 September 1995, pages 191-198. Springer. [6] D. J. C. MacKay. Bayesian Interpolation. Neural Computation, 4(3):415-447, 1992. [7] J. Moody and C. J. Darken. Fast Learning in Networks of Locally-Tuned Processing Units. Neural Computation, 1:281-294, 1989. [8] R. M. Neal. Bayesian Learning for Neural Networks. Springer, New York, 1996. Lecture Notes in Statistics 118. [9] M. J. L. Orr. Regularization in the Selection of Radial Basis Function Centers. Neural Computation, 7(3):606-623, 1995. [10] M. J. L. Orr. Introduction to Radial Basis Function Networks. Technical report, Centre for Cognitive Science, University of Edinburgh, Edinburgh, EH8 9LW, U.K., 1996.
1997
Blind Separation of Radio Signals in Fading Channels Kari Torkkola Motorola, Phoenix Corporate Research Labs, 2100 E. Elliot Rd, MD EL508, Tempe, AZ 85284, USA email: A540AA@email.mot.com Abstract We apply information maximization / maximum likelihood blind source separation [2, 6] to complex valued signals mixed with complex valued nonstationary matrices. This case arises in radio communications with baseband signals. We incorporate known source signal distributions in the adaptation, thus making the algorithms less "blind". This results in a drastic reduction of the amount of data needed for successful convergence. Adaptation to rapidly changing signal mixing conditions, such as to fading in mobile communications, now becomes feasible, as demonstrated by simulations. 1 Introduction In SDMA (spatial division multiple access) the purpose is to separate the radio signals of interfering users (either intentional or accidental) from each other on the basis of the spatial characteristics of the signals, using smart antennas, array processing, and beamforming [5, 8]. Supervised methods typically use a variant of LMS (least mean squares), either gradient based or algebraic, to adapt the coefficients that describe the channels or their inverses. This is usually a robust way of estimating the channel, but a part of the signal is wasted as predetermined training data, and the methods might not be fast enough for rapidly varying fading channels. Unsupervised methods rely either on information about the antenna array manifold, or on properties of the signals. The former approaches might require calibrated antenna arrays or special array geometries. Less restrictive methods use signal properties only, such as constant modulus, finite alphabet, spectral self-coherence, or cyclostationarity. Blind source separation (BSS) techniques typically rely only on source signal independence and non-Gaussianity assumptions. 
Our aim is to separate simultaneous radio signals occupying the same frequency band, more specifically, radio signals that carry digital information. Since linear mixtures of antenna signals end up being linear mixtures of (complex) baseband signals due to the linearity of the downconversion process, we will apply BSS at the baseband stage of the receiver. The main contribution of this paper is to show that by making better use of the known signal properties, it is possible to devise algorithms that adapt much faster than algorithms that rely only on weak assumptions, such as source signal independence. We will first discuss how the probability density functions (pdf) of baseband DPSK signals can be modelled in a way that can efficiently be used in blind separation algorithms. We will incorporate those models into information maximization and into maximum likelihood approaches [2, 6]. We will then continue with the maximum likelihood approach and other modulation techniques, such as QAM. Finally, we will show in simulations how this approach results in an adaptation process that is fast enough for fading channels. 2 Models of baseband signal distributions In digital communications the binary (or n-ary) information is transmitted as discrete combinations of the amplitude and/or the phase of the carrier signal. After downconversion to baseband, the instantaneous amplitude of the carrier can be observed as the length of a complex valued sample of the baseband signal, and the phase of the carrier is discernible as the phase angle of the same sample. The possible combinations, which depend on the modulation method employed, are called symbol constellations. N-QAM (quadrature amplitude modulation) utilizes both the amplitude and the phase, whereby the baseband signals can only take one of N possible locations on a grid on the complex plane. 
In N-PSK (phase shift keying) the amplitude of the baseband signal stays constant, but the phase can take any of N discrete values. In DPSK (differential phase shift keying) the information is encoded as the difference between the phases of two consecutive transmitted symbols. The phase can thus take any value, and since the amplitude remains constant, the baseband signal distribution is a circle on the complex plane. Information maximization BSS requires a nonlinear function that models the cumulative density function (cdf) of the data. This function and its derivative need to be differentiable. In the case of a circular complex distribution with uniformly distributed phase, there is only one important direction of deviation, the radial direction. A smooth cdf G for a circular distribution at the unit circle can be constructed using the hyperbolic tangent function as G(z) = \tanh(w(|z| - 1)) (1) and the pdf, differentiated in the radial direction, that is, with respect to |z|, is g(z) = \frac{\partial}{\partial |z|} \tanh(w(|z| - 1)) = w(1 - \tanh^2(w(|z| - 1))) (2) where z = x + iy is a complex valued variable, and the parameter w controls the steepness of the slope of the tanh function. Note that this is in contrast to the more commonly used coordinate axis directions for differentiating and integrating to get the pdf from the cdf and vice versa. These functions are plotted in Fig. 1. Figure 1: Radial tanh with w = 2.0 (equations 1 and 2): (a) CDF, (b) PDF. Note that we have not been worrying about the pdf integrating to unity. Thus we could leave the first multiplicative constant w out of the definition of g. Scaling will not be important for our purposes of using these functions as the nonlinearities in the information maximization BSS. Note also that when the steepness w approaches infinity, the densities approach the ideal density of a DPSK source, the unit circle. Many other equally good choices are possible where the ideal density is reached as a limit of a parameter value. 
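The radial cdf and pdf of equations (1) and (2) are straightforward to implement; the following is a minimal sketch (function names are ours):

```python
import numpy as np

def radial_cdf(z, w=2.0):
    """G(z) = tanh(w * (|z| - 1)), equation (1): a smooth radial 'cdf'
    for a circular distribution concentrated on the unit circle."""
    return np.tanh(w * (np.abs(z) - 1.0))

def radial_pdf(z, w=2.0):
    """g(z) = d/d|z| G(z) = w * (1 - tanh^2(w * (|z| - 1))), equation (2)."""
    t = np.tanh(w * (np.abs(z) - 1.0))
    return w * (1.0 - t ** 2)
```

The pdf peaks on the unit circle, where it equals w, and decays both inward and outward; increasing w sharpens the ridge toward the ideal DPSK density.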
For example, the radial section of the circular "ridge" of the pdf could be a Gaussian. 3 The information maximization adaptation equation The information maximization adaptation equation to learn the unmixing matrix W using the natural gradient is [2] \Delta W \propto (\hat{y} u^T + I) W, where \hat{y}_j = \frac{\partial}{\partial u_j} \ln \frac{\partial y_j}{\partial u_j}. (3) The vector u = Wx denotes a time sample of the separated sources, x denotes the corresponding time sample of the observed mixtures, and y_j is the nonlinear function approximating the cdf of the data, which is applied to each component of u. Now we can insert (1) into y_j. Making use of \partial|z|/\partial z = z/|z|, this yields for \hat{y}_j: \hat{y}_j = -2w \tanh(w(|u_j| - 1)) \frac{u_j}{|u_j|}. (4) When (4) is inserted into (3) we get \Delta W \propto \left( I - 2 \left( w_j \tanh(w_j(|u_j| - 1)) \frac{u_j}{|u_j|} \right)_j u^H \right) W (5) where (\cdot)_j denotes a vector with elements of varying j. Here, we have replaced the transpose operator by the hermitian operator H, since we will be processing complex data. We have also added a subscript to w, as these parameters can be learned, too. We will not show the corresponding adaptation equations due to lack of space. 4 Connection to the maximum likelihood approach Pearlmutter and Parra have shown that (3) can be derived from the maximum likelihood approach to density estimation [6]. The same fact has also been pointed out by others, for example, by Cardoso [3]. We will not repeat their straightforward derivation, but the final adaptation equation is of the following form: \Delta W \propto \left( \left( \frac{f'_j(u_j; w_j)}{f_j(u_j; w_j)} \right)_j u^T + I \right) W, (6) where u = Wx are the sources separated from the mixtures x, and f_j(u_j; w_j) is the pdf of source j parametrized by w_j. This is exactly the form of Bell and Sejnowski when f_j is taken to be the derivative of the necessary nonlinearity g_j, which was assumed to be "close" to the true cdf of the source. 
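As a concrete illustration, one step of the natural-gradient update (5) for complex DPSK sources takes only a few lines of NumPy. This is our own sketch, not the author's code; the function name, learning rate, and steepness value are illustrative assumptions:

```python
import numpy as np

def dpsk_infomax_step(W, x, w=2.0, lr=0.01):
    """One natural-gradient step of equation (5) for complex DPSK sources.

    W : (K, K) complex unmixing matrix; x : length-K complex mixture sample.
    """
    u = W @ x                                   # separated sample u = Wx
    # phi_j = 2 * w * tanh(w * (|u_j| - 1)) * u_j / |u_j|
    phi = 2.0 * w * np.tanh(w * (np.abs(u) - 1.0)) * u / np.abs(u)
    dW = (np.eye(len(u)) - np.outer(phi, u.conj())) @ W   # (I - (phi)_j u^H) W
    return W + lr * dW

# Toy usage: one step on a mixture of two unit-modulus source samples.
rng = np.random.default_rng(0)
s = np.exp(1j * 2 * np.pi * rng.random(2))      # two unit-circle samples
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
W = dpsk_infomax_step(np.eye(2, dtype=complex), A @ s)
```

Note that when an output already lies exactly on the unit circle, phi vanishes and only the I term of the gradient remains, which simply rescales W.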
Thus the information maximization approach makes implicit assumptions about the cdf's of the sources in the form of the nonlinear squashing function, and does implicit density estimation, whereas in the ML approach the density assumptions are made explicit. This fact makes it more intuitive and lucid to derive the adaptation for other forms of densities, and also to extend it to complex valued variables. Now, we can use the circular pdf's (2) depicted in Fig. 1 as the densities f_j (omitting scaling), f_j(u_j; w_j) = 1 - \tanh^2(w_j(|u_j| - 1)), where the steepness w_j acts as the single parameter of the density. Now we need to compute its derivative, f'_j(u_j; w_j) = \frac{\partial}{\partial u_j} f_j(u_j; w_j) = -2 \tanh(w_j(|u_j| - 1)) f_j(u_j; w_j) w_j \frac{u_j}{|u_j|}. (7) Inserting this into (6) and changing transpose operators into hermitians yields \Delta W \propto \left( I - 2 \left( w_j \tanh(w_j(|u_j| - 1)) \frac{u_j}{|u_j|} \right)_j u^H \right) W, (8) which is exactly the information maximization rule (5). Notice that this time we did not have to ponder what would be an appropriate way to construct the cdf from the pdf for complex valued distributions. 5 Modifications for QAM and other signal constellations So far we have only looked at signals that lie on the unit circle, or that have a constant modulus. Now we will take a look at other modulation techniques, in which the alphabet is constructed as discrete points on the complex plane. An example is QAM (quadrature amplitude modulation), in which the signal alphabet is a regular grid. For example, in 4-QAM, the alphabet could be A_4 = \{1+i, -1+i, -1-i, 1-i\}, or any scaled version of A_4. In the ideal pdf of 4-QAM, each symbol is represented just as a point. Again, we can construct a smoothed version of the ideal pdf as the sum of "bumps" over all of the alphabet, where the ideal pdf is approached by increasing w. 
g(u) = \sum_k (1 - \tanh^2(w_k |u - u_k|)). (9) Now the density for each source j will be f_j(u_j; w_j) = \sum_k (1 - \tanh^2(w_k |u_j - u_k|)), (10) where w_j is now a vector of parameters w_k. In practice each w_k would be equal, in which case a single parameter w will suffice. This density function could now be inserted into (6), resulting in the weight update equation. However, since f_j(u_j; w_j) is a sum of multiple components, f'/f will not have a particularly simple form. In essence, for each sample to be processed, we would need to evaluate all the components of the pdf model of the constellation. This can be avoided by evaluating only the component of the pdf corresponding to that symbol of the alphabet u_c which is nearest to the current separated sample u. This is a very good approximation when w is large. But the approximation does not even have to be a good one when w is small, since the whole purpose of using "wide" pdf components is to be able to evaluate the gradients on the whole complex plane. Figure 2 depicts examples of this approximation with two different values of w. The discontinuities are visible at the real and imaginary axes for the smaller w. Figure 2: A piecewise continuous PDF for a 4-QAM source using the tanh function: (a) w = 1.0, (b) w = 5.0. Thus for 4-QAM, the complex plane will be divided into 4 quadrants, each having its own adaptation rule corresponding to the single pdf component in that quadrant. Evaluating (6) for each component of the sum gives \Delta W \propto \left( I - 2 \left( w_k \tanh(w_k |u_j - u_k|) \frac{u_j - u_k}{|u_j - u_k|} \right)_j u^H \right) W (11) for each symbol k of the alphabet, or for the corresponding location u_k on the complex plane. This equation can be applied as such when the baseband signal is sampled at the symbol rate. With oversampling, it may be necessary to include in the pdf model the transition paths between the symbols, too. 6 Practical simplifications To be able to better vectorize the algorithm, it is practical to accumulate \Delta W from a number of samples before updating W. 
This amounts to computing an expectation of \Delta W over a number of, say, 10-500 samples of the mixtures. Looking at the DPSK case, (5) or (8), the expectation of |u_j| in the denominator equals one "near" convergence, since we assume baseband signals that are distributed on the unit circle. Also, near the solution we can assume that the separated outputs u_j are close to the true distribution, the exact unit circle, which can be derived from f_j by increasing its steepness. At the limit the tanh equals the sign function, and the whole adaptation, ignoring scaling, becomes \Delta W \propto \left( I - \left( \mathrm{sign}(|u_j| - 1) \, u_j \right)_j u^H \right) W. (12) However, this simplification can only be used when W is not too far off from the correct solution. This is especially true when the number of available samples of the mixtures is small. The smooth tanh is needed in the beginning of the adaptation to give the correct direction to the gradient, since the pdfs of the outputs u_j are then far from the ideal ones. 7 Performance with static and fading signals We have tested the performance of the proposed algorithm both with static and dynamic (changing) mixing conditions. In the static case with four DPSK signals (8x oversampled) mixed with random matrices, the algorithm needs only about 80 sample points (corresponding to 10 symbols) of the mixtures to converge to a separating solution, whereas a more general algorithm, such as [4], needs about 800-1200 samples for convergence. We attribute this improvement to making much better use of the baseband signal distributions. In mobile communications the signals are subject to fading. If there is no direct line of sight from the transmitter to the receiver, only multiple reflected and diffracted signal components reach the receiver. When either the receiver or the transmitter is moving, for example, in an urban environment, these components change very rapidly. If the phases of the carrier signals in these components are aligned, the components add constructively at the receiver. 
If the phases of the carriers are 180 degrees off, the components add destructively. Note that a half-wavelength difference in the lengths of the paths of the received components corresponds to a 180 degree phase shift. This is only about 0.17 m at 900 MHz. Since this small a spatial difference can cause the signal to change from constructive interference to a null received signal, the result is that both the amplitude and the phase of the received signal vary seemingly randomly, at a rate that is proportional to the relative speed of the transmitter and the receiver. The amplitude of the received signal follows a Rayleigh distribution, hence the name Rayleigh fading. As an example, Figure 3 depicts a 0.1 second fragment of the amplitude of a fading channel.

Figure 3: Amplitude (in dB) of a fading radio channel corresponding to a vehicle speed of 60 mph, when the carrier is 900 MHz. Horizontal axis is time in seconds.

Blind Separation of Radio Signals in Fading Channels 761

With fading sources, the problem is to be able to adapt to the changing conditions, keeping up with the fading rate. In the signal of Fig. 3 it takes less than 5 milliseconds to move from a peak of the amplitude into a deep fade. Assuming a symbol rate of 20000 symbols/second, this corresponds to a mere 100 symbols during this change. We simulated again DPSK sources, oversampling by 8 relative to the symbol rate. The received sampled mixtures are

(13) x_i[n] = Σ_j f_ij[n] s_j[n] + n_i[n],

where s_j[n] are the source signals, f_ij[n] represents the fading channel from transmitter j to receiver i, and n_i[n] represents the noise observed by receiver i. In our experiments, we used a sliding window of 80 samples centered at the current sample.
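A minimal sketch of such a sliding-window tracker. The step size `mu`, the function names, and the `update_step` callable are assumptions standing in for the accumulated gradient of the adaptation rule, not the paper's exact algorithm.

```python
import numpy as np

def separate_fading(X, W0, update_step, window=80, mu=0.01):
    """Track a time-varying separating matrix over complex mixtures X.

    X           : (n_sensors, n_samples) complex baseband mixtures
    W0          : initial separating matrix
    update_step : callable mapping a block of separated samples U to an
                  averaged gradient direction (placeholder for Eq. (11))
    window      : samples per sliding window (80 in the experiments above)
    """
    W = W0.astype(complex).copy()
    trajectory = []
    for t in range(X.shape[1] - window + 1):
        U = W @ X[:, t:t + window]          # separate the current window
        W = W + mu * (update_step(U) @ W)   # one gradient step per slide
        trajectory.append(W.copy())         # W follows the fading mixing
    return trajectory
```

As a placeholder step one can use, e.g., a crude decorrelation direction `lambda U: np.eye(len(U)) - U @ U.conj().T / U.shape[1]`; the constants mirror the text (window of 80 samples, slid one sample at a time).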
The weight matrix update (the gradient) was calculated using all the samples of the window, the weight matrix was updated, the window was slid one sample forward, and the same was repeated. Using this technique we were able to keep up with the fading rate corresponding to a 60 mph relative speed of the transmitter and the receiver. Figure 4 depicts how the algorithm tracks the fading channels in the case of three simultaneous source signals.

Figure 4: Separation of three signals subject to fading channels. Top graph: the real parts of 16 independent fading channels. 2nd graph: the inverse of the instantaneous fading conditions (only the real part is depicted); this is one example of an ideal separation solution. 3rd graph: the separation solution tracked by the algorithm (only the real part is depicted). Bottom graph: the resulting signal/interference (S/I) ratio in dB for each of the four separated source signals. Horizontal axis is samples; 16000 samples (8 x oversampled) corresponds to 0.1 seconds.

On the average, the S/I to start with is zero. The average output S/I is 20 dB for the worst of the three separated signals. Since the mixing is now dynamic, the instantaneous mixing matrix, as determined by the instantaneous fades, can occasionally be singular and cannot be inverted. Thus the signals at that instance cannot be separated. In our 0.1 second test signal this occurred four times in the three source signal case (9 independent fading paths), at which instances the output S/I bounced to or near zero momentarily for one or more of the separated signals.

762 K. Torkkola
Durations of these instances are short, lasting about 15 symbols and covering about 3 per cent of the total signal time.

8 Related work and discussion

Although the field of blind source separation started around 1985, rather surprisingly no application to radio communications has yet emerged. Most source separation algorithms are based on higher-order statistics, and these should be relatively straightforward to generalize to complex-valued baseband data. Perhaps the main reason is that all theoretical work has concentrated on the case of static mixing, not on the dynamic case. Many communications channels are dynamic in nature, and thus rapidly adapting methods are necessary. Making use of all available knowledge of the sources, in this case the pdf's of the source signals, allows successful adaptation based on a very small number of samples, much smaller than by just incorporating the coarse shapes of the pdf's into the algorithm. It is not unreasonable to presume this knowledge; on the contrary, the modulation method of a communications system must certainly be known. To our knowledge, no successful blind separation of signals subject to rapidly varying mixing conditions, such as fading, has been reported in the literature. Different techniques applied to the separation of various simulated radio signals under static mixing conditions have been described, for example, in [9, 4]. The maximum likelihood method reported recently by Yellin and Friedlander [9] seems to be the closest to our approach, but they only apply it to simulated baseband radio signals with static mixing conditions. It must also be noted that channel time dispersion is not taken into account in our current simulations. This is valid only in cases where the delay spread is short compared to the inverse of the signal bandwidths.
If this is not a valid assumption, separation techniques for convolutive mixtures, such as in [7] or [1], need to be combined with the methods developed in this paper.

References

[1] S. Amari, S. Douglas, A. Cichocki, and H. H. Yang. Multichannel blind deconvolution and equalization using the natural gradient. In Proc. 1st IEEE Signal Processing Workshop on Signal Processing Advances in Wireless Communications, pages 101-104, Paris, France, April 16-18 1997.
[2] A. Bell and T. Sejnowski. An information-maximisation approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, 1995.
[3] J.-F. Cardoso. Infomax and maximum likelihood for source separation. IEEE Letters on Signal Processing, 4(4):112-114, April 1997.
[4] J.-F. Cardoso and B. Laheld. Equivariant adaptive source separation. IEEE Transactions on Signal Processing, 44(12):3017-3030, December 1996.
[5] A. Paulraj and C. B. Papadias. Array processing in mobile communications. In Handbook of Signal Processing. CRC Press, 1997.
[6] B. A. Pearlmutter and L. C. Parra. A context-sensitive generalization of ICA. In International Conference on Neural Information Processing, Hong Kong, Sept. 24-27 1996. Springer.
[7] K. Torkkola. Blind separation of convolved sources based on information maximization. In IEEE Workshop on Neural Networks for Signal Processing, pages 423-432, Kyoto, Japan, September 4-6 1996.
[8] A.-J. van der Veen and A. Paulraj. An analytical constant modulus algorithm. IEEE Transactions on Signal Processing, 44(5), May 1996.
[9] D. Yellin and B. Friedlander. A maximum likelihood approach to blind separation of narrowband digital communication signals. In Proc. 30th Asilomar Conf. on Signals, Systems, and Computers, 1996.
1997
Asymptotic Theory for Regularization: One-Dimensional Linear Case

Petri Koistinen
Rolf Nevanlinna Institute, P.O. Box 4, FIN-00014 University of Helsinki, Finland. Email: Petri.Koistinen@rni.helsinki.fi

Abstract

The generalization ability of a neural network can sometimes be improved dramatically by regularization. To analyze the improvement one needs more refined results than the asymptotic distribution of the weight vector. Here we study the simple case of one-dimensional linear regression under quadratic regularization, i.e., ridge regression. We study the random design, misspecified case, where we derive expansions for the optimal regularization parameter and the ensuing improvement. It is possible to construct examples where it is best to use no regularization.

1 INTRODUCTION

Suppose that we have available training data (X_1, Y_1), ..., (X_n, Y_n) consisting of pairs of vectors, and we try to predict Y_i on the basis of X_i with a neural network with weight vector w. One popular way of selecting w is by the criterion

(1) (1/n) Σ_{i=1}^n ℓ(X_i, Y_i, w) + λ Q(w) = min!,

where the loss ℓ(x, y, w) is, e.g., the squared error ||y − g(x, w)||², the function g(·, w) is the input/output function of the neural network, the penalty Q(w) is a real function which takes on small values when the mapping g(·, w) is smooth and high values when it changes rapidly, and the regularization parameter λ is a nonnegative scalar (which might depend on the training sample). We refer to the setup (1) as (training with) regularization, and to the same setup with the choice λ = 0 as training without regularization. Regularization has been found to be very effective for improving the generalization ability of a neural network, especially when the sample size n is of the same order of magnitude as the dimensionality of the parameter vector w; see, e.g., the textbooks (Bishop, 1995; Ripley, 1996).
In this paper we deal with asymptotics in the case where the architecture of the network is fixed but the sample size grows. To fix ideas, let us assume that the training data is part of an i.i.d. (independent, identically distributed) sequence (X, Y); (X_1, Y_1), (X_2, Y_2), ... of pairs of random vectors, i.e., for each i the pair (X_i, Y_i) has the same distribution as the pair (X, Y) and the collection of pairs is independent (X and Y can be dependent). Then we can define the (prediction) risk of a network with weights w as the expected value

(2) r(w) := E ℓ(X, Y, w).

Let us denote the minimizer of (1) by w_n(λ), and a minimizer of the risk r by w*. The quantity r(w_n(λ)) is the average prediction error for data independent of the training sample. This quantity r(w_n(λ)) is a random variable which describes the generalization performance of the network: it is bounded below by r(w*), and the more concentrated it is about r(w*), the better the performance. We will quantify this concentration by a single number, the expected value E r(w_n(λ)). We are interested in quantifying the gain (if any) in generalization for training with versus training without regularization, defined by

(3) E r(w_n(0)) − E r(w_n(λ)).

When regularization helps, this is positive. However, relatively little can be said about the quantity (3) without specifying in detail how the regularization parameter is determined. We show in the next section that provided λ converges to zero sufficiently quickly (at the rate o_p(n^{-1/2})), then E r(w_n(0)) and E r(w_n(λ)) are equal to leading order. It turns out that the optimal regularization parameter resides in this asymptotic regime. For this reason, delicate analysis is required in order to get an asymptotic approximation for (3).
In this article we derive the needed asymptotic expansions only for the simplest possible case: one-dimensional linear regression where the regularization parameter is chosen independently of the training sample.

2 REGULARIZATION IN LINEAR REGRESSION

We now specialize the setup (1) to the case of linear regression and a quadratic smoothness penalty, i.e., we take ℓ(x, y, w) = [y − x^T w]² and Q(w) = w^T R w, where now y is scalar, x and w are vectors, and R is a symmetric, positive definite matrix. It is well known (and easy to show) that then the minimizer of (1) is

(4) w_n(λ) = [ (1/n) Σ_{i=1}^n X_i X_i^T + λR ]^{-1} (1/n) Σ_{i=1}^n X_i Y_i.

This is called the generalized ridge regression estimator, see, e.g., (Titterington, 1985); ridge regression corresponds to the choice R = I, see (Hoerl and Kennard, 1988) for a survey. Notice that (generalized) ridge regression is usually studied in the fixed design case, where the X_i's are nonrandom. Further, it is usually assumed that the model is correctly specified, i.e., that there exists a parameter such that Y_i = X_i^T w* + ε_i, and such that the distribution of the noise term ε_i does not depend on X_i. In contrast, we study the random design, misspecified case. Assuming that E ||X||² < ∞ and that E [XX^T] is invertible, the minimizer of the risk (2) and the risk itself can be written as

(5) w* = A^{-1} E [XY], with A := E [XX^T],
(6) r(w) = r(w*) + (w − w*)^T A (w − w*).

296 P. Koistinen

If Z_n is a sequence of random variables, then the notation Z_n = o_p(n^{-a}) means that n^a Z_n converges to zero in probability as n → ∞. For this notation and the mathematical tools needed for the following proposition see, e.g., (Serfling, 1980, Ch. 1) or (Brockwell and Davis, 1987, Ch. 6).

Proposition 1 Suppose that E Y⁴ < ∞, E ||X||⁴ < ∞ and that A = E [XX^T] is invertible. If λ = o_p(n^{-1/2}), then both sqrt(n)(w_n(0) − w*) and sqrt(n)(w_n(λ) − w*) converge in distribution to N(0, C), a normal distribution with mean zero and covariance matrix C.
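The estimator (4) is a one-line computation; the sketch below transcribes it directly (the function name and the default R = I are the only additions).

```python
import numpy as np

def ridge(X, y, lam, R=None):
    """Generalized ridge regression estimator of Eq. (4):
        w_n(lam) = [ (1/n) sum X_i X_i^T + lam R ]^{-1} (1/n) sum X_i Y_i.

    X : (n, d) design matrix, y : (n,) responses, lam >= 0,
    R : symmetric positive definite penalty matrix (R = I by default,
        which gives ordinary ridge regression).
    """
    n, d = X.shape
    if R is None:
        R = np.eye(d)
    A_hat = X.T @ X / n          # (1/n) sum X_i X_i^T
    b_hat = X.T @ y / n          # (1/n) sum X_i Y_i
    return np.linalg.solve(A_hat + lam * R, b_hat)
```

With lam = 0 this reduces to ordinary least squares, and for very large lam the estimate is shrunk toward zero.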
The previous proposition also generalizes to the nonlinear case (under more complicated conditions). Given this proposition, it follows (under certain additional conditions) by Taylor expansion that both E r(w_n(λ)) − r(w*) and E r(w_n(0)) − r(w*) admit the expansion β₁ n^{-1} + o(n^{-1}) with the same constant β₁. Hence, in the regime λ = o_p(n^{-1/2}) we need to consider higher order expansions in order to compare the performance of w_n(λ) and w_n(0).

3 ONE-DIMENSIONAL LINEAR REGRESSION

We now specialize the setting of the previous section to the case where x is scalar. Also, from now on, we only consider the case where the regularization parameter for given sample size n is deterministic; especially, λ is not allowed to depend on the training sample. This is necessary, since the coefficients in the following type of asymptotic expansions depend on the details of how the regularization parameter is determined. The deterministic case is the easiest one to analyze. We develop asymptotic expansions for the criterion

(7) J_n(k) := E r(w_n(k)) − r(w*),

where now the regularization parameter k is deterministic and nonnegative. The expansions we get turn out to be valid uniformly for k ≥ 0. We then develop asymptotic formulas for the minimizer of J_n, and also for J_n(0) − inf J_n. The last quantity can be interpreted as the average improvement in generalization performance gained by an optimal level of regularization, when the regularization constant is allowed to depend on n but not on the training sample. From now on we take Q(w) = w² and assume that A = E X² = 1 (which could be arranged by a linear change of variables). Referring back to formulas in the previous section, we see that

(8) r(w_n(k)) − r(w*) = (V̄_n − k w*)² / (Ū_n + 1 + k)² =: h(Ū_n, V̄_n, k),

whence J_n(k) = E h(Ū_n, V̄_n, k), where we have introduced the function h (used heavily in what follows) as well as the arithmetic means

(9) Ū_n := (1/n) Σ_{i=1}^n U_i, with U_i := X_i² − 1,
(10) V̄_n := (1/n) Σ_{i=1}^n V_i, with V_i := X_i Y_i − w* X_i².

For convenience, also define U := X² − 1 and V := XY − w* X².
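The criterion J_n(k) = E h(Ū_n, V̄_n, k) can be estimated by direct simulation. The sketch below is a hedged illustration only: it assumes the well-specified Gaussian model X ~ N(0,1), Y = w*X + ε with w* = 1 (the example used later in the paper), and the function names are ours.

```python
import math
import random

def h(u, v, k, w_star=1.0):
    """The function h of Eq. (8)."""
    return (v - k * w_star) ** 2 / (u + 1 + k) ** 2

def estimate_Jn(k, n=8, reps=2000, w_star=1.0, seed=0):
    """Monte Carlo estimate of J_n(k) = E h(Ubar_n, Vbar_n, k) under the
    assumed Gaussian, well-specified model (illustrative only)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(reps):
        ubar = vbar = 0.0
        for _ in range(n):
            x = rng.gauss(0.0, 1.0)
            y = w_star * x + rng.gauss(0.0, 1.0)
            ubar += x * x - 1.0              # U_i = X_i^2 - 1
            vbar += x * y - w_star * x * x   # V_i = X_i Y_i - w* X_i^2
        acc += h(ubar / n, vbar / n, k)
    return acc / reps
```

Since h is a ratio of squares, every estimate is nonnegative; far-too-large regularization (large k) should give a visibly worse value than a well-chosen one.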
Notice that U; U_1, U_2, ... are zero-mean i.i.d. random variables, and that V; V_1, V_2, ... satisfy the same conditions. Hence Ū_n and V̄_n converge to zero, and this leads to the idea of using the Taylor expansion of h(u, v, k) about the point (u, v) = (0, 0) in order to get an expansion for J_n(k).

To outline the ideas, let T_j(u, v, k) be the degree j Taylor polynomial of (u, v) ↦ h(u, v, k) about (0, 0), i.e., T_j(u, v, k) is a polynomial in u and v whose coefficients are functions of k and whose degree with respect to u and v is j. Then E T_j(Ū_n, V̄_n, k) depends on n and moments of U and V. By deriving an upper bound for the quantity E |h(Ū_n, V̄_n, k) − T_j(Ū_n, V̄_n, k)| we get an upper bound for the error committed in approximating J_n(k) by E T_j(Ū_n, V̄_n, k). It turns out that for odd degrees j the error is of the same order of magnitude in n as for degree j − 1. Therefore we only consider even degrees j. It also turns out that the error bounds are uniform in k ≥ 0 whenever j ≥ 2. To proceed, we need to introduce assumptions.

Assumption 1 E |X|^r < ∞ and E |Y|^s < ∞ for high enough r and s.

Assumption 2 Either (a) for some constant β > 0 almost surely |X| ≥ β, or (b) X has a density which is bounded in some neighborhood of zero.

Assumption 1 guarantees the existence of high enough moments; the values r = 20 and s = 8 are sufficient for the following proofs. E.g., if the pair (X, Y) has a normal distribution or a distribution with compact support, then moments of all orders exist and hence in this case assumption 1 would be satisfied. Without some condition such as assumption 2, J_n(0) might fail to be meaningful or finite. The following technical result is stated without proof.

Proposition 2 Let p > 0 and let 0 < E X² < ∞. If assumption 2 holds, then

E [ ( (1/n) Σ_{i=1}^n X_i² )^{-p} ] = O(1),

where the expectation on the left is finite (a) for n ≥ 1, (b) for n > 2p, provided that assumption 2 (a), respectively 2 (b), holds.
Proposition 3 Let assumptions 1 and 2 hold. Then there exist constants n₀ and M such that J_n(k) = E T₂(Ū_n, V̄_n, k) + R(n, k), where

E T₂(Ū_n, V̄_n, k) = (w*)²k²/(1 + k)² + n^{-1} [ E V²/(1 + k)² + 3 (w*)²k² E U²/(1 + k)⁴ + 4 w*k E UV/(1 + k)³ ],

|R(n, k)| ≤ M n^{-3/2} (k + 1)^{-1}, for all n ≥ n₀, k ≥ 0.

PROOF SKETCH The formula for E T₂(Ū_n, V̄_n, k) follows easily by integrating the degree two Taylor polynomial term by term. To get the upper bound for R(n, k), consider the residual h(Ū_n, V̄_n, k) − T₂(Ū_n, V̄_n, k), where we have omitted four similar terms. Using the bound of Proposition 2, the L₁ triangle inequality, and the Cauchy-Schwarz inequality, we get

|R(n, k)| = |E [h(Ū_n, V̄_n, k) − T₂(Ū_n, V̄_n, k)]| ≤ (k + 1)^{-4} { E [ ((1/n) Σ_{i=1}^n X_i²)^{-4} ] }^{1/2} { 2(k + 1)³ [E (|Ū_n|²|V̄_n|⁴)]^{1/2} + 4(w*)²k²(k + 1) [E |Ū_n|⁶]^{1/2} + ... }.

By Proposition 2, here E [ ((1/n) Σ_{i=1}^n X_i²)^{-4} ] = O(1). Next we use the following fact, cf. (Serfling, 1980, Lemma B, p. 68).

Fact 1 Let {Z_i} be i.i.d. with E [Z_i] = 0 and with E |Z_i|^ν < ∞ for some ν ≥ 2. Then E |Z̄_n|^ν = O(n^{-ν/2}).

Applying the Cauchy-Schwarz inequality and this fact, we get, e.g., that [E (|Ū_n|²|V̄_n|⁴)]^{1/2} ≤ [ (E |Ū_n|⁴)^{1/2} (E |V̄_n|⁸)^{1/2} ]^{1/2} = O(n^{-3/2}). Going through all the terms carefully, we see that the bound holds.

Proposition 4 Let assumptions 1 and 2 hold, assume that w* ≠ 0, and set

a₁ := (E V² − 2w* E [UV]) / (w*)².

If a₁ > 0, then there exists a constant n₁ such that for all n ≥ n₁ the function k ↦ E T₂(Ū_n, V̄_n, k) has a unique minimum on [0, ∞) at a point k_n* admitting the expansion k_n* = a₁ n^{-1} + O(n^{-2}); further,

J_n(0) − inf{J_n(k) : k ≥ 0} = J_n(0) − J_n(a₁ n^{-1}) = a₁² (w*)² n^{-2} + O(n^{-5/2}).

If a₁ ≤ 0, then J_n(0) − inf{J_n(k) : k ≥ 0} = O(n^{-5/2}).

PROOF SKETCH The proof is based on a perturbation expansion considering 1/n a small parameter. By the previous proposition, S_n(k) := E T₂(Ū_n, V̄_n, k) is the sum of (w*)²k²/(1 + k)² and a term whose supremum over k ≥ k₀ > −1 goes to zero as n → ∞. Here the first term has a unique minimum on (−1, ∞) at k = 0.
Differentiating S_n we get S_n'(k) = [2(w*)²k(k + 1)² + n^{-1} p₂(k)] / (k + 1)⁵, where p₂(k) is a second degree polynomial in k. The numerator polynomial has three roots, one of which converges to zero as n → ∞. A regular perturbation expansion for this root, k_n* = a₁ n^{-1} + a₂ n^{-2} + ..., yields the stated formula for a₁. This point is a minimum for all sufficiently large n; further, it is greater than zero for all sufficiently large n if and only if a₁ > 0. The estimate for J_n(0) − inf{J_n(k) : k ≥ 0} in the case a₁ > 0 follows by noticing that J_n(0) − J_n(k) = E [h(Ū_n, V̄_n, 0) − h(Ū_n, V̄_n, k)], where we now use a third degree Taylor expansion about (u, v, k) = (0, 0, 0):

h(u, v, 0) − h(u, v, k) = 2w*kv − (w*)²k² − 4w*kuv + 2(w*)²k²u + 2kv² − 4w*k²v + 2(w*)²k³ + r(u, v, k).

Figure 1: Illustration of the asymptotic approximations in the situation of equation (11). Horizontal axis k; vertical axis J_n(k) and its asymptotic approximations. Legend: markers J_n(k); solid line E T₂(Ū_n, V̄_n, k); dashed line E T₄(Ū_n, V̄_n, k).

Using the techniques of the previous proposition, it can be shown that E |r(Ū_n, V̄_n, k_n*)| = O(n^{-5/2}). Integrating the Taylor polynomial and using this estimate gives J_n(0) − J_n(a₁/n) = a₁²(w*)² n^{-2} + O(n^{-5/2}). Finally, by the mean value theorem,

J_n(0) − inf{J_n(k) : k ≥ 0} = J_n(0) − J_n(a₁/n) + (d/dk)[J_n(0) − J_n(k)]|_{k=θ} (k_n* − a₁/n) = J_n(0) − J_n(a₁/n) + O(n^{-1}) O(n^{-2}),

where θ lies between k_n* and a₁/n, and where we have used the fact that the indicated derivative evaluated at θ is of order O(n^{-1}), as can be shown with moderate effort. □

Remark In the preceding we assumed that A = E X² equals 1. If this is not the case, then the formula for a₁ has to be divided by A; again, if a₁ > 0, then k_n* = a₁ n^{-1} + O(n^{-2}).
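The cubic expansion of h(u, v, 0) − h(u, v, k) used in the proof sketch can be verified numerically: along a fixed direction (u, v, k) = t·(0.3, −0.2, 0.5) with w* = 1, the remainder r(u, v, k) should vanish like the fourth power of the scale t. The direction and function names below are arbitrary choices for the check.

```python
def h(u, v, k, ws=1.0):
    """h(u, v, k) = (v - k w*)^2 / (u + 1 + k)^2, with w* = ws."""
    return (v - k * ws) ** 2 / (u + 1 + k) ** 2

def cubic(u, v, k, ws=1.0):
    """Third-degree Taylor polynomial of h(u,v,0) - h(u,v,k) at (0,0,0)."""
    return (2 * ws * k * v - ws ** 2 * k ** 2 - 4 * ws * k * u * v
            + 2 * ws ** 2 * k ** 2 * u + 2 * k * v ** 2
            - 4 * ws * k ** 2 * v + 2 * ws ** 2 * k ** 3)

def remainder(t):
    """|r(u, v, k)| along the test direction, expected O(t**4)."""
    u, v, k = 0.3 * t, -0.2 * t, 0.5 * t
    return abs((h(u, v, 0.0) - h(u, v, k)) - cubic(u, v, k))
```

Shrinking t by a factor of 10 should shrink the remainder by roughly 10⁴, confirming that all terms through degree three are accounted for.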
If the model is correctly specified in the sense that Y = w*X + ε, where ε is independent of X and E ε = 0, then V = Xε and E [UV] = 0. Hence we have a₁ = E [ε²]/(w*)², and this is strictly positive except in the degenerate case where ε = 0 with probability one. This means that here regularization helps, provided the regularization parameter is chosen around the value a₁/n and n is large enough. See Figure 1 for an illustration in the case

(11) X ~ N(0, 1), Y = w*X + ε, ε ~ N(0, 1), w* = 1,

where ε and X are independent. J_n(k) is estimated on the basis of 1000 repetitions of the task for n = 8. In addition to E T₂(Ū_n, V̄_n, k), the function E T₄(Ū_n, V̄_n, k) is also plotted. The latter can be shown to give J_n(k) correctly up to order O(n^{-5/2}(k + 1)^{-3}). Notice that although E T₂(Ū_n, V̄_n, k) does not give that good an approximation for J_n(k), its minimizer is near the minimizer of J_n(k), and both of these minimizers lie near the point a₁/n = 0.125 as predicted by the theory. In the situation (11) it can actually be shown by lengthy calculations that the minimizer of J_n(k) is exactly a₁/n for each sample size n ≥ 1. It is possible to construct cases where a₁ < 0. For instance, take

X ~ Uniform(a, b), Y = c/X + d + Z, a = 1/2, b = (1/4)(3√5 − 1), c = −5, d = 8,

and Z ~ N(0, σ²) with Z and X independent and 0 ≤ σ < 1.1. In such a case regularization using a positive regularization parameter only makes matters worse; using a properly chosen negative regularization parameter would, however, help in this particular case. This would, however, amount to rewarding rapidly changing functions. In the case (11) regularization using a negative value for the regularization parameter would be catastrophic.
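The prediction of Proposition 4 can be checked deterministically for the case (11): there E V² = E[X²ε²] = 1, E U² = Var(X²) = 2, E UV = 0 and w* = 1, so a₁ = 1. A simple grid search over the formula for E T₂ from Proposition 3 should locate the minimizer near a₁/n (the choice n = 100, where the O(n^{-2}) correction is already small, and the grid resolution are arbitrary assumptions for the check).

```python
def ET2(k, n, ws=1.0, EV2=1.0, EU2=2.0, EUV=0.0):
    """E T2(Ubar_n, Vbar_n, k) from Proposition 3; the default moments
    are those of the well-specified Gaussian case (11)."""
    return ((ws * k) ** 2 / (1 + k) ** 2
            + (EV2 / (1 + k) ** 2
               + 3 * (ws * k) ** 2 * EU2 / (1 + k) ** 4
               + 4 * ws * k * EUV / (1 + k) ** 3) / n)

n, a1 = 100, 1.0
grid = [i * 1e-5 for i in range(5001)]          # k in [0, 0.05]
k_star = min(grid, key=lambda k: ET2(k, n))     # minimizer of E T2
improvement = ET2(0.0, n) - ET2(k_star, n)      # ~ a1^2 (w*)^2 / n^2
```

Both the minimizer (near a₁/n = 0.01) and the improvement (near a₁²n^{-2} = 10^{-4}) match the leading-order expansions of Proposition 4.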
4 DISCUSSION

We have obtained asymptotic approximations for the optimal regularization parameter in (1) and the amount of improvement (3) in the simple case of one-dimensional linear regression, when the regularization parameter is chosen independently of the training sample. It turned out that the optimal regularization parameter is, to leading order, given by a₁ n^{-1}, and the resulting improvement is of order O(n^{-2}). We have also seen that if a₁ < 0 then regularization only makes matters worse. Also (Larsen and Hansen, 1994) have obtained asymptotic results for the optimal regularization parameter in (1). They consider the case of a nonlinear network; however, they assume that the neural network model is correctly specified. The generalization of the present results to the nonlinear, misspecified case might be possible using, e.g., techniques from (Bhattacharya and Ghosh, 1978). Generalization to the case where the regularization parameter is chosen on the basis of the sample (say, by cross validation) would be desirable.

Acknowledgements

This paper was prepared while the author was visiting the Department for Statistics and Probability Theory at the Vienna University of Technology with financial support from the Academy of Finland. I thank F. Leisch for useful discussions.

References

Bhattacharya, R. N. and Ghosh, J. K. (1978). On the validity of the formal Edgeworth expansion. The Annals of Statistics, 6(2):434-451.
Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press.
Brockwell, P. J. and Davis, R. A. (1987). Time Series: Theory and Methods. Springer series in statistics. Springer-Verlag.
Hoerl, A. E. and Kennard, R. W. (1988). Ridge regression. In Kotz, S., Johnson, N. L., and Read, C. B., editors, Encyclopedia of Statistical Sciences. John Wiley & Sons, Inc.
Larsen, J. and Hansen, L. K. (1994). Generalization performance of regularized neural network models. In Vlontos, J., Whang, J.-N., and Wilson, E., editors, Proc.
of the 4th IEEE Workshop on Neural Networks for Signal Processing, pages 42-51. IEEE Press.
Ripley, B. D. (1996). Pattern Recognition and Neural Networks. Cambridge University Press.
Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. John Wiley & Sons, Inc.
Titterington, D. M. (1985). Common structure of smoothing techniques in statistics. International Statistical Review, 53:141-170.

A General Purpose Image Processing Chip: Orientation Detection

Ralph Etienne-Cummings and Donghui Cai
Department of Electrical Engineering, Southern Illinois University, Carbondale, IL 62901-6603

Abstract

An 80 x 78 pixel general purpose vision chip for spatial focal plane processing is presented. The size and configuration of the processing receptive field are programmable. The chip's architecture allows the photoreceptor cells to be small and densely packed by performing all computation on the read-out, away from the array. In addition to the raw intensity image, the chip outputs four processed images in parallel. Also presented is an application of the chip to line segment orientation detection, as found in the retinal receptive fields of toads.

1 INTRODUCTION

The front-end of the biological vision system is the retina, which is a layered structure responsible for image acquisition and pre-processing. The early processing is used to extract spatiotemporal information which helps perception and survival. This is accomplished with cells having feature-detecting receptive fields, such as the edge-detecting center-surround spatial receptive fields of the primate and cat bipolar cells [Spillmann, 1990]. In toads, the receptive fields of the retinal cells are even more specialized for survival by detecting "prey" and "predator" (from size and orientation filters) at this very early stage [Spillmann, 1990]. The receptive fields of the retinal cells perform a convolution with the incident image in parallel and continuous time.
This has inspired many engineers to develop retinomorphic vision systems which also imitate these parallel processing capabilities [Mead, 1989; Camp, 1994]. While this approach is ideal for fast early processing, it is not space efficient. That is, in realizing the receptive field within each pixel, considerable die area is required to implement the convolution kernel. In addition, should programmability be required, the complexity of each pixel increases drastically. The space constraints are eliminated if the processing is performed serially during read-out. The benefits of this approach are 1) each pixel can be as small as possible to allow high resolution imaging, 2) a single processor unit is used for the entire retina, thus reducing mismatch problems, 3) programmability can be obtained with no impact on the density of the imaging array, and 4) compact general purpose focal plane visual processing is realizable.

874 R. Etienne-Cummings and D. Cai

The space constraints are then transformed into temporal restrictions, since the scanning clock speed and response time of the processing circuits must scale with the size of the array. Dividing the array into sub-arrays which are scanned in parallel can help this problem. Clearly this approach departs from the architecture of its biological counterpart; however, this method capitalizes on the main advantage of silicon, which is its speed. This is an example of mixed signal neuromorphic engineering, where biological ideas are mapped onto silicon not using direct imitation (which has been the preferred approach in the past) but rather by realizing their essence with the best silicon architecture and computational circuits. This paper presents a general purpose vision chip for spatial focal plane processing. Its architecture allows the photoreceptor cells to be small and densely packed by performing all computation on the read-out, away from the array.
Performing computation during read-out is ideal for silicon implementation since no additional temporal overhead is required, provided that the processing circuits are fast enough. The chip uses a single convolution kernel, per parallel sub-array, and the scanning bit pattern to realize various receptive fields. This is different from other focal plane image processors, which are usually restricted to hardwired convolution kernels, such as oriented 2D Gabor filters [Camp, 1994]. In addition to the raw intensity image, the chip outputs four processed versions per sub-array. Also presented is an application of the chip to line segment orientation detection, as found in the retinal receptive fields of toads [Spillmann, 1990].

2 THE GENERAL PURPOSE IMAGE PROCESSING CHIP

2.1 System Overview

This chip has an 80 row by 78 column photocell array partitioned into four independent sub-arrays, which are scanned and output in parallel (see Figure 1). Each block is 40 rows by 39 columns, and has its own convolution kernel and output circuit. The scanning circuit includes three parts: virtual ground, control signal generator (CSG), and scanning output transformer. Each block has its own virtual ground and scanning output transformer in both the x direction (horizontal) and the y direction (vertical). The control signal generator is shared among blocks.

2.2 Hardware Implementation

The photocell is composed of a phototransistor, a photocurrent amplifier, and output control. The phototransistor performs light transduction, while the amplifier magnifies the photocurrent by three orders of magnitude. The output control provides multiple copies of the amplified photocurrent, which is subsequently used for focal plane image processing. The phototransistor is a parasitic PNP transistor in an N-well CMOS process. The current amplifier uses a pair of diode-connected pmosfets to obtain a logarithmic relationship between light intensity and output current.
This circuit also amplifies the photocurrent from nanoamperes to microamperes. The photocell sends three copies of the output currents into three independent buses. The connections from the photocell to the buses are controlled by pass transistors, as shown in Fig. 2. The three current outputs allow the image to be processed using multiple receptive field organizations (convolution kernels), while the raw image is also output. The row (column) buses provide currents for extracting horizontally (vertically) oriented image features, while the original bus provides the logarithmically compressed intensity image. The scanning circuit addresses the photocell array by selecting groups of cells at one time. Since the outputs of the cells are currents, virtual ground circuits are used on each bus to mask the >1 pF capacitance of the buses.

Figure 1: Block diagram of the chip.

The CSG, implemented with shift registers, produces signals which select photocells and control the scanning output transformer. The scanning output transformer converts currents from all row buses into I_perx and I_cenx, and converts currents from all column buses into I_pery and I_ceny. This transformation is required to implement the various convolution kernels discussed later. The output transformer circuits are controlled by a central CSG and a peripheral CSG.
These two generators have identical structures but different initial values. Each consists of an n-bit shift register in the x direction (horizontal) and an m-bit shift register in the y direction (vertical). A feedback circuit restores the scanning pattern into the x shift register after each row scan is completed. This is repeated until all the rows in each block are scanned. The control signals from the peripheral and central CSGs select all the cells covered by a 2D convolution mask (receptive field). The selected cells send Ixy to the original bus, Ixp to the row bus, and Iyp to the column bus. The function of the scanning output transformer is to identify which rows (columns) are considered as the center (Icenx or Iceny) or periphery (Iperx or Ipery) of the convolution kernel. Figure 3 shows how a 3x3 convolution kernel can be constructed. Figure 4 shows how the output transformer works for a 3x3 mask. Only the row bus transformation is shown in this example, but the same mechanism applies to the column bus as well. The photocell array is m rows by n columns, and the mask size is 3x3. The XC (x center) and YC (y center) signals come from the central CSG, while XP (x peripheral) and YP (y peripheral) come from the peripheral CSG. After loading the CSG, the initial values of XP and YP are both 00011...1. The initial values of XC and YC are both 10111...1. This identifies the central cell as location (2, 2). The currents from the central row (column) are summed to form Icenx and Iceny, while all the peripheral cells are summed to form Iperx and Ipery. This is achieved by activating the switches labeled XC, YC, XP and YP in figure 2.

R. Etienne-Cummings and D. Cai

Figure 2: Connections between a photocell and the current buses.

Figure 3: Constructing a 3x3 receptive field.

XPi (YPi) {i=1, 2, ..., n} controls whether the output current of one cell goes to the row (column) bus. Since XPi (YPi) is connected to the gate of a PMOS switch, a 0 in XPi (YPi) turns it on. YCi (XCi) {i=1, 2, ..., m} controls whether a row (column) bus connects to the Icenx bus in the same way. On the other hand, the connection from a row (column) bus to the Iperx bus is controlled by an NMOS and a PMOS switch. The connection is made if and only if YCi (XCi), an NMOS switch, is 1 and YPi (XPi), a PMOS switch, is 0. The intensity image is obtained directly when XCi and YCi are both 0. Hence, Iori = I(2,2), Icenx = Irow2 = I(2,1) + I(2,2) + I(2,3), and Iperx = Irow1 + Irow3 = I(1,1) + I(1,2) + I(1,3) + I(3,1) + I(3,2) + I(3,3). The convolution kernel can be programmed to perform many image processing tasks by loading the scanning circuit with the appropriate bit pattern. This is illustrated by configuring the chip to perform image smoothing and edge extraction (x edge, y edge, and 2D edge), which are all computed simultaneously on read-out. The kernel receives five inputs (Iori, Icenx, Iperx, Iceny, Ipery) from the scanning circuit and produces five outputs (Iori, Iedgex, Iedgey, Ismooth, Iedge2D). The kernel (receptive field) size is programmable from 3x3, 5x5, 7x7, 9x9 and 11x11. Figure 5 shows the 3x3 masks for this processing; repeating the above steps for the 5x5, 7x7, 9x9, and 11x11 masks gives similar results.

Figure 4: Scanning output transformer for an m row by n column photocell array.

Figure 5: 3x3 convolution masks for various image processing:

  (a) smooth      (b) edge_x      (c) edge_y      (d) edge_2D
   1  1  1        -1 -1 -1        -1  2 -1         0 -1  0
   1  1  1         2  2  2        -1  2 -1        -1  4 -1
   1  1  1        -1 -1 -1        -1  2 -1         0 -1  0

In general, the convolution results under different mask sizes can be expressed as follows:

Ismooth = Icenx + Iperx,
Iedgex = K1d * Icenx - Iperx, Iedgey = K1d * Iceny - Ipery, Iedge2D = K2d * Iori - Icenx - Iceny,

where K1d and K2d are the programmable coefficients (from 2-6 and 6-14, respectively) for 1D edge extraction and 2D edge extraction. By varying the locations of the 0's in the scanning circuits, different types of receptive fields (convolution kernels) can be realized.

2.3 Results

The chip contains 65K transistors in a footprint of 4.6 mm x 4.7 mm. There are 80 x 78 photocells in the chip, each of which is 45.6 µm x 45 µm with a fill factor of 15%. The convolution kernel occupies 690.6 µm x 102.6 µm. The power consumption of the chip for a 3x3 (11x11) receptive field, indoor light, and 5V power supply is <2 mW (8 mW). To capitalize on the programmability of this chip, an A/D card in a Pentium 133 MHz PC is used to load the scanning circuit and to collect data. The card, which has a maximum analog throughput of 100 kHz, limits the frame rate of the chip to 12 frames per second. At this rate, five processed versions of the image are collected and displayed. The scanning and processing circuits can operate at 10 MHz (6250 fps); however, the phototransistors have much slower dynamics. Temporal smoothing (smear) can be observed on the scope when the frame rate exceeds 100 fps. The chip displays a logarithmic relationship between light intensity and output current (unprocessed image) from 0.1 lux (100 nA) to 6000 lux (10 µA). The fixed pattern noise, defined as standard-deviation/mean, decreases abruptly from 25% in the dark to 2% at room light (800 lux). This behavior is expected since the variation of individual pixel currents is large compared to the mean output when the mean is small. The logarithmic response of the photocell results in high sensitivity at low light, thus increasing the mean value sharply. Little variation is observed between chips. The contrast sensitivity of the edge detection masks is also measured for the 3x3 and 5x5 receptive fields.
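As a sketch of how these read-out sums compose into the figure-5 masks, one might write out the equivalent 2D kernels directly. The values K1d = 2 and K2d = 6 are the 3x3 endpoints of the quoted coefficient ranges; the decomposition into center/periphery sums follows the equations above.

```python
import numpy as np

def masks_3x3(k1d=2, k2d=6):
    """Equivalent 3x3 masks implied by the read-out sums: Icenx/Iceny sum the
    centre row/column, Iperx/Ipery the peripheral rows/columns, and Iori is
    the raw centre pixel alone."""
    cen_x = np.zeros((3, 3)); cen_x[1, :] = 1      # centre row
    per_x = np.ones((3, 3)) - cen_x                # peripheral rows
    cen_y = np.zeros((3, 3)); cen_y[:, 1] = 1      # centre column
    per_y = np.ones((3, 3)) - cen_y                # peripheral columns
    ori = np.zeros((3, 3)); ori[1, 1] = 1          # raw centre pixel
    smooth = cen_x + per_x                         # Ismooth = Icenx + Iperx
    edge_x = k1d * cen_x - per_x                   # Iedgex = K1d*Icenx - Iperx
    edge_y = k1d * cen_y - per_y                   # Iedgey = K1d*Iceny - Ipery
    edge_2d = k2d * ori - cen_x - cen_y            # Iedge2D = K2d*Iori - Icenx - Iceny
    return smooth, edge_x, edge_y, edge_2d
```

Running this reproduces the four masks of figure 5: an all-ones smoothing mask, the horizontal and vertical 1D Laplacians, and the 2D Laplacian with centre weight 4.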
Here contrast is defined as (Imax - Imin)/(Imax + Imin), and sensitivity is given as a percentage of the maximum output. The measurements are performed for normal room and bright lighting conditions. Since the two conditions correspond to the saturated part of the logarithmic transfer function of the photocells, a linear relationship between output response and contrast is expected. Figure 6 shows the contrast sensitivity plot. Figure 7 shows examples of the chip's outputs. The top two images are the raw and smoothed (5x5) images. The bottom two are the 1D edge_x (left) and 2D edge (right) images. The pixels with positive values have been thresholded to white. The vertical black line in the image is not visible in the edge_x image, but can be clearly seen in the edge_2D image.

Figure 6: Contrast sensitivity function of the x edge detection mask (3x3 and 5x5 masks, normal and bright lighting).

Figure 7: (Clockwise) Raw image, 5x5 smoothed image, edge_2D and edge_x.

3 APPLICATION: ORIENTATION DETECTION

3.1 Algorithm Overview

This vision chip can be elegantly used to measure the orientation of line segments which fall across the receptive field of each pixel. The outputs of the 1D Laplacian operators, edge_x and edge_y, shown in figure 5, can be used to determine the orientation of edge segments. Consider a continuous line through the origin, represented by a delta function in 2D space as δ(y - x tanθ). If the origin is the center of the receptive field, the response of the edge_x kernel can be computed by evaluating the convolution in equation (1), where W(x) = u(x+m) - u(x-m) is the x window over which smoothing is performed, 2m+1 is the width of the window, and 2n+1 is the number of coefficients realizing the discrete Laplacian operator. In our case, n = m.
Evaluating this equation and substituting the origin for the pixel location yields equation (2), which indicates that the output of the 1D edge_x (edge_y) detectors has a discretized linear relationship to orientation from 0° to 45° (45° to 90°). At 0°, the second term in equation (2) is zero. As θ increases, more terms are subtracted until all terms are subtracted at 45°. Above 45° (below 45°), the edge_x (edge_y) detectors output zero since equal numbers of positive and negative coefficients are summed. Provided that contrast can be normalized, the output of the detectors can be used to extract the orientation of the line. Clearly these responses are even about the x- and y-axis, respectively. Hence, a second pair of edge detectors, oriented at 45°, is required to uniquely extract the angle of the line segment.

Figure 8: Measured orientation transfer function of the edge_x detectors (25-370 lux).

O_edge_x(x, y) = [2n W(x ± m) δ(y) - Σ_{i=1..n} W(x ± m) δ(y ± i)] * δ(y - x tanθ)   (1)

O_edge_x(0, 0) = 2n - Σ_{i=1..n} [W(i/tanθ) + W(-i/tanθ)]   (2)

3.2 Results

Figure 8 shows the measured output of the edge_x detectors for various lighting conditions as a line is rotated. The average positive outputs are plotted. As expected, the output is maximum for bright ambients when the line is horizontal. As the line is rotated, the output current decreases linearly and levels off at approximately 45°. On the other hand, the edge_y output (not shown) begins its linear increase at 45° and maximizes at 90°. After normalizing for brightness, the four curves are very similar (not shown).
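Equation (2) can be checked numerically. The sketch below evaluates it under the stated assumptions (n = m, with W equal to 1 inside the ±n window and 0 outside): the response is 2n minus the number of peripheral rows ±i that the line y = x tanθ crosses inside the window.

```python
import math

def edge_x_response(theta_deg, n=5):
    """Evaluate equation (2) for a line at angle theta through the centre of a
    (2n+1)x(2n+1) receptive field.  W(x) = 1 for |x| <= n, else 0."""
    t = math.tan(math.radians(theta_deg))
    if t == 0.0:
        return 2 * n                      # horizontal line: full response
    eps = 1e-9                            # tolerance at the 45-degree boundary
    crossed = sum(1 for i in range(1, n + 1) for s in (1, -1)
                  if abs(s * i / t) <= n + eps)
    return 2 * n - crossed
```

The output falls in discrete, roughly linear steps from 2n at 0° to 0 at 45°, and stays at zero for steeper lines, consistent with the measured curves of figure 8.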
To further demonstrate orientation detection with this chip, a character consisting of a circle and some straight lines is presented. The intensity image of the character is shown in figure 9(a). Figures 9(b) and 9(c) show the outputs of the edge_x and edge_y detectors, respectively. Since a 7x7 receptive field is used in this experiment, some outer pixels of each block are lost. The orientation selectivity of the 1D edge detectors is clearly visible in the figures, where edge_x highlights horizontal edges and edge_y vertical edges. Figure 9(d) shows the reported angles. A program is written which takes the two 1D edge images, finds the location of the edges from the edge_2D image and the intensity at the edges (positive lobe), and then computes the angle of each edge segment. In figure 9(d), a black background is chosen for locations where no edges are detected, white is used for 0° and gray for 90°.

Figure 9: Orientation detection using 1D Laplacian operators: (a) intensity image, (b) edge_x, (c) edge_y, (d) reported angles.

4 CONCLUSION

An 80x78 pixel general purpose vision chip for spatial focal plane processing has been presented. The size and configuration of the processing receptive field are programmable. In addition to the raw intensity image, the chip outputs four processed images in parallel. The chip has been successfully used for compact line segment orientation detection, which can be used in character recognition. The programmability and relatively low power consumption make it ideal for many visual processing tasks.

References

Camp, W. and J. Van der Spiegel, "A Silicon VLSI Optical Sensor for Pattern Recognition," Sensors and Actuators A, Vol. 43, No. 1-3, pp. 188-195, 1994.

Mead, C. and M. Ismail (Eds.), Analog VLSI Implementation of Neural Systems, Kluwer Academic Press, Norwell, MA, 1989.

Spillmann, L. and J. Werner (Eds.), Visual Perception: The Neurophysiological Foundations, Academic Press, San Diego, CA, 1990.
Computing with Action Potentials

John J. Hopfield*, Carlos D. Brody†, Sam Roweis†

Abstract

Most computational engineering based loosely on biology uses continuous variables to represent neural activity. Yet most neurons communicate with action potentials. The engineering view is equivalent to using a rate code for representing information and for computing. An increasing number of examples are being discovered in which biology may not be using rate codes. Information can be represented using the timing of action potentials, and computed with efficiently in this representation. The "analog match" problem of odour identification is a simple problem which can be efficiently solved using action potential timing and an underlying rhythm. By using adapting units to effect a fundamental change of representation of a problem, we map the recognition of words (having uniform time-warp) in connected speech into the same analog match problem. We describe the architecture and preliminary results of such a recognition system. Using the fast events of biology in conjunction with an underlying rhythm is one way to overcome the limits of an event-driven view of computation. When the intrinsic hardware is much faster than the time scale of change of inputs, this approach can greatly increase the effective computation per unit time on a given quantity of hardware.

1 Spike timing

Most neurons communicate using action potentials: stereotyped pulses of activity that are propagated along axons without change of shape over long distances by active regenerative processes. They provide a pulse-coded way of sending information. Individual action potentials last about 2 ms. Typical active nerve cells generate 5-100 action potentials/sec. Most biologically inspired engineering of neural networks represents the activity of a nerve cell by a continuous variable which can be interpreted as the short-time average rate of generating action potentials.
Most traditional discussions by neurobiologists concerning how information is represented and processed in the brain have similarly relied on using "short term mean firing rate" as the carrier of information and the basis for computation. But this is often an ineffective way to compute and represent information in neurobiology.

*Dept. of Molecular Biology, Princeton University. jhopfield@watson.princeton.edu
†Computation & Neural Systems, California Institute of Technology.

To define "short term mean firing rate" with reasonable accuracy, it is necessary either to wait for several action potentials to arrive from a single neuron, or to average over many roughly equivalent cells. One of these necessitates slow processing; the other requires redundant "wetware". Since action potentials are short events with sharp rise times, action potential timing is another way that information can be represented and computed with ([Hopfield, 1995]). Action potential timing seems to be the basis for some neural computations, such as the determination of a sharp response time to an ultrasonic pulse generated by the moustache bat. In this system, the bat generates a 10 ms pulse during which the frequency changes monotonically with time (a "chirp"). In the cochlea and cochlear nucleus, cells which are responsive to different frequencies will be sequentially driven, each producing zero or one action potentials during the time when the frequency is in their responsive band. These action potentials converge onto a target cell. However, while the times of initiation of the action potentials from the different frequency bands are different, the lengths and propagation speeds of the various axons have been coordinated so that all the action potentials arrive at the target cell at the same time, thus recognizing the "chirped" pulse as a whole, while discriminating against random sounds of the same overall duration.
Taking this hint from biology, we next investigate the use of action potential timing to represent and compute with information in one of the fundamental computational problems relevant to olfaction, noting why the elementary "neural net" engineering solution is poor, and showing why computing with action potentials lacks the deficiencies of the conventional elementary solution.

2 Analog match

The simplest computational problem of odors is merely to identify a known odor when a single odor dominates the olfactory scene. Most natural odors consist of mixtures of several molecular species. At some particular strength, a complex odor b can be described by the concentrations N_i^b of its constituent molecular species i. If the stimulus intensity changes, each component increases (or decreases) by the same multiplicative factor. It is convenient to describe the stimulus as a product of two factors, an intensity λ and normalized components n_i^b:

λ = Σ_j N_j^b,   n_i^b = N_i^b / λ,   or   N_i^b = λ n_i^b   (1)

The n_i^b are normalized, or relative, concentrations of the different molecules, and λ describes the overall odor intensity. Ideally, a given odor quality is described by the pattern of n_i^b, which does not change when the odor intensity λ changes. When a stimulus s described by a set {N_j^s} is presented, an ideal odor quality detector answers "yes" to the question "is odor b present?" if and only if for some value of λ:

N_j^s ≈ λ n_j^b   for all j   (2)

This general computation has been called analog match.¹ The elementary "neural net" way to solve analog match and recognize a single odor independent of intensity would be to use a single "grandmother unit" of the following type.

¹The analog match problem of olfaction is actually viewed through olfactory receptor cells. Studies of vertebrate sensory cells have shown that each molecular species stimulates many different sensory cells, and each cell is excited by many different molecular species.
The pattern of relative excitation across the population of sensory cell classes determines the odor quality in the generalist olfactory system. There are about 1000 broadly responsive cell types; thus, the olfactory systems of higher animals apparently solve an analog match problem of the type described by (2), except that the indices refer to cell types, and the actual dimension is no more than 1000.

Call the unknown odor vector I, and the weight vector W. The input to the unit will then be I·W. If W = n^b/||n^b|| and I is pre-normalized by dividing by its Euclidean magnitude ||I||, recognition can be identified by I·W > 0.95, or whatever threshold describes the degree of precision in identification which the task requires. This solution has four major weaknesses.

1. Euclidean normalization is used; this is not a trivial calculation for real neural hardware.

2. The size of input components I_k and their importance are confounded. If a weak component has particular importance, or a strong one is not reliable, there is no way to represent this. W describes only the size of the target odor components.

3. There is no natural composition if the problem is to be broken into a hierarchy by breaking the inputs into several parts, solving independently, and feeding these results on to a higher level unit for a final recognition. This is best seen by analogy to vision. If I recognize in a picture grandmother's nose at one scale, her mouth at another, and her right eye at a third scale, then it is assuredly not grandmother. Separate normalization is a disaster for creating hierarchies.

4. A substantial number of inputs may be missing or giving grossly wrong information. The "dot-product-and-threshold" solution cannot contend with this problem.
For example, in olfaction, two of the common sources of noise are the adaptation of a subset of sensors due to previous strong odors, and receptors stuck "on" due to the retention of strongly bound molecules from previous odors. All four problems are removed when the information is encoded in, and computed with, an action potential representation, as illustrated below. The three channels of analog input I_a, I_b, I_c are illustrated on the left. They are converted to a spike timing representation by the position of action potentials with respect to a fiducial time T. The interval between T and the time of an action potential in channel j is equal to log I_j. Each channel is connected to an output unit through a delay line of length τ_j = log n_j^b, where n^b is the target vector to be identified. When the analog match criterion is satisfied, the pulses on all three channels will arrive at the target unit at the same time, driving it strongly. If all inputs are scaled by α, then the times of the action potentials will all be changed by log α. The three action potentials will arrive at the recognition unit simultaneously, but at a time shifted by log α. Thus a pattern can be recognized (or not) on the basis of its relative components. Scale information is retained in the time at which the recognition unit is driven. The system clearly "composes", and difficulty (3) is surmounted. No normalization is required, eliminating difficulty (1). Each pathway has two parameters describing it: a delay (which contains the information about the pattern to be recognized) and a synaptic strength (which describes the weight of the action potential at the recognition unit). Scale and importance are separately represented. The central computational motif is very similar to that used in bat sonar, using relative timing to represent information and time delays to represent target patterns.
(Figure: spike-timing analog match. The recognition unit sums EPSPs and then thresholds; the delays set the prototype pattern and the weights set the relative feature importance.)

This system also tolerates errors due to missing or grossly inaccurate information. The figure below illustrates this fact for the case of three inputs, and contrasts the receptive fields of a system computing with action potentials with those of a conventional grandmother cell. (The only relevant variables are the projections of the input vector on the surface of the unit sphere, as illustrated.) When the thresholds are set high, both schemes recognize a small, roughly circular region around the target pattern (here chosen as 111). Lowering the recognition threshold in the action-potential based scheme results in a star-shaped region being recognized; this region can be characterized as "recognize if any two components are in the correct ratio, independent of the size of the third component." Pattern 110 is thus recognized as being similar to 111 while still rejecting most of the space as not resembling the target. In contrast, to recognize 110 with the conventional unit requires such threshold lowering that almost any vector would be recognized.

(Figure: receptive fields on the unit sphere for spike timing, thresholds 0.7, 0.6, 0.4, versus normalize-and-dot-product, thresholds 0.99, 0.95, 0.90.)

This method of representation and computation using action potential timing requires a fiducial time available to all neurons participating in stimulus encoding. Fiducial times might be externally generated by salient events, as they are in the case of moustache bat sonar. Or they could be internally generated, sporadically or periodically. In the case of the olfactory system, the first processing area of all animals has an oscillatory behavior.
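A minimal numeric sketch of the timing scheme: the text writes the delay as log n_j^b; here we use -log n_j (the same alignment property, up to sign convention and an additive constant that can keep all delays positive) so the algebra is explicit. The example pattern is invented.

```python
import math

def arrival_times(I, n_target):
    """Channel j fires at log I_j after the fiducial time; its delay line
    adds -log n_j.  Arrivals coincide exactly when I is proportional to
    the target pattern n_target."""
    return [math.log(i) - math.log(n) for i, n in zip(I, n_target)]

def matches(I, n_target, tol=1e-6):
    t = arrival_times(I, n_target)
    return max(t) - min(t) < tol          # coincident arrivals => match

target = [0.5, 0.3, 0.2]                   # normalised odour pattern n_j
```

Scaling every input by α shifts every arrival by the same log α, so the pattern still matches while α can be read off from the common arrival time, exactly as described in the text.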
A large piece of the biophysics of neurons can be represented by the idea that neurons are leaky integrators, and that when their internal potential is pushed above a critical value, they produce an action potential and their internal potential is reset a fixed distance below threshold. When a sub-threshold input having a frequency f is combined with a steady analog current I, the system generates action potentials at frequency f, but whose phase with respect to the underlying oscillation is a monotone function of I. Thus the system encodes I into a phase (or time) of an action potential with respect to the underlying rhythm. Interestingly, in mammals, the second stage of the olfactory system, the prepiriform cortex, has slow axons propagating signals across it. The propagation time delays are comparable to 1/f. The system has the capability of encoding and analyzing information in action potential timing.

3 Time warp and speech

Recognizing syllables or words independent of a uniform stretch ("uniform time warp") can in principle be cast as an analog match problem and transformed into neural variables [Hopfield, 1996]. We next describe this approach in relationship to a previous "neural network" way of recognizing words in connected speech [Hopfield and Tank, 1987, Unnikrishnan et al., 1991, Unnikrishnan et al., 1992] (UHT for short). The block diagram below shows the UHT neural network for recognizing a small vocabulary of words in connected speech. The speech signal is passed through a bank of band-pass filters, and an elementary neural feature detector then examines whether each frequency is a local maximum of the short-term power spectrum. If so, it propagates a "1" down a delay line from that feature detector, thus converting the pattern of features in time into a pattern in space. The recognition unit for a particular word is then connected to these delay lines by a pattern of weights which are trained on a large database.
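The leaky-integrator phase encoding described at the start of this section can be sketched in closed form. The assumption here, purely for illustration, is that at steady state a spike occurs at the phase φ where the combined drive I + A·sin(2πφ) first reaches an effective threshold; the constants A and theta_eff are invented, not from the paper.

```python
import math

def encode_phase(I, A=30.0, theta_eff=50.0):
    """Phase (fraction of a cycle) at which the drive I + A*sin(2*pi*phi)
    first reaches the effective threshold theta_eff.  Returns None when the
    steady current is too weak for the oscillation to lift it over threshold."""
    x = (theta_eff - I) / A
    if x > 1.0:
        return None                       # sub-threshold: no spike this cycle
    return math.asin(max(x, -1.0)) / (2 * math.pi)
```

Larger steady currents fire earlier in the cycle, so the analog value I is carried by spike phase relative to the rhythm, which is the monotone encoding the text describes.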
(Figure: the UHT network and the time-warp network, each built on banks of time delays.)

The conceptual strength of this circuit is that it requires no indication of the boundaries between words. Indeed, there is no such concept in the circuit. The conceptual weakness of this "neural network" is that the recognition process for a particular word is equivalent to sliding a rigid template across the feature pattern. Unfortunately, even a single speaker has great variation in the duration of a given word under different circumstances, as illustrated in the two spectrograms below. Clearly no single template will fit both of these utterances of "one" very well. This general problem is known as time-warp. A time-warp invariant recognizer would have considerable advantage.

(Figure: two instances of "one": smoothed spectrograms, thresholded versions, and time since feature.)

The UHT approach represents a sequence by the presence of a signal on feature signal lines A, B, C, as shown on the left of the figure below. Suppose the end of the word occurs at some particular time as indicated. Then the feature starts and stops can be described as an analog vector of times, whose components are shown by the arrows as indicated. In this representation, a word which is spoken more slowly simply has all its vector components multiplied by a common factor. The problem of recognizing words within a uniform time warp is thus isomorphic with the analog match problem, and can be readily solved by using action potential timing and an underlying rhythm, as described above. In our present modeling, the rhythm has a frequency of 50 Hz, significantly faster than the rate at which new features appear in speech. This frequency corresponds to the clock rate at which speech features are effectively "sampled". In the UHT circuit this rate was set by the response timescale of the recognition units.
But where each template in the UHT circuit attempted only a single match with the feature vector per sample, this circuit allows the attempted match of many possible time-warps with the feature vector per sample. (The range of time-warps allowed is determined by the oscillation frequency and the temporal resolution of the spike timing system.)

(Figure: feature signal lines, e.g. A: 000111111000001, B: 00111100000000, C: 00000001111000, with arrows marking start and stop features.)

The block diagram of the neural circuit necessary to recognize words in connected speech with uniform time warp is sketched above. It looks superficially similar to the UHT circuit beside it, except for the insertion of a ramp generator and a phase encoder between the feature detectors and the delay system. Recognizing a feature activates a ramp generator whose output decays. This becomes the input to a "neuron" which has an additional oscillatory input at frequency f. If the ramp decay and oscillation shapes are properly matched, the logarithm of the time since the occurrence of a feature is encoded in action potential timing as above. Following this encoding system there is a set of tapped delay lines of the same style which would have been necessary to solve the olfactory decoding problem. The total amount of hardware is similar to the UHT approach because the connections and delay lines dominate the resource requirements. The operation of the present circuit is, however, entirely different. What the present circuit does is to "remember" recent features by using ramp generators, encode the logarithms of times since features into action potential timing, and recognize the pattern with a time-delay circuit. The time delays in the present circuit have an entirely different meaning from those of the UHT circuit, since they are dimensionally not physical time, but instead are a representation of the logarithm of feature times.
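The change of representation that the ramp-and-phase encoder performs (the logarithm of the time since each feature) turns a uniform time warp into a common shift, i.e. exactly the analog match problem again. A minimal numeric sketch, with invented feature times and warp factor:

```python
import math

def log_times(feature_times, t_end):
    """Times since each feature, measured back from a candidate end time
    t_end, in log scale."""
    return [math.log(t_end - t) for t in feature_times]

word = [0.00, 0.08, 0.15, 0.22]            # feature onsets (s) of a template
fast = [0.7 * t for t in word]             # the same word, uniformly 30% faster

a = log_times(word, t_end=0.30)
b = log_times(fast, t_end=0.7 * 0.30)
shifts = [x - y for x, y in zip(a, b)]     # every component shifts by -log 0.7
```

Since t_end - 0.7·t scales by the same factor 0.7 as each interval, every log-time shifts by the identical amount -log 0.7, so the warped word still matches the template delay pattern.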
The time delays are only on the scale of 1/f rather than the duration of a word. There are simple biological implementations of these ideas. For example, when a neuron responds, as many do, to a step in its input by generating a train of action potentials with gradually falling firing frequency (adaptation), the temporal spacing between the action potentials is an implicit representation of the time since the "step" occurred (see [Hopfield, 1996]). For our initial engineering investigations, we used very simple features. The power within each frequency band is merely thresholded. An upward crossing of that threshold represents a "start" feature for that band, and a downward crossing an "end" feature. A pattern of such features is identified above beside the spectrograms. Although the patterns of feature vectors for the two examples of "one" do not match well because of time warp, when the logarithms of the patterns are taken, the difference between the two patterns is chiefly a shift, i.e. the dominant difference between the patterns is merely uniform time warp. To recognize the spoken digit "one", for example, the appropriate delay for each channel was chosen so as to minimize the variance of the post-delay spike times (thus aligning the spikes produced by all features), averaged over the different exemplars which contained that feature. All channels with a feature present were given a unity weight connection at that delay value; inactive channels were given weight zero. The figure below shows, on the left, the spike input to the recognition unit (top) and the sum of the EPSPs caused by these inputs (bottom). The examples of "one" produced maximum outputs in different cycles of the oscillation, corresponding to the actual "end times" at which the words should be viewed as recognized. Only the maximum cycle for each utterance is shown here.
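One simple reading of the delay-selection rule just described is to make each channel's mean post-delay spike time equal a common target, which aligns the spikes across exemplars; the spike times and the 1 ms resolution below are invented for illustration.

```python
def train_delays(spike_times, resolution=1e-3):
    """Choose a delay per channel so that the mean post-delay spike time
    (over exemplars) is the same for every channel, quantised to the
    delay-line resolution."""
    means = [sum(ts) / len(ts) for ts in spike_times]
    target = max(means)                    # keeps all delays non-negative
    return [round((target - m) / resolution) * resolution for m in means]

spikes = [[0.010, 0.012, 0.011],           # channel A, three exemplars (s)
          [0.018, 0.020, 0.019],           # channel B
          [0.005, 0.006, 0.004]]           # channel C
delays = train_delays(spikes)
```

After applying the delays, spikes from all channels land at (nearly) the same time for each exemplar, so a unity-weight coincidence unit of the kind described sees a sharp summed EPSP.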
Within their maximum cycle, different examples of the utterances produced maximal outputs at different phases of the cycle, corresponding to the fact that the different utterances were recognized as having different time warp factors. The panels on the right show the result of playing spoken "four"s into the same recognition unit.

(Figure: spike inputs and summed EPSPs of the output unit versus time within a cycle, for three "one"s (left) and several "four"s (right); the threshold is marked on the output unit potential.)

There is no difficulty in distinguishing "one"s from other digits. When, however, the possibility of adjusting the time-warp is turned off, resulting in a "rigid" template, it was not possible to discriminate between "one" and other digits. (Disabling time-warp effectively forces recognition to take place at the same "time" during each oscillation. Imagine drawing a vertical line in the figure and notice that it cannot pass through all the peaks of output unit activities.) We have described the beginning of a research project to use action potentials and timing to solve a real speech problem in a "neural" fashion. Very unsophisticated features were used, and no competitive learning was employed in setting the connection weights. Even so, the system appears to function in a word-spotting mode, and displays a facility for matching patterns with time warp. Its intrinsic design makes it insensitive to burst noise and to frequency-band noise. How is computation being done? After features are detected, rates of change are slow, and little additional information is accumulated during, say, a 50 ms interval. If we let "time be its own representation", as Carver Mead used to say, we let the information source be the effective clock, and the effective clock rate is only about 20 Hz.
Instead, by adding a rhythm, we can interleave many calculations (in this particular case, about the possibility of different time warps) while the basic inputs are changing very little. Using an oscillation frequency of 50 Hz and a resolving time of 1 ms in the speech example we describe increases the effective clock rate by more than a factor of 10 compared to the effective clock rate of the UHT computation. We believe that "time as its own representation" is a loser for processing information when the computation desired is complex but the data is slowly changing. No computer scientist would use a computer with a 24 Hz clock to analyze a movie just because the movie is viewed at 24 frames a second. Biology will surely have found its way out of this "paced by the environment" dilemma. Finally, because problems are easy or hard according to how algorithms fit on hardware and according to the representation of information, the differences in operation between the system we have described and conventional ANNs suggest the utility of thinking about other problems in a timing representation. Acknowledgements The authors thank Sanjoy Mahajan and Erik Winfree for comments and help with preparation of the manuscript. This work was supported in part by the Center for Neuromorphic Systems Engineering as a part of the National Science Foundation Engineering Research Center Program under grant EEC-9402726. Roweis is supported by the Natural Sciences and Engineering Research Council of Canada under an NSERC 1967 Award. References [Hopfield, 1995] Hopfield, J. (1995). Pattern recognition computation using action potential timing for stimulus representation. Nature, 376:33-36. [Hopfield, 1996] Hopfield, J. (1996). Transforming neural computations and representing time. Proceedings of the National Academy of Sciences, 93:15440-15444. [Hopfield and Tank, 1987] Hopfield, J. and Tank, D. (1987). Neural computation by concentrating information in time. 
Proceedings of the National Academy of Sciences, 84:1896-1900. [Unnikrishnan et al., 1991] Unnikrishnan, K., Hopfield, J., and Tank, D. (1991). Connected-digit speaker-dependent speech recognition using a neural network with time-delayed connections. IEEE Transactions on Signal Processing, 39:698-713. [Unnikrishnan et al., 1992] Unnikrishnan, K., Hopfield, J., and Tank, D. (1992). Speaker-independent digit recognition using a neural network with time-delayed connections. Neural Computation, 4:108-119.
1997
New Approximations of Differential Entropy for Independent Component Analysis and Projection Pursuit Aapo Hyvärinen Helsinki University of Technology Laboratory of Computer and Information Science P.O. Box 2200, FIN-02015 HUT, Finland Email: aapo.hyvarinen@hut.fi Abstract We derive a first-order approximation of the density of maximum entropy for a continuous 1-D random variable, given a number of simple constraints. This results in a density expansion which is somewhat similar to the classical polynomial density expansions by Gram-Charlier and Edgeworth. Using this approximation of density, an approximation of 1-D differential entropy is derived. The approximation of entropy is both more exact and more robust against outliers than the classical approximation based on the polynomial density expansions, without being computationally more expensive. The approximation has applications, for example, in independent component analysis and projection pursuit. 1 Introduction The basic information-theoretic quantity for continuous one-dimensional random variables is differential entropy. The differential entropy H of a scalar random variable X with density f(x) is defined as H(X) = -∫ f(x) log f(x) dx. (1) The 1-D differential entropy, henceforth called simply entropy, has important applications in such areas as independent component analysis [2, 10] and projection pursuit [5, 6]. Indeed, both of these methods can be considered as a search for directions in which entropy is minimal, for constant variance. Unfortunately, the estimation of entropy is quite difficult in practice. Using definition (1) requires estimation of the density of X, which is recognized to be both theoretically difficult and computationally demanding. Simpler approximations of entropy have been proposed both in the context of projection pursuit [9] and independent component analysis [1, 2]. 
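As a numerical sanity check on definition (1) (an illustration, not part of the paper), the entropy of the standardized Gaussian computed on a grid should match its closed form (1/2)(1 + log 2π):

```python
import numpy as np

def differential_entropy(f, xs):
    """H(X) = -integral of f(x) log f(x) dx, evaluated on a grid (trapezoid rule)."""
    fx = np.clip(f(xs), 1e-300, None)  # guard against log(0) far in the tails
    return -np.trapz(fx * np.log(fx), xs)

# Standardized Gaussian density
phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
xs = np.linspace(-10, 10, 20001)

H = differential_entropy(phi, xs)
H_exact = 0.5 * (1 + np.log(2 * np.pi))  # closed-form Gaussian entropy, ~1.4189
```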
These approximations are usually based on approximating the density f(x) using the polynomial expansions of Gram-Charlier or Edgeworth [11]. This construction leads to the use of higher-order cumulants, like kurtosis. However, such cumulant-based methods often provide a rather poor approximation of entropy. There are two main reasons for this. Firstly, finite-sample estimators of higher-order cumulants are highly sensitive to outliers: their values may depend on only a few, possibly erroneous, observations with large values [6]. This means that outliers may completely determine the estimates of cumulants, thus making them useless. Secondly, even if the cumulants were estimated perfectly, they measure mainly the tails of the distribution, and are largely unaffected by structure near the centre of the distribution [5]. Therefore, better approximations of entropy are needed. To this end, we introduce in this paper approximations of entropy that are both more exact in expectation and have better finite-sample statistical properties, when compared to the cumulant-based approximations. Nevertheless, they retain the computational and conceptual simplicity of the cumulant-based approach. Our approximations are based on an approximative maximum entropy method. This means that we approximate the maximum entropy that is compatible with our measurements of the random variable X. This maximum entropy, or further approximations thereof, can then be used as a meaningful approximation of the entropy of X. To accomplish this, we derive a first-order approximation of the density that has the maximum entropy given a set of constraints, and then use it to derive approximations of the differential entropy of X. 2 Applications of Differential Entropy First, we discuss some applications of the approximations introduced in this paper. Two important applications of differential entropy are independent component analysis (ICA) and projection pursuit. 
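The outlier sensitivity of cumulant estimators noted in the introduction is easy to demonstrate numerically (a sketch; the sample size and outlier value are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_kurtosis(x):
    """kurt(x) = E{x^4} - 3 (E{x^2})^2, the fourth-order cumulant of a zero-mean x."""
    return np.mean(x**4) - 3 * np.mean(x**2) ** 2

x = rng.standard_normal(10_000)   # Gaussian sample: the true kurtosis is 0
k_clean = sample_kurtosis(x)

x_bad = x.copy()
x_bad[0] = 30.0                    # a single erroneous observation with a large value
k_dirty = sample_kurtosis(x_bad)
```

One observation out of ten thousand completely dominates the estimate, which is the first objection raised above.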
In the general formulation of ICA [2], the purpose is to transform an observed random vector x = (x1, ..., xm)^T linearly into a random vector s = (s1, ..., sm)^T whose components are statistically as independent from each other as possible. The mutual dependence of the si is classically measured by mutual information. Assuming that the linear transformation is invertible, the mutual information I(s1, ..., sm) can be expressed as I(s1, ..., sm) = Σi H(si) - H(x1, ..., xm) - log |det M|, where M is the matrix defining the transformation s = Mx. The second term on the right-hand side does not depend on M, and the minimization of the last term is a simple matter of differential calculus. Therefore, the critical part is the estimation of the 1-D entropies H(si): finding an efficient and reliable estimator or approximation of entropy enables an efficient and reliable estimation of the ICA decomposition. In projection pursuit, the purpose is to search for projections of multivariate data which have 'interesting' distributions [5, 6, 9]. Typically, interestingness is considered equivalent with non-Gaussianity. A natural criterion of non-Gaussianity is entropy [6, 9], which attains its maximum (for constant variance) when the distribution is Gaussian, and all other distributions have smaller entropies. Because of the difficulties encountered in the estimation of entropy, many authors have considered other measures of non-Gaussianity (see [3]) but entropy remains, in our view, the best choice of a projection pursuit index, especially because it provides a simple connection to ICA. Indeed, it can be shown [2] that in ICA as well as in projection pursuit, the basic problem is to find directions in which entropy is minimized for constant variance. 3 Why maximum entropy? Assume that the information available on the density f(x) of the scalar random variable X is of the form ∫ f(x)Gi(x) dx = ci, for i = 1, ... 
, n, (2) which means in practice that we have estimated the expectations E{Gi(X)} of n different functions of X. Since we are not assuming any model for the random variable X, the estimation of the entropy of X using this information is not a well-defined problem: there exist an infinite number of distributions for which the constraints in (2) are fulfilled, but whose entropies are very different from each other. In particular, the differential entropy reaches -∞ in the limit where X takes only a finite number of values. A simple solution to this dilemma is the maximum entropy method. This means that we compute the maximum entropy that is compatible with our constraints or measurements in (2), which is a well-defined problem. This maximum entropy, or further approximations thereof, can then be used as an approximation of the entropy of X. Our approach is thus very different from the asymptotic approach often used in projection pursuit [3, 5]. In the asymptotic approach, one establishes a sequence of functions Gi so that when n goes to infinity, the information in (2) gives an asymptotically convergent approximation of some theoretical projection pursuit index. We avoid in this paper any asymptotic considerations, and consider directly the case of finite information, i.e., finite n. This non-asymptotic approach is justified by the fact that often in practice, only a small number of measurements of the form (2) are used, for computational or other reasons. 4 Approximating the maximum entropy density In this section, we shall derive an approximation of the density of maximum entropy compatible with the measurements in (2). 
The basic results of the maximum entropy method tell us [4] that under some regularity conditions, the density f0(x) which satisfies the constraints (2) and has maximum entropy among all such densities, is of the form f0(x) = A exp(Σi ai Gi(x)), (3) where A and the ai are constants that are determined from the ci, using the constraints in (2) (i.e., by substituting the right-hand side of (3) for f in (2)), and the constraint ∫ f0(x) dx = 1. This leads in general to a system of n+1 non-linear equations which is difficult to solve. Therefore, we decide to make a simple approximation of f0. This is based on the assumption that the density f(x) is not very far from a Gaussian distribution of the same mean and variance. Such an assumption, though perhaps counterintuitive, is justified because we shall construct a density expansion (not unlike a Taylor expansion) in the vicinity of the Gaussian density. In addition, we can make the technical assumption that f(x) is near the standardized Gaussian density φ(x) = exp(-x²/2)/√(2π), since this amounts simply to making X zero-mean and of unit variance. Therefore we put two additional constraints in (2), defined by G_{n+1}(x) = x, c_{n+1} = 0 and G_{n+2}(x) = x², c_{n+2} = 1. To further simplify the calculations, let us make another, purely technical assumption: the functions Gi, i = 1, ..., n, form an orthonormal system according to the metric defined by φ, and are orthogonal to all polynomials of second degree. In other words, for all i, j = 1, ..., n, ∫ φ(x)Gi(x)Gj(x) dx = 1 if i = j, and 0 if i ≠ j. (4) For any linearly independent functions Gi, this assumption can always be made true by ordinary Gram-Schmidt orthonormalization. Now, note that the assumption of near-Gaussianity implies that all the other ai in (3) are very small compared to a_{n+2} ≈ -1/2, since the exponential in (3) is not far from exp(-x²/2). Thus we can make a first-order approximation of the exponential function (detailed derivations can be found in [8]). 
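The Gram-Schmidt orthonormalization invoked here can be carried out numerically. The sketch below (an illustration, not from the paper) orthonormalizes the raw function |x|, used later in Section 7, against the monomials 1, x, x² under the metric defined by φ, so that the conditions in (4) hold:

```python
import numpy as np

xs = np.linspace(-10, 10, 100001)
phi = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)
ip = lambda a, b: np.trapz(phi * a * b, xs)   # inner product <a,b> in the metric phi

# Gram-Schmidt over {1, x, x^2, |x|}: the first three entries span the
# second-degree polynomials, the last is the raw measuring function.
funcs = [np.ones_like(xs), xs, xs**2, np.abs(xs)]
ortho = []
for f in funcs:
    for e in ortho:
        f = f - ip(f, e) * e          # remove the component along each earlier function
    ortho.append(f / np.sqrt(ip(f, f)))

G1 = ortho[3]  # orthonormalized version of |x|: unit norm, orthogonal to 1, x, x^2
```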
This allows for simple solutions for the constants in (3), and we obtain the approximative maximum entropy density, which we denote by f̂(x): f̂(x) = φ(x)(1 + Σ_{i=1}^{n} ci Gi(x)), (5) where ci = E{Gi(X)}. To estimate this density in practice, the ci are estimated, for example, as the corresponding sample averages of the Gi(X). The density expansion in (5) is somewhat similar to the Gram-Charlier and Edgeworth expansions [11]. 5 Approximating the differential entropy An important application of the approximation of density shown in (5) is in approximation of entropy. A simple approximation of entropy can be found by approximating both occurrences of f in the definition (1) by f̂ as defined in Eq. (5), and using a Taylor approximation of the logarithmic function, which yields (1 + ε) log(1 + ε) ≈ ε + ε²/2. Thus one obtains, after some algebraic manipulations [8], H(X) ≈ -∫ f̂(x) log f̂(x) dx ≈ H(ν) - (1/2) Σ_{i=1}^{n} ci², (6) where H(ν) = (1/2)(1 + log(2π)) is the entropy of a standardized Gaussian variable, and ci = E{Gi(X)} as above. Note that even in cases where this approximation is not very accurate, (6) can be used to construct a projection pursuit index (or a measure of non-Gaussianity) that is consistent in the sense that (6) obtains its maximum value, H(ν), when X has a Gaussian distribution. 6 Choosing the measuring functions Now it remains to choose the 'measuring' functions Gi that define the information given in (2). As noted in Section 4, one can take practically any set of linearly independent functions, say Ḡi, i = 1, ..., n, and then apply Gram-Schmidt orthonormalization on the set containing those functions and the monomials x^k, k = 0, 1, 2, so as to obtain the set Gi that fulfills the orthogonality assumptions in (4). This can be done, in general, by numerical integration. In the practical choice of the functions Gi, the following criteria must be emphasized: First, the practical estimation of E{Gi(X)} should not be statistically difficult. 
In particular, this estimation should not be too sensitive to outliers. Second, the maximum entropy method assumes that the function f0 in (3) is integrable. Therefore, to ensure that the maximum entropy distribution exists in the first place, the Gi(x) must not grow faster than quadratically as a function of |x|, because a function growing faster might lead to non-integrability of f0 [4]. Finally, the Gi must capture aspects of the distribution of X that are pertinent in the computation of entropy. In particular, if the density f(x) were known, the optimal function G_opt would clearly be -log f(x), because -E{log f(X)} gives directly the entropy. Thus, one might use the log-densities of some known important densities as Gi. The first two criteria are met if the Gi(x) are functions that do not grow too fast (not faster than quadratically) when |x| grows. This excludes, for example, the use of higher-order polynomials, as are used in the Gram-Charlier and Edgeworth expansions. One might then search, according to the last criterion above, for log-densities of some well-known distributions that also fulfill the first two conditions. Examples will be given in the next section. It should be noted, however, that the criteria above only delimit the space of functions that can be used. Our framework enables the use of very different functions (or just one) as Gi. The choice is not restricted to some well-known basis of a functional space, as in most approaches [1, 2, 9]. However, if prior knowledge is available on the distributions whose entropy is to be estimated, the above consideration shows how to choose the optimal function. 7 A simple special case A simple special case of (5) is obtained if one uses two functions G1 and G2, which are chosen so that G1 is odd and G2 is even. Such a system of two functions can measure the two most important features of non-Gaussian 1-D distributions. 
The odd function measures the asymmetry, and the even function measures the bimodality/sparsity dimension (called central hole/central mass concentration in [3]). After extensive experiments, Cook et al. [3] also came to the conclusion that two such measures (or two terms in their projection pursuit index) are enough for projection pursuit in most cases. Classically, these features have been measured by skewness and kurtosis, which correspond to G1(x) = x³ and G2(x) = x⁴, but we do not use these functions for the reasons explained in Section 6. In this special case, the approximation in (6) simplifies to H(X) ≈ H(ν) - [k1 (E{G1(X)})² + k2 (E{G2(X)} - E{G2(ν)})²], (7) where k1 and k2 are positive constants (see [8]), and ν is a Gaussian random variable of zero mean and unit variance. Practical examples of choices of Gi that are consistent with the requirements in Section 6 are the following. First, for measuring bimodality/sparsity, one might use, according to the recommendations of Section 6, the log-density of the double exponential (or Laplace) distribution: G2a(x) = |x|. For computational reasons, a smoother version of G2a might also be used. Another choice would be the Gaussian function, which may be considered as the log-density of a distribution with infinitely heavy tails: G2b(x) = exp(-x²/2). For measuring asymmetry, one might use, on more heuristic grounds, the following function: G1(x) = x exp(-x²/2), which corresponds to the second term in the projection pursuit index proposed in [3]. Using the above examples one obtains two practical versions of (7): Ha(X) = H(ν) - [k1 (E{X exp(-X²/2)})² + k2a (E{|X|} - √(2/π))²], (8) Hb(X) = H(ν) - [k1 (E{X exp(-X²/2)})² + k2b (E{exp(-X²/2)} - √(1/2))²], (9) with k1 = 36/(8√3 - 9), k2a = 1/(2 - 6/π), and k2b = 24/(16√3 - 27). As above, H(ν) = (1/2)(1 + log(2π)) is the entropy of a standardized Gaussian variable. 
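Equations (8) and (9) translate directly into negentropy estimates J(X) = H(ν) - H(X). A sketch assuming zero-mean, unit-variance input and using the constants given above (an illustration, not the author's code):

```python
import numpy as np

K1 = 36 / (8 * np.sqrt(3) - 9)     # k1
K2A = 1 / (2 - 6 / np.pi)          # k2a, paired with G2a(x) = |x|
K2B = 24 / (16 * np.sqrt(3) - 27)  # k2b, paired with G2b(x) = exp(-x^2/2)

def negentropy_a(x):
    """J(X) = H(nu) - Ha(X), from Eq. (8); x is assumed zero-mean, unit-variance."""
    t1 = np.mean(x * np.exp(-x**2 / 2))
    t2 = np.mean(np.abs(x)) - np.sqrt(2 / np.pi)
    return K1 * t1**2 + K2A * t2**2

def negentropy_b(x):
    """J(X) = H(nu) - Hb(X), from Eq. (9)."""
    t1 = np.mean(x * np.exp(-x**2 / 2))
    t2 = np.mean(np.exp(-x**2 / 2)) - np.sqrt(1 / 2)
    return K1 * t1**2 + K2B * t2**2

rng = np.random.default_rng(2)
g = rng.standard_normal(200_000)               # Gaussian: negentropy should be ~0
lap = rng.laplace(size=200_000) / np.sqrt(2)   # unit-variance Laplace: clearly > 0
```

Both estimates vanish (up to sampling noise) for the Gaussian sample and are strictly positive for the sparse Laplace sample.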
These approximations Ha(X) and Hb(X) can be considered more robust and accurate generalizations of the approximation derived using the Gram-Charlier expansion in [9]. Indeed, using the polynomials G1(x) = x³ and G2(x) = x⁴ one obtains the approximation of entropy in [9], which is in practice almost identical to those proposed in [1, 2]. Finally, note that the approximation in (9) is very similar to the first two terms of the projection pursuit index in [3]. Algorithms for independent component analysis and projection pursuit can be derived from these approximations; see [7]. 8 Simulation results To show the validity of our approximations of differential entropy we compared the approximations Ha and Hb in Eqs. (8) and (9) in Section 7 with the one offered by higher-order cumulants as given in [9]. The expectations were here evaluated exactly, ignoring finite-sample effects. First, we used a family of Gaussian mixture densities, defined by f(x) = μφ(x) + (1 - μ) 2φ(2(x - 1)), (10) where μ is a parameter that takes all the values in the interval 0 ≤ μ ≤ 1. This family includes asymmetric densities of both negative and positive kurtosis. The results are depicted in Fig. 1. Note that the plots show approximations of negentropies: the negentropy of X equals H(ν) - H(X), where ν is again a standardized Gaussian variable. One can see that both of the approximations Ha and Hb introduced in Section 7 were considerably more accurate than the cumulant-based approximation. Second, we considered the following family of density functions: fα(x) = C1 exp(C2 |x|^α), (11) where α is a positive constant, and C1, C2 are normalization constants that make fα a probability density of unit variance. For different values of α, the densities in this family exhibit different shapes. For α < 2, one obtains (sparse) densities of positive kurtosis. For α = 2, one obtains the Gaussian density, and for α > 2, a density of negative kurtosis. 
Thus the densities in this family can be used as examples of different symmetric non-Gaussian densities. In Figure 2, the different approximations are plotted for this family, using parameter values 0.5 ≤ α ≤ 3. Since the densities used are all symmetric, the first terms in the approximations were neglected. Again, it is clear that both of the approximations Ha and Hb introduced in Section 7 were much more accurate than the cumulant-based approximation in [2, 9]. (In the case of symmetric densities, these two cumulant-based approximations are identical.) Especially in the case of sparse densities (or densities of positive kurtosis), the cumulant-based approximation performed very poorly; this is probably because it gives too much weight to the tails of the distribution. References [1] S. Amari, A. Cichocki, and H.H. Yang. A new learning algorithm for blind source separation. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing 8 (Proc. NIPS '95), pages 757-763. MIT Press, Cambridge, MA, 1996. [2] P. Comon. Independent component analysis - a new concept? Signal Processing, 36:287-314, 1994. Figure 1: Comparison of different approximations of negentropy, for the family of mixture densities in (10) parametrized by μ ranging from 0 to 1. Solid curve: true negentropy. Dotted curve: cumulant-based approximation. Dashed curve: approximation Ha in (8). Dot-dashed curve: approximation Hb in (9). Our two approximations were clearly better than the cumulant-based one. Figure 2: Comparison of different approximations of negentropy, for the family of densities (11) parametrized by α. 
On the left, approximations for densities of positive kurtosis (0.5 ≤ α < 2) are depicted, and on the right, approximations for densities of negative kurtosis (2 < α ≤ 3). Solid curve: true negentropy. Dotted curve: cumulant-based approximation. Dashed curve: approximation Ha in (8). Dot-dashed curve: approximation Hb in (9). Clearly, our two approximations were much better than the cumulant-based one, especially in the case of densities of positive kurtosis. [3] D. Cook, A. Buja, and J. Cabrera. Projection pursuit indexes based on orthonormal function expansions. J. of Computational and Graphical Statistics, 2(3):225-250, 1993. [4] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 1991. [5] J.H. Friedman. Exploratory projection pursuit. J. of the American Statistical Association, 82(397):249-266, 1987. [6] P.J. Huber. Projection pursuit. The Annals of Statistics, 13(2):435-475, 1985. [7] A. Hyvärinen. Independent component analysis by minimization of mutual information. Technical Report A46, Helsinki University of Technology, Laboratory of Computer and Information Science, 1997. [8] A. Hyvärinen. New approximations of differential entropy for independent component analysis and projection pursuit. Technical Report A47, Helsinki University of Technology, Laboratory of Computer and Information Science, 1997. Available at http://www.cis.hut.fi/~aapo. [9] M.C. Jones and R. Sibson. What is projection pursuit? J. of the Royal Statistical Society, ser. A, 150:1-36, 1987. [10] C. Jutten and J. Herault. Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24:1-10, 1991. [11] M. Kendall and A. Stuart. The Advanced Theory of Statistics. Charles Griffin & Company, 1958.
1997
Independent Component Analysis for identification of artifacts in Magnetoencephalographic recordings Ricardo Vigário¹*, Veikko Jousmäki², Matti Hämäläinen², Riitta Hari², and Erkki Oja¹ ¹ Lab. of Computer & Info. Science, Helsinki University of Technology, P.O. Box 2200, FIN-02015 HUT, Finland, {Ricardo.Vigario, Erkki.Oja}@hut.fi ² Brain Research Unit, Low Temperature Lab., Helsinki University of Technology, P.O. Box 2200, FIN-02015 HUT, Finland, {veikko, msh, hari}@neuro.hut.fi (* Corresponding author) Abstract We have studied the application of an independent component analysis (ICA) approach to the identification and possible removal of artifacts from a magnetoencephalographic (MEG) recording. This statistical technique separates components according to the kurtosis of their amplitude distributions over time, thus distinguishing between strictly periodical signals, and regularly and irregularly occurring signals. Many artifacts belong to the last category. In order to assess the effectiveness of the method, controlled artifacts were produced, which included saccadic eye movements and blinks, increased muscular tension due to biting, and the presence of a digital watch inside the magnetically shielded room. The results demonstrate the capability of the method to identify and clearly isolate the produced artifacts. 1 Introduction When using a magnetoencephalographic (MEG) record, as a research or clinical tool, the investigator may face a problem of extracting the essential features of the neuromagnetic signals in the presence of artifacts. The amplitude of the disturbance may be higher than that of the brain signals, and the artifacts may resemble pathological signals in shape. For example, the heart's electrical activity, captured by the lowest sensors of a whole-scalp magnetometer array, may resemble epileptic spikes and slow waves (Jousmäki and Hari 1996). 
The identification and eventual removal of artifacts is a common problem in electroencephalography (EEG), but has been very infrequently discussed in the context of MEG (Hari 1993; Berg and Scherg 1994). The simplest, and probably most commonly used, artifact correction method is rejection, based on discarding portions of MEG that coincide with those artifacts. Other methods tend to restrict the subject from producing the artifacts (e.g. by asking the subject to fix the eyes on a target to avoid eye-related artifacts, or to relax to avoid muscular artifacts). The effectiveness of those methods can be questionable in studies of neurological patients, or other non-co-operative subjects. In eye artifact canceling, other methods are available and have recently been reviewed by Vigário (1997b), whose method is close to the one presented here, and in Jung et al. (1998). This paper introduces a new method to separate brain activity from artifacts, based on the assumption that the brain activity and the artifacts are anatomically and physiologically separate processes, and that their independence is reflected in the statistical relation between the magnetic signals generated by those processes. The remainder of the paper includes an introduction to independent component analysis, with a presentation of the algorithm employed and some justification of this approach. Experimental data are used to illustrate the feasibility of the technique, followed by a discussion of the results. 2 Independent Component Analysis Independent component analysis is a useful extension of principal component analysis (PCA). It was developed some years ago in the context of blind source separation applications (Jutten and Herault 1991; Comon 1994). In PCA, the eigenvectors of the signal covariance matrix C = E{x x^T} give the directions of largest variance of the input data x. 
The principal components found by projecting x onto those perpendicular basis vectors are uncorrelated, and their directions orthogonal. However, standard PCA is not suited for dealing with non-Gaussian data. Several authors, from the signal processing to the artificial neural network communities, have shown that information obtained from a second-order method such as PCA is not enough, and higher-order statistics are needed when dealing with the more demanding restriction of independence (Jutten and Herault 1991; Comon 1994). A good tutorial on neural ICA implementations is available by Karhunen et al. (1997). The particular algorithm used in this study was presented and derived by Hyvärinen and Oja (1997a, 1997b). 2.1 The model In blind source separation, the original independent sources are assumed to be unknown, and we only have access to their weighted sum. In this model, the signals recorded in an MEG study are denoted xk(i) (i ranging from 1 to L, the number of sensors used, and k denoting discrete time); see Fig. 1. Each xk(i) is expressed as the weighted sum of M independent signals sk(j), following the vector expression: xk = Σ_{j=1}^{M} a(j) sk(j) = A sk, (1) where xk = [xk(1), ..., xk(L)]^T is an L-dimensional data vector, made up of the L mixtures at discrete time k. The sk(1), ..., sk(M) are the M zero-mean independent source signals, and A = [a(1), ..., a(M)] is a mixing matrix, independent of time, whose elements aij are the unknown coefficients of the mixtures. In order to perform ICA, it is necessary to have at least as many mixtures as there are independent sources (L ≥ M). When this relation is not fully guaranteed, and the dimensionality of the problem is high enough, we should expect the first independent components to present clearly the most strongly independent signals, while the last components still consist of mixtures of the remaining signals. 
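Equation (1) describes a static linear mixture; a toy sketch with hypothetical source signals (not MEG data) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# M = 2 independent zero-mean sources s_k(j), observed at L = 2 "sensors"
n = 5_000
t = np.arange(n)
s = np.vstack([np.sin(2 * np.pi * t / 100),   # rhythmic, brain-signal-like source
               rng.laplace(size=n)])           # spiky, artifact-like source

# Time-invariant mixing matrix A = [a(1), a(2)] with unknown coefficients a_ij
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])

x = A @ s  # x_k = A s_k, Eq. (1): each sensor sees a weighted sum of both sources
```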
In our study, we did expect that the artifacts, being clearly independent from the brain activity, should come out in the first independent components. The remainder of the brain activity (e.g. α and μ rhythms) may need some further processing. The mixing matrix A is a function of the geometry of the sources and the electrical conductivities of the brain, cerebrospinal fluid, skull and scalp. Although this matrix is unknown, we assume it to be constant, or slowly changing (to preserve some local constancy). The problem is now to estimate the independent signals sk(j) from their mixtures, or the equivalent problem of finding the separating matrix B that satisfies (see Eq. 1) sk = B xk. (2) In our algorithm, the solution uses the statistical definition of fourth-order cumulant or kurtosis that, for the ith source signal, is defined as kurt(s(i)) = E{s(i)⁴} - 3 [E{s(i)²}]², where E{s} denotes the mathematical expectation of s. 2.2 The algorithm The initial step in source separation, using the method described in this article, is whitening, or sphering. This projection of the data is used to achieve uncorrelatedness between the solutions found, which is a prerequisite of statistical independence (Hyvärinen and Oja 1997a). The whitening can also be seen to ease the separation of the independent signals (Karhunen et al. 1997). It may be accomplished by PCA projection: v = V x, with E{v v^T} = I. The whitening matrix V is given by V = Λ^{-1/2} Θ^T, where Λ = diag[λ(1), ..., λ(M)] is a diagonal matrix with the eigenvalues of the data covariance matrix E{x x^T}, and Θ is a matrix with the corresponding eigenvectors as its columns. Consider a linear combination y = w^T v of a sphered data vector v, with ||w|| = 1. Then E{y²} = 1 and kurt(y) = E{y⁴} - 3, whose gradient with respect to w is 4 E{v (w^T v)³}. 
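Combining this whitening step with the kurtosis gradient above gives the fixed-point iteration described next. The sketch below (synthetic two-source data, not the paper's MEG recordings) whitens the mixtures and runs one unit of the iteration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two independent non-Gaussian sources, mixed as in Eq. (1)
n = 50_000
s = np.vstack([np.sign(rng.standard_normal(n)),  # sub-Gaussian (negative kurtosis)
               rng.laplace(size=n)])              # super-Gaussian (positive kurtosis)
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s
x -= x.mean(axis=1, keepdims=True)

# Whitening v = Vx with V = Lambda^(-1/2) Theta^T, so that E{v v^T} = I
lam, theta = np.linalg.eigh(np.cov(x, bias=True))
v = np.diag(lam ** -0.5) @ theta.T @ x

# One-unit fixed-point iteration on the kurtosis:
# w <- E{v (w^T v)^3} - 3w, renormalize, stop when |w_l^T w_{l-1}| is near 1
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(200):
    w_new = (v * (w @ v) ** 3).mean(axis=1) - 3 * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(w_new @ w) > 1 - 1e-8
    w = w_new
    if converged:
        break

est = w @ v  # one estimated independent source, as in Eq. (2)
```

Up to sign and scale, `est` should match one of the two original sources.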
Based on this, Hyvärinen and Oja (1997a) introduced a simple and efficient fixed-point algorithm for computing ICA, calculated over sphered zero-mean vectors v, that is able to find one of the rows of the separating matrix B (denoted w) and so identify one independent source at a time; the corresponding independent source can then be found using Eq. 2. This algorithm, a gradient descent over the kurtosis, is defined for a particular k as 1. Take a random initial vector w0 of unit norm. Let l = 1. 2. Let wl = E{v (w_{l-1}^T v)³} - 3 w_{l-1}. The expectation can be estimated using a large sample of vk vectors (say, 1,000 vectors). 3. Divide wl by its norm (e.g. the Euclidean norm ||w|| = √(Σi wi²)). 4. If |wl^T w_{l-1}| is not close enough to 1, let l = l + 1 and go back to step 2. Otherwise, output the vector wl. In order to estimate more than one solution, and up to a maximum of M, the algorithm may be run as many times as required. It is, nevertheless, necessary to remove the information contained in the solutions already found, to estimate each time a different independent component. This can be achieved, after the fourth step of the algorithm, by simply subtracting the estimated solution s = w^T v from the unsphered data xk. As the solution is defined up to a multiplying constant, the subtracted vector must be multiplied by a vector containing the regression coefficients over each vector component of xk. 3 Methods The MEG signals were recorded in a magnetically shielded room with a 122-channel whole-scalp Neuromag-122 neuromagnetometer. This device collects data at 61 locations over the scalp, using orthogonal double-loop pick-up coils that couple strongly to a local source just underneath, thus making the measurement "near-sighted" (Hämäläinen et al. 1993). One of the authors served as the subject and was seated under the magnetometer. He kept his head immobile during the measurement. 
He was asked to blink and make horizontal saccades, in order to produce typical ocular artifacts. Moreover, to produce myographic artifacts, the subject was asked to bite his teeth for as long as 20 seconds. Yet another artifact was created by placing a digital watch one meter away from the helmet inside the shielded room. Finally, to produce breathing artifacts, a piece of metal was placed next to the navel. Vertical and horizontal electro-oculograms (VEOG and HEOG) and an electrocardiogram (ECG) between both wrists were recorded simultaneously with the MEG, in order to guide and ease the identification of the independent components. The band-pass filtered MEG (0.03-90 Hz), VEOG, HEOG, and ECG (0.1-100 Hz) signals were digitized at 297 Hz, further digitally low-pass filtered with a cutoff frequency of 45 Hz, and downsampled by a factor of 2. The total length of the recording was 2 minutes. A second set of recordings was performed, to assess the reproducibility of the results. Figure 1 presents a subset of 12 spontaneous MEG signals from the frontal, temporal and occipital areas. Due to the dimension of the data (122 magnetic signals were recorded), it is impractical to plot all MEG signals (the complete set is available on the internet; see the reference list for the address (Vigário 1997a)). Both EOG channels and the electrocardiogram are also presented.

4 Results

Figure 2 shows sections of 9 independent components (ICs) found from the recorded data, corresponding to a 1 min period, starting 1 min after the beginning of the measurements. The first two ICs, with a broad band spectrum, are clearly due to the muscular activity originated by the biting. Their separation into two components seems to correspond, on the basis of the field patterns, to two different sets of muscles that were activated during the process. IC3 and IC5 show the horizontal eye movements and the eye blinks, respectively.
IC4 represents the cardiac artifact, which is very clearly extracted. In agreement with Jousmäki and Hari (1996), the magnetic field pattern of IC4 shows some predominance on the left.

Figure 1: Samples of MEG signals, showing artifacts produced by blinking, saccades, biting and the cardiac cycle. For each of the 6 positions shown, the two orthogonal directions of the sensors are plotted.

The breathing artifact was visible in several independent components, e.g. IC6 and IC7. It is possible that, during each breath, the relative position and orientation of the metallic piece with respect to the magnetometer changed. Therefore, the breathing artifact would be associated with more than one column of the mixing matrix A, or with a time-varying mixing vector. To make the analysis less sensitive to the breathing artifact, and to find the remaining artifacts, the data were high-pass filtered, with a cutoff frequency of 1 Hz. Next, the independent component IC8 was found. It clearly shows the artifact originated at the digital watch, located to the right side of the magnetometer.
The last independent component shown, relating to the first minute of the measurement, is related to a sensor presenting higher RMS (root mean squared) noise than the others.

5 Discussion

The present paper introduces a new approach to artifact identification in MEG recordings, based on the statistical technique of Independent Component Analysis. Using this method, we were able to isolate both eye movement and eye blinking artifacts, as well as cardiac, myographic, and respiratory artifacts. The basic assumption made about the data used in the study is that of independence between brain and artifact waveforms. In most cases this independence can be verified by the known differences in the physiological origins of those signals. Nevertheless, in some event-related potential (ERP) studies (e.g. when using infrequent or painful stimuli), both the cerebral and ocular signals can be similarly time-locked to the stimulus. This local time dependence could in principle affect these particular ICA studies. However, as the independence between two signals is a measure of the similarity between their joint amplitude distribution and the product of each signal's distribution (calculated throughout the entire signal, and not only close to the stimulus applied), it can be expected that the very local relation between those two signals, during stimulation, will not affect their global statistical relation.

6 Acknowledgment

Supported by a grant from Junta Nacional de Investigação Científica e Tecnológica, under its 'Programa PRAXIS XXI' (R.V.), and by the Academy of Finland (R.H.).

References

Berg, P. and M. Scherg (1994). A multiple source approach to the correction of eye artifacts. Electroenceph. clin. Neurophysiol. 90, 229-241.
Comon, P. (1994). Independent component analysis - a new concept? Signal Processing 36, 287-314.
Hämäläinen, M., R. Hari, R. Ilmoniemi, J. Knuutila, and O. V.
Lounasmaa (1993, April). Magnetoencephalography - theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics 65(2), 413-497.
Hari, R. (1993). Magnetoencephalography as a tool of clinical neurophysiology. In E. Niedermeyer and F. L. da Silva (Eds.), Electroencephalography: Basic principles, clinical applications, and related fields, pp. 1035-1061. Baltimore: Williams & Wilkins.
Hyvärinen, A. and E. Oja (1997a). A fast fixed-point algorithm for independent component analysis. Neural Computation 9, 1483-1492.
Hyvärinen, A. and E. Oja (1997b). One-unit learning rules for independent component analysis. In Neural Information Processing Systems 9 (Proc. NIPS'96). MIT Press.
Jousmäki, V. and R. Hari (1996). Cardiac artifacts in magnetoencephalogram. Journal of Clinical Neurophysiology 13(2), 172-176.
Jung, T.-P., C. Humphries, T.-W. Lee, S. Makeig, M. J. McKeown, V. Iragui, and T. Sejnowski (1998). Extended ICA removes artifacts from electroencephalographic recordings. In Neural Information Processing Systems 10 (Proc. NIPS'97). MIT Press.
Jutten, C. and J. Herault (1991). Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture. Signal Processing 24, 1-10.
Karhunen, J., E. Oja, L. Wang, R. Vigário, and J. Joutsensalo (1997). A class of neural networks for independent component analysis. IEEE Trans. Neural Networks 8(3), 1-19.
Vigário, R. (1997a). WWW address for the MEG data: http://nucleus.hut.fi/~rvigario/NIPS97_data.html.
Vigário, R. (1997b). Extraction of ocular artifacts from EEG using independent component analysis. To appear in Electroenceph. clin. Neurophysiol.
Figure 2: Nine independent components found from the MEG data. For each component, the left, back and right views of the field patterns generated by these components are shown; a full line stands for magnetic flux coming out of the head, and a dotted line for flux inwards.
1997
62
1,411
Classification by Pairwise Coupling

TREVOR HASTIE* Stanford University and ROBERT TIBSHIRANI† University of Toronto

Abstract

We discuss a strategy for polychotomous classification that involves estimating class probabilities for each pair of classes, and then coupling the estimates together. The coupling model is similar to the Bradley-Terry method for paired comparisons. We study the nature of the class probability estimates that arise, and examine the performance of the procedure on simulated datasets. The classifiers used include linear discriminants and nearest neighbors; application to support vector machines is also briefly described.

1 Introduction

We consider the discrimination problem with K classes and N training observations. The training observations consist of predictor measurements x = (x_1, x_2, ..., x_p) on p predictors and the known class memberships. Our goal is to predict the class membership of an observation with predictor vector x_0. Typically K-class classification rules tend to be easier to learn for K = 2 than for K > 2: only one decision boundary requires attention. Friedman (1996) suggested the following approach for the K-class problem: solve each of the two-class problems, and then for a test observation, combine all the pairwise decisions to form a K-class decision. Friedman's combination rule is quite intuitive: assign to the class that wins the most pairwise comparisons.

*Department of Statistics, Stanford University, Stanford, California 94305; trevor@playfair.stanford.edu
†Department of Preventive Medicine and Biostatistics, and Department of Statistics; tibs@utstat.toronto.edu

Friedman points out that this rule is equivalent to the Bayes rule when the class posterior probabilities p_i (at the test point) are known:

argmax_i [p_i] = argmax_i [ Σ_{j≠i} I( p_i/(p_i + p_j) > p_j/(p_i + p_j) ) ].

Note that Friedman's rule requires only an estimate of each pairwise decision.
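Friedman's max-wins rule is easy to state in code. The sketch below is illustrative only: the 3-class matrix of pairwise probability estimates is ours, not taken from the paper.

```python
import numpy as np

# Friedman's max-wins rule: given pairwise probability estimates
# r[i, j] ~ Prob(class i | i or j) at a test point, assign the class
# that wins the most pairwise contests.
def max_wins(r):
    wins = (r > 0.5).sum(axis=1)   # diagonal entries (0.5) never count as wins
    return int(np.argmax(wins))

# Hypothetical pairwise estimates at one test point (diagonal set to 0.5):
r = np.array([[0.5, 0.8, 0.7],
              [0.2, 0.5, 0.6],
              [0.3, 0.4, 0.5]])
winner = max_wins(r)               # class 0 wins both of its contests
```

Note that ties are possible (each class can win exactly one contest), which is exactly the indecision discussed below; `argmax` then breaks the tie arbitrarily.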
Many (pairwise) classifiers provide not only a rule, but estimated class probabilities as well. In this paper we argue that one can improve on Friedman's procedure by combining the pairwise class probability estimates into a joint probability estimate for all K classes. This leads us to consider the following problem. Given a set of events A_1, A_2, ..., A_K, some experts give us pairwise probabilities r_ij = Prob(A_i | A_i or A_j). Is there a set of probabilities p_i = Prob(A_i) that are compatible with the r_ij? In an exact sense, the answer is no. Since Prob(A_i | A_i or A_j) = p_i/(p_i + p_j) and Σ p_i = 1, we are requiring that K - 1 free parameters satisfy K(K - 1)/2 constraints, and this will not have a solution in general. For example, if the r_ij are the ijth entries in the matrix

( .    0.9  0.4 )
( 0.1  .    0.7 )    (1)
( 0.6  0.3  .   )

then they are not compatible with any p_i's. This is clear since r_12 > .5 and r_23 > .5, but also r_31 > .5. The model Prob(A_i | A_i or A_j) = p_i/(p_i + p_j) forms the basis for the Bradley-Terry model for paired comparisons (Bradley & Terry 1952). In this paper we fit this model by minimizing a Kullback-Leibler distance criterion to find the best approximation μ̂_ij = p̂_i/(p̂_i + p̂_j) to a given set of r_ij's. We carry this out at each predictor value x, and use the estimated probabilities to predict class membership at x. In the example above, the solution is p̂ = (0.47, 0.25, 0.28). This solution makes qualitative sense since event A_1 "beats" A_2 by a larger margin than the winner of any of the other pairwise matches. Figure 1 shows an example of these procedures in action. There are 600 data points in three classes, each class generated from a mixture of Gaussians. A linear discriminant model was fit to each pair of classes, giving pairwise probability estimates r̂_ij at each x. The first panel shows Friedman's procedure applied to the pairwise rules. The shaded regions are areas of indecision, where each class wins one vote.
The coupling procedure described in the next section was then applied, giving class probability estimates p̂(x) at each x. The decision boundaries resulting from these probabilities are shown in the second panel. The procedure has done a reasonable job of resolving the confusion, in this case producing decision boundaries similar to the three-class LDA boundaries shown in panel 3. The numbers in parentheses above the plots are test-error rates based on a large test sample from the same population. Notice that despite the indeterminacy, the max-wins procedure performs no worse than the coupling procedure, and both perform better than LDA. Later we show an example where the coupling procedure does substantially better than max-wins.

Figure 1: A three-class problem, with the data in each class generated from a mixture of Gaussians. The first panel shows the max-wins procedure (0.132). The second panel shows the decision boundary from coupling of the pairwise linear discriminant rules based on d̂ in (6) (0.136). The third panel shows the three-class LDA boundaries (0.213). Test-error rates are shown in parentheses.

This paper is organized as follows. The coupling model and algorithm are given in section 2. Pairwise threshold optimization, a key advantage of the pairwise approach, is discussed in section 3. In that section we also examine the performance of the various methods on some simulated problems, using both linear discriminant and nearest neighbour rules. The final section contains some discussion.

2 Coupling the probabilities

Let the probabilities at feature vector x be p(x) = (p_1(x), ..., p_K(x)). In this section we drop the argument x, since the calculations are done at each x separately. We assume that for each i ≠ j, there are n_ij observations in the training set and from these we have estimated conditional probabilities r_ij = Prob(i | i or j).
Our model is

n_ij r_ij ~ Binomial(n_ij, μ_ij),   μ_ij = p_i/(p_i + p_j),    (2)

or equivalently a log-nonlinear model

log μ_ij = log(p_i) - log(p_i + p_j).    (3)

We wish to find p̂_i's so that the μ̂_ij's are close to the r_ij's. There are K - 1 independent parameters but K(K - 1)/2 equations, so it is not possible in general to find p̂_i's so that μ̂_ij = r_ij for all i, j. Therefore we must settle for μ̂_ij's that are close to the observed r_ij's. Our closeness criterion is the (negative) average weighted Kullback-Leibler distance between r_ij and μ_ij:

R(p) = Σ_{i<j} n_ij [ r_ij log(μ_ij/r_ij) + (1 - r_ij) log((1 - μ_ij)/(1 - r_ij)) ],    (4)

and we find p̂ to maximize this function, i.e. to minimize the distance. This model and criterion are formally equivalent to the Bradley-Terry model for preference data. One observes a proportion r_ij of n_ij preferences for item i, and the sampling model is binomial, as in (2). If each of the r_ij were independent, then R(p) would be equivalent to the log-likelihood under this model. However our r_ij are not independent, as they share a common training set and were obtained from a common set of classifiers. Furthermore the binomial models do not apply in this case; the r_ij are evaluations of functions at a point, and the randomness arises in the way these functions are constructed from the training data. We include the n_ij as weights in (4); this is a crude way of accounting for the different precisions in the pairwise probability estimates. The score (gradient) equations are:

Σ_{j≠i} n_ij μ̂_ij = Σ_{j≠i} n_ij r_ij,   i = 1, 2, ..., K,    (5)

subject to Σ p̂_i = 1. We use the following iterative procedure to compute the p̂_i's:

Algorithm

1. Start with some guess for the p̂_i, and corresponding μ̂_ij.
2. Repeat (i = 1, 2, ..., K, 1, ...) until convergence:

   p̂_i ← p̂_i · ( Σ_{j≠i} n_ij r_ij ) / ( Σ_{j≠i} n_ij μ̂_ij );

   renormalize the p̂_i, and recompute the μ̂_ij.

The algorithm also appears in Bradley & Terry (1952). The updates in step 2 attempt to modify p̂ so that the sufficient statistics match their expectation, but go only part of the way. We prove in Hastie & Tibshirani (1996) that R(p) increases at each step.
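The algorithm of this section can be sketched directly from the score equations. The multiplicative update below is our reading of step 2, and uniform weights n_ij = 1 are an assumption made purely for illustration.

```python
import numpy as np

# Iterative coupling: p_i <- p_i * (sum_j r_ij) / (sum_j mu_ij),
# then renormalize, until the score equations are (approximately) met.
def couple(r, n_iter=500):
    K = r.shape[0]
    p = np.full(K, 1.0 / K)                  # step 1: initial guess
    for _ in range(n_iter):                  # step 2: sweep i = 1, ..., K
        for i in range(K):
            mu = p[:, None] / (p[:, None] + p[None, :])
            num = sum(r[i, j] for j in range(K) if j != i)
            den = sum(mu[i, j] for j in range(K) if j != i)
            p[i] *= num / den
        p /= p.sum()                         # renormalize
    return p

# The incompatible pairwise matrix (1) quoted earlier (diagonal set to 0.5):
r = np.array([[0.5, 0.9, 0.4],
              [0.1, 0.5, 0.7],
              [0.6, 0.3, 0.5]])
p = couple(r)   # close to the (0.47, 0.25, 0.28) quoted in the text
```

Even though no p exactly reproduces these r_ij, the fixed point gives the closest fit in the weighted KL sense.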
Since R(p) is bounded above by zero, the procedure converges. At convergence, the score equations are satisfied, and the μ̂_ij's and p̂ are consistent. This algorithm is similar in flavour to the Iterative Proportional Scaling (IPS) procedure used in log-linear models. IPS has a long history, dating back to Deming & Stephan (1940). Bishop, Fienberg & Holland (1975) give a modern treatment and many references. The resulting classification rule is

d̂(x) = argmax_i [ p̂_i(x) ].    (6)

Figure 2 shows another example similar to Figure 1, where we can compare the performance of the rules d and d̂. The hatched area in the top left panel is an indeterminate region where there is more than one class achieving max(p̂_i). In the top right panel the coupling procedure has resolved this indeterminacy in favor of class 1 by weighting the various probabilities. See the figure caption for a description of the bottom panels.

Figure 2: A three-class problem similar to that in Figure 1, with the data in each class generated from a mixture of Gaussians. The first panel shows the max-wins procedure d (0.449). The second panel shows the decision boundary from coupling of the pairwise linear discriminant rules based on d̂ in (6) (0.358). The third panel shows the three-class LDA boundaries (0.457), and the fourth the QDA boundaries (0.334). The numbers in parentheses are the error rates based on a large test set from the same population.

3 Pairwise threshold optimization

As pointed out by Friedman (1996), approaching the classification problem in a pairwise fashion allows one to optimize the classifier in a way that would be computationally burdensome for a K-class classifier. Here we discuss optimization of the classification threshold. For each two-class problem, let logit p_ij(x) = d_ij(x). Normally we would classify to class i if d_ij(x) > 0. Suppose we find that d_ij(x) > t_ij is better.
Then we define d̃_ij(x) = d_ij(x) - t_ij, and hence p̃_ij(x) = logit^{-1} d̃_ij(x). We do this for all pairs, and then apply the coupling algorithm to the p̃_ij(x) to obtain probabilities p̂_i(x). In this way we can optimize over K(K - 1)/2 parameters separately, rather than optimizing jointly over K parameters. With nearest neighbours, there are other approaches to threshold optimization, that bias the class probability estimates in different ways. See Hastie & Tibshirani (1996) for details. An example of the benefit of threshold optimization is given next.

Example: ten Gaussian classes with unequal covariance

In this simulated example taken from Friedman (1996), there are 10 Gaussian classes in 20 dimensions. The mean vectors of each class were chosen as 20 independent uniform [0,1] random variables. The covariance matrices are constructed from eigenvectors whose square roots are uniformly distributed on the 20-dimensional unit sphere (subject to being mutually orthogonal), and eigenvalues uniform on [0.01, 1.01]. There are 100 observations per class in the training set, and 200 per class in the test set. The optimal decision boundaries in this problem are quadratic, and neither linear nor nearest-neighbor methods are well-suited. Friedman states that the Bayes error rate is less than 1%. Figure 3 shows the test error rates for linear discriminant analysis, J-nearest neighbour and their paired versions using threshold optimization. We see that the coupled classifiers nearly halve the error rates in each case. In addition, the coupled rule works a little better than Friedman's max rule in each task. Friedman (1996) reports a median test error rate of about 16% for his thresholded version of pairwise nearest neighbour. Why does the pairwise thresholding work in this example? We looked more closely at the pairwise nearest neighbour rules that were constructed for this problem.
The thresholding biased the pairwise distances by about 7% on average. The average number of nearest neighbours used per class was 4.47 (.122), while the standard J-nearest neighbour approach used 6.70 (.590) neighbours for all ten classes. For all ten classes, the 4.47 translates into 44.7 neighbours. Hence relative to the standard J-NN rule, the pairwise rule, in using the threshold optimization to reduce bias, is able to use about six times as many near neighbours.

Figure 3: Test errors for 20 simulations of the ten-class Gaussian example.

4 Discussion

Due to lack of space, there are a number of issues that we did not discuss here. In Hastie & Tibshirani (1996), we show the relationship between the pairwise coupling and the max-wins rule: specifically, if the classifiers return 0s or 1s rather than probabilities, the two rules give the same classification. We also apply the pairwise coupling procedure to nearest neighbour and support vector machines. In the latter case, this provides a natural way of extending support vector machines, which are defined for two-class problems, to multi-class problems. The pairwise procedures, both Friedman's max-wins and our coupling, are most likely to offer improvements when additional optimization or efficiency gains are possible in the simpler 2-class scenarios. In some situations they perform exactly like the multiple-class classifiers. Two examples are: a) each of the pairwise rules is based on QDA, i.e. each class is modelled by a Gaussian distribution with separate covariances, and the r_ij's are then derived from Bayes rule; b) a generalization of the above, where the density in each class is modelled in some fashion, perhaps nonparametrically via density estimates or near-neighbour methods, and then the density estimates are used in Bayes rule.
Pairwise LDA followed by coupling seems to offer a nice compromise between LDA and QDA, although the decision boundaries are no longer linear. For this special case one might derive a different coupling procedure globally on the logit scale, which would guarantee linear decision boundaries. Work of this nature is currently in progress with Jerry Friedman.

Acknowledgments

We thank Jerry Friedman for sharing a preprint of his pairwise classification paper with us, and acknowledge helpful discussions with Jerry, Geoff Hinton, Radford Neal and David Tritchler. Trevor Hastie was partially supported by grant DMS-9504495 from the National Science Foundation, and grant R01-CA-72028-01 from the National Institutes of Health. Rob Tibshirani was supported by the Natural Sciences and Engineering Research Council of Canada and the IRIS Centre of Excellence.

References

Bishop, Y., Fienberg, S. & Holland, P. (1975), Discrete Multivariate Analysis, MIT Press, Cambridge.
Bradley, R. & Terry, M. (1952), 'The rank analysis of incomplete block designs. I. The method of paired comparisons', Biometrics pp. 324-345.
Deming, W. & Stephan, F. (1940), 'On a least squares adjustment of a sampled frequency table when the expected marginal totals are known', Ann. Math. Statist. pp. 427-444.
Friedman, J. (1996), Another approach to polychotomous classification, Technical report, Stanford University.
Hastie, T. & Tibshirani, R. (1996), Classification by pairwise coupling, Technical report, University of Toronto.
1997
63
1,412
Hybrid reinforcement learning and its application to biped robot control

Satoshi Yamada, Akira Watanabe, Michio Nakashima
{yamada, watanabe, naka}@bio.crl.melco.co.jp
Advanced Technology R&D Center, Mitsubishi Electric Corporation, Amagasaki, Hyogo 661-0001, Japan

Abstract

A learning system composed of linear control modules, reinforcement learning modules and selection modules (a hybrid reinforcement learning system) is proposed for the fast learning of real-world control problems. The selection modules choose one appropriate control module dependent on the state. This hybrid learning system was applied to the control of a stilt-type biped robot. It learned the control on a sloped floor more quickly than the usual reinforcement learning because it did not need to learn the control on a flat floor, where the linear control module can control the robot. When it was trained by a 2-step learning (during the first learning step, the selection module was trained by a training procedure controlled only by the linear controller), it learned the control more quickly. The average number of trials (about 50) is so small that the learning system is applicable to real robot control.

1 Introduction

Reinforcement learning has the ability to solve general control problems because it learns behavior through trial-and-error interactions with a dynamic environment. It has been applied to many problems, e.g., pole balancing [1], backgammon [2], manipulators [3], and biped robots [4]. However, reinforcement learning has rarely been applied to real robot control because it requires too many trials to learn the control, even for simple problems. For the fast learning of real-world control problems, we propose a new learning system which is a combination of a known controller and reinforcement learning. It is called the hybrid reinforcement learning system. One example of a known controller is a linear controller obtained by linear approximation. The hybrid learning system will learn the control more quickly than usual reinforcement learning because it does not need to learn the control in the states where the known controller can control the object. A stilt-type biped walking robot was used to test the hybrid reinforcement learning system. A real robot walked stably on a flat floor when controlled by a linear controller [5]. Robot motions could be approximated by linear differential equations. In this study, we will describe hybrid reinforcement learning of the control of the biped robot model on a sloped floor, where the linear controller cannot control the robot.

2 Biped Robot

Figure 1: Stilt-type biped robot. a) a photograph of a real biped robot, b) a model structure of the biped robot. u1, u2, u3 denote torques.

Figure 1-a shows a stilt-type biped robot [5]. It has no knee or ankle, has 1 m legs and weighs 33 kg. It is modeled by 3 rigid bodies as shown in Figure 1-b. By assuming that motions around the roll axis and those around the pitch axis are independent, 5-dimensional differential equations in the single supporting phase were obtained. Motions of the real biped robot were simulated by the combination of these equations and the conditions at the leg exchange period. If the angles are approximately zero, these equations can be approximated by linear equations. The following linear controller is obtained from the linear equations. The biped robot will walk if the angles of the free leg are controlled by a position-derivative (PD) controller whose desired angles are calculated as follows:

φ = θ + ξ + β,   ψ = θ + 2ξ,   ζ = -λθ̇ + δ,   λ = sqrt(l/g),    (1)

where ξ, β, δ, and g are the desired angle between the body and the leg (7°), a constant to make up for the loss caused by a leg exchange (1.3°), a constant corresponding to walking speed, and the gravitational acceleration (9.8 m s^-2), respectively, and l is the leg length. The linear controller controlled walking of the real biped robot on a flat floor [5].
However, it failed to control walking on a slope (Figure 2). In this study, the objective of the learning system was to control walking on the sloped floor shown in Figure 2-a.

Figure 2: Biped robot motion on a sloped floor controlled by the linear controller. a) the shape of the floor, b) changes in angular velocity, height of the free leg's tip, and robot position; the robot falls down on the slope.

3 Hybrid Reinforcement Learning

Figure 3: Hybrid reinforcement learning system. State inputs (η, ξ, θ̇) feed the linear control module and the reinforcement learning module; the selection module, trained with the reinforcement signal r(t), decides which module outputs k.

We propose a hybrid reinforcement learning system to learn the control quickly. The hybrid reinforcement learning system shown in Figure 3 is composed of a linear control module, a reinforcement learning module, and a selection module. The reinforcement learning module and the selection module select an action and a module dependent on their respective Q-values. This learning system is similar to the modular reinforcement learning system proposed by Tham [6], which was based on hierarchical mixtures of experts (HME) [7]. In the hybrid learning system, the selection module is trained by Q-learning. To combine the reinforcement learning with the linear controller described in (1), the output of the reinforcement learning module is set to k in the adaptable equation for ζ, ζ = -kθ̇ + δ. The angle and the angular velocity of the supporting leg at the leg exchange period (η, θ̇) are used as inputs. The k values are kept constant until the next leg exchange. The reinforcement learning module is trained by "Q-sarsa" learning [8]. Q values are calculated by CMAC neural networks [9], [10].
The Q values for action k, Q_c(x, k), and those for module selection, Q_s(x, s), are calculated as follows:

Q_c(x, k) = Σ_{m,i} w_c(k, m, i, t) y(m, i, t),
Q_s(x, s) = Σ_{m,i} w_s(s, m, i, t) y(m, i, t),    (2)

where w_c(k, m, i, t) and w_s(s, m, i, t) denote synaptic strengths and y(m, i, t) represents the neurons' outputs in the CMAC networks at time t. Modules were selected and actions performed according to the ε-greedy policy [8] with ε = 0. The temporal difference (TD) error for the reinforcement learning module, δ_c(t), is calculated by

δ_c(t) = 0,                                                    if sel(t) = lin,
δ_c(t) = r(t) + γ Q_c(x(t+1), per(t+1)) - Q_c(x(t), per(t)),   if sel(t) = rein and sel(t+1) = rein,
δ_c(t) = r(t) + γ Q_s(x(t+1), sel(t+1)) - Q_c(x(t), per(t)),   if sel(t) = rein and sel(t+1) = lin,    (3)

where r(t), per(t), sel(t), lin and rein denote reinforcement signals (r(t) = -1 if the robot falls down, 0 otherwise), performed actions, selected modules, the linear control module and the reinforcement learning module, respectively. The TD error δ_t(t) calculated from Q_s(x, s) is considered to be the sum of the TD error caused by the reinforcement learning module and that caused by the selection module. The TD error δ_s(t) used in the selection module's learning is calculated as follows:

δ_s(t) = δ_t(t) - δ_c(t) = r(t) + γ Q_s(x(t+1), sel(t+1)) - Q_s(x(t), sel(t)) - δ_c(t),    (4)

where γ denotes a discount factor. The reinforcement learning module used replacing eligibility traces e_c(k, m, i, t) [11]. Synaptic strengths are updated as follows:

w_c(k, m, i, t+1) = w_c(k, m, i, t) + α_c δ_c(t) e_c(k, m, i, t) / n_t,
w_s(s, m, i, t+1) = w_s(s, m, i, t) + α_s δ_s(t) y(m, i, t) / n_t   if s = sel(t), and w_s(s, m, i, t) otherwise,
e_c(k, m, i, t) = 1                     if k = per(t) and y(m, i, t) = 1,
e_c(k, m, i, t) = 0                     if k ≠ per(t) and y(m, i, t) = 1,
e_c(k, m, i, t) = λ e_c(k, m, i, t-1)   otherwise,    (5)

where α_c, α_s, λ and n_t are a learning constant for the reinforcement learning module, that for the selection module, a decay rate, and the number of tilings, respectively. In this study, the CMAC used 10 tilings.
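The TD-error split of Eqs. (3)-(4) can be made concrete with a small sketch. The function names and the numeric Q values below are ours, chosen only to illustrate the case analysis; this is not the authors' implementation.

```python
# delta_c: TD error of the reinforcement learning module, Eq. (3).
# Its bootstrap target depends on which module is selected at t and t+1.
def delta_c(r, gamma, sel_t, sel_t1, qc_t, qc_t1, qs_t1):
    if sel_t == "lin":                      # linear module acted: no error
        return 0.0
    if sel_t1 == "rein":                    # rein -> rein: bootstrap on Q_c
        return r + gamma * qc_t1 - qc_t
    return r + gamma * qs_t1 - qc_t         # rein -> lin: bootstrap on Q_s

# delta_s: the selection module's share of the total TD error, Eq. (4).
def delta_s(r, gamma, qs_t, qs_t1, d_c):
    return r + gamma * qs_t1 - qs_t - d_c

# Example: the reinforcement module acts at t, hands control back to the
# linear module at t+1, and the robot does not fall (r = 0):
dc = delta_c(r=0.0, gamma=0.9, sel_t="rein", sel_t1="lin",
             qc_t=-0.2, qc_t1=0.0, qs_t1=-0.1)
ds = delta_s(r=0.0, gamma=0.9, qs_t=-0.15, qs_t1=-0.1, d_c=dc)
```

These two errors then drive the weight updates of Eq. (5): `dc` scales the eligibility-trace update of w_c, and `ds` the update of w_s for the selected module.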
Each of the three dimensions was divided into 12 intervals. The reinforcement learning module had 5 actions (k = 0, λ/2, λ, 3λ/2, 2λ). The parameter values were α_s = 0.2, α_c = 0.4, λ = 0.3, γ = 0.9 and δ = 0.05. Each run consisted of a sequence of trials, where each trial began with a robot state of position = 0, -5° < θ < -2.5°, 1.5° < η < 3°, φ = θ + ξ, ψ = φ + ξ, ζ = η + 2°, θ̇ = φ̇ = ψ̇ = η̇ = ζ̇ = 0, and ended with a failure signal indicating the robot's falling down. Runs were terminated if the number of walking steps of three consecutive trials exceeded 100. All results reported are averages of 50 runs.

Figure 4: Learning profiles for control of walking on the sloped floor. (○) hybrid reinforcement learning, (□) 2-step hybrid reinforcement learning, (▽) reinforcement learning and (△) HME-type modular reinforcement learning.

4 Results

Walking control on the sloped floor (Figure 2-a) was first trained by the usual reinforcement learning. The usual reinforcement learning system needed many trials for successful termination (about 800, see Figure 4(▽)). Because the usual reinforcement learning system must learn the control for each input, it requires many trials. Figure 4(○) also shows the learning curve for the hybrid reinforcement learning. The hybrid system learned the control more quickly than the usual reinforcement learning (about 190 trials). Because it has a higher probability of succeeding on the flat floor, it learned the control quickly. On the other hand, HME-type modular reinforcement learning [6] required many trials to learn the control (Figure 4(△)).
Figure 5: Biped robot motion controlled by the network trained by the 2-step hybrid reinforcement learning (traces of angular velocity, the height of the free leg's tip, and robot position versus time).

In order to improve the learning rate, a 2-step learning was examined. The 2-step learning separates the selection-module learning from the reinforcement-learning-module learning. In the 2-step hybrid reinforcement learning, the selection module was first trained by a special training procedure in which the robot was controlled only by the linear control module; the network was then trained by the hybrid reinforcement learning. The 2-step hybrid reinforcement learning learned the control more quickly than the 1-step hybrid reinforcement learning (Figure 4(□)); the average number of trials was about 50. The hybrid learning system may therefore be applicable to the real biped robot. Figure 5 shows the biped robot motion controlled by the trained network. On the slope, the free leg's lifting was magnified irregularly (see the changes in the height of the free leg's tip in Figure 5) in order to prevent a reduction of the amplitude of the walking rhythm. On the upper flat floor, the robot was again controlled stably by the linear control module.

Figure 6: Dependence of (a) the learning rate and (b) the selection ratio of the linear control module on the initial synaptic strength values (w_s(rein, m, i, 0)).
(a) Learning rate of (○) the hybrid reinforcement learning and (□) the 2-step hybrid reinforcement learning. The learning rate is defined as the inverse of the number of trials after which the average walking steps exceed 70. (b) The ratio of linear-control-module selection. Circles represent the selection ratio of the linear control module when controlled by the network trained by the hybrid reinforcement learning; rectangles represent that for the 2-step hybrid reinforcement learning. Open symbols represent the selection ratio on the flat floor; closed symbols represent that on the slope.

The dependence of learning characteristics on the initial synaptic strengths for the reinforcement-learning-module selection (w_s(rein, m, i, 0)) was considered (other initial synaptic strengths were 0). If the initial values w_s(rein, m, i, 0) are negative, the Q values for the reinforcement-learning-module selection (Q_s(x, rein)) are smaller than Q_s(x, lin), and the linear control module is therefore selected for all states at the beginning of the learning. In the case of the 2-step learning, if w_s(rein, m, i, 0) are given appropriate negative values, the reinforcement learning module is selected only around failure states, where Q_s(x, lin) is trained in the first learning step, and the linear control module is selected otherwise at the beginning of the second learning step. Because the reinforcement learning module then only requires training around failure states, the 2-step hybrid system is expected to learn the control quickly. Figure 6-a shows the dependence of the learning rate on the initial synaptic strength values. The 2-step hybrid reinforcement learning had a higher learning rate when w_s(rein, m, i, 0) took appropriate negative values (−0.01 to −0.005).
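The effect of the negative initialization is easy to see in miniature: with w_s(rein, ·, 0) < 0 and w_s(lin, ·, 0) = 0, the greedy (ε = 0) selector of equation (2) picks the linear module in every state. A toy check (our own construction; the value −0.0075 is merely inside the range quoted above):

```python
import numpy as np

n_feat = 8
y = np.zeros(n_feat); y[2] = 1.0            # one active CMAC feature
w_s = {"lin": np.zeros(n_feat),             # initial strengths: zero for lin,
       "rein": np.full(n_feat, -0.0075)}    # slightly negative for rein

def q_s(module, y):
    # equation (2) for module selection, one module at a time
    return float(w_s[module] @ y)

# greedy (epsilon = 0) module selection
selected = max(("lin", "rein"), key=lambda s: q_s(s, y))
```

Since Q_s(x, rein) < Q_s(x, lin) everywhere at t = 0, `selected` is "lin": learning starts under the linear controller, exactly as described above.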
The trained system selected the linear control module on the flat floor (more than 80% of the time) and selected both modules on the slope (see Figure 6-b), when w_s(rein, m, i, 0) were negative. Three trials were required in the first learning step of the 2-step hybrid reinforcement learning: to learn the Q-value function around failure states, the learning system requires 3 trials.

5 Conclusion

We proposed the hybrid reinforcement learning, which learned the biped robot control quickly. The number of trials needed for successful termination in the 2-step hybrid reinforcement learning was so small that the hybrid system should be applicable to a real biped robot. Although control of the real biped robot was not learned in this study, it is expected to be learned quickly by the 2-step hybrid reinforcement learning. A learning system for real robot control should be easy to construct and quick to train with the hybrid reinforcement learning system.

References

[1] Barto, A. G., Sutton, R. S. and Anderson, C. W.: Neuronlike adaptive elements that can solve difficult learning control problems, IEEE Trans. Sys. Man Cybern., Vol. SMC-13, pp. 834-846 (1983).
[2] Tesauro, G.: TD-Gammon, a self-teaching backgammon program, achieves master-level play, Neural Computation, Vol. 6, pp. 215-219 (1994).
[3] Gullapalli, V., Franklin, J. A. and Benbrahim, H.: Acquiring robot skills via reinforcement learning, IEEE Control Systems, Vol. 14, No. 1, pp. 13-24 (1994).
[4] Miller, W. T.: Real-time neural network control of a biped walking robot, IEEE Control Systems, Vol. 14, pp. 41-48 (1994).
[5] Watanabe, A., Inoue, M. and Yamada, S.: Development of a stilts-type biped robot stabilized by inertial sensors (in Japanese), in Proceedings of the 14th Annual Conference of RSJ, pp. 195-196 (1996).
[6] Tham, C. K.: Reinforcement learning of multiple tasks using a hierarchical CMAC architecture, Robotics and Autonomous Systems, Vol. 15, pp. 247-274 (1995).
[7] Jordan, M. I. and Jacobs, R.
A.: Hierarchical mixtures of experts and the EM algorithm, Neural Computation, Vol. 6, pp. 181-214 (1994).
[8] Sutton, R. S.: Generalization in reinforcement learning: successful examples using sparse coarse coding, Advances in NIPS, Vol. 8, pp. 1038-1044 (1996).
[9] Albus, J. S.: A new approach to manipulator control: The cerebellar model articulation controller (CMAC), Trans. ASME J. Dynamic Systems, Measurement, and Control, pp. 220-227 (1975).
[10] Albus, J. S.: Data storage in the cerebellar model articulation controller (CMAC), Trans. ASME J. Dynamic Systems, Measurement, and Control, pp. 228-233 (1975).
[11] Singh, S. P. and Sutton, R. S.: Reinforcement learning with replacing eligibility traces, Machine Learning, Vol. 22, pp. 123-158 (1996).
1997
Synaptic Transmission: An Information-Theoretic Perspective
Amit Manwani and Christof Koch
Computation and Neural Systems Program, California Institute of Technology, Pasadena, CA 91125
email: quixote@klab.caltech.edu, koch@klab.caltech.edu

Abstract

Here we analyze synaptic transmission from an information-theoretic perspective. We derive closed-form expressions for lower bounds on the capacity of a simple model of a cortical synapse under two explicit coding paradigms. Under the "signal estimation" paradigm, we assume the signal to be encoded in the mean firing rate of a Poisson neuron; the performance of an optimal linear estimator of the signal then provides a lower bound on the capacity for signal estimation. Under the "signal detection" paradigm, the presence or absence of the signal has to be detected, and the performance of the optimal spike detector allows us to compute a lower bound on the capacity for signal detection. We find that single synapses (for empirically measured parameter values) transmit information poorly, but that significant improvement can be achieved with a small amount of redundancy.

1 Introduction

Tools from estimation and information theory have recently been applied by researchers (Bialek et al., 1991) to quantify how well neurons transmit information about their random inputs in their spike outputs. In these approaches, the neuron is treated like a black box, characterized empirically by a set of input-output records. This ignores the specific nature of neuronal processing in terms of its known biophysical properties. However, a systematic study of processing at the various stages of a biophysically faithful model of a single neuron should be able to identify the role of each stage in information transfer in terms of the parameters relating to the neuron's dendritic structure, its spiking mechanism, etc.
Employing this reductionist approach, we focus on an important component of neural processing, the synapse, and analyze a simple model of a cortical synapse under two different representational paradigms. Under the "signal estimation" paradigm, we assume that the input signal is linearly encoded in the mean firing rate of a Poisson neuron, and the mean-square error in the reconstruction of the signal from the post-synaptic voltage quantifies system performance. From the performance of the optimal linear estimator of the signal, a lower bound on the capacity for signal estimation can be computed. Under the "signal detection" paradigm, we assume that information is encoded in an all-or-none format, and the error in deciding whether or not a presynaptic spike occurred by observing the post-synaptic voltage quantifies system performance. This is similar to the conventional absent/present (Yes-No) decision paradigm used in psychophysics. The performance of the optimal spike detector in this case allows us to compute a lower bound on the capacity for signal detection.

Figure 1: Schematic block diagram for the signal detection and estimation tasks (stimulus, Poisson encoding, vesicle release, epsp amplitude and shape, optimal estimation/detection). The synapse is modeled as a binary channel followed by a filter h(t) = a t exp(−t/t_s), where a is a random variable with probability density P(a) = α(αa)^{k−1} exp(−αa)/(k−1)!. The binary channel (inset; ε₀ = Pr[spontaneous release], ε₁ = Pr[release failure]) models probabilistic vesicle release, and h(t) models the variable epsp size observed for cortical synapses. n(t) denotes additive post-synaptic voltage noise and is assumed to be Gaussian and white over a bandwidth B_n.
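The synaptic channel of Fig. 1 can be simulated directly. A sketch (our own code, not the authors'): ε₁ = 0.6, ε₀ = 0, CV_a = 0.6 and t_s = 0.5 msec are taken from the parameter values quoted later in the figure captions, while the gamma shape parameter k is our choice to approximately match CV_a:

```python
import numpy as np

rng = np.random.default_rng(1)
eps1, eps0 = 0.6, 0.0        # release failure / spontaneous release probabilities
cv_a = 0.6                   # CV of the epsp amplitude distribution
k = round(1 / cv_a ** 2)     # gamma shape: CV = 1/sqrt(k) -> k = 3, CV ~ 0.58
t_s = 0.5e-3                 # epsp time constant (s)

def synapse(spike: bool) -> float:
    """One use of the channel: returns the epsp peak amplitude (0 on failure)."""
    released = rng.random() < (1 - eps1) if spike else rng.random() < eps0
    if not released:
        return 0.0
    # gamma-distributed amplitude with mean 1 and CV close to cv_a
    return rng.gamma(k, 1.0 / k)

def epsp(t, a):
    """Filter output h(t) = a * t * exp(-t / t_s), peaking at t = t_s."""
    return a * t * np.exp(-t / t_s)

amps = np.array([synapse(True) for _ in range(20000)])
release_rate = np.mean(amps > 0)       # should be close to 1 - eps1 = 0.4
```

Running this, roughly 40% of presynaptic spikes produce a response, with unit mean amplitude among successes, matching the binary-channel-plus-random-gain description above.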
Performance of the optimal linear estimator (Wiener filter) and the optimal spike detector (matched filter) quantify synaptic efficacy for signal estimation and detection, respectively.

2 The Synaptic Channel

Synaptic transmission in cortical neurons is known to be highly random, though the role of this variability in neural computation and coding is still unclear. In central synapses, each synaptic bouton contains only a single active release zone, as opposed to the hundreds or thousands found at the much more reliable neuromuscular junction. Thus, in response to an action potential in the presynaptic terminal, at most one vesicle is released (Korn and Faber, 1991). Moreover, the probability of vesicle release p is known to be generally low (0.1 to 0.4) from in vitro studies in some vertebrate and invertebrate systems (Stevens, 1994). This unreliability is further compounded by the trial-to-trial variability in the amplitude of the post-synaptic response to a vesicular release (Bekkers et al., 1990). In some cases, the variance in the size of the EPSP is as large as the mean. The empirically measured distribution of amplitudes is usually skewed to the right (possibly biased due to the inability to measure very small events) and can be modeled by a Gamma distribution. In light of the above, we model the synapse as a binary channel cascaded with a random-amplitude filter (Fig. 1). The binary channel accounts for the probabilistic vesicle release: ε₀ and ε₁ denote the probabilities of spontaneous vesicle release and of release failure, respectively. We follow the binary channel convention used in digital communications (ε₁ = 1 − p), whereas p is more commonly used in neurobiology. The filter h(t) is chosen to correspond to the epsp profile of a fast AMPA-like synapse. The amplitude of the filter, a, is modeled as a random variable with density P(a), mean μ_a and standard deviation σ_a.
The CV (standard deviation/mean) of the distribution is denoted by CV_a. We also assume that additive Gaussian voltage noise n(t) at the post-synaptic site further corrupts the epsp response; n(t) is assumed to be white with variance σ_n² and a bandwidth B_n corresponding to the membrane time constant τ. One can define an effective signal-to-noise ratio, SNR = E_h/N_0, given by the ratio of the energy in the epsp pulse, E_h = ∫₀^∞ h²(t) dt, to the noise power spectral density, N_0 = σ_n²/B_n. The performance of the synapse depends on the SNR and not on the absolute values of E_h or σ_n. In the above model, by regarding synaptic parameters as constants, we have tacitly ignored history-dependent effects like paired-pulse facilitation, vesicle depletion, calcium buffering, etc., which endow the synapse with the nature of a sophisticated nonlinear filter (Markram and Tsodyks, 1996).

Figure 2: (a) Effective channel model for signal estimation. m(t), m̂(t), ñ(t) denote the stimulus, the best linear estimate, and the reconstruction noise respectively. (b) Effective channel model for signal detection. X and Y denote the binary variables corresponding to the input and the decision respectively. P_f and P_m are the effective error probabilities.

3 Signal Estimation

Let us assume that the spike train of the presynaptic neuron can be modeled as a doubly stochastic Poisson process with a rate λ(t) = k(t) * m(t), given as a convolution between the stimulus m(t) and a filter k(t). The stimulus is drawn from a probability distribution which we assume to be Gaussian. k(t) = exp(−t/τ) is a low-pass filter which models the phenomenological relationship between a neuron's firing rate and its input current; τ is chosen to correspond to the membrane time constant.
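The energy integral in the SNR definition has a closed form for the epsp kernel of Fig. 1: with unit amplitude a = 1, E_h = ∫₀^∞ t² e^{−2t/t_s} dt = t_s³/4. A quick numerical check (our own, using the t_s = 0.5 msec value quoted later in the figure captions):

```python
import numpy as np

t_s = 0.5e-3                       # epsp time constant (s)
t = np.linspace(0.0, 20 * t_s, 200001)
dt = t[1] - t[0]
h = t * np.exp(-t / t_s)           # epsp kernel h(t) with unit amplitude a = 1
# trapezoidal estimate of the pulse energy E_h = int_0^inf h(t)^2 dt
E_h = float(np.sum((h[:-1] ** 2 + h[1:] ** 2) / 2.0) * dt)
E_exact = t_s ** 3 / 4.0           # int_0^inf t^2 exp(-2t/t_s) dt = t_s^3 / 4
```

The numerical and analytical values agree to better than one part in ten thousand; truncating the integral at 20 t_s is harmless because the integrand decays as e^{−2t/t_s}.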
The exact form of k(t) is not crucial and the above form is assumed primarily for analytical tractability. The objective is to find the optimal estimator of m(t) from the post-synaptic voltage v(t), where optimality is in a least-mean-square sense. The optimal mean-square estimator is, in general, nonlinear and reduces to a linear filter only when all the signals and noises are Gaussian. However, instead of making this assumption, we restrict ourselves to the analysis of the optimal linear estimator, m̂(t) = g(t) * v(t), i.e. the filter g(t) which minimizes the mean-square error E = ⟨(m(t) − m̂(t))²⟩, where ⟨·⟩ denotes an ensemble average. The overall estimation system shown in Fig. 1 can be characterized by an effective continuous channel (Fig. 2a), where ñ(t) = m(t) − m̂(t) denotes the effective reconstruction noise. System performance can be quantified by E: the lower E, the better the synapse at signal transmission. The expression for the optimal filter (Wiener filter) in the frequency domain is g(ω) = S_mv(−ω)/S_vv(ω), where S_mv(ω) is the cross-spectral density (Fourier transform of the cross-correlation R_mv) of m(t) and v(t), and S_vv(ω) is the power spectral density of v(t). The minimum mean-square error is given by E = σ_m² − ∫_S |S_mv(ω)|²/S_vv(ω) dω. The set S = {ω | S_vv(ω) ≠ 0} is called the support of S_vv(ω).

Another measure of system performance is the mutual information rate I(m; v) between m(t) and v(t), defined as the rate of information transmitted by v(t) about m(t). By the Data Processing inequality (Cover and Thomas, 1991), I(m; v) ≥ I(m; m̂). A lower bound on I(m; m̂), and thus on I(m; v), is given by the simple expression I_lb = ½ ∫_S log₂[S_mm(ω)/S_ññ(ω)] dω (in units of bits/sec). The lower bound is achieved when ñ(t) is Gaussian and independent of m(t). Since the spike train s(t) = Σ δ(t − t_i) is a Poisson process with rate k(t) * m(t), its power spectrum is given by S_ss(ω) = λ̄ + |K(ω)|² S_mm(ω), where λ̄ is the mean firing rate.
We assume that the mean (μ_m) and variance (σ_m²) of m(t) are chosen such that the probability that λ(t) < 0 is negligible.¹ The vesicle release process is the spike train gated by the binary channel, and so it is also a Poisson process, with rate (1 − ε₁)λ(t). Since v(t) = Σ a_i h(t − t_i) + n(t) is a filtered Poisson process, its power spectral density is given by S_vv(ω) = |H(ω)|² {(μ_a² + σ_a²)(1 − ε₁)λ̄ + μ_a²(1 − ε₁)² |K(ω)|² S_mm(ω)} + S_nn(ω). The cross-spectral density is given by the expression S_vm(ω) = (1 − ε₁) μ_a S_mm(ω) H(ω) K(ω). This allows us to write the mean-square error in terms of an effective noise power spectral density for ñ(t), S_ññ(ω) = λ_eff(ω) + S_eff(ω). Notice that if K(ω) → ∞, E → 0, i.e. perfect reconstruction takes place in the limit of high firing rates. For the parameter values chosen, S_eff(ω) ≪ λ_eff(ω) and can be ignored. Consequently, signal estimation is shot-noise limited, and synaptic variability increases the shot noise by a factor N_syn = (1 + CV_a²)/(1 − ε₁). For CV_a = 0.6 and ε₁ = 0.6, N_syn = 3.4, and for CV_a = 1 and ε₁ = 0.6, N_syn = 5. If m(t) is chosen to be white, band-limited to B_m Hz, closed-form expressions for E and I_lb can be obtained. The expression for I_lb is tedious and provides little insight, so we present only the expression for E below:

E(γ, B_T) = σ_m² [ 1 − (γ / (√(1+γ) B_T)) tan⁻¹( B_T / √(1+γ) ) ]

E is a monotonic function of γ (decreasing) and of B_T (increasing). γ can be considered the effective number of spikes available per unit signal bandwidth, and B_T is the ratio of the signal bandwidth to the neuron bandwidth. Plots of the normalized reconstruction error E_r = E/σ_m² and of I_lb versus the mean firing rate (λ̄) for different values of the signal bandwidth B_m are shown in Fig. 3a and Fig. 3b respectively. Observe that I_lb (bits/sec) is insensitive to B_m for firing rates up to 200 Hz, because the decrease in the quality of estimation (E increases with B_m) is compensated by an increase in the number of independent samples (2B_m) available per second.
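Taking the closed-form error to be E(γ, B_T) = σ_m²[1 − (γ/(√(1+γ)·B_T)) tan⁻¹(B_T/√(1+γ))], its stated monotonicity (decreasing in γ, increasing in B_T) and its limiting behavior can be checked numerically (our own sketch):

```python
import math

def E_norm(gamma: float, B_T: float) -> float:
    """Normalized reconstruction error E / sigma_m^2 for white band-limited m(t)."""
    s = math.sqrt(1.0 + gamma)
    return 1.0 - (gamma / (s * B_T)) * math.atan(B_T / s)

# E is monotonically decreasing in gamma (more spikes per unit bandwidth) ...
assert E_norm(1.0, 0.5) > E_norm(10.0, 0.5) > E_norm(100.0, 0.5)
# ... and increasing in B_T (signal bandwidth relative to neuron bandwidth)
assert E_norm(10.0, 0.1) < E_norm(10.0, 1.0) < E_norm(10.0, 10.0)
# limits: no spikes -> no reconstruction; many spikes -> perfect reconstruction
assert abs(E_norm(0.0, 1.0) - 1.0) < 1e-12
assert E_norm(1e6, 1.0) < 1e-2
```

At γ = 0 the error equals the full stimulus variance, and as γ → ∞ it tends to zero, consistent with the remark that perfect reconstruction occurs in the limit of high firing rates.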
This phenomenon is characteristic of systems operating in the low-SNR regime: I_lb has the generic form I_lb = B log(1 + S/(N B)), where B, S and N denote signal bandwidth, signal power and noise power respectively. For low SNR, I ≈ B · S/(N B) = S/N, which is independent of B. So one can argue that, for our choice of parameters, a single synapse is a low-SNR system. The analysis generalizes easily to the case of multiple synapses all driven by the same signal m(t) (Manwani and Koch, in preparation). However, instead of presenting the rigorous analysis, we appeal to the intuition gained from the single-synapse case. Since a single synapse can be regarded as a shot-noise source, n parallel synapses can be treated as n parallel noise sources. Let us make the plausible assumption that these noises are uncorrelated. If optimal estimation is carried out separately for each synapse and the estimates are combined optimally, the effective noise variance is given by the harmonic mean of the individual variances, i.e. 1/σ_ñ,eff² = Σ_i 1/σ_ñ,i². However, if the noises are added first and optimal estimation is carried out with respect to the sum, the effective noise variance is given by the arithmetic mean of the individual variances, i.e. σ_ñ,eff² = Σ_i σ_ñ,i²/n². If we assume that all synapses are similar, so that σ_ñ,i² = σ², then σ_ñ,eff² = σ²/n. Plots of E_r and I_lb for the case of 5 identical synapses are shown in Fig. 3c and Fig. 3d respectively. Notice that I_lb now increases with B_m, suggesting that the system is no longer in the low-SNR regime. Thus, though a single synapse has very low capacity, a small amount of redundancy causes a considerable increase in performance. This is consistent with the fact that in the low-SNR regime I increases linearly with SNR, and consequently linearly with n, the number of synapses.

¹We choose μ_m and σ_m so that λ̄ = 3σ_λ (σ_λ being the standard deviation of λ), so that Prob[λ(t) ≤ 0] < 0.01.
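The two ways of combining n synapses described above coincide for identical synapses and differ otherwise; a small sketch (our own function names):

```python
def combined_var_separate(variances):
    """Per-synapse optimal estimation, estimates combined optimally:
    1 / var_eff = sum_i (1 / var_i)."""
    return 1.0 / sum(1.0 / v for v in variances)

def combined_var_summed(variances):
    """Voltages summed first, then one optimal estimation:
    var_eff = sum_i var_i / n^2."""
    n = len(variances)
    return sum(variances) / n ** 2

# for identical synapses both reduce to sigma^2 / n ...
assert combined_var_separate([2.0, 2.0]) == combined_var_summed([2.0, 2.0]) == 1.0
# ... while for unequal variances separate estimation gives less effective noise
assert combined_var_separate([1.0, 4.0]) < combined_var_summed([1.0, 4.0])
```

Either way, the effective noise variance falls like 1/n for similar synapses, which is the source of the redundancy gain discussed above.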
Figure 3: E_r and I_lb vs. mean firing rate (λ̄) for n = 1 [(a) and (b)] and n = 5 [(c) and (d)] identical synapses, for signal estimation, with B_m = 10, 25, 50, 75 and 100 Hz. Parameter values are ε₁ = 0.6, ε₀ = 0, CV_a = 0.6, t_s = 0.5 msec, τ = 10 msec, σ_n = 0.1 mV, B_n = 100 Hz.

4 Signal Detection

The goal in signal detection is to decide which member of a finite set of signals was generated by a source, on the basis of measurements related to the output only in a statistical sense. Our example corresponds to the simplest case, that of binary detection. The objective is to derive an optimal spike detector based on the post-synaptic voltage in a given time interval; the criterion of optimality is minimum probability of error (P_e). A false alarm (FA) error occurs when a spike is falsely detected even though no presynaptic spike occurred, and a miss (M) error occurs when a spike fails to be detected. The probabilities of these errors are denoted by P_f and P_m respectively. Thus P_e = (1 − p_0) P_f + p_0 P_m, where p_0 denotes the a priori probability of a spike occurrence. Let X and Y be binary variables denoting spike occurrence and the decision, respectively: X = 1 if a spike occurred, else X = 0; similarly, Y = 1 expresses the decision that a spike occurred. The posterior likelihood ratio is defined as L(v) = Pr(v | X = 1)/Pr(v | X = 0) and the prior likelihood ratio as L_0 = (1 − p_0)/p_0.
The optimal spike detector employs the well-known likelihood ratio test: if L(v) ≥ L_0, then Y = 1, else Y = 0. When X = 1, v(t) = a h(t) + n(t); otherwise v(t) = n(t). Since a is a random variable, L(v) = (∫ Pr(v | X = 1; a) P(a) da)/Pr(v | X = 0). If the noise n(t) is Gaussian and white, it can be shown that the optimal decision rule reduces to a matched filter²: if the correlation r between v(t) and h(t) exceeds a particular threshold (denoted by η), Y = 1, else Y = 0. The overall decision system shown in Fig. 1 can be treated as an effective binary channel (Fig. 2b). System performance can be quantified either by P_e or by I(X; Y), the mutual information between the binary random variables X and Y. Note that even when n(t) = 0 (SNR = ∞), P_e ≠ 0, due to the unreliability of vesicular release. Let P_e* denote the probability of error when SNR = ∞. If ε₀ = 0, P_e* = p_0 ε₁ is the minimum possible detection error. Let P_f⁰ and P_m⁰ denote the FA and M errors when release is ideal (ε₁ = 0, ε₀ = 0). It can be shown that

P_e = P_e* + P_m⁰ [p_0(1 − ε₁) − (1 − p_0)ε₀] + P_f⁰ [(1 − p_0)(1 − ε₀) − p_0 ε₁],
P_f = P_f⁰,   P_m = P_m⁰ + ε₁(1 − P_m⁰ − P_f⁰).

Both P_f⁰ and P_m⁰ depend on η. The optimal value of η is chosen such that P_e is minimized. In general, P_f⁰ and P_m⁰ cannot be expressed in closed form, and the optimal η is found using the graphical ROC analysis procedure. If we normalize a such that μ_a = 1, P_f⁰ and P_m⁰ can be parametrically expressed in terms of a normalized threshold η*: P_f⁰ = 0.5[1 − Erf(η*)], P_m⁰ = 0.5[1 + ∫₀^∞ Erf(η* − √(SNR) a) P(a) da]. I(X; Y) can be computed using the formula for the mutual information of a binary channel, I = H(p_0(1 − P_m) + (1 − p_0)P_f) − p_0 H(P_m) − (1 − p_0) H(P_f), where H(x) = −x log₂(x) − (1 − x) log₂(1 − x) is the binary entropy function. The analysis can be generalized to the case of n synapses, but the expressions involve n-dimensional integrals which need to be evaluated numerically. The Central Limit Theorem can be used to simplify the case of very large n.
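The error combination and the binary-channel mutual information above can be evaluated directly. A sketch (our own function names), taking ε₀ = 0 as in the figures and treating the ideal-detector error rates P_f⁰, P_m⁰ as free inputs; the values below are merely illustrative:

```python
import math

def detection_errors(p0, eps1, Pf0, Pm0):
    """Fold unreliable vesicle release (eps0 = 0) into the ideal error rates."""
    Pf = Pf0
    Pm = Pm0 + eps1 * (1.0 - Pm0 - Pf0)   # a failed release looks like pure noise
    Pe = (1.0 - p0) * Pf + p0 * Pm
    return Pf, Pm, Pe

def H(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def mutual_info(p0, Pf, Pm):
    py1 = p0 * (1 - Pm) + (1 - p0) * Pf    # Pr[Y = 1]
    return H(py1) - p0 * H(Pm) - (1 - p0) * H(Pf)

# even with a perfect detector (Pf0 = Pm0 = 0), release failures alone leave
# the residual error Pe* = p0 * eps1
Pf, Pm, Pe = detection_errors(p0=0.5, eps1=0.6, Pf0=0.0, Pm0=0.0)
I = mutual_info(0.5, Pf, Pm)
```

With p_0 = 0.5 and ε₁ = 0.6, P_e comes out as 0.3 = p_0 ε₁, illustrating why P_e stays bounded away from zero even at SNR = ∞.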
Plots of P_e and I(X; Y) versus n for different values of SNR (1, 10, ∞) for the case of identical synapses are shown in Fig. 4a and Fig. 4b respectively. Yet again, we observe the poor performance of a single synapse and the substantial improvement due to redundancy. The linear increase of I with n is similar to the result obtained for signal estimation.

²For deterministic a, the result is well known; even if a is a one-sided random variable, the matched filter can be shown to be optimal.

Figure 4: P_e (a) and I(X; Y) (b) vs. the number of synapses, n, for different values of SNR, for signal detection. SNR = ∞ corresponds to no post-synaptic voltage noise. All synapses are assumed to be identical. Parameter values are p_0 = 0.5, ε₁ = 0.6, ε₀ = 0, CV_a = 0.6, t_s = 0.5 msec, τ = 10 msec, σ_n = 0.1 mV, B_n = 100 Hz.

5 Conclusions

We find that a single synapse is rather ineffective as a communication device, but with a little redundancy neuronal communication can be made much more robust. In fact, a single synapse can be considered a low-SNR device, while 5 independent synapses in parallel approach a high-SNR system. This is consistently echoed in the results for signal estimation and signal detection. The information rates we obtain are very small compared to numbers obtained from some peripheral sensory neurons (Rieke et al., 1996). This could be due to an over-conservative choice of parameter values on our part, or could argue for the preponderance of redundancy in neural systems. What we have presented above are preliminary results of work in progress, and so the path ahead is much
longer than the distance we have covered so far. To the best of our knowledge, an analysis of distinct individual components of a neuron from a communications standpoint has not been carried out before.

Acknowledgements

This research was supported by NSF, NIMH and the Sloan Center for Theoretical Neuroscience. We thank Fabrizio Gabbiani for illuminating discussions.

References

Bekkers, J.M., Richerson, G.B. and Stevens, C.F. (1990) "Origin of variability in quantal size in cultured hippocampal neurons and hippocampal slices," Proc. Natl. Acad. Sci. USA 87: 5359-5362.
Bialek, W., Rieke, F., van Steveninck, R.D.R. and Warland, D. (1991) "Reading a neural code," Science 252: 1854-1857.
Cover, T.M. and Thomas, J.A. (1991) Elements of Information Theory. New York: Wiley.
Korn, H. and Faber, D.S. (1991) "Quantal analysis and synaptic efficacy in the CNS," Trends Neurosci. 14: 439-445.
Markram, H. and Tsodyks, T. (1996) "Redistribution of synaptic efficacy between neocortical pyramidal neurons," Nature 382: 807-810.
Rieke, F., Warland, D., van Steveninck, R.D.R. and Bialek, W. (1996) Spikes: Exploring the Neural Code. Cambridge: MIT Press.
Stevens, C.F. (1994) "What form should a cortical theory take?" In: Large-Scale Neuronal Theories of the Brain, Koch, C. and Davis, J.L., eds., pp. 239-256. Cambridge: MIT Press.
1997
Just One View: Invariances in Inferotemporal Cell Tuning
Maximilian Riesenhuber and Tomaso Poggio
Center for Biological and Computational Learning and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, E25-201, Cambridge, MA 02139
{max,tp}@ai.mit.edu

Abstract

In macaque inferotemporal cortex (IT), neurons have been found to respond selectively to complex shapes while showing broad tuning ("invariance") with respect to stimulus transformations such as translation and scale changes, and a limited tuning to rotation in depth. By training monkeys with novel, paperclip-like objects, Logothetis et al.9 could investigate whether these invariance properties are due to experience with exhaustively many transformed instances of an object, or whether there are mechanisms that allow the cells to show response invariance also to previously unseen instances of that object. They found object-selective cells in anterior IT which exhibited limited invariance to various transformations after training with single object views. While previous models accounted for the tuning of the cells for rotations in depth and for their selectivity to a specific object relative to a population of distractor objects,14,1 the model described here attempts to explain in a biologically plausible way the additional properties of translation and size invariance. Using the same stimuli as in the experiment, we find that model IT neurons exhibit invariance properties which closely parallel those of real neurons. Simulations show that the model is capable of unsupervised learning of view-tuned neurons. We thank Peter Dayan, Marcus Dill, Shimon Edelman, Nikos Logothetis, Jonathan Mumick and Randy O'Reilly for useful discussions and comments.
1 Introduction

Neurons in macaque inferotemporal cortex (IT) have been shown to respond to views of complex objects,8 such as faces or body parts, even when the retinal image undergoes size changes over several octaves, is translated by several degrees of visual angle7 or is rotated in depth by a certain amount9 (see [13] for a review). These findings have prompted researchers to investigate the physiological mechanisms underlying these tuning properties. The original model14 that led to the physiological experiments of Logothetis et al.9 explains the behavioral view invariance for rotation in depth through the learning and memory of a few example views, each represented by a neuron tuned to that view. Invariant recognition under translation and scale transformations has been explained either as a result of object-specific learning4 or as a result of a normalization procedure ("shifter") that is applied to any image and hence requires only one object view for recognition.12 A problem with previous experiments has been that they did not illuminate the mechanism underlying invariance, since they employed objects (e.g., faces) with which the monkey was quite familiar, having seen them numerous times under various transformations. Recent experiments by Logothetis et al.9 addressed this question by training monkeys to recognize novel objects ("paperclips" and amoeba-like objects) with which the monkey had no previous visual experience. After training, responses of IT cells to transformed versions of the training stimuli and to distractors of the same type were collected. Since the views the monkeys were exposed to during training were tightly controlled, the paradigm allowed the authors to estimate the degree of invariance that can be extracted from just one object view. In particular, Logothetis et al.9 tested the cells' responses to rotations in depth, translation and size changes.
Defining "invariance" as yielding a higher response to test views than to distractor objects, they report9,10 an average rotation invariance over 30°, translation invariance over ±2°, and size invariance of up to ±1 octave around the training view. These results establish that there are cells showing some degree of invariance even after training with just one object view, thereby arguing against a completely learning-dependent mechanism that requires visual experience with each transformed instance that is to be recognized. On the other hand, invariance is far from perfect but rather centered around the object views seen during training.

2 The Model

Studies of the visual areas in the ventral stream of the macaque visual system8 show a tendency for cells higher up in the pathway (from V1 over V2 and V4 to anterior and posterior IT) to respond to increasingly complex objects and to show increasing invariance to transformations such as translations, size changes or rotation in depth.13 We tried to construct a model that explains the receptive field properties found in the experiment based on a simple feedforward architecture. Figure 1 shows a cartoon of the model: a retinal input pattern leads to excitation of a set of "V1" cells, abstracted in the figure as having derivative-of-Gaussian receptive field profiles. These "V1" cells are tuned to simple features and have relatively small receptive fields. While they could be cells from a variety of areas, e.g., V1 or V2 (cf. Discussion), for simplicity we label them as "V1" cells. Different cells differ in preferred feature, e.g., orientation, in preferred spatial frequency (scale), and in receptive field location. "V1" cells of the same type (i.e., having the same preferred stimulus but different preferred scales and receptive field locations) feed into the same neuron in an intermediate layer. These intermediate neurons could be complex cells in V1 or V2 or V4 or even posterior IT; we label them as "V4" cells, in the
These intermediate neurons could be complex cells in V1 or V2 or V4 or even posterior IT; we label them as "V4" cells, in the same spirit in which we labeled the neurons feeding into them as "V1" units. Thus, a "V4" cell receives inputs from "V1" cells over a large area and different spatial scales ([8] reports an average receptive field size in V4 of 4.4° of visual angle, as opposed to about 1° in V1; for spatial frequency tuning, [3] report an average FWHM of 2.2 octaves, compared to 1.4 (foveally) to 1.8 octaves (parafoveally) in V1 [5]). These "V4" cells in turn feed into a layer of "IT" neurons, whose invariance properties are to be compared with the experimentally observed ones. Figure 1: Cartoon of the model. See text for explanation. A crucial element of the model is the mechanism an intermediate neuron uses to pool the activities of its afferents. From the computational point of view, the intermediate neurons should be robust feature detectors, i.e., measure the presence of specific features without being confused by clutter and context in the receptive field. More detailed considerations (Riesenhuber and Poggio, in preparation) show that this cannot be achieved with a response function that just summates over all the afferents (cf. Results). Instead, intermediate neurons in our model perform a "max" operation (akin to a "Winner-Take-All") over all their afferents, i.e., the response of an intermediate neuron is determined by its most strongly excited afferent. This hypothesis appears to be compatible with recent data [15], which show that when two stimuli (gratings of different contrast and orientation) are brought into the receptive field of a V4 cell, the cell's response tends to be close to the stronger of the two individual responses (instead of, e.g., the sum as in a linear model). Thus, the response function o_i of an intermediate neuron i to stimulation with an image v is

o_i = max_{j ∈ A_i} (v_{a(j)} · ξ_j),   (1)

with A_i the set of afferents to neuron i, a(j) the receptive field center of afferent j, v_{a(j)} the (square-normalized) image patch centered at a(j) that corresponds in size to the receptive field ξ_j (also square-normalized) of afferent j, and "·" the dot product operation. Studies have shown that V4 neurons respond to features of "intermediate" complexity such as gratings, corners and crosses [8]. In V4 the receptive fields are comparatively large (4.4° of visual angle on average [8]), while the preferred stimuli are usually much smaller [3]. Interestingly, cells respond independently of the location of the stimulus within the receptive field. Moreover, average V4 receptive field size is comparable to the range of translation invariance of IT cells (≤ ±2°) observed in the experiment [9]. For afferent receptive fields ξ_j, we chose features similar to the ones found for V4 cells in the visual system [8]: bars (modeled as second derivatives of Gaussians) in two orientations, and "corners" of four different orientations and two different degrees of obtuseness. This yielded a total of 10 intermediate neurons. This set of features was chosen to give a compact and biologically plausible representation. Each intermediate cell received input from cells with the same type of preferred stimulus densely covering the visual field of 256 x 256 pixels (which thus would correspond to about 4.4° of visual angle, the average receptive field size in V4 [8]), with receptive field sizes of afferent cells ranging from 7 to 19 pixels in steps of 2 pixels. The features used in this paper represent the first set of features tried; optimizing feature shapes might further improve the model's performance.
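The pooling of Eq. 1 can be sketched directly. The NumPy fragment below (the function name and the dense scan over positions are our own illustration, not the paper's implementation) also makes the point of the "max" operation concrete: because each unit reports only its best-matching afferent, the response is unchanged when the preferred feature appears at a different position in the pooling range.

```python
import numpy as np

def intermediate_response(image, filters, rf_size):
    """Illustrative sketch of Eq. 1: each intermediate "V4" unit takes the
    MAX, over all receptive field positions, of the dot product between the
    square-normalized image patch and its square-normalized feature xi_j."""
    h, w = image.shape
    out = []
    for f in filters:
        k = np.asarray(f, float).ravel()
        k = k / np.linalg.norm(k)            # square-normalize the feature
        best = 0.0
        for r in range(h - rf_size + 1):
            for c in range(w - rf_size + 1):
                patch = image[r:r + rf_size, c:c + rf_size].astype(float).ravel()
                n = np.linalg.norm(patch)
                if n > 0:
                    # square-normalized patch dotted with the feature
                    best = max(best, float(patch @ k) / n)
        out.append(best)
    return np.array(out)
```

By Cauchy-Schwarz the response is at most 1 and reaches 1 exactly where the patch matches the feature, wherever that happens within the pooled region.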
The response t_j of top layer neuron j with connecting weights w_j to the intermediate layer was set to be a Gaussian centered on w_j,

t_j = (1 / (2πσ²)) exp(−‖o − w_j‖² / (2σ²)),   (2)

where o is the excitation of the intermediate layer and σ² the variance of the Gaussian, which was chosen based on the distribution of responses (for section 3.1) or learned (for section 3.2). The stimulus images were views of 21 randomly generated "paperclips" of the type used in the physiology experiment [9]. Distractors were 60 other paperclip images generated by the same method. Training size was 128 x 128 pixels. 3 Results 3.1 Invariance of Representation In a first set of simulations we investigated whether the proposed model could indeed account for the observed invariance properties. Here we assumed that connection strengths from the intermediate layer cells to the top layer had already been learned by a separate process, allowing us to focus on the tolerance of the representation to the above-mentioned transformations and on the selectivity of the top layer cells. To establish the tuning properties of view-tuned model neurons, the connections w_j between the intermediate layer and top layer unit j were set to be equal to the excitation o_training in the intermediate layer caused by the training view. Figure 2 shows the "tuning curve" for rotation in depth and Fig. 3 the response to changes in stimulus size of one such neuron. The neuron shows rotation invariance (i.e., producing a higher response than to any distractor) over about 44° and invariance to scale changes over the whole range tested. For translation (not shown), the neuron showed invariance over translations of ±96 pixels around the center in any direction, corresponding to ±1.7° of visual angle. Figure 2: Responses of a sample top layer neuron to different views of the training stimulus and to distractors. The left plot shows the rotation tuning curve, with the training view (90° view) shown in the middle image over the plot. The neighboring images show the views of the paperclip at the borders of the rotation tuning curve, which are located where the response to the rotated clip falls below the response to the best distractor (shown in the plot on the right). The neuron exhibits broad rotation tuning over more than 40°. The average invariance ranges for the 21 tested paperclips were 35° of rotation angle, 2.9 octaves of scale invariance and ±1.8° of translation invariance. Comparing this to the experimentally observed [10] 30°, 2 octaves and ±2°, respectively, shows a very good agreement of the invariance properties of model and experimental neurons. 3.2 Learning In the previous section we assumed that the connections from the intermediate layer to a view-tuned neuron in the top layer were pre-set to appropriate values. In this section, we investigate whether the system allows unsupervised learning of view-tuned neurons. Since biological plausibility of the learning algorithm was not our primary focus here, we chose a general, rather abstract learning algorithm, viz. a mixture of Gaussians model trained with the EM algorithm. Our model had four neurons in the top level; the stimuli were views of four paperclips, randomly selected from the 21 paperclips used in the previous experiments. For each clip, the stimulus set contained views from 17 different viewpoints, spanning 34° of viewpoint change. Also, each clip was included at 11 different scales in the stimulus set, covering a range of two octaves of scale change. Connections w_i and variances σ_i, i = 1, ..., 4, were initialized to random values at the beginning of training.
After a few iterations of the EM algorithm (usually less than 30), a stationary state was reached, in which each model neuron had become tuned to views of one paperclip: For each paperclip, all rotated and scaled views were mapped to (i.e., activated most strongly) the same model neuron, and views of different paperclips were mapped to different neurons. Hence, when the system is presented with multiple views of different objects, receptive fields of top level neurons self-organize in such a way that different neurons become tuned to different objects. Figure 3: Responses of the same top layer neuron as in Fig. 2 to scale changes of the training stimulus and to distractors. The left plot shows the size tuning curve, with the training size (128 x 128 pixels) shown in the middle image over the plot. The neighboring images show scaled versions of the paperclip. Other elements as in Fig. 2. The neuron exhibits scale invariance over more than 2 octaves. 4 Discussion Object recognition is a difficult problem because objects must be recognized irrespective of position, size, viewpoint and illumination. Computational models and engineering implementations have shown that most of the required invariances can be obtained by a relatively simple learning scheme, based on a small set of example views [14, 17]. Quite sensibly, the visual system can also achieve some significant degree of scale and translation invariance from just one view. Our simulations show that the maximum response function is a key component in the performance of the model. Without it, i.e., implementing a direct convolution of the filters with the input images and a subsequent summation, invariance to rotation in depth and translation both decrease significantly.
Most dramatically, however, invariance to scale changes is abolished completely, due to the strong changes in afferent cell activity with changing stimulus size. Taking the maximum over the afferents, as in our model, always picks the best matching filter and hence produces a more stable response. We expect a maximum mechanism to be essential for recognition-in-context, a more difficult task and much more common than the recognition of isolated objects studied here and in the related psychophysical and physiological experiments. The recognition of a specific paperclip object is a difficult, subordinate-level classification task. It is interesting that our model solves it well and with a performance closely resembling the physiological data on the same task. The model is a more biologically plausible and complete model than previous ones [14, 1], but it is still at the level of a plausibility proof rather than a detailed physiological model. It suggests a maximum-like response of intermediate cells as a key mechanism for explaining the properties of view-tuned IT cells, in addition to view-based representations (already described in [1, 9]). Neurons in the intermediate layer currently use a very simple set of features. While this appears to be adequate for the class of paperclip objects, more complex filters might be necessary for more complex stimulus classes like faces. Consequently, future work will aim to improve the filtering step of the model and to test it on more real-world stimuli. One can imagine a hierarchy of cell layers, similar to the "S" and "C" layers in Fukushima's Neocognitron [6], in which progressively more complex features are synthesized from simple ones. The corner detectors in our model are likely candidates for such a scheme. We are currently investigating the feasibility of such a hierarchy of feature detectors.
The demonstration that unsupervised learning of view-tuned neurons is possible in this representation (which is not clear for related view-based models [14, 1]) shows that different views of one object tend to form distinct clusters in the response space of intermediate neurons. The current learning algorithm, however, is not very plausible, and more realistic learning schemes have to be explored, as, for instance, in the attention-based model of Riesenhuber and Dayan [16], which incorporated a learning mechanism using bottom-up and top-down pathways. Combining the two approaches could also demonstrate how invariance over a wide range of transformations can be learned from several example views, as in the case of familiar stimuli. We also plan to simulate detailed physiological implementations of several aspects of the model, such as the maximum operation (for instance comparing nonlinear dendritic interactions [11] with recurrent excitation and inhibition). As it is, the model can already be tested in additional physiological experiments, for instance involving partial occlusions. References [1] Bricolo, E, Poggio, T & Logothetis, N (1997). 3D object recognition: A model of view-tuned neurons. In Advances in Neural Information Processing 9, 41-47. MIT Press. [2] Bülthoff, H & Edelman, S (1992). Psychophysical support for a two-dimensional view interpolation theory of object recognition. Proc. Nat. Acad. Sci. USA 89, 60-64. [3] Desimone, R & Schein, S (1987). Visual properties of neurons in area V4 of the macaque: Sensitivity to stimulus form. J. Neurophys. 57, 835-868. [4] Foldiak, P (1991). Learning invariance from transformation sequences. Neural Computation 3, 194-200. [5] Foster, KH, Gaska, JP, Nagler, M & Pollen, DA (1985). Spatial and temporal selectivity of neurones in visual cortical areas V1 and V2 of the macaque monkey. J. Physiol. 365, 331-363. [6] Fukushima, K (1980).
Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics 36, 193-202. [7] Ito, M, Tamura, H, Fujita, I & Tanaka, K (1995). Size and position invariance of neuronal responses in monkey inferotemporal cortex. J. Neurophys. 73, 218-226. [8] Kobatake, E & Tanaka, K (1995). Neuronal selectivities to complex object features in the ventral visual pathway of the macaque cerebral cortex. J. Neurophys. 71, 856-867. [9] Logothetis, NK, Pauls, J & Poggio, T (1995). Shape representation in the inferior temporal cortex of monkeys. Current Biology 5, 552-563. [10] Nikos Logothetis, personal communication. [11] Mel, BW, Ruderman, DL & Archie, KA (1997). Translation-invariant orientation tuning in visual 'complex' cells could derive from intradendritic computations. Manuscript in preparation. [12] Olshausen, BA, Anderson, CH & Van Essen, DC (1993). A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J. Neurosci. 13, 4700-4719. [13] Perret, D & Oram, M (1993). Neurophysiology of shape processing. Image Vision Comput. 11, 317-333. [14] Poggio, T & Edelman, S (1990). A network that learns to recognize 3D objects. Nature 343, 263-266. [15] Reynolds, JH & Desimone, R (1997). Attention and contrast have similar effects on competitive interactions in macaque area V4. Soc. Neurosci. Abstr. 23, 302. [16] Riesenhuber, M & Dayan, P (1997). Neural models for part-whole hierarchies. In Advances in Neural Information Processing 9, 17-23. MIT Press. [17] Ullman, S (1996). High-level vision: Object recognition and visual cognition. MIT Press.
1997
Agnostic Classification of Markovian Sequences Ran El-Yaniv Shai Fine Naftali Tishby* Institute of Computer Science and Center for Neural Computation The Hebrew University Jerusalem 91904, Israel E-mail: {ranni,fshai,tishby}@cs.huji.ac.il (*Corresponding author.) Category: Algorithms. Abstract Classification of finite sequences without explicit knowledge of their statistical nature is a fundamental problem with many important applications. We propose a new information theoretic approach to this problem which is based on the following ingredients: (i) sequences are similar when they are likely to be generated by the same source; (ii) cross entropies can be estimated via "universal compression"; (iii) Markovian sequences can be asymptotically-optimally merged. With these ingredients we design a method for the classification of discrete sequences whenever they can be compressed. We introduce the method and illustrate its application for hierarchical clustering of languages and for estimating similarities of protein sequences. 1 Introduction While the relationship between compression (minimal description) and supervised learning is by now well established, no such connection is generally accepted for the unsupervised case. Unsupervised classification is still largely based on ad-hoc distance measures with often no explicit statistical justification. This is particularly true for unsupervised classification of sequences of discrete symbols, which is encountered in numerous important applications in machine learning and data mining, such as text categorization, biological sequence modeling, and analysis of spike trains. The emergence of "universal" (i.e. asymptotically distribution independent) sequence compression techniques suggests the existence of "universal" classification methods that make minimal assumptions about the statistical nature of the data.
Such techniques are potentially more robust and appropriate for real world applications. In this paper we introduce a specific method that utilizes the connection between universal compression and unsupervised classification of sequences. Our only underlying assumption is that the sequences can be approximated (in the information theoretic sense) by some finite order Markov sources. There are three ingredients to our approach. The first is the assertion that two sequences are statistically similar if they are likely to be independently generated by the same source. The second is that this likelihood can be estimated, given a typical sequence of the most likely joint source, using any good compression method for the sequence samples. The third ingredient is a novel and simple randomized sequence merging algorithm which provably generates a typical sequence of the most likely joint source of the sequences, under the above Markovian approximation assumption. Our similarity measure is also motivated by the known "two sample problem" [Leh59] of estimating the probability that two given samples are taken from the same distribution. In the i.i.d. (Bernoulli) case this problem was thoroughly investigated and the optimal statistical test is given by the sum of the empirical cross entropies between the two samples and their most likely joint source. We argue that this measure can be extended for arbitrary order Markov sources and use it to construct and sample the most likely joint source. The similarity measure and the statistical merging algorithm can be naturally combined into classification algorithms for sequences. Here we apply the method to hierarchical clustering of short text segments in 18 European languages and to evaluation of similarities of protein sequences. A complete analysis of the method, with further applications, will be presented elsewhere [EFT97].
2 Measuring the statistical similarity of sequences Estimating the statistical similarity of two individual sequences is traditionally done by training a statistical model for each sequence and then measuring the likelihood of the other sequence by the model. Training a model entails an assumption about the nature of the noise in the data, and this is the rationale behind most "edit distance" measures, even when the noise model is not explicitly stated. Estimating the log-likelihood of a sequence-sample over a discrete alphabet Σ by a statistical model can be done through the Cross Entropy or Kullback-Leibler Divergence [CT91] between the sample empirical distribution p and model distribution q, defined as:

D_KL(p‖q) = Σ_{σ ∈ Σ} p(σ) log (p(σ) / q(σ)).   (1)

The KL-divergence, however, has some serious practical drawbacks. It is non-symmetric and unbounded unless the model distribution q is absolutely continuous with respect to p (i.e. q = 0 ⇒ p = 0). The KL-divergence is therefore highly sensitive to low probability events under q. Using the "empirical" (sample) distributions for both p and q can result in very unreliable estimates of the true divergences. Essentially, D_KL(p‖q) measures the asymptotic coding inefficiency when coding the sample p with an optimal code for the model distribution q. The symmetric divergence, i.e. D(p, q) = D_KL(p‖q) + D_KL(q‖p), suffers from similar sensitivity problems and lacks the clear statistical meaning. 2.1 The "two sample problem" Direct Bayesian arguments, or alternately the method of types [CK81], suggest that the probability that there exists one source distribution M for two independently drawn samples, x and y [Leh59], is proportional to

∫ dμ(M) Pr(x|M) · Pr(y|M) = ∫ dμ(M) · 2^{−(|x| D_KL[p_x‖M] + |y| D_KL[p_y‖M])},   (2)

where dμ(M) is a prior density of all candidate distributions, p_x and p_y are the empirical (sample) distributions, and |x| and |y| are the corresponding sample sizes. For large enough samples this integral is dominated (for any non-vanishing prior) by the maximal exponent in the integrand, or by the most likely joint source of x and y, M_λ, defined as

M_λ = argmin_{M'} { |x| D_KL(p_x‖M') + |y| D_KL(p_y‖M') },   (3)

where 0 ≤ λ = |x| / (|x| + |y|) ≤ 1 is the sample mixture ratio. The convexity of the KL-divergence guarantees that this minimum is unique and is given by M_λ = λ p_x + (1 − λ) p_y, the λ-mixture of p_x and p_y. The similarity measure between two samples, d(x, y), naturally follows as the minimal value of the above exponent. That is, Definition 1 The similarity measure, d(x, y) = V_λ(p_x, p_y), of two samples x and y, with empirical distributions p_x and p_y respectively, is defined as

d(x, y) = V_λ(p_x, p_y) = λ D_KL(p_x‖M_λ) + (1 − λ) D_KL(p_y‖M_λ),   (4)

where M_λ is the λ-mixture of p_x and p_y. The function V_λ(p, q) is an extension of the Jensen-Shannon divergence (see e.g. [Lin91]) and satisfies many useful analytic properties, such as symmetry and boundedness on both sides by the L1-norm, in addition to its clear statistical meaning. See [Lin91, EFT97] for a more complete discussion of this measure. 2.2 Estimating the V_λ similarity measure The key component of our classification method is the estimation of V_λ for individual finite sequences, without an explicit model distribution. Since cross entropies, D_KL, express code-length differences, they can be estimated using any efficient compression algorithm for the two sequences. The existence of "universal" compression methods, such as the Lempel-Ziv algorithm (see e.g.
[CT91]) which are provably asymptotically optimal for any sequence, gives us the means for asymptotically optimal estimation of V_λ, provided that we can obtain a typical sequence of the most-likely joint source, M_λ. We apply an improvement of the method of Ziv and Merhav [ZM93] for the estimation of the two cross-entropies using the Lempel-Ziv algorithm given two sample sequences [BE97]. Notice that our estimation of V_λ is as good as the compression method used; namely, closer to optimal compression yields better estimation of the similarity measure. It remains to show how a typical sequence of the most-likely joint source can be generated. 3 Joint Sources of Markovian Sequences In this section we first explicitly generalize the notion of the joint statistical source to finite order Markov probability measures. We identify the joint source of Markovian sequences and show how to construct a typical random sample of this source. More precisely, let x and y be two sequences generated by Markov processes with distributions P and Q, respectively. We present a novel algorithm for merging the two sequences, by generating a typical sequence of an approximation to the most likely joint source of x and y. The algorithm does not require the parameters of the true sources P and Q, and the computation of the sequence is done directly from the sequence samples x and y. As before, Σ denotes a finite alphabet and P and Q denote two ergodic Markov sources over Σ of orders K_P and K_Q, respectively. By equation 3, the λ-mixture joint source M_λ of P and Q is M_λ = argmin_{M'} λ D_KL(P‖M') + (1 − λ) D_KL(Q‖M'), where for sequences D_KL(P‖M) = limsup_{n→∞} (1/n) Σ_{x ∈ Σ^n} P(x) log (P(x) / M(x)). The following theorem identifies the joint source of P and Q. Theorem 1 The unique λ-mixture joint source M_λ of P and Q, of order K = max{K_P, K_Q}, is given by the following conditional distribution.
For each s ∈ Σ^K, a ∈ Σ,

M_λ(a|s) = [λP(s) / (λP(s) + (1 − λ)Q(s))] P(a|s) + [(1 − λ)Q(s) / (λP(s) + (1 − λ)Q(s))] Q(a|s).

This distribution can be naturally extended to n sources with priors λ_1, ..., λ_n. 3.1 The "sequence merging" algorithm The above theorem can be easily translated into an algorithm. Figure 1 describes a randomized algorithm that generates from the given sequences x and y an asymptotically typical sequence z of the most likely joint source, as defined by Theorem 1, of P and Q. Initialization: • z[0] = choose a symbol from x with probability λ or from y with probability 1 − λ • i = 0 Loop: Repeat until the approximation error is lower than a prescribed threshold • s_x := max length suffix of z appearing somewhere in x • s_y := max length suffix of z appearing somewhere in y • A(λ, s_x, s_y) := λ Pr_x(s_x) / (λ Pr_x(s_x) + (1 − λ) Pr_y(s_y)) • r = choose x with probability A(λ, s_x, s_y) or y with probability 1 − A(λ, s_x, s_y) • r(s_r) = randomly choose one of the occurrences of s_r in r • z[i + 1] = the symbol appearing immediately after r(s_r) in r • i = i + 1 End Repeat Figure 1: The most-likely joint source algorithm Notice that the algorithm is completely unparameterized; even the sequence alphabets, which may differ from one sequence to another, are not explicitly needed. The algorithm can be efficiently implemented by pre-preparing suffix trees for the given sequences, and the merging algorithm is naturally generalizable to any number of sequences. 4 Applications There are several possible applications of our sequence merging algorithm and similarity measure. Here we focus on three possible applications: the source merging problem, estimation of sequence similarity, and bottom-up sequence-classification. These algorithms are different from most existing approaches because they rely only on the sequenced data, similar to universal compression, without explicit modeling assumptions.
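The Figure 1 procedure can be sketched on plain strings. The version below takes a few liberties for brevity (our own simplifications, not the paper's: Pr_r(s) is estimated as the count of s in r divided by len(r), suffix search is brute force rather than via suffix trees, and generation stops at a fixed output length instead of an approximation-error threshold):

```python
import random

def merge_sequences(x, y, lam=0.5, length=60, seed=0):
    """Randomized sketch of the most-likely joint source algorithm (Fig. 1):
    grow z one symbol at a time by copying the symbol that follows a random
    occurrence of z's longest suffix in a sample chosen with weight A."""
    rng = random.Random(seed)
    z = x[rng.randrange(len(x))] if rng.random() < lam else y[rng.randrange(len(y))]

    def longest_suffix(z, s):
        # max-length suffix of z appearing somewhere in s
        for k in range(len(z), 0, -1):
            if z[-k:] in s:
                return z[-k:]
        return ""

    def freq(s, sub):
        # crude estimate of Pr_s(sub): overlapping occurrence count / length
        if not sub:
            return 1.0
        c, i = 0, s.find(sub)
        while i >= 0:
            c, i = c + 1, s.find(sub, i + 1)
        return c / len(s)

    while len(z) < length:
        sx, sy = longest_suffix(z, x), longest_suffix(z, y)
        a = lam * freq(x, sx)
        a = a / (a + (1 - lam) * freq(y, sy))
        src, suf = (x, sx) if rng.random() < a else (y, sy)
        # random occurrence of the suffix that is followed by another symbol
        while True:
            occ = [i for i in range(len(src) - len(suf))
                   if src[i:i + len(suf)] == suf]
            if occ or not suf:
                break
            suf = suf[1:]   # back off if the suffix only ends the sample
        i = rng.choice(occ)
        z += src[i + len(suf)]
    return z
```

Setting lam to 1 or 0 degenerates to resampling one source alone, which is a handy sanity check: the output then reproduces that source's local statistics.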
Further details, analysis, and applications of the method will be presented elsewhere [EFT97]. 4.1 Merging and synthesis of sequences An immediate application of the source merging algorithm is the synthesis of typical sequences of the joint source from some given data sequences, without any access to an explicit model of the source. To illustrate this point consider the sequence in Figure 2. This sequence was randomly generated, character by character, from two natural excerpts: a 47,655-character string from Dickens' Tale of Two Cities, and a 59,097-character string from Twain's The King and the Pauper. Do your way to her breast, and sent a treason's sword- and not empty. "I am particularly and when the stepped of his ovn commits place. No; yes, of course, and he passed behind that by turns ascended upon him, and my bone to touch it, less to say: 'Remove thought, everyone! Guards! In miness?" The books third time. There was but pastened her unave misg his ruined head than they had knovn to keep his saw whether think" The feet our grace he called offer information? [Twickens, 1997] Figure 2: A typical excerpt of random text generated by the "joint source" of Dickens and Twain. 4.2 Pairwise similarity of proteins The joint source algorithm, combined with the new similarity measure, provides natural means for computing the similarity of sequences over any alphabet. In this section we illustrate this application (the protein results presented here are part of an ongoing work with G. Yona and E. Ben-Sasson) for the important case of protein sequences (sequences over the set of the 20 amino acids). From a database of all known proteins we selected 6 different families and within each family we randomly chose 10 proteins. The families chosen are: Chaperonin, MHC1, Cytochrome, Kinase, Globin Alpha and Globin Beta. Our pairwise distances between all 60 proteins were computed using our agnostic algorithm and are depicted in the 60x60 matrix of Figure 3. As can be seen, the algorithm succeeds to identify the families (the success with the Kinase and Cytochrome families is more limited). Figure 3: A 60x60 symmetric matrix ("Pairwise Distances of Protein Sequences") representing the pairwise distances, as computed by our agnostic algorithm, between 60 proteins (chaperonin, MHC1, cytochrome, kinase, globin alpha, globin beta); each consecutive 10 belong to a different family. Darker gray represents higher similarity. In another experiment we considered all the 200 proteins of the Kinase family and computed the pairwise distances of these proteins using the agnostic algorithm. For comparison we computed the pairwise similarities of these sequences using the widely used Smith-Waterman algorithm (see e.g. [HH92]). The resulting agnostic similarities, computed with no biological information whatsoever, are very similar to the Smith-Waterman similarities. Furthermore, our agnostic measure discovered some biological similarities not detected by the Smith-Waterman method. 4.3 Agnostic classification of languages The sample of the joint source of two sequences can be considered as their "average" or "centroid", capturing a mixture of their statistics. Averaging and measuring distance between objects are sufficient for most standard clustering algorithms such as bottom-up greedy clustering, vector quantization (VQ), and clustering by deterministic annealing. Thus, our merging method and similarity measure can be directly applied to the classification of finite sequences via standard clustering algorithms. To illustrate the power of this new sequence clustering method we give the result of a rudimentary linguistic experiment using a greedy bottom-up (agglomerative) clustering of short excerpts (1500 characters) from eighteen languages.
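As a toy illustration of this pipeline, the sketch below takes two of our own shortcuts: distances follow the shape of Eq. 4 but are computed on order-0 (unigram) empirical distributions instead of the paper's compression-based estimates, and a merged cluster is represented by concatenating its members rather than by sampling the joint source.

```python
import math
from collections import Counter

def v_lambda(x, y):
    """Eq.-4-style similarity on unigram empirical distributions:
    lambda = |x|/(|x|+|y|), M_lambda is the lambda-mixture, and the result
    is the weighted sum of each sample's divergence (base 2) to M_lambda."""
    px, py = Counter(x), Counter(y)
    lam = len(x) / (len(x) + len(y))
    sigma = set(px) | set(py)
    def d(counts, n):
        t = 0.0
        for s in sigma:
            p = counts.get(s, 0) / n
            m = lam * px.get(s, 0) / len(x) + (1 - lam) * py.get(s, 0) / len(y)
            if p > 0:
                t += p * math.log2(p / m)
        return t
    return lam * d(px, len(x)) + (1 - lam) * d(py, len(y))

def greedy_cluster(seqs, k):
    """Greedy bottom-up (agglomerative) clustering: repeatedly merge the two
    closest clusters until k clusters remain."""
    clusters = [[i] for i in range(len(seqs))]
    texts = list(seqs)
    while len(clusters) > k:
        best = None
        for i in range(len(texts)):
            for j in range(i + 1, len(texts)):
                dist = v_lambda(texts[i], texts[j])
                if best is None or dist < best[0]:
                    best = (dist, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
        texts[i] += texts.pop(j)
    return clusters
```

Unlike the raw KL-divergence, this measure is always finite (disjoint alphabets give exactly 1 bit for lam = 1/2), which is what makes it usable as a clustering distance.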
Specifically, we took sixteen random excerpts from the following Proto-Indo-European languages: Afrikaans, Catalan, Danish, Dutch, English, Flemish, French, German, Italian, Latin, Norwegian, Polish, Portuguese, Spanish, Swedish and Welsh, together with two artificial languages: Esperanto and Klingon, a synthetic language invented for the Star-Trek TV series. (For the protein comparison above, we applied Smith-Waterman for computing local-alignment costs using the state-of-the-art blosum62 biological cost matrix; those results are not given here due to space limitations and will be discussed elsewhere.) The resulting hierarchical classification tree is depicted in Figure 4. This entirely unsupervised method, when applied to these short random excerpts, clearly agrees with the "standard" philologic tree of these languages, both in terms of the grouping and the levels of similarity (depth of the split) of the languages (the Polish-Welsh "similarity" is probably due to the specific transcription used). Figure 4: Agnostic bottom-up greedy clustering of eighteen languages Acknowledgments We sincerely thank Ran Bachrach and Golan Yona for helpful discussions. We also thank Sageev Oore for many useful comments. References [BE97] R. Bachrach and R. El-Yaniv. An Improved Measure of Relative Entropy Between Individual Sequences, unpublished manuscript. [CK81] I. Csiszar and J. Körner. Information Theory: Coding Theorems for Discrete Memoryless Systems, Academic Press, New York 1981. [CT91] T. M. Cover and J. A. Thomas. Elements of Information Theory, John Wiley & Sons, New York 1991. [EFT97] R. El-Yaniv, S. Fine and N. Tishby. Classifying Markovian Sources, in preparation, 1997. [HH92] S. Henikoff and J. G. Henikoff (1992). Amino acid substitution matrices from protein blocks. Proc. Natl. Acad. Sci. USA 89, 10915-10919. [Leh59] E. L. Lehmann. Testing Statistical Hypotheses, John Wiley & Sons, New York 1959. [Lin91] J. Lin, 1991. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145-151. [ZM93] J. Ziv and N. Merhav, 1993. A Measure of Relative Entropy Between Individual Sequences with Application to Universal Classification, IEEE Transactions on Information Theory, 39(4).
1997
Visual Navigation in a Robot using Zig-Zag Behavior

M. Anthony Lewis
Beckman Institute, 405 N. Mathews Avenue, University of Illinois, Urbana, Illinois 61801

Abstract
We implement a model of obstacle avoidance in flying insects on a small, monocular robot. The result is a system that is capable of rapid navigation through a dense obstacle field. The key to the system is the use of zigzag behavior to articulate the body during movement. It is shown that this behavior compensates for a parallax blind spot surrounding the focus of expansion normally found in systems without parallax behavior. The system models the cooperation of several behaviors: halteres-ocular response (similar to the VOR), optomotor response, and the parallax field computation and its mapping to the motor system. The resulting system is neurally plausible, very simple, and should be easily hosted on VLSI hardware.

1 INTRODUCTION
Srinivasan and Zhang (1993) describe behavioral evidence for two distinct movement-detecting systems in the bee: (1) a direction-selective pathway with low-frequency response characteristics serving the optomotor response, and (2) a non-direction-selective movement system with a higher-frequency response serving functions of obstacle avoidance and the 'tunnel centering' response, where the animal seeks a flight path along the centerline of a narrow corridor. Recently, this parallel movement-detector view has received support from anatomical evidence in the fly (Douglass and Strausfeld, 1996). We are concerned here with the implications of using non-direction-selective movement detectors for tasks such as obstacle avoidance. A reasonable model of a non-direction-selective pathway would be that this pathway is computing the absolute value of the optic flow, i.e., s = ||[ẋ, ẏ]||, where ẋ, ẏ are the components of the optic flow field on the retina at the point (x, y). What is the effect of using the absolute value of the flow field and throwing away direction information?
In Section 2 we analyze the effect of a non-direction-selective movement field. We understand from this analysis that rotational information and the limited dynamic range of real sensors contaminate the non-direction-selective field and probably prevent the use of this technique in an area around the direction heading of the observer. One technique to compensate for this 'parallax blind spot' is to periodically change the direction of the observer. Such periodic movements are seen in insects as well as lower vertebrates, and it is suggestive that these movements may compensate for this basic problem. In Section 3, we describe a robotic implementation using a crude non-direction-selective movement detector based on a rectified temporal derivative of luminosity. Each 'neuron' in the model retina issues a vote to control the motors of the robot. This system, though seemingly naively simple, compares favorably with other robotic implementations that rely on the optic flow or a function of the optic flow (divergence). Those techniques typically require a large degree of spatio-temporal averaging and seem computationally complex. In addition, our model agrees better with the biological evidence. Finally, the technique presented here is amenable to implementation in custom aVLSI or mixed aVLSI/dVLSI chips. Thus it should be possible to build a subminiature visually guided navigation system with several (one?) low-power simple custom chips.

2 ANALYSIS OF NON-DIRECTION SELECTIVE MOVEMENT DETECTION SYSTEM

Let us assume a perspective projection

x = λX/Z,  y = λY/Z   (1)

where λ is the focal length of the lens, (X, Y, Z) is the position of a point in the environment, and (x, y) is the projection of that point on the retinal plane.
The velocity of the image of a moving point in the world can be found by differentiating (1) with respect to time:

ẋ = λẊ/Z − λXŻ/Z²,  ẏ = λẎ/Z − λYŻ/Z²   (2)

If we assume that objects in the environment are fixed in relation to one another and that the observer is moving with relative translational velocity ᶜv = [Tx Ty Tz]ᵀ and relative rotational velocity ᶜω = [ωx ωy ωz]ᵀ with respect to the environment, given in frame c, a point in the environment has relative velocity:

[Ẋ Ẏ Ż]ᵀ = −ᶜv − ᶜω × [X Y Z]ᵀ   (3)

Now substituting into (2):

ẋ = (−λTx + xTz)/Z + ωx·xy/λ − ωy(λ + x²/λ) + ωz·y
ẏ = (−λTy + yTz)/Z + ωx(λ + y²/λ) − ωy·xy/λ − ωz·x   (4)

and taking the absolute value of the optic flow:

|s| = ||[ẋ, ẏ]||   (5)

where we have made the substitution [λTx/Tz, λTy/Tz] → [α, β] (that is, the heading direction). We can see that the terms involving [ωx ωy ωz] cannot be separated from the x, y terms. If we assume that [ωx ωy ωz] = 0 then we can rearrange the equation as:

Σ⁻¹(s) ≡ |Tz|/|Z| = |s| / √((x−α)² + (y−β)²)   (6)

in the case of Z translation. If |Tz| = 0 then we have:

Σ⁻¹(s) = 1/|Z| = |s| / (λ√(Tx² + Ty²))   (7)

This corresponds to the case of pure lateral translations. Locusts (as well as some vertebrates) use peering, or side-to-side movements, to gauge distances before jumping. We call the quantity in (6) inverse relative depth. Under the correct circumstances it is equivalent to the reciprocal of the time to contact. Equation (6) can be restated as Σ⁻¹(s) = g|s|, where g is a gain factor that depends on the current direction heading and the position on the retina. This gain factor can be implemented neurally as a shunting inhibition, for example. This has the following implications. If the observer is using a non-direction-sensitive movement detector then (A) it must rotationally stabilize its eyes and (B) it must dynamically alter the gain of this information in a pathway between the retinal input and motor output, or it must always have a constant direction heading and use constant gain factors.
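The flow-magnitude and inverse-relative-depth relations above can be checked numerically. The following is a minimal sketch, not the paper's code: the function names and the toy heading/depth values are our own assumptions, and rotation is taken to be zero, as in Eq. (6).

```python
import numpy as np

def flow_magnitude(x, y, Z, T, lam=1.0):
    """|s| at retinal point (x, y) for a world point at depth Z under pure
    translation T = (Tx, Ty, Tz), rotation assumed zero (Eq. 4 with omega = 0)."""
    Tx, Ty, Tz = T
    xdot = (-lam * Tx + x * Tz) / Z
    ydot = (-lam * Ty + y * Tz) / Z
    return np.hypot(xdot, ydot)

def inverse_relative_depth(x, y, s, T, lam=1.0):
    """Recover |Tz|/|Z| from |s| via Eq. (6): divide by the retinal distance
    to the focus of expansion (alpha, beta)."""
    Tx, Ty, Tz = T
    alpha, beta = lam * Tx / Tz, lam * Ty / Tz
    return s / np.hypot(x - alpha, y - beta)

# Forward translation toward a point at depth Z = 4, heading at the origin:
T = (0.0, 0.0, 1.0)
s = flow_magnitude(0.5, 0.2, 4.0, T)
print(inverse_relative_depth(0.5, 0.2, s, T))  # ~ 0.25, i.e. |Tz|/|Z|
```

Note that the recovered quantity is exactly |Tz|/|Z| regardless of where (x, y) falls on the retina, which is the sense in which (6) gives a position-dependent gain g on |s|.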
In real systems there is likely to be imperfection in the rotational stabilization of the observer, as well as sensors with limited dynamic range. To understand the effect of these, let us assume that there is a baseline noise level θ and that this defines a minimum detection threshold. Substituting s = θ, we can find a level curve for the minimum detectability of an object, i.e.:

√((x−α)² + (y−β)²) = θ|Z|/|Tz|   (8)

Thus, for constant depth and for θ independent of the spatial position on the retina, the level curve is a circle. The circle increases in radius with increasing distance and noise, and decreases with increasing speed. The circle is centered around the direction heading. The solution to the problem of a 'parallax blind spot' is to make periodic changes of direction. This can be accomplished in an open-loop fashion or, perhaps, in an image-driven fashion as suggested by Sobey (1994).

3 ROBOT MODEL

Figure 1a is a photograph of the robot model. The robot's base is a Khepera robot. The Khepera is a small wheeled robot a little over 2" in diameter that uses differential-drive motors. The robot has been fitted with a solid-state gyro attached to its body. This gyroscope senses angular velocities about the body axis and is aligned with the axis of the camera joint. A camera, capable of rotation about an axis perpendicular to the ground plane, is also attached. The camera has a field of view of about 90° and can swing ±90°. The angle of the head rotation is sensed by a small potentiometer. For convenience, each visual process is implemented on a separate workstation (SGI Indy) as a heavyweight process. Interprocess communication is via the PVM distributed computing library. Using a distributed processing model, behaviors can be dynamically added and deleted, facilitating analysis and debugging.

3.1 ROBOT CONTROL SYSTEM

The control is divided into control modules as illustrated in Fig. 2. At the top of the drawing we see a gaze stabilization pathway.
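The blind-spot radius implied by the level curve above can be sketched in a few lines; the function name and the numbers are illustrative assumptions, not values from the paper.

```python
def blind_spot_radius(theta, Z, Tz):
    """Radius of the parallax blind spot around the heading direction:
    within this retinal distance of the focus of expansion, |s| falls
    below the noise floor theta (level curve of Eq. 8)."""
    return theta * abs(Z) / abs(Tz)

# Radius grows with distance and noise, and shrinks with forward speed:
print(blind_spot_radius(0.01, 4.0, 1.0))   # ~ 0.04
print(blind_spot_radius(0.01, 4.0, 2.0))   # ~ 0.02, halved by doubling speed
```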
This uses a gyro (imitating a halteres organ) for stabilization of rapid head movements. In addition, a visual pathway using direction-selective movement detector (DSMD) maps is used for the slower optomotor response. Each of the six maps uses correlation-type detectors (Borst and Egelhaaf, 1989). Each map is tuned to a different horizontal velocity (three for left image translations and three for right image translations).

Figure 1. Physical setup. (A) Modified Khepera robot with camera and gyro mounted. (B) Typical obstacle field run experiment.

The lower half of the drawing shows the obstacle avoidance pathway. A crude non-direction-selective movement detector is created using a simple temporal derivative. The use of this as a movement detector was motivated by the desire to eventually replace the camera front end with a neuromorphic chip. Temporal derivative chips are readily available (Delbrück and Mead, 1991). Next, we reason that the temporal derivative gives a crude estimate of the absolute value of the optic flow. For example, if we expect only horizontal flows then Ex·ẋ = −Et (Horn and Schunck, 1981). Here Et is the temporal derivative of the luminosity and Ex is the spatial derivative. If we sample over a patch of the image, Ex will take on a range of values. If we take the average rectified temporal derivative over a patch then |ẋ| = |−Et|/|Ex|, averaged over the patch. Thus the average rectified temporal derivative over a patch will give a velocity proportional to the absolute value of the optic flow. In order to convert this to motor commands, we use a voting scheme. Each pixel in the non-direction-selective movement detector (NDSMD) field votes on a direction for the robot. The left pixels vote for a right turn and the right pixels vote for a left turn. The left and right votes are summed. In certain experiments described below, the difference of the left and right votes was used to drive the rotation of the robot.
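The rectified-temporal-derivative field and the left/right voting scheme can be sketched as follows. This is a toy reconstruction, not the robot's actual code: the array size (matching the 60x80 image mentioned in the figure), the function names, and the sign convention of the turn command are all assumptions.

```python
import numpy as np

def ndsmd_votes(frame_prev, frame_curr):
    """Crude non-direction-selective movement field: the rectified temporal
    derivative of luminosity |Et| stands in for |optic flow|."""
    field = np.abs(frame_curr.astype(float) - frame_prev.astype(float))
    h, w = field.shape
    left_votes = field[:, : w // 2].sum()    # left pixels vote for a right turn
    right_votes = field[:, w // 2 :].sum()   # right pixels vote for a left turn
    return left_votes, right_votes

def turn_command(left_votes, right_votes, gain=1.0):
    """Rotation drive from the difference of the summed votes
    (positive -> turn right, negative -> turn left, by our convention)."""
    return gain * (left_votes - right_votes)

prev = np.zeros((60, 80))
curr = np.zeros((60, 80))
curr[:, :10] = 1.0          # motion energy on the left edge of the image
l, r = ndsmd_votes(prev, curr)
print(turn_command(l, r))   # positive (600.0): steer right, away from the obstacle
```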
In others, a symmetry-breaking scheme was used. It was observed that with an object dead ahead of the robot, the left and right sides would often have high but nearly equal activation. In the symmetry-breaking scheme, the side with the lower activation was further decreased by a factor of 50%. This admittedly ad hoc solution remarkably improved the performance in the non-zig-zag case, as noted below. The zig-zag behavior is implemented as a feedforward command to the motor system and is modeled as:

ω_zigzag = sin(ωt)·K

Finally, a constant forward bias is added to each wheel so the robot makes constant progress. K is chosen empirically, but in principle one should be able to derive it using the analysis in Section 2. As described above, the gaze stabilization module has control of head rotation, while the zigzag behavior and the depth-from-parallax behavior control the movement of the robot's body. During normal operation, the head may exceed the ±90° envelope defined by the mechanical system. This problem can be addressed in several ways, among them making a body saccade to bring the body under the head or making a head saccade to align the head with the body. We chose the latter approach solely because it seemed to work better in practice.

Figure 2. ZigZag Navigation model is composed of a gaze stabilization system (top) and an obstacle avoidance system (bottom). See text.

3.2 BIOLOGICAL INSPIRATION FOR MODEL

Coarse-grained visual pathways are modeled using inspiration from insect neurobiology. The model of depth from parallax is inspired by details given in Srinivasan & Zhang (1993) on work done in bees. Gaze stabilization using a fast channel, mediated by the halteres organs, and a slow optomotor response is inspired by a description of the blowfly Calliphora as reviewed by Hengstenberg (1991).
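The feedforward zig-zag command and the 50% symmetry-breaking rule of Section 3.1 can be sketched together; the function names, the gain K, and the angular frequency below are illustrative assumptions rather than the values used on the robot.

```python
import math

def zigzag_rotation(t, K=0.5, omega=2.0):
    """Feedforward body-rotation command: omega_zigzag = sin(omega * t) * K."""
    return K * math.sin(omega * t)

def break_symmetry(left, right, factor=0.5):
    """With an obstacle dead ahead, left and right activations can be high
    and nearly equal; halving the weaker side forces a committed turn."""
    if left < right:
        return left * factor, right
    return left, right * factor

print(break_symmetry(100.0, 101.0))  # (50.0, 101.0): the tie is broken
```

A constant forward wheel bias would then be added on top of these rotation terms so the robot keeps making progress, as described above.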
4 EXPERIMENTS

Four types of experimental setups were used; these are illustrated in Fig. 3. In setup 1 the robot must avoid a dense field of obstacles (empty soda cans). This is designed to test the basic competence of the technique. In setup 2, thin dowels are placed in the robot's path. This tests the spatial resolving capability of the robot. Likewise, setup 3 uses a dense obstacle field with one opening replaced by a lightly textured surface. Finally, experimental setup 4 uses a single small object (1 cm black patch) and tests the distance at which the robot can 'lock on' to a target. In this experiment, the avoidance field is searched for a maximal element over a given threshold. A target cross is placed at this maximal element. The closest object should correspond with this maximal element. If a maximal element over the threshold is identified for a continuous 300 ms and the target cross is on the correct target, the robot is stopped and its distance to the object is measured. The larger the distance, the better.

5 RESULTS

The results are described briefly here. In setup 1 without the use of symmetry breaking, the scores were ZigZag: 10 successes, 0 failures; non-ZigZag: 4 successes, 6 failures. With symmetry breaking installed the results were ZigZag: 49 successes, 3 failures; non-ZigZag: 44 successes, 9 failures. In the palisades test: ZigZag: 22 successes, 4 failures; non-ZigZag: 14 successes, 11 failures. In the false-opening case: ZigZag: 8 successes, 2 failures; non-ZigZag: 6 successes, 4 failures. Finally, in the distance-to-lock setup, a lock was achieved at an average distance of 21.3 cm (15 data points) for zigzag and 9.6 cm (15 data points) for the non-zigzag case.

Figure 3. Illustrations of the four experimental setups: dense obstacle field avoidance, palisades test (thin dowels), false opening avoidance (lightly textured barrier), and distance-to-lock.
We tentatively conclude that zig-zag behavior should improve performance in robot and animal navigation.

6 DISCUSSION

In addition to the robotic implementation presented here, there have been many other techniques presented in the literature. Most relevant is Sobey (1994), who uses a zigzag behavior for obstacle avoidance. In that work, optic flow is computed through a process of discrete movements in which 16 frames are collected, the robot stops, and the stored frames are analyzed for optic flow. The basic strategy is very clever: always choose the next move in the direction of an identified object. The reasoning is that since we know the distance to the object in this direction, we can confidently move toward the object, stopping before collision. The basic idea of using zig-zag behavior is similar, except that here the zig-zagging is driven by perceptual input. In addition, that implementation requires estimation of the flow field, which demands smoothing over numerous images. Finally, Sobey uses the optic flow, while we use the absolute value of the optic flow, as suggested by biology. Franceschini et al. (1992) report an analog implementation that uses elementary movement detectors. A unique feature is the non-uniform sampling and the use of three separate arrays. One array samples around the circumference. The other two sampling systems are mounted laterally on the robot and concentrate on the 'blind spot' immediately in front of the robot. It is not clear that the strategy of using three spatially separated sensor arrays and direction-selective movement detectors is in accord with the biological constraints. Santos-Victor et al. (1995) report a system using optic flow and having lateral-facing cameras. Here the authors were reproducing the centering reflex and did not focus on avoiding obstacles in front of the robot. Coombs and Roberts (1992, 1993) use a similar technique.
Weber et al. (1996) describe wall following and stopping in front of an obstacle using an optic flow measure. Finally, a number of papers report the use of flow-field divergence, apparently first suggested by Nelson and Aloimonos (1989). This requires the computation of higher derivatives and significant smoothing. Even in this case, there is the problem of a 'parallax hole'; see Fig. 3 of that article, for example. In any case, they did not implement their idea on a mobile robot. However, this approach has been followed up with an implementation on a robot by Camus et al. (1996), reporting good results. The system described here presents a physical model of insect-like behavior integrated on a small robotic platform. Using results derived from an analysis of optic flow, we concluded that a zig-zag behavior in the robot would allow it to detect obstacles in front of it by periodically articulating the blind spot. The complexity of the observed behavior and the simplicity of the control is striking. The robot is able to navigate through a field of obstacles, always searching out a freeway for movement. The integrated behavior outlined here should be a good candidate for a neuromorphic implementation. A multichip (or single-chip?) system could be envisioned using a relatively simple non-directional 2-d movement detector. Two perpendicular 1-d arrays of movement detectors should be sufficient for the optomotor response. This information could then be mapped to a circuit comprising a few op-amp adder circuits and then sent to the head and body motors. Even the halteres organ could be simulated with a silicon-fabricated gyroscope. The result would be an extremely compact robot capable of autonomous, visually guided navigation. Finally, from our analysis of optic flow, we can make a reasonable prediction about the neural wiring in flying insects.
The estimated depth of objects in the environment depends on where the object falls on the optic array as well as on the ratio of lateral to forward movement. Thus a bee or a fly should probably modulate its incoming visual signal to account for this time-varying interpretation of the scene. We would predict that motor information related to the ratio of forward to lateral velocities is projected to the non-direction-selective motion detector array. This would allow a valid time-varying interpretation of the scene in a zig-zagging animal.

Acknowledgments
The author acknowledges the useful critique of this work by Narendra Ahuja, Mark Nelson, John Hart and Lucia Simo. Special thanks to Garrick Kremesic and Barry Stout, who assisted with the experimental setup and the modification of the Khepera. The author acknowledges the support of ONR grant N00014-96-1-0657. The author also acknowledges the loan of the Khepera from UCLA (NSF grant CDA-9303148).

References
A. Borst and M. Egelhaaf (1989), Principles of Visual Motion Detection, Trends in Neurosciences, 12(8):297-306.
T. Camus, D. Coombs, M. Herman, and T.-H. Hong (1996), "Real-time Single-Workstation Obstacle Avoidance Using Only Wide-Field Flow Divergence", Proceedings of the 13th International Conference on Pattern Recognition, pp. 323-330, vol. 3.
D. Coombs and K. Roberts (1992), "'Bee-Bot': Using Peripheral Optical Flow to Avoid Obstacles", SPIE Vol. 1825, Intelligent Robots and Computer Vision XI, pp. 714-721.
D. Coombs and K. Roberts (1993), "Centering behavior using peripheral vision", Proc. 1993 IEEE Computer Society Conf. CVPR, pp. 440-445.
T. Delbrück and C. A. Mead (1991), Time-derivative adaptive silicon photoreceptor array. Proc. SPIE - Int. Soc. Opt. Eng. (USA), vol. 1541, pp. 92-99.
J. K. Douglass and N. J. Strausfeld (1996), Visual Motion-Detection Circuits in Flies: Parallel Direction- and Non-Direction-Sensitive Pathways between the Medulla and Lobula Plate, J.
of Neuroscience, 16(15):4551-4562.
N. Franceschini, J. M. Pichon and C. Blanes (1992), "From Insect Vision to Robot Vision", Phil. Trans. R. Soc. Lond. B, 337, pp. 283-294.
R. Hengstenberg (1991), Gaze Control in the Blowfly Calliphora: a Multisensory, Two-Stage Integration Process, Seminars in the Neurosciences, Vol. 3, pp. 19-29.
B. K. P. Horn and B. G. Schunck (1981), "Determining Optic Flow", Artificial Intelligence, 17(1-3):185-204.
R. C. Nelson and J. Y. Aloimonos (1989), Obstacle Avoidance Using Flow Field Divergence, IEEE Trans. on Pattern Anal. and Mach. Intel., 11(10):1102-1106.
J. Santos-Victor, G. Sandini, F. Curotto and S. Garibaldi (1995), "Divergent Stereo in Autonomous Navigation: From Bees to Robots," Int. J. of Comp. Vis., 14, pp. 159-177.
P. J. Sobey (1994), "Active Navigation With a Monocular Robot", Biol. Cybern., 71:433-440.
M. V. Srinivasan and S. W. Zhang (1993), Evidence for Two Distinct Movement-Detecting Mechanisms in Insect Vision, Naturwissenschaften, 80, pp. 38-41.
K. Weber, S. Venkatesh and M. V. Srinivasan (1996), "Insect Inspired Behaviours for the Autonomous Control of Mobile Robots", Proc. of ICPR'96, pp. 156-160.
1997
Generalized Prioritized Sweeping

David Andre    Nir Friedman    Ronald Parr
Computer Science Division, 387 Soda Hall
University of California, Berkeley, CA 94720
{dandre,nir,parr}@cs.berkeley.edu

Abstract
Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for handling large state spaces. We apply this method to generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping.

1 Introduction
In reinforcement learning, there is a tradeoff between spending time acting in the environment and spending time planning what actions are best. Model-free methods take one extreme on this question: the agent updates only the state most recently visited. On the other end of the spectrum lie classical dynamic programming methods that reevaluate the utility of every state in the environment after every experiment. Prioritized sweeping (PS) [6] provides a middle ground in that only the most "important" states are updated, according to a priority metric that attempts to measure the anticipated size of the update for each state. Roughly speaking, PS interleaves performing actions in the environment with propagating the values of states.
After updating the value of state s, PS examines all states t from which the agent might reach s in one step and assigns them priority based on the expected size of the change in their value. A crucial desideratum for reinforcement learning is the ability to scale up to complex domains. For this, we need to use compact (or generalizing) representations of the model and the value function. While it is possible to apply PS in the presence of such representations (e.g., see [1]), we claim that classic PS is ill-suited in this case. With a generalizing model, a single experience may affect our estimate of the dynamics of many other states. Thus, we might want to update the value of states that are similar, in some appropriate sense, to s, since we have a new estimate of the system dynamics at those states. Note that some of these states might never have been reached before, and standard PS will not assign them a priority at all. In this paper, we present generalized prioritized sweeping (GenPS), a method that uses a formal principle to understand and extend PS so that it can deal with parametric representations of both the model and the value function. If GenPS is used with an explicit state-space model and value function representation, an algorithm similar to the original (classic) PS results. When a model approximator (such as a dynamic Bayesian network [2]) is used, the resulting algorithm prioritizes the states of the environment using the generalizations inherent in the model representation.

2 The Basic Principle

We assume the reader is familiar with the basic concepts of Markov decision processes (MDPs); see, for example, [5].
We use the following notation: an MDP is a 4-tuple (S, A, p, r), where S is a set of states, A is a set of actions, p(t | s, a) is a transition model that captures the probability of reaching state t after we execute action a at state s, and r(s) is a reward function mapping S into real-valued rewards. In this paper, we focus on infinite-horizon MDPs with a discount factor γ. The agent's aim is to maximize the expected discounted total reward it will receive. Reinforcement learning procedures attempt to achieve this objective when the agent does not know p and r. A standard problem in model-based reinforcement learning is that of balancing between planning (i.e., choosing a policy) and execution. Ideally, the agent would compute the optimal value function for its model of the environment each time the model changes. This scheme is unrealistic, since finding the optimal policy for a given model is computationally non-trivial. Fortunately, we can approximate this scheme if we notice that the approximate model changes only slightly at each step. Thus, we can assume that the value function from the previous model can be easily "repaired" to reflect these changes. This approach was pursued in the DYNA [7] framework, where after the execution of an action, the agent updates its model of the environment and then performs some bounded number of value-propagation steps to update its approximation of the value function. Each value-propagation step locally enforces the Bellman equation by setting V(s) ← max_{a∈A} Q(s, a), where Q(s, a) = r̂(s) + γ Σ_{s'∈S} p̂(s' | s, a) V(s'); here p̂(s' | s, a) and r̂(s) are the agent's approximation of the MDP, and V is the agent's approximation of the value function. This raises the question of which states should be updated. In this paper we propose the following general principle:

GenPS Principle: Update states where the approximation of the value function will change the most. That is, update the states with the largest Bellman error, E(s) = |V(s) − max_{a∈A} Q(s, a)|.
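A single value-propagation step as defined above can be sketched directly from the Bellman equation. This is an illustrative sketch with our own names and a toy two-state MDP, not code from the paper.

```python
def q_value(s, a, reward, trans, V, gamma=0.9):
    """Q(s,a) = r(s) + gamma * sum_t p(t|s,a) V(t); trans[s][a] is a dict
    mapping successor states to their estimated probabilities."""
    return reward[s] + gamma * sum(p * V[t] for t, p in trans[s][a].items())

def value_propagation(s, actions, reward, trans, V, gamma=0.9):
    """Locally enforce the Bellman equation: V(s) <- max_a Q(s,a)."""
    V[s] = max(q_value(s, a, reward, trans, V, gamma) for a in actions)
    return V[s]

# Toy MDP: state 1 pays reward 1; action "a" from state 0 leads to state 1.
reward = {0: 0.0, 1: 1.0}
trans = {0: {"a": {1: 1.0}, "b": {0: 1.0}},
         1: {"a": {1: 1.0}, "b": {1: 1.0}}}
V = {0: 0.0, 1: 0.0}
value_propagation(1, ["a", "b"], reward, trans, V)       # V(1) becomes 1.0
print(value_propagation(0, ["a", "b"], reward, trans, V))  # 0.9
```

The Bellman error E(s) of the GenPS principle is then just the magnitude of the change such a step would make at s.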
The motivation for this principle is straightforward. The maximum Bellman error can be used to bound the maximum difference between the current value function V(s) and the optimal value function V*(s) [9]. This difference bounds the policy loss, the difference between the expected discounted reward received under the agent's current policy and that received under the optimal policy. To carry out this principle we have to recognize when the Bellman error at a state changes. This can happen at two different stages. First, after the agent updates its model of the world, new discrepancies between V(s) and max_a Q(s, a) might be introduced, which can increase the Bellman error at s. Second, after the agent performs some value propagations, V is changed, which may introduce new discrepancies. We assume that the agent maintains a value function and a model that are parameterized by θ_V and θ_M. (We will sometimes refer to the vector that concatenates these vectors together into a single, larger vector simply as θ.) When the agent observes a transition from state s to s' under action a, the agent updates its environment model by adjusting some of the parameters in θ_M. When performing value propagations, the agent updates V by updating parameters in θ_V. A change in any of these parameters may change the Bellman error at other states in the model. We want to recognize these states without explicitly computing the Bellman error at each one. Formally, we wish to estimate the change in error, |ΔE(s)|, due to the most recent change Δθ in the parameters. We propose approximating |ΔE(s)| by using the gradient of the right-hand side of the Bellman equation (i.e., of max_a Q(s, a)). Thus, we have:

|ΔE(s)| ≈ |∇ max_a Q(s, a) · Δθ|

which estimates the change in the Bellman error at state s as a function of the change in Q(s, a). The above still requires us to differentiate over a max, which is not differentiable.
In general, we want to overestimate the change, to avoid "starving" states with non-negligible error. Thus, we use the following upper bound: |∇(max_a Q(s, a)) · Δθ| ≤ max_a |∇Q(s, a) · Δθ|. We now define the generalized prioritized sweeping procedure. The procedure maintains a priority queue that assigns to each state s a priority, pri(s). After making some changes, we can reassign priorities by computing an approximation of the change in the value function. Ideally, this is done using a procedure that implements the following steps:

procedure update-priorities(Δθ)
    for all s ∈ S
        pri(s) ← pri(s) + max_a |∇Q(s, a) · Δθ|

Note that when the above procedure updates the priority for a state that has an existing priority, the priorities are added together. This ensures that the priority being kept is an overestimate of the priority of each state, and thus the procedure will eventually visit all states that require updating. Also, in practice we would not want to reconsider the priority of all states after an update (we return to this issue below). Using this procedure, we can now state the general learning procedure:

procedure GenPS()
    loop
        perform an action in the environment
        update the model; let Δθ be the change in θ
        call update-priorities(Δθ)
        while there is available computation time
            let s_max = argmax_s pri(s)
            perform value propagation for V(s_max); let Δθ be the change in θ
            call update-priorities(Δθ)
            pri(s_max) ← |V(s_max) − max_a Q(s_max, a)|¹

Note that the GenPS procedure does not determine how actions are selected. This issue, which involves the problem of exploration, is orthogonal to our main topic; standard approaches, such as those described in [5, 6, 7], can be used with our procedure. This abstract description specifies neither how to update the model nor how to update the value function in the value-propagation steps. Both of these depend on the choices made in the corresponding representation of the model and the value function.
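The additive priority update can be sketched concretely. The sketch below is our own toy illustration: `grad_q` is a hypothetical stand-in for the representation-specific gradient of Q, and the two-component parameter vector is invented for the example. Note the use of `+=` (addition), which is the point of contrast with classic PS's max-combination.

```python
def update_priorities(pri, states, actions, grad_q, dtheta):
    """GenPS: pri(s) += max_a |grad Q(s,a) . dtheta|. Adding (rather than
    maxing, as classic PS does) keeps pri(s) an overestimate of the
    accumulated change in the Bellman error at s."""
    for s in states:
        pri[s] = pri.get(s, 0.0) + max(
            abs(sum(g * d for g, d in zip(grad_q(s, a), dtheta)))
            for a in actions
        )

# Toy parameterization: Q(s, a) depends only on parameter component s,
# with sensitivity 1.0 under action 0 and 0.5 under action 1.
def grad_q(s, a):
    g = [0.0, 0.0]
    g[s] = 1.0 if a == 0 else 0.5
    return g

pri = {}
update_priorities(pri, [0, 1], [0, 1], grad_q, [0.2, 0.0])
print(pri)  # {0: 0.2, 1: 0.0} -- only state 0 is sensitive to the change
```

The GenPS main loop would then repeatedly pop `argmax_s pri(s)`, perform a value propagation there, and feed the resulting parameter change back through `update_priorities`.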
Moreover, it is clear that in problems that involve a large state space, we cannot afford to recompute the priority of every state in update-priorities. However, we can simplify this computation by exploiting sparseness in the model, and in the worst case we may resort to approximate methods for finding the states that receive high priority after each change.

3 Explicit, State-based Representation

In this section we briefly describe the instantiation of the generalized procedure when the rewards, values, and transition probabilities are explicitly modeled using lookup tables. In this representation, for each state s, we store the expected reward at s, denoted by θ_{r̂(s)}, the estimated value at s, denoted by θ_{V(s)}, and for each action a and state t the number of times the execution of a at s led to state t, denoted N_{s,a,t}. From these transition counts we can reconstruct the transition probabilities

p̂(t | s, a) = (N_{s,a,t} + N⁰_{s,a,t}) / Σ_{t'} (N_{s,a,t'} + N⁰_{s,a,t'})

where the N⁰_{s,a,t'} are fictional counts that capture our prior information about the system's dynamics.² After each step in the world, these reward and probability parameters are updated in the straightforward manner. Value-propagation steps in this representation set θ_{V(t)} to the right-hand side of the Bellman equation. To apply the GenPS procedure we need to derive the gradient of the Bellman equation for two situations: (a) after a single step in the environment, and (b) after a value update. In case (a), the model changes after performing a transition s →a t. In this case, it is easy to verify that

∇Q(s, a) · Δθ = Δθ_{r̂(s)} + γ (V(t) − Σ_{t'} p̂(t' | s, a) V(t')) / (Σ_{t'} (N_{s,a,t'} + N⁰_{s,a,t'}) + 1)

and that ∇Q(s', a') · Δθ = 0 if s' ≠ s or a' ≠ a. Thus, s is the only state whose priority changes.

¹ In general, this will assign the state a new priority of 0, unless there is a self-loop. In that case it is easy to compute the new Bellman error as a by-product of the value-propagation step.
In case (b), the value function changes after updating the value of a state t. In this case, ∇Q(s,a) · Δθ = γ p(t | s,a) Δθ_V(t). It is easy to see that this is nonzero only if t is reachable from s. In both cases, it is straightforward to locate the states where the Bellman error might have changed, and the computation of the new priority is more efficient than computing the Bellman error.³

Now we can relate GenPS to standard prioritized sweeping. The PS procedure has the general form of this application of GenPS, with three minor differences. First, after performing a transition s →a t in the environment, PS immediately performs a value propagation for state s, while GenPS increments the priority of s. Second, after performing a value propagation for state t, PS updates the priority of each state s that can reach t with the value max_a p(t | s,a) · Δθ_V(t). The priority assigned by GenPS is the same quantity multiplied by γ. Since PS does not introduce priorities after model changes, this multiplicative constant does not change the order of states in the queue. Third, GenPS uses addition to combine the old priority of a state with a new one, which ensures that the priority is indeed an upper bound. In contrast, PS uses max to combine priorities.

This discussion shows that PS can be thought of as a special case of GenPS when the agent uses an explicit, state-based representation. As we show in the next section, when the agent uses more compact representations, we get procedures where the prioritization strategy is quite different from that used in PS. Thus, we claim that classic PS is desirable primarily when explicit representations are used.

4 Factored Representation

We now examine a compact representation of p(s' | s,a) that is based on dynamic Bayesian networks (DBNs) [2]. DBNs have been combined with reinforcement learning before in [8], where they were used primarily as a means of getting better generalization while learning.
We will show that they can also be used with prioritized sweeping to focus the agent's attention on groups of states that are affected as the agent refines its environment model.

We start by assuming that the environment state is described by a set of random variables X_1, ..., X_n. For now, we assume that each variable can take values from a finite set Val(X_i). An assignment of values x_1, ..., x_n to these variables describes a particular environment state. Similarly, we assume that the agent's action is described by random variables A_1, ..., A_k. To model the system dynamics, we have to represent the probability of transitions s →a t, where s and t are two assignments to X_1, ..., X_n and a is an assignment to A_1, ..., A_k. To simplify the discussion, we denote by Y_1, ..., Y_n the agent's state after the action is executed (e.g., the state t). Thus, p(t | s,a) is represented as a conditional probability P(Y_1, ..., Y_n | X_1, ..., X_n, A_1, ..., A_k). A DBN model for such a conditional distribution consists of two components. The first is a directed acyclic graph in which each vertex is labeled by a random variable and in which the vertices labeled X_1, ..., X_n and A_1, ..., A_k are roots. This graph specifies the factorization of the conditional distribution:

P(Y_1, ..., Y_n | X_1, ..., X_n, A_1, ..., A_k) = Π_{i=1}^n P(Y_i | Pa_i),   (1)

where Pa_i are the parents of Y_i in the graph. The second component of the DBN model is a description of the conditional probabilities P(Y_i | Pa_i). Together, these two components describe a unique conditional distribution.

²Formally, we are using multinomial Dirichlet priors. See, for example, [4] for an introduction to these Bayesian methods.

³Although ∂Q(s,a)/∂θ_{s,a,t} involves a summation over all states, it can be computed efficiently. To see this, note that the summation is essentially the old value of Q(s,a) (minus the immediate reward), which can be retained in memory.

Generalized Prioritized Sweeping 1005
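Equation (1) can be illustrated with a small sketch that evaluates p(t | s,a) as a product of per-variable conditionals. The dictionary-based CPT layout is an assumption for illustration, not the paper's data structure.

```python
def factored_transition_prob(y_vars, parents_of, cpt, assignment):
    """P(Y_1..Y_n | X, A) as a product of per-variable conditionals (Eq. 1).

    `assignment` maps every variable name (X_i, A_j, Y_i) to its value;
    `parents_of[yi]` lists Y_i's parents; `cpt[yi][z][y]` is
    P(Y_i = y | Pa_i = z), with z a tuple of parent values.
    """
    prob = 1.0
    for yi in y_vars:
        z = tuple(assignment[p] for p in parents_of[yi])  # joint parent value
        prob *= cpt[yi][z][assignment[yi]]
    return prob
```

With a sparse graph each factor conditions on only a few parents, which is exactly where the representation saves parameters over the state-based table.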
The simplest representation of P(Y_i | Pa_i) is a table that contains a parameter θ_{i,y,z} = P(Y_i = y | Pa_i = z) for each possible combination of y ∈ Val(Y_i) and z ∈ Val(Pa_i) (note that z is a joint assignment to several random variables). It is easy to see that the "density" of the DBN graph determines the number of parameters needed. In particular, a complete graph, to which we cannot add an arc without violating the constraints, is equivalent to a state-based representation in terms of the number of parameters needed. On the other hand, a sparse graph requires few parameters. In this paper, we assume that the learner is supplied with the DBN structure and only has to learn the conditional probability entries. It is often easy to assess structural information from experts even when precise probabilities are not available.

As in the state-based representation, we learn the parameters using Dirichlet priors for each multinomial distribution [4]. In this method, we assess the conditional probability θ_{i,y,z} using prior knowledge and the frequency of transitions observed in the past where Y_i = y among those transitions where Pa_i = z. Learning amounts to keeping counts N_{i,y,z} that record the number of transitions where Y_i = y and Pa_i = z, for each variable Y_i and values y ∈ Val(Y_i) and z ∈ Val(Pa_i). Our prior knowledge is represented by fictional counts N⁰_{i,y,z}. Then we estimate probabilities using the formula θ̂_{i,y,z} = (N_{i,y,z} + N⁰_{i,y,z}) / (N_{i,·,z} + N⁰_{i,·,z}), where N_{i,·,z} + N⁰_{i,·,z} = Σ_{y'} (N_{i,y',z} + N⁰_{i,y',z}).

We now identify which states should be reconsidered after we update the DBN parameters. Recall that this requires estimating the term ∇Q(s,a) · Δθ. Since Δθ is sparse, after making the transition s* →a* t*, we have that ∇Q(s,a) · Δθ = Σ_i ∂Q(s,a)/∂θ_{i,y*_i,z*_i} · Δθ_{i,y*_i,z*_i}, where y*_i and z*_i are the assignments to Y_i and Pa_i, respectively, in s* →a* t*. (Recall that s*, a*, and t* jointly assign values to all the variables in the DBN.)
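The count-keeping and the estimate θ̂_{i,y,z} can be sketched as follows. Names are illustrative, and the fictional count is taken to be a single uniform value per (value, parent-assignment) cell, a common simplification.

```python
from collections import defaultdict

class CptEstimator:
    """Dirichlet-smoothed per-variable CPT estimates for a DBN."""
    def __init__(self, prior=1.0):
        self.prior = prior               # fictional count N0_{i,y,z}
        self.n = defaultdict(float)      # (var, y, z) -> N_{i,y,z}
        self.values = defaultdict(set)   # var -> values of Y_i seen so far

    def observe(self, var, y, z):
        """Record one transition where Y_var = y and Pa_var = z."""
        self.n[(var, y, z)] += 1.0
        self.values[var].add(y)

    def theta(self, var, y, z):
        """theta_hat_{i,y,z} = (N + N0) / sum_{y'} (N + N0)."""
        num = self.n[(var, y, z)] + self.prior
        den = sum(self.n[(var, y2, z)] + self.prior
                  for y2 in self.values[var])
        return num / den
```

After a transition s* →a* t*, only the n cells (i, y*_i, z*_i) change, which is the sparseness in Δθ that the prioritization exploits.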
We say that a transition s →a t is consistent with an assignment X = x for a vector of random variables X, denoted (s,a,t) ⊨ (X = x), if X is assigned the value x in s →a t. We also need a similar notion for a partial description of a transition. We say that s and a are consistent with X = x, denoted (s,a,·) ⊨ (X = x), if there is a t such that (s,a,t) ⊨ (X = x). Using this notation, we can show that if (s,a,·) ⊨ (Pa_i = z*_i), then

∂Q(s,a)/∂θ_{i,y*_i,z*_i} = γ / (N_{i,·,z*_i} + N⁰_{i,·,z*_i}) · [ (1/θ_{i,y*_i,z*_i}) Σ_{t: (s,a,t) ⊨ (Y_i = y*_i)} p(t | s,a) V(t) − Σ_t p(t | s,a) V(t) ],

and if s,a are inconsistent with Pa_i = z*_i, then ∂Q(s,a)/∂θ_{i,y*_i,z*_i} = 0.

This expression shows that if s is similar to s* in that both agree on the values they assign to the parents of some Y_i (i.e., (s,a*) is consistent with z*_i), then the priority of s will change after we update the model. The magnitude of the priority change depends upon both the similarity of s and s* (i.e., how many of the terms in ∇Q(s,a) · Δθ will be nonzero) and the value of the states that can be reached from s.

Figure 1: (a) The maze used in the experiment. S marks the start state, G the goal state, and 1, 2 and 3 are the three flags the agent has to set to receive the reward. (b) The DBN structure that captures the independencies in this domain. (c) A graph showing the performance of the three procedures on this example. PS is GenPS with a state-based model, PS+factored is the same procedure but with a factored model, and GenPS exploits the factored model in prioritization. Each curve is the average of 5 runs.

The evaluation of ∂Q(s,a)/∂θ_{i,y*_i,z*_i} requires us to sum over a subset of the states, namely those states t that are consistent with z*_i.
Unfortunately, in the worst case this will be a large fragment of the state space. If the number of environment states is not large, then this might be a reasonable cost to pay for the additional benefits of GenPS. However, this might be burdensome when we have a large state space, which is exactly the case where we expect to gain the most benefit from using generalized representations such as DBNs. In these situations, we propose a heuristic approach for estimating ∇Q(s,a) · Δθ without summing over large numbers of states when computing the change of priority for each possible state. This can be done by finding upper bounds on, or estimates of, ∂Q(s,a)/∂θ_{i,y*_i,z*_i}. Once we have computed these estimates, we can estimate the priority change for each state s. We use the notation s ∼_i s* if s and s* agree on the assignment to Pa_i. If c_i is an upper bound on (or an estimate of) |∂Q(s,a)/∂θ_{i,y*_i,z*_i}|, we have that |∇Q(s,a) · Δθ| ≤ Σ_{i: s ∼_i s*} c_i. Thus, to evaluate the priority of state s, we simply find how "similar" it is to s*. Note that it is relatively straightforward to use this equation to enumerate all the states where the priority change might be large.

Finally, we note that the use of a DBN as a model does not change the way we update priorities after a value propagation step. If we use an explicit table of values, then we would update priorities as in the previous section. If we use a compact description of the value function, then we can apply GenPS to get the appropriate update rule.

5 An Experiment

We conducted an experiment to evaluate the effect of using GenPS with a generalizing model. We used a maze domain similar to the one described in [6]. The maze, shown in Figure 1(a), contains 59 cells and 3 binary flags, resulting in 59 × 2³ = 472 possible states. Initially the agent is at the start cell (marked by S) and the flags are reset.
The agent has four possible actions, up, down, left, and right, that succeed 80% of the time; 20% of the time the agent moves in an unintended perpendicular direction. The i-th flag is set when the agent leaves the cell marked by i. The agent receives a reward when it arrives at the goal cell (marked by G) and all of the flags are set. In this situation, any action resets the game. As noted in [6], this environment exhibits independencies: the probability of transition from one cell to another does not depend on the flag settings. These independencies can be captured easily by the simple DBN shown in Figure 1(b). Our experiment is designed to test the extent to which GenPS exploits the knowledge of these independencies for faster learning.

We tested three procedures. The first is GenPS with an explicit state-based model. As explained above, this variant is essentially PS. The second procedure uses a factored model of the environment for learning the model parameters, but uses the same prioritization strategy as the first one. The third procedure uses the GenPS prioritization strategy we describe in Section 4. All three procedures use the Boltzmann exploration strategy (see, for example, [5]). Finally, in each iteration these procedures process at most 10 states from the priority queue.

The results are shown in Figure 1(c). As we can see, the GenPS procedure converged faster than the procedures that used classic PS. By using the factored model we get two improvements. The first improvement is due to generalization in the model: it allows the agent to learn a good model of its environment after fewer iterations, which explains why PS+factored converges faster than PS. The second improvement is due to the better prioritization strategy, which explains the faster convergence of GenPS.
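The independence exploited here, where the next cell depends only on the current cell and action while each flag depends on its old value and the current cell, can be illustrated by a toy transition sampler. This is a stand-in for the maze dynamics, not the authors' implementation; `move_model` is a hypothetical table of cell-transition probabilities.

```python
import random

def maze_step(cell, flags, action, move_model, flag_cells, rng):
    """One factored transition of a flag-maze.

    The next cell is sampled from move_model[(cell, action)], a dict
    mapping successor cells to probabilities; it ignores the flags,
    mirroring the DBN independence. Flag i becomes set when the agent
    leaves the cell flag_cells[i].
    """
    successors = move_model[(cell, action)]
    next_cell = rng.choices(list(successors),
                            weights=list(successors.values()))[0]
    new_flags = tuple(f or (cell == flag_cells[i])
                      for i, f in enumerate(flags))
    return next_cell, new_flags
```

Because the cell variable's conditional distribution is shared across all 2³ flag settings, every observation of a move refines the model for eight states at once, which is the generalization that makes PS+factored and GenPS learn from fewer iterations.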
6 Discussion

We have presented a general method for approximating the optimal use of computational resources during reinforcement learning. Like classic prioritized sweeping, our method aims to perform only the most beneficial value propagations. By using the gradient of the Bellman equation, our method generalizes the underlying principle of prioritized sweeping. The generalized procedure can then be applied not only in the explicit, state-based case, but also in cases where approximators are used for the model. The generalized procedure also extends to cases where a function approximator (such as that discussed in [3]) is used for the value function, and future work will empirically test this application of GenPS. We are currently working on applying GenPS to other types of model and function approximators.

Acknowledgments

We are grateful to Geoff Gordon, Daishi Harada, Kevin Murphy, and Stuart Russell for discussions related to this work and comments on earlier versions of this paper. This research was supported in part by ARO under the MURI program "Integrated Approach to Intelligent Systems," grant number DAAH04-96-1-0341. The first author is supported by a National Defense Science and Engineering Graduate Fellowship.

References

[1] S. Davies. Multidimensional triangulation and interpolation for reinforcement learning. In Advances in Neural Information Processing Systems 9, 1996.
[2] T. Dean and K. Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5:142-150, 1989.
[3] G. J. Gordon. Stable function approximation in dynamic programming. In Proc. 12th Int. Conf. on Machine Learning, 1995.
[4] D. Heckerman. A tutorial on learning with Bayesian networks. Technical Report MSR-TR-95-06, Microsoft Research, 1995. Revised November 1996.
[5] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.
[6] A. W. Moore and C. G. Atkeson. Prioritized sweeping: Reinforcement learning with less data and less time. Machine Learning, 13:103-130, 1993.
[7] R. S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Machine Learning: Proc. 7th Int. Conf., 1990.
[8] P. Tadepalli and D. Ok. Scaling up average reward reinforcement learning by approximating the domain models and the value function. In Proc. 13th Int. Conf. on Machine Learning, 1996.
[9] R. J. Williams and L. C. Baird III. Tight performance bounds on greedy policies based on imperfect value functions. Technical report, Computer Science, Northeastern University, 1993.
1997
On Efficient Heuristic Ranking of Hypotheses

Steve Chien, Andre Stechert, and Darren Mutz
Jet Propulsion Laboratory, California Institute of Technology
4800 Oak Grove Drive, M/S 525-3660, Pasadena, CA 91109-8099
steve.chien@jpl.nasa.gov, Voice: (818) 306-6144, FAX: (818) 306-6912
Content Areas: Applications (Stochastic Optimization), Model Selection Algorithms

Abstract

This paper considers the problem of learning the ranking of a set of alternatives based upon incomplete information (e.g., a limited number of observations). We describe two algorithms for hypothesis ranking and their application for probably approximately correct (PAC) and expected loss (EL) learning criteria. Empirical results are provided to demonstrate the effectiveness of these ranking procedures on both synthetic datasets and real-world data from a spacecraft design optimization problem.

1 INTRODUCTION

In many learning applications, the cost of information can be quite high, imposing a requirement that the learning algorithms glean as much usable information as possible with a minimum of data. For example:

• In speedup learning, the expense of processing each training example can be significant [Tadepalli92].
• In decision tree learning, the cost of using all available training examples when evaluating potential attributes for partitioning can be computationally expensive [Musick93].
• In evaluating medical treatment policies, additional training examples imply suboptimal treatment of human subjects.
• In data-poor applications, training data may be very scarce, and learning as well as possible from limited data may be key.

This paper provides a statistical decision-theoretic framework for the ranking of parametric distributions. This framework will provide the answers to a wide range of questions about algorithms, such as: How much information is enough? At what point do we have adequate information to rank the alternatives with some requested confidence?
The remainder of this paper is structured as follows. First, we describe the hypothesis ranking problem more formally, including definitions for the probably approximately correct (PAC) and expected loss (EL) decision criteria. We then define two algorithms for establishing these criteria for the hypothesis ranking problem: a recursive hypothesis selection algorithm and an adjacency-based algorithm. Next, we describe empirical tests demonstrating the effectiveness of these algorithms as well as documenting their improved performance over a standard algorithm from the statistical ranking literature. Finally, we describe related work and future extensions to the algorithms.

2 HYPOTHESIS RANKING PROBLEMS

Hypothesis ranking problems, an extension of hypothesis selection problems, are an abstract class of learning problems where an algorithm is given a set of hypotheses to rank according to expected utility over some unknown distribution, where the expected utility must be estimated from training data. In many of these applications, a system chooses a single alternative and never revisits the decision. However, some systems require the ability to investigate several options (either serially or in parallel), such as in beam search or iterative broadening, where the ranking formulation is most appropriate. Also, as is the case with evolutionary approaches, a system may need to populate future alternative hypotheses on the basis of the ranking of the current population [Goldberg89].

In any hypothesis evaluation problem, always achieving a correct ranking is impossible in practice, because the actual underlying probability distributions are unavailable and there is always a (perhaps vanishingly) small chance that the algorithms will be unlucky, because only a finite number of samples can be taken.
Consequently, rather than always requiring an algorithm to output a correct ranking, we impose probabilistic criteria on the rankings to be produced. While several families of such requirements exist, in this paper we examine two: the probably approximately correct (PAC) requirement from the computational learning theory community [Valiant84] and the expected loss (EL) requirement frequently used in decision theory and gaming problems [Russell92].

The expected utility of a hypothesis can be estimated by observing its values over a finite set of training examples. However, to satisfy the PAC and EL requirements, an algorithm must also be able to reason about the potential difference between the estimated and true utilities of each hypothesis. Let U_i be the true expected utility of hypothesis i and let Û_i be the estimated expected utility of hypothesis i. Without loss of generality, let us presume that the proposed ranking of hypotheses is U_1 > U_2 > ... > U_{k−1} > U_k. The PAC requirement states that, for some user-specified ε, with probability 1 − δ:

∧_{i=1}^{k−1} [(U_i + ε) > MAX(U_{i+1}, ..., U_k)]   (1)

Correspondingly, let the loss L of selecting a hypothesis H_1 to be the best from a set of k hypotheses H_1, ..., H_k be as follows:

L(H_1, {H_1, ..., H_k}) = MAX(0, MAX(U_2, ..., U_k) − U_1)   (2)

and let the loss RL of a ranking H_1, ..., H_k be as follows:

RL(H_1, ..., H_k) = Σ_{i=1}^{k−1} L(H_i, {H_{i+1}, ..., H_k})   (3)

A hypothesis ranking algorithm that obeys the expected loss requirement must produce rankings that on average have less than the requested expected loss bound.

Consider ranking the hypotheses with expected utilities U_1 = 1.0, U_2 = 0.95, U_3 = 0.86. The ranking U_2 > U_1 > U_3 is a valid PAC ranking for ε = 0.06 but not for ε = 0.01, and it has an observed loss of 0.05 + 0 = 0.05.
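Equations (1)-(3) are easy to check on the example above. The following sketch computes the observed ranking loss and tests the PAC condition for a given ε; utilities are listed in the proposed ranking order.

```python
def ranking_loss(us):
    """RL of a proposed ranking (Eq. 3): the sum of per-selection losses,
    where L(H1, {H1..Hk}) = max(0, max(U2..Uk) - U1) (Eq. 2)."""
    return sum(max(0.0, max(us[i + 1:]) - us[i])
               for i in range(len(us) - 1))

def is_pac_ranking(us, eps):
    """True iff U_i + eps > max(U_{i+1}..U_k) for every i (Eq. 1)."""
    return all(us[i] + eps > max(us[i + 1:])
               for i in range(len(us) - 1))
```

For the paper's example, `ranking_loss([0.95, 1.0, 0.86])` gives 0.05, and the PAC check succeeds for ε = 0.06 but fails for ε = 0.01, matching the text.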
However, while the confidence in a pairwise comparison between two hypotheses is well understood, it is less clear how to ensure that the desired confidence is met in the set of comparisons required for a selection, or in the more complex set of comparisons required for a ranking. Equation 4 defines the confidence that U_i + ε > U_j when the distribution underlying the utilities is normally distributed with unknown and unequal variances:

Pr[U_i + ε > U_j] = Φ( √n (Ū_{i−j} + ε) / s_{i−j} )   (4)

where Φ represents the cumulative standard normal distribution function, and n, Ū_{i−j}, and s_{i−j} are the size, sample mean, and sample standard deviation of the blocked differential distribution, respectively.¹

Likewise, computation of the expected loss for asserting an ordering between a pair of hypotheses is well understood, but the estimation of expected loss for an entire ranking is less clear. Equation 5 defines the expected loss for drawing the conclusion U_i > U_j, again under the assumption of normality (see [Chien95] for further details):

EL(U_i > U_j) = (s_{i−j} / √(2πn)) e^{−n Ū²_{i−j} / (2 s²_{i−j})} − (Ū_{i−j} / √(2π)) ∫_{−∞}^{−√n Ū_{i−j} / s_{i−j}} e^{−z²/2} dz   (5)

In the next two subsections, we describe two interpretations for estimating the likelihood that an overall ranking satisfies the PAC or EL requirements by estimating and combining pairwise PAC errors or EL estimates. Each of these interpretations lends itself directly to an algorithmic implementation as described below.

2.1 RANKING AS RECURSIVE SELECTION

One way to determine a ranking H_1, ..., H_k is to view ranking as recursive selection from the set of remaining candidate hypotheses. In this view, the overall ranking error, as specified by the desired confidence in PAC algorithms and the loss threshold in EL algorithms, is first distributed among k − 1 selection errors, which are then further subdivided into pairwise comparison errors. Data is then sampled until the estimates of the pairwise comparison error (as dictated by Equation 4 or 5) satisfy the bounds set by the algorithm.
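Under the normal model, Equations (4) and (5) can be computed directly from the blocked-difference statistics. The sketch below uses the closed form of the Gaussian expected-loss integral; it should be read as an approximation consistent with the reconstructed formulas above, not a verbatim transcription of [Chien95].

```python
import math

def phi(x):
    """Cumulative standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pairwise_confidence(mean_diff, s_diff, n, eps):
    """Confidence that U_i + eps > U_j (Eq. 4), given n blocked
    differences u_i - u_j with sample mean mean_diff and std s_diff."""
    return phi(math.sqrt(n) * (mean_diff + eps) / s_diff)

def pairwise_expected_loss(mean_diff, s_diff, n):
    """Expected loss E[max(0, U_j - U_i)] of concluding U_i > U_j
    (closed form of Eq. 5): for D ~ N(mu, sigma^2),
    E[max(0, -D)] = sigma * pdf(mu/sigma) - mu * Phi(-mu/sigma)."""
    sigma = s_diff / math.sqrt(n)
    z = mean_diff / sigma
    return (sigma * math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
            - mean_diff * phi(-z))
```

As expected, the confidence grows with the observed mean difference and with ε, while the expected loss shrinks as the observed difference grows relative to its standard error.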
Thus, another degree of freedom in the design of recursive ranking algorithms is the method by which the overall ranking error is ultimately distributed among individual pairwise comparisons between hypotheses. Two factors influence the way in which we compute the error distribution. First, our model of error combination determines how the error allocated for individual comparisons or selections combines into overall ranking error, and thus how many candidates are available as targets for the distribution. Using Bonferroni's inequality, one can combine errors additively, but a more conservative approach might be to assert that, because the predicted "best" hypothesis may change during sampling, in the worst case the conclusion might depend on all possible pairwise comparisons, and thus the error should be distributed among all (k choose 2) pairs of hypotheses.²

¹Note that in our approach we block examples to further reduce sampling complexity. Blocking forms estimates by using the difference in utility between competing hypotheses on each observed example. Blocking can significantly reduce the variance in the data when the hypotheses are not independent. It is trivial to modify the formulas to address the cases in which it is not possible to block data (see [Moore94, Chien95] for further details).

²For a discussion of this issue, see pp. 18-20 of [Gratch93].

Second, our policy with respect to allocation of error among the candidate comparisons or selections determines how samples will be distributed. For example, in some contexts, the consequences of early selections far outweigh those of later selections. For these scenarios, we have implemented ranking algorithms that divide overall ranking error unequally in favor of earlier selections.³
Also, it is possible to divide selection error into pairwise error unequally, based on estimates of hypothesis parameters, in order to reduce sampling cost (for example, [Gratch94] allocates error rationally). Within the scope of this paper, we only consider algorithms that: (1) combine pairwise error into selection error additively, (2) combine selection error into overall ranking error additively, and (3) allocate error equally at each level.

One disadvantage of recursive selection is that once a hypothesis has been selected, it is removed from the pool of candidate hypotheses. This causes problems in rare instances when, while sampling to increase the confidence of some later selection, the estimate for a hypothesis' mean changes enough that some previously selected hypothesis no longer dominates it. In this case, the algorithm is restarted, taking into account the data sampled so far. These assumptions result in the following formulations (where δ(U_1 ▷_ε {U_2, ..., U_k}) is used to denote the error due to the action of selecting hypothesis 1 under Equation 1 from the set {H_1, ..., H_k}, and EL(U_1 ▷ {U_2, ..., U_k}) denotes the error due to selection loss in situations where Equation 2 applies):

δ_rec(U_1 > U_2 > ... > U_k) = δ_rec(U_2 > U_3 > ... > U_k) + δ(U_1 ▷_ε {U_2, ..., U_k})   (6)

where δ_rec(U_k) = 0 (the base case for the recursion) and the selection error is as defined in [Chien95]:

δ(U_1 ▷_ε {U_2, ..., U_k}) = Σ_{i=2}^{k} δ_{1,i}   (7)

using Equation 4 to compute pairwise confidence. Algorithmically, we implement this by:

1. sampling a default number of times to seed the estimates for each hypothesis mean and variance,
2. allocating the error to selection and pairwise comparisons as indicated above,
3. sampling until the desired confidences for successive selections are met, and
4. restarting the algorithm if any of the hypothesis means changed significantly enough to change the overall ranking.
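Steps 1-4 can be sketched as a sampling loop. Here `sample` and `pairwise_ok` are illustrative stand-ins for the statistical machinery of [Chien95] (e.g., a check that the allotted pairwise errors of Equation 4 or 5 are met), and the restart of step 4 is approximated by re-sorting the remaining pool on the current sample means.

```python
def recursive_rank(hyps, sample, pairwise_ok, seed_n=5):
    """Skeleton of ranking-as-recursive-selection (steps 1-4 above).

    sample(h) draws one utility observation for hypothesis h;
    pairwise_ok(data, best, rest) decides whether the pairwise error
    bounds allotted to selecting `best` over `rest` are satisfied.
    """
    def mean(h):
        return sum(data[h]) / len(data[h])

    data = {h: [sample(h) for _ in range(seed_n)] for h in hyps}  # step 1
    ranking, remaining = [], list(hyps)
    while len(remaining) > 1:
        remaining.sort(key=mean, reverse=True)
        best, rest = remaining[0], remaining[1:]
        while not pairwise_ok(data, best, rest):                  # step 3
            for h in remaining:                                   # more data
                data[h].append(sample(h))
            remaining.sort(key=mean, reverse=True)                # ~ step 4
            best, rest = remaining[0], remaining[1:]
        ranking.append(best)
        remaining = rest
    ranking.extend(remaining)
    return ranking
```

Step 2, the allocation of the overall ranking error across selections and pairwise comparisons, is assumed to be folded into `pairwise_ok`, matching design choices (1)-(3) above.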
An analogous recursive selection algorithm based on expected loss is defined as follows:

EL_rec(U_1 > U_2 > ... > U_k) = EL_rec(U_2 > U_3 > ... > U_k) + EL(U_1 ▷ {U_2, ..., U_k})   (8)

where EL_rec(U_k) = 0 and the selection EL is as defined in [Chien95]:

EL(U_1 ▷ {U_2, ..., U_k}) = Σ_{i=2}^{k} EL(U_1, U_i)   (9)

³Space constraints preclude their description here.

2.2 RANKING BY COMPARISON OF ADJACENT ELEMENTS

Another interpretation of ranking confidence (or loss) is that only adjacent elements in the ranking need be compared. In this case, the overall ranking error is divided directly into k − 1 pairwise comparison errors. This leads to the following confidence equation for the PAC criterion:

δ_adj(U_1 > U_2 > ... > U_k) = Σ_{i=1}^{k−1} δ_{i,i+1}   (10)

And the following equation for the EL criterion:

EL_adj(U_1 > U_2 > ... > U_k) = Σ_{i=1}^{k−1} EL(U_i, U_{i+1})   (11)

Because ranking by comparison of adjacent hypotheses does not establish the dominance between non-adjacent hypotheses (where the hypotheses are ordered by observed mean utility), it has the advantage of requiring fewer comparisons than recursive selection (and thus may require fewer samples). However, for the same reason, adjacency algorithms may be less likely to correctly bound the probability of correct selection (or average loss) than the recursive selection algorithms. In the case of the PAC algorithms, this is because ε-dominance is not necessarily transitive. In the case of the EL algorithms, it is because expected loss is not additive when considering two hypothesis relations sharing a common hypothesis. For instance, the size of the blocked differential distribution may be different for each of the pairs of hypotheses being compared.

2.3 OTHER RELEVANT APPROACHES

Most standard statistical ranking/selection approaches make strong assumptions about the form of the problem (e.g., the variances associated with the underlying utility distributions of the hypotheses might be assumed known and equal).
Among these, Turnbull and Weiss [Turnbull84] is most comparable to our PAC-based approach.⁴ Turnbull and Weiss treat hypotheses as normal random variables with unknown means and unknown and unequal variances. However, they make the additional stipulation that hypotheses are independent. So, while it is still reasonable to use this approach when the candidate hypotheses are not independent, excessive statistical error or unnecessarily large training set sizes may result.

3 EMPIRICAL PERFORMANCE EVALUATION

We now turn to empirical evaluation of the hypothesis ranking techniques on real-world datasets. This evaluation serves three purposes. First, it demonstrates that the techniques perform as predicted (in terms of bounding the probability of incorrect selection or expected loss). Second, it validates the performance of the techniques as compared to standard algorithms from the statistical literature. Third, the evaluation demonstrates the robustness of the new approaches on real-world hypothesis ranking problems.

An experimental trial consists of solving a hypothesis ranking problem with a given technique and a given set of problem and control parameters. We measure performance by (1) how well the algorithms satisfy their respective criteria and (2) the number of samples taken. Since the performance of these statistical algorithms on any single trial provides little information about their overall behavior, each trial is repeated multiple times and the results are averaged across 100 trials. Because the PAC and expected loss criteria are not directly comparable, the approaches are analyzed separately.

Experimental results from synthetic datasets are reported in [Chien97]. The evaluation of our approach on artificially generated data is used to show: (1) that the techniques correctly bound the probability of incorrect ranking and expected loss as predicted when the underlying assumptions are valid, even when the underlying utility distributions are inherently hard to rank, and (2) that the PAC techniques compare favorably to the algorithm of Turnbull and Weiss in a wide variety of problem configurations.

The test of real-world applicability is based on data drawn from an actual NASA spacecraft design optimization application. This data provides a strong test of the applicability of the techniques in that all of the statistical techniques make some form of normality assumption, yet the data in this application is highly non-normal. Tables 1 and 2 show the results of ranking 10 penetrator designs using the PAC-based, Turnbull, and expected loss algorithms. In this problem the utility function is the depth of penetration of the penetrator, with those cases in which the penetrator does not penetrate being assigned zero utility. As shown in Table 1, both PAC algorithms significantly outperformed the Turnbull algorithm, which is to be expected because the hypotheses are somewhat correlated (via impact orientations and soil densities). Table 2 shows that the EL_rec expected loss algorithm effectively bounded actual loss, but the EL_adj algorithm was inconsistent.

Table 1: Estimated expected total number of observations to rank DS-2 spacecraft designs. Achieved probability of correct ranking is shown in parentheses.

  k    γ     Δ    Turnbull      PAC_rec       PAC_adj
  10   0.75  2    534 (0.96)    144 (1.00)     92 (0.98)
  10   0.90  2    667 (0.98)    160 (1.00)     98 (1.00)
  10   0.95  2    793 (0.99)    177 (1.00)    103 (0.99)

Table 2: Estimated expected total number of observations and expected loss of an incorrect ranking of DS-2 penetrator designs.

  Parameters         EL_rec              EL_adj
  k    loss bound    Samples   Loss      Samples   Loss
  10   0.10          152       0.005      77       0.014
  10   0.05          200       0.003      90       0.006
  10   0.02          378       0.003     139       0.003

⁴PAC-based approaches have been investigated extensively in the statistical ranking and selection literature under the topic of confidence-interval-based algorithms (see [Haseeb85] for a review of the recent literature).
4 DISCUSSION AND CONCLUSIONS

There are a number of areas of related work. First, there has been considerable analysis of hypothesis selection problems. Selection problems have been formalized using a Bayesian framework [Moore94, Rivest88] that does not require an initial sample, but uses a rigorous encoding of prior knowledge. Howard [Howard70] also details a Bayesian framework for analyzing learning cost for selection problems. If one uses a hypothesis selection framework for ranking, allocation of pairwise errors can be performed rationally [Gratch94]. Reinforcement learning work [Kaelbling93] with immediate feedback can also be viewed as a hypothesis selection problem.

In summary, this paper has described the hypothesis ranking problem, an extension of the hypothesis selection problem. We defined the application of two decision criteria, probably approximately correct and expected loss, to this problem. We then defined two families of algorithms, recursive selection and adjacency, for the solution of hypothesis ranking problems. Finally, we demonstrated the effectiveness of these algorithms on both synthetic and real-world datasets, documenting improved performance over existing statistical approaches.

450 S. Chien, A. Stechert and D. Mutz

References

[Bechhofer54] R. E. Bechhofer, "A Single-sample Multiple Decision Procedure for Ranking Means of Normal Populations with Known Variances," Annals of Math. Statistics (25) 1, 1954, pp. 16-39.
[Chien95] S. A. Chien, J. M. Gratch and M. C. Burl, "On the Efficient Allocation of Resources for Hypothesis Evaluation: A Statistical Approach," IEEE Trans. Pattern Analysis and Machine Intelligence 17 (7), July 1995, pp. 652-665.
[Chien97] S. Chien, A. Stechert, and D. Mutz, "Efficiently Ranking Hypotheses in Machine Learning," JPL-D-14661, June 1997. Available online at http://wwwaig.jpl.nasa.gov/public/www/pas-bibliography.html
[Goldberg89] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989.
[Govind81] Z. Govindarajulu, The Sequential Statistical Analysis, American Sciences Press, Columbus, OH, 1981.
[Gratch92] J. Gratch and G. DeJong, "COMPOSER: A Probabilistic Solution to the Utility Problem in Speed-up Learning," Proc. AAAI92, San Jose, CA, July 1992, pp. 235-240.
[Gratch93] J. Gratch, "COMPOSER: A Decision-theoretic Approach to Adaptive Problem Solving," Tech. Rep. UIUCDCS-R-93-1806, Dept. Comp. Sci., Univ. Illinois, May 1993.
[Gratch94] J. Gratch, S. Chien, and G. DeJong, "Improving Learning Performance Through Rational Resource Allocation," Proc. AAAI94, Seattle, WA, August 1994, pp. 576-582.
[Greiner92] R. Greiner and I. Jurisica, "A Statistical Approach to Solving the EBL Utility Problem," Proc. AAAI92, San Jose, CA, July 1992, pp. 241-248.
[Haseeb85] R. M. Haseeb, Modern Statistical Selection, Columbus, OH: Am. Sciences Press, 1985.
[Hogg78] R. V. Hogg and A. T. Craig, Introduction to Mathematical Statistics, Macmillan Inc., London, 1978.
[Howard70] R. A. Howard, "Decision Analysis: Perspectives on Inference, Decision, and Experimentation," Proceedings of the IEEE 58, 5 (1970), pp. 823-834.
[Kaelbling93] L. P. Kaelbling, Learning in Embedded Systems, MIT Press, Cambridge, MA, 1993.
[Minton88] S. Minton, Learning Search Control Knowledge: An Explanation-Based Approach, Kluwer Academic Publishers, Norwell, MA, 1988.
[Moore94] A. W. Moore and M. S. Lee, "Efficient Algorithms for Minimizing Cross Validation Error," Proc. ML94, New Brunswick, NJ, July 1994.
[Musick93] R. Musick, J. Catlett and S. Russell, "Decision Theoretic Subsampling for Induction on Large Databases," Proc. ML93, Amherst, MA, June 1993, pp. 212-219.
[Rivest88] R. L. Rivest and R. Sloan, "A New Model for Inductive Inference," Proc. 2nd Conference on Theoretical Aspects of Reasoning about Knowledge, 1988.
[Russell92] S. Russell and E. Wefald, Do the Right Thing: Studies in Limited Rationality, MIT Press, MA.
[Tadepalli92] P. Tadepalli, "A theory of unsupervised speedup learning," Proc. AAAI92, pp. 229-234.
[Turnbull84] Turnbull and Weiss, "A class of sequential procedures for k-sample problems concerning normal means with unknown unequal variances," in Design of Experiments: Ranking and Selection, T. J. Santner and A. C. Tamhane (eds.), Marcel Dekker, 1984.
[Valiant84] L. G. Valiant, "A Theory of the Learnable," Communications of the ACM 27, (1984), pp. 1134-1142.
1997
Multiresolution Tangent Distance for Affine-invariant Classification

Nuno Vasconcelos  Andrew Lippman
MIT Media Laboratory, 20 Ames St, E15-320M, Cambridge, MA 02139, {nuno,lip}@media.mit.edu

Abstract

The ability to rely on similarity metrics invariant to image transformations is an important issue for image classification tasks such as face or character recognition. We analyze an invariant metric that has performed well for the latter - the tangent distance - and study its limitations when applied to regular images, showing that the most significant among these (convergence to local minima) can be drastically reduced by computing the distance in a multiresolution setting. This leads to the multiresolution tangent distance, which exhibits significantly higher invariance to image transformations, and can be easily combined with robust estimation procedures.

1 Introduction

Image classification algorithms often rely on distance metrics which are too sensitive to variations in the imaging environment or set-up (e.g. the Euclidean and Hamming distances), or on metrics which, even though less sensitive to these variations, are application-specific or too expensive from a computational point of view (e.g. deformable templates). A solution to this problem, combining invariance to image transformations with computational simplicity and general-purpose applicability, was introduced by Simard et al. in [7]. The key idea is that, when subject to spatial transformations, images describe manifolds in a high-dimensional space, and an invariant metric should measure the distance between those manifolds instead of the distance between other properties of (or features extracted from) the images themselves.
Because these manifolds are complex, minimizing the distance between them is a difficult optimization problem which can, nevertheless, be made tractable by considering the minimization of the distance between the tangents to the manifolds - the tangent distance (TD) - instead of that between the manifolds themselves. While it has led to impressive results for the problem of character recognition [8], the linear approximation inherent to the TD is too stringent for regular images, leading to invariance over only a very narrow range of transformations. In this work we embed the distance computation in a multiresolution framework [3], leading to the multiresolution tangent distance (MRTD). Multiresolution decompositions are common in the vision literature and have been known to improve the performance of image registration algorithms by extending the range over which linear approximations hold [5, 1]. In particular, the MRTD has several appealing properties: 1) it maintains the general-purpose nature of the TD; 2) it can be easily combined with robust estimation procedures, exhibiting invariance to moderate non-linear image variations (such as those caused by slight variations in shape or occlusions); 3) it is amenable to computationally efficient screening techniques where bad matches are discarded at low resolutions; and 4) it can be combined with several types of classifiers. Face recognition experiments show that the MRTD exhibits a significantly extended invariance to image transformations, originating improvements in recognition accuracy as high as 38% for the hardest problems considered.
2 The tangent distance

Consider the manifold described by all the possible linear transformations that a pattern $I(x)$ may be subject to,

$$T_p[I(x)] = I(\psi(x, p)), \qquad (1)$$

where $x$ are the spatial coordinates over which the pattern is defined, $p$ is the set of parameters which define the transformation, and $\psi$ is a function typically linear in $p$, but not necessarily linear in $x$. Given two patterns $M(x)$ and $N(x)$, the distance between the associated manifolds - the manifold distance (MD) - is

$$\mathcal{T}(M, N) = \min_{p,q} \| T_q[M(x)] - T_p[N(x)] \|^2. \qquad (2)$$

For simplicity, we consider a version of the distance in which only one of the patterns is subject to a transformation, i.e.

$$\mathcal{T}(M, N) = \min_{p} \| M(x) - T_p[N(x)] \|^2, \qquad (3)$$

but all results can be extended to the two-sided distance. Using the fact that

$$\nabla_p T_p[N(x)] = \nabla_p N(\psi(x, p)) = \nabla_p \psi(x, p)\, \nabla_x N(\psi(x, p)), \qquad (4)$$

where $\nabla_p T_p$ is the gradient of $T_p$ with respect to $p$, $T_p[N(x)]$ can, for small $p$, be approximated by a first-order Taylor expansion around the identity transformation,

$$T_p[N(x)] = N(x) + (p - I)^T \nabla_p \psi(x, p)\, \nabla_x N(x).$$

This is equivalent to approximating the manifold by a tangent hyperplane, and leads to the TD. Substituting this expression in equation 3, setting the gradient with respect to $p$ to zero, and solving for $p$ leads to

$$p = \Big[ \sum_x \nabla_p \psi(x, p)\, \nabla_x N(x)\, \nabla_x^T N(x)\, \nabla_p^T \psi(x, p) \Big]^{-1} \sum_x D(x)\, \nabla_p \psi(x, p)\, \nabla_x N(x) + I, \qquad (5)$$

where $D(x) = M(x) - N(x)$. Given this optimal $p$, the TD between the two patterns is computed using equations 1 and 3. The main limitation of this formulation is that it relies on a first-order Taylor series approximation, which is valid only over a small range of variation in the parameter vector $p$.
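The closed-form solution of equation 5 is easiest to see in a toy setting. The sketch below (my own illustration, not the authors' code) computes a one-sided tangent distance for a 1-D pattern under pure translation, where the single tangent vector is the spatial gradient of N and equation 5 reduces to a scalar least-squares fit.

```python
# Toy 1-D tangent distance under pure translation (illustrative sketch,
# not the authors' code). With a single shift parameter p, the tangent
# vector is the spatial gradient of N, and equation (5) reduces to a
# scalar least-squares solution.
import math

def spatial_gradient(sig):
    """Central-difference gradient with circular boundary."""
    n = len(sig)
    return [(sig[(i + 1) % n] - sig[(i - 1) % n]) / 2.0 for i in range(n)]

def tangent_distance_1d(M, N):
    """Return (squared TD, estimated shift p) for min_p ||M - (N + p*dN/dx)||^2."""
    g = spatial_gradient(N)
    D = [m - n for m, n in zip(M, N)]
    # closed-form least squares for the scalar shift p
    p = sum(d * gi for d, gi in zip(D, g)) / sum(gi * gi for gi in g)
    resid = [d - p * gi for d, gi in zip(D, g)]
    return sum(r * r for r in resid), p

n = 64
N_pat = [math.sin(2 * math.pi * i / n) for i in range(n)]
M_pat = [math.sin(2 * math.pi * (i + 1) / n) for i in range(n)]  # shifted copy
td, p_hat = tangent_distance_1d(M_pat, N_pat)
ed = sum((m - x) ** 2 for m, x in zip(M_pat, N_pat))  # plain Euclidean distance
```

For a sinusoid shifted by one sample, the estimated shift is recovered almost exactly, and the tangent distance falls far below the Euclidean distance, illustrating the invariance the linearization buys within its range of validity.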
2.1 Manifold distance via Newton's method

The minimization of the MD of equation 3 can also be performed through Newton's method, which consists of the iteration

$$p^{n+1} = p^n - \alpha \left[ \nabla_p^2 \mathcal{T} \big|_{p=p^n} \right]^{-1} \nabla_p \mathcal{T} \big|_{p=p^n}, \qquad (6)$$

where $\nabla_p \mathcal{T}$ and $\nabla_p^2 \mathcal{T}$ are, respectively, the gradient and Hessian of the cost function of equation 3 with respect to the parameter $p$,

$$\nabla_p \mathcal{T} = -2 \sum_x \left[ M(x) - T_p[N(x)] \right] \nabla_p T_p[N(x)],$$
$$\nabla_p^2 \mathcal{T} = 2 \sum_x \left[ \nabla_p T_p[N(x)]\, \nabla_p^T T_p[N(x)] - \left[ M(x) - T_p[N(x)] \right] \nabla_p^2 T_p[N(x)] \right].$$

Disregarding the term which contains second-order derivatives ($\nabla_p^2 T_p[N(x)]$), choosing $p^0 = I$ and $\alpha = 1$, using 4, and substituting in 6 leads to equation 5. I.e., the TD corresponds to a single iteration of the minimization of the MD by a simplified version of Newton's method where second-order derivatives are disregarded. This reduces the rate of convergence of Newton's method, and a single iteration may not be enough to achieve the local minimum, even for simple functions. It is, therefore, possible to achieve improvement if the iteration described by equation 6 is repeated until convergence.

3 The multiresolution tangent distance

The iterative minimization of equation 6 suffers from two major drawbacks [2]: 1) it may require a significant number of iterations for convergence and 2) it can easily get trapped in local minima. Both these limitations can be, at least partially, avoided by embedding the computation of the MD in a multiresolution framework, leading to the multiresolution manifold distance (MRMD). For its computation, the patterns to classify are first subject to a multiresolution decomposition, and the MD is then iteratively computed for each layer, using the estimate obtained from the layer above as a starting point,

$$p_l^{n+1} = p_l^n + \alpha \Big[ \sum_x \nabla_p \psi(x, p)\, \nabla_x N'(x)\, \nabla_x^T N'(x)\, \nabla_p^T \psi(x, p) \Big]^{-1} \sum_x D_l(x)\, \nabla_p \psi(x, p)\, \nabla_x N'(x), \qquad (7)$$

where $N'(x) = T_{p_l^n}[N(x)]$ and $D_l(x) = M(x) - T_{p_l^n}[N(x)]$. If only one iteration is allowed at each image resolution, the MRMD becomes the multiresolution extension of the TD, i.e. the multiresolution tangent distance (MRTD).
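The benefit of computing the distance at a coarse scale first can be demonstrated numerically. The following sketch is an illustration of the general idea, not code from the paper; a circular moving average stands in for the Gaussian decomposition. It counts local minima of the distance-versus-shift function for a two-tone signal at fine and coarse resolution.

```python
# Illustration (mine, not the paper's): distance-vs-shift for a two-tone
# signal. Smoothing removes the high-frequency component, leaving a
# distance function with a single basin of attraction.
import math

N = 256
f = [math.sin(2 * math.pi * i / N) + math.sin(2 * math.pi * 8 * i / N)
     for i in range(N)]

def smooth(sig, width):
    """Circular moving average (a crude low-pass standing in for a Gaussian)."""
    n = len(sig)
    return [sum(sig[(i + j) % n] for j in range(width)) / width
            for i in range(n)]

def shift_distance(sig, d):
    """Squared Euclidean distance between sig and its circular shift by d."""
    n = len(sig)
    return sum((sig[i] - sig[(i + d) % n]) ** 2 for i in range(n))

def count_local_minima(sig, max_shift):
    E = [shift_distance(sig, d) for d in range(max_shift + 1)]
    return sum(1 for d in range(1, max_shift)
               if E[d] < E[d - 1] and E[d] < E[d + 1])

fine_minima = count_local_minima(f, 128)            # spurious minima present
coarse_minima = count_local_minima(smooth(f, 32), 128)  # single smooth basin
```

At the fine scale the fast component creates spurious local minima of the distance, while at the coarse scale only the global basin around zero shift survives, which is exactly why a coarse-level estimate is a safe starting point for the finer levels.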
To illustrate the benefits of minimization over different scales, consider the signal $f(t) = \sum_{k=1}^{K} \sin(\omega_k t)$, and the manifold generated by all its possible translations $f'(t, d) = f(t + d)$. Figure 1 depicts the multiresolution Gaussian decomposition of $f(t)$, together with the Euclidean distance to the points on the manifold as a function of the translation associated with each of them ($d$). Notice that as the resolution increases, the distance function has more local minima, and the range of translations over which an initial guess is guaranteed to lead to convergence to the global minimum (at $d = 0$) is smaller. I.e., at higher resolutions, a better initial estimate is necessary to obtain the same performance from the minimization algorithm. Notice also that, since the function to minimize is very smooth at the lowest resolutions, the minimization will require few iterations at these resolutions if a procedure such as Newton's method is employed. Furthermore, since the minimum at one resolution is a good guess for the minimum at the next resolution, the computational effort required to reach that minimum will also be small. Finally, since a minimum at low resolutions is based on coarse, or global, information about the function or patterns to be classified, it is likely to be the global minimum of at least a significant region of the parameter space, if not the true global minimum.

Figure 1: Top: Three scales of the multiresolution decomposition of $f(t)$. Bottom: Euclidean distance vs. translation for each scale. Resolution decreases from left to right.

4 Affine-invariant classification

There are many linear transformations which can be used in equation 1.
In this work, we consider manifolds generated by affine transformations,

$$\psi(x, p) = \begin{bmatrix} x & y & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x & y & 1 \end{bmatrix} p = \Phi(x)\, p, \qquad (8)$$

where $p$ is the vector of parameters which characterize the transformation. Taking the gradient of equation 8 with respect to $p$, $\nabla_p \psi(x, p) = \Phi(x)^T$, using equation 4, and substituting in equation 7,

$$p_l^{n+1} = p_l^n + \alpha \Big[ \sum_x \Phi(x)^T \nabla_x N'(x)\, \nabla_x^T N'(x)\, \Phi(x) \Big]^{-1} \sum_x D'(x)\, \Phi(x)^T \nabla_x N'(x), \qquad (9)$$

where $N'(x) = N(\psi(x, p_l))$ and $D'(x) = M(x) - N'(x)$. For a given level $l$ of the multiresolution decomposition, the iterative process of equation 9 can be summarized as follows.

1. Compute $N'(x)$ by warping the pattern to classify $N(x)$ according to the best current estimate of $p_l$, and compute its spatial gradient $\nabla_x N'(x)$.
2. Update the estimate of $p_l$ according to equation 9.
3. Stop if convergence, otherwise go to 1.

Once the final $p_l$ is obtained, it is passed to the multiresolution level below (by doubling the translation parameters), where it is used as the initial estimate. Given the values of $p_l$ which minimize the MD between a pattern to classify and a set of prototypes in the database, a K-nearest-neighbor classifier is used to find the pattern's class.

5 Robust classifiers

One issue of importance for pattern recognition systems is that of robustness to outliers, i.e. errors which occur with low probability but which can have large magnitude. Examples are errors due to variation of facial features (e.g. faces shot with or without glasses) in face recognition, errors due to undesired blobs of ink or uneven line thickness in character recognition, or errors due to partial occlusions (such as a hand in front of a face) or partially missing patterns (such as an undotted i). It is well known that a few (maybe even one) outliers of high leverage are sufficient to throw mean-squared-error estimators completely off-track [6].
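The three-step loop of Section 4 (warp, update, descend the pyramid) can be sketched in a 1-D, translation-only analogue. This is my own illustration, not the paper's implementation: with a single shift parameter, the update of equation 9 collapses to a scalar Gauss-Newton step, and the coarse estimate is doubled when descending to each finer level.

```python
# 1-D, translation-only analogue of the coarse-to-fine loop (illustrative
# sketch, not the paper's code): warp with the current estimate, take a
# closed-form least-squares step, and descend the pyramid doubling the
# translation estimate.
import math

def downsample(sig):
    """Average adjacent pairs: one pyramid level down."""
    return [(sig[2 * i] + sig[2 * i + 1]) / 2.0 for i in range(len(sig) // 2)]

def sample(sig, x):
    """Linearly interpolated circular lookup."""
    n = len(sig)
    i = math.floor(x)
    frac = x - i
    return (1 - frac) * sig[i % n] + frac * sig[(i + 1) % n]

def refine_shift(M, N, p, iters=5):
    """Iterate: warp N by p, then apply the scalar Gauss-Newton update."""
    n = len(M)
    for _ in range(iters):
        warped = [sample(N, i + p) for i in range(n)]                 # step 1
        grad = [sample(N, i + p + 0.5) - sample(N, i + p - 0.5)
                for i in range(n)]
        num = sum((M[i] - warped[i]) * grad[i] for i in range(n))     # step 2
        den = sum(g * g for g in grad)
        p += num / den
    return p

def coarse_to_fine_shift(M, N, levels=3):
    pyrM, pyrN = [M], [N]
    for _ in range(levels - 1):
        pyrM.append(downsample(pyrM[-1]))
        pyrN.append(downsample(pyrN[-1]))
    p = 0.0
    for l in range(levels - 1, -1, -1):
        if l < levels - 1:
            p *= 2.0  # translations double when moving to the finer level
        p = refine_shift(pyrM[l], pyrN[l], p)
    return p

n = 256
ref = [math.sin(2 * math.pi * i / n) + 0.5 * math.sin(2 * math.pi * 5 * i / n)
       for i in range(n)]
target = [sample(ref, i + 8.0) for i in range(n)]  # ref shifted by 8 samples
p_est = coarse_to_fine_shift(target, ref)
```

A shift of 8 samples is 2 samples at the coarsest of three levels, well inside the linearization range there, so the estimate converges even though a single fine-level step from zero would risk a local minimum.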
Several robust estimators have been proposed in the statistics literature to avoid this problem. In this work we consider M-estimators [4], which can be very easily incorporated in the MD classification framework. M-estimators are an extension of least-squares estimators where the square function is substituted by a functional $\rho(x)$ which weighs large errors less heavily. The robust-estimator version of the tangent distance then becomes the minimization of the cost function

$$\mathcal{T}(M, N) = \min_p \sum_x \rho\big(M(x) - T_p[N(x)]\big), \qquad (10)$$

and it is straightforward to show that the "robust" equivalent of equation 9 is

$$p_l^{n+1} = p_l^n + \alpha \Big[ \sum_x \rho''[D(x)]\, \Phi(x)^T \nabla_x N'(x)\, \nabla_x^T N'(x)\, \Phi(x) \Big]^{-1} \Big[ \sum_x \rho'[D(x)]\, \Phi(x)^T \nabla_x N'(x) \Big], \qquad (11)$$

where $D(x) = M(x) - N'(x)$ and $\rho'(x)$ and $\rho''(x)$ are, respectively, the first and second derivatives of the function $\rho(x)$ with respect to its argument.

6 Experimental results

In this section, we report on experiments carried out to evaluate the performance of the MD classifier. The first set of experiments was designed to test the invariance of the TD to affine transformations of the input. The second set was designed to evaluate the improvement obtained under the multiresolution framework.

6.1 Affine invariance of the tangent distance

Starting from a single view of a reference face, we created an artificial dataset composed of 441 affine transformations of it. These transformations consisted of combinations of all rotations in the range from -30 to 30 degrees, with increments of 3 degrees, with all scaling transformations in the range from 70% to 130%, with increments of 3%. The faces associated with the extremes of the scaling/rotation space are represented on the left portion of figure 2. On the right of figure 2 are the distance surfaces obtained by measuring the distance associated with several metrics at each of the points in the scaling/rotation space.
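The 441-point transformation grid just described (21 rotations times 21 scalings) can be written down directly; the `affine_matrix` helper below is my own naming, for illustration only.

```python
# The 441 transformation parameters stated above: rotations from -30 to 30
# degrees in steps of 3, combined with scalings from 70% to 130% in steps
# of 3% (helper naming is mine, not the paper's).
import math

def affine_matrix(angle_deg, scale):
    """2x2 linear part of a rotation-plus-scaling transformation."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [[scale * c, -scale * s], [scale * s, scale * c]]

rotations = [-30 + 3 * i for i in range(21)]      # -30, -27, ..., 30 degrees
scalings = [0.70 + 0.03 * i for i in range(21)]   # 0.70, 0.73, ..., 1.30
grid = [affine_matrix(r, s) for r in rotations for s in scalings]
```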
Five metrics were considered in this experiment: the Euclidean distance (ED), the TD, the MD computed through Newton's method, the MRMD, and the MRTD. While the TD exhibits some invariance to rotation and scaling, this invariance is restricted to a small range of the parameter space, and the performance is only slightly better than that obtained with the ED. The performance of the MD computed through Newton's method is dramatically superior, but still inferior to those achieved with the MRTD (which is very close to zero over the entire parameter space considered in this experiment) and the MRMD. The performance of the MRTD is in fact impressive given that it involves a computational increase of less than 50% with respect to the TD, while each iteration of Newton's method requires an increase of 100%, and several iterations are typically necessary to attain the minimum MD.

Figure 2: Invariance of the tangent distance. On the right, the surfaces shown correspond to the ED, TD, MD through Newton's method, MRTD, and MRMD. This ordering corresponds to that of the nesting of the surfaces, i.e. the ED is the cup-shaped surface in the center, while the MRMD is the flat surface which is approximately zero everywhere.

6.2 Face recognition

To evaluate the performance of the multiresolution tangent distance on a real classification task, we conducted a series of face recognition experiments using the Olivetti Research Laboratories (ORL) face database. This database is composed of 400 images of 40 subjects, 10 images per subject, and contains variations in pose, light conditions, expressions and facial features, but small variability in terms of scaling, rotation, or translation. To correct this limitation we created three artificial datasets by applying to each image three random affine transformations drawn from three multivariate normal distributions centered on the identity transformation with different covariances.
A small sample of the faces in the database is presented in figure 3, together with its transformed version under the set of transformations of higher variability.

Figure 3: Left: sample of the ORL face database. Right: transformed version.

We next designed three experiments with increasing degree of difficulty. In the first, we selected the first view of each subject as the test set, using the remaining nine views as training data. In the second, the first five faces were used as test data while the remaining five were used for training. Finally, in the third experiment, we reversed the roles of the datasets used in the first. The recognition accuracy for each of these experiments and each of the datasets is reported in figure 4 for the ED, the TD, the MRTD, and a robust version of this distance (RMRTD) with $\rho(x) = \frac{1}{2}x^2$ if $|x| \le \sigma T$ and $\rho(x) = \frac{1}{2}(\sigma T)^2$ otherwise, where $T$ is a threshold (set to 2.0 in our experiments), and $\sigma$ a robust version of the error standard deviation defined as $\sigma = \mathrm{median}\,|e_i - \mathrm{median}(e_i)| / 0.6745$. Several conclusions can be taken from this figure. First, it can be seen that the MRTD provides a significantly higher invariance to linear transformations than the ED or the TD, increasing the recognition accuracy by as much as 37.8% on the hardest datasets. In fact, for the easier tasks of experiments one and two, the performance of the multiresolution classifier is almost constant and always above the level of 90% accuracy. It is only for the harder experiment that the invariance of the MRTD classifier starts to break down. But even in this case, the degradation is graceful: the recognition accuracy only drops below 75% for considerable values of rotation and scaling (dataset D3). On the other hand, the ED and the single-resolution TD break down even for the easier tasks, and fail dramatically when the hardest task is performed on the more difficult datasets.
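The robust pieces quoted above - the truncated quadratic $\rho$ and the MAD-based scale - are standard and easy to state in code. This is my transcription, not the authors' code; the 0.6745 factor makes the MAD consistent with the standard deviation under Gaussian errors.

```python
# Transcription of the robust components above: a MAD-based scale estimate
# and a truncated-quadratic rho that caps the contribution of outliers.

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

def robust_sigma(errors):
    """sigma = median|e_i - median(e_i)| / 0.6745."""
    m = median(errors)
    return median([abs(e - m) for e in errors]) / 0.6745

def rho(x, sigma, T=2.0):
    """Quadratic inside sigma*T, constant beyond it: large errors are capped."""
    cutoff = sigma * T
    return 0.5 * x * x if abs(x) <= cutoff else 0.5 * cutoff * cutoff
```

Because $\rho$ is constant beyond the cutoff, a single high-leverage outlier contributes at most $\frac{1}{2}(\sigma T)^2$ to the cost, instead of growing quadratically as in the plain MD.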
Furthermore, their performance does not degrade gracefully: they seem to be more invariant when the training set has five views than when it is composed of nine faces of each subject in the database.

Figure 4: Recognition accuracy. From left to right: results from the first, second, and third experiments. Datasets are ordered by degree of variability: D0 is the ORL database; D3 is subject to the affine transformations of greater amplitude.

Acknowledgments

We would like to thank Federico Girosi for first bringing the tangent distance to our attention, and for several stimulating discussions on the topic.

References

[1] P. Anandan, J. Bergen, K. Hanna, and R. Hingorani. Hierarchical Model-Based Motion Estimation. In M. Sezan and R. Lagendijk, editors, Motion Analysis and Image Sequence Processing, chapter 1. Kluwer Academic Press, 1993.
[2] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1995.
[3] P. Burt and E. Adelson. The Laplacian Pyramid as a Compact Image Code. IEEE Trans. on Communications, Vol. 31:532-540, 1983.
[4] P. Huber. Robust Statistics. John Wiley, 1981.
[5] B. Lucas and T. Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proc. DARPA Image Understanding Workshop, 1981.
[6] P. Rousseeuw and A. Leroy. Robust Regression and Outlier Detection. John Wiley, 1987.
[7] P. Simard, Y. Le Cun, and J. Denker. Efficient Pattern Recognition Using a New Transformation Distance. In Proc. Neural Information Processing Systems, Denver, USA, 1994.
[8] P. Simard, Y. Le Cun, and J. Denker. Memory-based Character Recognition Using a Transformation Invariant Metric. In Int. Conference on Pattern Recognition, Jerusalem, Israel, 1994.
1997
Effects of Spike Timing Underlying Binocular Integration and Rivalry in a Neural Model of Early Visual Cortex

Erik D. Lumer
Wellcome Department of Cognitive Neurology, Institute of Neurology, University College London, 12 Queen Square, London, WC1N 3BG, UK

Abstract

In normal vision, the inputs from the two eyes are integrated into a single percept. When dissimilar images are presented to the two eyes, however, perceptual integration gives way to alternation between monocular inputs, a phenomenon called binocular rivalry. Although recent evidence indicates that binocular rivalry involves a modulation of neuronal responses in extrastriate cortex, the basic mechanisms responsible for differential processing of conflicting and congruent stimuli remain unclear. Using a neural network that models the mammalian early visual system, I demonstrate here that the desynchronized firing of cortical-like neurons that first receive inputs from the two eyes results in rivalrous activity patterns at later stages in the visual pathway. By contrast, synchronization of firing among these cells prevents such competition. The temporal coordination of cortical activity and its effects on neural competition emerge naturally from the network connectivity and from its dynamics. These results suggest that input-related differences in relative spike timing at an early stage of visual processing may give rise to the phenomena both of perceptual integration and rivalry in binocular vision.

1 Introduction

The neural determinants of visual perception can be probed by subjecting the visual system to ambiguous viewing conditions - stimulus configurations that admit more than one perceptual interpretation. For example, when a left-tilted grating is shown to the left eye and a right-tilted grating to the right eye, the two stimuli are momentarily perceived together as a plaid pattern, but soon only one line grating becomes visible, while the other is suppressed.
This phenomenon, known as binocular rivalry, has long been thought to involve competition between monocular neurons within the primary visual cortex (VI), leading to the suppression of information from one eye (Lehky, 1988; Blake, 1989). It has recently been shown, however, that neurons whose activity covaries with perception during rivalry are found mainly in higher cortical areas and respond to inputs from both eyes, thus suggesting that rivalry arises instead through competition between alternative stimulus interpretations in extrastriate cortex (Leopold and Logothetis, 1996). Because eye-specific information appears to be lost at this stage, it remains unclear how the stimulus conditions (i.e. conflicting monocular stimuli) yielding binocular rivalry are distinguished from the conditions (i.e. matched monocular inputs) that produce stable single vision. I propose here that the degree of similarity between the images presented to the two eyes is registered by the temporal coordination of neuronal activity in VI, and that changes in relative spike timing within this area can instigate the differential responses in higher cortical areas to conflicting or congruent visual stimuli. Stimulus and eye-specific synchronous activity has been described previously both in the lateral geniculate nucleus (LGN) and in the striate cortex (Gray et al., 1989; Sillito et al., 1994; Neuenschwander and Singer, 1996). It has been suggested that such synchrony may serve to bind together spatially distributed neural events into coherent representations (Milner, 1974; von der Malsburg, 1981; Singer, 1993). In addition, reduced synchronization of striate cortical responses in strabismic cats has been correlated with their perceptual inability to combine signals from the two eyes or to incorporate signals from an amblyopic eye (Konig et al., 1993; Roelfsema et al., 1994). 
However, the specific influences of interocular input-similarity on spike coordination in the striate cortex, and of spike coordination on competition in other cortical areas, remain unclear. To examine these influences, a simplified neural model of an early visual pathway is simulated. In what follows, I first describe the anatomical and physiological constraints incorporated in the model, and then show that a temporal patterning of neuronal activity in its primary cortical area emerges naturally. By manipulating the relative spike timing of neuronal discharges in this area, I demonstrate its role in inducing differential responses in higher visual areas to conflicting or congruent visual stimulation. Finally, I discuss possible implications of these results for understanding the neural basis of normal and ambiguous perception in vivo.

2 Model overview

The model has four stages based on the organization of the mammalian visual pathway (Gilbert, 1993). These stages represent: (i) sectors of an ipsilateral ('left eye') and a contralateral ('right eye') lamina of the LGN, which relay visual inputs to the cortex; (ii) two corresponding monocular regions in layer 4 of V1 with different ocular dominance; (iii) a primary cortical sector in which the monocular inputs are first combined (called Vp in the model); and (iv) a secondary visual area of cortex in which higher-order features are extracted (Vs in the model; Fig. 1). Each stage consists of 'standard' integrate-and-fire neurons that are incorporated in synaptic networks. At the cortical stages, these units are grouped in local recurrent circuits that are similar to those used in previous modeling studies (Douglas et al., 1995; Somers et al., 1995). Synaptic interactions in these circuits are both excitatory and inhibitory between cells with similar orientation selectivity, but are restricted to inhibition only between cell groups with orthogonal orientation preference (Kisvarday and Eysel, 1993).

Figure 1: Architecture of the model. Excitatory and inhibitory connections are represented by lines with arrowheads and round terminals, respectively. Each lamina in the LGN consists of 100 excitatory units (Ex) and 100 inhibitory units (Inh), coupled via local inhibition. Cortical units are grouped into local recurrent circuits (stippled boxes), each comprising 200 Ex units and 100 Inh units. In each monocular patch of layer 4, one cell group (Ex1 and Inh1) responds to left-tilted lines (orientation 1), whereas a second group (Ex2 and Inh2) is selective for right-tilted lines (orientation 2). The same orientation selectivities are remapped onto Vp and Vs, although cells in these areas respond to inputs from both eyes. In addition, convergent inputs from Vp to Vs establish a third selectivity in Vs, namely for line crossings (Ex+ and Inh+).

Two orthogonal orientations (orientation 1 and 2) are mapped in each monocular sector of layer 4, and in Vp. To account for the emergence of more complex response properties at higher levels in the visual system (Van Essen and Gallant, 1994), forward connectivity patterns from Vp to Vs are organized to support three feature selectivities in Vs: one for orientation 1, one for orientation 2, and one for the conjunction of these two orientations, i.e. for line crossings. These forward projections are reciprocated by weaker backward projections from Vs to Vp. As a general rule, connections are established at random within and between interconnected populations of cells, with connection probabilities between pairs of cells ranging from 1 to 10%, consistent with experimental estimates (Thomson et al., 1988; Mason et al., 1991).
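The 'standard' integrate-and-fire units the model is built from can be sketched generically. This is a textbook leaky integrate-and-fire neuron with illustrative parameters, not the paper's actual values.

```python
# Generic leaky integrate-and-fire unit (textbook sketch; tau, threshold,
# and drive currents are illustrative, not the paper's parameters).
def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Euler integration of tau * dv/dt = -v + I with threshold-and-reset."""
    v, spike_times = 0.0, []
    for t, I in enumerate(input_current):
        v += dt * (-v + I) / tau
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset
    return spike_times

quiet = simulate_lif([0.5] * 500)    # subthreshold drive: v settles below 1
active = simulate_lif([2.0] * 500)   # suprathreshold drive: regular firing
```

With constant suprathreshold drive a single unit fires at perfectly regular intervals; in the full network, the recurrent excitation and inhibition described above are what shape the relative timing of spikes across units.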
Visual stimulation is achieved by applying a stochastic synaptic excitation independently to activated cells in the LGN. A quantitative description of the model parameters will be reported elsewhere.

3 Results

In a first series of simulations, the responses of the model to conflicting and congruent visual stimuli are compared. When the left input consists of left-tilted lines (orientation 1) and the right input of right-tilted lines (orientation 2), rivalrous response suppression occurs in the secondary visual area. At any moment, only one of the three feature-selective cell groups in Vs can maintain elevated firing rates (Fig. 2a). By contrast, when congruent plaid patterns are used to stimulate the two monocular channels, these cell groups are forced into a regime in which they all sustain elevated firing rates (Fig. 2b). This concurrent activation of cells selective for orthogonal orientations and for line crossings can be interpreted as a distributed representation of the plaid pattern in Vs.¹ A quantitative assessment of the degree of competition in Vs is shown in Figure 2c. The rivalry index of two groups of neurons is defined as the mean absolute value of the difference between their instantaneous group-averaged firing rates, divided by the highest instantaneous firing rate among the two cell groups. This index varies between 0 for nonrivalrous groups of neurons and 1 for groups of neurons with mutually exclusive patterns of activity. Groups of cells with different selectivity in Vs have a significantly higher rivalry index when stimulated by conflicting rather than by congruent visual inputs (p < 0.0001) (Fig. 2c). Note that, in the example shown in Figure 2a, the differential responses to conflicting inputs develop from about 200 ms after stimulus onset and are maintained over the remainder of the stimulation epoch.
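The rivalry index defined above can be computed directly. The sketch below takes one reading of the definition, forming the ratio at each time step before averaging; the paper's exact convention may differ.

```python
# One reading of the rivalry index above: |rate difference| / max rate,
# formed at each time step and then averaged over the epoch.
def rivalry_index(rates_a, rates_b):
    vals = []
    for a, b in zip(rates_a, rates_b):
        peak = max(a, b)
        if peak > 0:  # skip time steps where both groups are silent
            vals.append(abs(a - b) / peak)
    return sum(vals) / len(vals)

alternating = rivalry_index([40, 40, 0, 0], [0, 0, 40, 40])   # exclusive firing
coactive = rivalry_index([40, 40, 40, 40], [40, 40, 40, 40])  # joint firing
```

The two toy traces reproduce the stated extremes: mutually exclusive activity yields an index of 1, and fully co-active groups yield 0.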
In other simulations, alternation between dominant and suppressed responses was also observed over the same epoch as a result of fluctuations in the overall network dynamics. A detailed analysis of the dynamics of perceptual alternation during rivalry, however, is beyond the scope of this report. Although Vp exhibits a similar distribution of firing rates during rivalrous and nonrivalrous stimulation, synchronization between the two cell groups in Vp is more pronounced in the nonrivalrous than in the rivalrous case (Fig. 2d, upper plots). Subtraction of the shift predictor demonstrates that the units are not phase-locked to the stimuli. The changes in spike coordination among Vp units reflect the temporal patterning of their layer 4 inputs. During rivalry, Vp cells with different orientation selectivity are driven by layer 4 units that belong to separate monocular pathways, and hence are uncorrelated (Fig. 2d, lower left). By contrast, cells in Vp receive convergent inputs from both eyes during nonrivalrous stimulation. Because of the synchronization of discharges among cells responsive to the same eye within layer 4 (Fig. 2d, lower right), the paired activities from the two monocular channels are also synchronized, and provide synchronous inputs to cells with different orientation selectivity in Vp. To establish unequivocally that changes in spike coordination within Vp are sufficient to trigger differential responses in Vs to conflicting and congruent stimuli, the model can be modified as follows. A single group of cells in layer 4 is used to drive with equal strength both orientation-selective populations of neurons in Vp. The outputs from layer 4, however, are relayed to these two target populations with average transmission delays that differ by either 10 ms or by 0 ms. In the first case, competition prevails among cells in the secondary visual area.
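Shift-predictor subtraction, mentioned above, is a standard correction: the correlogram of trial-shuffled pairs estimates the stimulus-locked component of the correlation, which is then removed. A generic sketch on binary spike trains (my own code, not the paper's analysis pipeline):

```python
# Schematic shift-predictor correction on binary spike trains. The
# correlogram of trial-shuffled pairs estimates stimulus-locked
# correlation, which is subtracted from the raw average.
def correlogram(a, b, max_lag):
    n = len(a)
    return [sum(a[t] * b[(t + lag) % n] for t in range(n))
            for lag in range(-max_lag, max_lag + 1)]

def shift_corrected(trials_a, trials_b, max_lag):
    k = len(trials_a)
    raw = [correlogram(trials_a[i], trials_b[i], max_lag) for i in range(k)]
    shuf = [correlogram(trials_a[i], trials_b[(i + 1) % k], max_lag)
            for i in range(k)]
    def avg(cs):
        return [sum(c[j] for c in cs) / k for j in range(2 * max_lag + 1)]
    return [r - s for r, s in zip(avg(raw), avg(shuf))]

# Three trials with coincident spikes in both trains, at trial-varying times,
# so the synchrony is internal rather than stimulus-locked.
trains = []
for i in range(3):
    t = [0] * 20
    t[2 * i] = 1
    t[2 * i + 5] = 1
    trains.append(t)
corrected = shift_corrected(trains, trains, max_lag=5)
zero_lag_peak = corrected[5]  # index max_lag corresponds to lag 0
```

Because the coincidences here move from trial to trial, the shuffled pairs share no spikes at zero lag, and the zero-lag peak survives the correction, which is the signature of genuine (non-stimulus-locked) synchrony.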
This contrasts with the nonrivalrous activity in this area when similar transmission delays are used at an earlier stage (data not shown). This test confirms that changes in relative spike

¹ To discount possible effects of binocular summation, synaptic strengths from layer 4 to Vp are reduced during congruent stimulation so as to produce a feedforward activation of Vp comparable to that elicited by conflicting monocular inputs.

Figure 2: A, Instantaneous firing rates in response to conflicting inputs for cell groups in layer 4, in Vp, and in Vs (stimulus onset at t = 250 ms). Discharge rates of layer 4 cells driven by different 'eyes' are similar (lower plot). By contrast, Vs exhibits competitive firing patterns soon after stimulus onset (upper plot). Feedback influence from Vs to Vp results in comparatively weaker competition in Vp (middle plot). B, Responses to congruent inputs. All cell groups in layer 4 are activated by visual inputs. Nonrivalrous firing patterns ensue in Vp and Vs. C, Rivalry indices during conflicting and congruent stimulation are calculated for the two orientation-selective cell groups and for the dominant and cross-selective cell group in Vs. D, Interocular responses are uncorrelated in layer 4 (lower left), whereas intraocular activities are synchronous at this stage (lower right).
Enhanced synchronization of discharges ensues between cell groups in Vp during congruent stimulation (upper right), relative to the degree of coherence during conflicting stimulation (upper left).

timing are sufficient to switch the outcome of neural network interactions involving strong mutual inhibition from competitive to cooperative.

4 Conclusion

In the present study, a simplified model of a visual pathway was used to gain insight into the neural mechanisms operating during binocular vision. Simulations of neuronal responses to visual inputs revealed a stimulus-related patterning of relative spike timing at an early stage of cortical processing. This patterning reflected the degree of similarity between the images presented to the two 'eyes', and, in turn, it altered the outcome of competitive interactions at later stages along the visual pathway. These effects can help explain how the same cortical networks can exhibit both rivalrous and nonrivalrous activity, depending on the temporal coordination of their synaptic inputs. These results bear on the interpretation of recent empirical findings about the neuronal correlates of rivalrous perception. In experiments with awake monkeys, Logothetis and colleagues (Sheinberg et al., 1995; Leopold and Logothetis, 1996) have shown that neurons whose firing rate correlates with perception during rivalry are distributed at several levels along the primate visual pathways, including V1/V2, V4, and IT. Importantly, the fraction of modulated responses is lower in V1 than in extrastriate areas, and it increases with the level in the visual hierarchy. Simulations of the present model exhibit a behavior that is consistent with these observations. However, these simulations also predict that both rivalrous and nonrivalrous perception may have a clear neurophysiological correlate in V1, i.e. at the earliest stage of visual cortical processing.
Accordingly, congruent stimulation of both eyes will synchronize the firing of binocular cells with overlapping receptive fields in V1. By contrast, conflicting inputs to the two eyes will cause a desynchronization between their corresponding neural events in V1. Because this temporal registration of stimulus dissimilarity instigates competition among binocular cells in higher visual areas and not between monocular pathways, the ensuing pattern of response suppression and dominance is independent of the eyes through which the stimuli are presented. Thus, the model can in principle account for the psychophysical finding that a single phase of perceptual dominance during rivalry can span multiple interocular exchanges of the rival stimuli (Logothetis et al., 1996). The present results also reveal a novel property of canonical cortical-like circuits interacting through mutual inhibition: the degree of competition among such circuits exhibits a remarkable sensitivity to the relative timing of neuronal action potentials. This suggests that the temporal patterning of cortical activity may be a fundamental mechanism for selecting among stimuli competing for the control of attention and motor action.

Acknowledgements

This work was supported in part by an IRSIA visiting fellowship at the Center for Nonlinear Phenomena and Complex Systems, Université Libre de Bruxelles. I thank Professor Grégoire Nicolis for his hospitality during my stay in Brussels, and David Leopold and Daniele Piomelli for helpful discussions and comments on an earlier version of the manuscript.

References

Blake R (1989) A neural theory of binocular vision. Psychol Rev 96:145-167.
Douglas RJ, Koch C, Mahowald M, Martin K, Suarez H (1995) Recurrent excitation in neocortical circuits. Science 269:981-985.
Gilbert C (1993) Circuitry, architecture, and functional dynamics of visual cortex. Cereb Cortex 3:373-386.
Gray CM, Konig P, Engel AK, Singer W (1989) Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature 338:334-337.
Kisvarday ZF, Eysel UT (1993) Functional and structural topography of horizontal inhibitory connections in cat visual cortex. Europ J Neurosci 5:1558-1572.
Konig P, Engel AK, Lowel S, Singer W (1993) Squint affects synchronization of oscillatory responses in cat visual cortex. Eur J Neurosci 5:501-508.
Lehky SR (1988) An astable multivibrator model of binocular rivalry. Perception 17:215-228.
Leopold DA, Logothetis NK (1996) Activity changes in early visual cortex reflect monkeys' percepts during binocular rivalry. Nature 379:549-553.
Logothetis NK, Leopold DA, Sheinberg DL (1996) What is rivalling during rivalry? Nature 380:621-624.
Milner PM (1974) A model of visual shape recognition. Psychol Rev 81:521-535.
Neuenschwander S, Singer W (1996) Long-range synchronization of oscillatory light responses in the cat retina and lateral geniculate nucleus. Nature 379:728-733.
Roelfsema PR, Konig P, Engel AK, Sireteanu R, Singer W (1994) Reduced synchronization in the visual cortex of cats with strabismic amblyopia. Eur J Neurosci 6:1645-1655.
Sheinberg DL, Leopold DA, Logothetis NK (1995) Effects of binocular rivalry on face cell activity in monkey temporal cortex. Soc Neurosci Abstr 21:15.12.
Sillito AM, Jones HE, Gerstein GL, West DC (1994) Feature-linked synchronization of thalamic relay cell firing induced by feedback from the visual cortex. Nature 369:479-482.
Singer W (1993) Synchronization of cortical activity and its putative role in information processing. Annu Rev Physiol 55:349-374.
Somers D, Nelson S, Sur M (1995) An emergent model of orientation selectivity in cat visual cortical simple cells. J Neurosci 15:5448-5465.
Van Essen DC, Gallant JL (1994) Neural mechanisms of form and motion processing in the primate visual system. Neuron 13:1-10.
von der Malsburg C (1981) The correlation theory of the brain. Internal Report 81-2, Max Planck Institute for Biophysical Chemistry, Gottingen.
1997
Detection of first and second order motion

Alexander Grunewald, Division of Biology, California Institute of Technology, Mail Code 216-76, Pasadena, CA 91125, alex@vis.caltech.edu
Heiko Neumann, Abteilung Neuroinformatik, Universität Ulm, 89069 Ulm, Germany, hneumann@neuro.informatik.uni-ulm.de

Abstract

A model of motion detection is presented. The model contains three stages. The first stage is unoriented and is selective for contrast polarities. The next two stages work in parallel. A phase insensitive stage pools across different contrast polarities through a spatiotemporal filter and thus can detect first and second order motion. A phase sensitive stage keeps contrast polarities separate, each of which is filtered through a spatiotemporal filter, and thus only first order motion can be detected. Differential phase sensitivity can therefore account for the detection of first and second order motion. Phase insensitive detectors correspond to cortical complex cells, and phase sensitive detectors to simple cells.

1 INTRODUCTION

In our environment objects are constantly in motion, and the visual system faces the task of identifying the motion of objects. This task can be subdivided into two components: motion detection and motion integration. In this study we will look at motion detection. Recent psychophysics has made a useful distinction between first and second order motion. In first order motion an absolute image feature is moving. For example, a bright bar moving on a dark background is an absolute feature because luminance is moving. In second order motion a relative image feature is moving, for example a contrast reversing bar. No longer is it possible to identify the moving object through its luminance, but only that it has different luminance with respect to the background. Humans are very sensitive to first order motion, but can they detect second order motion? Chubb & Sperling (1988) showed that subjects are in fact able to detect second order motion.
These findings have since been confirmed in many psychophysical experiments, and it has become clear that the parameters that yield detection of first and second order motion are different, suggesting that separate motion detection systems exist.

1.1 Detection of first and second order motion

First order motion, which is what we encounter in our daily lives, can be easily detected by finding the peak in the Fourier energy distribution. The motion energy detector developed by Adelson & Bergen (1985) does this explicitly, and it turns out that it is also equivalent to a Reichardt detector (van Santen & Sperling, 1985). However, these detectors cannot adequately detect second order motion, because second order motion stimuli often contain the maximum Fourier energy in the opposite direction (possibly at a different velocity) as the actual motion. In other words, purely linear filters should have opposite directional tuning for first and second order motion. This is further illustrated in Figure 1.

Figure 1: Schematic of first and second order motion, their peak Fourier energy, and the reconstruction. The peak Fourier energy is along the direction of motion for first order motion, and in the opposite direction for second order motion. For this reason a linear filter cannot detect second order motion.

One way to account for second order motion detection is to transform the second order motion signal into a first order signal. If second order motion is defined by contrast reversals, then detecting contrast edges and then rectifying the resulting signal of contrast will yield a first order motion signal.
Thus this approach includes three steps: orientation detection, rectification and finally motion detection (Wilson et al., 1992).

1.2 Visual physiology

Cells in the retina and the lateral geniculate nucleus (LGN) have concentric (and hence unoriented) receptive fields which are organized in an opponent manner. While the center of such an ON cell is excited by a light increment, the surround is excited by a light decrement, and vice versa for OFF cells. It is only at the cortex that direction and orientation selectivity arise. Cortical simple cells are sensitive to the phase of the stimulus, while complex cells are not (Hubel & Wiesel, 1962). Most motion models take at least partial inspiration from known physiology and anatomy, by relating the kernels of the motion detectors to the physiology of cortical cells. The motion energy model in particular detects orientation and first order motion at the same time. Curiously, all motion models essentially ignore the concentric opponency of receptive fields in the LGN. This is usually justified by pointing to the linearity of simple cells with respect to stimulus parameters. However, it has been shown that simple cells in fact exhibit strong nonlinearities (Hammond & MacKay, 1983). Moreover, motion detection does require at least one stage of nonlinearity (Poggio & Reichardt, 1973). The present study develops a model of first and second order motion detection which explicitly includes an unoriented processing stage, and phase sensitive and phase insensitive motion detectors are built from these unoriented signals. The former set of detectors only responds to first order motion, while the second set of detectors responds to both types of motion. We further show the analogies that can be drawn between these detector types and simple and complex cells in cat visual cortex.
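The role of rectification in converting a second order stimulus into a first order one can be illustrated with a toy stimulus. This is a minimal sketch under assumed stimulus parameters (frame width, bar size, one pixel of motion per frame), not any published implementation: a bar of alternating contrast moves rightward, its raw profile reverses sign every frame, but after full-wave rectification the moving envelope is identical in every frame, so a linear motion detector could track it.

```python
# Second order (contrast-reversing) moving bar: the raw luminance profile
# flips sign each frame, but |.| (full-wave rectification) recovers a
# consistently bright moving bar, i.e. a first order motion signal.

def make_second_order_frames(width=12, bar=2, steps=4):
    frames = []
    for t in range(steps):
        frame = [0.0] * width
        sign = 1.0 if t % 2 == 0 else -1.0   # contrast reverses each frame
        for x in range(t, t + bar):          # bar moves one pixel per frame
            frame[x] = sign
        frames.append(frame)
    return frames

frames = make_second_order_frames()
rectified = [[abs(v) for v in f] for f in frames]

# Raw frames alternate in sign; rectified frames do not:
print(frames[0][0:2], frames[1][1:3])        # [1.0, 1.0] [-1.0, -1.0]
print(rectified[0][0:2], rectified[1][1:3])  # [1.0, 1.0] [1.0, 1.0]
```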
2 MODEL DESCRIPTION

The model is two-dimensional: one dimension is space, which means that space has been collapsed onto a line, and the other dimension is time. The input image to the model is a space-time matrix of luminances, as shown in figure 1. At each processing stage essentially the same operations are performed. First the input signal is convolved with the appropriate kernel. At each stage there are multiple kernels, to generate the different signal types at that stage. For example, there are ON and OFF signals at the unoriented stage. Next the convolved responses are subtracted from each other. At the unoriented stage this means ON-OFF and OFF-ON. In the final step these results are half-wave rectified to only yield positive signals.

Figure 2: The kernels in the model. For the unoriented (left plot) and phase sensitive (right plot) kernel plots black indicates OFF regions, white ON regions, and grey zero input. For the phase insensitive plot (middle) grey denotes ON and OFF input, and black denotes zero input.

At the unoriented stage the input pattern is convolved with a difference of Gaussians kernel. This kernel has only a spatial dimension, no temporal dimension (see figure 2). As described earlier, competition is between ON and OFF signals, followed by half-wave rectification. This ensures that at each location only one set of unoriented signals is present. A simulation of the signals at the unoriented stage is shown in figure 3. For first order motion, ON signals are at locations corresponding to the inside of the moving bar. With each shift of the bar the signals also move. Similarly, the OFF signals correspond to the outside of the bar, and also move with the bar. For second order motion the contrast polarity reverses. Thus ON signals correspond to the inside when the bar is bright, and to the outside when the bar is dark, and vice versa for OFF signals.
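The unoriented stage just described (difference-of-Gaussians filtering, ON/OFF opponency, half-wave rectification) can be sketched in one dimension. The kernel radius and Gaussian widths below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the unoriented stage: convolve with a difference-of-Gaussians
# kernel, then split into half-wave rectified ON (positive part) and OFF
# (negative part) channels; rectification after opponency guarantees that
# only one of ON/OFF is nonzero at each location.
import math

def dog_kernel(radius=3, sc=1.0, ss=2.0):
    ks = []
    for x in range(-radius, radius + 1):
        center = math.exp(-x * x / (2 * sc * sc)) / (sc * math.sqrt(2 * math.pi))
        surround = math.exp(-x * x / (2 * ss * ss)) / (ss * math.sqrt(2 * math.pi))
        ks.append(center - surround)
    return ks

def convolve(signal, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

def unoriented_stage(luminance):
    dog = convolve(luminance, dog_kernel())
    on = [max(v, 0.0) for v in dog]    # ON channel, half-wave rectified
    off = [max(-v, 0.0) for v in dog]  # OFF channel, half-wave rectified
    return on, off

# A bright bar on a dark background: ON responds at the bright side of
# each edge, OFF at the dark side, and never both at the same location.
on, off = unoriented_stage([0, 0, 0, 1, 1, 1, 0, 0, 0])
assert all(o == 0.0 or f == 0.0 for o, f in zip(on, off))
```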
Thus any ON or OFF signals to the leading edge of the bar will remain active after the bar moves.

Figure 3: Unoriented signals to first and second order motion. ON signals are at the bright side of any contrast transition, while OFF signals are at the dark side. In first order motion ON and OFF move synchronously to the moving stimulus. In second order motion ON and OFF signals persist, since the leading edge becomes the trailing edge, and at the same time the contrast reverses, which means that at a particular spatial location the contrast remains constant.

At the phase insensitive stage the unoriented ON and OFF signals are added, and then the result is convolved with an energy detection filter. The pooling of ON and OFF signals means that the contrast transitions in the image are essentially full-wave rectified. This causes phase insensitivity. These pooled signals are then convolved with a space-time oriented filter (see figure 2). Competition between opposite directions of motion ensures that only one direction is active. A consequence of the pooling of unoriented ON and OFF signals at this stage is that the resulting signals are invariant to first or second order motion. Thus phase insensitivity makes this stage able to detect both first and second order motion. These signals are shown in figure 4. In a two-dimensional extension of this model these detectors would also be orientation selective. The simplest way to obtain this would be via elongation along the preferred orientation.
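The phase insensitive stage (pool ON and OFF, filter with a space-time oriented kernel, let opposite directions compete) can be sketched as follows. As an assumption for brevity, the oriented energy filter is replaced by a simple space-time correlator (compare each frame with the next frame shifted by one pixel); the winner-take-all competition between directions follows the text.

```python
# Sketch of the phase insensitive stage: ON+OFF pooling (full-wave
# rectified contrast), a shift-and-correlate stand-in for the space-time
# oriented filter, and competition between opposite directions.

def phase_insensitive(on_frames, off_frames, shift=1):
    pooled = [[o + f for o, f in zip(on, off)]
              for on, off in zip(on_frames, off_frames)]
    right, left = 0.0, 0.0
    for t in range(len(pooled) - 1):
        for x in range(len(pooled[t])):
            if x + shift < len(pooled[t]):
                right += pooled[t][x] * pooled[t + 1][x + shift]
            if x - shift >= 0:
                left += pooled[t][x] * pooled[t + 1][x - shift]
    # competition between opposite directions: only the winner survives
    return max(right - left, 0.0), max(left - right, 0.0)

# A rightward-moving blob (here carried entirely by the ON channel)
# yields a rightward signal and no leftward signal.
on_frames = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
off_frames = [[0, 0, 0, 0]] * 3
r, l = phase_insensitive(on_frames, off_frames)
print(r, l)  # 2.0 0.0
```

Because the two channels are summed before filtering, swapping ON and OFF activity between frames (as a contrast-reversing stimulus does) leaves the pooled signal, and hence the motion response, unchanged.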
Figure 4: Phase insensitive signals to first and second order motion. For both stimuli there are no leftwards signals, and robust rightwards signals.

At the phase sensitive stage unoriented ON and OFF signals are separately convolved with space-time oriented kernels which are offset with respect to each other (see figure 2). The separate treatment of ON and OFF signals yields phase sensitivity. At each location there are four kernels: two for the two directions of motion, and two for the two phases. Competition occurs between signals of opposite direction tuning, and opposite phase preference. To avoid activation in the opposite direction of motion slightly removed from the location of the edge, spatially broadly tuned inhibition is necessary. This is provided by the phase insensitive signals, thus avoiding feedback loops among phase sensitive detectors. First order signals from the unoriented stage match the spatiotemporal filters in the preferred direction, and thus phase sensitive signals arise. However, due to their phase reversal, second order motion inputs provide poor motion signals, which are quenched through phase insensitive inhibition. These signals are shown in figure 5. These simulations show that first and second order motion are detected differently. First order motion is detected by phase sensitive and phase insensitive motion detectors, while second order motion is only detected by the latter. From this we conclude that first order motion is a more potent stimulus, and that the detection of second order motion is more restricted, since it depends on a single type of detector. In particular, the size of the stimulus and its velocity have to be matched to the energy
filters for motion signals to arise.

Figure 5: Phase sensitive signals to first and second order motion. Only the dark-light signals are shown. First order motion causes a consistent rightward motion signal, while second order motion does not.

3 RELATION TO PHYSIOLOGY

The relationship between the model and physiology is straightforward. Unoriented signals correspond to LGN responses, phase insensitive signals to complex cell responses, and phase sensitive signals to simple cell responses. Thus the model suggests that both simple and some complex cells receive direct LGN input. Moreover these complex cells inhibit simple cells. With an additional threshold in simple cells this inhibition could also be obtained via complex to simple cell excitation. We stress that we are not ruling out that many complex cells receive only simple cell input. Rather, the present research shows that if all complex cells receive only simple cell input, second order motion cannot be detected. Hence at least some complex cell responses need to be built up directly from LGN responses. Several lines of evidence from cat physiology support this suggestion. First, the mean latencies of simple and complex cells are about equal (Bullier & Henry, 1979), suggesting that at least some complex cells receive direct LGN input. Second, noise stimuli can selectively activate complex cells, without activation of simple cells (Hammond, 1991). Third, cross-correlation analyses show that complex cells do receive simple cell input (Ghose et al., 1994). The present model predicts that some cortical complex cells should respond to second order motion. Zhou & Baker (1993) investigated this, and found that some complex cells in area 17 respond to second order motion.
Moreover, they found that simple cells of a particular first order motion preference did not reverse their motion preference when stimulated with second order motion, which would occur if simple cells were just linear filters. We interpret this as further evidence that complex cells provide inhibitory input to simple cells. If complex cells are built up from LGN input, then orientation selectivity in two-dimensional space cannot be obtained based on simple cell input, but rather requires complex cells with elongated receptive fields. Thus we predict that there ought to be a correlation in complex cells between elongated receptive fields and dependence on direct LGN input. In conclusion we have shown how the phase sensitivity of motion detectors can be mapped onto the ability to detect only first order motion, or both first and second order motion. This suggests that it is not necessary to introduce an orientation detection stage before motion detection can take place, thus simplifying the model of motion detection. Furthermore we have shown that the proposed model is in accord with known physiology.

Acknowledgments

This work was supported by the McDonnell-Pew program in Cognitive Neuroscience.

References

Adelson, E. & Bergen, J. (1985). Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A, 2, 284-299.
Bullier, J. & Henry, G. H. (1979). Ordinal position of neurons in cat striate cortex. J. Neurophys., 42, 1251-1263.
Chubb, C. & Sperling, G. (1988). Drift-balanced random stimuli: a general basis for studying non-Fourier motion perception. J. Opt. Soc. Am. A, 5, 1986-2007.
Ghose, G. M., Freeman, R. D. & Ohzawa, I. (1994). Local intracortical connections in the cat's visual cortex: postnatal development and plasticity. J. Neurophys., 12, 1290-1303.
Hammond, P. (1991). On the response of simple and complex cells to random dot patterns. Vis. Res., 31, 47-50.
Hammond, P. & MacKay, D. (1983).
Influence of luminance gradient reversal on simple cells in feline striate cortex. J. Physiol., 331, 69-87.
Hubel, D. H. & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol., 160, 106-154.
Poggio, T. & Reichardt, W. (1973). Considerations on models of movement detection. Kybernetik, 12, 223-227.
van Santen, J. P. H. & Sperling, G. (1985). Elaborated Reichardt detectors. J. Opt. Soc. Am. A, 2, 300-321.
Wilson, H. R., Ferrera, V. P. & Yo, C. (1992). A psychophysically motivated model for two-dimensional motion perception. Vis. Neurosci., 9, 79-97.
Zhou, Y.-X. & Baker, C. L. (1993). A processing stream in mammalian visual cortex neurons for non-Fourier responses. Science, 261, 98-101.
1997
A Framework for Multiple-Instance Learning

Oded Maron, NE43-755, AI Lab, M.I.T., Cambridge, MA 02139, oded@ai.mit.edu
Tomas Lozano-Perez, NE43-836a, AI Lab, M.I.T., Cambridge, MA 02139, tlp@ai.mit.edu

Abstract

Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem.

1 Introduction

One of the drawbacks of applying the supervised learning model is that it is not always possible for a teacher to provide labeled examples for training. Multiple-instance learning provides a new way of modeling the teacher's weakness. Instead of receiving a set of instances which are labeled positive or negative, the learner receives a set of bags that are labeled positive or negative. Each bag contains many instances. A bag is labeled negative if all the instances in it are negative. On the other hand, a bag is labeled positive if there is at least one instance in it which is positive. From a collection of labeled bags, the learner tries to induce a concept that will label individual instances correctly. This problem is harder than even noisy supervised learning since the ratio of negative to positive instances in a positively-labeled bag (the noise ratio) can be arbitrarily high. The first application of multiple-instance learning was to drug activity prediction.
In the activity prediction application, one objective is to predict whether a candidate drug molecule will bind strongly to a target protein known to be involved in some disease state. Typically, one has examples of molecules that bind well to the target protein and also of molecules that do not bind well. Much as in a lock and key, shape is the most important factor in determining whether a drug molecule and the target protein will bind. However, drug molecules are flexible, so they can adopt a wide range of shapes. A positive example does not convey what shape the molecule took in order to bind - only that one of the shapes that the molecule can take was the right one. However, a negative example means that none of the shapes that the molecule can achieve was the right key. The multiple-instance learning model was only recently formalized by [Dietterich et al., 1997]. They assume a hypothesis class of axis-parallel rectangles, and develop algorithms for dealing with the drug activity prediction problem described above. This work was followed by [Long and Tan, 1996], where a high-degree polynomial PAC bound was given for the number of examples needed to learn in the multiple-instance learning model. [Auer, 1997] gives a more efficient algorithm, and [Blum and Kalai, 1998] shows that learning from multiple-instance examples is reducible to PAC-learning with two-sided noise and to the Statistical Query model. Unfortunately, the last three papers make the restrictive assumption that all instances from all bags are generated independently. In this paper, we describe a framework called Diverse Density for solving multiple-instance problems. Diverse Density is a measure of the intersection of the positive bags minus the union of the negative bags. By maximizing Diverse Density we can find the point of intersection (the desired concept), and also the set of feature weights that lead to the best intersection.
We show results of applying this algorithm to a difficult synthetic training set as well as the "musk" data set from [Dietterich et al., 1997]. We then use Diverse Density in two novel applications: one is to learn a simple description of a person from a series of images that are labeled positive if the person is somewhere in the image and negative otherwise. The other is to deal with a high amount of noise in a stock selection problem.

2 Diverse Density

We motivate the idea of Diverse Density through a molecular example. Suppose that the shape of a candidate molecule can be adequately described by a feature vector. One instance of the molecule is therefore represented as a point in n-dimensional feature space. As the molecule changes its shape (through both rigid and non-rigid transformations), it will trace out a manifold through this n-dimensional space.¹ Figure 1(a) shows the paths of four molecules through a 2-dimensional feature space. If a candidate molecule is labeled positive, we know that in at least one place along the manifold, it took on the right shape for it to fit into the target protein. If the molecule is labeled negative, we know that none of the conformations along its manifold will allow binding with the target protein. If we assume that there is only one shape that will bind to the target protein, what do the positive and negative manifolds tell us about the location of the correct shape in feature space? The answer: it is where all positive feature-manifolds intersect without intersecting any negative feature-manifolds. For example, in Figure 1(a) it is point A. Unfortunately, a multiple-instance bag does not give us complete distribution information, but only some arbitrary sample from that distribution. In fact, in applications other than drug discovery, there is not even a notion of an underlying continuous manifold. Therefore, Figure 1(a) becomes Figure 1(b).
The problem of trying to find an intersection changes to a problem of trying to find an area where there is both high density of positive points and low density of negative points.

¹ In practice, one needs to restrict consideration to shapes of the molecule that have sufficiently low potential energy. But we ignore this restriction in this simple illustration.

Figure 1: A motivating example for Diverse Density. (a) The different shapes that a molecule can take on are represented as a path. The intersection point of positive paths is where they took on the same shape. (b) Samples taken along the paths. Section B is a high density area, but point A is a high Diverse Density area.

The difficulty with using regular density is illustrated in Figure 1(b), Section B. We are not just looking for high density, but high "Diverse Density". We define Diverse Density at a point to be a measure of how many different positive bags have instances near that point, and how far the negative instances are from that point.

2.1 Algorithms for multiple-instance learning

In this section, we derive a probabilistic measure of Diverse Density, and test it on a difficult artificial data set. We denote positive bags as B_i^+, the jth point in that bag as B_{ij}^+, and the value of the kth feature of that point as B_{ijk}^+. Likewise, B_{ij}^- represents a negative point. Assuming for now that the true concept is a single point t, we can find it by maximizing Pr(x = t | B_1^+, ..., B_n^+, B_1^-, ..., B_m^-) over all points x in feature space. If we use Bayes' rule and an uninformative prior over the concept location, this is equivalent to maximizing the likelihood Pr(B_1^+, ..., B_n^+, B_1^-, ..., B_m^- | x = t).
By making the additional assumption that the bags are conditionally independent given the target concept t, the best hypothesis is argmax_x ∏_i Pr(B_i^+ | x = t) ∏_i Pr(B_i^- | x = t). Using Bayes' rule once more (and again assuming a uniform prior over concept location), this is equivalent to

argmax_x ∏_i Pr(x = t | B_i^+) ∏_i Pr(x = t | B_i^-).   (1)

This is a general definition of maximum Diverse Density, but we need to define the terms in the products to instantiate it. One possibility is a noisy-or model: the probability that not all points missed the target is Pr(x = t | B_i^+) = Pr(x = t | B_{i1}^+, B_{i2}^+, ...) = 1 - ∏_j (1 - Pr(x = t | B_{ij}^+)), and likewise Pr(x = t | B_i^-) = ∏_j (1 - Pr(x = t | B_{ij}^-)). We model the causal probability of an individual instance on a potential target as related to the distance between them. Namely, Pr(x = t | B_{ij}) = exp(-||B_{ij} - x||²). Intuitively, if one of the instances in a positive bag is close to x, then Pr(x = t | B_i^+) is high. Likewise, if every positive bag has an instance close to x and no negative bags are close to x, then x will have high Diverse Density. Diverse Density at an intersection of n bags is exponentially higher than it is at an intersection of n - 1 bags, yet all it takes is one well placed negative instance to drive the Diverse Density down.

Figure 2: Negative and positive bags drawn from the same distribution, but labeled according to their intersection with the middle square. Negative instances are dots, positive are numbers. The square contains at least one instance from every positive bag and no negatives.

The Euclidean distance metric used to measure "closeness" depends on the features that describe the instances. It is likely that some of the features are irrelevant, or that some should be weighted to be more important than others. Luckily, we can use the same framework to find not only the best location in feature space, but also the best weighting of the features.
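The noisy-or Diverse Density measure above can be sketched directly. One-dimensional instances are an assumption for brevity; the formula is unchanged in higher dimensions.

```python
# Sketch of noisy-or Diverse Density with the causal model
# Pr(x = t | B_ij) = exp(-||B_ij - x||^2), in one dimension.
import math

def instance_prob(inst, x):
    return math.exp(-(inst - x) ** 2)

def diverse_density(x, positive_bags, negative_bags):
    dd = 1.0
    for bag in positive_bags:   # noisy-or: some instance hit the target
        dd *= 1.0 - math.prod(1.0 - instance_prob(b, x) for b in bag)
    for bag in negative_bags:   # every negative instance must miss x
        dd *= math.prod(1.0 - instance_prob(b, x) for b in bag)
    return dd

pos = [[1.0, 5.0], [5.1, 9.0]]   # both positive bags have an instance near 5
neg = [[1.1, 8.9]]               # negative instances sit near 1 and 9
# Diverse Density is much higher where the positive bags intersect (near 5)
# than near 1, even though an instance of a positive bag lies at 1.0:
assert diverse_density(5.0, pos, neg) > diverse_density(1.0, pos, neg)
```

The point x = 1.0 illustrates the difference from plain density: a positive instance sits right there, but only one positive bag contributes and a negative instance is nearby, so its Diverse Density is tiny.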
Once again, we find the best scaling of the individual features by finding the scalings that maximize Diverse Density. The algorithm returns both a location $x$ and a scaling vector $s$, where $\lVert B_{ij} - x \rVert^2 = \sum_k s_k^2 (B_{ijk} - x_k)^2$. Note that the assumption that all bags intersect at a single point is not necessary. We can assume more complicated concepts, such as for example a disjunctive concept $t_a \vee t_b$. In this case, we maximize over a pair of locations $x_a$ and $x_b$ and define $\Pr(x_a = t_a \vee x_b = t_b \mid B_{ij}) = \max\bigl(\Pr(x_a = t_a \mid B_{ij}), \Pr(x_b = t_b \mid B_{ij})\bigr)$.

To test the algorithm, we created an artificial data set: 5 positive and 5 negative bags, each with 50 instances. Each instance was chosen uniformly at random from a $[0,100] \times [0,100] \subset \mathbb{R}^2$ domain, and the concept was a $5 \times 5$ square in the middle of the domain. A bag was labeled positive if at least one of its instances fell within the square, and negative if none did, as shown in Figure 2. The square in the middle contains at least one instance from every positive bag and no negative instances. This is a difficult data set because both positive and negative bags are drawn from the same distribution; they only differ in a small area of the domain. Using regular density (adding up the contribution of every positive bag and subtracting negative bags, which is roughly what a supervised learning algorithm such as nearest neighbor performs), we can plot the density surface across the domain. Figure 3(a) shows this surface for the data set in Figure 2, and it is clear that finding the peak (a candidate hypothesis) is difficult. However, when we plot the Diverse Density surface (using the noisy-or model) in Figure 3(b), it is easy to pick out the global maximum, which is within the desired concept. The other
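The toy data set is easy to regenerate. This sketch is ours (the paper keeps exactly 5 bags of each label, while here we simply label whatever bags the sampler produces):

```python
import numpy as np

def make_bags(n_bags=10, n_inst=50, seed=0):
    """Bags of uniform instances on [0,100]^2, labeled positive iff some
    instance lands in the central 5x5 square, as in the paper's toy data."""
    rng = np.random.default_rng(seed)
    lo, hi = 47.5, 52.5                       # the central 5x5 square
    bags, labels = [], []
    for _ in range(n_bags):
        b = rng.uniform(0.0, 100.0, size=(n_inst, 2))
        inside = bool(np.any(((b >= lo) & (b <= hi)).all(axis=1)))
        bags.append(b)
        labels.append(inside)
    return bags, labels
```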
major peaks in Figure 3(b) are the result of a chance concentration of instances from different bags. With a bit more bad luck, one of those peaks could have eclipsed the one in the middle. However, the chance of this decreases as the number of bags (training examples) increases.

Figure 3: Density surfaces over the example data of Figure 2: (a) surface using regular density; (b) surface using Diverse Density.

One remaining issue is how to find the maximum Diverse Density. In general, we are searching an arbitrary density landscape, and the number of local maxima and the size of the search space could prohibit any efficient exploration. In this paper, we use gradient ascent with multiple starting points. This has worked successfully in every test case because we know what starting points to use: the maximum Diverse Density peak is made of contributions from some set of positive points. If we start an ascent from every positive point, one of them is likely to be closest to the maximum, contribute the most to it, and have a climb directly to it. While this heuristic is sensible for maximizing with respect to location, maximizing with respect to the scaling of the feature weights may still lead to local maxima.

3 Applications of Diverse Density

By way of benchmarking, we tested the Diverse Density approach on the "musk" data sets from [Dietterich et al., 1997], which were also used in [Auer, 1997]. We have also begun investigating two new applications of multiple-instance learning. We describe preliminary results on all of these below.

The musk data sets contain feature vectors describing the surfaces of a variety of low-energy shapes from approximately 100 molecules. Each feature vector has 166 dimensions. Approximately half of these molecules are known to smell "musky"; the remainder are very similar molecules that do not smell musky. There are two musk data sets; the Musk-1 data set is smaller, both in having fewer molecules and many fewer instances per molecule.
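The multi-start search can be sketched as follows. This is our own code, not the paper's: it ascends the log of the noisy-or Diverse Density with a crude finite-difference gradient, starting one ascent from every instance of every positive bag, and keeps the best result:

```python
import numpy as np

def log_dd(x, pos_bags, neg_bags, eps=1e-300):
    """Log of the noisy-or Diverse Density at x (eps guards log(0))."""
    val = 0.0
    for bag in pos_bags:
        p = np.exp(-np.sum((bag - x) ** 2, axis=1))
        val += np.log(max(1.0 - np.prod(1.0 - p), eps))
    for bag in neg_bags:
        p = np.exp(-np.sum((bag - x) ** 2, axis=1))
        val += np.sum(np.log(np.maximum(1.0 - p, eps)))
    return val

def max_dd(pos_bags, neg_bags, lr=0.05, steps=150, h=1e-4):
    """Gradient ascent on log Diverse Density, restarted from every
    positive instance (the paper's multi-start heuristic)."""
    best_x, best_val = None, -np.inf
    for x0 in np.vstack(pos_bags):
        x = x0.astype(float).copy()
        for _ in range(steps):
            g = np.array([(log_dd(x + h * e, pos_bags, neg_bags)
                           - log_dd(x - h * e, pos_bags, neg_bags)) / (2 * h)
                          for e in np.eye(len(x))])
            x += lr * g
        v = log_dd(x, pos_bags, neg_bags)
        if v > best_val:
            best_x, best_val = x, v
    return best_x
```

An analytic gradient (or a library optimizer) would be used in practice; the finite-difference version just keeps the sketch short.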
Many (72) of the molecules are shared between the two data sets, but the second set includes more instances for the shared molecules. We approached the problem as follows: for each run, we held out a randomly selected 1/10 of the data set as a test set. We computed the maximum Diverse Density on the training set by multiple gradient ascents, starting at each positive instance. This produces a maximum feature point as well as the best feature weights corresponding to that point. We note that typically fewer than half of the 166 features receive non-zero weighting. We then computed a distance threshold that optimized classification performance under leave-one-out cross validation within the training set. We used the feature weights and distance threshold to classify the examples of the test set; an example was deemed positive if the weighted distance from the maximum density point to any of its instances was below the threshold.

The tables below list the average accuracy of twenty runs, compared with the performance of the two principal algorithms reported in [Dietterich et al., 1997] (iterated-discrim APR and GFS elim-kde APR), as well as the MULTINST algorithm from [Auer, 1997]. We note that the performance reported for iterated-discrim APR involves choosing parameters to maximize test set performance, and so probably represents an upper bound for accuracy on this data set. The MULTINST algorithm assumes that all instances from all bags are generated independently. The Diverse Density results, which required no tuning, are comparable to or better than those of GFS elim-kde APR and MULTINST.

  Musk Data Set 1
  algorithm              accuracy
  iterated-discrim APR   92.4
  GFS elim-kde APR       91.3
  Diverse Density        88.9
  MULTINST               76.7

  Musk Data Set 2
  algorithm              accuracy
  iterated-discrim APR   89.2
  MULTINST               84.0
  Diverse Density        82.5
  GFS elim-kde APR       80.4

We also investigated two new applications of multiple-instance learning.
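The resulting decision rule is simple. In this sketch of ours, `concept` and `scales` stand for the max-DD location and feature scaling found on the training set, and `threshold` for the distance cutoff chosen by leave-one-out cross validation:

```python
import numpy as np

def classify_bag(bag, concept, scales, threshold):
    """A bag is called positive if any of its instances lies within the
    weighted-distance threshold of the learned concept point."""
    d = np.sqrt(((scales * (bag - concept)) ** 2).sum(axis=1))
    return bool(d.min() < threshold)
```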
The first of these is to learn a simple description of a person from a series of images that are labeled positive if they contain the person and negative otherwise. For a positively labeled image we only know that the person is somewhere in it, but we do not know where. We sample 54 subimages of varying centers and sizes and declare them to be instances in one positive bag, since one of them contains the person. This is repeated for every positive and negative image. We use a very simple representation for the instances: each subimage is divided into three parts which roughly correspond to where the head, torso, and legs of the person would be. The three dominant colors (one for each subsection) are used to represent the image. Figure 4 shows a training set where every bag included two people, yet the algorithm learned a description of the person who appears in all the images. This technique is expanded in [Maron and LakshmiRatan, 1998] to learn descriptions of natural images and use the learned concept to retrieve similar images from a large image database.

Another new application uses Diverse Density in the stock selection problem. Every month, there are stocks that perform well for fundamental reasons and stocks that perform well because of flukes; there are many more of the latter, but we are interested in the former. For every month, we take the 100 stocks with the highest return and put them in a positive bag, hoping that at least one of them did well for fundamental reasons. Negative bags are created from the bottom 5 stocks in every month. A stock instance is described by 17 features such as momentum, price to fair-value, etc. Grantham, Mayo, Van Otterloo & Co. kindly provided us with data on the 600 largest US stocks since 1978. We tested the algorithm through five runs of training for ten years, then testing on the next year.
In each run, the algorithm returned the stock description (location in feature space and a scaling of the features) that maximized Diverse Density. The test stocks were then ranked and decilized by distance (in weighted feature space) to the max-DD point. Figure 5 shows the average return of every decile.

Figure 4: A training set of images, every one of which has the person in common.

Figure 6: Black bars show Diverse Density's average return on a decile, and the white bars show GMO's predictor's return.

The return in the top decile (stocks that are most like the "fundamental stock") is positive and higher than the average return of a GMO predictor. Likewise, the return in the bottom decile is negative and below that of a GMO predictor.

4 Conclusion

In this paper, we have shown that Diverse Density is a general tool with which to learn from multiple-instance examples. In addition, we have shown that multiple-instance problems occur in a wide variety of domains. We attempted to show the various ways in which ambiguity can lead to the multiple-instance framework: through lack of knowledge in the drug discovery example, through ambiguity of representation in the vision example, and through a high degree of noise in the stock example.

Acknowledgements

We thank Peter Dayan and Paul Viola at MIT and Tom Hancock and Chris Darnell at GMO for helpful discussions, and the AFOSR ASSERT program, Parent Grant #F49620-93-1-0263, for their support of this research.

References

[Auer, 1997] P. Auer. On Learning from Multi-Instance Examples: Empirical Evaluation of a Theoretical Approach. NeuroCOLT Technical Report Series, NC-TR-97-025, March 1997.

[Blum and Kalai, 1998] A. Blum and A. Kalai. A Note on Learning from Multiple-Instance Examples. To appear in Machine Learning, 1998.

[Dietterich et al., 1997] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Perez. Solving the Multiple-Instance Problem with Axis-Parallel Rectangles.
Artificial Intelligence Journal, 89, 1997.

[Long and Tan, 1996] P. M. Long and L. Tan. PAC-learning axis-aligned rectangles with respect to product distributions from multiple-instance examples. In Proceedings of the 1996 Conference on Computational Learning Theory, 1996.

[Maron and LakshmiRatan, 1998] O. Maron and A. LakshmiRatan. Multiple-Instance Learning for Natural Scene Classification. Submitted to CVPR-98, 1998.
1997
Monotonic Networks

Joseph Sill
Computation and Neural Systems Program
California Institute of Technology
MC 136-93, Pasadena, CA 91125
email: joe@cs.caltech.edu

Abstract

Monotonicity is a constraint which arises in many application domains. We present a machine learning model, the monotonic network, for which monotonicity can be enforced exactly, i.e., by virtue of functional form. A straightforward method for implementing and training a monotonic network is described. Monotonic networks are proven to be universal approximators of continuous, differentiable monotonic functions. We apply monotonic networks to a real-world task in corporate bond rating prediction and compare them to other approaches.

1 Introduction

Several recent papers in machine learning have emphasized the importance of priors and domain-specific knowledge. In their well-known presentation of the bias-variance tradeoff (Geman and Bienenstock, 1992), Geman and Bienenstock conclude by arguing that the crucial issue in learning is the determination of the "right biases" which constrain the model in the appropriate way given the task at hand. The No-Free-Lunch theorem of Wolpert (Wolpert, 1996) shows, under the 0-1 error measure, that if all target functions are equally likely a priori, then all possible learning methods do equally well in terms of average performance over all targets. One is led to the conclusion that consistently good performance is possible only with some agreement between the modeler's biases and the true (non-flat) prior. Finally, the work of Abu-Mostafa on learning from hints (Abu-Mostafa, 1990) has shown both theoretically (Abu-Mostafa, 1993) and experimentally (Abu-Mostafa, 1995) that the use of prior knowledge can be highly beneficial to learning systems.

One piece of prior information that arises in many applications is the monotonicity constraint, which asserts that an increase in a particular input cannot result in a decrease in the output.
A method was presented in (Sill and Abu-Mostafa, 1996) which enforces monotonicity approximately by adding a second term measuring "monotonicity error" to the usual error measure. This technique was shown to yield improved error rates on real-world applications. Unfortunately, the method can be quite expensive computationally. It would be useful to have a model which obeys monotonicity exactly, i.e., by virtue of functional form. We present here such a model, which we will refer to as a monotonic network. A monotonic network implements a piecewise-linear surface by taking maximum and minimum operations on groups of hyperplanes. Monotonicity constraints are enforced by constraining the signs of the hyperplane weights. Monotonic networks can be trained using the usual gradient-based optimization methods typically used with other models such as feedforward neural networks.

Armstrong (Armstrong et al., 1996) has developed a model called the adaptive logic network which is capable of enforcing monotonicity and appears to have some similarities to the approach presented here. The adaptive logic network, however, is available only through a commercial software package. The training algorithms are proprietary and have not been fully disclosed in academic journals. The monotonic network therefore represents (to the best of our knowledge) the first model presented in an academic setting which has the ability to enforce monotonicity.

Section II describes the architecture and training procedure for monotonic networks. Section III presents a proof that monotonic networks can uniformly approximate any continuous monotonic function with bounded partial derivatives to an arbitrary level of accuracy. Monotonic networks are applied to a real-world problem in bond rating prediction in Section IV. In Section V, we discuss the results and consider future directions.
2 Architecture and Training Procedure

A monotonic network has a feedforward, three-layer (two hidden-layer) architecture (Fig. 1). The first layer of units computes different linear combinations of the input vector. If increasing monotonicity is desired for a particular input, then all the weights connected to that input are constrained to be positive. Similarly, weights connected to an input where decreasing monotonicity is required are constrained to be negative. The first-layer units are partitioned into several groups (the number of units in each group is not necessarily the same). Corresponding to each group is a second-layer unit, which computes the maximum over all first-layer units within the group. The final output unit computes the minimum over all groups. More formally, if we have $K$ groups with outputs $g_1, g_2, \ldots, g_K$, and if group $k$ consists of $h_k$ hyperplanes $w^{(k,1)}, w^{(k,2)}, \ldots, w^{(k,h_k)}$, then

$$g_k(x) = \max_{1 \le j \le h_k} w^{(k,j)} \cdot x - t^{(k,j)}.$$

Let $y$ be the final output of the network. Then $y = \min_{1 \le k \le K} g_k(x)$, or, for classification problems, $y = \sigma\bigl(\min_k g_k(x)\bigr)$, where $\sigma$ is a sigmoid, e.g. $\sigma(u) = \frac{1}{1+e^{-u}}$.

Figure 1: This monotonic network obeys increasing monotonicity in all 3 inputs because all weights in the first layer are constrained to be positive.

In the discussions which follow, it will be useful to define the term active. We will call a group $l$ active at $x$ if $g_l(x) = \min_k g_k(x)$, i.e., if the group determines the output of the network at that point. Similarly, we will say that a hyperplane is active at $x$ if its group is active at $x$ and the hyperplane is the maximum over all hyperplanes in the group. As will be shown in the following section, the three-layer architecture allows a monotonic network to approximate any continuous, differentiable monotonic function arbitrarily well, given sufficiently many groups and sufficiently many hyperplanes within each group.
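The forward computation is just a max within each group followed by a min across groups. A minimal sketch of ours, with `groups` an assumed list of (weights, thresholds) pairs:

```python
import numpy as np

def monotonic_net(x, groups):
    """Forward pass of a monotonic network.

    groups: list of (W, t) pairs, W of shape (h_k, n), t of shape (h_k,).
    Increasing monotonicity in every input holds when all entries of every
    W are nonnegative: each hyperplane is then nondecreasing in x, and the
    max and min operations preserve that.
    """
    return min((W @ x - t).max() for W, t in groups)
```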
The maximum operation within each group allows the network to approximate convex (positive second derivative) surfaces, while the minimum operation over groups enables the network to implement the concave (negative second derivative) areas of the target function (Figure 2).

Figure 2: This surface is implemented by a monotonic network consisting of three groups. The first and third groups consist of three hyperplanes, while the second group has only two.

Monotonic networks can be trained using many of the standard gradient-based optimization techniques commonly used in machine learning. The gradient for each hyperplane is found by computing the error over all examples for which the hyperplane is active. After the parameter update is made according to the rule of the optimization technique, each training example is reassigned to the hyperplane that is now active at that point. The set of examples for which a hyperplane is active can therefore change during the course of training. The constraints on the signs of the weights are enforced using an exponential transformation. If increasing monotonicity is desired in input variable $i$, then for all $j, k$ the weights corresponding to that input are represented as $w_i^{(j,k)} = e^{z_i^{(j,k)}}$. The optimization algorithm can modify $z_i^{(j,k)}$ freely during training while maintaining the constraint. If decreasing monotonicity is required, then for all $j, k$ we take $w_i^{(j,k)} = -e^{z_i^{(j,k)}}$.

3 Universal Approximation Capability

In this section, we demonstrate that monotonic networks have the capacity to approximate uniformly, to an arbitrary degree of accuracy, any continuous, bounded, differentiable function on the unit hypercube $[0,1]^D$ which is monotonic in all variables and has bounded partial derivatives. We will say that $x'$ dominates $x$ if $\forall\, 1 \le d \le D$, $x'_d \ge x_d$.
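The exponential reparameterization can be illustrated on the simplest possible case. The sketch below is ours, not the paper's code: it trains a single positive-weight linear unit on squared error; a full monotonic network would apply the same $w = e^z$ step hyperplane by hyperplane, restricted to each hyperplane's currently active examples:

```python
import numpy as np

def train_monotone_linear(X, y, steps=2000, lr=0.1):
    """Sign-constraint trick: represent each weight as w = e^z and do plain
    gradient descent on z, so w stays strictly positive throughout training."""
    rng = np.random.default_rng(0)
    z = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(steps):
        w = np.exp(z)
        err = X @ w + b - y                # squared-error residuals
        grad_w = X.T @ err / len(y)        # gradient with respect to w
        z -= lr * grad_w * w               # chain rule: dE/dz = (dE/dw) * e^z
        b -= lr * err.mean()
    return np.exp(z), b
```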
A function $m$ is monotonic in all variables if it satisfies the constraint that for all $x, x'$, if $x'$ dominates $x$ then $m(x') \ge m(x)$.

Theorem 3.1 Let $m(x)$ be any continuous, bounded monotonic function with bounded partial derivatives, mapping $[0,1]^D$ to $\mathbb{R}$. Then there exists a function $m_{net}(x)$ which can be implemented by a monotonic network and is such that, for any $\epsilon$ and any $x \in [0,1]^D$, $|m(x) - m_{net}(x)| < \epsilon$.

Proof: Let $b$ be the maximum value and $a$ be the minimum value which $m$ takes on $[0,1]^D$. Let $\alpha$ bound the magnitude of all partial first derivatives of $m$ on $[0,1]^D$. Define an equispaced grid of points on $[0,1]^D$, where $\delta = \frac{1}{n}$ is the spacing between grid points along each dimension; i.e., the grid is the set $S$ of points $(i_1\delta, i_2\delta, \ldots, i_D\delta)$ where $1 \le i_1 \le n$, $1 \le i_2 \le n$, ..., $1 \le i_D \le n$. Corresponding to each grid point $x' = (x'_1, x'_2, \ldots, x'_D)$, assign a group consisting of $D+1$ hyperplanes. One hyperplane in the group is the constant output plane $y = m(x')$. In addition, for each dimension $d$, place a hyperplane $y = \gamma(x_d - x'_d) + m(x')$, where $\gamma > \frac{b-a}{\delta}$. This construction ensures that the group associated with $x'$ cannot be active at any point $x^*$ where there exists a $d$ such that $x^*_d - x'_d > \delta$, since the group's output at such a point must be greater than $b$ and hence greater than the output of a group associated with another grid point.

Now consider any point $x \in [0,1]^D$. Let $s^{(1)}$ be the unique grid point in $S$ such that $\forall d$, $0 \le x_d - s^{(1)}_d < \delta$, i.e., $s^{(1)}$ is the closest grid point to $x$ which $x$ dominates. Then we can show that $m_{net}(x) \ge m(s^{(1)})$. Consider an arbitrary grid point $s' \ne s^{(1)}$. By the monotonicity of $m$, if $s'$ dominates $s^{(1)}$, then $m(s') \ge m(s^{(1)})$, and hence the group associated with $s'$ has a constant output hyperplane $y = m(s') \ge m(s^{(1)})$ and therefore outputs a value $\ge m(s^{(1)})$ at $x$. If $s'$ does not dominate $s^{(1)}$, then there exists a $d$ such that $s^{(1)}_d > s'_d$. Therefore, $x_d - s'_d \ge \delta$, meaning that the output of the group associated with $s'$ is at least $b \ge m(s^{(1)})$.
All groups have output at least as large as $m(s^{(1)})$, so we have indeed shown that $m_{net}(x) \ge m(s^{(1)})$. Now consider the grid point $s^{(2)}$ that is obtained by adding $\delta$ to each coordinate of $s^{(1)}$. The group associated with $s^{(2)}$ outputs $m(s^{(2)})$ at $x$, so $m_{net}(x) \le m(s^{(2)})$. Therefore, we have $m(s^{(1)}) \le m_{net}(x) \le m(s^{(2)})$. Since $x$ dominates $s^{(1)}$ and is dominated by $s^{(2)}$, by monotonicity we also have $m(s^{(1)}) \le m(x) \le m(s^{(2)})$. $|m(x) - m_{net}(x)|$ is therefore bounded by $|m(s^{(2)}) - m(s^{(1)})|$. By Taylor's theorem for multivariate functions, we know that

$$m(s^{(2)}) - m(s^{(1)}) = \nabla m(c) \cdot (s^{(2)} - s^{(1)})$$

for some point $c$ on the line segment between $s^{(1)}$ and $s^{(2)}$. Given the assumptions made at the outset, $|m(s^{(2)}) - m(s^{(1)})|$, and hence $|m(x) - m_{net}(x)|$, can be bounded by $D\delta\alpha$. We take $\delta < \frac{\epsilon}{D\alpha}$ to complete the proof. ∎

4 Experimental Results

We tested monotonic networks on a real-world problem concerning the prediction of corporate bond ratings. Rating agencies such as Standard & Poor's (S&P) issue bond ratings intended to assess the level of risk of default associated with the bond. S&P ratings can range from AAA down to B- or lower. A model which accurately predicts the S&P rating of a bond given publicly available financial information about the issuer has considerable value. Rating agencies do not rate all bonds, so an investor could use the model to assess the risk associated with a bond which S&P has not rated. The model can also be used to anticipate rating changes before they are announced by the agency.

The dataset, which was donated by a Wall Street firm, is made up of 196 examples. Each example consists of 10 financial ratios reflecting the fundamental characteristics of the issuing firm, along with an associated rating. The meaning of the financial ratios was not disclosed by the firm for proprietary reasons. The rating labels were converted into integers ranging from 1 to 16. The task was treated as a single-output regression problem rather than a 16-class classification problem.
Monotonicity constraints suggest themselves naturally in this context. Although the meanings of the features are not revealed, it is reasonable to assume that they consist of quantities such as profitability, debt, etc. It seems intuitive that, for instance, the higher the profitability of the firm is, the stronger the firm is, and hence, the higher the bond rating should be. Monotonicity was therefore enforced in all input variables.

Three different types of models (all trained on squared error) were compared: a linear model, standard two-layer feedforward sigmoidal neural networks, and monotonic networks. The 196 examples were split into 150 training examples and 46 test examples. In order to get a statistically significant evaluation of performance, a leave-k-out procedure was implemented in which the 196 examples were split 200 different ways, and each model was trained on the training set and tested on the test set for each split. The results shown are averages over the 200 splits.

Two different approaches were used with the standard neural networks. In both cases, the networks were trained for 2000 batch-mode iterations of gradient descent with momentum and an adaptive learning rate, which sufficed to allow the networks to approach minima of the training error. The first method used all 150 examples for direct training and minimized the training error as much as possible. The second technique split the 150 examples into 110 for direct training and 40 used for validation, i.e., to determine when to stop training. Specifically, the mean squared error on the 40 examples was monitored over the course of the 2000 iterations, and the state of the network at the iteration where the lowest validation error was obtained was taken as the final network to be tested on the test set. In both cases, the networks were initialized with small random weights.
The networks had direct input-output connections in addition to hidden units in order to facilitate the implementation of the linear aspects of the target function. The monotonic networks were trained for 1000 batch-mode iterations of gradient descent with momentum and an adaptive learning rate. The parameters of each hyperplane in the network were initialized to be the parameters of the linear model obtained from the training set, plus a small random perturbation. This procedure ensured that the network was able to find a reasonably good fit to the data. Since the meanings of the features were not known, it was not known a priori whether increasing or decreasing monotonicity should hold for each feature. The directions of monotonicity were determined by observing the signs of the weights of the linear model obtained from the training data.

  Model       training error   test error
  Linear      3.45 ± .02       4.09 ± .06
  10-2-1 net  1.83 ± .01       4.22 ± .14
  10-4-1 net  1.22 ± .01       4.86 ± .16
  10-6-1 net  0.87 ± .01       5.57 ± .20
  10-8-1 net  0.65 ± .01       5.56 ± .16

Table 1: Performance of linear model and standard networks on the bond rating problem

The results support the hypothesis of a monotonic (or at least roughly monotonic) target function. As Table 1 shows, standard neural networks have sufficient flexibility to fit the training data quite accurately (an n-k-1 network means a 2-layer network with n inputs, k hidden units, and 1 output). However, their excessive, non-monotonic degrees of freedom lead to overfitting, and their out-of-sample performance is even worse than that of a linear model. The use of early stopping alleviates the overfitting and enables the networks to outperform the linear model. Without the monotonicity constraint, however, standard neural networks still do not perform as well as the monotonic networks. The results seem to be quite robust with respect to the choice of the number of hidden units for the standard networks and the number and size of groups for the monotonic networks.
  Model       training error   test error
  10-2-1 net  2.46 ± .04       3.83 ± .09
  10-4-1 net  2.19 ± .05       3.82 ± .08
  10-6-1 net  2.14 ± .05       3.77 ± .07
  10-8-1 net  2.13 ± .06       3.86 ± .09

Table 2: Performance of standard networks using early stopping on the bond rating problem

5 Conclusion

We presented a model, the monotonic network, in which monotonicity constraints can be enforced exactly, without adding a second term to the usual objective function. A straightforward method for implementing and training such models was demonstrated, and the method was shown to outperform other methods on a real-world problem.

  Model                          training error   test error
  2 groups, 2 planes per group   2.78 ± .05       3.71 ± .07
  3 groups, 3 planes per group   2.64 ± .04       3.56 ± .06
  4 groups, 4 planes per group   2.50 ± .04       3.48 ± .06
  5 groups, 5 planes per group   2.44 ± .03       3.43 ± .06

Table 3: Performance of monotonic networks on the bond rating problem

Several areas of research regarding monotonic networks need to be addressed in the future. One issue concerns the choice of the number of groups and the number of planes in each group. In general, the usual bias-variance tradeoff that holds for other models will apply here, and the optimal number of groups and planes will be quite difficult to determine a priori. There may be instances where additional prior information regarding the convexity or concavity of the target function can guide the decision, however. Another interesting observation is that a monotonic network could also be implemented by reversing the maximum and minimum operations, i.e., by taking the maximum over groups where each group outputs the minimum over all of its hyperplanes. It will be worthwhile to try to understand when one approach or the other is most appropriate.

Acknowledgments

The author is very grateful to Yaser Abu-Mostafa for considerable guidance. I also thank John Moody for supplying the data.
Amir Atiya, Eric Bax, Zehra Cataltepe, Malik Magdon-Ismail, Alexander Nicholson, and Xubo Song supplied many useful comments.

References

[1] S. Geman and E. Bienenstock (1992). Neural Networks and the Bias/Variance Dilemma. Neural Computation 4, pp. 1-58.

[2] D. Wolpert (1996). The Lack of A Priori Distinctions Between Learning Algorithms. Neural Computation 8, pp. 1341-1390.

[3] Y. Abu-Mostafa (1990). Learning from Hints in Neural Networks. Journal of Complexity 6, 192-198.

[4] Y. Abu-Mostafa (1993). Hints and the VC Dimension. Neural Computation 4, 278-288.

[5] Y. Abu-Mostafa (1995). Financial Market Applications of Learning from Hints. In Neural Networks in the Capital Markets, A. Refenes, ed., 221-232. Wiley, London, UK.

[6] J. Sill and Y. Abu-Mostafa (1996). Monotonicity Hints. To appear in Advances in Neural Information Processing Systems 9.

[7] W. W. Armstrong, C. Chu, and M. M. Thomas (1996). Feasibility of Using Adaptive Logic Networks to Predict Compressor Unit Failure. In Applications of Neural Networks in Environment, Energy, and Health, Chapter 12. P. Keller, S. Hashem, L. Kangas, and R. Kouzes, eds. World Scientific Publishing Company, Ltd., London.
1997
Relative Loss Bounds for Multidimensional Regression Problems

Jyrki Kivinen
Department of Computer Science
P.O. Box 26 (Teollisuuskatu 23)
FIN-00014 University of Helsinki, Finland

Manfred K. Warmuth
Department of Computer Science
University of California, Santa Cruz
Santa Cruz, CA 95064, USA

Abstract

We study on-line generalized linear regression with multidimensional outputs, i.e., neural networks with multiple output nodes but no hidden nodes. We allow at the final layer transfer functions such as the softmax function that need to consider the linear activations of all the output neurons. We use distance functions of a certain kind in two completely independent roles in deriving and analyzing on-line learning algorithms for such tasks. We use one distance function to define a matching loss function for the (possibly multidimensional) transfer function, which allows us to generalize earlier results from one-dimensional to multidimensional outputs. We use another distance function as a tool for measuring progress made by the on-line updates. This shows how previously studied algorithms such as gradient descent and exponentiated gradient fit into a common framework. We evaluate the performance of the algorithms using relative loss bounds that compare the loss of the on-line algorithm to the best off-line predictor from the relevant model class, thus completely eliminating probabilistic assumptions about the data.

1 INTRODUCTION

In a regression problem, we have a sequence of $n$-dimensional real-valued inputs $x_t \in \mathbb{R}^n$, $t = 1, \ldots, \ell$, and for each input $x_t$ a $k$-dimensional real-valued desired output $y_t \in \mathbb{R}^k$. Our goal is to find a mapping that at least approximately models the dependency between $x_t$ and $y_t$. Here we consider the parametric case $\hat y_t = f(w; x_t)$, where the actual output $\hat y_t$ corresponding to the input $x_t$ is determined by a parameter vector $w \in \mathbb{R}^m$ (e.g., weights in a neural network) through a given fixed model $f$ (e.g., a neural network architecture).
Thus, we wish to obtain parameters $w$ such that, in some sense, $f(w; x_t) \approx y_t$ for all $t$. The most basic model $f$ to consider is the linear one, which in the one-dimensional case $k = 1$ means that $f(w; x_t) = w \cdot x_t$ for $w \in \mathbb{R}^n$. In the multidimensional case we actually have a whole matrix $\Omega \in \mathbb{R}^{k \times n}$ of parameters and $f(\Omega; x_t) = \Omega x_t$. The goodness of the fit is quantitatively measured in terms of a loss function; the square loss given by $\sum_{t,j} (y_{t,j} - \hat y_{t,j})^2 / 2$ is a popular choice.

In generalized linear regression [MN89] we fix a transfer function $\phi$ and apply it on top of a linear model. Thus, in the one-dimensional case we would have $f(w; x_t) = \phi(w \cdot x_t)$. Here $\phi$ is usually a continuous increasing function from $\mathbb{R}$ to $\mathbb{R}$, such as the logistic function that maps $z$ to $1/(1 + e^{-z})$. It is still possible to use the square loss, but this can lead to problems. In particular, when we apply the logistic transfer function and try to find a weight vector $w$ that minimizes the total square loss over $\ell$ examples $(x_t, y_t)$, we may have up to $\ell^n$ local minima [AHW95, Bud93]. Hence, some other choice of loss function might be more convenient. In the one-dimensional case it can be shown that any continuous strictly increasing transfer function $\phi$ has a specific matching loss function $L_\phi$ such that, among other useful properties, $\sum_t L_\phi(y_t, \phi(w \cdot x_t))$ is always convex in $w$, so local minima are not a problem [AHW95]. For example, the matching loss function for the logistic transfer function is the relative entropy (a generalization of the logarithmic loss for continuous-valued outcomes). The square loss is the matching loss function for the identity transfer function (i.e., linear regression).

The main theme of the present paper is the application of a particular kind of distance function to analyzing learning algorithms in (possibly multidimensional) generalized linear regression problems.
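For $k = 1$ this correspondence is easy to check numerically. For the logistic transfer $\phi(a) = 1/(1+e^{-a})$, the convex potential is $\Phi(a) = \log(1+e^a)$, the induced distance is $\Delta_\phi(\hat a, a) = \Phi(\hat a) - \Phi(a) - \phi(a)(\hat a - a)$, and the matching loss $L_\phi(\phi(a), \phi(\hat a)) = \Delta_\phi(\hat a, a)$ is exactly the relative entropy. A small sketch of ours (not the paper's code):

```python
import numpy as np

def logistic(a):
    return 1.0 / (1.0 + np.exp(-a))

def matching_loss_logistic(y, y_hat):
    """Matching loss for the logistic transfer: the relative entropy
    between outcome y and prediction y_hat, both in (0, 1)."""
    return y * np.log(y / y_hat) + (1 - y) * np.log((1 - y) / (1 - y_hat))

def delta_logistic(a_hat, a):
    """Distance induced by the potential Phi(a) = log(1 + e^a):
    Delta_phi(a_hat, a) = Phi(a_hat) - Phi(a) - phi(a) (a_hat - a)."""
    Phi = lambda u: np.log1p(np.exp(u))
    return Phi(a_hat) - Phi(a) - logistic(a) * (a_hat - a)
```

The two functions agree on matching arguments: `matching_loss_logistic(logistic(a), logistic(a_hat))` equals `delta_logistic(a_hat, a)`.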
We consider a particular manner in which a mapping φ: R^k → R^k can be used to define a distance function Δ_φ: R^k × R^k → R; the assumption we must make here is that φ has a convex potential function. The matching loss function L_φ mentioned above for a transfer function φ in the one-dimensional case is given in terms of the distance function Δ_φ as L_φ(φ(a), φ(â)) = Δ_φ(â, a). Here, as whenever we use the matching loss L_φ(y, ŷ), we assume that y and ŷ are in the range of φ, so we can write y = φ(a) and ŷ = φ(â) for some a and â. Notice that for k = 1, any strictly increasing continuous function has a convex potential (i.e., integral) function. In the more interesting case k > 1, we can consider transfer functions such as the softmax function, which is commonly used to transform arbitrary vectors a ∈ R^k into probability vectors ŷ (i.e., vectors such that ŷ_i ≥ 0 for all i and Σ_i ŷ_i = 1). The matching loss function for the softmax function, defined analogously with the one-dimensional case, turns out to be the relative entropy (or Kullback-Leibler divergence), which indeed is a commonly used measure of distance between probability vectors. For the identity transfer function, the matching loss function is the squared Euclidean distance. The first result we get from this observation connecting matching losses to a general notion of distance is that certain previous results on generalized linear regression with matching loss on one-dimensional outputs [HKW95] directly generalize to multidimensional outputs. From a more general point of view, a much more interesting feature of these distance functions is how they allow us to view certain previously known learning algorithms, and introduce new ones, in a simple unified framework. To briefly explain this framework without unnecessary complications, we restrict the following discussion to the case k = 1, i.e., f(w; x) = φ(w · x) ∈ R with w ∈ R^n.
We consider on-line learning algorithms, by which we here mean an algorithm that processes the training examples one by one, the pair (x_t, y_t) being processed at time t. Based on the training examples the algorithm produces a whole sequence of weight vectors w_t, t = 1, ..., ℓ. At each time t the old weight vector w_t is updated into w_{t+1} based on x_t and y_t. The best-known such algorithm is on-line gradient descent. To see some alternatives, consider first a distance function Δ_ψ defined on R^n by some function ψ: R^n → R^n. (Thus, we assume that ψ has a convex potential.) We represent the update somewhat indirectly by introducing a new parameter vector θ_t ∈ R^n from which the actual weights w_t are obtained by the mapping w_t = ψ(θ_t). The new parameters are updated by

θ_{t+1} = θ_t − η ∇_{w_t} L_φ(y_t, φ(w_t · x_t))    (1)

where η > 0 is a learning rate. We call this algorithm the general additive algorithm with parameterization function ψ. Notice that here θ is updated by the gradient with respect to w, so this is not just a gradient descent with reparameterization [JW98]. However, we obtain the usual on-line gradient descent when ψ is the identity function. When ψ is the softmax function, we get the so-called exponentiated gradient (EG) algorithm [KW97, HKW95]. The connection of the distance function Δ_ψ to the update (1) is two-fold. First, (1) can be motivated as an approximate solution to a minimization problem in which the distance Δ_ψ(θ_t, θ_{t+1}) is used as a kind of penalty term to prevent too drastic an update based on a single example. Second, the distance function Δ_ψ can be used as a potential function in analyzing the performance of the resulting algorithm. The same distance functions have been used previously for exactly the same purposes [KW97, HKW95] in important special cases (the gradient descent and EG algorithms) but without realizing the full generality of the method.
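A minimal sketch of one step of this update for k = 1 (illustrative code, not from the paper; for a matching loss the gradient with respect to w reduces to (ŷ − y)x): choosing ψ as the identity gives ordinary gradient descent, while choosing ψ as the softmax gives the exponentiated gradient algorithm.

```python
import numpy as np

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def logistic(a):
    return 1.0 / (1.0 + np.exp(-a))

def ga_update(theta, x, y, psi, eta):
    """One step of the general additive update (k = 1, logistic transfer).

    theta is moved against the gradient of the matching loss taken with
    respect to w = psi(theta); for a matching loss that gradient is
    simply (y_hat - y) * x.
    """
    w = psi(theta)                    # actual weights
    y_hat = logistic(np.dot(w, x))    # prediction
    return theta - eta * (y_hat - y) * x

theta = np.zeros(3)
x, y, eta = np.array([0.5, -0.2, 0.1]), 1.0, 0.5

gd_theta = ga_update(theta, x, y, psi=lambda t: t, eta=eta)  # gradient descent
eg_theta = ga_update(theta, x, y, psi=softmax, eta=eta)      # exponentiated gradient
print(softmax(eg_theta))  # EG weights remain a probability vector
```

For the softmax parameterization, the multiplicative form of the EG update is implicit: w_{t+1} = softmax(θ_{t+1}) is proportional to w_t · exp(−η(ŷ_t − y_t)x_t), renormalized.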
It should be noted that the choice of the parameterization function ψ is left completely free, as long as ψ has a convex potential function. (In contrast, the choice of the transfer function φ depends on what kind of a regression problem we wish to solve.) Earlier work suggests that the softmax parameterization function (i.e., the EG algorithm) is particularly suited for situations in which some sparse weight vector w gives a good match to the data [HKW95, KW97]. (Because softmax normalizes the weight vector and makes the components positive, a simple transformation of the input data is typically added to realize positive and negative weights with arbitrary norm.) In work parallel to this, the analogue of the general additive update (1) in the context of linear classification, i.e., with a threshold transfer function, has recently been developed and analyzed by Grove et al. [GLS97] with methods and results very similar to ours. Cesa-Bianchi [CB97] has used somewhat different methods to obtain bounds also in cases in which the loss function does not match the transfer function. Jagota and Warmuth [JW98] view (1) as an Euler discretization of a system of partial differential equations and investigate the performance of the algorithm as the discretization parameter approaches zero. The distance functions we use here have previously been applied in the context of exponential families by Amari [Ama85] and others. Here we only need some basic technical properties of the distance functions that can easily be derived from the definitions. For a discussion of our line of work in a statistical context see Azoury and Warmuth [AW97]. In Section 2 we review the definition of a matching loss function and give examples. Section 3 discusses the general additive algorithm in more detail. The actual relative on-line loss bounds we have for the general additive algorithm are explained in Section 4.
2 DISTANCE FUNCTIONS AND MATCHING LOSSES

Let φ: R^k → R^k be a function that has a convex potential function P_φ (i.e., φ = ∇P_φ for some convex P_φ: R^k → R). We first define a distance function Δ_φ for φ by

Δ_φ(a, â) = P_φ(a) − P_φ(â) − φ(â) · (a − â).    (2)

Thus, the distance Δ_φ(a, â) is the error we make if we approximate P_φ(a) by its first-order Taylor polynomial around â. Convexity of P_φ implies that Δ_φ is convex in its first argument. Further, Δ_φ(a, â) is nonnegative, and zero if and only if φ(a) = φ(â). We can alternatively write (2) as Δ_φ(a, â) = ∫_â^a (φ(r) − φ(â)) · dr, where the integral is a path integral the value of which must be independent of the actual path chosen between â and a. In the one-dimensional case, the integral is a simple definite integral, and φ has a convex potential (i.e., integral) function if it is strictly increasing and continuous [AHW95, HKW95]. Let now φ have range V_φ ⊆ R^k and distance function Δ_φ. Assuming that there is a function L_φ: V_φ × V_φ → R such that L_φ(φ(a), φ(â)) = Δ_φ(â, a) holds for all a and â, we say that L_φ is the matching loss function for φ.

Example 1. Let φ be a linear function given by φ(a) = Aa where A ∈ R^{k×k} is symmetric and positive definite. Then φ has the convex potential function P_φ(a) = a^T Aa/2, and (2) gives Δ_φ(a, â) = (1/2)(a − â)^T A(a − â). Hence, L_φ(y, ŷ) = (1/2)(y − ŷ)^T A^{−1}(y − ŷ) for all y, ŷ ∈ R^k. □

Example 2. Let σ: R^k → R^k, σ_i(a) = exp(a_i)/Σ_{j=1}^k exp(a_j), be the softmax function. It has a potential function given by P_σ(a) = ln Σ_{j=1}^k exp(a_j). To see that P_σ is convex, notice that the Hessian D²P_σ is given by (D²P_σ(a))_{ij} = δ_{ij}σ_i(a) − σ_i(a)σ_j(a). Given a vector x ∈ R^k, let now X be a random variable that has probability σ_i(a) of taking the value x_i. We have x^T D²P_σ(a) x = Σ_{i=1}^k σ_i(a)x_i² − Σ_{i=1}^k Σ_{j=1}^k σ_i(a)x_i σ_j(a)x_j = E[X²] − (E[X])² = Var X ≥ 0. Straightforward algebra now gives the relative entropy L_σ(y, ŷ) = Σ_{j=1}^k y_j ln(y_j/ŷ_j) as the matching loss function.
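The identity in Example 2 can be verified numerically (an illustrative sketch with arbitrary activation vectors): the distance (2) built from the log-sum-exp potential of the softmax function equals the relative entropy between the corresponding probability vectors, i.e. Δ_σ(â, a) = L_σ(σ(a), σ(â)).

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def potential(a):
    # P_sigma(a) = ln sum_j exp(a_j); its gradient is the softmax function.
    return np.log(np.sum(np.exp(a)))

def bregman(a_hat, a):
    # Delta_sigma(a_hat, a) = P(a_hat) - P(a) - sigma(a) . (a_hat - a), as in (2).
    return potential(a_hat) - potential(a) - np.dot(softmax(a), a_hat - a)

def kl(y, y_hat):
    # Relative entropy, the matching loss of the softmax transfer function.
    return np.sum(y * np.log(y / y_hat))

a = np.array([0.2, -1.0, 0.5])     # arbitrary activation vectors
a_hat = np.array([1.1, 0.3, -0.4])
print(bregman(a_hat, a))           # both print the same value
print(kl(softmax(a), softmax(a_hat)))
```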
(To allow y_j = 0 or ŷ_j = 0, we adopt the standard convention that 0 ln 0 = 0 ln(0/0) = 0 and y ln(y/0) = ∞ for y > 0.) □

In the relative loss bound proofs we use the basic property [JW98, Ama85]

Δ_φ(a, b) + Δ_φ(b, c) = Δ_φ(a, c) + (φ(c) − φ(b)) · (a − b).

This shows that our distances do not satisfy the triangle inequality. Usually they are not symmetric, either.

3 THE GENERAL ADDITIVE ALGORITHM

We consider on-line learning algorithms that at time t first receive an input x_t ∈ R^n, then produce an output ŷ_t ∈ R^k, and finally receive as feedback the desired output y_t ∈ R^k. To define the general additive algorithm, assume we are given a transfer function φ: R^k → R^k that has a convex potential function. (We will later use the matching loss as a performance measure.) We also require that all the desired outputs y_t are in the range of φ. The algorithm's predictions are now given by ŷ_t = φ(Ω_t x_t), where Ω_t ∈ R^{k×n} is the algorithm's weight matrix at time t. To see how the weight matrix is updated, assume further we have a parameterization function ψ: R^n → R^n with a distance Δ_ψ. The algorithm maintains kn real-valued parameters. We denote by Θ_t the k × n matrix of the values of these parameters immediately before trial t. Further, we denote by Θ_{t,j} the jth row of Θ_t, and by ψ(Θ_t) the matrix with ψ(Θ_{t,j}) as its jth row. Given initial parameter values Θ_1 and a learning rate η > 0, we now define the general additive (GA) algorithm as the algorithm that repeats at each trial t the following prediction and update steps.

Prediction: Upon receiving the instance x_t, give the prediction ŷ_t = φ(ψ(Θ_t)x_t).

Update: For j = 1, ..., k, set Θ_{t+1,j} = Θ_{t,j} − η(ŷ_{t,j} − y_{t,j})x_t.

Note that (2) implies ∇_a Δ_φ(a, â) = φ(a) − φ(â), so this update indeed turns out to be the same as (1) when we recall that L_φ(y_t, ŷ_t) = Δ_φ(Ω_t x_t, a_t) where y_t = φ(a_t). The update can be motivated by an optimization problem given in terms of the loss and distance.
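The two steps above can be sketched in a few lines (an illustrative toy run, not the paper's experiments): softmax transfer function, identity parameterization (so the weights equal the parameters), and synthetic data generated by a hypothetical fixed target matrix. The cumulative matching loss shrinks as the weights approach the target.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def kl(y, y_hat):
    # Matching loss of the softmax transfer function (relative entropy).
    return np.sum(y * np.log(y / y_hat))

rng = np.random.default_rng(0)
k, n, eta = 3, 4, 0.2
target = rng.normal(size=(k, n))   # hypothetical matrix generating desired outputs
theta = np.zeros((k, n))           # Theta_1: initial parameter matrix

losses = []
for t in range(300):
    x = rng.normal(size=n)
    y = softmax(target @ x)        # desired output, guaranteed in the range of phi
    # Prediction step (identity parameterization psi, so psi(Theta_t) = Theta_t):
    y_hat = softmax(theta @ x)
    losses.append(kl(y, y_hat))
    # Update step: Theta_{t+1,j} = Theta_{t,j} - eta * (y_hat_j - y_j) * x.
    theta -= eta * np.outer(y_hat - y, x)

print(sum(losses[:20]), sum(losses[-20:]))  # later losses are much smaller
```

Swapping in a softmax parameterization ψ row by row would turn this into the multidimensional EG algorithm without changing the rest of the loop.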
Consider updating an old parameter matrix Θ into a new matrix Θ̃ based on a single input x and desired output y. A natural goal would be to minimize the loss L_φ(y, φ(ψ(Θ̃)x)). However, the algorithm must avoid losing too much of the information it has gained during the previous trials and stored in the form of the old parameter matrix Θ. We thus set as the algorithm's goal to minimize the sum Δ_ψ(Θ̃, Θ) + η L_φ(y, φ(ψ(Θ̃)x)), where η > 0 is a parameter regulating how fast the algorithm is willing to move its parameters. Under certain regularity assumptions, the update rule of the GA algorithm can be shown to approximately solve this minimization problem. For more discussion and examples in the special case of linear regression, see [KW97]. An interesting related idea is using all the previous examples in the update instead of just the last one. For work along these lines in the linear case see Vovk [Vov97] and Foster [Fos91].

4 RELATIVE LOSS BOUNDS

Consider a sequence S = ((x_1, y_1), ..., (x_ℓ, y_ℓ)) of training examples, and let Loss_φ(GA, S) = Σ_{t=1}^ℓ L_φ(y_t, ŷ_t) be the loss incurred by the general additive algorithm on this sequence when it always uses its current weights Ω_t for making the tth prediction ŷ_t. Similarly, let Loss_φ(Ω, S) = Σ_{t=1}^ℓ L_φ(y_t, φ(Ωx_t)) be the loss of a fixed predictor Ω. Basically, our goal is to show that if some Ω achieves a small loss, then the algorithm is not doing much worse, regardless of how the sequence S was generated. Making additional probabilistic assumptions allows such on-line loss bounds to be converted into more traditional results about generalization errors [KW97]. To give the bounds for Loss_φ(GA, S) in terms of Loss_φ(Ω, S) we need some additional parameters. The first one is the distance Δ_ψ(Θ_1, Θ), where Ω = ψ(Θ) and Θ_1 is the initial parameter matrix of the GA algorithm (which can be arbitrary). The second one is defined by b_{X,ψ} = sup{x^T Dψ(θ)x | θ ∈ R^n, x ∈ X}, where X = {x_1, ...
, x_ℓ} is the set of input vectors and Dψ(θ) is the Jacobian with (Dψ(θ))_{ij} = ∂ψ_i(θ)/∂θ_j. The value b_{X,ψ} can be interpreted as the maximum norm of any input vector in a norm defined by the parameterization function ψ. In Example 3 below we show how b_{X,ψ} can easily be evaluated when ψ is a linear function or the softmax function. The third parameter c_φ, defined as the least c such that L_φ(y, ŷ) ≥ ‖y − ŷ‖²₂/(2c) holds for all y and ŷ in the range of φ, relates the matching loss function for the transfer function φ to the square loss. In Example 4 we evaluate this constant for linear functions, the softmax function, and the one-dimensional case.

Example 3. Consider bounding the value x^T Dσ(θ)x where σ is the softmax function. As we saw in Example 2, this value is a variance of a random variable with the range {x_1, ..., x_n}. Hence, we have b_{X,σ} ≤ max_{x∈X}(max_i x_i − min_i x_i)²/4 ≤ max_{x∈X} ‖x‖²_∞, where ‖x‖_∞ = max_i |x_i|. If ψ is a linear function with ψ(θ) = Aθ for a symmetric positive definite A, we clearly have b_{X,ψ} ≤ λ_max max_{x∈X} ‖x‖²₂, where λ_max is the largest eigenvalue of A. □

Example 4. For the softmax function σ the matching loss function L_σ is the relative entropy (see Example 2), for which it is well known that L_σ(y, ŷ) ≥ 2‖y − ŷ‖²₂. Hence, we have c_σ ≤ 1/4. If φ is a linear function given by a symmetric positive semidefinite matrix A, we see from Example 1 that c_φ is the largest eigenvalue of A. Finally, in the special case k = 1, with φ: R → R differentiable and strictly increasing, we can show c_φ ≤ Z if Z is a bound such that 0 < φ'(z) ≤ Z holds for all z. □

Assume now we are given constants b ≥ b_{X,ψ} and c ≥ c_φ. Our first loss bound states that for any parameter matrix Θ we have

Loss_φ(GA, S) ≤ 2 Loss_φ(ψ(Θ), S) + 4bc Δ_ψ(Θ_1, Θ)

when the learning rate is chosen as η = 1/(2bc). (Proofs are omitted from this extended abstract.) The advantage of this bound is that with a fixed learning rate it holds for any Θ, so we need no advance knowledge about a good Θ.
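A small sketch of how these constants feed into the fixed learning rate η = 1/(2bc) for the softmax case (the input set here is a made-up example; b uses the upper bound of Example 3, and c = 1/4 from Example 4):

```python
import numpy as np

# Hypothetical input set X; each row is one input vector x_t.
X = np.array([[0.5, -0.2, 0.1],
              [0.3,  0.4, -0.6]])

# Example 3: for the softmax parameterization, b_{X,sigma} <= max_x ||x||_inf^2.
b = max(np.max(np.abs(x)) ** 2 for x in X)

# Example 4: c_sigma <= 1/4 for the softmax transfer function.
c = 0.25

eta = 1.0 / (2 * b * c)   # learning rate used in the first loss bound
print(b, eta)
```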
The drawback is the factor 2 in front of Loss_φ(ψ(Θ), S), which suggests that asymptotically the algorithm might not ever achieve the performance of the best fixed predictor. A tighter bound can be achieved by more careful tuning. Thus, given constants K ≥ 0 and R > 0, if we choose the learning rate as η = (√((bcR)² + KbcR) − bcR)/(Kbc) (with η = 1/(2bc) if K = 0), we obtain

Loss_φ(GA, S) ≤ Loss_φ(ψ(Θ), S) + 2√((bcR)² + KbcR) + 2bcR

for any Θ that satisfies Loss_φ(ψ(Θ), S) ≤ K and Δ_ψ(Θ_1, Θ) ≤ R. This shows that if we restrict our comparison to parameter matrices within a given distance R of the initial matrix of the algorithm, and we have a reasonably good guess K as to the loss of the best fixed predictor within this distance, this knowledge allows the algorithm to asymptotically match the performance of this best fixed predictor.

Acknowledgments

The authors thank Katy Azoury, Chris Bishop, Nicolo Cesa-Bianchi, David Helmbold, and Nick Littlestone for helpful discussions. Jyrki Kivinen was supported by the Academy of Finland and the ESPRIT project NeuroCOLT. Manfred Warmuth was supported by the NSF grant CCR 9700201.

References

[Ama85] S. Amari. Differential Geometrical Methods in Statistics. Springer Verlag, Berlin, 1985.
[AHW95] P. Auer, M. Herbster, and M. K. Warmuth. Exponentially many local minima for single neurons. In Proc. 1995 Neural Information Processing Conference, pages 316-317. MIT Press, Cambridge, MA, November 1995.
[AW97] K. Azoury and M. K. Warmuth. Relative loss bounds and the exponential family of distributions. Unpublished manuscript, 1997.
[Bud93] M. Budinich. Some notes on perceptron learning. J. Phys. A: Math. Gen., 26:4237-4247, 1993.
[CB97] N. Cesa-Bianchi. Analysis of two gradient-based algorithms for on-line regression. In Proc. 10th Annu. Conf. on Comput. Learning Theory, pages 163-170. ACM, 1997.
[Fos91] D. P. Foster. Prediction in the worst case. The Annals of Statistics, 19(2):1084-1090, 1991.
[GLS97] A. J.
Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant updates. In Proc. 10th Annu. Conf. on Comput. Learning Theory, pages 171-183. ACM, 1997.
[HKW95] D. P. Helmbold, J. Kivinen, and M. K. Warmuth. Worst-case loss bounds for sigmoided linear neurons. In Proc. Neural Information Processing Systems 1995, pages 309-315. MIT Press, Cambridge, MA, November 1995.
[JW98] A. K. Jagota and M. K. Warmuth. Continuous versus discrete-time nonlinear gradient descent: Relative loss bounds and convergence. Presented at Fifth Symposium on Artificial Intelligence and Mathematics, Ft. Lauderdale, FL, 1998.
[KW97] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1-64, January 1997.
[MN89] P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman & Hall, New York, 1989.
[Vov97] V. Vovk. Competitive on-line linear regression. In Proc. Neural Information Processing Systems 1997. MIT Press, Cambridge, MA, 1998.
Gradients for retinotectal mapping Geoffrey J. Goodhill Georgetown Institute for Cognitive and Computational Sciences Georgetown University Medical Center 3970 Reservoir Road Washington DC 20007 geoff@giccs.georgetown.edu Abstract The initial activity-independent formation of a topographic map in the retinotectal system has long been thought to rely on the matching of molecular cues expressed in gradients in the retina and the tectum. However, direct experimental evidence for the existence of such gradients has only emerged since 1995. The new data has provoked the discussion of a new set of models in the experimental literature. Here, the capabilities of these models are analyzed, and the gradient shapes they predict in vivo are derived. 1 Introduction During the early development of the visual system in, for instance, rats, fish and chickens, retinal axons grow across the surface of the optic tectum and establish connections so as to form an ordered map. Although later neural activity refines the map, it is not required to set up the initial topography (for reviews see Udin & Fawcett (1988); Goodhill (1992)). A long-standing idea is that the initial topography is formed by matching gradients of receptor expression in the retina with gradients of ligand expression in the tectum (Sperry, 1963). Particular versions of this idea have been formalized in theoretical models such as those of Prestige & Willshaw (1975), Willshaw & von der Malsburg (1979), Whitelaw & Cowan (1981), and Gierer (1983; 1987). However, these models were developed in the absence of any direct experimental evidence for the existence of the necessary gradients. Since 1995, major breakthroughs have occurred in this regard in the experimental literature. These center around the Eph (erythropoietin-producing hepatocellular) subfamily of receptor tyrosine kinases.
Eph receptors and their ligands have been shown to be expressed in gradients in the developing retina and tectum respectively, and to play a role in guiding axons to appropriate positions. These exciting new developments have led experimentalists to discuss theoretical models different from those previously proposed (e.g. Tessier-Lavigne (1995); Tessier-Lavigne & Goodman (1996); Nakamoto et al. (1996)). However, the mathematical consequences of these new models, for instance the precise gradient shapes they require, have not been analyzed. In this paper, it is shown that only certain combinations of gradients produce appropriate maps in these models, and that the validity of these models is therefore experimentally testable. 2 Recent experimental data Receptor tyrosine kinases are a diverse class of membrane-spanning proteins. The Eph subfamily is the largest, with over a dozen members. Since 1990, many of the genes encoding Eph receptors and their ligands have been shown to be expressed in the developing brain (reviewed in Friedman & O'Leary, 1996). Ephrins, the ligands for Eph receptors, are all membrane anchored. This is unlike the majority of receptor tyrosine kinase ligands, which are usually soluble. The ephrins can be separated into two distinct groups, A and B, based on the type of membrane anchor. These two groups bind to distinct sets of Eph receptors, which are thus also called A and B, though receptor-ligand interaction is promiscuous within each subgroup. Since many research groups discovered members of the Eph family independently, each member originally had several names. However, a new standardized notation was recently introduced (Eph Nomenclature Committee, 1997), which is used in this paper.
With regard to the mapping from the nasal-temporal axis of the retina to the anterior-posterior axis of the tectum (figure 1), recent studies have shown the following (see Friedman & O'Leary (1996) and Tessier-Lavigne & Goodman (1996) for reviews).

• EphA3 is expressed in an increasing nasal to temporal gradient in the retina (Cheng et al., 1995).
• EphA4 is expressed uniformly in the retina (Holash & Pasquale, 1995).
• Ephrin-A2, a ligand of both EphA3 and EphA4, is expressed in an increasing rostral to caudal gradient in the tectum (Cheng et al., 1995).
• Ephrin-A5, another ligand of EphA3 and EphA4, is also expressed in an increasing rostral to caudal gradient in the tectum, but at very low levels in the rostral half of the tectum (Drescher et al., 1995).

All of these interactions are repulsive. With regard to mapping along the complementary dimensions, EphB2 is expressed in a high ventral to low dorsal gradient in the retina, while its ligand ephrin-B1 is expressed in a high dorsal to low ventral gradient in the tectum (Braisted et al., 1997). Members of the Eph family are also beginning to be implicated in the formation of topographic projections between many other pairs of structures in the brain (Renping Zhou, personal communication). For instance, EphA5 has been found in an increasing lateral to medial gradient in the hippocampus, and ephrin-A2 in an increasing dorsal to ventral gradient in the septum, consistent with a role in establishing the topography of the map between hippocampus and septum (Gao et al., 1996). The current paper focuses just on the paradigm case of the nasal-temporal to anterior-posterior axis of the retinotectal mapping. Actual gradient shapes in this system have not yet been quantified. The analysis below will assume that certain gradients are linear, and derive the consequences for the other gradients.
Figure 1: This shows the mapping that is normally set up from the retina to the tectum. Distance along the nasal-temporal axis of the retina is referred to as x and receptor concentration as R(x). Distance along the rostral-caudal axis of the tectum is referred to as y and ligand concentration as L(y).

3 Mathematical models

Let R be the concentration of a receptor expressed on a growth cone or axon, and L the concentration of a ligand present in the tectum. Refer to position along the nasal-temporal axis of the retina as x, and position along the rostral-caudal axis of the tectum as y, so that R = R(x) and L = L(y) (see figure 1). Gierer (1983; 1987) discusses how topographic information could be signaled by interactions between ligands and receptors. A particular type of interaction, proposed by Nakamoto et al. (1996), is that the concentration of a "topographic signal", the signal that tells axons where to stop, is related to the concentration of receptor and ligand by the law of mass action:

G(x, y) = kR(x)L(y)    (1)

where G(x, y) is the concentration of topographic signal produced within an axon originating from position x in the retina when it is at position y in the tectum, and k is a constant. In the general case of multiple receptors and ligands, with promiscuous interactions between them, this equation becomes

G(x, y) = Σ_{i,j} k_{ij} R_i(x) L_j(y)    (2)

Whether each receptor-ligand interaction is attractive or repulsive is taken care of by the sign of the relevant k_{ij}. Two possibilities for how G(x, y) might produce a stop (or branch) signal in the growth cone (or axon) are that this occurs when (1) a "set point" is reached (discussed in, for example, Tessier-Lavigne & Goodman (1996); Nakamoto et al. (1996)), i.e. G(x, y) = c where c is a constant, or (2) attraction (or repulsion) reaches a local maximum (or minimum), i.e. ∂G(x, y)/∂y = 0 (Gierer, 1983; 1987).
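The mass-action signal of equation (2) is straightforward to compute; the sketch below (with made-up gradient shapes and interaction strengths, purely for illustration) shows the single-gradient case of equation (1) as a special case.

```python
import numpy as np

def topographic_signal(x, y, R, L, k):
    """Mass-action signal of equation (2): G(x, y) = sum_ij k_ij R_i(x) L_j(y).

    R and L are lists of gradient functions; k[i][j] is the signed interaction
    strength (negative for a repulsive interaction). The shapes used below are
    illustrative assumptions, not measured gradients.
    """
    return sum(k[i][j] * R[i](x) * L[j](y)
               for i in range(len(R)) for j in range(len(L)))

# One linear receptor gradient (cf. EphA3) and one linear ligand gradient,
# reducing (2) to equation (1) with a single repulsive constant k:
R = [lambda x: x]
L = [lambda y: y]
k = [[-1.0]]
print(topographic_signal(0.3, 0.7, R, L, k))   # -k[0][0] * 0.3 * 0.7, i.e. about -0.21
```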
For a smooth, uniform mapping, one of these conditions must hold along a line y ∝ x. For simplicity assume the constant of proportionality is unity.

3.1 Set point rule

For one gradient in the retina and one gradient in the tectum (i.e. equation 1), this requires that the ligand gradient be inversely proportional to the receptor gradient:

L(x) = c / R(x)

If R(x) is linear (cf. the gradient of EphA3 in the retina), the ligand concentration is required to go to infinity at one end of the tectum (see figure 2). One way round this is to assume R(x) does not go to zero at x = 0: the experimental data is not precise enough to decide on this point. However, the addition of a second receptor gradient gives

L(x) = c / (k_1 R_1(x) + k_2 R_2(x))

If R_1(x) is linear and R_2(x) is flat (cf. the gradient of EphA4 in the retina), then L(y) is no longer required to go to infinity (see figure 2). For two receptor and two ligand gradients many combinations of gradient shapes are possible. As a special case, consider R_1(x) linear, R_2(x) flat, and L_1(y) linear (cf. the gradient of Elf-1 in the tectum). Then L_2 is required to have the shape

L_2(y) = (ay² + by) / (dy + e)

where a, b, d, e are constants. This shape depends on the values of the constants, which depend on the relative strengths of binding between the different receptor and ligand combinations. An interesting case is where R_1 binds only to L_1 and R_2 binds only to L_2, i.e. there is no promiscuity. In this case we have L_2(y) ∝ y² (see figure 2). This function somewhat resembles the shape of the gradient that has been reported for ephrin-A5 in the tectum. However, this model requires one gradient to be attractive, whereas both are repulsive.

3.2 Local optimum rule

For one retinal and one tectal gradient we have the requirement

R(x) ∂L(y)/∂y = 0

This is not generally true along the line y = x, therefore there is no map. The same problem arises with two receptor gradients, whatever their shapes.
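Returning to the set point rule's no-promiscuity special case (R_1 linear, R_2 flat, L_1 linear, L_2 quadratic up to sign and offset), a quick numerical check with illustrative constants confirms that the set point G(x, y) = c is reached exactly along y = x; note that, as in the figure 2 caption, one of the two tectal gradients must run in the opposite direction to the other.

```python
import numpy as np

# Illustrative constants, not measured values.
c = 2.0
R1 = lambda x: x            # cf. EphA3: linear retinal gradient
R2 = lambda x: 1.0          # cf. EphA4: uniform retinal expression
L1 = lambda y: y            # linear tectal gradient
L2 = lambda y: c - y**2     # L2(y) ~ y^2 up to sign and offset

# No promiscuity: R1 interacts only with L1, R2 only with L2 (unit strengths).
G = lambda x, y: R1(x) * L1(y) + R2(x) * L2(y)

xs = np.linspace(0.0, 1.0, 11)
print([round(G(x, x), 10) for x in xs])   # constant along y = x: set point reached
```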
For two receptor and two ligand gradients many combinations of gradient shapes are possible. (Gierer (1983; 1987) investigated this case, but for a more complicated reaction law for generating the topographic signal than mass action.) For the special case introduced above, L_2(y) is required to have the shape

L_2(y) = ay + b log(dy + e) + f

where a, b, d, e, and f are constants as before. Considering the case of no promiscuity, we again obtain L_2(y) ∝ y², i.e. the same shape for L_2(y) as that specified by the set point rule.

Figure 2: Three combinations of gradient shapes that are sufficient to produce a smooth mapping with the mass action rule. In the left column the horizontal axis is position in the retina while the vertical axis is the concentration of receptor. In the right column the horizontal axis is position in the tectum while the vertical axis is the concentration of ligand. Models A and B work with the set point but not the local optimum rule, while model C works with both rules. For models B and C, one gradient is negative and the other positive.

4 Discussion

For both rules, there is a set of gradient shapes for the mass-action model that is consistent with the experimental data, except for the fact that they require one gradient in the tectum to be attractive. Both ephrin-A2 and ephrin-A5 have repulsive effects on their receptors expressed in the retina, which is clearly a problem for these models. The local optimum rule is more restrictive than the set point rule, since it requires at least two ligand gradients in the tectum. However, unlike the set point rule, it supplies directional information (in terms of an appropriate gradient for the topographic signal) when the axon is not at the optimal location.
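The no-promiscuity case of the local optimum rule can also be checked numerically (again with illustrative constants): with L_2(y) ∝ y², carrying the opposite sign to L_1, the derivative ∂G/∂y vanishes exactly along y = x, so each axon sits at a local optimum at its topographically correct position.

```python
import numpy as np

# Illustrative constants; same special case as before, no promiscuity.
R1 = lambda x: x
R2 = lambda x: 1.0
L1 = lambda y: y
L2 = lambda y: -0.5 * y**2    # L2(y) ~ y^2, opposite sign to L1

G = lambda x, y: R1(x) * L1(y) + R2(x) * L2(y)

def dG_dy(x, y, h=1e-6):
    # Central finite-difference estimate of dG/dy.
    return (G(x, y + h) - G(x, y - h)) / (2 * h)

for x in (0.2, 0.5, 0.9):
    print(round(dG_dy(x, x), 8))   # ~0 at y = x
```

Here dG/dy = x − y, so the optimum at y = x is a maximum (d²G/dy² = −1 < 0), providing the directional information the set point rule lacks.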
In conclusion, models based on the mass action assumption in conjunction with either a "set point" or "local optimum" rule can be true only if the relevant gradients satisfy the quantitative relationships described above. A different theoretical approach, which analyzes gradients in terms of their ability to guide axons over the maximum possible distance, also makes predictions about gradient shapes in the retinotectal system (Goodhill & Baier, 1998). Advances in experimental technique should enable a more quantitative analysis of the gradients in situ to be performed shortly, allowing these predictions to be tested. In addition, analysis of particular Eph and ephrin knockout mice (for instance ephrin-A5 (Yates et al., 1997)) is now being performed, which should shed light on the role of these gradients in normal map development.

Bibliography

Braisted, J.E., McLaughlin, T., Wang, H.U., Friedman, G.C., Anderson, D.J. & O'Leary, D.D.M. (1997). Graded and lamina-specific distributions of ligands of EphB receptor tyrosine kinases in the developing retinotectal system. Developmental Biology, 191, 14-28.
Cheng, H.J., Nakamoto, M., Bergemann, A.D. & Flanagan, J.G. (1995). Complementary gradients in expression and binding of Elf-1 and Mek4 in development of the topographic retinotectal projection map. Cell, 82, 371-381.
Drescher, U., Kremoser, C., Handwerker, C., Loschinger, J., Noda, M. & Bonhoeffer, F. (1995). In-vitro guidance of retinal ganglion-cell axons by RAGS, a 25 kDa tectal protein related to ligands for Eph receptor tyrosine kinases. Cell, 82, 359-370.
Eph Nomenclature Committee (1997). Unified nomenclature for Eph family receptors and their ligands, the ephrins. Cell, 90, 403-404.
Friedman, G.C. & O'Leary, D.D.M. (1996). Eph receptor tyrosine kinases and their ligands in neural development. Curr. Opin. Neurobiol., 6, 127-133.
Gierer, A. (1983). Model for the retinotectal projection. Proc. Roy. Soc. Lond. B, 218, 77-93.
Gierer, A. (1987).
Directional cues for growing axons forming the retinotectal projection. Development, 101, 479-489.
Gao, P.-P., Zhang, J.-H., Yokoyama, M., Racey, R., Dreyfus, C.F., Black, I.B. & Zhou, R. (1996). Regulation of topographic projection in the brain: Elf-1 in the hippocamposeptal system. Proc. Nat. Acad. Sci. USA, 93, 11161-11166.
Goodhill, G.J. (1992). Correlations, Competition and Optimality: Modelling the Development of Topography and Ocular Dominance. Cognitive Science Research Paper CSRP 226, University of Sussex. Available from www.giccs.georgetown.edu/~geoff
Goodhill, G.J. & Baier, H. (1998). Axon guidance: stretching gradients to the limit. Neural Computation, in press.
Holash, J.A. & Pasquale, E.B. (1995). Polarized expression of the receptor protein-tyrosine kinase Cek5 in the developing avian visual system. Developmental Biology, 172, 683-693.
Nakamoto, M., Cheng, H.J., Friedman, G.C., McLaughlin, T., Hansen, M.J., Yoon, C.H., O'Leary, D.D.M. & Flanagan, J.G. (1996). Topographically specific effects of ELF-1 on retinal axon guidance in vitro and retinal axon mapping in vivo. Cell, 86, 755-766.
Prestige, M.C. & Willshaw, D.J. (1975). On a role for competition in the formation of patterned neural connexions. Proc. R. Soc. Lond. B, 190, 77-98.
Sperry, R.W. (1963). Chemoaffinity in the orderly growth of nerve fiber patterns and connections. Proc. Nat. Acad. Sci. USA, 50, 703-710.
Tessier-Lavigne, M. (1995). Eph receptor tyrosine kinases, axon repulsion, and the development of topographic maps. Cell, 82, 345-348.
Tessier-Lavigne, M. & Goodman, C.S. (1996). The molecular biology of axon guidance. Science, 274, 1123-1133.
Udin, S.B. & Fawcett, J.W. (1988). Formation of topographic maps. Ann. Rev. Neurosci., 11, 289-327.
Whitelaw, V.A. & Cowan, J.D. (1981). Specificity and plasticity of retinotectal connections: a computational model. J. Neurosci., 1, 1369-1387.
Willshaw, D.J. & von der Malsburg, C. (1979).
A marker induction mechanism for the establishment of ordered neural mappings: its application to the retinotectal problem. Phil. Trans. Roy. Soc. B, 287, 203-243.
Yates, P.A., McLaughlin, T., Friedman, G.C., Frisen, J., Barbacid, M. & O'Leary, D.D.M. (1997). Retinal axon guidance defects in mice lacking ephrin-A5 (AL-1/RAGS). Soc. Neurosci. Abstracts, 23, 324.
1997
76
1,426
Multi-modular Associative Memory

Nir Levy, David Horn, School of Physics and Astronomy, Tel-Aviv University, Tel Aviv 69978, Israel
Eytan Ruppin, Departments of Computer Science & Physiology, Tel-Aviv University, Tel Aviv 69978, Israel

Abstract

Motivated by the findings of modular structure in the association cortex, we study a multi-modular model of associative memory that can successfully store memory patterns with different levels of activity. We show that the segregation of synaptic conductances into intra-modular linear and inter-modular nonlinear ones considerably enhances the network's memory retrieval performance. Compared with the conventional, single-module associative memory network, the multi-modular network has two main advantages: it is less susceptible to damage to columnar input, and its response is consistent with the cognitive data pertaining to category-specific impairment.

1 Introduction

Cortical modules were observed in the somatosensory and visual cortices a few decades ago. These modules differ in their structure and functioning but are likely to be an elementary unit of processing in the mammalian cortex. Within each module the neurons are interconnected. Input and output fibers from and to other cortical modules and subcortical areas connect to these neurons. More recently, modules were also found in the association cortex [1], where memory processes supposedly take place. Ignoring the modular structure of the cortex, most theoretical models of associative memory have treated single-module networks. This paper develops a novel multi-modular network that mimics the modular structure of the cortex. In this framework we investigate the computational rationale behind cortical multi-modular organization in the realm of memory processing. Does multi-modular structure lead to computational advantages? Naturally one may think that modules are necessary in order to accommodate memories of different coding levels.
We show in the next section that this is not the case, since one may accommodate such memories in a standard sparse-coding network. In fact, when trying to capture the same results in a modular network we run into problems, as shown in the third section: if both inter- and intra-modular synapses have linear characteristics, the network can sustain memory patterns with only a limited range of activity levels. The solution proposed here is to distinguish between intra-modular and inter-modular couplings, endowing the inter-modular ones with nonlinear characteristics. From a computational point of view, this leads to a modular network that has a large capacity for memories with different coding levels. The resulting network is particularly stable with regard to damage to modular inputs. From a cognitive perspective it is consistent with the data concerning category-specific impairment.

2 Homogeneous Network

We study an excitatory-inhibitory associative memory network [2], having N excitatory neurons. We assume that the network stores M_1 memory patterns \eta^\mu of sparse coding level p and M_2 patterns \xi^\nu with coding level f such that p < f << 1. The synaptic efficacy J_{ij} between the jth (presynaptic) neuron and the ith (postsynaptic) neuron is chosen in the Hebbian manner

J_{ij} = \frac{1}{Np} \sum_{\mu=1}^{M_1} \eta^\mu_i \eta^\mu_j + \frac{1}{Np} \sum_{\nu=1}^{M_2} \xi^\nu_i \xi^\nu_j .  (1)

The updating rule for the activity state V_i of the ith binary neuron is given by

V_i(t+1) = \Theta\left( h_i(t) - \theta \right),  (2)

where \Theta is the step function and \theta is the threshold. The local field, or membrane potential,

h_i(t) = h'_i(t) - \frac{\gamma}{p} Q(t),  (3)

includes the excitatory Hebbian coupling of all other excitatory neurons,

h'_i(t) = \sum_{j \neq i} J_{ij} V_j(t),  (4)

and global inhibition that is proportional to the total activity of the excitatory neurons,

Q(t) = \frac{1}{N} \sum_j V_j(t).  (5)
The overlap m(t) between the network activity and the memory patterns is defined for the two memory populations as

m^\nu_\xi(t) = \frac{1}{Nf} \sum_j \xi^\nu_j V_j(t).  (6)

The storage capacity \alpha = M/N of this network has two critical capacities: \alpha_c^\xi, above which the population of \xi^\nu patterns is unstable, and \alpha_c^\eta, above which the population of \eta^\mu patterns is unstable. We derived equations for the overlap and total activity of the two populations using mean-field analysis. Here we give the fixed-point equations for the case M_1 = M_2 = M/2 and \gamma = M_1 f^2 + M_2 p^2. The resulting equations are

m_\eta = \Phi\left( \frac{\theta - m_\eta}{\Delta} \right), \qquad Q = p\, m_\eta + \Phi\left( \frac{\theta}{\Delta} \right),  (7)

and (8), where \Delta is given by (9) and

\Phi(x) = \int_x^\infty \exp\left( -\frac{z^2}{2} \right) \frac{dz}{\sqrt{2\pi}}.  (10)

The validity of these analytical results was tested and verified in simulations. Next, we look for the critical capacities, \alpha_c^\eta and \alpha_c^\xi, at which the fixed-point equations become marginally stable. The results are shown in Figure 1.

Figure 1: (a) The critical capacity \alpha_c^\eta vs. f and p for f \geq p, \theta = 0.8 and N = 1000. (b) (\alpha_c^\eta - \alpha_c^\xi)/\alpha_c^\eta versus f and p for the same parameters as in (a).

Figure 1(a) shows \alpha_c^\eta vs. the coding levels f and p (f \geq p). Similar results were obtained for \alpha_c^\xi. As is evident, the critical capacities of both populations are smaller than the one observed in a homogeneous network in which f = p. One hence necessarily pays a price for the ability to store patterns with different levels of activity. Figure 1(b) plots the relative capacity difference (\alpha_c^\eta - \alpha_c^\xi)/\alpha_c^\eta vs. f and p. The function is non-negative, i.e., \alpha_c^\eta \geq \alpha_c^\xi for all f and p. Thus, low-activity memories are more stable than high-activity ones. Assuming that high activity codes more features [3], these results seem to be at odds with the view [3, 4] that memories that contain more semantic features, and therefore correspond to larger Hebbian cell assemblies, are more stable, such as concrete versus abstract words.
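As a concrete illustration of the storage rule and dynamics above, the following is a minimal NumPy sketch of the homogeneous network. The sizes, the threshold, and the inhibition strength γ are illustrative assumptions of mine, not tuned to reproduce the paper's capacity results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the paper analyzes N = 1000 with coding levels p < f << 1.
N, M1, M2, p, f = 500, 10, 10, 0.05, 0.1
theta, gamma = 0.5, 0.5   # threshold and inhibition strength (assumed values)

# Store M1 sparse patterns eta (level p) and M2 patterns xi (level f).
eta = (rng.random((M1, N)) < p).astype(float)
xi = (rng.random((M2, N)) < f).astype(float)

# Hebbian couplings, eq. (1): J_ij = (1/Np)(sum_mu eta_i eta_j + sum_nu xi_i xi_j).
J = (eta.T @ eta + xi.T @ xi) / (N * p)
np.fill_diagonal(J, 0.0)

def update(V):
    """Parallel update, eqs. (2)-(5): V_i <- Theta(h_i - theta),
    with h_i = sum_j J_ij V_j - (gamma/p) Q and Q the mean activity."""
    Q = V.mean()
    h = J @ V - (gamma / p) * Q
    return (h > theta).astype(float)

V = eta[0].copy()               # start at a stored low-activity pattern
for _ in range(5):
    V = update(V)
overlap = (eta[0] @ V) / (N * p)   # eq. (6)-style overlap with the pattern
```

Whether both pattern populations remain stable under such dynamics depends on p, f and the load α = M/N, which is exactly the trade-off quantified by the critical capacities above.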
The homogeneous network, in which the memories with high activity are more susceptible to damage, cannot account for these observations. In the next section we show how a modular network can store memories with different activity levels and account for this cognitive phenomenon.

3 Modular Network

We study a multi-modular excitatory-inhibitory associative memory network, storing M memory patterns in L modules of N neurons each. The memories are coded such that in every memory a variable number n of 1 to L modules is active. This number will be denoted as the modular coding. The coding level inside the modules is sparse and fixed, i.e., each modular Hebbian cell assembly consists of pN active neurons with p << 1. The synaptic efficacy J_{ij}^{lk} between the jth (presynaptic) neuron from the kth module and the ith (postsynaptic) neuron from the lth module is chosen in a Hebbian manner,

J_{ij}^{lk} = \frac{1}{Np} \sum_{\mu=1}^{M} \eta^\mu_{il} \eta^\mu_{jk},  (11)

where \eta^\mu_{il} are the stored memory patterns. The updating rule for the activity state V_{il} of the ith binary neuron in the lth module is given by

V_{il}(t+1) = S\left( h_{il}(t) - \theta_s \right),  (12)

where \theta_s is the threshold and S(x) is a stochastic sigmoid function, taking the value 1 with probability (1 + e^{-x})^{-1} and 0 otherwise. The neuron's local field, or membrane potential, has two components,

h_{il}(t) = h_{il}^{internal}(t) + h_{il}^{external}(t).  (13)

The internal field, h_{il}^{internal}(t), includes the contributions from all other excitatory neurons that are situated in the lth module, and inhibition that is proportional to the total modular activity of the excitatory neurons, i.e.,

h_{il}^{internal}(t) = \sum_{j \neq i} J_{ij}^{ll} V_{jl}(t) - \gamma_s Q^l(t),  (14)

where

Q^l(t) = \frac{1}{Np} \sum_j V_{jl}(t).  (15)

The external field component, h_{il}^{external}(t), includes the contributions from all other excitatory neurons that are situated outside the lth module, and inhibition that is proportional to the total network activity,

h_{il}^{external}(t) = g\left( \sum_{k \neq l} \sum_j J_{ij}^{lk} V_{jk}(t) - \gamma_d \sum_{k \neq l} Q^k(t) - \theta_d \right).  (16)
We allow here for the freedom of using more complicated behavior than the standard choice g(x) = x. In fact, as we will see, the linear case is problematic, since only memory storage with limited modular coding is possible. The retrieval quality at each trial is measured by the overlap function, defined by

m^\mu(t) = \frac{1}{Np\, n^\mu} \sum_{k=1}^{L} \sum_{i=1}^{N} \eta^\mu_{ik} V_{ik}(t),  (17)

where n^\mu is the modular coding of \eta^\mu.

In the simulations we constructed a network of L = 10 modules, where each module contains N = 500 neurons. The network stores M = 50 memory patterns randomly distributed over the modules. Five sets of ten memories each are defined. In each set the modular coding is distributed homogeneously between one to ten active modules. The sparse coding level within each module was set to p = 0.05. Every simulation experiment is composed of many trials. In each trial we use as initial condition a corrupted version of a stored memory pattern with an error rate of 5%, and check the network's retrieval after it converges to a stable state.

Figure 2: Quality of retrieval vs. memory modular coding. The dark shading represents the mean overlap achieved by a network with linear intra-modular and inter-modular synaptic couplings. The light shading represents the mean overlap of a network with sigmoidal inter-modular connections, which is perfect for all memory patterns. The simulation parameters were: L = 10, N = 500, M = 50, p = 0.05, \lambda = 0.7, \theta_d = 2 and \theta_s = 0.6.

We start with the standard choice g(x) = x, i.e., treating similarly the intra-modular and inter-modular synaptic couplings. The performance of this network is shown in Figure 2. As is evident, the network can store only a relatively narrow span of memories with high modular coding levels, and completely fails to retrieve memories with low modular coding levels (see also [5]).
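To make the internal/external field decomposition concrete, here is a hedged NumPy sketch that computes the modular local field once with a linear inter-modular coupling and once with a sigmoidal one. The sizes are toy values and the parameter settings are loose assumptions taken from the simulation description, not a reproduction of the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes; the paper simulates L = 10 modules of N = 500 neurons, p = 0.05.
L, N, M, p = 4, 100, 10, 0.05
lam, theta_d, gamma_s, gamma_d = 0.7, 2.0, 1.0, 1.0  # assumed parameter values

# Memories of shape (M, L, N): each module carries a sparse assembly.
eta = (rng.random((M, L, N)) < p).astype(float)

# Hebbian blocks, eq. (11): J[l, k] couples module k into module l.
J = np.einsum('mli,mkj->lkij', eta, eta) / (N * p)
for l in range(L):
    np.fill_diagonal(J[l, l], 0.0)   # no self-coupling inside a module

def local_field(V, g):
    """h_il = internal field (eq. 14) + g(inter-modular input), eq. (16)."""
    Q = V.mean(axis=1) / p           # per-module activity Q^l, eq. (15)
    h = np.empty_like(V)
    for l in range(L):
        internal = J[l, l] @ V[l] - gamma_s * Q[l]
        external = sum(J[l, k] @ V[k] - gamma_d * Q[k]
                       for k in range(L) if k != l)
        h[l] = internal + g(external - theta_d)
    return h

h_linear = local_field(eta[0], g=lambda x: x)                    # g(x) = x
h_sigmoid = local_field(eta[0], g=lambda x: lam / (1 + np.exp(-x)))
```

The sigmoidal g bounds the inter-modular drive, which is the mechanism the text credits with stabilizing memories over the whole range of modular codings.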
If, however, g is chosen to be a sigmoid function, a completely stable system is obtained, with all possible coding levels allowed. A sigmoid function on the external connections is hence very effective in enhancing the span of modular coding of memories that the network can sustain. The segregation of the synaptic inputs into internal and external connections has been motivated by observed patterns of cortical connectivity: axons forming excitatory intra-modular connections make synapses more proximal to the cell body than do inter-modular connections [6]. Dendrites, having active conductances, embody a rich repertoire of nonlinear electrical and chemical dynamics (see [7] for a review). In our model, the setting of g to be a sigmoid function crudely mimics these active conductance properties. We may go on and envisage the use of a nested set of sigmoidal dendritic transmission functions. This turns out to be useful when we test the effects of pathologic alterations on the retrieval of memories with different modular codings. The amazing result is that if the damage is done to modular inputs, the highly nonlinear transmission functions are very resistant to it. An example is shown in Fig. 3. Here we compare two nonlinear functions:

g_1 = \lambda\, \Theta\left( \sum_{k \neq l} \sum_j J_{ij}^{lk} V_{jk}(t) - \gamma_d \sum_{k \neq l} Q^k(t) - \theta_d \right),

g_2 = \lambda\, \Theta\left( \sum_{k \neq l} \Theta\left( \sum_j J_{ij}^{lk} V_{jk}(t) - \gamma_d Q^k(t) - \theta_k \right) - \theta_d \right).

The second one is the nested sigmoidal function mentioned above. Two types of input cues are compared: a correct \eta^\mu_{il} to one of the modules and no input to the rest, or partial input to all modules.

Figure 3: The performance of modular networks with different types of nonlinear inter-connections when partial input cues are given. The mean overlap is plotted vs. the overlap of the input cue. The solid line represents the performance of the network with g_2 and the dash-dot line represents g_1.
The left curve of g_2 corresponds to the case when full input is presented to only one module (out of the 5 that comprise a memory), while the right solid curve corresponds to partial input to all modules. The two g_1 curves describe partial input to all modules, but correspond to two different choices of the threshold parameter \theta_d, 1.5 (left) and 2 (right). Parameters are L = 5, N = 1000, p = 0.05, \lambda = 0.8, n = 5, \theta_s = 0.7 and \theta_k = 0.7.

As we can see, the nested nonlinearities enable retrieval even if only the input to a single module survives. One may therefore conclude that, under such conditions, patterns of high modular coding have a greater chance to be retrieved from an input to a single module and thus are more resilient to afferent damage. Adopting the assumption that different modules code for distinct semantic features, we now find that a multi-modular network with nonlinear dendritic transmission can account for the view of [3], that memories with more features are more robust.

4 Summary

We have studied the ability of homogeneous (single-module) and modular networks to store memory patterns with variable activity levels. Although homogeneous networks can store such memory patterns, the critical capacity of low-activity memories was shown to be larger than that of high-activity ones. This result seems to be inconsistent with the pertaining cognitive data concerning category-specific semantic impairment, which seem to imply that high-activity memories should be the more stable ones. Motivated by the findings of modular structure in the associative cortex, we developed a multi-modular model of associative memory. Adding the assumption that dendritic nonlinear processing operates on the signals of inter-modular synaptic connections, we obtained a network that has two important features: coexistence of memories with different modular codings, and retrieval of memories from cues presented to a small fraction of all modules.
The latter implies that memories encoded in many modules should be more resilient to damage in afferent connections; hence it is consistent with the conventional interpretation of the data on category-specific impairment.

References
[1] R. F. Hevner. More modules. TINS, 16(5):178, 1993.
[2] M. V. Tsodyks. Associative memory in neural networks with the Hebbian learning rule. Modern Physics Letters B, 3(7):555-560, 1989.
[3] G. E. Hinton and T. Shallice. Lesioning an attractor network: investigations of acquired dyslexia. Psychological Review, 98(1):74-95, 1991.
[4] G. V. Jones. Deep dyslexia, imageability, and ease of predication. Brain and Language, 24:1-19, 1985.
[5] R. Lauro Grotto, S. Reich, and M. A. Virasoro. The computational role of conscious processing in a model of semantic memory. In Proceedings of the IIAS Symposium on Cognition, Computation and Consciousness, 1994.
[6] P. A. Hetherington and L. M. Shapiro. Simulating Hebb cell assemblies: the necessity for partitioned dendritic trees and a post-not-pre LTD rule. Network, 4:135-153, 1993.
[7] R. Yuste and D. W. Tank. Dendritic integration in mammalian neurons a century after Cajal. Neuron, 16:701-716, 1996.
1997
77
1,427
Analysis of Drifting Dynamics with Neural Network Hidden Markov Models

J. Kohlmorgen, GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany
K.-R. Müller, GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany
K. Pawelzik, MPI f. Strömungsforschung, Bunsenstr. 10, 37073 Göttingen, Germany

Abstract

We present a method for the analysis of nonstationary time series with multiple operating modes. In particular, it is possible to detect and to model both a switching of the dynamics and a less abrupt, time-consuming drift from one mode to another. This is achieved in two steps. First, an unsupervised training method provides prediction experts for the inherent dynamical modes. Then, the trained experts are used in a hidden Markov model that allows to model drifts. An application to physiological wake/sleep data demonstrates that analysis and modeling of real-world time series can be improved when the drift paradigm is taken into account.

1 Introduction

Modeling dynamical systems through a measured time series is commonly done by reconstructing the state space with time-delay coordinates [10]. The prediction of the time series can then be accomplished by training neural networks [11]. If, however, a system operates in multiple modes and the dynamics is drifting or switching, standard approaches like multi-layer perceptrons are likely to fail to represent the underlying input-output relations. Moreover, they do not reveal the dynamical structure of the system. Time series from alternating dynamics of this type can originate from many kinds of systems in physics, biology and engineering. In [2, 6, 8], we have described a framework for time series from switching dynamics, in which an ensemble of neural network predictors specializes on the respective operating modes. We now extend the ability to describe a mode change not only as a switching but - if appropriate - also as a drift from one predictor to another.
Our results indicate that physiological signals contain drifting dynamics, which underlines the potential relevance of our method in time series analysis.

2 Detection of Drifts

The detection and analysis of drifts is performed in two steps. First, an unsupervised (hard-)segmentation method is applied. In this approach, an ensemble of competing prediction experts f_i, i = 1, ..., N, is trained on a given time series. The optimal choice of function approximators f_i depends on the specific application. In general, however, neural networks are a good choice for the prediction of time series [11]. In this paper, we use radial basis function (RBF) networks of the Moody-Darken type [5] as predictors, because they offer a fast and robust learning method. Under a gaussian assumption, the probability that a particular predictor i would have produced the observed data y is given by

p(y \mid i) = K e^{-\beta (y - f_i)^2},  (1)

where K is the normalization term for the gaussian distribution. If we assume that the experts are mutually exclusive and exhaustive, we have p(y) = \sum_i p(y \mid i) p(i). We further assume that the experts are - a priori - equally probable,

p(i) = 1/N.  (2)

In order to train the experts, we want to maximize the likelihood that the ensemble would have generated the time series. This can be done by a gradient method. For the derivative of the log-likelihood log L = log(p(y)) with respect to the output of an expert, we get

\frac{\partial \log L}{\partial f_i} \propto \left[ \frac{p(y \mid i)}{\sum_j p(y \mid j)} \right] (y - f_i).  (3)

This learning rule can be interpreted as a weighting of the learning rate of each expert by the expert's relative prediction performance. It is a special case of the Mixtures of Experts [1] learning rule, with the gating network being omitted. Note that according to Bayes' rule the term in brackets is the posterior probability that expert i is the correct choice for the given data y, i.e. p(i | y). Therefore, we can simply write

\frac{\partial \log L}{\partial f_i} \propto p(i \mid y)\, (y - f_i).  (4)

Furthermore, we imposed a low-pass filter on the prediction errors \varepsilon_i = (y - f_i)^2 and used deterministic annealing of \beta in the training process (see [2, 8] for details). We found that these modifications can be essential for a successful segmentation and prediction of time series from switching dynamics. As a prerequisite of this method, mode changes should occur infrequently, i.e. between two mode changes the dynamics should operate stationarily in one mode for a certain number of time steps. Applying this method to a time series yields a (hard) segmentation of the series into different operating modes, together with prediction experts for each mode. In case of a drift between two modes, the respective segment tends to be subdivided into several parts, because a single predictor is not able to handle the nonstationarity.

The second step takes the drift into account. A segmentation algorithm is applied that allows to model drifts between two stationary modes by combining the two respective predictors, f_i and f_j. The drift is modeled by a weighted superposition (5), where a(t) is a mixing coefficient and x_t = (x_t, x_{t-\tau}, ..., x_{t-(m-1)\tau})^T is the vector of time-delay coordinates of a (scalar) time series {x_t}. Furthermore, m is the embedding dimension and \tau is the delay parameter of the embedding. Note that the use of multivariate time series is straightforward.

3 A Hidden Markov Model for Drift Segmentation

In the following, we will set up a hidden Markov model (HMM) that allows us to use the Viterbi algorithm for the analysis of drifting dynamics. For a detailed description of HMMs, see [9] and the references therein.
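The posterior-weighted gradient rule is easy to state in code. The sketch below is my own toy illustration: it uses hypothetical linear predictors on delay coordinates in place of the paper's RBF networks, and a fixed β instead of the annealing schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N_exp linear experts f_i(x) = w_i . x on a noisy sine,
# embedded with delay coordinates (m = 2, tau = 1).
N_exp, beta, lr = 3, 10.0, 0.05

x = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
X = np.stack([x[1:-1], x[:-2]], axis=1)   # delay vectors (x_{t-1}, x_{t-2})
y = x[2:]                                  # one-step-ahead targets

W = rng.standard_normal((N_exp, 2)) * 0.1

for xt, yt in zip(X, y):
    preds = W @ xt                          # f_i(x_t) for each expert
    err2 = (yt - preds) ** 2
    # Posterior p(i|y) under the gaussian assumption p(y|i) ~ exp(-beta err^2)
    # and the uniform prior p(i) = 1/N.
    post = np.exp(-beta * err2)
    post /= post.sum()
    # Gradient rule: d log L / d f_i  ~  p(i|y) (y - f_i), pushed onto w_i.
    W += lr * (post * (yt - preds))[:, None] * xt[None, :]

best_err = np.min((X @ W.T - y[:, None]) ** 2, axis=1).mean()
```

With a single operating mode all experts compete for the same dynamics; on a switching series the posterior weighting is what lets each expert specialize on one mode.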
An HMM consists of (1) a set S of states, (2) a matrix A = {p_{ss'}} of state transition probabilities, (3) an observation probability distribution p(y | s) for each state s, which is a continuous density in our case, and (4) the initial state distribution \pi = {\pi_s}.

Let us first consider the construction of S, the set of states, which is the crucial point of this approach. Consider a set P of 'pure' states (dynamical modes). Each state s \in P represents one of the neural network predictors f_k(\cdot) trained in the first step. The predictor of each state performs the predictions autonomously. Next, consider a set M of mixture states, where each state s \in M represents a linear mixture of two nets f_i(\cdot) and f_j(\cdot). Then, given a state s \in S, S = P \cup M, the prediction of the overall system is performed by

g_s = f_{i(s)}, if s \in P;  g_s = a(s) f_{i(s)} + b(s) f_{j(s)}, if s \in M.  (6)

For each mixture state s \in M, the coefficients a(s) and b(s) have to be set together with the respective network indices i(s) and j(s). For computational feasibility, the number of mixture states has to be restricted. Our intention is to allow for drifts between any two network outputs of the previously trained ensemble. We choose a(s) and b(s) such that 0 < a(s) < 1 and b(s) = 1 - a(s). Moreover, a discrete set of a(s) values has to be defined. For simplicity, we use equally distant steps,

a_r = \frac{r}{R+1}, \quad r = 1, ..., R,  (7)

where R is the number of intermediate mixture levels. A given resolution R between any two out of N nets yields a total number of mixture states |M| = R \cdot N \cdot (N-1)/2. If, for example, the resolution R = 32 is used and we assume N = 8, then there are |M| = 896 mixture states, plus |P| = N = 8 pure states.

Next, the transition matrix A = {p_{ss'}} has to be chosen. It determines the transition probability for each pair of states. In principle, this matrix can be found using a training procedure, as e.g. the Baum-Welch method [9]. However, this is hardly feasible in this case, because of the immense size of the matrix.
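The construction of the state set S described above can be sketched directly; the counting below reproduces the |M| = R·N·(N−1)/2 figure for R = 32 and N = 8 (the tuple representation of a state is my own choice):

```python
from itertools import combinations

def build_states(N, R):
    """Enumerate pure states plus R mixture levels a_r = r/(R+1)
    between every unordered pair of predictors (b = 1 - a)."""
    pure = [('pure', i) for i in range(N)]
    mixed = [('mix', i, j, r / (R + 1))
             for i, j in combinations(range(N), 2)
             for r in range(1, R + 1)]
    return pure + mixed

states = build_states(N=8, R=32)
# |M| = 32 * 8 * 7 / 2 = 896 mixture states, plus |P| = 8 pure states.
print(len(states))  # 904
```

Every state carries everything needed to evaluate g_s, so the observation density p(y | s) of each state can be computed from the stored predictors alone.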
In the above example, the matrix A has (896 + 8)^2 = 817,216 elements that would have to be estimated. Such an exceeding number of free parameters is prohibitive for any adaptive method. Therefore, we use a fixed matrix. In this way, prior knowledge about the dynamical system can be incorporated. In our applications, either switches or smooth drifts between two nets are allowed, in such a way that a (monotonous) drift from one net to another is a priori as likely as a switch. All the other transitions are disabled by setting p_{ss'} = 0. Defining p(y | s) and \pi is straightforward. Following eq. (1) and eq. (2), we assume gaussian noise,

p(y \mid s) = K e^{-\beta (y - g_s)^2},  (8)

and equally probable initial states, \pi_s = |S|^{-1}. The Viterbi algorithm [9] can then be applied to the above stated HMM, without any further training of the HMM parameters. It yields the drift segmentation of a given time series, i.e. the most likely state sequence (the sequence of predictors or linear mixtures of two predictors) that could have generated the time series, in our case with the assumption that mode changes occur either as (smooth) drifts or as infrequent switches.

4 Drifting Mackey-Glass Dynamics

As an example, consider a high-dimensional chaotic system generated by the Mackey-Glass delay differential equation

\frac{dx(t)}{dt} = -0.1\, x(t) + \frac{0.2\, x(t - t_d)}{1 + x(t - t_d)^{10}}.  (9)

It was originally introduced as a model of blood cell regulation [4]. Two stationary operating modes, A and B, are established by using different delays, t_d = 17 and 23, respectively. After operating 100 time steps in mode A (with respect to a subsampling step size \tau = 6), the dynamics is drifting to mode B. The drift takes another 100 time steps. It is performed by mixing the equations for t_d = 17 and 23 during the integration of eq. (9). The mixture is generated according to eq. (5), using an exponential drift

a(t) = \exp\left( \frac{-4t}{100} \right), \quad t = 1, ..., 100.  (10)
Then, the system runs stationary in mode B for the following 100 time steps, whereupon it is switching back to mode A at t = 300, and the loop starts again (Fig. 1(a)). The competing experts algorithm is applied to the first 1500 data points of the generated time series, using an ensemble of 6 predictors f_i(x_t), i = 1, ..., 6. The input to each predictor is a vector x_t of time-delay coordinates of the scalar time series {x_t}. The embedding dimension is m = 6 and the delay parameter is \tau = 1 on the subsampled data. The RBF predictors consist of 40 basis functions each. After training, nets 2 and 3 have specialized on mode A, and nets 5 and 6 on mode B. This is depicted in the drift segmentation in Fig. 1(b). Moreover, the removal of four nets does not increase the root mean squared error (RMSE) of the prediction significantly (Fig. 1(c)), which correctly indicates that two predictors completely describe the dynamical system. The sequence of nets to be removed is obtained by repeatedly computing the RMSE of all n subsets with n - 1 nets each, and then selecting the subset with the lowest RMSE of the respective drift segmentation. The segmentation of the remaining nets, 2 and 5, nicely reproduces the evolution of the dynamics, as seen in Fig. 1(d).

Figure 1: (a) One 'loop' of the drifting Mackey-Glass time series (see text). (b) The resulting drift segmentation invokes four nets. The dotted line indicates the evolution of the mixing coefficient a(t) of the respective nets. For example, between t = 100 and 200 it denotes a drift from net 3 to net 5, which appears to be exponential.
(c) Increase of the prediction error when predictors are successively removed. (d) The two remaining predictors model the dynamics of the time series properly.

5 Wake/Sleep EEG

In [7], we analyzed physiological data recorded from the wake/sleep transition of a human. The objective was to provide an unsupervised method to detect the sleep onset and to give a detailed approximation of the signal dynamics with a high time resolution, ultimately to be used in diagnosis and treatment of sleep disorders. The application of the drift segmentation algorithm now yields a more detailed modeling of the dynamical system. As an example, Fig. 2 shows a comparison of the drift segmentation (R = 32) with a manual segmentation by a medical expert. The experimental data was measured during an afternoon nap of a healthy human. The computer-based analysis is performed on a single-channel EEG recording (occipital-1), whereas the manual segmentation was worked out using several physiological recordings (EEG, EOG, ECG, heart rate, blood pressure, respiration). The two-step drift segmentation method was applied using 8 RBF networks. However, as shown in Fig. 2, three nets (4, 6, and 8) are finally found by the Viterbi algorithm to be sufficient to represent the most likely state sequence.

Figure 2: Comparison of the drift segmentation obtained by the algorithm (upper plot) and a manual segmentation by a medical expert (middle). Only a single-channel EEG recording (occipital-1, time resolution 0.1s) of an afternoon nap is given for the algorithmic approach, while the manual segmentation is based on all available measurements. In the manual analysis, W1 and W2 indicate two wake-states (eyes open/closed), and S1 and S2 indicate sleep stages I and II, respectively. (n.a.: no assessment, art.: artifacts)

Before the sleep onset, at t \approx 3500 (350s) in the manual analysis, a mixture of two wake-state nets, 6 and 8, performs the best reconstruction of the EEG dynamics. Then, at t = 3000 (300s), there starts a drift to net 4, which apparently represents the dynamics of sleep stage II (S2). Interestingly, sleep stage I (S1) is not represented by a separate net but by a linear mixture of net 4 and net 6, with much more weight on net 4. Thus, the process of falling asleep is represented as a drift from the state of being awake directly to sleep stage II. During sleep there are several wake-up spikes indicated in the manual segmentation. At least the last four are also clearly indicated in the drift segmentation, as drifts back to net 6. Furthermore, the detection of the final arousal after t = 12000 (1200s) is in good accordance with the manual segmentation: there is a fast drift back to net 6 at that point. Considering the fact that our method is based only on the recording of a single EEG channel and does not use any medical expert knowledge, the drift algorithm is in remarkable accordance with the assessment of the medical expert. Moreover, it resolves the dynamical structure of the signal to more detail. For a more comprehensive analysis of wake/sleep data, we refer to our forthcoming publication [3].

6 Summary and Discussion

We presented a method for the unsupervised segmentation and identification of nonstationary drifting dynamics. It applies to time series where the dynamics is drifting or switching between different operating modes. An application to physiological wake/sleep data (EEG) demonstrates that drift can be found in natural systems. It is therefore important to consider this aspect of data description.
In the case of wake/sleep data, where the physiological state transitions are far from being understood, we can extract the shape of the dynamical drift from wake to sleep in an unsupervised manner. By applying this new data analysis method, we hope to gain more insights into the underlying physiological processes. Our future work is therefore dedicated to a comprehensive analysis of large sets of physiological wake/sleep recordings. We expect, however, that our method will also be applicable in many other fields.

Acknowledgements: We acknowledge support of the DFG (grant Ja379/51) and we would like to thank J. Rittweger for the EEG data and for fruitful discussions.

References
[1] Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E. (1991). Adaptive Mixtures of Local Experts. Neural Computation 3, 79-87.
[2] Kohlmorgen, J., Müller, K.-R., Pawelzik, K. (1995). Improving short-term prediction with competing experts. ICANN'95, EC2 & Cie, Paris, 2:215-220.
[3] Kohlmorgen, J., Müller, K.-R., Rittweger, J., Pawelzik, K., in preparation.
[4] Mackey, M., Glass, L. (1977). Oscillation and Chaos in a Physiological Control System. Science 197, 287.
[5] Moody, J., Darken, C. (1989). Fast Learning in Networks of Locally-Tuned Processing Units. Neural Computation 1, 281-294.
[6] Müller, K.-R., Kohlmorgen, J., Pawelzik, K. (1995). Analysis of Switching Dynamics with Competing Neural Networks. IEICE Trans. on Fundamentals of Electronics, Communications and Computer Sciences, E78-A, No. 10, 1306-1315.
[7] Müller, K.-R., Kohlmorgen, J., Rittweger, J., Pawelzik, K. (1995). Analysing Physiological Data from the Wake-Sleep State Transition with Competing Predictors. NOLTA'95: Symposium on Nonlinear Theory and its Applications, 223-226.
[8] Pawelzik, K., Kohlmorgen, J., Müller, K.-R. (1996). Annealed Competition of Experts for a Segmentation and Classification of Switching Dynamics. Neural Computation, 8:2, 342-358.
[9] Rabiner, L.R. (1988).
A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In Readings in Speech Recognition, ed. A. Waibel, K. Lee, 267-296. San Mateo: Morgan Kaufmann, 1990. [10] Takens, F. (1981). Detecting Strange Attractors in Turbulence. In: Rand, D., Young, L.-S., (Eds.), Dynamical Systems and Turbulence, Springer Lecture Notes in Mathematics, 898, 366. [11] Weigend, A.S., Gershenfeld, N.A. (Eds.) (1994). Time Series Prediction: Forecasting the Future and Understanding the Past, Addison-Wesley.
1997
The Observer-Observation Dilemma in Neuro-Forecasting

Hans Georg Zimmermann, Siemens AG, Corporate Technology, D-81730 München, Germany, Georg.Zimmermann@mchp.siemens.de
Ralph Neuneier, Siemens AG, Corporate Technology, D-81730 München, Germany, Ralph.Neuneier@mchp.siemens.de

Abstract

We explain how the training data can be separated into clean information and unexplainable noise. Analogously to the data, the neural network is separated into a time invariant structure used for forecasting, and a noisy part. We propose a unified theory connecting the optimization algorithms for cleaning and learning together with algorithms that control the data noise and the parameter noise. The combined algorithm allows a data-driven local control of the liability of the network parameters and therefore an improvement in generalization. The approach proves to be very useful at the task of forecasting the German bond market.

1 Introduction: The Observer-Observation Dilemma

Human beings believe that they are able to solve a psychological version of the Observer-Observation Dilemma. On the one hand, they use their observations to constitute an understanding of the laws of the world; on the other hand, they use this understanding to evaluate the correctness of the incoming pieces of information. Of course, as everybody knows, human beings are not free from making mistakes in this psychological dilemma. We encounter a similar situation when we try to build a mathematical model using data. Learning relationships from the data is only one part of the model building process. Overrating this part often leads to the phenomenon of overfitting in many applications (especially in economic forecasting). In practice, evaluation of the data is often done by external knowledge, i.e. by optimizing the model under constraints of smoothness and regularization [7].
If we assume that our model summarizes the best knowledge of the system to be identified, why should we not use the model itself to evaluate the correctness of the data? One approach to do this is called Clearning [11]. In this paper, we present a unified approach to the interaction between the data and a neural network (see also [8]). It includes a new symmetric view on the optimization algorithms, here learning and cleaning, and their control by parameter and data noise.

2 Learning

2.1 Learning reviewed

We are especially interested in using the output of a neural network y(x, w), given the input pattern x and the weight vector w, as a forecast of financial time series. In the context of neural networks, learning normally means the minimization of an error function E by changing the weight vector w in order to achieve good generalization performance. Typical error functions can be written as a sum of individual terms over all T training patterns, E = (1/T) sum_{t=1}^T E_t. For example, the maximum-likelihood principle leads to

    E_t = 1/2 (y(x_t, w) - y_t^d)^2,    (1)

with y_t^d as the given target pattern. If the error function is a nonlinear function of the parameters, learning has to be done iteratively by a search through the weight space, changing the weights from step tau to tau+1 according to

    w^(tau+1) = w^(tau) + Delta w^(tau).    (2)

There are several algorithms for choosing the weight increment Delta w^(tau), the simplest being gradient descent. After each presentation of an input pattern, the gradient g_t := grad E_t |_w of the error function with respect to the weights is computed.
In the batch version of gradient descent the increments are based on all training patterns,

    Delta w^(tau) = -eta g = -eta (1/T) sum_{t=1}^T g_t,    (3)

whereas the pattern-by-pattern version changes the weights after each presentation of a pattern x_t (often randomly chosen from the training set):

    Delta w^(tau) = -eta g_t.    (4)

The learning rate eta is typically held constant or follows an annealing procedure during training to assure convergence. Our experiments have shown that small batches are most useful, especially in combination with Vario-Eta, a stochastic approximation of a quasi-Newton method [3]:

    Delta w^(tau) = -eta · g_bar / sqrt((1/N) sum_{t=1}^N (g_t - g_bar)^2),    (5)

applied component-wise, with g_bar = (1/N) sum_{t=1}^N g_t and N ≈ 20. Learning pattern-by-pattern or with small batches can be viewed as a stochastic search process because we can write the weight increments as

    Delta w^(tau) = -eta (g + ((1/N) sum_{t=1}^N g_t - g)).    (6)

These increments consist of the term g, giving a drift to a local minimum, and of the noise term ((1/N) sum_{t=1}^N g_t - g) disturbing this drift.

2.2 Parameter Noise as an Implicit Penalty Function

Consider the Taylor expansion of E(w) around some point w in the weight space,

    E(w + Delta w) = E(w) + grad E · Delta w + 1/2 Delta w' H Delta w,    (7)

with H as the Hessian of the error function. Assume a given sequence of T disturbance vectors Delta w_t, whose elements Delta w_t(i) are identically, independently distributed (i.i.d.) with zero mean and variance (row-)vector var(Delta w(i)), to approximate the expectation <E(w)> by

    <E(w)> ≈ (1/T) sum_t E(w + Delta w_t) = E(w) + 1/2 sum_i var(Delta w(i)) H_ii,    (8)

with H_ii as the diagonal elements of H. In eq. 8, noise on the weights acts implicitly as a penalty term to the error function given by the second derivatives H_ii. The noise variances var(Delta w(i)) operate as penalty parameters. As a result, flat minima solutions, which may be important for achieving good generalization performance, are favored [5]. Learning pattern-by-pattern introduces such noise in the training procedure, i.e., Delta w_t = -eta · g_t. Close to convergence, we can assume that g_t is i.i.d.
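The Vario-Eta increment of eq. 5 can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function name and the epsilon safeguard against zero variance are our own additions.

```python
import numpy as np

def vario_eta_update(grads, eta=0.05, eps=1e-8):
    # grads: array of shape (N patterns, K weights) holding the
    # per-pattern gradients g_t of a small batch (N ~ 20 in the paper).
    # The mean gradient is rescaled component-wise by the standard
    # deviation of the per-pattern gradients, as in eq. 5.
    g_bar = grads.mean(axis=0)
    sigma = np.sqrt(((grads - g_bar) ** 2).mean(axis=0)) + eps
    return -eta * g_bar / sigma
```

Components whose gradients fluctuate strongly thus receive smaller steps, which is the stochastic quasi-Newton flavor of the method.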
with zero mean and variance vector var(g_i), so that the expected value can be approximated by

    <E(w)> ≈ E(w) + (eta^2/2) sum_i var(g_i) d^2E/dw_i^2.    (9)

This type of learning introduces a local penalty parameter var(Delta w(i)), characterizing the stability of the weights w = [w_i]_{i=1,...,k}. The noise effects due to Vario-Eta learning, Delta w_t(i) = -eta g_{ti} / sigma_i, lead to

    <E(w)> ≈ E(w) + (eta^2/2) sum_i d^2E/dw_i^2.    (10)

By canceling the term var(g_i) in eq. 9, Vario-Eta achieves a simplified uniform penalty parameter, which depends only on the learning rate eta. Whereas pattern-by-pattern learning is a slow algorithm with a locally adjusted penalty control, Vario-Eta is fast, but only at the cost of a simplified uniform penalty term. We summarize this section by giving some advice on how to find flat minima solutions:

• Train the network to a minimal training error solution with Vario-Eta, which is a stochastic approximation of a Newton method and therefore very fast.
• Add a final phase of pattern-by-pattern learning with uniform learning rate to fine-tune the local curvature structure by the local penalty parameters (eq. 9).
• Use a learning rate eta as high as possible to keep the penalty effective. The training error may vary a bit, but the inclusion of the implicit penalty is more important.

3 Cleaning

3.1 Cleaning reviewed

When training neural networks, one typically assumes that the data is noise-free and one forces the network to fit the data exactly. Even the control procedures to minimize overfitting effects (e.g., pruning) consider the inputs as exact values. However, this assumption is often violated, especially in the field of financial analysis, and we are taught by the phenomenon of overfitting not to follow the data exactly. Clearning, as a combination of cleaning and learning, has been introduced in the paper of [11]. The motivation was to minimize overfitting effects by considering the input data as corrupted by noise whose distribution also has to be learned.
The cleaning error function for the pattern t is given by the sum of two terms,

    E_t^{y,x} = 1/2 [(y_t - y_t^d)^2 + (x_t - x_t^d)^2] = E_t^y + E_t^x,    (11)

with x_t^d, y_t^d as the observed data point. In pattern-by-pattern learning, the network output y(x_t, w) determines the weight adaptation as usual,

    Delta w^(tau) = -eta (y_t - y_t^d) dy(x_t, w)/dw.    (12)

We also have to memorize correction vectors Delta x_t for all input data of the training set to present the cleaned input x_t to the network,

    x_t = x_t^d + Delta x_t.    (13)

The update rule for the corrections, initialized with Delta x_t^(0) = 0, can be described as

    Delta x_t^(tau+1) = (1 - eta) Delta x_t^(tau) - eta (y_t - y_t^d) dy(x_t, w)/dx.    (14)

All the necessary quantities, i.e. (y_t - y_t^d) dy(x_t, w)/dx, are computed by typical backpropagation algorithms anyway. We experienced that the algorithms work well if the same learning rate eta is used for both the weight and cleaning updates. For regression, cleaning forces the acceptance of a small error in x, which can in turn decrease the error in y dramatically, especially in the case of outliers. Successful applications of Clearning are reported in [11] and [9]. Although the network may learn an optimal model for the cleaned input data, there is no easy way to work with cleaned data on the test set. As a consequence, the model is evaluated on a test set with a different noise characteristic compared to the training set. We will later propose a combination of learning with noise and cleaning to work around this serious disadvantage.

3.2 Data Noise reviewed

Artificial noise on the input data is often used during training because it creates an infinite number of training examples and expands the data to empty parts of the input space. As a result, the tendency of learning by heart may be limited because smoother regression functions are produced. Now, we consider again the Taylor expansion, this time applied to E(x) around some point x in the input space.
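The correction update of eq. 14 can be illustrated for a toy linear "network" y(x, w) = w · x, for which the input gradient is available in closed form. This is a hedged sketch: the function name is ours, and a real application would obtain the gradient from backpropagation rather than analytically.

```python
import numpy as np

def cleaning_step(w, x_obs, dx, y_target, eta=0.1):
    # One cleaning update (eq. 14) for the linear model y = w @ x.
    x = x_obs + dx                    # cleaned input, eq. 13
    y = w @ x
    grad_x = (y - y_target) * w       # dE_t^y / dx for the linear model
    return (1.0 - eta) * dx - eta * grad_x
```

Iterating this alongside the weight update lets the input corrections absorb outliers instead of forcing the weights to fit them.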
The expected value <E(x)> is approximated by

    <E(x)> ≈ (1/T) sum_t E(x + Delta x_t) = E(x) + 1/2 sum_j var(Delta x(j)) H_jj,    (15)

with H_jj as the diagonal elements of the Hessian H_xx of the error function with respect to the inputs x. Again, in eq. 15, noise on the inputs acts implicitly as a penalty term to the error function, with the noise variances var(Delta x(j)) operating as penalty parameters. Noise on the inputs improves generalization behavior by favoring smooth models [1]. The noise levels can be set to a constant value, e.g. given by a priori knowledge, or adaptively as described now. We concentrate on a uniform or normal (Gaussian) noise distribution. Then, the adaptive noise level xi_j is estimated for each input j individually. Suppressing pattern indices, we define the noise levels xi_j or xi_j^2 as the average residual errors:

    uniform residual error:   xi_j = (1/T) sum_t |dE_t^y/dx_j|,    (16)
    Gaussian residual error:  xi_j^2 = (1/T) sum_t (dE_t^y/dx_j)^2.    (17)

Actual implementations use stochastic approximation, e.g. for the uniform residual error

    xi_j^(tau+1) = (1 - 1/T) xi_j^(tau) + (1/T) |dE^y/dx_j|.    (18)

The different residual error levels can be interpreted as follows: a small level xi_j may indicate an unimportant input j or a perfect fit of the network concerning this input j. In both cases, a small noise level is appropriate. On the other hand, a high value of xi_j for an input j indicates an important but imperfectly fitted input. In this case high noise levels are advisable. High values of xi_j lead to a stiffer regression model and may therefore increase the generalization performance of the network.

3.3 Cleaning with Noise

Typically, training with noisy inputs takes a data point and adds a random variable drawn from a fixed or adaptive distribution. This new data point x_t is used as an input to the network. If we assume that the data is corrupted by outliers and other influences, it is preferable to add the noise term to the cleaned input.
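The stochastic approximation of the uniform residual error (eq. 18) amounts to a running average of the input-gradient magnitudes. A minimal sketch (illustrative function name; the averaging weight 1/t is passed in explicitly):

```python
import numpy as np

def update_noise_level(xi, grad_x, t):
    # Eq. 18: xi_j <- (1 - 1/t) xi_j + (1/t) |dE^y/dx_j|,
    # a running estimate of the per-input residual error.
    return (1.0 - 1.0 / t) * xi + (1.0 / t) * np.abs(grad_x)
```

Inputs whose error gradients stay large accumulate high noise levels, and these are exactly the inputs that receive strong smoothing.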
For the case of Gaussian noise the resulting new input is

    x_t = x_t^d + Delta x_t + xi phi,    (19)

with phi drawn from the normal distribution. The cleaning of the data leads to a corrected mean of the data and therefore to a more symmetric noise distribution, which also covers the observed data x_t^d. We propose a variant which allows more complicated noise distributions:

    x_t = x_t^d + Delta x_t + Delta x_k,    (20)

with k as a random number drawn from the indices of the correction vectors (Delta x_t)_{t=1,...,T}. In this way we use a possibly asymmetric and/or dependent noise distribution, which still covers the observed data x_t^d by definition of the algorithm. One might wonder why we disturb the cleaned input x_t^d + Delta x_t with an additional noise term Delta x_k. The reason is that we want to benefit from representing the whole input distribution to the network instead of only using one particular realization.

4 A Unifying Approach

4.1 The Separation of Structure and Noise

In the previous sections we explained how the data can be separated into clean information and unexplainable noise. Analogously, the neural network is described as a time invariant structure (otherwise no forecasting would be possible) and a noisy part.

    data -> cleaned data + time invariant data noise
    neural network -> time invariant parameters + parameter noise

We propose to use cleaning and adaptive noise to separate the data, and to use learning and stochastic search to separate the structure of the neural network.

    data -> cleaning(neural network) + adaptive noise(neural network)
    neural network -> learning(data) + stochastic search(data)

The algorithms analyzing the data depend directly on the network, whereas the methods searching for structure are directly related to the data. It should be clear that the model building process should combine both aspects in an alternating or simultaneous manner. The interaction of the algorithms concerning data analysis and network structure enables the realization of the concept of the Observer-Observation Dilemma.
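The resampling variant of eq. 20 can be sketched as follows; the function name and the seeded generator are illustrative only, and `dx_all` stands for the memorized correction vectors (Delta x_t)_{t=1,...,T}.

```python
import numpy as np

def noisy_cleaned_input(x_obs, dx_all, t, rng):
    # Eq. 20: present the cleaned input plus a correction vector
    # resampled from a randomly chosen pattern k, so the (possibly
    # asymmetric, dependent) empirical noise distribution is reused.
    k = rng.integers(len(dx_all))
    return x_obs[t] + dx_all[t] + dx_all[k]
```

Because Delta x_k is drawn from the empirical corrections rather than a fitted Gaussian, the noise can be asymmetric and correlated across input components.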
The aim of the unified approach can be described, assuming here a Gaussian noise model for illustration, as the minimization of the error due to both the structure and the data:

    (1/(2T)) sum_{t=1}^T [(y_t - y_t^d)^2 + (x_t - x_t^d)^2] -> min_{w,x}.    (21)

Combining the algorithms and approximating the cumulative gradient g by the moving average g_bar^(tau+1) = (1 - alpha) g_bar^(tau) + alpha (y_t - y_t^d) dy/dw, we obtain for the weight update

    Delta w^(tau) = -eta g_bar^(tau) - eta (g_t - g_bar^(tau)),    (22)
                     [learning]        [noise]

with the analogous decomposition into cleaning and noise holding for the data updates (eqs. 14 and 20). The cleaning of the data by the network computes an individual correction term for each training pattern. The adaptive noise procedure according to eq. 20 generates a potentially asymmetric and dependent noise distribution which also covers the observed data. The implied curvature penalty, whose strength depends on the individual liability of the input variables, can improve the generalization performance of the neural network. The learning of the structure searches for time invariant parameters characterized by (1/T) sum_t g_t = 0. The parameter noise supports this exploration as a stochastic search to find better "global" minima. Additionally, the generalization performance may be further improved by the implied curvature penalty depending on the local liability of the parameters. Note that, although the description of the weight updates collapses to the simple form of eq. 4, we preferred the formula above to emphasize the analogy between the mechanisms which handle the data and the structure. In searching for an optimal combination of data and parameters, the noise of both parts is not a disastrous failure to build a perfect model but an important element to control the interaction of data and structure.

4.2 Pruning

The neural network topology represents only a hypothesis of the true underlying class of functions. Due to possible misspecification, we may have defects of the parameter noise distribution.
Pruning algorithms are not only a way to limit the memory of the network; they also appear useful to correct the noise distribution in different ways. Stochastic Pruning [2] is basically a t-test on the weights w: weights with low test values, measured by the size of the weight divided by the standard deviation of its fluctuations, constitute candidates for pruning because of their low liability. By this, we get a stabilization of the learning against resampling of the training data. A further weight pruning method is EBD, Early-Brain-Damage [10], which is based on the often cited OBD pruning method of [6]. In contrast to OBD, EBD allows its application before the training has reached a local minimum. One of the advantages of EBD over OBD is the possibility to perform the testing while being slightly away from a local minimum. In our training procedure we propose to use noise even in the final part of learning, and therefore we are only near a local minimum. Furthermore, EBD is also able to revive already pruned weights. Similar to Stochastic Pruning, EBD favors weights with a low rate of fluctuation. If a weight is pushed around by high noise, the implicit curvature penalty favors a flat minimum around this weight, which leads to its elimination by EBD.

5 Experiments

In a research project sponsored by the European Community we are applying the proposed approach to estimate the returns of 3 financial markets for each of the G7 countries, subsequently using these estimations in an asset allocation scheme to create a Markowitz-optimal portfolio [4]. This paper reports the 6-month forecasts of the German bond rate, which is one of the more difficult tasks due to the reunification of Germany and the GDR. The inputs consist of 39 variables obtained by preprocessing 16 relevant financial time series. The training set covers the time from April 1974 to December 1991; the test set runs from January 1992 to May 1996.
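The t-test underlying Stochastic Pruning can be sketched from a record of a weight's values over training steps. This is an illustrative reading of the criterion, not the authors' code; `w_trace` and the small epsilon guard are our own names.

```python
import numpy as np

def pruning_test_values(w_trace, eps=1e-12):
    # w_trace: array of shape (steps, weights) recording each weight's
    # trajectory under the stochastic training noise. The test value is
    # |mean weight| / std of its fluctuations; low values mark weights
    # with low liability, i.e. pruning candidates.
    return np.abs(w_trace.mean(axis=0)) / (w_trace.std(axis=0) + eps)
```

A weight that merely jitters around zero scores near zero and is cut, while a stable nonzero weight scores high and survives resampling of the training data.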
The network architecture consists of one hidden layer (20 neurons, tanh transfer function) and one linear output. First, we trained the neural network until convergence with pattern-by-pattern learning using a small batch size of 20 patterns (classical approach). Then, we trained the network using the unified approach as described in section 4.1 with pattern-by-pattern learning. We compare the resulting predictions of the networks on the basis of four performance measures (see table). First, the hit rate counts how often the sign of the return of the bond has been correctly predicted. As to the other measures, the step from the forecast model to a trading system is here kept very simple: if the output is positive, we buy shares of the bond, otherwise we sell them. The realized potential is the ratio of the return to the maximum possible return over the test (training) set. The annualized return is the average yearly profit of the trading system. Our approach turns out to be superior: we almost doubled the annualized return from 4.5% to 8.5% on the test set.

                          our approach    classical approach
    hit rate              81% (96%)       66% (93%)
    realized potential    75% (100%)      44% (96%)
    annualized return     8.5% (11.2%)    4.5% (10.1%)

(Test set figures, with training set figures in parentheses.) The figure compares the accumulated return of the two approaches on the test set. The unified approach not only shows a higher profitability, but also has a far smaller maximal drawdown.

References
[1] Christopher M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, 1994.
[2] W. Finnoff, F. Hergert, and H. G. Zimmermann. Improving generalization performance by nonconvergent model selection methods. In Proc. of ICANN-92, 1992.
[3] W. Finnoff, F. Hergert, and H. G. Zimmermann. Neuronale Lernverfahren mit variabler Schrittweite. Tech. report, Siemens AG, 1993.
[4] P. Herve, P. Naim, and H. G. Zimmermann. Advanced Adaptive Architectures for Asset Allocation: A Trial Application. In Forecasting Financial Markets, 1996.
[5] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1-42, 1997.
[6] Y. le Cun, J. S. Denker, and S. A. Solla. Optimal brain damage. NIPS*89, 1990.
[7] J. E. Moody and T. S. Rognvaldsson. Smoothing regularizers for projective basis function networks. NIPS 9, 1997.
[8] R. Neuneier and H. G. Zimmermann. How to Train Neural Networks. In Tricks of the Trade: How to make algorithms really work. Springer Verlag, Berlin, 1998.
[9] B. Tang, W. Hsieh, and F. Tangang. Clearning neural networks with continuity constraints for prediction of noisy time series. ICONIP '96, 1996.
[10] V. Tresp, R. Neuneier, and H. G. Zimmermann. Early brain damage. NIPS 9, 1997.
[11] A. S. Weigend, H. G. Zimmermann, and R. Neuneier. Clearning. Neural Networks in Financial Engineering (NNCM95), 1995.
1997
Learning Continuous Attractors in Recurrent Networks

H. Sebastian Seung, Bell Labs, Lucent Technologies, Murray Hill, NJ 07974, seung@bell-labs.com

Abstract

One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn. From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised learning in terms of regression rather than density estimation.

A classic approach to invariant object recognition is to use a recurrent neural network as an associative memory[1]. In spite of the intuitive appeal and biological plausibility of this approach, it has largely been abandoned in practical applications. This paper introduces two new concepts that could help resurrect it: object representation by continuous attractors, and learning attractors by pattern completion. In most models of associative memory, memories are stored as attractive fixed points at discrete locations in state space[1]. Discrete attractors may not be appropriate for patterns with continuous variability, like the images of a three-dimensional object from different viewpoints. When the instantiations of an object lie on a continuous pattern manifold, it is more appropriate to represent objects by attractive manifolds of fixed points, or continuous attractors. To make this idea practical, it is important to find methods for learning attractors from examples.
A naive method is to train the network to retain examples in short-term memory. This method is deficient because it does not prevent the network from storing spurious fixed points that are unrelated to the examples. A superior method is to train the network to restore examples that have been corrupted, so that it learns to complete patterns by filling in missing information.

Figure 1: Representing objects by dynamical attractors. (a) Discrete attractors. (b) Continuous attractors.

Learning by pattern completion can be understood from both dynamical and statistical perspectives. Since the completion task requires a large basin of attraction around each memory, spurious fixed points are suppressed. The completion task also leads to a formulation of unsupervised learning as the regression problem of estimating functional dependences between variables in the sensory input. Density estimation, rather than regression, is the dominant formulation of unsupervised learning in stochastic neural networks like the Boltzmann machine[2]. Density estimation has the virtue of suppressing spurious fixed points automatically, but it also has the serious drawback of being intractable for many network architectures. Regression is a more tractable, but nonetheless powerful, alternative to density estimation. In a number of recent neurobiological models, continuous attractors have been used to represent continuous quantities like eye position[3], direction of reaching[4], head direction[5], and orientation of a visual stimulus[6]. Along with these models, the present work is part of a new paradigm for neural computation based on continuous attractors.

1 DISCRETE VERSUS CONTINUOUS ATTRACTORS

Figure 1 depicts two ways of representing objects as attractors of a recurrent neural network dynamics. The standard way is to represent each object by an attractive fixed point[1], as in Figure 1a.
Recall of a memory is triggered by a sensory input, which sets the initial conditions. The network dynamics converges to a fixed point, thus retrieving a memory. If different instantiations of one object lie in the same basin of attraction, they all trigger retrieval of the same memory, resulting in the many-to-one map required for invariant recognition. In Figure 1b, each object is represented by a continuous manifold of fixed points. A one-dimensional manifold is shown, but generally the attractor should be multidimensional, and is parametrized by the instantiation or pose parameters of the object. For example, in visual object recognition, the coordinates would include the viewpoint from which the object is seen. The reader should be cautioned that the term "continuous attractor" is an idealization and should not be taken too literally. In real networks, a continuous attractor is only approximated by a manifold in state space along which drift is very slow. This is illustrated by a simple example, a descent dynamics on a trough-shaped energy landscape[3]. If the bottom of the trough is perfectly level, it is a line of fixed points and an ideal continuous attractor of the dynamics. However, any slight imperfections cause slow drift along the line. This sort of approximate continuous attractor is what is found in real networks, including those trained by the learning algorithms to be discussed below.

Figure 2: (a) Recurrent network. (b) Feedforward autoencoder.

2 DYNAMICS OF MEMORY RETRIEVAL

The preceding discussion has motivated the idea of representing pattern manifolds by continuous attractors. This idea will be further developed with the simple network shown in Figure 2a, which consists of a visible layer x1 in R^{n1} and a hidden layer x2 in R^{n2}. The architecture is recurrent, containing both bottom-up connections (the n2 x n1 matrix W21) and top-down connections (the n1 x n2 matrix W12).
The vectors b1 and b2 represent the biases of the neurons. The neurons have a rectification nonlinearity [x]+ = max{x, 0}, which acts on vectors component by component. There are many variants of recurrent network dynamics; a convenient choice is the following discrete-time version, in which updates of the hidden and visible layers alternate in time. After the visible layer is initialized with the input vector x1(0), the dynamics evolves as

    x2(t) = [b2 + W21 x1(t-1)]+ ,
    x1(t) = [b1 + W12 x2(t)]+ .    (1)

If memories are stored as attractors, iteration of this dynamics can be regarded as memory retrieval. Activity circulates around the feedback loop between the two layers. One iteration of this loop is the map x1(t-1) -> x2(t) -> x1(t). This single iteration is equivalent to the feedforward architecture of Figure 2b. In the case where the hidden layer is smaller than the visible layer, this architecture is known as an autoencoder network[7]. Therefore the recurrent network dynamics (1) is equivalent to repeated iterations of the feedforward autoencoder. This is just the standard trick of unfolding the dynamics of a recurrent network in time, to yield an equivalent feedforward network with many layers[7]. Because of the close relationship between the recurrent network of Figure 2a and the autoencoder of Figure 2b, it should not be surprising that learning algorithms for these two networks are also related, as will be explained below.

3 LEARNING TO RETAIN PATTERNS

Little trace of an arbitrary input vector x1(0) remains after a few time steps of the dynamics (1). However, the network can retain some input vectors in short-term memory as "reverberating" patterns of activity. These correspond to fixed points of the dynamics (1); they are patterns that do not change as activity circulates around the feedback loop.
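The alternating dynamics of eq. (1) is straightforward to sketch in NumPy; function names here are illustrative, not from the paper.

```python
import numpy as np

def relu(v):
    # Rectification nonlinearity [x]_+ = max{x, 0}, componentwise.
    return np.maximum(v, 0.0)

def retrieve(x1, W21, W12, b1, b2, T=10):
    # Iterate eq. (1): the hidden layer is updated from the visible
    # layer, then the visible layer from the hidden layer, T times.
    for _ in range(T):
        x2 = relu(b2 + W21 @ x1)
        x1 = relu(b1 + W12 @ x2)
    return x1, x2
```

A fixed point of this loop is a "reverberating" pattern: initializing the visible layer with it and iterating leaves it unchanged.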
This suggests a formulation of learning as the optimization of the network's ability to retain examples in short-term memory. A suitable cost function is then the squared difference |x1(T) - x1(0)|^2 between the example pattern x1(0) and the network's short-term memory x1(T) of it after T time steps. Gradient descent on this cost function can be done via backpropagation through time[7]. If the network is trained with patterns drawn from a continuous family, then it can learn to perform the short-term memory task by developing a continuous attractor that lies near the examples it is trained on. When the hidden layer is smaller than the visible layer, the dimensionality of the attractor is limited by the size of the hidden layer. For the case of a single time step (T = 1), training the recurrent network of Figure 2a to retain patterns is equivalent to training the autoencoder of Figure 2b by minimizing the squared difference between its input and output layers, averaged over the examples[8]. From the information theoretic perspective, the small hidden layer in Figure 2b acts as a bottleneck between the input and output layers, forcing the autoencoder to learn an efficient encoding of the input. For the special case of a linear network, the nature of the learned encoding is understood completely. Then the input and output vectors are related by a simple matrix multiplication. The rank of the matrix is equal to the number of hidden units. The average distortion is minimized when this matrix becomes a projection operator onto the subspace spanned by the principal components of the examples[9]. From the dynamical perspective, the principal subspace is a continuous attractor of the dynamics (1). The linear network dynamics converges to this attractor in a single iteration, starting from any initial condition.
Therefore we can interpret principal component analysis and its variants as methods of learning continuous attractors[10].

4 LEARNING TO COMPLETE PATTERNS

Learning to retain patterns in short-term memory only works properly for architectures with a small hidden layer. The problem with a large hidden layer is evident when the hidden and visible layers are the same size, and the neurons are linear. Then the cost function for learning can be minimized by setting the weight matrices equal to the identity, W21 = W12 = I. For this trivial minimum, every input vector is a fixed point of the recurrent network (Figure 2a), and the equivalent feedforward network (Figure 2b) exactly realizes the identity map. Clearly these networks have not learned anything. Therefore in the case of a large hidden layer, learning to retain patterns is inadequate. Without the bottleneck in the architecture, there is no pressure on the feedforward network to learn an efficient encoding. Without constraints on the dimension of the attractor, the recurrent network develops spurious fixed points that have nothing to do with the examples. These problems can be solved by a different formulation of learning based on the task of pattern completion. In the completion task of Figure 3a, the network is initialized with a corrupted version of an example. Learning is done by minimizing the completion error, which is the squared difference $|x_1(T) - d|^2$ between the uncorrupted pattern $d$ and the final visible vector $x_1(T)$. Gradient descent on completion error can be done with backpropagation through time[11].

Figure 3: (a) Pattern retention versus completion. (b) Dynamics of pattern completion.
Figure 4: (a) Locally connected architecture. (b) Receptive fields of hidden neurons.

This new formulation of learning eliminates the trivial identity map solution mentioned above: while the identity network can retain any example, it cannot restore corrupted examples to their pristine form. The completion task forces the network to enlarge the basins of attraction of the stored memories, which suppresses spurious fixed points. It also forces the network to learn associations between variables in the sensory input.

5 LOCALLY CONNECTED ARCHITECTURE

Experiments were conducted with images of handwritten digits from the USPS database described in [12]. The example images were 16 x 16, with a gray scale ranging from 0 to 1. The network was trained on a specific digit class, with the goal of learning a single pattern manifold. Both the network architecture and the nature of the completion task were chosen to suit the topographic structure present in visual images. The network architecture was given a topographic organization by constraining the synaptic connectivity to be local, as shown in Figure 4a. Both the visible and hidden layers of the network were 16 x 16. The visible layer represented an image, while the hidden layer was a topographic feature map. Each neuron had 5 x 5 receptive and projective fields, except for neurons near the edges, which had more restricted connectivity. In the pattern completion task, example images were corrupted by zeroing the pixels inside a 9 x 9 patch chosen at a random location, as shown in Figure 3a. The location of the patch was randomized for each presentation of an example. The size of the patch was a substantial fraction of the 16 x 16 image, and much larger than the 5 x 5 receptive field size. This method of corrupting the examples gave the completion task a topographic nature, because it involved a set of spatially contiguous pixels. This topographic nature would have been lacking if the examples had been corrupted by, for example, the addition of spatially uncorrelated noise.
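The corruption procedure for the completion task can be sketched in a few lines (a minimal NumPy sketch; the function name and random seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def corrupt(image, patch=9):
    # Zero the pixels inside a patch x patch square at a random location,
    # re-randomized on every call, as in the completion experiments.
    out = image.copy()
    h, w = out.shape
    r = rng.integers(0, h - patch + 1)
    c = rng.integers(0, w - patch + 1)
    out[r:r + patch, c:c + patch] = 0.0
    return out

example = rng.random((16, 16))     # gray scale in [0, 1], as in the text
corrupted = corrupt(example)
# The completion error of a candidate restoration x1_T would then be
# np.sum((x1_T - example) ** 2).
```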
Figure 3b illustrates the dynamics of pattern completion performed by a network trained on examples of the digit class "two." The network is initialized with a corrupted example of a "two." After the first iteration of the dynamics, the image is partially restored. The second iteration leads to superior restoration, with further sharpening of the image. The "filling in" phenomenon is also evident in the hidden layer. The network was first trained on a retrieval dynamics of one iteration. The resulting biases and synaptic weights were then used as initial conditions for training on a retrieval dynamics of two iterations. The hidden layer developed into a topographic feature map suitable for representing images of the digit "two." Figure 4b depicts the bottom-up receptive fields of the 256 hidden neurons. The top-down projective fields of these neurons were similar, but are not shown. This feature map is distinct from others[13] because of its use of top-down and bottom-up connections in a feedback loop. The bottom-up connections analyze images into their constituent features, while the top-down connections synthesize images by composing features. The features in the top-down connections can be regarded as a "vocabulary" for synthesis of images. Since not all combinations of features are proper patterns, there must be some "grammatical" constraints on their combination. The network's ability to complete patterns suggests that some of these constraints are embedded in the dynamical equations of the network. Therefore the relaxation dynamics (1) can be regarded as a process of massively parallel constraint satisfaction.

6 CONCLUSION

I have argued that continuous attractors are a natural representation for pattern manifolds. One method of learning attractors is to train the network to retain examples in short-term memory.
This method is equivalent to autoencoder learning, and does not work if the number of hidden units is large. A better method is to train the network to complete patterns. For a locally connected network, this method was demonstrated to learn a topographic feature map. The trained network is able to complete patterns, indicating that syntactic constraints on the combination of features are embedded in the network dynamics. Empirical evidence that the network has indeed learned a continuous attractor is obtained by local linearization of the network dynamics (1). The linearized dynamics has many eigenvalues close to unity, indicating the existence of an approximate continuous attractor. Learning with an increased number of iterations in the retrieval dynamics should improve the quality of the approximation. There is only one aspect of the learning algorithm that is specifically tailored for continuous attractors. This aspect is the limitation of the retrieval dynamics (1) to a few iterations, rather than iterating it all the way to a true fixed point. As mentioned earlier, a continuous attractor is only an idealization; in a real network it does not consist of true fixed points, but is just a manifold to which relaxation is fast and along which drift is slow. Adjusting the shape of this manifold is the goal of learning; the exact locations of the true fixed points are not relevant. The use of a fast retrieval dynamics removes one long-standing objection to attractor neural networks, which is that true convergence to a fixed point takes too long. If all that is desired is fast relaxation to an approximate continuous attractor, attractor neural networks are not much slower than feedforward networks. In the experiments discussed here, learning was done with backpropagation through time. Contrastive Hebbian learning[14] is a simpler alternative. Part of the image is held clamped, the missing values are filled in by convergence to a fixed point, and an anti-Hebbian update is made. Then the missing values are clamped at their correct values, the network converges to a new fixed point, and a Hebbian update is made. This procedure has the disadvantage of requiring true convergence to a fixed point, which can take many iterations. It also requires symmetric connections, which may be a representational handicap. This paper addressed only the learning of a single attractor to represent a single pattern manifold. The problem of learning multiple attractors to represent multiple pattern classes will be discussed elsewhere, along with the extension to network architectures with many layers.

Acknowledgments

This work was supported by Bell Laboratories. I thank J. J. Hopfield, D. D. Lee, L. K. Saul, N. D. Socci, H. Sompolinsky, and D. W. Tank for helpful discussions.

References

[1] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. USA, 79:2554-2558, 1982.
[2] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-169, 1985.
[3] H. S. Seung. How the brain keeps the eyes still. Proc. Natl. Acad. Sci. USA, 93:13339-13344, 1996.
[4] A. P. Georgopoulos, M. Taira, and A. Lukashin. Cognitive neurophysiology of the motor cortex. Science, 260:47-52, 1993.
[5] K. Zhang. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J. Neurosci., 16:2112-2126, 1996.
[6] R. Ben-Yishai, R. L. Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Nat. Acad. Sci. USA, 92:3844-3848, 1995.
[7] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, volume 1, chapter 8, pages 318-362. MIT Press, Cambridge, 1986.
[8] G. W. Cottrell, P. Munro, and D. Zipser. Image compression by back propagation: an example of extensional programming. In N. E. Sharkey, editor, Models of Cognition: A Review of Cognitive Science. Ablex, Norwood, NJ, 1989.
[9] P. Baldi and K. Hornik. Neural networks and principal component analysis: learning from examples without local minima. Neural Networks, 2:53-58, 1989.
[10] H. S. Seung. Pattern analysis and synthesis in attractor neural networks. In K.-Y. M. Wong, I. King, and D.-Y. Yeung, editors, Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective, Singapore, 1997. Springer-Verlag.
[11] F.-S. Tsung and G. W. Cottrell. Phase-space learning. Adv. Neural Info. Proc. Syst., 7:481-488, 1995.
[12] Y. LeCun et al. Learning algorithms for classification: a comparison on handwritten digit recognition. In J.-H. Oh, C. Kwon, and S. Cho, editors, Neural Networks: The Statistical Mechanics Perspective, pages 261-276, Singapore, 1995. World Scientific.
[13] T. Kohonen. The self-organizing map. Proc. IEEE, 78:1464-1480, 1990.
[14] J. J. Hopfield, D. I. Feinstein, and R. G. Palmer. "Unlearning" has a stabilizing effect in collective memories. Nature, 304:158-159, 1983.
A Model of Early Visual Processing

Laurent Itti, Jochen Braun, Dale K. Lee and Christof Koch
{itti, achim, jjwen, koch}@klab.caltech.edu
Computation & Neural Systems, MSC 139-74
California Institute of Technology, Pasadena, CA 91125, U.S.A.

Abstract

We propose a model for early visual processing in primates. The model consists of a population of linear spatial filters which interact through non-linear excitatory and inhibitory pooling. Statistical estimation theory is then used to derive human psychophysical thresholds from the responses of the entire population of units. The model is able to reproduce human thresholds for contrast and orientation discrimination tasks, and to predict contrast thresholds in the presence of masks of varying orientation and spatial frequency.

1 INTRODUCTION

A remarkably wide range of human visual thresholds for spatial patterns appears to be determined by the earliest stages of visual processing, namely, orientation- and spatial frequency-tuned visual filters and their interactions [18, 19, 3, 22, 9]. Here we consider the possibility of quantitatively relating arbitrary spatial vision thresholds to a single computational model. The success of such a unified account should reveal the extent to which human spatial vision indeed reflects one particular stage of processing. Another motivation for this work is the controversy over the neural circuits that generate orientation and spatial frequency tuning in striate cortical neurons [13, 8, 2]. We think it is likely that behaviorally defined visual filters and their interactions reveal at least some of the characteristics of the underlying neural circuitry. Two specific problems are addressed: (i) what is the minimal set of model components necessary to account for human spatial vision; (ii) is there a general decision strategy which relates model responses to behavioral thresholds and which obviates case-by-case assumptions about the decision strategy in different behavioral situations.
To investigate these questions, we propose a computational model articulated around three main stages: first, a population of bandpass linear filters extracts visual features from a stimulus; second, linear filters interact through non-linear excitatory and inhibitory pooling; third, a noise model and decision strategy are assumed in order to relate the model's output to psychophysical data.

2 MODEL

We assume spatial visual filters tuned for a variety of orientations $\theta \in \Theta$ and spatial periods $\lambda \in \Lambda$. The filters have overlapping receptive fields in visual space. Quadrature filter pairs, $F^{\mathrm{even}}_{\lambda,\theta}$ and $F^{\mathrm{odd}}_{\lambda,\theta}$, are used to compute a phase-independent linear energy response, $E_{\lambda,\theta}$, to a visual stimulus $S$. A small constant background activity, $\epsilon$, is added to the linear energy responses:

$$E_{\lambda,\theta} = \sqrt{(F^{\mathrm{even}}_{\lambda,\theta} * S)^2 + (F^{\mathrm{odd}}_{\lambda,\theta} * S)^2} + \epsilon$$

Filters have separable Gaussian tuning curves in orientation and spatial frequency. Their corresponding shape in visual space is close to that of Gabor filters, although not separable along spatial dimensions.

2.1 Pooling: self excitation and divisive inhibition

A model based on linear filters alone would not correctly account for the non-linear response characteristics to stimulus contrast which have been observed psychophysically [19]. Several models have consequently introduced a non-linear transducer stage following each linear unit [19]. A more appealing possibility is to assume a non-linear pooling stage [6, 21, 3, 22]. In this study, we propose a pooling strategy inspired by Heeger's model for gain control in cat area V1 [5, 6]. The pooled response $R_{\lambda,\theta}$ of a unit tuned for $(\lambda, \theta)$ is computed from the linear energy responses of the entire population:

$$R_{\lambda,\theta} = \frac{E_{\lambda,\theta}^{\gamma}}{S^{\delta} + \sum_{\lambda',\theta'} W_{\lambda,\theta}(\lambda',\theta')\, E_{\lambda',\theta'}^{\delta}} + \eta \qquad (1)$$

where the sum is taken over the entire population, $W_{\lambda,\theta}$ is a two-dimensional Gaussian weighting function centered around $(\lambda, \theta)$, and $\eta$ a background activity. The numerator in Eq. 1 represents a non-linear self-excitation term. The denominator represents a divisive inhibitory term which depends not only on the activity of the unit $(\lambda, \theta)$ of interest, but also on the responses of other units. We shall see in Section 3 that, in contrast to Heeger's model for electrophysiological data, in which all units contribute equally to the pool, it is necessary to assume that only a subpopulation of units with tuning close to $(\lambda, \theta)$ contributes to the pool in order to account for psychophysical data. Also, we assume $\gamma > \delta$ to obtain a power law for high contrasts [7], as opposed to Heeger's physiological model, in which $\gamma = \delta = 2$ to account for neuronal response saturation at high contrasts. Several interesting properties result from this pooling model. First, a sigmoidal transducer function - in agreement with contrast discrimination psychophysics - is naturally obtained through pooling and thus need not be introduced post-hoc. The transducer slope for high contrasts is determined by $\gamma - \delta$, the location of its inflexion point by $S$, and the slope at this point by the absolute value of $\gamma$ (and $\delta$). Second, the tuning curves of the pooled units for orientation and spatial period do not depend on stimulus contrast, in agreement with physiological and psychophysical evidence [14]. In comparison, a model which assumes a non-linear transducer but no pooling exhibits sharper tuning curves for lower contrasts. Full contrast independence of the tuning is achieved only when all units participate in the inhibitory pool; when only sub-populations participate in the pool, some contrast dependence remains.

2.2 Noise model: Poisson$^{\alpha}$

It is necessary to assume the presence of noise in the system in order to be able to derive psychophysical performance from the responses of the population of pooled units.
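The filtering and pooling stages above can be sketched in a one-dimensional toy setting (NumPy; the filter shapes, parameter values, and pool weights are illustrative assumptions, not the paper's calibrated values):

```python
import numpy as np

x = np.linspace(-1, 1, 201)
env = np.exp(-x ** 2 / 0.1)                                  # Gaussian envelope
f_even, f_odd = env * np.cos(10 * x), env * np.sin(10 * x)   # quadrature pair

def energy(S, eps=1e-3):
    # Phase-independent linear energy response of one unit.
    return np.sqrt(np.dot(f_even, S) ** 2 + np.dot(f_odd, S) ** 2) + eps

e0 = energy(np.cos(10 * x))          # stimulus at phase 0
e1 = energy(np.cos(10 * x + 1.3))    # same stimulus, shifted phase

def pool(E, W, gamma=4.0, delta=3.5, S=1.0, eta=0.0):
    # Divisive-inhibition pooling of Eq. (1); W[i, j] weights unit j's
    # energy in unit i's inhibitory pool.
    E = np.asarray(E, float)
    return E ** gamma / (S ** delta + W @ E ** delta) + eta

# Transducer implied for a unit whose pool contains only itself:
# on log-log axes the slope is ~gamma at low contrast (accelerating)
# and ~gamma - delta at high contrast (compressive power law).
c = np.logspace(-2, 1, 300)
R = pool(c, W=np.eye(len(c)))
slope = np.gradient(np.log(R), np.log(c))
```

The quadrature pair makes the energy nearly identical for the two stimulus phases, and the log-log slope of the transducer moves from about 4 to about 0.5, matching the gamma and gamma - delta regimes described in the text.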
The deterministic response of each unit then represents the mean of a randomly distributed "neuronal" response which varies from trial to trial in a simulated psychophysical experiment. Existing models usually assume constant noise variance in order to simplify the subsequent decision stage [18]. Using the decision strategy presented below, it is however possible to derive psychophysical performance with a noise model whose variance increases with mean activity, in agreement with electrophysiology [16]. In what follows, Poisson$^{\alpha}$ noise will be assumed and approximated by a Gaussian random variable with variance = mean$^{\alpha}$ ($\alpha$ is a constant close to unity).

2.3 Decision strategy

We use tools from statistical estimation theory to compute the system's behavioral response based on the responses of the population of pooled units. Similar tools have been used by Seung and Sompolinsky [12] under the simplifying assumption of purely Poisson noise and for the particular task of orientation discrimination in the limit of an infinite population of oriented units. Here, we extend this framework to the more general case in which any stimulus attribute may differ between the two stimulus presentations to be discriminated by the model. Let us assume that we want to estimate psychophysical performance at discriminating between two stimuli which differ by the value of a stimulus parameter $\zeta$ (e.g. contrast, orientation, spatial period). The central assumption of our decision strategy is that the brain implements an unbiased efficient statistic $T(\mathcal{R}; \zeta)$, which is an estimator of the parameter $\zeta$ based on the population response $\mathcal{R} = \{R_{\lambda,\theta};\ \lambda \in \Lambda,\ \theta \in \Theta\}$. The efficient statistic is the one which, among all possible estimators of $\zeta$, has the property of minimum variance in the estimated value of $\zeta$.
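The Gaussian approximation to Poisson^alpha noise described above can be sketched directly (NumPy; the mean rate, alpha value, and sample count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_response(mean, alpha=0.75, n=100_000):
    # Gaussian approximation to Poisson^alpha noise: variance = mean ** alpha.
    return rng.normal(mean, np.sqrt(mean ** alpha), size=n)

r = noisy_response(20.0)
# The empirical variance is close to 20 ** 0.75 (about 9.5); for alpha = 1
# this reduces to the usual Gaussian approximation of Poisson noise.
```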
Although we are not suggesting any putative neuronal correlate for $T$, it is important to note that the assumption of an efficient statistic does not require $T$ to be prohibitively complex; for instance, a maximum likelihood estimator, proposed in the decision stage of several existing models, is asymptotically (with respect to the number of observations) an efficient statistic. Because $T$ is efficient, it achieves the Cramér-Rao bound [1]. Consequently, when the number of observations (i.e. simulated psychophysical trials) is large, $E[T] = \zeta$ and $\mathrm{var}[T] = 1/\mathcal{J}(\zeta)$, where $E[\cdot]$ is the mean over all observations, $\mathrm{var}[\cdot]$ the variance, and $\mathcal{J}(\zeta)$ is the Fisher information. The Fisher information can be computed using the noise model assumption and tuning properties of the pooled units: for a random variable $X$ with probability density $f(x;\zeta)$, it is given by [1]:

$$\mathcal{J}(\zeta) = E\left[\left(\frac{\partial}{\partial \zeta} \ln f(X;\zeta)\right)^2\right]$$

For our Poisson$^{\alpha}$ noise model, and assuming that different pooled units are independent [15], this yields a closed-form Fisher information for each single unit $R_{\lambda,\theta}$; for all independent units, the total Fisher information is the sum of the single-unit terms. The Fisher information computed for each pooled unit and three types of stimulus parameters $\zeta$ is shown in Figure 1. This figure demonstrates the importance of using information from all units in the population rather than from only one unit optimally tuned for the stimulus: although the unit carrying the most information about contrast is the one optimally tuned to the stimulus pattern, more information about orientation or spatial frequency is carried by units which are tuned to flanking orientations and spatial periods and whose tuning curves have maximum slope for the stimulus rather than maximum absolute sensitivity. In our implementation, the derivatives of pooled responses used in the expression of Fisher information are computed numerically.

Figure 1: Fisher information computed for contrast, orientation and spatial frequency.
Each node in the tridimensional meshes represents the Fisher information for the corresponding pooled unit $(\lambda, \theta)$ in a model with 30 orientations and 4 scales. Arrows indicate the unit $(\lambda, \theta)$ optimally tuned to the stimulus. The total Fisher information in the population is the sum of the information for all units.

Using the estimate of $\zeta$ and its variance from the Fisher information, it is possible to derive psychophysical performance for a discrimination task between two stimuli with parameters $\zeta_1 \approx \zeta_2$ using standard ideal observer signal discrimination techniques [4]. For such discrimination, we use the Central Limit Theorem (in the limit of a large number of trials) to model the noisy responses of the system as two Gaussians with means $\zeta_1$ and $\zeta_2$, and variances $\sigma_1^2 = 1/\mathcal{J}(\zeta_1)$ and $\sigma_2^2 = 1/\mathcal{J}(\zeta_2)$ respectively. A decision criterion $D$ is chosen to minimize the overall probability of error; since in our case $\sigma_1 \neq \sigma_2$ in general, we derive a slightly more complicated expression for performance $P$ at a Yes/No (one alternative forced choice) task than what is commonly used with models assuming constant noise [18]:

$$D = \frac{\zeta_2\sigma_1^2 - \zeta_1\sigma_2^2 - \sigma_1\sigma_2\sqrt{(\zeta_1-\zeta_2)^2 + 2(\sigma_1^2-\sigma_2^2)\log(\sigma_1/\sigma_2)}}{\sigma_1^2 - \sigma_2^2}$$

$$P = \frac{1}{2} + \frac{1}{4}\,\mathrm{erf}\!\left(\frac{\zeta_2 - D}{\sigma_2\sqrt{2}}\right) + \frac{1}{4}\,\mathrm{erf}\!\left(\frac{D - \zeta_1}{\sigma_1\sqrt{2}}\right)$$

where erf is the Normal error function. The expression for $D$ extends by continuity to $D = (\zeta_1 + \zeta_2)/2$ when $\sigma_1 = \sigma_2$. This decision strategy provides a unified, task-independent framework for the computation of psychophysical performance from the deterministic responses of the pooled units. This strategy can easily be extended to allow the model to perform discrimination tasks with respect to additional stimulus parameters, under exactly the same theoretical assumptions.

3 RESULTS

3.1 Model calibration

The parameters of the model were automatically adjusted to fit human psychophysical thresholds measured in our laboratory [17] for contrast and orientation discrimination tasks (Figure 2).
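The decision stage described above can be sketched end-to-end for a toy population of orientation-tuned units (NumPy/math; the Gaussian tuning curves are hypothetical, and the per-unit Fisher term keeps only the mean-derivative contribution of the Gaussian noise model, a deliberate simplification):

```python
import math
import numpy as np

prefs = np.linspace(0.0, 180.0, 30, endpoint=False)   # preferred orientations
ALPHA = 0.75                                          # noise exponent

def responses(theta):
    # Hypothetical pooled tuning curves (Gaussian, width 25 deg).
    return 10.0 * np.exp(-0.5 * ((theta - prefs) / 25.0) ** 2) + 0.1

def fisher(theta, d=1e-3):
    # Total Fisher information: sum over independent units, keeping only
    # the mean-derivative term (dR/dtheta)^2 / variance.
    R = responses(theta)
    dR = (responses(theta + d) - responses(theta - d)) / (2 * d)
    return float(np.sum(dR ** 2 / R ** ALPHA))

def criterion(z1, z2, s1, s2):
    # Minimum-error criterion D for N(z1, s1^2) vs N(z2, s2^2);
    # reduces to the midpoint when the variances are equal.
    if abs(s1 - s2) < 1e-12:
        return 0.5 * (z1 + z2)
    root = math.sqrt((z1 - z2) ** 2 + 2 * (s1 ** 2 - s2 ** 2) * math.log(s1 / s2))
    return (z2 * s1 ** 2 - z1 * s2 ** 2 - s1 * s2 * root) / (s1 ** 2 - s2 ** 2)

def performance(z1, z2):
    # Yes/No performance P for discriminating orientations z1 and z2.
    s1, s2 = 1.0 / math.sqrt(fisher(z1)), 1.0 / math.sqrt(fisher(z2))
    D = criterion(z1, z2, s1, s2)
    return (0.5 + 0.25 * math.erf((z2 - D) / (s2 * math.sqrt(2)))
                + 0.25 * math.erf((D - z1) / (s1 * math.sqrt(2))))
```

With identical stimuli the sketch gives chance performance (0.5), and P grows toward 1 as the orientation difference increases; the unequal-variance branch of `criterion` implements the reconstructed expression for D.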
The model used in this experiment consisted of 60 orientations evenly distributed between 0 and 180 deg. One spatial scale at 4 cycles per degree (cpd) was sufficient to account for the data. A multidimensional simplex method with simulated annealing overhead was used to determine the best fit of the model to the data [10]. The free parameters adjusted during the automatic fits were: the noise level $\alpha$, the pooling exponents $\gamma$ and $\delta$, the inhibitory pooling constant $S$, and the background firing rates, $\epsilon$ and $\eta$. The error function minimized by the fitting algorithm was a weighted average of three constraints: 1) least-square error with the contrast discrimination data in Figure 2.a; 2) least-square error with the orientation discrimination data in Figure 2.b; 3) because the data was sparse in the "dip-shaped" region of the curve in Figure 2.a, and unreliable due to the limited contrast resolution of the display used for the psychophysics, we added an additional constraint favoring a more pronounced "dip", as has been observed by several other groups [11, 19, 22].

Figure 2: The model (solid lines) was calibrated using data from two psychophysical experiments: (a) discrimination between a pedestal contrast (a.$\alpha$) and the same pedestal plus an increment contrast (a.$\beta$); (b) discrimination between two orientations near vertical (b.$\alpha$ and b.$\beta$). After calibration, the transducer function of each pooled unit (c) correctly exhibits an accelerating non-linearity near threshold (contrast $\approx$ 1%) and a compressive non-linearity for high contrasts (Weber's law).
We can see in (d) that pooling among units with similar tuning properties sharpens their tuning curves. Model parameters were: $\alpha \approx 0.75$, $\gamma \approx 4$, $\delta \approx 3.5$, $\epsilon \approx 1\%$, $\eta \approx 1.7$ Hz, $S$ such that the transducer inflexion point lies at 4x the detection threshold contrast, orientation tuning FWHM = 68 deg (full width at half maximum), orientation pooling FWHM = 40 deg. Two remaining parameters are the orientation tuning width, $\sigma_\theta$, of the filters and the width, $\sigma_{W\theta}$, of the pool. It was not possible from the data in Figure 2 alone to unambiguously determine these parameters. However, for any given $\sigma_\theta$, $\sigma_{W\theta}$ is uniquely determined by the following two qualitative constraints: first, a small pool size is not desirable because it yields contrast-dependent orientation tuning; it however appears from the data in Figure 2.b that this tuning should not vary much over a wide range of contrasts. The second constraint is qualitatively derived from Figure 3.a: for large pool sizes, the model predicted significant interference between mask and test patterns even for large orientation differences. Such interference was not observed in the data for orientation differences larger than 45 deg. It consequently seems that a partial inhibitory pool, composed only of a fraction of the population of oriented filters with tuning similar to the central excitatory unit, accounts best for the psychophysical data. Finally, $\sigma_\theta$ was fixed so as to yield a correct qualitative curve shape for Figure 3.a.

3.2 Predictions

We used complex stimuli from masking experiments to test the predictive value of the model (Figure 3). Although it was necessary to use some of the qualitative properties of the data seen in Figure 3.a to calibrate the model as detailed above, the calibrated model correctly produced a quantitative fit of this data. The calibrated model also correctly predicted the complex data of Figure 3.b.
Figure 3: Prediction of psychophysical contrast thresholds in the presence of an oblique mask. The mask was a 50%-contrast stochastic oriented pattern ($\alpha$), and the superimposed test pattern was a sixth-derivative-of-Gaussian bar ($\beta$). In (a), threshold elevation (i.e. the ratio of threshold in the presence of the mask to threshold in its absence) was measured for varying mask orientation, for mask and test patterns at 4 cycles per degree (cpd). In (b), the orientation difference between test and mask was fixed at 15 deg, and threshold elevation was measured as a function of mask spatial frequency. Solid lines represent model predictions, and dashed lines represent unity threshold elevation.

4 DISCUSSION AND CONCLUSION

We have developed a model of early visual processing in humans which accounts for a wide range of measured spatial vision thresholds and which predicts behavioral thresholds for a potentially unlimited number of spatial discriminations. In addition to orientation- and spatial-frequency-tuned units, we have found it necessary to assume two types of interactions between such units: (i) non-linear self-excitation of each unit, and (ii) divisive normalization of each unit's response relative to the responses of similarly tuned units. All model parameters are constrained by psychophysical data, and an automatic fitting procedure consistently converged to the same parameter set regardless of the initial position in parameter space. Our two main contributions are the small number of model components and the unified, task-independent decision strategy. Rather than making different assumptions about the decision strategy in different behavioral tasks, we combine the information contained in the responses of all model units in a manner that is optimal for any behavioral task.
We suggest that human observers adopt a similarly optimal decision procedure as they become familiar with a particular task ("task set"). Although here we apply this decision strategy only to the discrimination of stimulus contrast, orientation, and spatial frequency, it can readily be generalized to arbitrary discriminations such as, for example, the discrimination of vernier targets. So far we have considered only situations in which the same decision strategy is optimal for every stimulus presentation. We are now studying situations in which the optimal decision strategy varies unpredictably from trial to trial ("decision uncertainty"), for example when the observer attempts to detect an increase in either the spatial frequency or the contrast of the stimulus. In this way, we hope to learn the extent to which our model reflects the decision strategy adopted by human observers in an even wider range of situations. We have also assumed that the model's units are independent, which is not strictly true in biological systems (although the main source of correlation between neurons is the overlap between their respective tuning curves, which is accounted for in the model). The mathematical developments necessary to account for fixed or variable covariance between units are currently under study. In contrast to other models of early visual processing [5, 6], we find that the psychophysical data is consistent only with interactions between similarly tuned units (e.g., "near-orientation inhibition"), not with interactions between units of very different tuning (e.g., "cross-orientation inhibition"). Although such partial pooling does not render tuning functions completely contrast-independent, an additional degree of contrast-independence could be provided by pooling across different spatial locations. This issue is currently under investigation.
In conclusion, we have developed a model based on self-excitation of each unit, divisive normalization [5, 6] between similarly tuned units, and an ideal-observer decision strategy. It was able to reproduce a wide range of human visual thresholds. The fact that such a simple and idealized model can account quantitatively for a wide range of psychophysical observations greatly strengthens the notion that spatial vision thresholds reflect processing at one particular neuroanatomical level.

Acknowledgments: This work was supported by NSF-Engineering Research Center (ERC), NIMH, ONR, and the Sloan Center for Theoretical Neurobiology.

References

[1] Cover TM, Thomas JA. Elements of Information Theory. Wiley & Sons, 1991.
[2] Ferster D, Chung S, Wheat H. Nature 1996;380(6571):249-52.
[3] Foley JM. J Opt Soc A 1994;11(6):1710-9.
[4] Green DM, Swets JA. Signal Detection Theory and Psychophysics. Wiley & Sons, 1966.
[5] Heeger DJ. Computational Models of Visual Processing, MIT Press, 1991.
[6] Heeger DJ. Vis Neurosci 1992;9:181-97.
[7] Nachmias J, Sansbury RV. Vis Res 1974;14:1039-42.
[8] Nelson S, Toth L, Sheth B, Sur M. Science 1994;265(5173):774-77.
[9] Perona P, Malik J. J Opt Soc A 1990;7(5):923-32.
[10] Press WH, Teukolsky SA, et al. Numerical Recipes in C. Cambridge University Press, 1992.
[11] Ross J, Speed HD. Proc R Soc B 1991;246:61-9.
[12] Seung HS, Sompolinsky H. Proc Natl Acad Sci USA 1993;90:10749-53.
[13] Sillito AM. Progr Brain Res 1992;90:349-84.
[14] Skottun BC, Bradley A, Sclar G, et al. J Neurophys 1987;57(3):773-86.
[15] Snippe HP, Koenderink JJ. Biol Cybern 1992;67:183-90.
[16] Teich MC, Turcott RG, Siegel RM. IEEE Eng Med Biol 1996;Sept-Oct:79-87.
[17] Wen J, Koch C, Braun J. Proc ARVO 1997;5457.
[18] Wilson HR, Bergen JR. Vis Res 1979;19:19-32.
[19] Wilson HR. Biol Cybern 1980;38:171-8.
[20] Wilson HR, McFarlane DK, Phillips GC. Vis Res 1983;23:873-82.
[21] Wilson HR, Humanski R. Vis Res 1993;33(8):1133-50.
[22] Zenger B, Sagi D. Vis Res 1996;36(16):2497-2513.
A Simple and Fast Neural Network Approach to Stereovision

Rolf D. Henkel
Institute of Theoretical Physics, University of Bremen
P.O. Box 330 440, D-28334 Bremen
http://axon.physik.uni-bremen.de/~rdh

Abstract

A neural network approach to stereovision is presented based on aliasing effects of simple disparity estimators and a fast coherence-detection scheme. Within a single network structure, a dense disparity map with an associated validation map and, additionally, the fused cyclopean view of the scene are available. The network operations are based on simple, biologically plausible circuitry; the algorithm is fully parallel and non-iterative.

1 Introduction

Humans experience the three-dimensional world not as it is seen by either their left or right eye, but from the position of a virtual cyclopean eye, located in the middle between the two real eye positions. The different perspectives of the left and right eyes cause slight relative displacements of objects in the two retinal images (disparities), which make a simple superposition of both images without diplopia impossible. Proper fusion of the retinal images into the cyclopean view requires the registration of both images to a common coordinate system, which in turn requires calculation of disparities for all image areas which are to be fused.

1.1 The Problems with Classical Approaches

The estimation of disparities turns out to be a difficult task, since various random and systematic image variations complicate it. Several different techniques have been proposed over time, which can be loosely grouped into feature-, area- and phase-based approaches. All these algorithms have a number of computational problems directly linked to the very assumptions inherent in these approaches. In feature-based stereo, intensity data is first converted to a set of features assumed to be a more stable image property than the raw image intensities.
Matching primitives used include zero-crossings, edges and corner points (Frisby, 1991), or higher-order primitives like topological fingerprints (see for example Fleck, 1991). Generally, the set of feature classes is discrete, causing the two primary problems of feature-based stereo algorithms: the famous "false matches" problem and the problem of missing disparity estimates. False matches are caused by the fact that a single feature in the left image can potentially be matched with every feature of the same class in the right image. This problem is basic to all feature-based stereo algorithms and can only be solved by the introduction of additional constraints on the solution. In conjunction with the extracted features, these constraints define a complicated error measure which can be minimized by cooperative processes (Marr, 1979) or by direct (Ohta, 1985) or stochastic search techniques (Yuille, 1991). While cooperative processes and stochastic search techniques can be realized easily on a neural basis, it is not immediately clear how to implement the more complicated algorithmic structures of direct search techniques neuronally. Cooperative processes and stochastic search techniques turn out to be slow, needing many iterations to converge to a local minimum of the error measure. The requirement that features be a stable image property causes the second problem of feature-based stereo: stable features can be detected in only a fraction of the whole image area, leading to missing disparity estimates for most of the image. For those image parts, disparity estimates can only be guessed. Dense disparity maps can be obtained with area-based approaches, where a suitably chosen correlation measure is maximized between small image patches of the left and right view. However, a neuronally plausible implementation of this does not seem readily available.
Furthermore, the maximization turns out to be a computationally expensive process, since extensive search is required in configuration space. Hierarchical processing schemes can be utilized for speed-up, using information obtained at coarse spatial scales to restrict searching at finer scales. But for general image data it is not guaranteed that the disparity information obtained at some coarse scale is valid. The disparity data might be wrong, might have a different value than at finer scales, or might not be present at all. Furthermore, by processing data from coarse to fine spatial scales, hierarchical processing schemes are intrinsically sequential. This creates additional algorithmic overhead which is again difficult to realize with neuronal structures. The same comments apply to phase-based approaches, where a locally extracted Fourier-phase value is used for matching. Phase values are only defined modulo 2π, and this wrap-around makes the use of hierarchical processing essential for these types of algorithms. Moreover, since data is analyzed in different spatial-frequency channels, it is nearly certain that some phase values will be undefined at intermediate scales, due to missing signal energy in the corresponding frequency band (Fleet, 1993). Thus, in addition to hierarchical processing, some kind of exception handling is needed with these approaches.

2 Stereovision by Coherence Detection

In summary, classical approaches to stereovision seem to have difficulties with the fast calculation of dense disparity maps, at least with plausible neural circuitry. In the following, a neural network implementation will be described which solves this task by using simple disparity estimators based on motion-energy mechanisms (Adelson, 1985; Qian, 1997), closely resembling responses of complex cells in visual cortex (DeAngelis, 1991).
Disparity units of this type belong to a class of disparity estimators which can be derived from optical flow methods (Barron, 1994). Clearly, disparity calculation and optical flow estimation share many similarities. The two stereo views of a (static) scene can be considered as two time-slices cut out of the space-time intensity pattern which would be recorded by an imaginary camera moving from the position of the left to the position of the right eye. However, compared to optical flow, disparity estimation is complicated by the fact that only two discrete "time"-samples are available, namely the images of the left and right view positions.

Figure 1: The velocity of an image patch manifests itself as principal texture direction in the space-time flow field traced out by the intensity pattern in time (left). Sampling such flow patterns at discrete times can create aliasing effects which lead to wrong estimates.

If one is using optical flow estimation techniques for disparity calculations, this problem is always present. For an explanation consider Fig. 1. A surface patch shifting over time traces out a certain flow pattern. The principal texture direction of this flow indicates the relative velocity of the image patch (Fig. 1, left). Sampling the flow pattern only at discrete time points, the shift between two "time"-samples can be estimated without ambiguity provided the shift is not too large (Fig. 1, middle). However, if a certain limit is exceeded, it becomes impossible to estimate the shift correctly, given the data (Fig. 1, right). This is a simple aliasing effect in the "time" direction; an everyday example can be seen as motion reversal in movies. In the case of stereovision, aliasing effects of this type are always present, and they limit the range of disparities a simple disparity unit can estimate.
Sampling theory gives a relation between the maximal spatial wavevector k_max (or, equivalently, the minimal spatial wavelength λ_min) present in the data and the largest disparity d which can be estimated reliably (Henkel, 1997):

    ||d|| < π / k_max = λ_min / 2 .   (1)

A well-known example of the size-disparity scaling expressed in equation (1) is found in the context of the spatial frequency channels assumed to exist in the visual cortex. Cortical cells respond to spatial wavelengths down to about half their peak wavelength λ_opt; therefore, they can reliably estimate only disparities less than λ_opt/4. This is known as Marr's quarter-cycle limit (Blake, 1991). Equation (1) immediately suggests a way to extend the limited working range of disparity estimators: spatial smoothing of the image data before or during disparity calculation reduces k_max, and in turn increases the disparity range. However, spatial smoothing also reduces the spatial resolution of the resulting disparity map. Another way of modifying the usable range of disparity estimators is the application of a fixed preshift to the input data before disparity calculation. This would require prior knowledge of the correct preshift to be applied, which is a nontrivial problem. One could resort to hierarchical coarse-to-fine schemes, but the difficulties with hierarchical schemes have already been elaborated. The aliasing effects discussed are a general feature of sampling visual space with only two eyes; instead of counteracting them, one can exploit them in a simple coherence-detection scheme, where the multi-unit activity in stacks of disparity detectors tuned to a common view direction is analyzed. Assuming that all disparity units i in a stack have random preshifts or presmoothing applied to their input data, these units will have different, but slightly overlapping, working ranges D_i = [d_i^min, d_i^max] for valid disparity estimates.
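The working-range constraint of equation (1) is easy to check numerically. A toy sketch (the function names and the 16-pixel example are ours, not from the paper):

```python
def max_reliable_disparity(lambda_min):
    """Size-disparity scaling of equation (1): an estimator whose input
    contains wavelengths down to lambda_min can recover disparities only
    up to lambda_min / 2 without aliasing."""
    return lambda_min / 2.0

def quarter_cycle_limit(lambda_opt):
    """Marr's quarter-cycle limit: a cortical cell with peak wavelength
    lambda_opt responds down to about lambda_opt / 2, so its reliable
    disparity range is lambda_opt / 4."""
    return max_reliable_disparity(lambda_opt / 2.0)

print(quarter_cycle_limit(16.0))  # 4.0 (pixels, for a 16-pixel peak wavelength)
```

Presmoothing raises lambda_min and therefore widens the usable range, at the cost of spatial resolution, exactly as described above.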
An object with true disparity d, seen in the common view direction of such a stack, will therefore split the stack into two disjoint classes: the class C of estimators with d ∈ D_i for all i ∈ C, and the rest of the stack, C̄, with d ∉ D_i. All disparity estimators in C will code more or less the true disparity, d_i ≈ d, but the estimates of units belonging to C̄ will be subject to the random aliasing effects discussed, depending in a complicated way on image content and on the disparity range D_i of the unit. We will thus have d_i ≈ d ≈ d_j whenever units i and j belong to C, and random relationships otherwise. A simple coherence detection within each stack, i.e. searching for all units with d_i ≈ d_j and extracting the largest cluster found, will be sufficient to single out C. The true disparity d in the view direction of the stack can then simply be estimated as an average over all coherently coding units:

    d ≈ (1/N(C)) Σ_{i∈C} d_i ,

where N(C) denotes the number of units in C.

3 Neural Network Implementation

Repeating this coherence-detection scheme in every view direction results in a fully parallel network structure for disparity calculation. Neighboring disparity stacks responding to different view directions estimate disparity values independently from each other, and within each stack, disparity units operate independently from each other. Since coherence detection is an opportunistic scheme, extensions of the basic algorithm to multiple spatial scales and combinations of different types of disparity estimators are trivial. Additional units are simply included in the appropriate coherence stacks. The coherence scheme will combine only the information from the coherently coding units and ignore the rest of the data. For this reason, the scheme also turns out to be extremely robust against single-unit failures.

Figure 2: The network structure for a single horizontal scan-line (left).
The view directions of the disparity stacks split the angle between the left and right lines of sight in the network and in 3D-space in half, therefore analyzing space along the cyclopean view directions (right).

In the current implementation (Fig. 2), disparity units at a single spatial scale are arranged into horizontal disparity layers. Left and right image data is fed into this network along diagonally running data lines. This causes every disparity layer to receive the stereo data with a certain fixed preshift applied, leading to the required, slightly different working ranges of neighboring layers. Disparity units stacked vertically above each other are collected into a single disparity stack which is then analyzed for coherent activity.

4 Results

The new stereo network performs comparably on several standard test image sets (Fig. 3). The calculated disparity maps are similar to maps obtained by classical area-based approaches, but they display subpixel precision. Since no smoothing or regularization is performed by the coherence-based stereo algorithm, sharp disparity edges can be observed at object borders.

Figure 3: Disparity maps for some standard test images (small insets), calculated by the coherence-based stereo algorithm.

Figure 4: The performance of coherence-based stereo on a difficult scene with specular highlights, transparency and repetitive structures (left). The disparity map (middle) is dense and correct, except for a few structure-less image regions. These regions, as well as most object borders, are indicated in the validation map (right) with a low [dark] validation count.

Within the network, a simple validation map is available locally. A measure of local coherence can be obtained by calculating the relative number of coherently acting disparity units in each stack, i.e. by calculating the ratio N(C)/N(C ∪ C̄), where N(C) is the number of units in class C.
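The per-stack processing reduces to a few lines: pick out the coherent class C, average its estimates, and form the validation count N(C)/N(C ∪ C̄) together with the cyclopean intensity the text goes on to derive. A minimal brute-force sketch; the tolerance, the stack values, and the exact normalizations are our assumptions, not taken from the paper:

```python
import numpy as np

def coherence_detect(estimates, tol=0.5):
    """Return indices of the largest cluster of mutually agreeing
    disparity estimates in a stack (the coherent class C)."""
    estimates = np.asarray(estimates, dtype=float)
    best = np.zeros(len(estimates), dtype=bool)
    for d in estimates:
        members = np.abs(estimates - d) <= tol
        if members.sum() > best.sum():
            best = members
    return np.flatnonzero(best)

def stack_outputs(estimates, I_left, I_right, tol=0.5):
    """Disparity, validation count and cyclopean intensity of one stack."""
    C = coherence_detect(estimates, tol)
    disparity = np.asarray(estimates, dtype=float)[C].mean()
    validation = len(C) / len(estimates)              # N(C) / N(C ∪ C̄)
    cyclopean = 0.5 * (I_left[C].mean() + I_right[C].mean())
    return disparity, validation, cyclopean

# Units whose working range covers the true disparity (~2.0) agree;
# the remaining units alias to arbitrary values.
stack = [2.1, 1.9, 2.0, -3.7, 0.4, 2.2, 5.1]
I_L = np.arange(7, dtype=float)    # toy left/right intensities
I_R = I_L[::-1].copy()
d, v, ic = stack_outputs(stack, I_L, I_R)
print(round(d, 2), round(v, 2), ic)  # 2.05 0.57 3.0
```

Running this scheme independently in every view direction gives the three co-registered maps of the network: disparity, validation, and cyclopean view.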
In most cases, this validation map clearly marks image areas where the disparity calculations failed (for various reasons, notably at occlusions caused by object borders, or in large structure-less image regions where no reliable matching can be obtained; compare Fig. 4). Close inspection of disparity and validation maps reveals that these image maps are not aligned with the left or the right view of the scene. Instead, both maps are registered with the cyclopean view. This is caused by the structural arrangement of data lines and disparity stacks in the network. Reprojecting data lines and stacks back into 3D-space shows that the stacks analyze three-dimensional space along lines splitting the angle between the left and right view directions in half. This is the cyclopean view direction as defined by Hering (1879). It is easy to obtain the cyclopean view of the scene itself. With I_i^L and I_i^R denoting the left and right input data at the position of disparity unit i, a summation over all coherently coding disparity units in a stack, i.e.

    I^C = (1/(2N(C))) Σ_{i∈C} (I_i^L + I_i^R) ,

gives the image intensity I^C in the cyclopean view direction of this stack. Collecting I^C from all disparity stacks gives the complete cyclopean view as the third co-registered map of the network (Fig. 5).

Figure 5: A simple superposition of the left and right stereo images results in diplopia (left). By using a vergence system, the two stereo images can be aligned better (middle), but diplopia is still prominent in most areas of the visual field. The fused cyclopean view of the scene (right) was calculated by the coherence-based stereo network.

Acknowledgements

Thanks to Helmut Schwegler and Robert P. O'Shea for interesting discussions. Image data courtesy of G. Medioni, USC Institute for Robotics & Intelligent Systems, B. Bolles, AIC, SRI International, and G. Sommer, Kiel Cognitive Systems Group, Christian-Albrechts-Universität Kiel.
An internet-based implementation of the algorithm presented in this paper is available at http://axon.physik.uni-bremen.de/~rdh/online~alc/stereo/.

References
Adelson, E.H. & Bergen, J.R. (1985): Spatiotemporal Energy Models for the Perception of Motion. J. Opt. Soc. Am. A 2: 284-299.
Barron, J.L., Fleet, D.J. & Beauchemin, S.S. (1994): Performance of Optical Flow Techniques. Int. J. Comp. Vis. 12: 43-77.
Blake, R. & Wilson, H.R. (1991): Neural Models of Stereoscopic Vision. TINS 14: 445-452.
DeAngelis, G.C., Ohzawa, I. & Freeman, R.D. (1991): Depth is Encoded in the Visual Cortex by a Specialized Receptive Field Structure. Nature 352: 156-159.
Fleck, M.M. (1991): A Topological Stereo Matcher. Int. J. Comp. Vis. 6: 197-226.
Fleet, D.J. & Jepson, A.D. (1993): Stability of Phase Information. IEEE PAMI 15: 1253-1268.
Frisby, J.P. & Pollard, S.B. (1991): Computational Issues in Solving the Stereo Correspondence Problem. In: M.S. Landy and J.A. Movshon (eds.), Computational Models of Visual Processing, p. 331, MIT Press, Cambridge 1991.
Henkel, R.D. (1997): Fast Stereovision by Coherence Detection. In: Proc. of CAIP'97, Kiel, eds. G. Sommer, K. Daniilidis and J. Pauli, p. 297, LNCS 1296, Springer, Heidelberg 1997.
Hering, E. (1879): Der Raumsinn und die Bewegung des Auges. In: Handbuch der Physiologie, ed. L. Hermann, Band 3, Teil 1, Vogel, Leipzig 1879.
Marr, D. & Poggio, T. (1979): A Computational Theory of Human Stereo Vision. Proc. R. Soc. Lond. B 204: 301-328.
Ohta, Y. & Kanade, T. (1985): Stereo by Intra- and Inter-scanline Search Using Dynamic Programming. IEEE PAMI 7: 139-154.
Qian, N. & Zhu, Y. (1997): Physiological Computation of Binocular Disparity. To appear in Vision Research.
Yuille, A.L., Geiger, D. & Bülthoff, H.H. (1991): Stereo Integration, Mean Field Theory and Psychophysics. Network 2: 423-442.
On Parallel Versus Serial Processing: A Computational Study of Visual Search

Eyal Cohen
Department of Psychology, Tel-Aviv University, Tel Aviv 69978, Israel
eyalc@devil.tau.ac.il

Eytan Ruppin
Departments of Computer Science & Physiology, Tel-Aviv University, Tel Aviv 69978, Israel
ruppin@math.tau.ac.il

Abstract

A novel neural network model of pre-attention processing in visual-search tasks is presented. Using displays of line orientations taken from Wolfe's experiments [1992], we study the hypothesis that the distinction between parallel and serial processes arises from the availability of global information in the internal representations of the visual scene. The model operates in two phases. First, the visual displays are compressed via principal component analysis. Second, the compressed data is processed by a target-detector module in order to identify the existence of a target in the display. Our main finding is that targets in displays which were found experimentally to be processed in parallel can be detected by the system, while targets in experimentally serial displays cannot. This fundamental difference is explained via variance analysis of the compressed representations, providing a numerical criterion distinguishing parallel from serial displays. Our model yields a mapping of response-time slopes that is similar to Duncan and Humphreys's "search surface" [1989], providing an explicit formulation of their intuitive notion of feature similarity. It presents a neural realization of the processing that may underlie the classical metaphorical explanations of visual search.

1 Introduction

This paper presents a neural model of pre-attentive visual processing. The model explains why certain displays can be processed very fast, "in parallel", while others require slower, "serial" processing in subsequent attentional systems.
Our approach stems from the observation that the visual environment is overflowing with diverse information, but the biological information-processing systems analyzing it have a limited capacity [1]. This apparent mismatch suggests that data compression should be performed at an early stage of perception, and that via an accompanying process of dimension reduction, only a few essential features of the visual display should be retained. We propose that only parallel displays incorporate global features that enable fast target detection, and hence they can be processed pre-attentively, with all items (target and distractors) examined at once. On the other hand, in serial displays' representations, global information is obscure and target detection requires a serial, attentional scan of local features across the display. Using principal component analysis (PCA), our main goal is to demonstrate that neural systems employing compressed, dimensionally reduced representations of the visual information can successfully process only parallel displays and not serial ones. The source of this difference will be explained via variance analysis of the displays' projections on the principal axes. The modeling of visual attention in cognitive psychology involves the use of metaphors, e.g., Posner's beam of attention [2]. A visual attention system of a surviving organism must supply fast answers to burning issues such as detecting a target in the visual field and characterizing its primary features. An attentional system employing a constant-speed beam of attention [3] probably cannot perform such tasks fast enough, and a pre-attentive system is required. Treisman's feature integration theory (FIT) describes such a system [4]. According to FIT, features of separate dimensions (shape, color, orientation) are first coded pre-attentively in a locations map and in separate feature maps, each map representing the values of a particular dimension.
Then, in the second stage, attention "glues" the features together, conjoining them into objects at their specified locations. This hypothesis was supported using the visual-search paradigm [4], in which subjects are asked to detect a target within an array of distractors which differ on given physical dimensions such as color, shape or orientation. As long as the target is significantly different from the distractors in one dimension, the reaction time (RT) is short and shows almost no dependence on the number of distractors (low RT slope). This result suggests that in this case the target is detected pre-attentively, in parallel. However, if the target and distractors are similar, or the target specifications are more complex, reaction time grows considerably as a function of the number of distractors [5, 6], suggesting that the displays' items are scanned serially using an attentional process. FIT and other related cognitive models of visual search are formulated on the conceptual level and do not offer a detailed description of the processes involved in transforming the visual scene from an ordered set of data points into given values in specified feature maps. This paper presents a novel computational explanation of the source of the distinction between parallel and serial processing, progressing from general metaphorical terms to a neural network realization. Interestingly, we also come out with a computational interpretation of some of these metaphorical terms, such as feature similarity.

2 The Model

We focus our study on visual-search experiments of line orientations performed by Wolfe et al. [7], using three set-sizes composed of 4, 8 and 12 items. The number of items equals the number of distractors + target in target displays; in non-target displays the target was replaced by another distractor, keeping a constant set-size.
Five experimental conditions were simulated: (A) a 20-degrees-tilted target among vertical distractors (homogeneous background); (B) a vertical target among 20-degrees-tilted distractors (homogeneous background); (C) a vertical target among a heterogeneous background (a mixture of lines with ±20, ±40, ±60, ±80 degrees orientations); (E) a vertical target among two flanking distractor orientations (at ±20 degrees); and (G) a vertical target among two flanking distractor orientations (±40 degrees). The response times (RT) as a function of the set-size measured by Wolfe et al. [7] show that type A, B and G displays are scanned in a parallel manner (1.2, 1.8 and 4.8 msec/item for the RT slopes), while type C and E displays are scanned serially (19.7 and 17.5 msec/item). The input displays of our system were prepared following Wolfe's prescription: nine images of the basic line orientations were produced as nine matrices of gray-level values. Displays for the various conditions of Wolfe's experiments were produced by randomly assigning these matrices into a 4x4 array, yielding 128x100 display-matrices that were transformed into 12800-element display-vectors. A total number of 2400 displays were produced in 30 groups (80 displays in each group): 5 conditions (A, B, C, E, G) x target/non-target x 3 set-sizes (4, 8, 12). Our model is composed of two neural network modules connected in sequence, as illustrated in Figure 1: a PCA module which compresses the visual data into a set of principal axes, and a Target Detector (TD) module. The latter module uses the compressed data obtained by the former module to detect a target within an array of distractors. The system is presented with line-orientation displays as described above.
Figure 1: General architecture of the model: the display feeds an input layer (12800 units) of the PCA data-compression module, whose outputs feed the target-detector (TD) module with an intermediate layer (12 units) and an output layer (1 unit, coding target = 1 / no-target = -1).

For the PCA module we use the neural network proposed by Sanger, with the connections' values updated in accordance with his Generalized Hebbian Algorithm (GHA) [8]. The outputs of the trained system are the projections of the display-vectors along the first few principal axes, ordered with respect to their eigenvalue magnitudes. Compressing the data is achieved by choosing outputs from the first few neurons (maximal variance and minimal information loss). Target detection in our system is performed by a feed-forward (FF) 3-layered network, trained via a standard back-propagation algorithm in a supervised-learning manner. The input layer of the FF network is composed of the first eight output neurons of the PCA module. The transfer function used in the intermediate and output layers is the hyperbolic tangent function.

3 Results

3.1 Target Detection

The performance of the system was examined in two simulation experiments. In the first, the PCA module was trained only with "parallel" task displays, and in the second, only with "serial" task displays. There is an inherent difference in the ability of the model to detect targets in parallel versus serial displays. In parallel task conditions (A, B, G) the target-detector module learns the task after a comparatively small number (800 to 2000) of epochs, reaching a performance level of almost 100%. However, the target-detector module is not capable of learning to detect a target in serial displays (C, E conditions).
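Sanger's GHA update can be sketched in a few lines. This is a minimal illustration, not the authors' exact module (which has 12800 inputs); the dimensions, learning rate, and toy data here are ours:

```python
import numpy as np

def gha_step(W, x, lr=0.01):
    """One Generalized Hebbian Algorithm update (Sanger's rule):
    a Hebbian term minus a lower-triangular decorrelation term.
    Rows of W converge to the leading principal axes of the inputs."""
    y = W @ x
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(0)
# Toy inputs with dominant variance along the first coordinate.
X = rng.normal(size=(2000, 3)) * np.array([3.0, 1.0, 0.2])
W = rng.normal(scale=0.1, size=(2, 3))   # 2 output neurons, 3 inputs
for x in X:
    W = gha_step(W, x)
# After training, the first row aligns with the highest-variance axis.
print(np.round(np.abs(W[0]) / np.linalg.norm(W[0]), 2))
```

The lower-triangular term is what orders the outputs by eigenvalue magnitude: each neuron learns the principal axis of the data after the contributions of the neurons above it have been subtracted out.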
Interestingly, these results hold (1) whether the preceding PCA module was trained to perform data compression using parallel-task displays or serial ones, (2) whether the target detector was a linear simple perceptron or the more powerful, non-linear network depicted in Figure 1, and (3) whether the full set of 144 principal axes (with non-zero eigenvalues) was used.

3.2 Information Span

To analyze the differences between parallel and serial tasks we examined the eigenvalues obtained from the PCA of the training-set displays. The eigenvalues of condition B (parallel-task) displays in 4 and 12 set-sizes and of condition C (serial-task) displays are presented in Figure 2. Each training set contains a mixture of target and non-target displays.

Figure 2: Eigenvalue spectrum of displays with different set-sizes (4 and 12 items), for (a) parallel and (b) serial tasks.

Due to the sparseness of the displays (a few black lines on a white background), it takes only 31 principal axes to describe the parallel training-set in full (see Fig. 2a; note that the remaining axes have zero eigenvalues, indicating that they contain no additional information), and 144 axes for the serial set (only the first 50 axes are shown in Fig. 2b). As evident, the eigenvalue distributions of the two display types are fundamentally different: in the parallel task, most of the eigenvalue "mass" is concentrated in the first few (15) principal axes, testifying that indeed the dimension of the parallel displays' space is quite confined. But for the serial task, the eigenvalues are distributed almost uniformly over 144 axes. This inherent difference is independent of set-size: 4- and 12-item displays have practically the same eigenvalue spectra.
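The concentration of eigenvalue mass can be quantified directly. A sketch with synthetic stand-ins (a low-rank data set for the parallel-like case and white noise for the serial-like case; the data is ours, not Wolfe-style displays):

```python
import numpy as np

def spectrum_mass(data, k):
    """Fraction of the total eigenvalue mass of the data covariance
    carried by the first k principal axes. Mass concentrated in the
    first few axes means the data admits heavy compression."""
    X = data - data.mean(axis=0)
    vals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending
    return vals[:k].sum() / vals.sum()

rng = np.random.default_rng(1)
# "Parallel-like": samples confined to a 2-dimensional subspace of R^20.
low_rank = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 20))
# "Serial-like": variance spread over all 20 axes.
flat = rng.normal(size=(200, 20))
print(spectrum_mass(low_rank, 2))  # close to 1
print(spectrum_mass(flat, 2))      # far below 1
```

By this measure, a parallel-type training set (31 non-zero axes, mass in the first ~15) scores high for small k, while a serial-type set with a nearly uniform spectrum over 144 axes does not.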
3.3 Variance Analysis

The target-detector inputs are the projections of the display-vectors along the first few principal axes. Thus, some insight into the source of the difference between parallel and serial tasks can be gained by performing a variance analysis on these projections. The five different task conditions were analyzed separately, taking a group of 85 target displays and a group of 85 non-target displays for each set-size. Two types of variances were calculated for the projections on the 5th principal axis: the "within groups" variance, which is a measure of the statistical noise within each group of 85 displays, and the "between groups" variance, which measures the separation between target and non-target groups of displays for each set-size. These variances were averaged for each task (condition) over all set-sizes. The resulting ratios Q of within-groups to between-groups standard deviations are: Q_A = 0.0259, Q_B = 0.0587 and Q_G = 0.0114 for parallel displays (A, B, G), and Q_E = 0.2125 and Q_C = 0.771 for serial ones (E, C). As evident, for parallel-task displays the Q values are smaller by an order of magnitude compared with the serial displays, indicating a better separation between target and non-target displays in parallel tasks. Moreover, using Q as a criterion for the parallel/serial distinction, one can predict that displays with Q << 1 will be processed in parallel, and serially otherwise, in accordance with the experimental response-time (RT) slopes measured by Wolfe et al. [7]. These differences are further demonstrated in Figure 3, depicting projections of display-vectors on the sub-space spanned by the 5th, 6th and 7th principal axes. Clearly, for the parallel task (condition B), the PCA representations of the target displays (plus signs) are separated from non-target representations (circles), while for serial displays (condition C) there is no such separation.
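The Q criterion is straightforward to compute. The paper does not spell out the exact averaging, so the definitions of the two standard deviations below are one plausible reading, and the sample projections are ours:

```python
import numpy as np

def q_ratio(target_proj, nontarget_proj):
    """Ratio of within-groups to between-groups standard deviation for
    the projections of target / non-target displays on one principal
    axis. Q << 1 predicts parallel search; Q near 1 or above, serial."""
    within = 0.5 * (np.std(target_proj) + np.std(nontarget_proj))
    between = np.std([np.mean(target_proj), np.mean(nontarget_proj)])
    return within / between

# Well-separated groups (a parallel-like axis) give a small Q.
target = np.array([1.0, 1.1, 0.9, 1.05])
nontarget = np.array([5.0, 5.1, 4.9, 4.95])
q = q_ratio(target, nontarget)
print(q < 0.1)  # True
```

When the two groups overlap, the between-groups term shrinks relative to the within-groups noise and Q grows toward (or past) 1, matching the serial-condition values reported above.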
It should be emphasized that there is no other principal axis along which such a separation is manifested for serial displays.

Figure 3: Projections of display-vectors on the sub-space spanned by the 5th, 6th and 7th principal axes. Plus signs and circles denote target and non-target display-vectors respectively, (a) for a parallel task (condition B), and (b) for a serial task (condition C). Set-size is 8 items.

While Treisman and her co-workers view the distinction between parallel and serial tasks as a fundamental one, Duncan and Humphreys [5] claim that there is no sharp distinction between them, and that search efficiency varies continuously across tasks and conditions. The determining factors, according to Duncan and Humphreys, are the similarities between the target and the non-targets (T-N similarity) and the similarities between the non-targets themselves (N-N similarity). Displays with a homogeneous background (high N-N similarity) and a target which is significantly different from the distractors (low T-N similarity) will exhibit parallel, low RT slopes, and vice versa. This claim was illustrated by them using a qualitative "search surface" description, as shown in Figure 4a. Based on results from our variance analysis, we can now examine this claim quantitatively: we have constructed a "search surface" using actual numerical data of RT slopes from Wolfe's experiments, replacing the N-N similarity axis by its mathematical manifestation, the within-groups standard deviation, and the N-T similarity by the between-groups standard deviation.¹
The resulting surface (Figure 4b) is qualitatively similar to Duncan and Humphreys's. This interesting result testifies that the PCA representation succeeds in producing a viable realization of such intuitive terms as input similarity, and is compatible with the way we perceive the world in visual-search tasks.

Figure 4: RT rates versus: (a) input similarities (the search surface, reprinted from Duncan and Humphreys, 1989); (b) standard deviations (within and between) of the PCA variance analysis. The asterisks denote Wolfe's experimental data.

4 Summary

In this work we present a two-component neural network model of pre-attentional visual processing. The model has been applied to the visual-search paradigm performed by Wolfe et al. Our main finding is that when global-feature compression is applied to visual displays, there is an inherent difference between the representations of serial- and parallel-task displays: the neural network studied in this paper has succeeded in detecting a target among distractors only for displays that were experimentally found to be processed in parallel. Based on the outcome of the variance analysis performed on the PCA representations of the visual displays, we present a quantitative criterion enabling one to distinguish between serial and parallel displays.

¹ In general, each principal axis contains information from different features, which may mask the information concerning the existence of a target. Hence, the first principal axis may not be the best choice for a discrimination task. In our simulations, the 5th axis, for example, was primarily dedicated to target information, and was hence used for the variance analysis (obviously, the neural network uses information from all the first eight principal axes).
Furthermore, the resulting 'search surface' generated by the PCA components is in close correspondence with the metaphorical description of Duncan and Humphreys. The network demonstrates an interesting generalization ability: naturally, it can learn to detect a target in parallel displays from examples of such displays. However, it can also learn to perform this task from examples of serial displays only! On the other hand, we find that it is impossible to learn serial tasks, irrespective of the combination of parallel and serial displays that are presented to the network during the training phase. This generalization ability is manifested not only during the learning phase, but also during the performance phase; displays belonging to the same task have a similar eigenvalue spectrum, irrespective of the actual set-size of the displays, and this result holds true for parallel as well as for serial displays. The role of PCA in perception was previously investigated by Cottrell [9], who designed a neural network performing tasks such as face identification and gender discrimination. One might argue that PCA, being a global component analysis, is not compatible with the existence of local feature detectors (e.g. orientation detectors) in the cortex. Our work is in line with recent proposals [10] that there exist two pathways for sensory input processing: a fast sub-cortical pathway that contains limited information, and a slow cortical pathway which is capable of providing richer representations of the stimuli. Given this assumption, this paper has presented the first neural realization of the processing that may underlie the classical metaphorical explanations involved in visual search.

References

[1] J. K. Tsotsos. Analyzing vision at the complexity level. Behavioral and Brain Sciences, 13:423-469, 1990.
[2] M. I. Posner, C. R. Snyder, and B. J. Davidson. Attention and the detection of signals. Journal of Experimental Psychology: General, 109:160-174, 1980.
[3] Y. Tsal.
Movement of attention across the visual field. Journal of Experimental Psychology: Human Perception and Performance, 9:523-530, 1983.
[4] A. Treisman and G. Gelade. A feature integration theory of attention. Cognitive Psychology, 12:97-136, 1980.
[5] J. Duncan and G. Humphreys. Visual search and stimulus similarity. Psychological Review, 96:433-458, 1989.
[6] A. Treisman and S. Gormican. Feature analysis in early vision: Evidence from search asymmetries. Psychological Review, 95:15-48, 1988.
[7] J. M. Wolfe, S. R. Friedman-Hill, M. I. Stewart, and K. M. O'Connell. The role of categorization in visual search for orientation. Journal of Experimental Psychology: Human Perception and Performance, 18:34-49, 1992.
[8] T. D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 2:459-473, 1989.
[9] G. W. Cottrell. Extracting features from faces using compression networks: Face, identity, emotion and gender recognition using holons. Proceedings of the 1990 Connectionist Models Summer School, pages 328-337, 1990.
[10] J. L. Armony, D. Servan-Schreiber, J. D. Cohen, and J. E. LeDoux. Computational modeling of emotion: exploration through the anatomy and physiology of fear conditioning. Trends in Cognitive Sciences, 1(1):28-34, 1997.

Data-Dependent Structural Risk Minimisation for Perceptron Decision Trees

John Shawe-Taylor
Dept of Computer Science, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK
Email: jst@dcs.rhbnc.ac.uk

Nello Cristianini
Dept of Engineering Mathematics, University of Bristol, Bristol BS8 1TR, UK
Email: nello.cristianini@bristol.ac.uk

Abstract

Perceptron Decision Trees (also known as Linear Machine DTs, etc.) are analysed in order that data-dependent Structural Risk Minimisation can be applied. Data-dependent analysis is performed which indicates that choosing the maximal margin hyperplanes at the decision nodes will improve the generalization.
The analysis uses a novel technique to bound the generalization error in terms of the margins at individual nodes. Experiments performed on real data sets confirm the validity of the approach.

1 Introduction

Neural network researchers have traditionally tackled classification problems by assembling perceptron or sigmoid nodes into feedforward neural networks. In this paper we consider a less common approach where the perceptrons are used as decision nodes in a decision tree structure. The approach has the advantage that more efficient heuristic algorithms exist for these structures, while the advantages of inherent parallelism are if anything greater, as all the perceptrons can be evaluated in parallel, with the path through the tree determined in a very fast post-processing phase. Classical Decision Trees (DTs), like the ones produced by popular packages such as CART [5] or C4.5 [9], partition the input space by means of axis-parallel hyperplanes (one at each internal node), hence inducing categories which are represented by (axis-parallel) hyperrectangles in such a space. A natural extension of that hypothesis space is obtained by associating to each internal node hyperplanes in general position, hence partitioning the input space by means of polygonal (polyhedral) categories. This approach has been pursued by many researchers, often with different motivations, and hence the resulting hypothesis space has been given a number of different names: multivariate DTs [6], oblique DTs [8], DTs using linear combinations of the attributes [5], Linear Machine DTs, Neural Decision Trees [12], Perceptron Trees [13], etc. We will call them Perceptron Decision Trees (PDTs), as they can be regarded as binary trees having a simple perceptron associated to each decision node.
Different algorithms for top-down induction of PDTs from data have been proposed, based on different principles [10], [5], [8]. Experimental study of learning by means of PDTs indicates that their performance is sometimes better than that of traditional decision trees in terms of generalization error, and usually much better in terms of tree size [8], [6], but on some data sets PDTs can be outperformed by normal DTs. We investigate an alternative strategy for improving the generalization of these structures, namely placing maximal margin hyperplanes at the decision nodes. By use of a novel analysis we are able to demonstrate that improved generalization bounds can be obtained for this approach. Experiments confirm that such a method delivers more accurate trees in all tested databases.

2 Generalized Decision Trees

Definition 2.1 (Generalized Decision Trees, GDT). Given a space X and a set of boolean functions F = {f : X → {0, 1}}, the class GDT(F) of Generalized Decision Trees over F are functions which can be implemented using a binary tree where each internal node is labeled with an element of F, and each leaf is labeled with either 1 or 0. To evaluate a particular tree T on input x ∈ X, all the boolean functions associated to the nodes are assigned the same argument x ∈ X, which is the argument of T(x). The values assumed by them determine a unique path from the root to a leaf: at each internal node the left (respectively right) edge to a child is taken if the output of the function associated to that internal node is 0 (respectively 1). The value of T at an x ∈ X is the value associated to the leaf reached. We say that input x reaches a node of the tree if that node is on the evaluation path for x. In the following, the nodes are the internal nodes of the binary tree, and the leaves are its external ones. Examples.
• Given X = {0, 1}^n, a Boolean Decision Tree (BDT) is a GDT over F_BDT = {f_i : f_i(x) = x_i, for all x ∈ X}.
• Given X = R^n, a C4.5-like Decision Tree (CDT) is a GDT over F_CDT = {f_{i,θ} : f_{i,θ}(x) = 1 ⇔ x_i > θ}. This kind of decision tree, defined on a continuous space, is the output of common algorithms like C4.5 and CART, and we will call them - for short - CDTs.
• Given X = R^n, a Perceptron Decision Tree (PDT) is a GDT over F_PDT = {f_w : f_w(x) = 1 ⇔ w^T x > 0, w ∈ R^{n+1}}, where we have assumed that the inputs have been augmented with a coordinate of constant value, hence implementing a thresholded perceptron.

3 Data-dependent SRM

We begin with the definition of the fat-shattering dimension, which was first introduced in [7], and has been used for several problems in learning since [1, 4, 2, 3].

Definition 3.1 Let F be a set of real-valued functions. We say that a set of points X is γ-shattered by F relative to r = (r_x)_{x∈X} if there are real numbers r_x indexed by x ∈ X such that for all binary vectors b indexed by X, there is a function f_b ∈ F satisfying

f_b(x) ≥ r_x + γ if b_x = 1, and f_b(x) ≤ r_x − γ otherwise.

The fat-shattering dimension fat_F of the set F is a function from the positive real numbers to the integers which maps a value γ to the size of the largest γ-shattered set, if this is finite, or infinity otherwise.

As an example which will be relevant to the subsequent analysis, consider the class

F_lin = {x → ⟨w, x⟩ + θ : ‖w‖ = 1}.

We quote the following result from [11].

Corollary 3.2 [11] Let F_lin be restricted to points in a ball of n dimensions of radius R about the origin and with thresholds |θ| ≤ R. Then fat_{F_lin}(γ) ≤ min{9R²/γ², n + 1} + 1.

The following theorem bounds the generalisation of a classifier in terms of the fat-shattering dimension rather than the usual Vapnik-Chervonenkis or pseudo dimension. Let T_θ denote the threshold function at θ: T_θ : R → {0, 1}, T_θ(α) = 1 iff α > θ. For a class of functions F, T_θ(F) = {T_θ(f) : f ∈ F}.
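As a concrete aside, the evaluation rule of Definition 2.1 and the PDT node class can be sketched in a few lines. This is an illustrative sketch, not code from the paper; the `Node`/`Leaf` classes, the helper names, and the example tree are all hypothetical.

```python
import numpy as np

class Node:
    """Internal node of a GDT: a boolean test plus left/right children."""
    def __init__(self, test, left, right):
        self.test, self.left, self.right = test, left, right

class Leaf:
    def __init__(self, label):
        self.label = label

def evaluate(tree, x):
    """Definition 2.1: take the left edge on output 0, the right edge on
    output 1, and return the label of the leaf reached."""
    while isinstance(tree, Node):
        tree = tree.right if tree.test(x) else tree.left
    return tree.label

def pdt_node(w):
    """A PDT test: augment x with a constant coordinate and threshold w^T x."""
    return lambda x: int(np.dot(w, np.append(x, 1.0)) > 0)

# A two-node PDT over R^2: the root tests x0 + x1 > 1; its right child
# tests x0 > x1.
tree = Node(pdt_node(np.array([1.0, 1.0, -1.0])),
            Leaf(0),
            Node(pdt_node(np.array([1.0, -1.0, 0.0])), Leaf(0), Leaf(1)))
```

Each input thus reaches exactly one leaf, and the set of inputs reaching a given node is the region of the (polyhedral) partition used in the margin analysis below.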
Theorem 3.3 [11] Consider a real-valued function class F having fat-shattering function bounded above by the function afat : R → N which is continuous from the right. Fix θ ∈ R. If a learner correctly classifies m independently generated examples with h = T_θ(f) ∈ T_θ(F) such that er_x(h) = 0 and γ = min_i |f(x_i) − θ|, then with confidence 1 − δ the expected error of h is bounded from above by

ε(m, k, δ) = (2/m) (k log(8em/k) log(32m) + log(8m/δ)),

where k = afat(γ/8).

The importance of this theorem is that it can be used to explain how a classifier can give better generalisation than would be predicted by a classical analysis of its VC dimension. Essentially, expanding the margin performs an automatic capacity control for function classes with small fat-shattering dimensions. The theorem shows that when a large margin is achieved it is as if we were working in a lower VC class. We should stress that in general the bounds obtained should be better for cases where a large margin is observed, but that a priori there is no guarantee that such a margin will occur. Therefore a priori only the classical VC bound can be used. In view of corresponding lower bounds on the generalisation error in terms of the VC dimension, the a posteriori bounds depend on a favourable probability distribution making the actual learning task easier. Hence, the result will only be useful if the distribution is favourable, or at least not adversarial. In this sense the result is a distribution-dependent result, despite not being distribution-dependent in the traditional sense that assumptions about the distribution have had to be made in its derivation. The benign behaviour of the distribution is automatically estimated in the learning process. In order to perform a similar analysis for perceptron decision trees we will consider the set of margins obtained at each of the nodes, bounding the generalization as a function of these values.
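The capacity-control effect can be checked numerically. A sketch under stated assumptions: base-2 logarithms, the bound read as ε(m, k, δ) = (2/m)(k log(8em/k) log(32m) + log(8m/δ)) with k = afat(γ/8), and illustrative values of m, R, n, δ that are not from the paper.

```python
import math

def fat_bound(R, gamma, n):
    """Corollary 3.2: fat_{F_lin}(gamma) <= min{9R^2/gamma^2, n+1} + 1."""
    return min(9 * R**2 / gamma**2, n + 1) + 1

def generalisation_bound(m, k, delta):
    """Theorem 3.3 bound (base-2 logs assumed)."""
    return (2.0 / m) * (k * math.log2(8 * math.e * m / k) * math.log2(32 * m)
                        + math.log2(8 * m / delta))

m, n, delta = 1_000_000, 10**6, 0.01
k_wide = fat_bound(1.0, 1.0 / 8, n)     # observed margin gamma = 1.0
k_narrow = fat_bound(1.0, 0.1 / 8, n)   # observed margin gamma = 0.1
# A 10x larger margin shrinks k by 100x; with these values the wide-margin
# bound is non-trivial while the narrow-margin one is vacuous.
```

This is exactly the "as if we were working in a lower VC class" phenomenon: the dimension n never enters once the margin term dominates the minimum in Corollary 3.2.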
4 Generalisation Analysis of the Tree Class

It turns out that bounding the fat-shattering dimension of PDTs viewed as real function classifiers is difficult. We will therefore do a direct generalization analysis, mimicking the proof of Theorem 3.3 but taking into account the margins at each of the decision nodes in the tree.

Definition 4.1 Let (X, d) be a (pseudo-)metric space, let A be a subset of X and ε > 0. A set B ⊆ X is an ε-cover for A if, for every a ∈ A, there exists b ∈ B such that d(a, b) < ε. The ε-covering number of A, N_d(ε, A), is the minimal cardinality of an ε-cover for A (if there is no such finite cover then it is defined to be ∞).

We write N(ε, F, x) for the ε-covering number of F with respect to the l_∞ pseudo-metric measuring the maximum discrepancy on the sample x. These numbers are bounded in the following lemma.

Lemma 4.2 (Alon et al. [1]) Let F be a class of functions X → [0, 1] and P a distribution over X. Choose 0 < ε < 1 and let d = fat_F(ε/4). Then

E(N(ε, F, x)) ≤ 2 (4m/ε²)^{d log(2em/(dε))},

where the expectation E is taken w.r.t. a sample x ∈ X^m drawn according to P^m.

Corollary 4.3 [11] Let F be a class of functions X → [a, b] and P a distribution over X. Choose 0 < ε < 1 and let d = fat_F(ε/4). Then

E(N(ε, F, x)) ≤ 2 (4m(b − a)²/ε²)^{d log(2em(b−a)/(dε))},

where the expectation E is over samples x ∈ X^m drawn according to P^m.

We are now in a position to tackle the main lemma, which bounds the probability over a double sample that the first half has zero error and the second has error greater than an appropriate ε. Here, error is interpreted as being differently classified at the output of the tree. In order to simplify the notation in the following lemma we assume that the decision tree has K nodes. We also denote fat_{F_lin}(γ) by fat(γ) to simplify the notation.

Lemma 4.4 Let T be a perceptron decision tree with K decision nodes with margins γ_1, γ_2, ..., γ_K at the decision nodes.
If it has correctly classified m labelled examples generated independently according to the unknown (but fixed) distribution P, then we can bound the following probability to be less than δ:

P^{2m}{xy : there exists a tree T : T correctly classifies x, fraction of y misclassified > ε(m, K, δ)} < δ,

where ε(m, K, δ) = (2/m)(D log(4m) + log(2^K/δ)), D = Σ_{i=1}^K k_i log(4em/k_i) and k_i = fat(γ_i/8).

Proof: Using the standard permutation argument, we may fix a sequence xy and bound the probability under the uniform distribution on swapping permutations that the sequence satisfies the condition stated. We consider generating minimal γ_k/2-covers B_{xy}^{γ_k} for each value of k, where γ_k = min{γ' : fat(γ'/8) ≤ k}. Suppose that for node i of the tree the margin γ_i of the hyperplane w_i satisfies fat(γ_i/8) = k_i. We can therefore find f_i ∈ B_{xy}^{γ_{k_i}} whose output values are within γ_i/2 of w_i. We now consider the tree T' obtained by replacing the node perceptrons w_i of T with the corresponding f_i. This tree performs the same classification function on the first half of the sample, and the margin remains larger than γ_i − γ_{k_i}/2 ≥ γ_{k_i}/2. If a point in the second half of the sample is incorrectly classified by T, it will either still be incorrectly classified by the adapted tree T', or will at one of the decision nodes i in T' be closer to the decision boundary than γ_{k_i}/2. The point is thus distinguishable from left-hand-side points, which are both correctly classified and have margin greater than γ_{k_i}/2 at node i. Hence, that point must be kept on the right hand side in order for the condition to be satisfied. Hence, the fraction of permutations that can be allowed for one choice of the functions from the covers is 2^{−εm}. We must take the union bound over all choices of the functions from the covers. Using the techniques of [11], the number of these choices is bounded by Corollary 4.3 as follows:

Π_{i=1}^K 2(8m)^{k_i log(4em/k_i)} = 2^K (8m)^D,

where D = Σ_{i=1}^K k_i log(4em/k_i).
The value of ε in the lemma statement therefore ensures that this union bound is less than δ. □

Using the standard lemma due to Vapnik [14, page 168] to bound the error probabilities in terms of the discrepancy on a double sample, combined with Lemma 4.4, gives the following result.

Theorem 4.5 Suppose we are able to classify an m-sample of labelled examples using a perceptron decision tree with K nodes, obtaining margins γ_i at node i. Then we can bound the generalisation error with probability greater than 1 − δ to be less than

(1/m) (D log(4m) + log((8m)^K (2K)^K / ((K + 1)δ))),

where D = Σ_{i=1}^K k_i log(4em/k_i) and k_i = fat(γ_i/8).

Proof: We must bound the probabilities over different architectures of trees and different margins. We simply have to choose the values of ε to ensure that the individual δ's are sufficiently small that the total over all possible choices is less than δ. The details are omitted in this abstract. □

5 Experiments

The theoretical results obtained in the previous section imply that an algorithm which produces large margin splits should have better generalization, since increasing the margins at the internal nodes has the effect of decreasing the bound on the test error. In order to test this strategy, we have performed the following experiment, divided in two parts: first run a standard perceptron decision tree algorithm, and then for each decision node generate a maximal margin hyperplane implementing the same dichotomy in place of the decision boundary generated by the algorithm.

Input: Random m-sample x with corresponding classification b.
Algorithm: Find a perceptron decision tree T which correctly classifies the sample using a standard algorithm; let K be the number of decision nodes of T. From tree T create T' by executing the following loop: for each decision node i, replace the weight vector w_i by the vector w'_i which realises the maximal margin hyperplane agreeing with w_i on the set of inputs reaching node i; let the margin of w'_i on the inputs reaching node i be γ_i.
Output: Classifier T', with bound on the generalisation error in terms of the number of decision nodes K and D = Σ_{i=1}^K k_i log(4em/k_i), where k_i = fat(γ_i/8).

Note that the classifications of T and T' agree on the sample, and hence that T' is consistent with the sample. As a PDT learning algorithm we have used OC1 [8], created by Murthy, Kasif and Salzberg and freely available over the internet. It is a randomized algorithm which performs simulated annealing for learning the perceptrons. The details about the randomization, the pruning, and the splitting criteria can be found in [8]. The data we have used for the test are 4 of the 5 sets used in the original OC1 paper, which are publicly available in the UCI data repository [16]. The results we have obtained on these data are compatible with the ones reported in the original OC1 paper, the differences being due to different divisions between training and testing sets and their sizes; the absence in our experiments of cross-validation and other techniques to estimate the predictive accuracy of the PDT; and the inherently randomized nature of the algorithm. The second stage of the experiment involved finding - for each node - the hyperplane which performs the same split as performed by the OC1 tree, but with the maximal margin. This can be done by considering the subsample reaching each node as perfectly divided in two parts, and feeding the data, accordingly relabelled, to an algorithm which finds the optimal split in the linearly separable case.
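The per-node second stage can be sketched as follows. This is not the solver used in the paper: as a stand-in for an exact separable-case maximal margin algorithm, a Pegasos-style subgradient descent on the SVM primal is used here, and all function and variable names are hypothetical.

```python
import numpy as np

def max_margin_split(X, y, lam=0.01, T=50_000):
    """Approximate the maximal-margin hyperplane for a linearly separable
    dichotomy y in {-1,+1} by subgradient descent on the SVM primal
    (a small lam pushes the averaged solution toward the hard margin)."""
    Xa = np.hstack([X, np.ones((len(X), 1))])   # augment with constant coord
    w = np.zeros(Xa.shape[1])
    w_bar = np.zeros_like(w)                    # running average of iterates
    for t in range(1, T + 1):
        viol = y * (Xa @ w) < 1                 # margin violations
        grad = lam * w - (y[viol, None] * Xa[viol]).sum(axis=0) / len(X)
        w -= grad / (lam * t)                   # step size 1/(lam * t)
        w_bar += (w - w_bar) / t
    return w_bar

def geometric_margin(w, X, y):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.min(y * (Xa @ w)) / np.linalg.norm(w[:-1])

# The dichotomy induced at one decision node (separable by the line x0 = 1):
X = np.array([[0., 0.], [0., 1.], [2., 0.], [2., 1.]])
y = np.array([-1., -1., 1., 1.])
w_mm = max_margin_split(X, y)
```

Replacing each node's weight vector with such a hyperplane leaves the tree's behaviour on the training sample unchanged while enlarging every γ_i, which is exactly what tightens the bound of Theorem 4.5.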
The maximal margin hyperplanes are then placed in the decision nodes and the new tree is tested on the same testing set. The data sets we have used are: Wisconsin Breast Cancer, Pima Indians Diabetes, Boston Housing (transformed into a classification problem by thresholding the price at $21,000) and the classical Iris data studied by Fisher (more information about the databases and their authors is in [8]). All the details about sample sizes, number of attributes and results (training and testing accuracy, tree size) are summarised in Table 1. We were not particularly interested in achieving a high testing accuracy, but rather in observing whether improved performance can be obtained by increasing the margin. For this reason we did not try to optimize the performance of the original classifier by using cross-validation, or a convenient training/testing set ratio. The relevant quantity in this experiment is the difference in testing error between a PDT with arbitrary margins and the same tree with optimized margins. This quantity has turned out to be always positive, and to range from 1.7 to 2.8 percent of gain, on test errors which were already very low.

Table 1:
       train   OC1 test  FAT test  #trs  #ts  attrib.  classes  nodes
CANC   96.53   93.52     95.37     249   108  9        2        1
IRIS   96.67   96.67     98.33     90    60   4        3        2
DIAB   89.00   70.48     72.45     209   559  8        2        4
HOUS   95.90   81.43     84.29     306   140  13       2        7

References

[1] Noga Alon, Shai Ben-David, Nicolo Cesa-Bianchi and David Haussler, "Scale-sensitive Dimensions, Uniform Convergence, and Learnability," in Proceedings of the Conference on Foundations of Computer Science (FOCS), (1993). Also to appear in Journal of the ACM.
[2] Martin Anthony and Peter Bartlett, "Function learning from interpolation," Technical Report, (1994). (An extended abstract appeared in Computational Learning Theory, Proceedings 2nd European Conference, EuroCOLT'95, pages 211-221, ed.
Paul Vitanyi, (Lecture Notes in Artificial Intelligence, 904) Springer-Verlag, Berlin, 1995).
[3] Peter L. Bartlett and Philip M. Long, "Prediction, Learning, Uniform Convergence, and Scale-Sensitive Dimensions," Preprint, Department of Systems Engineering, Australian National University, November 1995.
[4] Peter L. Bartlett, Philip M. Long, and Robert C. Williamson, "Fat-shattering and the learnability of Real-valued Functions," Journal of Computer and System Sciences, 52(3), 434-452, (1996).
[5] Breiman L., Friedman J.H., Olshen R.A., Stone C.J., "Classification and Regression Trees," Wadsworth International Group, Belmont, CA, 1984.
[6] Brodley C.E., Utgoff P.E., "Multivariate Decision Trees," Machine Learning 19, pp. 45-77, 1995.
[7] Michael J. Kearns and Robert E. Schapire, "Efficient Distribution-free Learning of Probabilistic Concepts," pages 382-391 in Proceedings of the 31st Symposium on the Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA, 1990.
[8] Murthy S.K., Kasif S., Salzberg S., "A System for Induction of Oblique Decision Trees," Journal of Artificial Intelligence Research, 2 (1994), pp. 1-32.
[9] Quinlan J.R., "C4.5: Programs for Machine Learning," Morgan Kaufmann, 1993.
[10] Sankar A., Mammone R.J., "Growing and Pruning Neural Tree Networks," IEEE Transactions on Computers, 42:291-299, 1993.
[11] John Shawe-Taylor, Peter L. Bartlett, Robert C. Williamson, Martin Anthony, "Structural Risk Minimization over Data-Dependent Hierarchies," NeuroCOLT Technical Report NC-TR-96-053, 1996. (ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports).
[12] J.A. Sirat and J.-P. Nadal, "Neural trees: a new tool for classification," Network, 1, pp. 423-438, 1990.
[13] Utgoff P.E., "Perceptron Trees: a Case Study in Hybrid Concept Representations," Connection Science 1 (1989), pp. 377-391.
[14] Vladimir N. Vapnik, Estimation of Dependences Based on Empirical Data, Springer-Verlag, New York, 1982.
[15] Vladimir N.
Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[16] University of California, Irvine Machine Learning Repository, http://www.ics.uci.edu/~mlearn/MLRepository.html
1997
Using Expectation to Guide Processing: A Study of Three Real-World Applications

Shumeet Baluja
Justsystem Pittsburgh Research Center & School of Computer Science, Carnegie Mellon University
baluja@cs.cmu.edu

Abstract

In many real world tasks, only a small fraction of the available inputs are important at any particular time. This paper presents a method for ascertaining the relevance of inputs by exploiting temporal coherence and predictability. The method proposed in this paper dynamically allocates relevance to inputs by using expectations of their future values. As a model of the task is learned, the model is simultaneously extended to create task-specific predictions of the future values of inputs. Inputs which are either not relevant, and therefore not accounted for in the model, or those which contain noise, will not be predicted accurately. These inputs can be de-emphasized, and, in turn, a new, improved, model of the task created. The techniques presented in this paper have yielded significant improvements for the vision-based autonomous control of a land vehicle, vision-based hand tracking in cluttered scenes, and the detection of faults in the etching of semiconductor wafers.

1 Introduction

In many real-world tasks, the extraneous information in the input can be easily confused with the important features, making the specific task much more difficult. One of the methods by which humans function in the presence of many distracting features is to selectively attend to only portions of the input signal. A means by which humans select where to focus their attention is through the use of expectations. Once the important features in the current input are found, an expectation can be formed of what the important features in the next inputs will be, as well as where they will be. The importance of features must be determined in the context of a specific task; different tasks can require the processing of different subsets of the features in the same input.
There are two distinct uses of expectations. Consider Carnegie Mellon's Navlab autonomous navigation system. The road-following module [Pomerleau, 1993] is separate from the obstacle avoidance modules [Thorpe, 1991]. One role of expectation, in which unexpected features are de-emphasized, is appropriate for the road-following module, in which the features to be tracked, such as lane-markings, appear in predictable locations. This use of expectation removes distractions from the input scene. The second role of expectation, to emphasize unexpected features, is appropriate for the obstacle avoidance modules. This use of expectation emphasizes unanticipated features of the input scene.

2 Architectures for Attention

In many studies of attention, saliency maps (maps which indicate input relevance) have been constructed in a bottom-up manner. For example, in [Koch & Ullman, 1985], a saliency map, which is not task-specific, is created by emphasizing inputs which are different from their neighbors. An alternate approach, presented in [Clark & Ferrier, 1992], places multiple different, weighted, task-specific feature detectors around the input image. The regions of the image which contain high weighted sums of the detected features are the portions of the scene which are focused upon. Top-down knowledge of which features are used and the weightings of the features is needed to make the procedure task-specific. In contrast, the goal of this study is to learn which task-specific features are relevant without requiring top-down knowledge. In this study, we use a method based on Input Reconstruction Reliability Estimation (IRRE) [Pomerleau, 1993] to determine which portions of the input are important for the task. IRRE uses the hidden units of a neural network (NN) to perform the desired task and to reconstruct the inputs.
In its original use, IRRE estimated how confident a network's outputs were by measuring the similarity between the reconstructed and current inputs. Figure 1 (Left) provides a schematic of IRRE. Note that the weights between the input and hidden layers are trained to reduce both task and reconstruction error. Because of this, a potential drawback of IRRE is the use of the hidden layer to encode all of the features in the image, rather than only the ones required for solving the particular task [Pomerleau, 1993]. This can be addressed by noting the following: if a strictly layered (connections are only between adjacent layers) feed-forward neural network can solve a given task, the activations of the hidden layer contain, in some form, the important information for this task from the input layer. One method of determining what is contained in the hidden layer is to attempt to reconstruct the original input image based solely upon the representation developed in the hidden layer. Like IRRE, the input image is reconstructed from the activations of the units in the hidden layer. Unlike IRRE, the hidden units are not trained to reduce reconstruction error; they are only trained to solve the particular task. The network's allocation of its limited representation capacity at the hidden layer is an indicator of what it deems relevant to the task. Information which is not relevant to the task will not be encoded in the hidden units. Since the reconstruction of the inputs is based solely on the hidden units' activations, and the irrelevant portions of the input are not encoded in the hidden units' activations, the inputs which are irrelevant to the task cannot be reconstructed. See Figure 1 (Right). By measuring which inputs can be reconstructed accurately, we can ascertain which inputs the hidden units have encoded to solve the task.
A synthetic task which demonstrates this idea is described here. Imagine being given a 10x10 input retina such as shown in Figure 2a&b. The task is to categorize many such examples into one of four classes. Because of the random noise in the examples, the simple underlying process, of a cross being present in one of four locations (see Figure 2c), is not easily discernible, although it is the feature on which the classifications are to be based. Given enough examples, the NN will be able to solve this task. However, even after the model of the task is learned, it is difficult to ascertain to which inputs the network is attending. To determine this, we can freeze the weights in the trained network and connect an input-reconstruction layer to the hidden units, as shown in Figure 1 (Right). After training these connections, by measuring where the reconstruction matches the actual input, we can determine which inputs the network has encoded in its hidden units, and is therefore attending. See Figure 2d.

Figure 1: (Left) IRRE: weights trained to reduce both task error and reconstruction error. (Right) Modified IRRE: weights into the hidden layer trained to reduce task error only; weights into the reconstruction layer trained to reduce reconstruction error only.

Figure 2: (A & B): Samples of training data (cross appears in position 4 & 1 respectively). Note the large amounts of noise. (C): The underlying process puts a cross in one of these four locations. (D): The black crosses are where the reconstruction matched the inputs; these correspond exactly to the underlying process.

IRRE and this modified IRRE are related to auto-encoding networks [Cottrell, 1990] and principal components analysis (PCA). The difference between auto-encoding networks and those employed in this study is that the hidden layers of the networks used here were trained to perform well on the specific task, not to reproduce the inputs accurately.
2.1 Creating Expectations

A notion of time is necessary in order to focus attention in future frames. Instead of reconstructing the current input, the network is trained to predict the next input; this corresponds to changing the subscript in the reconstruction layer of the network shown in Figure 1 (Right) from t to t+1. The prediction is trained in a supervised manner, by using the next set of inputs in the time sequence as the target outputs. The next inputs may contain noise or extraneous features. However, since the hidden units only encode information to solve the task, the network will be unable to construct the noise or extraneous features in its prediction. To this point, a method to create a task-specific expectation of what the next inputs will be has been described. As described in Section 1, there are two fundamentally different ways in which to interpret the difference between the expected next inputs and the actual next inputs. The first interpretation is that the difference between the expected and the actual inputs is a point of interest because it is a region which was not expected. This has applications in anomaly detection; it will be explored in Section 3.2. In the second interpretation, the difference between the expected and actual inputs is considered noise. Processing should be de-emphasized in the regions in which the difference is large. This makes the assumption that there is enough information in the previous inputs to specify what and where the important portions of the next image will be. As shown in the road-following and hand-tracking tasks, this method can remove spurious features and noise.

3 Real-World Applications

Three real-world tasks are discussed in this section. The first, vision-based road following, shows how the task-specific expectations developed in the previous section can be used to eliminate distractions from the input.
The second, detection of anomalies in the plasma-etch step of wafer fabrication, shows how expectations can be used to emphasize the unexpected features in the input. The third, visual hand-tracking, demonstrates how to incorporate a priori domain knowledge about expectations into the NN.

3.1 Application 1: Vision-Based Autonomous Road Following

In the domain of autonomous road following, the goal is to control a robot vehicle by analyzing the image of the road ahead. The direction of travel should be chosen based on the location of important features like lane markings and road edges. On highways and dirt roads, simple techniques, such as feed-forward NNs, have worked well for mapping road images to steering commands [Pomerleau, 1993]. However, on city streets, where there are distractions like old lane markings, pedestrians, and heavy traffic, these methods fail. The purpose of using attention in this domain is to eliminate features of the road which the NN may mistake for lane markings.

S. Baluja

Figure 3: (Top): Four samples of training images. The left-most shows the position of the lane marking, which was hand-marked. (E, F, G): In each triplet: Left: raw input image_t. Middle: the network's prediction of the inputs at time t; this prediction was made by a network with input of image_{t-1}. Right: a pixel-by-pixel filtered image (see text). This image is used as the input to the NN.

Approximately 1200 images were gathered from a camera mounted on the left side of the CMU Navlab 5 test vehicle, pointed downwards and slightly ahead of the vehicle. The car was driven through city and residential neighborhoods around Pittsburgh, PA. The images were gathered at 4-5 Hz. The images were subsampled to 30x32 pixels. In each of these images, the horizontal position of the lane marking in the 20th row of the input image was manually identified.
The task is to produce a Gaussian of activation in the outputs centered on the horizontal position of the lane marking in the 20th row of the image, given the entire input image. Sample images and target outputs are shown in Figure 3. In this task, the NN can be confused by road edges (Figure 3a), by extraneous lane markings (Figure 3b), and by reflections on the car itself (since the camera was positioned on the side of the car), as shown in Figure 3c.

The network architecture shown in Figure 4 was used; this is the same architecture as in Figure 1 (Right), with the feedback shown. The feedback is used during both training and simulation. In each time-step, a steering direction and a prediction of the next inputs are produced. For each time-step, the magnitude of the difference between each input's expected value (computed in the previous time-step) and its actual value is computed. Each input pixel is moved towards its background value¹ in proportion to this difference value. The larger the difference value, the more weight is given to the background value. If the difference value is small, the actual inputs are used. This has the effect of de-emphasizing the unexpected inputs.

The results of using this method were very promising. The lane tracker removed distracting features from the images. In Figure 3G, a distracting lane marking is removed: the lane marker on the right was correctly tracked in images before the distractor lane marker appeared. In Figure 3F, a passing car is de-emphasized: the network does not have a model to predict the movement of passing cars, since these are not relevant for the lane-marker detection task. In Figure 3E, the side of the road appears brighter than expected; therefore it is de-emphasized. Note that the expectation images (shown in the middle of each triplet
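The per-pixel filtering step might be sketched as follows. The exact weighting function is not given in the text, so the proportional blend and its clipping below are assumptions; the function and variable names are ours.

```python
import numpy as np

def filter_toward_background(actual, predicted, background, gain=1.0):
    """De-emphasize unexpected inputs: each pixel is moved toward its
    background value in proportion to the prediction error
    (illustrative weighting; the paper does not give an exact formula)."""
    w = np.clip(gain * np.abs(actual - predicted), 0.0, 1.0)
    return (1.0 - w) * actual + w * background

actual = np.array([0.9, 0.1, -0.8])
predicted = np.array([0.9, 0.1, 0.8])   # last pixel is unexpected
background = np.zeros(3)                # ~0.0 for intermediate-gray road
filtered = filter_toward_background(actual, predicted, background)
```

Expected pixels pass through unchanged, while the unexpected pixel is pulled to the background value, so it no longer competes with the lane marking.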
in Figure 3) show that the expected lane-marker and road-edge locations are not precisely defined. This is due to the training method, which attempts to model the many possible transitions from one time step to the next, to account for inter- and intra-driver variability with a limited training set [Baluja, 1996].

Figure 4: Architecture used to track the lane marking in cluttered scenes. (Figure labels: weights trained to reduce task error only; weights trained to reduce prediction error only; background and actual inputs weighted according to the difference image; signal transfer (connections are not trainable); delayed 1 time step; difference between inputs_t and predicted inputs.)

¹ A simple estimate of the background value for each pixel is its average activation across the training set. For the road-following domain, it is possible to use a background activation of 0.0 (when the entire image is scaled to activations of +1.0 to -1.0), since the road often appears as intermediate grays.

In summary, by eliminating the distractions in the input images, the lane tracker with the attention mechanisms improved performance by 20% over the standard lane tracker, measured as the difference between the estimated and hand-marked position of the lane marker in each image. This improvement was seen on multiple runs, with random initial weights in the NN and different random translations chosen for the training images.

3.2 Application 2: Fault Detection in Plasma-Etch Wafer Fabrication

Plasma etch is one of the many steps in the fabrication of semiconductor wafers. In this study, the detection of four faults was attempted. Descriptions of the faults can be found in [Baluja, 1996][Maxion, 1996]. For the experiments conducted here, only a single sensor was used, which measured the intensity of light emitted from the plasma at the 520 nm wavelength. Each etch was sampled once a second, providing approximately 140 samples per wafer waveform. The data-collection phase of this experiment began on October 25, 1994, and continued until April 4, 1995.
The detection of faults is a difficult problem because the contamination of the etch chamber and the degradation of parts keep the sensor's outputs, even for fault-free wafers, changing over time. Accounting for machine state should help the detection process. Expectation is used as follows: given the waveform signature of wafer_{T-1}, an expectation of wafer_T can be formed. The input to the prediction NN is the waveform signature of wafer_{T-1}; the output is the prediction of the signature of wafer_T. The target output for each example is the signature of the next wafer in sequence (the full 140 parameters).

Detection of the four faults is done with a separate network which uses as input: the expectation of the wafer's waveform, the actual wafer's waveform, and the point-by-point difference of the two. In this task, the input is not filtered as in the driving domain described previously; the values of the point-by-point difference vector are used as extra inputs.

The performance of many methods and architectures was compared on this task; details can be found in [Baluja, 1996]. The expectation-based methods achieved a 98.7% detection rate, a 100% classification rate on the detected faults (determining which of the four types of faults the detected fault was), and a 2.3% false-detection rate. For comparison, a simple perceptron had an 80% detection rate and a 40% false-detection rate. A fully connected network which did not consider the state of the machine achieved a 100% detection rate, but a 53% false-detection rate. A network which considered state by using the last previous no-fault wafer for comparison with the current wafer (instead of an expectation for the current wafer) achieved an 87.9% detection rate and a 1.5% false-detection rate.
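Assembling the detection network's input from the expectation is straightforward; a sketch (the 140-sample waveform length is from the text, while the function name is ours):

```python
import numpy as np

def fault_inputs(expected, actual):
    """Input vector for the fault-detection network: the expected
    waveform, the actual waveform, and their point-by-point difference
    (used as extra inputs rather than as a filter)."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.concatenate([expected, actual, actual - expected])

x = fault_inputs(np.zeros(140), np.ones(140))   # ~140 samples per wafer
```

With 140 samples per wafer, the detection network therefore sees a 420-dimensional input per example.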
A variety of neural and non-neural methods which examined the differences between the expected and current wafer, as well as those which examined the differences between the last no-fault wafer and the current wafer, performed poorly. In summary, methods which did not use expectations were unable to match the false-positive and detection rates of the expectation-based methods.

3.3 Application 3: Hand-Tracking in Cluttered Scenes

In the tasks described so far, the transition rules were learned by the NN. However, if the transition rules had been known a priori, processing could have been directed to only the relevant regions by explicitly manipulating the expectations. The ability to incorporate a priori rules is important in many vision-based tasks. Often, constraints on the environment in which the tracking is done can be used to limit the portions of the input scene which need to be processed. For example, consider visually tracking a person's hand. Given a fast camera sampling rate, the person's hand in the current frame will be close to where it appeared in the previous frame.

Figure 5: Typical input images used for the hand-tracking experiments. The target is to track the subject's right hand. Without expectation, in (A) both hands were found in the X outputs, and the wrong hand was found in the Y outputs. In (B) the subject's right hand and face were found in the X outputs.

Although a network can learn this constraint by developing expectations of future inputs (as with the NN architecture shown in Figure 4), training the expectations can be avoided by incorporating this rule directly. In this task, the input layer is a 48x48 image. There are two output layers of 48 units each; the desired outputs are two Gaussians centered on the (X,Y) position of the hand to be tracked. See Figure 5.
Rather than creating a saliency map based upon the difference between the actual and predicted inputs, as was done with autonomous road following, the saliency map was explicitly created with the available domain knowledge. Given the sampling rate of the camera and the size of the hand in the image, the salient region for the next time-step was a circular region centered on the estimated location of the hand in the previous image. The activations of the inputs outside of the salient region were shifted towards the background image. The activations inside the salient region were not modified. After applying the saliency map to the inputs, the filtered inputs were fed into the NN.

This system was tested in very difficult situations; the testing set contained images of a person moving both of his hands and his body throughout the sequence (see Figure 5). Therefore, both hands and the body are clearly visible in the difference images used as input to the network. All training was done on much simpler training sets in which only a single hand was moving.

To gauge the performance of the expectation-based system, it was compared to a system which used the following post-processing heuristics to account for temporal coherence. First, before a Gaussian was fit to either of the output layers, the activation of the outputs was inversely scaled with the distance from the location of the hand in the previous time step. This reduces the probability of detecting a hand in a location very different from the previous detection; it helps when both hands are detected, as shown in Figure 5. The second heuristic was that any prediction which differed from the previous prediction by more than half of the dimension of the output layer was ignored, and the previous prediction used instead. See Table 1 for the results. In summary, by using the expectation-based methods, performance improved from 66% to 90% when tracking the left hand, and from 52% to 91% when tracking the right hand.
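The hand-coded saliency map can be sketched as below. Fully replacing out-of-region pixels with the background is a simplifying assumption (the text says activations are "shifted towards" the background), and the radius value is arbitrary.

```python
import numpy as np

def apply_circular_saliency(image, center, radius, background):
    """Keep activations inside a circle around the previous hand
    position; replace everything outside with the background image
    (a simple, fully-shifted version of the paper's rule)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return np.where(inside, image, background)

img = np.ones((48, 48))      # 48x48 input layer, as in the text
bg = np.zeros((48, 48))
out = apply_circular_saliency(img, (24, 24), 5, bg)
```

Only the circular region around the previous hand position survives filtering, so the second hand and the body never reach the detection network.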
Table 1: Performance: number of frames in which each hand was located (283 total images).

                               Target: Find Left Hand        Target: Find Right Hand
  Method                       % Correct    L    R   None    % Correct    L    R   None
  No Heuristics, No Expect.       52%      146   44   93        16%      143   47   93
  Heuristics                      66%      187   22   74        52%       68  147   68
  Expectation                     91%      258    3   22        90%        3  255   25
  Expectation + Heuristics        90%      256    3   24        91%        2  257   24

[Nowlan & Platt, 1995] presented a convolutional-NN-based hand tracker which used separate NNs for intensity and difference images, with a rule-based integration of the multiple network outputs. The integration of this expectation-based system should improve the performance of the difference-image NN.

4 Conclusions

A very closely related procedure to the one described in this paper is the use of Kalman filters to predict the locations of objects of interest in the input retina. For example, Dickmanns uses the prediction of the future state to help guide attention by controlling the direction of a camera to acquire accurate positions of landmarks [Dickmanns, 1992]. Strong models of the vehicle motion, the appearance of objects of interest (such as the road, road signs, and other vehicles), and the motion of these objects are encoded in the system. The largest difference between their system and the one presented here is the amount of a priori knowledge that is used. Many approaches which use Kalman filters require a large amount of problem-specific information for creating the models. In the approach presented in this paper, the main objective is to learn this information automatically from examples. First, the system must learn what the important features are, since no top-down information is assumed. Second, the system must automatically develop the control strategy from the detected features. Third, the system must also learn a model for the movements of all of the relevant features.
In deciding whether the approaches described in this paper are suitable for a new problem, two criteria must be considered. First, if expectation is to be used to remove distractions from the inputs, then, given the current inputs, the activations of the relevant inputs in the next time step must be predictable, while the irrelevant inputs are either unrelated to the task or unpredictable. In many visual object-tracking problems, the relevant inputs are often predictable while the distractions are not. In the cases in which the distractions are predictable, these methods can still work if the distractions are unrelated to the main task. When using expectation to emphasize unexpected or potentially anomalous features, the activations of the relevant inputs should be unpredictable while the irrelevant ones are predictable. This is often the case for anomaly/fault-detection tasks. Second, when expectations are used as a filter, it is necessary to explicitly define the role of the expected features. In particular, it is necessary to define whether the expected features should be considered relevant or irrelevant, and therefore whether they should be emphasized or de-emphasized, respectively.

We have demonstrated the value of using task-specific expectations to guide processing in three real-world tasks. In complex, dynamic environments, such as driving, expectations are used to quickly and accurately discriminate between the relevant and irrelevant features. For the detection of faults in the plasma-etch step of semiconductor fabrication, expectations are used to account for the underlying drift of the process. Finally, for vision-based hand-tracking, we have shown that a priori knowledge about expectations can be easily integrated with a hand-detection model to focus attention on small portions of the scene, so that distractions in the periphery can be ignored.
Acknowledgments

The author would like to thank Dean Pomerleau, Takeo Kanade, Tom Mitchell and Tomaso Poggio for their help in shaping this work.

References

Baluja, S. (1996) Expectation-Based Selective Attention. Ph.D. Thesis, School of Computer Science, CMU.
Clark, J. & Ferrier, N. (1992) Attentive Visual Servoing. In: A. Blake & A. Yuille (eds.), Active Vision. MIT Press, 137-154.
Cottrell, G. W. (1990) Extracting Features from Faces using Compression Networks. Connectionist Models, Morgan Kaufmann, 328-337.
Dickmanns (1992) Expectation-Based Dynamic Scene Understanding. In: A. Blake & A. Yuille (eds.), Active Vision. MIT Press.
Koch, C. & Ullman, S. (1985) Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry. Human Neurobiology 4, 219-227.
Maxion, R. (1995) The Semiconductor Wafer Plasma-Etch Data Set.
Nowlan, S. & Platt, J. (1995) A Convolutional Neural Network Hand Tracker. NIPS 7, MIT Press, 901-908.
Pomerleau, D. A. (1993) Neural Network Perception for Mobile Robot Guidance. Kluwer Academic.
Thorpe, C. (1991) Outdoor Visual Navigation for Autonomous Robots. Robotics and Autonomous Systems 7.
1997
83
1,434
Features as Sufficient Statistics

D. Geiger*
Department of Computer Science, Courant Institute, and Center for Neural Science, New York University
geiger@cs.nyu.edu

A. Rudra†
Department of Computer Science, Courant Institute, New York University
archi@cs.nyu.edu

L. Maloney‡
Departments of Psychology and Neural Science, New York University
ltm@cns.nyu.edu

Abstract

An image is often represented by a set of detected features. We get an enormous compression by representing images in this way. Furthermore, we get a representation which is little affected by small amounts of noise in the image. However, features are typically chosen in an ad hoc manner. We show how a good set of features can be obtained using sufficient statistics. The idea of sparse data representation naturally arises. We treat the 1-dimensional and 2-dimensional signal reconstruction problems to make our ideas concrete.

1 Introduction

Consider an image, I, that is the result of a stochastic image-formation process. The process depends on the precise state, f, of an environment. The image, accordingly, contains information about the environmental state f, possibly corrupted by noise. We wish to choose feature vectors φ(I) derived from the image that summarize this information concerning the environment. We are not otherwise interested in the contents of the image and wish to discard any information concerning the image that does not depend on the environmental state f.

* Supported by NSF grant 5274883 and AFOSR grants F 49620-96-1-0159 and F 49620-96-1-0028
† Partially supported by AFOSR grants F 49620-96-1-0159 and F 49620-96-1-0028
‡ Supported by NIH grant EY08266

We develop criteria for choosing sets of features (based on information theory and statistical estimation theory) that extract from the image precisely the information concerning the environmental state.
2 Image Formation, Sufficient Statistics and Features

As above, the image I is the realization of a random process with distribution P_Environment(f). We are interested in estimating the parameters f of the environmental model given the image (compare [4]). We assume in the sequel that f, the environmental parameters, are themselves a random vector with known prior distribution.

Let φ(I) denote a feature vector derived from the image I. Initially, we assume that φ(I) is a deterministic function of I. For any choice of random variables X, Y, define [2] the mutual information of X and Y to be

M(X; Y) = Σ_{X,Y} P(X,Y) log [ P(X,Y) / (P(X)P(Y)) ].

The information about the environmental parameters contained in the image is then M(f; I), while the information about the environmental parameters contained in the feature vector φ(I) is then M(f; φ(I)). As a consequence of the data-processing inequality [2], M(f; φ(I)) ≤ M(f; I). A feature vector φ(I) is defined to be sufficient if the inequality above is an equality. We will use the terms feature and statistic interchangeably. The definition of a sufficient feature vector above is then just the usual definition of a set of jointly sufficient statistics [2]. To summarize, a feature vector φ(I) captures all the information about the environmental state parameters f precisely when it is sufficient.¹

Graded Sufficiency: A feature vector either is or is not sufficient. For every possible feature vector φ(I), we define a measure of its failure to be sufficient:

Suff(φ(I)) = M(f; I) − M(f; φ(I)).

This sufficiency measure is always non-negative, and it is zero precisely when φ is sufficient. We wish to find feature vectors φ(I) where Suff(φ(I)) is close to 0. We define φ(I) to be ε-sufficient if Suff(φ(I)) ≤ ε. In what follows, we will ordinarily say sufficient when we mean ε-sufficient.
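For discrete toy variables, Suff(φ) can be computed directly from joint probability tables; a sketch (natural logarithm, and the distributions below are illustrative choices, not from the paper):

```python
import numpy as np

def mutual_information(joint):
    """M(X;Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x)p(y)) ]
    for a discrete joint table (natural log)."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

# f uniform on {0,1}; I = f (a noiseless "image"). phi(I) = I is a
# sufficient feature; a constant feature is maximally insufficient.
joint_fI = np.array([[0.5, 0.0], [0.0, 0.5]])      # rows f, cols I
M_fI = mutual_information(joint_fI)                # = log 2
joint_fconst = np.array([[0.5], [0.5]])            # phi collapses everything
suff = M_fI - mutual_information(joint_fconst)     # Suff for constant phi
```

Here the constant feature loses all log 2 nats of information about f, so Suff equals M(f; I), while φ(I) = I would give Suff = 0.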
The above formulation of feature vectors as jointly sufficient statistics, maximizing the mutual information M(f; φ(I)), can be expressed as the Kullback-Leibler distance between the conditional distributions P(f|I) and P(f|φ(I)):

E_I[ D( P(f|I) || P(f|φ(I)) ) ] = M(f; I) − M(f; φ(I)),   (1)

where the symbol E_I denotes the expectation with respect to I, and D denotes the Kullback-Leibler (K-L) distance, defined by D(f||g) = Σ_x f(x) log( f(x)/g(x) ).² Thus, we seek feature vectors φ(I) such that the conditional distributions P(f|I) and P(f|φ(I)) are close in the K-L sense, averaged across the set of images. However, this optimization for each image could lead to over-fitting.

3 Sparse Data and Sufficient Statistics

The notion of sufficient statistics may be described by how much data can be removed without increasing the K-L distance between P(f|φ(I)) and P(f|I). Let us formulate the approach more precisely, and apply two methods to solve it.

¹ An information-theoretic framework has been adopted in neural networks by others; e.g., [5][9][6][1][8]. However, the connection between features and sufficiency is new.
² We won't prove the result here. The proof is simple and uses the Markov chain property to say that P(f, I, φ(I)) = P(I, φ(I)) P(f|I, φ(I)) = P(I) P(f|I).

3.1 Gaussian Noise Model and Sparse Data

We are required to construct P(f|I) and P(f|φ(I)). Note that according to Bayes' rule, P(f|φ(I)) = P(φ(I)|f) P(f) / P(φ(I)). We will assume that the form of the model P(f) is known. In order to obtain P(φ(I)|f) we write P(φ(I)|f) = Σ_I P(φ(I)|I) P(I|f).

Computing P(f|φ(I)): Let us first assume that the generative process of the image I, given the model f, is Gaussian i.i.d., i.e.,

P(I|f) = Π_i (1/√(2πσ_i²)) e^{−(I_i − f_i)²/2σ_i²},

where i = 0, 1, ..., N−1 indexes the image pixels for an image of size N. Further, P(I_i|f_i) is a function of (I_i − f_i), and I_i varies from −∞ to +∞, so that the normalization constant does not depend on f_i.
Then, P(f|I) can be obtained by normalizing P(f) P(I|f):

P(f|I) = (1/Z) ( Π_i e^{−(f_i − I_i)²/2σ_i²} ) P(f),

where Z is the normalization constant. Let us introduce a binary decision variable s_i = 0, 1, which at every image pixel i decides whether that image pixel contains "important" information regarding the model f. Our statistic φ is actually a (multivariate) random variable generated from I according to

P_s(φ|I) = Π_i [ (1 − s_i) δ(φ_i − I_i) + s_i U(φ_i) ].

This distribution gives φ_i = I_i with probability 1 (Dirac delta function) when s_i = 0 (data is kept), and gives φ_i uniformly distributed otherwise (s_i = 1, data is removed). We then have

P_s(φ|f) = ∫ P(φ, I|f) dI = ∫ P(I|f) P_s(φ|I) dI.

The conditional distribution of φ on f satisfies the properties that we mentioned in connection with the posterior distribution of f on I. Thus,

P_s(f|φ) = (1/Z_s) P(f) Π_i e^{−(1/2σ_i²)(f_i − φ_i)²(1 − s_i)},   (2)

where Z_s is a normalization constant. It is also plausible to extend this model to non-Gaussian ones, by simply modifying the quadratic term (f_i − φ_i)² and keeping the sparse-data coefficient (1 − s_i).

3.2 Two Methods

We can now formulate the problem of finding a feature set, or a sufficient statistic, in terms of the variables s_i that can remove data. More precisely, we can find s by minimizing

E(s, I) = D( P(f|I) || P_s(f|φ(I)) ) + λ Σ_i (1 − s_i).   (3)

It is clear that the K-L distance is minimized when s_i = 0 everywhere and all the data is kept. The second term is added to drive the solution towards a minimal sufficient statistic, where the parameter λ has to be estimated. Note that, for λ very large, all the data is removed (s_i = 1), while for λ = 0 all the data is kept. We can further write (3) as

E(s, I) = Σ_f P(f|I) log( P(f|I) / P_s(f|φ(I)) ) + λ Σ_i (1 − s_i)
        = Σ_f P(f|I) log( (Z_s/Z) Π_i e^{−(1/2σ_i²)(f_i − I_i)²(1 − (1 − s_i))} ) + λ Σ_i (1 − s_i)
        = log(Z_s/Z) − E_P[ Σ_i (s_i/2σ_i²)(f_i − I_i)² ] + λ Σ_i (1 − s_i),

where E_P[·]
denotes the expectation taken with respect to the distribution P. If we let s_i be a continuous variable, the minimum of E(s, I) will occur when

0 = ∂E/∂s_i = ( E_{P_s}[(f_i − I_i)²] − E_P[(f_i − I_i)²] ) − λ.   (4)

We note that the Hessian matrix

H_s[i,j] = ∂²E/∂s_i∂s_j = E_{P_s}[(f_i − I_i)²(f_j − I_j)²] − E_{P_s}[(f_i − I_i)²] E_{P_s}[(f_j − I_j)²]   (5)

is a correlation matrix, i.e., it is positive semi-definite. Consequently, E(s) is convex.

Continuation Method on λ: In order to solve for the optimal vector s we consider the continuation method on the parameter λ. We know that s = 0 for λ = 0. Then, taking derivatives of (4) with respect to λ, we obtain

∂s_j/∂λ = Σ_i H_s^{−1}[i, j].

It was necessary for the Hessian to be invertible; the continuation method works because E is convex. The computations are expected to be mostly spent on estimating the Hessian matrix, i.e., on computing the averages E_{P_s}[(f_i − I_i)²(f_j − I_j)²], E_{P_s}[(f_i − I_i)²], and E_{P_s}[(f_j − I_j)²]. Sometimes these averages can be exactly computed, for example for one-dimensional graph lattices. Otherwise these averages could be estimated via Gibbs sampling. The above method can be very slow, since these computations for H_s have to be repeated at each increment in λ. We therefore investigate an alternative direct method.

A Direct Method: Our approach seeks to find a "large set" of s_i = 1 while maintaining a distribution P_s(f|φ(I)) close to P(f|I), i.e., to remove as many data points as possible.

Figure 1: (a) Complete results for a step edge, showing the image, the effective variance, and the computed s-value (using the continuation method). (b) Complete results for a step edge with added noise.

For this goal, we can investigate the marginal distribution
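A single Euler step of the continuation method might look like the sketch below. Since ∂s/∂λ has components Σ_i H_s^{−1}[i, j], the update applies H_s^{−1} to the all-ones vector; the Hessian here is a toy positive-definite stand-in for the expectations in (5).

```python
import numpy as np

def continuation_step(s, H, dlam):
    """One Euler step along lambda: ds/dlambda = H^{-1} 1, so
    s <- s + dlam * solve(H, ones). H is a toy stand-in for the
    Hessian of expectations in equation (5)."""
    ds = np.linalg.solve(H, np.ones(len(s)))
    return s + dlam * ds

H = np.array([[2.0, 0.5],
              [0.5, 2.0]])   # symmetric positive definite (illustrative)
s = np.zeros(2)              # s = 0 at lambda = 0
s = continuation_step(s, H, dlam=0.1)
```

In practice H_s would be re-estimated (exactly, or by Gibbs sampling) at every increment of λ, which is exactly the cost that motivates the direct method.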
P(f_i|I) = ∫ df_0 ... df_{i−1} df_{i+1} ... df_{N−1} P(f|I)
         ∝ e^{−(1/2σ_i²)(f_i − I_i)²} ∫ Π_{j≠i} df_j P(f) Π_{j≠i} e^{−(1/2σ_j²)(f_j − I_j)²}
         = P_{I_i}(f_i) P_eff(f_i)

(after rearranging the normalization constants), where P_eff(f_i) is an effective marginal distribution that depends on all the other values of I besides the one at pixel i.

How do we decide whether s_i = 0 or s_i = 1 directly from this marginal distribution P(f_i|I)? The entropy of the first term, H_{I_i}(f_i) = −∫ df_i P_{I_i}(f_i) log P_{I_i}(f_i), indicates how much f_i is conditioned by the data. The larger the entropy, the less the data constrain f_i, and thus the less need there is to keep this data. The entropy of the second term, H_eff(f_i) = −∫ df_i P_eff(f_i) log P_eff(f_i), works in the opposite direction: the more f_i is constrained by its neighbors, the smaller the entropy and the smaller the need to keep that data point. Thus, the decision to keep the data, s_i = 0, is driven by minimizing the "data" entropy H_{I_i}(f_i) and maximizing the neighbor entropy H_eff(f_i). The relevant quantity is H_eff(f_i) − H_{I_i}(f_i): when this is large, the pixel is kept. Later, we will see a case where the data entropy is constant, and so the effective entropy is maximized. For Gaussian models, the entropy is the logarithm of the variance, and the appropriate ratio of variances may be considered.

4 Example: Surface Reconstruction

To make this approach concrete we apply it to the problem of surface reconstruction. First we consider the 1-dimensional case and conclude that edges are the important features. Then we apply it to the two-dimensional case and conclude that junctions, followed by edges, are the important features.
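For Gaussian models the entropy comparison reduces to a variance-ratio test; a sketch (the threshold is an assumed free parameter, and the sample variances are illustrative):

```python
import numpy as np

def keep_data(var_eff, var_data, threshold):
    """Direct method for Gaussian models: entropies reduce to log
    variances, so H_eff - H_data > threshold becomes a test on
    0.5 * log(var_eff / var_data)."""
    return 0.5 * np.log(var_eff / var_data) > threshold

var_data = 1.0                         # homogeneous noise variance
var_eff = np.array([0.5, 1.0, 8.0])    # large near edges/junctions
decision = keep_data(var_eff, var_data, threshold=0.5)
```

Only the pixel with large effective variance (the edge-like one) is kept as a feature, matching the qualitative conclusion of the 1D and 2D analyses below.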
4.1 1D Case: Edge Features

Various simplifications and manipulations can be applied in the case that the model f is described by a first-order Markov model, i.e., P(f) = Π_i P_i(f_i, f_{i−1}). Then the posterior distribution is

P(f|I) = (1/Z) Π_i e^{−[ (1/2σ²)(f_i − I_i)² + μ_i (f_i − f_{i−1})² ]},

where the μ_i are smoothing coefficients that may vary from pixel to pixel according to how much intensity change occurs at pixel i, e.g., μ_i = μ / (1 + ρ (I_i − I_{i−1})²), with μ and ρ to be estimated. We have assumed that the standard deviation of the noise is homogeneous, to simplify the calculations and the analysis of the direct method. Let us now consider both methods, the continuation one and the direct one, to estimate the features.

Continuation Method: Here we apply ∂s_j/∂λ = Σ_i H_s^{−1}[i,j] by computing H_s[i,j], given by (5), straightforwardly. We use the Baum-Welch method [2] for Markov chains to exactly compute E_{P_s}[(f_i − I_i)²(f_j − I_j)²], E_{P_s}[(f_i − I_i)²], and E_{P_s}[(f_j − I_j)²]. The final result of this algorithm, applied to step-edge data (with and without added noise), is shown in Figure 1. Not surprisingly, the edge data (both pixels), as well as the data boundaries, were the most important data, i.e., the features.

Direct Method: We derive the same result, that edges and boundaries are the most important data, via an analysis of this model. We use the result that

P(f_i|I) = ∫ df_0 ... df_{i−1} df_{i+1} ... df_{N−1} P(f|I) = (1/Z_i) e^{−(1/2σ²)(f_i − I_i)²} e^{−λ_i^N (f_i − Γ_i^N)²},

where λ_i^N is obtained recursively, in log₂ N steps (for simplicity, we assume N to be an exact power of 2), as follows:

λ_i^{2K} = λ_i^K + (λ_{i+K}^K μ_{i+K}^K) / (λ_{i+K}^K + μ_{i+K}^K + μ_{i+2K}^K) + (λ_{i−K}^K μ_i^K) / (λ_{i−K}^K + μ_{i−K}^K + μ_i^K).   (6)

The effective variance is given by var_eff(f_i) = 1/(2λ_i^N), while the data variance is given by var_I(f_i) = σ². Since var_I(f_i) does not depend on any pixel i, maximizing the ratio var_eff/var_I (as the direct method suggested) is equivalent to maximizing either the effective variance or the total variance (see Figure 1).
Thus, the lower λ_i^N is, the lower s_i is. We note that λ_i^K increases with K, and μ_i^K decreases with K. Consequently, λ_i^K increases less and less as K increases. In a perturbative sense, λ_i^2 contributes the most to λ_i^N and is defined by the two neighboring values μ_i and μ_{i+1}, i.e., by the edge information. The larger the intensity edges are, the smaller the μ_i, and therefore the smaller λ_i^2 will be. Moreover, λ_i^N is mostly defined by λ_i^2 (in a perturbative sense, this is where most of the contribution comes from). Thus, we can argue that pixels i with intensity edges will have smaller values of λ_i^N and are therefore likely to have their data kept as features (s_i = 0).

4.2 2D Case: Junctions, Corners, and Edge Features

Let us investigate the two-dimensional version of the 1D surface-reconstruction problem. Let us assume the posterior

P(f|I) = (1/Z) e^{−Σ_{i,j} [ (1/2σ²)(f_ij − I_ij)² + μ_ij^v (f_ij − f_{i−1,j})² + μ_ij^h (f_ij − f_{i,j−1})² ]},

where μ_ij^{v,h} are the smoothing coefficients along the vertical and horizontal directions, which vary inversely with ∇I along these directions. We can then approximately compute (e.g., see [3])

P(f_ij|I) ≈ (1/Z) e^{−(1/2σ²)(f_ij − I_ij)²} e^{−λ_ij^N (f_ij − Γ_ij^N)²},

where, analogously to the 1D case, we have

λ_ij^{2K} = λ_ij^K + (λ_{i,j−K}^K μ_ij^{h,K}) / χ_{i,j−K}^K + (λ_{i,j+K}^K μ_{i,j+K}^{h,K}) / χ_{i,j+K}^K + (λ_{i−K,j}^K μ_ij^{v,K}) / χ_{i−K,j}^K + (λ_{i+K,j}^K μ_{i+K,j}^{v,K}) / χ_{i+K,j}^K,   (7)

where χ_{i,j}^K = λ_ij^K + μ_ij^{h,K} + μ_ij^{v,K} + μ_{i,j+K}^{h,K} + μ_{i+K,j}^{v,K} and μ_ij^{h,2K} = μ_ij^{h,K} μ_{i,j+K}^{h,K} / χ_{i,j}^K (and analogously for v).

The larger the effective variance at a site (i,j), the smaller λ_ij^N, and the more likely that image portion is to be a feature. The larger the intensity gradient along h or v at (i,j), the smaller μ_ij^{h,v}. The smaller μ_ij^{h,v}, the smaller its contribution to λ_ij^2. In a perturbative sense ([3]), λ_ij^2 makes the largest contribution to λ_ij^N. Thus, the more intensity edges a site has, the larger its effective variance will be.
Thus, T-junctions will produce very large effective variances, followed by corners, followed by edges. These will be, in order of importance, the features selected to reconstruct 2D surfaces.

5 Conclusion

We have proposed an approach to specify when a feature set has sufficient information in it, so that we can represent the image using it. Thus, one can, in principle, tell what kind of feature is likely to be important in a given model. Two methods of computation have been proposed, and a concrete analysis for a simple surface reconstruction was carried out.

References

[1] A. Berger, S. Della Pietra and V. Della Pietra. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, Vol. 22(1), pp. 39-71, 1996.
[2] T. Cover and J. Thomas. Elements of Information Theory. Wiley Interscience, New York, 1991.
[3] D. Geiger and J. E. Kogler. Scaling Images and Image Features via the Renormalization Group. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, New York, NY, 1993.
[4] G. Hinton and Z. Ghahramani. Generative Models for Discovering Sparse Distributed Representations. To appear, Phil. Trans. of the Royal Society B, 1997.
[5] R. Linsker. Self-Organization in a Perceptual Network. Computer, March 1988, 105-117.
[6] J. Principe, U. of Florida at Gainesville. Personal communication.
[7] T. Sejnowski. Computational Models and the Development of Topographic Projections. Trends Neurosci., 10, 304-305.
[8] S. C. Zhu, Y. N. Wu, D. Mumford. Minimax Entropy Principle and Its Application to Texture Modeling. Neural Computation, 1996.
[9] P. Viola and W. M. Wells III. Alignment by Maximization of Mutual Information. In Proceedings of the International Conference on Computer Vision, Boston, 1995.
Factorizing Multivariate Function Classes

Juan K. Lin*
Department of Physics
University of Chicago
Chicago, IL 60637

Abstract

The mathematical framework for factorizing equivalence classes of multivariate functions is formulated in this paper. Independent component analysis is shown to be a special case of this decomposition. Using only the local geometric structure of a class representative, we derive an analytic solution for the factorization. We demonstrate the factorization solution with numerical experiments and present a preliminary tie to decorrelation.

1 FORMALISM

In independent component analysis (ICA), the goal is to find an unknown linear coordinate system where the joint distribution function admits a factorization into the product of one-dimensional functions. However, this decomposition is only rarely possible. To formalize the notion of multivariate function factorization, we begin by defining an equivalence relation.

Definition. We say that two functions $f, g: \mathbb{R}^n \to \mathbb{R}$ are equivalent if there exist $A$, $b$ and $c$ such that $f(x) = c\,g(Ax + b)$, where $A$ is a non-singular matrix and $c \neq 0$.

Thus, the equivalence class of a function consists of all invertible linear transformations of it. To avoid confusion, equivalence classes will be denoted in upper case, and class representatives in lower case. We now define the product of two equivalence classes. Consider representatives $b: \mathbb{R}^n \to \mathbb{R}$ and $c: \mathbb{R}^m \to \mathbb{R}$ of corresponding equivalence classes $B$ and $C$. Let $x_1 \in \mathbb{R}^n$, $x_2 \in \mathbb{R}^m$, and $x = (x_1, x_2)$. From the scalar product of the two functions, define the function $a: \mathbb{R}^{n+m} \to \mathbb{R}$ by $a(x) = b(x_1)c(x_2)$. Let the product of $B$ and $C$ be the equivalence class $A$ with representative $a(x)$. This product is independent of the choice of representatives of $B$ and $C$, and hence is a well-defined operation on equivalence classes. We proceed to define the notion of an irreducible class.

*Current address: E25-201, MIT, Cambridge, MA 02139. Email: jklin@ai.mit.edu
Definition. Denote the equivalence class of constants by $I$. We say that $A$ is irreducible if $A = BC$ implies either $B = A$, $C = I$, or $B = I$, $C = A$.

From the way products of equivalence classes are defined, we know that all equivalence classes of one-dimensional functions are irreducible. Our formulation of the factorization of multivariate function classes is now complete. Given a multivariate function, we seek a factorization of the equivalence class of the given representative into a product of irreducibles. Intuitively, in the context of joint distribution functions, the irreducible classes constitute the underlying sources. This factorization generalizes independent component analysis to allow for higher-dimensional "vector" sources. Consequently, this decomposition is well-defined for all multivariate function classes. We now present a local geometric approach to accomplishing this factorization.

2 LOCAL GEOMETRIC INFORMATION

Given that the joint distribution factorizes into a product in the "source" coordinate system, what information can be extracted locally from the joint distribution in a "mixed" coordinate frame? We assume that the relevant multivariate function is twice differentiable in the region of interest, and denote $H^f$, the Hessian of $f$, to be the matrix with elements $H^f_{ij} = \partial_i \partial_j f$, where $\partial_k = \frac{\partial}{\partial s_k}$.

Proposition: $H^f$ is block diagonal everywhere, $\partial_i \partial_j f|_{s_0} = 0$ for all points $s_0$ and all $i \le k$, $j > k$, if and only if $f$ is separable into a sum $f(s_1, \ldots, s_n) = g(s_1, \ldots, s_k) + h(s_{k+1}, \ldots, s_n)$ for some functions $g$ and $h$.

Proof - Sufficiency: Given $f(s_1, \ldots, s_n) = g(s_1, \ldots, s_k) + h(s_{k+1}, \ldots, s_n)$,

$$\frac{\partial^2 f}{\partial s_i \partial s_j} = \frac{\partial}{\partial s_i}\,\frac{\partial h(s_{k+1}, \ldots, s_n)}{\partial s_j} = 0$$

everywhere for all $i \le k$, $j > k$.

Necessity: From $H^f_{1n} = 0$, we can decompose $f$ into $f(s_1, s_2, \ldots, s_n) = g(s_1, \ldots, s_{n-1}) + h(s_2, \ldots, s_n)$ for some functions $g$ and $h$. Continuing by imposing the constraints $H^f_{1j} = 0$ for all $j > k$, we find $f(s_1, s_2, \ldots, s_n) = g(s_1, \ldots, s_k) + h(s_2, \ldots, s_n)$.
Combining with $H^f_{2j} = 0$ for all $j > k$ yields $f(s_1, s_2, \ldots, s_n) = g(s_1, \ldots, s_k) + h(s_3, \ldots, s_n)$. Finally, inducting on $i$, from the constraints $H^f_{ij} = 0$ for all $i \le k$ and $j > k$, we arrive at the desired functional form $f(s_1, s_2, \ldots, s_n) = g(s_1, \ldots, s_k) + h(s_{k+1}, \ldots, s_n)$.

More explicitly, a twice-differentiable function satisfies the set of coupled partial differential equations represented by the block diagonal structure of $H$ if and only if it admits the corresponding separation-of-variables decomposition. By letting $\log p = f$, the additive decomposition of $f$ translates to a product decomposition of $p$. The more general decomposition into an arbitrary number of factors is obtained by iterative application of the above proposition. The special case of independent component analysis corresponds to a strict diagonalization of $H$. Thus, in the context of smooth joint distribution functions, pairwise conditional independence is necessary and sufficient for statistical independence.

To use this information in a transformed "mixture" frame, we must understand how the matrix $H^{\log p}$ transforms. From the relation between the mixture and source coordinate systems given by $\vec x = A \vec s$, we have $\frac{\partial}{\partial s_i} = A_{ji} \frac{\partial}{\partial x_j}$, where we use Einstein's convention of summation over repeated indices. From the relation between the joint distributions in the mixture and source frames, $p_s(\vec s) = |A|\, p_x(\vec x)$, direct differentiation gives

$$\frac{\partial^2 \log p_s(\vec s)}{\partial s_i \partial s_l} = A_{ji} A_{kl}\, \frac{\partial^2 \log p_x(\vec x)}{\partial x_j \partial x_k}.$$

Letting $H_{ij} = \frac{\partial^2 \log p_s(\vec s)}{\partial s_i \partial s_j}$ and $\tilde H_{ij} = \frac{\partial^2 \log p_x(\vec x)}{\partial x_i \partial x_j}$, in matrix notation we have $H = A^T \tilde H A$. In other words, $H$ is a second-rank (symmetric) covariant tensor. The joint distribution admits a product decomposition in the source frame if and only if $H$, and hence $A^T \tilde H A$, has the corresponding block diagonal structure.
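The proposition can be checked numerically with finite differences; the test functions below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Finite-difference check of the proposition: the cross-derivative
# d^2 f / ds_i ds_j vanishes exactly when f splits into g(s_i) + h(s_j).
def cross_deriv(f, s, i, j, h=1e-4):
    """Central finite-difference estimate of the (i, j) cross-derivative at s."""
    ei = np.eye(len(s))[i] * h
    ej = np.eye(len(s))[j] * h
    return (f(s + ei + ej) - f(s + ei - ej)
            - f(s - ei + ej) + f(s - ei - ej)) / (4 * h * h)

separable = lambda s: np.sin(s[0]) + s[1] ** 4   # additively separable
entangled = lambda s: np.sin(s[0] * s[1])        # no additive split

pt = np.array([0.5, 0.8])
print(cross_deriv(separable, pt, 0, 1))  # ~ 0 (up to finite-difference error)
print(cross_deriv(entangled, pt, 0, 1))  # clearly nonzero
```

For $\log p$, the same check on the mixture-frame Hessian amounts to testing the off-diagonal blocks of $A^T \tilde H A$.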
Thus multivariate function class factorization is solved by joint block diagonalization of symmetric matrices, with constraints on $A$ of the form $A_{ji} \tilde H_{jk} A_{kl} = 0$. Because the Hessian is symmetric, its diagonalization involves only $\binom{n}{2}$ constraints. Consequently, in the independent component analysis case where the joint distribution function admits a factorization into one-dimensional functions, if the mixing transformation is orthogonal, the independent component coordinate system will lie along the eigenvector directions of $\tilde H$. Generally, however, $n(n-1)$ independent constraints, corresponding to information from the Hessian at two points, are needed to determine the $n$ arbitrary coordinate directions.

3 NUMERICAL EXPERIMENTS

In the simplest attack on the factorization problem, we solve the constraint equations from two points simultaneously. The analytic solution is demonstrated in two dimensions. Without loss of generality, the mixing matrix $A$ is taken to be of the form

$$A = \begin{pmatrix} 1 & x \\ y & 1 \end{pmatrix}.$$

The constraints from the two points are $a x + b(xy + 1) + c y = 0$ and $a' x + b'(xy + 1) + c' y = 0$, where $\tilde H_{11} = a$, $\tilde H_{21} = \tilde H_{12} = b$ and $\tilde H_{22} = c$ at the first point, and the primed coefficients denote the values at the second point. Solving the simultaneous quadratic equations, we find

$$x = \frac{a'c - ac' \pm \sqrt{(a'c - ac')^2 - 4(a'b - ab')(b'c - bc')}}{2(ab' - a'b)}, \qquad y = \frac{a'c - ac' \pm \sqrt{(a'c - ac')^2 - 4(a'b - ab')(b'c - bc')}}{2(bc' - b'c)}.$$

The $\pm$ double roots are indicative of the $(x, y) \to (1/y, 1/x)$ symmetry in the equations, and together give only two distinct orientation solutions. These independent component orientation solutions are given by $\theta_1 = \tan^{-1}(1/x)$ and $\theta_2 = \tan^{-1}(y)$.

3.1 Natural Audio Sources

To demonstrate the analytic factorization solution, we present some proof-of-concept numerics. Generality is pursued over optimization concerns. First, we perform the standard separation of two linearly mixed natural audio sources.
The input dataset consists of 32000 unordered datapoints, since no use will be made of the temporal information. The process for obtaining estimates of the Hessian matrix $\tilde H$ is as follows. A histogram of the input distribution was first acquired and smoothed by a low-pass Gaussian mask in spatial-frequency space. The elements of $\tilde H$ were then obtained via convolution with a discrete approximation of the derivative operator. The width of the Gaussian mask and the support of the derivative operator were chosen to reduce sensitivity to low spatial-frequency uncertainty. It should be noted that the analytic factorization solution makes no assumptions about the mixing transformation; consequently, a blind determination of the smoothing length scale is not possible because of the multiplicative degree of freedom in each source. Because of the need to take the logarithm of $p$ before differentiation, or equivalently to divide by $p$ afterwards, we set a threshold and only extracted information from points where the number of counts was greater than the threshold. This is justified from a counting-uncertainty perspective, and also from the understanding that regions with vanishing probability measure contain no information. With our sample of 32000 datapoints, we considered only the bin-points with a corresponding bin count greater than 30. From the 394 bin locations that satisfied this constraint, the solutions $(\theta_1, \theta_2)$ for all $\binom{394}{2} = 394 \cdot 393/2$ pairs of the corresponding factorization equations are plotted in Fig. 1. A histogram of these solutions is shown in Fig. 2. The two peaks in the solution histogram correspond to orientations that differ from the two actual independent component orientations by 0.008 and 0.013 radians. The signal-to-mixture ratios of the two outputs generated from the solution are 158 and 49.
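The two-point solution of Section 3 can also be verified end-to-end on a synthetic separable density; the quartic source density and mixing values below are illustrative assumptions, not the paper's audio setup:

```python
import numpy as np

# Verify the closed-form two-point factorization solution in 2D.
# Mixing in the paper's normalized form A = [[1, x], [y, 1]]; values are made up.
x_true, y_true = 0.7, -0.4
A = np.array([[1.0, x_true], [y_true, 1.0]])
Ainv = np.linalg.inv(A)

def log_p(v):
    # separable (non-Gaussian) log-density in the source frame, s = A^{-1} v
    s1, s2 = Ainv @ v
    return -0.25 * s1 ** 4 - 0.25 * s2 ** 4

def hessian(v, h=1e-4):
    # central finite-difference Hessian of log p at the point v
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            ei, ej = np.eye(2)[i] * h, np.eye(2)[j] * h
            H[i, j] = (log_p(v + ei + ej) - log_p(v + ei - ej)
                       - log_p(v - ei + ej) + log_p(v - ei - ej)) / (4 * h * h)
    return H

H1, H2 = hessian(np.array([0.9, 0.3])), hessian(np.array([-0.4, 1.1]))
a, b, c = H1[0, 0], H1[0, 1], H1[1, 1]
ap, bp, cp = H2[0, 0], H2[0, 1], H2[1, 1]

# the paper's double roots (same sign of the square root in x and y)
disc = np.sqrt((ap * c - a * cp) ** 2 - 4 * (ap * b - a * bp) * (bp * c - b * cp))
sols = [((ap * c - a * cp + s * disc) / (2 * (a * bp - ap * b)),
         (ap * c - a * cp + s * disc) / (2 * (b * cp - bp * c))) for s in (1, -1)]
best = min(sols, key=lambda xy: abs(xy[0] - x_true) + abs(xy[1] - y_true))
print(best)  # one root recovers (x_true, y_true); the other is (1/y_true, 1/x_true)
```

Because the exact mixture admits a factorization, the discriminant is non-negative and one of the paired roots reproduces the true $(x, y)$ up to finite-difference error.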
3.2 Effect of Noise

Because the solution is analytic, uncertainty in the sampling just propagates through to the solution, giving rise to a finite width in the solution's distribution. We investigated the effect of noise and counting uncertainty by performing numerics starting from analytic forms for the source distributions. The joint distribution in the source frame was taken to be a fixed analytic form. Normalization is irrelevant since a function's decomposition into product form is preserved under scalar multiplication. This is also reflected in the equivalence between $H^{\log p}$ and $H^{\log cp}$ for $c$ an arbitrary positive constant. The joint distribution in the mixture frame was obtained from the relation $p_x(\vec x) = |A|^{-1} p_s(\vec s)$.

Figure 1: Scatterplot of the independent component orientation solutions. All unordered solution pairs $(\theta_1, \theta_2)$ are plotted. The solutions are taken in the range from $-\pi/2$ to $\pi/2$.

Figure 2: Histogram of the orientation solutions plotted in the previous figure. The range is still taken from $-\pi/2$ to $\pi/2$, with the histogram wrapped around to ease the circular identification. The mixing matrix used was: $a_{11} = 0.0514$, $a_{21} = 0.779$, $a_{12} = 0.930$, $a_{22} = -0.579$, giving independent component orientations at $-0.557$ and $1.505$ radians. Gaussian fits to the centers of the two solution peaks give $-0.570 \pm 0.066$ and $1.513 \pm 0.077$ radians for the two orientations.

To simulate sampling, $p_x(\vec x)$ was multiplied by the number of samples $M$, onto which was added Gaussian-distributed noise with amplitude given by $(M p_x(\vec x))^{1/2}$. This reflects the fact that counting uncertainty scales as the square root of the number of counts. The result was rounded to the nearest integer, with all negative count values set to zero.
The subsequent processing coincided with that for the natural audio sources. From the source distribution, the minimum number of expected counts is $M$, and the maximum is $9M$. The results in Figures 3 and 4 show that, as expected, increasing the number of samplings decreases the widths of the solution peaks. By fitting Gaussians to the two peaks, we find that the uncertainty (peak widths) in the independent component orientations changes from 0.06 to 0.1 radians as the sampling is decreased from $M = 20$ to $M = 2$. So even with few samplings, a relatively accurate determination of the independent component coordinate system can be made.

Figure 3: Histogram of the independent component orientation solutions for four different samplings. Solutions were generated from 20000 randomly chosen pairs of positions. The curves, from darkest to lightest, correspond to solutions for the noiseless, $M = 20$, $11$ and $2$ simulations. The noiseless solution histogram curve extends to a height of approximately 15000 counts, and is accurate to the width of the bin. The slight scatter is due to discretization noise. Spikes at $\theta = 0$ and $-\pi/2$ correspond to pairs of positions which contain no information.

Figure 4: The centers and widths of the solution peaks as a function of the minimum expected number of counts $M$. From the source distribution, the maximum expected number of counts is $9M$. Information was only extracted from regions with more than $2M$ counts.
The actual independent component orientations as determined from the mixing matrix $A$ are shown by the two dashed lines. The solutions are very accurate even for small samplings.

4 RELATION TO DECORRELATION

Ideally, if a mixed tensor (transforming as $H \to A^{-1} \tilde H A$) with the full degrees of freedom could be found which is diagonal if and only if the joint distribution appears in product form, then the independent component coordinate directions would coincide with the tensor's eigenvectors. However, the preceding analysis shows that a maximum of $n(n-1)/2$ constraints contain all the information that exists locally. This, however, provides a nice connection with decorrelation. Starting with the characteristic function of $\log p(\vec x)$,

$$\phi(\vec k) = \int e^{i \vec k \cdot \vec x} \log p(\vec x)\, d\vec x,$$

the off-diagonal terms of $H^{\log p}$ are given by

$$H^{\log p}_{ij} = -\frac{1}{(2\pi)^n} \int k_i k_j\, \phi(\vec k)\, e^{-i \vec k \cdot \vec x}\, d\vec k,$$

which can loosely be seen as the second-order cross-moments in $\phi(\vec k)$. Thus diagonalization of $H^{\log p}$ roughly translates into decorrelation in $\phi(\vec k)$. It should be noted that $\phi(\vec k)$ is not a proper distribution function. In fact, it is a complex-valued function with $\phi(\vec k) = \phi^*(-\vec k)$. Consequently, the summation in the above equation is not an expectation value, and needs to be interpreted as a superposition of plane waves with specified wavelengths, amplitudes and phases.

5 DISCUSSION

The introduced functional decomposition defines a generalization of independent component analysis which is valid for all multivariate functions. A rigorous notion of the decomposition of a multivariate function into a set of lower-dimensional factors is presented. With only the assumption of local twice-differentiability, we derive an analytic solution for this factorization [1]. A new algorithm is presented which, in contrast to iterative non-local parametric density-estimation ICA algorithms [2, 3, 4], performs the decomposition analytically using local geometric information.
The analytic nature of this approach allows for a proper treatment of source separation in the presence of uncertainty, while the local nature allows for a local determination of the source coordinate system. This leaves open the possibility of describing a position-dependent independent component coordinate system with local linear coordinate patches. The presented class factorization formalism removes the decomposition assumptions needed for independent component analysis, and reinforces the well-known fact that sources are recoverable only up to linear transformation. By modifying the equivalence class relation, a rich underlying algebraic structure with both multiplication and addition can be constructed. Also, it is clear that the matrix of second derivatives reveals an even more general combinatorial undirected graphical structure of the multivariate function. These topics, as well as uniqueness issues of the factorization, will be addressed elsewhere [5].

The author is grateful to Jack Cowan, David Grier and Robert Wald for many invaluable discussions.

References

[1] J. K. Lin. Local Independent Component Analysis. Ph.D. thesis, University of Chicago, 1997.
[2] A. J. Bell and T. J. Sejnowski. Neural Computation 7, 1129 (1995).
[3] S. Amari, A. Cichocki, and H. Yang, in Advances in Neural Information Processing Systems 8, edited by D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo (MIT Press, Cambridge, MA, 1996), pp. 757-763.
[4] B. A. Pearlmutter and L. Parra, in Advances in Neural Information Processing Systems 9, edited by M. C. Mozer, M. I. Jordan, and T. Petsche (MIT Press, Cambridge, MA, 1997), pp. 613-619.
[5] J. K. Lin. Graphical Structure of Multivariate Functions, in preparation.
An Annealed Self-Organizing Map for Source Channel Coding

Matthias Burger, Thore Graepel, and Klaus Obermayer
Department of Computer Science
Technical University of Berlin
FR 2-1, Franklinstr. 28/29, 10587 Berlin, Germany
{burger, graepel2, oby}@cs.tu-berlin.de

Abstract

We derive and analyse robust optimization schemes for noisy vector quantization on the basis of deterministic annealing. Starting from a cost function for central clustering that incorporates distortions from channel noise, we develop a soft topographic vector quantization algorithm (STVQ) which is based on the maximum entropy principle and which performs a maximum-likelihood estimate in an expectation-maximization (EM) fashion. Annealing in the temperature parameter $\beta$ leads to phase transitions in the existing code vector representation during the cooling process, for which we calculate critical temperatures and modes as a function of the eigenvectors and eigenvalues of the covariance matrix of the data and the transition matrix of the channel noise. A whole family of vector quantization algorithms is derived from STVQ, among them a deterministic annealing scheme for Kohonen's self-organizing map (SOM). This algorithm, which we call SSOM, is then applied to vector quantization of image data to be sent via a noisy binary symmetric channel. The algorithm's performance is compared to those of LBG and STVQ. While it is naturally superior to LBG, which does not take into account channel noise, its results compare very well to those of STVQ, which is computationally much more demanding.

1 INTRODUCTION

Noisy vector quantization is an important lossy coding scheme for data to be transmitted over noisy communication lines. It is especially suited for speech and image data which in many applications have to be transmitted under low-bandwidth/high-noise-level conditions. Following the idea of (Farvardin, 1990) and (Luttrell, 1989) of jointly optimizing the codebook and the data representation w.r.t.
a given channel noise, we apply a deterministic annealing scheme (Rose, 1990; Buhmann, 1997) to the problem and develop a soft topographic vector quantization algorithm (STVQ) (cf. Heskes, 1995; Miller, 1994). From STVQ we can derive a class of vector quantization algorithms, among which we find SSOM, a deterministic annealing variant of Kohonen's self-organizing map (Kohonen, 1995), as an approximation. While the SSOM, like the SOM, does not minimize any known energy function (Luttrell, 1989), it is computationally less demanding than STVQ. The deterministic annealing scheme enables us to use the neighborhood function of the SOM solely to encode the desired transition probabilities of the channel noise and thus opens up new possibilities for the usage of SOMs with arbitrary neighborhood functions. We analyse phase transitions during the annealing and demonstrate the performance of SSOM by applying it to lossy image data compression for transmission via noisy channels.

2 DERIVATION OF A CLASS OF VECTOR QUANTIZERS

Vector quantization is a method of encoding data by grouping the data vectors and providing a representative in data space for each group. Given a set $\mathcal{X}$ of data vectors $x_i \in \mathbb{R}^d$, $i = 1, \ldots, D$, the objective of vector quantization is to find a set $\mathcal{W}$ of code vectors $w_r$, $r = 0, \ldots, N-1$, and a set $\mathcal{M}$ of binary assignment variables $m_{ir}$, $\sum_r m_{ir} = 1, \forall i$, such that the cost function

$$E(\mathcal{M}, \mathcal{W} \mid \mathcal{X}) = \sum_i \sum_r m_{ir}\, E_r(x_i, \mathcal{W}) \qquad (1)$$

is minimized. $E_r(x_i, \mathcal{W})$ denotes the cost of assigning data point $x_i$ to code vector $w_r$. Following an idea by (Luttrell, 1994), we consider the case that the code labels $r$ form a compressed encoding of the data for the purpose of transmission via a noisy channel (see Figure 1). The distortion caused by the channel noise is modeled by a matrix $H$ of transition probabilities $h_{rs}$,
$\sum_s h_{rs} = 1, \forall r$, for the noise-induced change of assignment of a data vector $x_i$ from code vector $w_r$ to code vector $w_s$. After transmission the received index $s$ is decoded using its code vector $w_s$. Averaging the squared Euclidean distance $\|x_i - w_s\|^2$ over all possible transitions yields the assignment costs

$$E_r(x_i, \mathcal{W}) = \frac{1}{2} \sum_s h_{rs}\, \|x_i - w_s\|^2, \qquad (2)$$

where the factor $1/2$ is introduced for computational convenience. Starting from the cost function $E$ given in Eqs. (1), (2), the Gibbs distribution $P(\mathcal{M}, \mathcal{W} \mid \mathcal{X}) = \frac{1}{Z} \exp(-\beta E(\mathcal{M}, \mathcal{W} \mid \mathcal{X}))$ can be obtained via the principle of maximum entropy under the constraint of a given average cost $\langle E \rangle$. The Lagrange multiplier $\beta$ is associated with $\langle E \rangle$ and is interpreted as an inverse temperature that determines the fuzziness of assignments. In order to generalize from the given training set $\mathcal{X}$, we calculate the most likely set of code vectors from the probability distribution $P(\mathcal{M}, \mathcal{W} \mid \mathcal{X})$ marginalized over all legal sets of assignments $\mathcal{M}$. For a given value of $\beta$ we obtain

$$w_r = \frac{\sum_i x_i \sum_s h_{rs}\, P(x_i \in s)}{\sum_i \sum_s h_{rs}\, P(x_i \in s)}, \quad \forall r, \qquad (3)$$

where $P(x_i \in s) = \langle m_{is} \rangle$,

$$P(x_i \in s) = \frac{\exp\left(-\frac{\beta}{2} \sum_t h_{st}\, \|x_i - w_t\|^2\right)}{\sum_u \exp\left(-\frac{\beta}{2} \sum_t h_{ut}\, \|x_i - w_t\|^2\right)}, \qquad (4)$$

is the assignment probability of data vector $x_i$ to code vector $w_s$. Solving Eqs. (3), (4) by fixed-point iteration comprises an expectation-maximization algorithm.

Figure 1: Cartoon of a generic data communication problem. The encoder assigns input vectors $x_i$ to labeled code vectors $w_r$. Their indices $r$ are then transmitted via a noisy channel which is characterized by a set of transition probabilities $h_{rs}$. The decoder expands the received index $s$ to its code vector $w_s$, which represents the data vectors assigned to it during encoding. The total error is measured via the squared Euclidean distance between the original data vector $x_i$ and its representative $w_s$, averaged over all transitions $r \to s$.
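Eqs. (3), (4) can be sketched directly as an annealed EM loop; the data set, codebook size, BER, and schedule below are illustrative choices, not the paper's experiments:

```python
import numpy as np

# Deterministic-annealing STVQ sketch: E-step Eq. (4), M-step Eq. (3),
# with the BSC transition matrix of Eq. (6). All parameters are made up.
rng = np.random.default_rng(0)
means = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
X = np.vstack([rng.normal(m, 0.1, size=(100, 2)) for m in means])

N, n, eps = 4, 2, 0.05                      # codebook size, index bits, BER
dH = np.array([[bin(r ^ s).count("1") for s in range(N)] for r in range(N)])
Hc = (1 - eps) ** (n - dH) * eps ** dH      # h_rs, Eq. (6); rows sum to 1

W = X.mean(0) + 1e-3 * rng.normal(size=(N, 2))   # start at the center of mass
beta = 0.5
for _ in range(80):
    # E-step, Eq. (4): P(x_i in s) ∝ exp(-β/2 Σ_t h_st ||x_i - w_t||²)
    D = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    E = 0.5 * D @ Hc.T
    P = np.exp(-beta * (E - E.min(1, keepdims=True)))
    P /= P.sum(1, keepdims=True)
    # M-step, Eq. (3): w_r = Σ_i x_i Σ_s h_rs P(x_i in s) / Σ_{i,s} h_rs P(x_i in s)
    Q = P @ Hc.T
    W = (Q.T @ X) / Q.sum(0)[:, None]
    beta *= 1.15                            # exponential annealing schedule
print(np.round(W, 2))
```

At high $\beta$ the assignments harden; with channel noise, the converged code vectors sit slightly inside the cluster means, toward the center of mass, reflecting the channel-robust codebook.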
The E-step, Eq. (4), determines the assignment probabilities $P(x_i \in s)$ for all data points $x_i$ and the old code vectors $w_s$, and the M-step, Eq. (3), determines the new code vectors $w_r$ from the new assignment probabilities $P(x_i \in s)$. In order to find the global minimum of $E$, $\beta$ is increased from $\beta = 0$ according to an annealing schedule which tracks the solution from the easily solvable convex problem at low $\beta$ to the exact solution of Eqs. (1), (2) at infinite $\beta$. In the following we call the solution of Eqs. (3), (4) the soft topographic vector quantizer (STVQ).

Eqs. (3), (4) are the starting point for a whole class of vector quantization algorithms (Figure 2). The approximation $h_{rs} \to \delta_{rs}$ applied to Eq. (4) leads to a soft version of Kohonen's self-organizing map (SSOM); if additionally applied to Eq. (3), soft clustering (SC) (Rose, 1990) is recovered. $\beta \to \infty$ leads to the corresponding "hard" versions: topographic vector quantisation (TVQ) (Luttrell, 1989), the self-organizing map (SOM) (Kohonen, 1995), and LBG. In the following, we will focus on the soft self-organizing map (SSOM). SSOM is computationally less demanding than STVQ, but offers - in contrast to the traditional SOM - a robust deterministic annealing optimization scheme. Hence it is possible to extend the SOM approach to arbitrary non-trivial neighborhood functions $h_{rs}$ as required, e.g., for source channel coding problems for noisy channels.

3 PHASE TRANSITIONS IN THE ANNEALING

From (Rose, 1990) it is known that annealing in $\beta$ changes the representation of the data. Code vectors split with increasing $\beta$, and the size of the codebook for a fixed $\beta$ is given by the number of code vectors that have split up to that point. With non-diagonal $H$, however, permutation symmetry is broken and the "splitting" behavior of the code vectors changes.
At infinite temperature every data vector $x_i$ is assigned to every code vector $w_r$ with equal probability $P^0(x_i \in r) = 1/N$, where $N$ is the size of the codebook. Hence all code vectors are located in the center of mass, $w_r^0 = \frac{1}{D} \sum_i x_i, \forall r$, of the data. Expanding the r.h.s. of Eq. (3) to first order around the fixed point $\{w_r^0\}$ and assuming $h_{rs} = h_{sr}, \forall r, s$, we obtain the critical value (5) for the inverse temperature, at which the center-of-mass solution becomes unstable. $\lambda^x_{\max}$ is the largest eigenvalue of the covariance matrix $C = \frac{1}{D}\sum_i x_i x_i^T$ of the data and corresponds to their variance $\lambda^x_{\max} = \sigma^2_{\max}$ along the principal axis, which is given by the associated eigenvector $v^x_{\max}$ and along which code vectors split. $\lambda^G_{\max}$ is the largest eigenvalue of a matrix $G$ whose elements are given by $g_{rt} = \sum_s h_{rs}(h_{st} - \bar h)$. The $r$th component of the corresponding eigenvector $v^G_{\max}$ determines for each code vector $w_r$ in which direction along the principal axis it departs from $w_r^0$ and how it moves relative to the other code vectors. For SSOM a similar result is obtained, with $G$ in Eq. (5) simply being replaced by $G^{SSOM}$, $g^{SSOM}_{rt} = h_{rt} - \bar h$. See (Graepel, 1997) for details.

Figure 2: Class of vector quantizers derived from STVQ, together with approximations and limits (see text). The "S" in front stands for "soft" to indicate the probabilistic approach.

4 NUMERICAL RESULTS

In the following we consider a binary symmetric channel (BSC) with a bit error rate (BER) $\varepsilon$. Assuming that the length of the code indices is $n$ bits, the matrix elements of the transition matrix $H$ are

$$h_{rs} = (1 - \varepsilon)^{n - d_H(r,s)}\, \varepsilon^{d_H(r,s)}, \qquad (6)$$

where $d_H(r, s)$ is the Hamming distance between the binary representations of $r$ and $s$.
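For the 2-bit toy example discussed below, Eq. (6) and the splitting mode can be checked directly; this sketch assumes $\bar h = 1/N$ in the definition of $G$:

```python
import numpy as np

# BSC transition matrix of Eq. (6) for n = 2 bits, and the splitting mode of
# the matrix G, g_rt = Σ_s h_rs (h_st − h̄), assuming h̄ = 1/N.
N, n, eps = 4, 2, 0.08
dH = np.array([[bin(r ^ s).count("1") for s in range(N)] for r in range(N)])
Hc = (1 - eps) ** (n - dH) * eps ** dH
assert np.allclose(Hc.sum(axis=1), 1.0)      # rows are probability vectors

G = Hc @ (Hc - 1.0 / N)
lam_max = np.linalg.eigvalsh(G).max()
v = np.array([1.0, 0.0, 0.0, -1.0])          # mode reported in Section 4.1
print(np.allclose(G @ v, lam_max * v))       # True: v lies in the top eigenspace
print(np.isclose(lam_max, (1 - 2 * eps) ** 2))  # True
```

Since $H$ depends only on $r \oplus s$, its eigenvectors are the Walsh characters, so the top eigenvalue of $G$ is two-fold degenerate and $(1, 0, 0, -1)^T$ is one vector in that eigenspace: the two code vectors with Hamming distance 2 split in opposite directions while the other two stay at the center.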
4.1 TOY PROBLEM

The numerical analysis of the phase transitions described in the previous section was performed on a toy data set consisting of 2000 data vectors drawn from a two-dimensional elongated Gaussian distribution $P(x) = (2\pi)^{-1} |C|^{-\frac{1}{2}} \exp(-\frac{1}{2} x^T C^{-1} x)$ with diagonal covariance matrix $C = \mathrm{diag}(1, 0.04)$. The size of the codebook was $N = 4$, corresponding to $n = 2$ bits. Figure 3 (left) shows the $x$-coordinates of the positions of the code vectors in data space as functions of the inverse temperature $\beta$. At a critical inverse temperature $\beta^*$ the code vectors split along the $x$-axis, which is the principal axis of the distribution of data points. In accordance with the eigenvector $v^G_{\max} = (1, 0, 0, -1)^T$ for the largest eigenvalue $\lambda^G_{\max}$ of the matrix $G$, two code vectors with Hamming distance $d_H = 2$ move to opposite positions along the principal axis, and two remain at the center. Note the degeneracy of eigenvalues for matrix (6). Figure 3 (right) shows the critical inverse temperature $\beta^*$ as a function of the BER for both STVQ (crosses) and SSOM (dots). Results are in very good agreement with the theoretical predictions of Eq. (5) (solid line). The inset displays the average cost $\langle E \rangle = \frac{1}{2} \sum_i \sum_r P(x_i \in r) \sum_s h_{rs}\, \|x_i - w_s\|^2$ as a function of $\beta$ for $\varepsilon = 0.08$ for STVQ and SSOM. The drop of the average cost occurs at the critical inverse temperature $\beta^*$.

Figure 3: Phase transitions in the 2-bit "toy" problem. (left) $x$-coordinate of code vectors for the SSOM case plotted vs. inverse temperature $\beta$, $\varepsilon = 0.08$. The splitting of the four code vectors occurs at $\beta = 1.25$, which is in very good accordance with the theory. (right) Critical values of $\beta$ for SSOM (dots) and STVQ (crosses), determined via the kink in the average cost (inset: $\varepsilon = 0.08$, top line STVQ), which indicates the phase transition.
Solid lines denote theoretical predictions. The convergence parameter for the fixed-point iteration, giving the upper limit for the difference in successive code vector positions per dimension, was $\delta = 5.0 \times 10^{-10}$.

4.2 SOURCE CHANNEL CODING FOR IMAGE DATA

In order to demonstrate the applicability of STVQ, and in particular of SSOM, to source channel coding, we employed both algorithms for the compression of image data, which were then sent via a noisy channel and decoded after transmission. As a training set we used three 512 x 512 pixel, 256 gray-value images from different scenes with blocksize $d = 2 \times 2$. The size of the codebook was chosen to be $N = 16$ in order to achieve a compression to 1 bpp. We applied an exponential annealing schedule given by $\beta_{t+1} = 2\beta_t$ and determined the start value $\beta_0$ to be just below the critical $\beta^*$ for the first split as given in Eq. (5). Note that with the transition matrix as given in Eq. (6) this optimization corresponds to the embedding of an $n = 4$ dimensional hypercube in the $d = 4$ dimensional data space. We tested the resulting codebooks by encoding our test image Lena (Figure 5), which had not been used for determining the codebook, simulating the transmission of the indices via a noisy binary symmetric channel with a given bit error rate, and reconstructing the image using the codebook. The results are summarized in Figure 4, which shows a plot of the signal-to-noise ratio (SNR) as a function of the bit error rate for STVQ (dots), SSOM (vertical crosses), and LBG (oblique crosses). STVQ shows the best performance, especially for high BERs, where it is naturally far superior to the LBG algorithm, which does not take into account channel noise. SSOM, however, performs only slightly worse (approx. 1 dB) than STVQ.
Considering the fact that SSOM is computationally much less demanding than STVQ ($O(N)$ for encoding) - due to the omission of the convolution with $h_{rs}$ in Eq. (4) - the result demonstrates the efficiency of SSOM for source channel coding. Figure 4 also shows the generalization behavior of a SSOM codebook optimized for a BER of 0.05 (rectangles). Since this codebook was optimized for $\varepsilon = 0.05$, it performs worse than appropriately trained SSOM codebooks for other values of BER, but still performs better than LBG except for low values of BER. At low values, SSOMs trained for the noisy case are outperformed by LBG because robustness w.r.t. channel noise is achieved at the expense of an optimal data representation in the noise-free case. Figure 5, finally, provides a visual impression of the performance of the different vector quantizers at a BER of 0.033. While the reconstruction for STVQ is only slightly better than the one for SSOM, both are clearly superior to the reconstruction for LBG.

1 The Lenna Story can be found at http://www.isr.com/ chuck/lennapgllenna.shtml

Figure 4: Comparison between different vector quantizers for image compression, noisy channel (BSC) transmission, and reconstruction. The plot shows the signal-to-noise ratio (SNR), defined as $10 \log_{10}(\sigma_{signal}/\sigma_{noise})$, as a function of bit error rate (BER) for STVQ and SSOM, each optimized for the given channel noise, for SSOM optimized for a BER of 0.05, and for LBG. The training set consisted of three 512 x 512 pixel, 256 gray-value images with blocksize $d = 2 \times 2$. The codebook size was $N = 16$, corresponding to 1 bpp. The annealing schedule was given by $\beta_{t+1} = 2\beta_t$, and Lena was used as a test image. The convergence parameter $\delta$ was $1.0 \times 10^{-5}$.

5 CONCLUSION

We presented an algorithm for noisy vector quantization which is based on deterministic annealing (STVQ).
Phase transitions in the annealing process were analysed, and a whole class of vector quantizers could be derived, including standard algorithms such as LBG and "soft" versions as special cases of STVQ. In particular, a fuzzy version of Kohonen's SOM was introduced, which is computationally more efficient than STVQ and still yields very good results, as demonstrated for noisy vector quantization of image data. The deterministic annealing scheme opens up many new possibilities for the usage of SOMs, in particular when the neighborhood function represents non-trivial neighborhood relations.

Acknowledgements

This work was supported by TU Berlin (FIP 13/41). We thank H. Bartsch for help and advice with regard to the image processing example.

References

J. M. Buhmann and T. Hofmann. Robust Vector Quantization by Competitive Learning. Proceedings of ICASSP'97, Munich, (1997).
N. Farvardin. A Study of Vector Quantization for Noisy Channels. IEEE Transactions on Information Theory, vol. 36, p. 799-809 (1990).

436 M. Burger, T. Graepel and K. Obermayer

Figure 5: Lena transmitted over a binary symmetric channel with a BER of 0.033, encoded and reconstructed using different vector quantization algorithms. Original; LBG, SNR 4.64 dB; STVQ, SNR 9.00 dB; SSOM, SNR 7.80 dB.

T. Graepel, M. Burger, and K. Obermayer. Phase Transitions in Stochastic Self-Organizing Maps. Physical Review E, vol. 56, no. 4, p. 3876-3890 (1997).
T. Heskes and B. Kappen. Self-Organizing and Nonparametric Regression. Artificial Neural Networks - ICANN'95, vol. 1, p. 81-86 (1995).
T. Kohonen. Self-Organizing Maps. Springer-Verlag, 1995.
S. P. Luttrell. Self-Organisation: A Derivation from First Principles of a Class of Learning Algorithms. Proceedings of IJCNN'89, Washington DC, vol. 2, p. 495-498 (1989).
S. P. Luttrell. A Bayesian Analysis of Self-Organizing Maps. Neural Computation, vol. 6, p. 767-794 (1994).
D. Miller and K. Rose. Combined Source-Channel Vector Quantization Using Deterministic Annealing. IEEE Transactions on Communications, vol. 42, p. 347-356 (1994).
K. Rose, E. Gurewitz, and G. C. Fox. Statistical Mechanics and Phase Transitions in Clustering. Physical Review Letters, vol. 65, no. 8, p. 945-948 (1990).
How to Dynamically Merge Markov Decision Processes

Satinder Singh
Department of Computer Science
University of Colorado
Boulder, CO 80309-0430
baveja@cs.colorado.edu

David Cohn
Adaptive Systems Group
Harlequin, Inc.
Menlo Park, CA 94025
cohn@harlequin.com

Abstract

We are frequently called upon to perform multiple tasks that compete for our attention and resources. Often we know the optimal solution to each task in isolation; in this paper, we describe how this knowledge can be exploited to efficiently find good solutions for doing the tasks in parallel. We formulate this problem as that of dynamically merging multiple Markov decision processes (MDPs) into a composite MDP, and present a new theoretically-sound dynamic programming algorithm for finding an optimal policy for the composite MDP. We analyze various aspects of our algorithm and illustrate its use on a simple merging problem.

Every day, we are faced with the problem of doing multiple tasks in parallel, each of which competes for our attention and resources. If we are running a job shop, we must decide which machines to allocate to which jobs, and in what order, so that no jobs miss their deadlines. If we are a mail delivery robot, we must find the intended recipients of the mail while simultaneously avoiding fixed obstacles (such as walls) and mobile obstacles (such as people), and still manage to keep ourselves sufficiently charged up. Frequently we know how to perform each task in isolation; this paper considers how we can take the information we have about the individual tasks and combine it to efficiently find an optimal solution for doing the entire set of tasks in parallel. More importantly, we describe a theoretically-sound algorithm for doing this merging dynamically; new tasks (such as a new job arrival at a job shop) can be assimilated online into the solution being found for the ongoing set of simultaneous tasks.
1 The Merging Framework

Many decision-making tasks in control and operations research are naturally formulated as Markov decision processes (MDPs) (e.g., Bertsekas & Tsitsiklis, 1996). Here we define MDPs and then formulate what it means to have multiple simultaneous MDPs.

1.1 Markov decision processes (MDPs)

An MDP is defined via its state set $S$, action set $A$, transition probability matrices $P$, and payoff matrices $R$. On executing action $a$ in state $s$, the probability of transiting to state $s'$ is denoted $P^a(ss')$ and the expected payoff associated with that transition is denoted $R^a(ss')$. We assume throughout that the payoffs are non-negative for all transitions. A policy assigns an action to each state of the MDP. The value of a state under a policy is the expected value of the discounted sum of payoffs obtained when the policy is followed on starting in that state. The objective is to find an optimal policy, one that maximizes the value of every state. The optimal value of state $s$, $V^*(s)$, is its value under the optimal policy. The optimal value function is the solution to the Bellman optimality equations: for all $s \in S$,

$V(s) = \max_{a \in A} \sum_{s'} P^a(ss')\,[R^a(ss') + \gamma V(s')]$,

where the discount factor $0 \le \gamma < 1$ makes future payoffs less valuable than more immediate payoffs (e.g., Bertsekas & Tsitsiklis, 1996). It is known that the optimal policy $\pi^*$ can be determined from $V^*$ as follows: $\pi^*(s) = \arg\max_{a \in A} \sum_{s'} P^a(ss')[R^a(ss') + \gamma V^*(s')]$. Therefore solving an MDP is tantamount to computing its optimal value function.

1.2 Solving MDPs via Value Iteration

Given a model $(S, A, P, R)$ of an MDP, value iteration (e.g., Bertsekas & Tsitsiklis, 1996) can be used to determine the optimal value function. Starting with an initial guess, $V_0$, iterate for all $s$:

$V_{k+1}(s) = \max_{a \in A} \sum_{s' \in S} P^a(ss')[R^a(ss') + \gamma V_k(s')]$.

It is known that $\max_{s \in S} |V_{k+1}(s) - V^*(s)| \le \gamma \max_{s \in S} |V_k(s) - V^*(s)|$, and therefore $V_k$ converges to $V^*$ as $k$ goes to infinity.
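As a concrete illustration of this update (a sketch, not the authors' code), tabular value iteration on a tiny two-state, two-action MDP can be written as:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-10):
    """Tabular value iteration.

    P[a, s, s2] : transition probabilities, R[a, s, s2] : payoffs.
    Iterates V_{k+1}(s) = max_a sum_{s'} P^a(ss')[R^a(ss') + gamma * V_k(s')]."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        Q = (P * (R + gamma * V[None, None, :])).sum(axis=2)  # Q[a, s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    policy = Q.argmax(axis=0)  # greedy policy pi*(s) read off the converged values
    return V_new, policy

# Example: action 1 always reaches state 1 with payoff 1; action 0 pays nothing.
P = np.array([[[1.0, 0.0], [1.0, 0.0]],
              [[0.0, 1.0], [0.0, 1.0]]])
R = np.array([[[0.0, 0.0], [0.0, 0.0]],
              [[1.0, 1.0], [1.0, 1.0]]])
V, pi = value_iteration(P, R)
```

Here the optimal policy takes action 1 everywhere, so the value of every state is the discounted sum $1/(1-\gamma) = 10$, matching the contraction argument above.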
Note that a Q-value (Watkins, 1989) based version of value iteration, and of our algorithm presented below, is also easily defined.

1.3 Multiple Simultaneous MDPs

The notion of an optimal policy is well defined for a single task represented as an MDP. If, however, we have multiple tasks to do in parallel, each with its own state, action, transition probability, and payoff spaces, optimal behavior is not automatically defined. We will assume that payoffs sum across the MDPs, which means we want to select actions for each MDP at every time step so as to maximize the expected discounted value of this summed payoff over time. If actions can be chosen independently for each MDP, then the solution to this "composite" MDP is obvious: do what's optimal for each MDP. More typically, choosing an action for one MDP constrains what actions can be chosen for the others. In a job shop, for example, actions correspond to assignments of resources, and the same physical resource may not be assigned to more than one job simultaneously. Formally, we can define a composite MDP as a set of $N$ MDPs $\{M^i\}_{i=1}^{N}$. We will use superscripts to distinguish the component MDPs; e.g., $S^i$, $A^i$, $P^i$, and $R^i$ are the state, action, transition probability and payoff parameters of MDP $M^i$. The state space of the composite MDP, $S$, is the cross product of the state spaces of the component MDPs, i.e., $S = S^1 \times S^2 \times \cdots \times S^N$. The constraints on actions imply that the action set of the composite MDP, $A$, is some proper subset of the cross product of the $N$ component action spaces. The transition probabilities and the payoffs of the composite MDP are factorial because the following decompositions hold: for all $s, s' \in S$ and $a \in A$, $P^a(ss') = \prod_{i=1}^{N} P^{a^i}(s^i s'^i)$ and $R^a(ss') = \sum_{i=1}^{N} R^{a^i}(s^i s'^i)$. Singh (1997) has previously studied such factorial MDPs, but only for the case of a fixed set of components.
The optimal value function of a composite MDP is well defined, and satisfies the following Bellman equation: for all $s \in S$,

$V(s) = \max_{a \in A} \sum_{s' \in S} \prod_{i=1}^{N} P^{a^i}(s^i s'^i) \Big[ \sum_{i=1}^{N} R^{a^i}(s^i s'^i) + \gamma V(s') \Big]$.   (1)

Note that the Bellman equation for a composite MDP assumes an identical discount factor across component MDPs and is not defined otherwise.

1.4 The Dynamic Merging Problem

Given a composite MDP, and the optimal solution (e.g., the optimal value function) for each of its component MDPs, we would like to efficiently compute the optimal solution for the composite MDP. More generally, we would like to compute the optimal composite policy given only bounds on the value functions of the component MDPs (the motivation for this more general version will become clear in the next section). To the best of our knowledge, the dynamic merging question has not been studied before. Note that the traditional treatment of problems such as job-shop scheduling would formulate them as nonstationary MDPs (however, see Zhang and Dietterich, 1995, for another learning approach). This normally requires augmenting the state space to include a "time" component which indexes all possible state spaces that could arise (e.g., Bertsekas, 1995). This is inefficient, and potentially infeasible unless we know in advance all combinations of possible tasks we will be required to solve. One contribution of this paper is the observation that this type of nonstationary problem can be reformulated as one of dynamically merging (individually) stationary MDPs.

1.4.1 The naive greedy policy is suboptimal

Given bounds on the value functions of the component MDPs, one heuristic composite policy is that of selecting actions according to a one-step greedy rule:

$\pi(s) = \arg\max_{a \in A} \sum_{s'} \prod_{i=1}^{N} P^{a^i}(s^i s'^i) \Big[ \sum_{i=1}^{N} \big( R^{a^i}(s^i s'^i) + \gamma X^i(s'^i) \big) \Big]$,

where $X^i$ is the upper or lower bound of the value function, or the mean of the bounds.
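The factored structure above, with transition probabilities multiplying and payoffs adding across components, can be made concrete with a small sketch (an illustration under the paper's definitions, not code from the paper). For one fixed composite action, the composite matrices over the cross-product state space are:

```python
import itertools
import numpy as np

def compose(Ps, Rs):
    """Composite transition/payoff matrices for one fixed composite action.

    Ps[i][s, s2] and Rs[i][s, s2] are the per-component matrices for that
    component's chosen action. Composite states are tuples of component states;
    the composite P factors as a product and the composite R as a sum."""
    sizes = [P.shape[0] for P in Ps]
    states = list(itertools.product(*[range(n) for n in sizes]))
    n = len(states)
    P = np.zeros((n, n))
    R = np.zeros((n, n))
    for j, s in enumerate(states):
        for k, s2 in enumerate(states):
            P[j, k] = np.prod([Ps[i][s[i], s2[i]] for i in range(len(Ps))])
            R[j, k] = np.sum([Rs[i][s[i], s2[i]] for i in range(len(Ps))])
    return states, P, R
```

With two 2-state components this yields a 4-state composite MDP whose rows of $P$ still sum to one, since the product of the component distributions is itself a distribution over composite successor states.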
It is fairly easy, however, to demonstrate that these policies are substantially suboptimal in many common situations (see Section 3).

2 Dynamic Merging Algorithm

Consider merging $N$ MDPs; job-shop scheduling presents a special case of merging a new single MDP with an old composite MDP consisting of several factor MDPs. One obvious approach to finding the optimal composite policy would be to directly perform value iteration in the composite state and action space. A more efficient approach would make use of the solutions (bounds on optimal value functions) of the existing components; below we describe an algorithm for doing this. Our algorithm will assume that we know the optimal values, or more generally, upper and lower bounds on the optimal values of the states in each component MDP. We use the symbols $L$ and $U$ for the lower and upper bounds; if the optimal value function for the $i$th factor MDP is available, then $L^i = U^i = V^{*,i}$.¹ Our algorithm uses the bounds for the component MDPs to compute bounds on the values of composite states as needed, and then incrementally updates and narrows these initial bounds using a form of value iteration that allows pruning of actions that are not competitive, that is, actions whose bounded values are strictly dominated by the bounded value of some other action.

Initial State: The initial composite state $s_0$ is composed from the start states of all the factor MDPs. In practice (e.g., in job-shop scheduling) the initial composite state is composed of the start state of the new job and whatever the current state of the set of old jobs is. Our algorithm exploits the initial state by only updating states that can occur from the initial state under competitive actions.

Initial Value Step: When we need the value of a composite state $s$ for the first time, we compute upper and lower bounds on its optimal value as follows: $L(s) = \max_{i=1}^{N} L^i(s^i)$, and $U(s) = \sum_{i=1}^{N} U^i(s^i)$.
Initial Update Step: We dynamically allocate upper and lower bound storage space for composite states as we first update them. We also create the initial set of competitive actions for $s$ when we first update its value, as $A(s) = A$. As successive backups narrow the upper and lower bounds of successor states, some actions will no longer be competitive, and will be eliminated from further consideration.

Modified Value Iteration Algorithm: At step $t$, if the state to be updated is $s_t$:

$L_{t+1}(s_t) = \max_{a \in A_t(s_t)} \sum_{s'} P^a(s_t s')\,[R^a(s_t, s') + \gamma L_t(s')]$

$U_{t+1}(s_t) = \max_{a \in A_t(s_t)} \sum_{s'} P^a(s_t s')\,[R^a(s_t, s') + \gamma U_t(s')]$

$A_{t+1}(s_t) = \Big\{ a \in A_t(s_t) : \sum_{s'} P^a(s_t s')[R^a(s_t, s') + \gamma U_t(s')] \ \ge\ \max_{b \in A_t(s_t)} \sum_{s'} P^b(s_t s')[R^b(s_t, s') + \gamma L_t(s')] \Big\}$

$s_{t+1} = s_0$ if $s_t^i$ is terminal for all components $i$; otherwise $s_{t+1}$ is some $s' \in S$ such that $P^a(s_t s') > 0$ for some $a \in A_{t+1}(s_t)$.

The algorithm terminates when only one competitive action remains for each state, or when the ranges of all competitive actions for any state are bounded by an indifference parameter $\epsilon$. To elaborate, the upper and lower bounds on the value of a composite state are backed up using a form of Equation 1. The set of actions that are considered competitive in that state is culled by eliminating any action whose bounded value is strictly dominated by the bounded value of some other action in $A_t(s_t)$. The next state to be updated is chosen randomly from all the states that have non-zero

¹Recall that unsuperscripted quantities refer to the composite MDP, while superscripted quantities refer to component MDPs. Also, $A$ is the set of actions that are available to the composite MDP after taking into account the constraints on picking actions simultaneously for the factor MDPs.
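One backup-and-prune step of the update rules above can be sketched as follows. This is a hypothetical helper (not the authors' implementation), using a tabular dict-based representation for a single state's competitive action set:

```python
def update_bounds_and_prune(actions, P, R, L, U, gamma, s):
    """One backup of the merged-MDP bounds at state s.

    actions : list of currently competitive actions A_t(s)
    P[a][s][s2], R[a][s][s2] : nested dicts of transition probs / payoffs
    L, U : dicts mapping successor states to lower / upper value bounds
    Returns (new lower bound, new upper bound, surviving actions)."""
    def backup(a, V):
        return sum(p * (R[a][s][s2] + gamma * V[s2])
                   for s2, p in P[a][s].items())

    lower = {a: backup(a, L) for a in actions}
    upper = {a: backup(a, U) for a in actions}
    best_lower = max(lower.values())
    # Prune: keep only actions whose upper bound can still beat the best lower bound.
    survivors = [a for a in actions if upper[a] >= best_lower]
    return best_lower, max(upper.values()), survivors
```

When `survivors` shrinks to a single action, the optimal choice at `s` is known even if the bounds themselves have not yet converged, which is exactly the anytime property exploited by the algorithm.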
A significant advantage of using these bounds is that we can prune actions whose upper bounds are worse than the best lower bound. Only states resulting from remaining competitive actions are backed up. When only one competitive action remains, the optimal policy for that state is known, regardless of whether its upper and lower bounds have converged. Another important aspect of our algorithm is that it focuses the backups on states that are reachable on currently competitive actions from the start state. The combined effect of only updating states that are reachable from the start state and further only those that are reachable under currently competitive actions can lead to significant computational savings. This is particularly critical in scheduling, where jobs proceed in a more or less feedforward fashion and the composite start state when a new job comes in can eliminate a large portion of the composite state space. Ideas based on Kaelbling's (1990) interval-estimation algorithm and Moore & Atkeson's (1993) prioritized sweeping algorithm could also be combined into our algorithm. The algorithm has a number of desirable "anytime" characteristics: if we have to pick an action in state So before the algorithm has converged (while multiple competitive actions remain), we pick the action with the highest lower bound. If a new MDP arrives before the algorithm converges, it can be accommodated dynamically using whatever lower and upper bounds exist at the time it arrives. 2.1 Theoretical Analysis In this section we analyze various aspects of our algorithm. UpperBound Calculation: For any composite state, the sum of the optimal values of the component states is an upper bound to the optimal value of the composite state, i.e., V*(s = SI, S2, .. . , SN) ~ 2:~1 V*,i(Si). If there were no constraints among the actions of the factor MDPs then V* (s) would equal L~l V*,i(Si) because of the additive payoffs across MDPs. 
The presence of constraints implies that the sum is an upper bound. Because $V^{*,i}(s^i) \le U^i(s^i)$, the result follows.

Lower Bound Calculation: For any composite state, the maximum of the optimal values of the component states is a lower bound on the optimal value of the composite state, i.e., $V^*(s = (s^1, s^2, \ldots, s^N)) \ge \max_{i=1}^{N} V^{*,i}(s^i)$. To see this for an arbitrary composite state $s$, let the MDP that has the largest component optimal value for state $s$ always choose its component-optimal action first, and then assign actions to the other MDPs so as to respect the action constraints encoded in set $A$. This guarantees at least the value promised by that MDP, because the payoffs are all non-negative. Because $V^{*,i}(s^i) \ge L^i(s^i)$, the result follows.

Pruning of Actions: For any composite state, if the upper bound for any composite action $a$ is lower than the lower bound for some other composite action, then action $a$ cannot be optimal; action $a$ can then safely be discarded from the max in value iteration. Once discarded from the competitive set, an action never needs to be reconsidered. Our algorithm maintains the upper and lower bound status of $U$ and $L$ as it updates them. The result follows.

Convergence: Given enough time, our algorithm converges to the optimal policy and optimal value function for the set of composite states reachable from the start state under the optimal policy. If every state were updated infinitely often, value iteration would converge to the optimal solution for the composite problem independent of the initial guess $V_0$. The difference between standard value iteration and our algorithm is that we discard actions, and do not update states not on the path from the start state under the continually pruned competitive actions. The actions we discard in a state are guaranteed not to be optimal and therefore cannot have any effect on the value of that state.
Also, states that are reachable only under discarded actions are automatically irrelevant to performing optimally from the start state.

3 An Example: Avoiding Predators and Eating Food

We illustrate the use of the merging algorithm on a simple avoid-predator-and-eat-food problem, depicted in Figure 1a. The component MDPs are the avoid-predator task and the eat-food task; the composite MDP must solve these problems simultaneously. In isolation, the tasks avoid-predator and eat-food are fairly easy to learn. The state space of each task is of size $n^4$: 625 states in the case illustrated. Using value iteration, the optimal solutions to both component tasks can be learned in approximately 1000 backups. Directly solving the composite problem requires $n^6$ states (15625 in our case), and requires roughly 1 million backups to converge. Figure 1b compares the performance of several solutions to the avoid-predator-and-eat-food task. The opt-predator and opt-food curves show the performance of value iteration on the two component tasks in isolation; both converge quickly to their optima. While it requires no further backups, the greedy algorithm of Section 1.4.1 falls short of optimal performance. Our merging algorithm, when initialized with solutions for the component tasks (5000 backups each), converges quickly to the optimal solution. Value iteration directly on the composite state space also finds the optimal solution, but requires 4-5 times as many backups. Note that value iteration in the composite state space also updated states on trajectories (as in Barto et al.'s (1995) RTDP algorithm) through the state space, just as in our merging algorithm, only without the benefit of the value function bounds and the pruning of non-competitive actions.

4 Conclusion

The ability to perform multiple decision-making tasks simultaneously, and even to incorporate new tasks dynamically into ongoing previous tasks, is of obvious interest to both cognitive science and engineering.
Using the framework of MDPs for individual decision-making tasks, we have reformulated the above problem as that of dynamically merging MDPs. We have presented a modified value iteration algorithm for dynamically merging MDPs, proved its convergence, and illustrated its use on a simple merging task. As future work, we intend to apply our merging algorithm to a real-world job-shop scheduling problem, extend the algorithm into the framework of semi-Markov decision processes, and explore the performance of the algorithm in the case where a model of the MDPs is not available.

Figure 1: a) Our agent (A) roams an n by n grid. It gets a payoff of 0.5 for every time step it avoids the predator (P), and earns a payoff of 1.0 for every piece of food (f) it finds. The agent moves two steps for every step P makes, and P always moves directly toward A. When food is found, it reappears at a random location on the next time step. On every time step, A has a 10% chance of ignoring its policy and making a random move. b) The mean payoff of different learning strategies vs. number of backups. The bottom two lines show that when trained on either task in isolation, a learner reaches the optimal payoff for that task in fewer than 5000 backups. The greedy approach makes no further backups, but performs well below optimal. The optimal composite solution, trained ab initio, requires nearly 1 million backups. Our algorithm begins with the 5000-backup solutions for the individual tasks, and converges to the optimum 4-5 times more quickly than the ab initio solution.

Acknowledgements

Satinder Singh was supported by NSF grant IIS-9711753.

References

Barto, A. G., Bradtke, S. J., & Singh, S. (1995). Learning to act using real-time dynamic programming.
Artificial Intelligence, 72, 81-138.
Bertsekas, D. P. (1995). Dynamic Programming and Optimal Control. Belmont, MA: Athena Scientific.
Bertsekas, D. P. & Tsitsiklis, J. N. (1996). Neuro-Dynamic Programming. Belmont, MA: Athena Scientific.
Kaelbling, L. P. (1990). Learning in Embedded Systems. PhD thesis, Stanford University, Department of Computer Science, Stanford, CA. Technical Report TR-90-04.
Moore, A. W. & Atkeson, C. G. (1993). Prioritized sweeping: Reinforcement learning with less data and less real time. Machine Learning, 13(1).
Singh, S. (1997). Reinforcement learning in factorial environments. Submitted.
Watkins, C. J. C. H. (1989). Learning from Delayed Rewards. PhD thesis, Cambridge University, Cambridge, England.
Zhang, W. & Dietterich, T. G. (1995). High-performance job-shop scheduling with a time-delay TD(lambda) network. In Advances in Neural Information Processing Systems 8. MIT Press.
Refractoriness and Neural Precision

Michael J. Berry II and Markus Meister
Molecular and Cellular Biology Department
Harvard University
Cambridge, MA 02138

Abstract

The relationship between a neuron's refractory period and the precision of its response to identical stimuli was investigated. We constructed a model of a spiking neuron that combines probabilistic firing with a refractory period. For realistic refractoriness, the model closely reproduced both the average firing rate and the response precision of a retinal ganglion cell. The model is based on a "free" firing rate, which exists in the absence of refractoriness. This function may be a better description of a spiking neuron's response than the peri-stimulus time histogram.

1 INTRODUCTION

The response of neurons to repeated stimuli is intrinsically noisy. In order to take this trial-to-trial variability into account, the response of a spiking neuron is often described by an instantaneous probability for generating an action potential. The response variability of such a model is determined by Poisson counting statistics; in particular, the variance in the spike count is equal to the mean spike count for any time bin (Rieke, 1997). However, recent experiments have found far greater precision in the vertebrate retina (Berry, 1997) and the H1 interneuron in the fly visual system (de Ruyter, 1997). In both cases, the neurons exhibited sharp transitions between silence and nearly maximal firing. When a neuron is firing near its maximum rate, refractoriness causes spikes to become more regularly spaced than for a Poisson process with the same firing rate. Thus, we asked the question: does the refractory period play an important role in a neuron's response precision under these stimulus conditions?

2 FIRING EVENTS IN RETINAL GANGLION CELLS

We addressed the role of refractoriness in the precision of light responses for retinal ganglion cells.
2.1 RECORDING AND STIMULATION

Experiments were performed on the larval tiger salamander. The retina was isolated from the eye and superfused with oxygenated Ringer's solution. Action potentials from retinal ganglion cells were recorded extracellularly with a multi-electrode array, and their spike times were measured relative to the beginning of each stimulus repeat (Meister, 1994). Spatially uniform white light was projected from a computer monitor onto the photoreceptor layer. The intensity was flickered by choosing a new value at random from a Gaussian distribution (mean $\bar{I}$, standard deviation $\sigma_I$) every 30 ms. The mean light level ($\bar{I} = 4 \times 10^{-3}$ W/m²) corresponded to photopic (daylight) vision. Contrast $C$ is defined here as the temporal standard deviation of the light intensity divided by the mean, $C = \sigma_I / \bar{I}$. Recordings extended over 60 repeats of a 60-s segment of random flicker. The qualitative features of ganglion cell responses to random flicker stimulation at 35% contrast are seen in Fig. 1. First, spike trains had extensive periods in which no spikes were seen in 60 repeated trials. Many spike trains were sparse, in that the silent periods covered a large fraction of the total stimulus time. Second, during periods of firing, the peri-stimulus time histogram (PSTH) rose from zero to the maximum firing rate (~200 Hz) on a time scale comparable to the time interval between spikes (~10 ms). We have argued that these responses are better viewed as a set of discrete firing "events" than as a continuously varying firing rate (Berry, 1997). In general, the firing events were bursts containing more than one spike (Fig. 1B). Identifiable firing events were seen across cell types; similar results were also found in the rabbit retina (Berry, 1997).
Figure 1: Response of a salamander ganglion cell to random flicker stimulation. (A) Stimulus intensity in units of the mean for a 0.5-s segment, (B) spike rasters from 60 trials, and (C) the firing rate $r(t)$.

2.2 FIRING EVENT PRECISION

Discrete episodes of ganglion cell firing were recognized from the PSTH as a contiguous period of firing bounded by periods of complete silence. To provide a consistent demarcation of firing events, we drew the boundaries of a firing event at minima $v$ in the PSTH that were significantly lower than neighboring maxima $p_1$ and $p_2$, such that $\sqrt{p_1 p_2}/v \ge \phi$ with 95% confidence (Berry, 1997). With these boundaries defined, every spike in each trial was assigned to exactly one firing event. Measurements of both timing and number precision can be obtained if the spike train is parsed into such firing events. For each firing event $i$, we accumulated the distribution of spike times across trials and calculated several statistics: the average time $T_i$ of the first spike in the event and its standard deviation $\sigma T_i$ across trials, which quantified the temporal jitter of the first spike; similarly, the average number $N_i$ of spikes in the event and its variance $\sigma_{N_i}^2$ across trials, which quantified the precision of spike number. In trials that contained zero spikes for event $i$, no contribution was made to $T_i$ or $\sigma T_i$, while a value of zero was included in the calculation of $N_i$ and $\sigma_{N_i}^2$. For the ganglion cell shown in Fig. 1, the temporal jitter $\sigma T$ of the first spike in an event was very small (1 to 10 ms). Thus, repeated trials of the same stimulus typically elicit action potentials with a timing uncertainty of a few milliseconds. The temporal jitter of all firing events was distilled into a single number $\tau$ by taking the median over all events.
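These per-event statistics, including the Fano factor used below, are straightforward to compute once spikes have been assigned to events. The sketch below is an illustration (not the authors' analysis code) and assumes each event is represented as a list, over trials, of that trial's spike times in the event:

```python
import statistics

def event_statistics(event_trials):
    """Per-event timing and number precision.

    event_trials : list over trials; each entry is the (possibly empty)
    list of spike times assigned to this event on that trial. At least one
    trial must contain a spike. Returns (mean first-spike time, first-spike
    jitter, mean spike count, spike count variance)."""
    first_spikes = [trial[0] for trial in event_trials if trial]
    counts = [len(trial) for trial in event_trials]  # zero-spike trials included
    t_mean = statistics.mean(first_spikes)
    jitter = statistics.pstdev(first_spikes)         # sigma T_i
    n_mean = statistics.mean(counts)                 # N_i
    n_var = statistics.pvariance(counts)             # sigma_{N_i}^2
    return t_mean, jitter, n_mean, n_var

def fano_factor(events):
    """F = <count variance> / <mean count>, averaged over events."""
    stats = [event_statistics(e) for e in events]
    return sum(s[3] for s in stats) / sum(s[2] for s in stats)
```

For a Poisson process the count variance equals the mean count in every event, so `fano_factor` returns values near one; perfectly repeatable events drive it toward zero.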
The variance $\sigma_N^2$ in the spike count was remarkably low as well: it often approached the lower bound imposed by the fact that individual trials necessarily produce integer spike counts. Because $\sigma_N^2 \ll N$ for all events, ganglion cell spike trains cannot be completely characterized by their firing rate (Berry, 1997). The spike number precision of a cell was assessed by computing the average variance over events and dividing by the average spike count: $F = \langle \sigma_N^2 \rangle / \langle N \rangle$. This quantity, also known as the Fano factor, has a value of one for a Poisson process with no refractoriness.

3 PROBABILISTIC MODELS OF A SPIKE TRAIN

We start by reviewing one of the simplest probabilistic models of a spike train, the inhomogeneous Poisson model. Here, the measured spike times $\{t_j\}$ are used to estimate the instantaneous rate $r(t)$ of spike generation during a time bin $\Delta t$. This can be written formally as

$r(t) = \frac{1}{M\,\Delta t} \sum_j \Theta(t_j - t)\,\Theta(t + \Delta t - t_j)$,

where $M$ is the number of repeated stimulus trials and $\Theta(x)$ is the Heaviside function,

$\Theta(x) = \begin{cases} 1 & x \ge 0 \\ 0 & x < 0 \end{cases}$

We can randomly generate a sequence of spike trains from a set of random numbers between zero and one: $\{a_j\}$ with $a_j \in (0, 1]$. If there is a spike at time $t_j$, then the next spike time $t_{j+1}$ is found by numerically solving the equation

$-\ln a_{j+1} = \int_{t_j}^{t_{j+1}} r(t)\,dt$.

3.1 INCLUDING AN ABSOLUTE REFRACTORY PERIOD

In order to add refractoriness to the Poisson spike generator, we expressed the firing rate as the product of a "free" firing rate $q(t)$, which obtains when the neuron is not refractory, and a recovery function $w(t)$, which describes how the neuron recovers from refractoriness (Johnson, 1983; Miller, 1985). When the recovery function is zero, spiking is not possible; and when it is one, spiking is not affected. The modified rule for selecting spikes then becomes

$-\ln a_{j+1} = \int_{t_j}^{t_{j+1}} q(t)\,w(t - t_j)\,dt$.
For an absolute refractory period of duration $\mu$, the weight function is zero for times between 0 and $\mu$ and one otherwise:

$w(t; \mu) = 1 - \Theta(t)\,\Theta(\mu - t)$.

Because the refractory period may exclude spiking in a given time bin, the probability of firing a spike when not prevented by the refractory period is higher than predicted by $r(t)$. This free firing rate $q(t; \mu)$ can be estimated by excluding trials where the neuron is unable to fire due to refractoriness:

$q(t) = \frac{\sum_j \Theta(t_j - t)\,\Theta(t + \Delta t - t_j)}{\Delta t \sum_{\text{trials}} w(t - t_i)}$,

where the sum in the denominator runs over trials and $t_i$ is the spike time nearest to the time bin on a given trial. This restriction follows from the assumption that the recovery function only depends on the time since the last action potential. Notice that this new probability obeys the inequality $q(t) \ge r(t)$ and also that it depends upon the refractory period $\mu$.

Figure 2: Results for model spike trains with an absolute refractory period. (A) Mean firing rate averaged over a 60-s segment (circles), (B) Fano factor $F$, a measure of spike number precision in an event (triangles), and (C) temporal jitter $\tau$ (diamonds), plotted versus the absolute refractory period $\mu$. Shown dotted in each panel is the value for the real data.

With this definition of the free firing rate, we can now generate spike trains with the same first-order statistics (i.e., the average firing rate) for a range of values of the refractory period $\mu$. For each value of $\mu$, we can then compare the second-order statistics (i.e., the precision) of the model spike trains to the real data. To this end, the free rate $q(t)$ was
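A sketch of the spike generator implied by these equations (not the authors' code): the integral is accumulated on a discrete time grid until it crosses the drawn threshold $-\ln a$, with the free rate weighted by the recovery function of the time since the last spike. Setting $\mu = 0$ recovers the plain Poisson generator of Section 3.

```python
import math
import random

def refractory_spikes(free_rate, dt, mu, rng=random.random):
    """Generate spike times from a discretized free rate q(t) (Hz), bin width dt (s),
    and absolute refractory period mu (s).

    Implements -ln(a) = integral of q(t) w(t - t_last) dt with
    w(t; mu) = 0 for 0 <= t < mu and 1 afterwards."""
    spikes = []
    threshold = -math.log(1.0 - rng())          # -ln(a), a drawn from (0, 1]
    accumulated = 0.0
    last_spike = -math.inf                      # no previous spike yet
    for k, q in enumerate(free_rate):
        t = k * dt
        w = 0.0 if (t - last_spike) < mu else 1.0   # recovery function
        accumulated += q * w * dt
        if accumulated >= threshold:
            spikes.append(t)
            last_spike = t
            accumulated = 0.0
            threshold = -math.log(1.0 - rng())
    return spikes
```

By construction every inter-spike interval is at least $\mu$, and for a matched observed rate the spike count across repeats becomes more regular than Poisson, which is the drop in the Fano factor seen in Figure 2B.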
calculated for a 60-s segment of the response to random flicker of the salamander ganglion cell shown in Fig. 1. Then, $q(t)$ was used to generate 60 spike trains. Firing events were identified in the set of model spike trains, and their precision was calculated. Finally, this procedure was repeated 10 times for each value of the refractory period. Figure 2A plots the firing rate (circles) generated by the model, averaged over the entire 60-s segment of random flicker, with error bars equal to the standard deviation of the rate among the 10 repeated sets. The firing rate of the model matches the actual firing rate of the real ganglion cell (dashed) up to refractory periods of $\mu \approx 4$ ms, although the deviation for larger refractory periods is still quite small. For large enough values of the absolute refractory period, there will be inter-spike intervals in the real data that are shorter than $\mu$. In this case, the free firing rate $q(t)$ cannot be enhanced enough to match the observed firing rate. While the mean firing rate is approximately constant for refractory periods up to 5 ms, the precision changes dramatically. Figure 2B shows that the Fano factor $F$ (triangles) has the expected value of 1 for no refractory period, but drops to ~0.2 for the largest refractory period. In Fig. 2C, the temporal jitter $\tau$ (diamonds) also decreases as refractoriness is added, although the effect is not as large as for the precision of spike number. The sharpening of temporal precision is due to the fact that the probability $q(t)$ rises more steeply than $r(t)$ (see Fig. 4), so that the first spike occurs over a narrower range of times. The number precision of the model matches the real data for $\mu$ = 4 to 4.5 ms, and the timing precision matches for $\mu \approx 4$ ms.
Therefore, a probabilistic spike generator with an absolute refractory period can match both the average firing rate and the precision of a retinal ganglion cell's spike train with roughly the same value of one free parameter.

3.2 USING A RELATIVE REFRACTORY PERIOD

Salamander ganglion cells typically have a relative refractory period that lasts beyond their absolute refractory period. This can be seen in Fig. 3A from the distribution of inter-spike intervals P(Δ) for the ganglion cell shown above: the absolute refractory period lasts for only 2 ms, while relative refractoriness extends to ≈ 5 ms. We can include the effects of relative refractoriness by using weight values in w(t) that are between zero and one. Figure 3 illustrates a parameter-free method for determining this weight function. If there were no refractoriness and a neuron had a constant firing rate q, then the inter-spike interval distribution would drop exponentially. This behavior is seen from the curve fit in Fig. 3A for intervals in the range 5 to 10 ms. The recovery function w(t) can then be found from the inter-spike interval distribution (Berry, 1998). Notice in Fig. 3B that the recovery function w(t) is zero out to 3 ms, rises almost linearly between 3 and 5 ms, and then reaches unity beyond 5 ms. Using the weight function shown in Fig. 3B, the free firing rate q(t) was calculated and 10 sets of 60 spike trains were generated. The results, summarized in Table 1, give very close agreement with the real data.

Table 1: Results for a Relative Refractory Period

  Quantity               Real Data   Model     Std. Dev.
  Firing Rate            4.43 Hz     4.44 Hz   0.017 Hz
  Timing Precision τ     3.20 ms     2.95 ms   0.09 ms
  Number Precision F     0.250       0.266     0.004

Thus, a Poisson spike generator with a relative refractory period reproduces the measured precision. A similar test, performed over a population of ganglion cells, also yielded close agreement (Berry, 1998).
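The parameter-free determination of the recovery function can be sketched as follows, under the assumption (consistent with the text, though the paper's exact formula is not reproduced above) that w is the ratio of the observed inter-spike-interval histogram to the exponential fit of its refractoriness-free tail, clipped to [0, 1]. The fit range is a hypothetical parameter of this sketch.

```python
import math

def recovery_function(isi_hist, dt, fit_lo, fit_hi):
    """Estimate w(t): fit an exponential (a line in log-counts) to the
    inter-spike-interval histogram over bins fit_lo..fit_hi assumed free of
    refractoriness, then take observed/fit, clipped to [0, 1]."""
    pts = [(b * dt, math.log(isi_hist[b]))
           for b in range(fit_lo, fit_hi) if isi_hist[b] > 0]
    n = len(pts)
    xbar = sum(x for x, _ in pts) / n
    ybar = sum(y for _, y in pts) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in pts) / \
            sum((x - xbar) ** 2 for x, _ in pts)
    intercept = ybar - slope * xbar
    # ratio of observed counts to the exponential fit, saturating at one
    return [min(1.0, c / math.exp(intercept + slope * b * dt))
            for b, c in enumerate(isi_hist)]
```

Applied to a histogram like Fig. 3A, this yields a w(t) that is zero over the absolute refractory period, rises through the relative refractory range, and saturates at unity.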
Figure 3: Determination of the relative refractory period. (A) The inter-spike interval distribution (diamonds) is fit by an exponential curve (solid), resulting in (B) the recovery function.

Not only is the average firing rate well matched by the model, but the firing rate in each time bin is also very similar. Figure 4A compares the firing rate for the real neuron to that generated by the model. The mean-squared error between the two is 4%, while the counting noise, estimated as the variance of the standard error divided by the variance of r(t), is also 4%. Thus, the agreement is limited by the finite number of repeated trials. Figure 4B compares the free firing rate q(t) to the observed firing rate r(t). q(t) is equal to r(t) at the beginning of a firing event, but becomes much larger after several spikes have occurred. In addition, q(t) is generally smoother than r(t), because there is a greater enhancement in q(t) at times following a peak in r(t). In summary, the free firing rate q(t) can be calculated from the raw spike train with no more computational difficulty than r(t), and thus can be used for any spiking neuron. Furthermore, q(t) has some advantages over r(t): 1) in conjunction with a refractory spike generator, it produces the correct response precision; 2) it does not saturate at high firing rates, so that it can continue to distinguish gradations in the neuron's response. Thus, q(t) may prove useful for constructing models of the input-output relationship of a spiking neuron (Berry, 1998).

Acknowledgments

We would like to thank Mike DeWeese for many useful conversations. One of us, MJB, acknowledges the support of the National Eye Institute. The other, MM, acknowledges the support of the National Science Foundation.
Figure 4: Illustration of the free firing rate. (A) The observed firing rate r(t) for real data (solid) is compared to that from the model (dotted). (B) The free rate q(t) (thick) is shown on the same scale as r(t) (thin). All rates used a time bin of 0.25 ms and boxcar smoothing over 9 bins.

References

Berry, M. J., D. K. Warland, and M. Meister, The Structure and Precision of Retinal Spike Trains. PNAS, USA, 1997. 94: pp. 5411-5416.
Berry II, M. J. and Markus Meister, Refractoriness and Neural Precision. J. Neurosci., 1998, in press.
De Ruyter van Steveninck, R. R., G. D. Lewen, S. P. Strong, R. Koberle, and W. Bialek, Reliability and Variability in Neural Spike Trains. Science, 1997. 275: pp. 1805-1808.
Johnson, D. H. and A. Swami, The Transmission of Signals by Auditory-Nerve Fiber Discharge Patterns. J. Acoust. Soc. Am., 1983. 74: pp. 493-501.
Meister, M., J. Pine, and D. A. Baylor, Multi-Neuronal Signals from the Retina: Acquisition and Analysis. J. Neurosci. Methods, 1994. 51: pp. 95-106.
Miller, M. I., Algorithms for Removing Recovery-Related Distortion from Auditory-Nerve Discharge Patterns. J. Acoust. Soc. Am., 1985. 77: pp. 1452-1464.
Rieke, F., D. K. Warland, R. R. de Ruyter van Steveninck, and W. Bialek, Spikes: Exploring the Neural Code. 1997, Cambridge, MA: MIT Press.
Active Data Clustering

Thomas Hofmann, Center for Biological and Computational Learning, MIT, Cambridge, MA 02139, USA, hofmann@ai.mit.edu
Joachim M. Buhmann, Institut für Informatik III, Universität Bonn, Römerstraße 164, D-53117 Bonn, Germany, jb@cs.uni-bonn.de

Abstract

Active data clustering is a novel technique for clustering of proximity data which utilizes principles from sequential experiment design in order to interleave data generation and data analysis. The proposed active data sampling strategy is based on the expected value of information, a concept rooted in statistical decision theory. This is considered to be an important step towards the analysis of large-scale data sets, because it offers a way to overcome the inherent data sparseness of proximity data. We present applications to unsupervised texture segmentation in computer vision and information retrieval in document databases.

1 Introduction

Data clustering is one of the core methods for numerous tasks in pattern recognition, exploratory data analysis, computer vision, machine learning, data mining, and in many other related fields. Concerning the data representation it is important to distinguish between vectorial data and proximity data, cf. [Jain, Dubes, 1988]. In vectorial data each measurement corresponds to a certain 'feature' evaluated at an external scale. The elementary measurements of proximity data are, in contrast, (dis-)similarity values obtained by comparing pairs of entities from a given data set. Generating proximity data can be advantageous in cases where 'natural' similarity functions exist, while extracting features and supplying a meaningful vector-space metric may be difficult. We will illustrate the data generation process for two exemplary applications: unsupervised segmentation of textured images and data mining in a document database. Textured image segmentation deals with the problem of partitioning an image into regions of homogeneous texture.
In the unsupervised case, this has to be achieved on the basis of texture similarities without prior knowledge about the occurring textures. Our approach follows the ideas of [Geman et al., 1990] to apply a statistical test to empirical distributions of image features at different sites. Suppose we decided to work with the gray-scale representation directly. At every image location p = (x, y) we consider a local sample of gray-values, e.g., in a squared neighborhood around p. Then, the dissimilarity between two sites p_i and p_j is measured by the significance of rejecting the hypothesis that both samples were generated from the same probability distribution. Given a suitable binning (t_k)_{1≤k≤R} and histograms f_i, f_j, respectively, we propose to apply a χ²-test, i.e.,

D_ij = Σ_{k=1}^{R} (f_i(t_k) − f_j(t_k))² / (f_i(t_k) + f_j(t_k)).   (1)

In fact, our experiments are based on a multi-scale Gabor filter representation instead of the raw data, cf. [Hofmann et al., 1997] for more details. The main advantage of the similarity-based approach is that it does not reduce the distributional information, e.g., to some simple first and second order statistics, before comparing textures. This preserves more information and also avoids the ad hoc specification of a suitable metric like a weighted Euclidean distance on vectors of extracted moment statistics. As a second application we consider structuring a database of documents for improved information retrieval. Typical measures of association are based on the number of shared index terms [Van Rijsbergen, 1979]. For example, a document is represented by a (sparse) binary vector B, where each entry corresponds to the occurrence of a certain index term. The dissimilarity can then be defined by the cosine measure

D_ij = 1 − (B_i · B_j) / (‖B_i‖ ‖B_j‖).   (2)

Notice that this measure (like many others) may violate the triangle inequality.
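Both dissimilarity measures can be written down directly. Since the bodies of Eqs. 1 and 2 did not survive extraction intact, the sketch below uses the standard chi-square statistic between two histograms and the standard cosine dissimilarity for binary index-term vectors; treat the exact forms as plausible reconstructions rather than the paper's verbatim definitions.

```python
import math

def chi2_dissimilarity(f_i, f_j):
    """Chi-square dissimilarity between two histograms over the same bins
    (one plausible reading of Eq. 1); bins empty in both are skipped."""
    d = 0.0
    for a, b in zip(f_i, f_j):
        if a + b > 0:
            d += (a - b) ** 2 / (a + b)
    return d

def cosine_dissimilarity(b_i, b_j):
    """Cosine dissimilarity (Eq. 2) between index-term vectors: one minus
    the cosine of the angle between them. Assumes non-zero vectors."""
    dot = sum(x * y for x, y in zip(b_i, b_j))
    ni = math.sqrt(sum(x * x for x in b_i))
    nj = math.sqrt(sum(y * y for y in b_j))
    return 1.0 - dot / (ni * nj)
```

As remarked above, the cosine measure need not satisfy the triangle inequality, which is one reason the clustering cost below is defined directly on dissimilarities rather than on an embedding.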
2 Clustering Sparse Proximity Data

In spite of the potential advantages of similarity-based methods, their major drawback seems to be the scaling behavior with the number of data: given a dataset with N entities, the number of potential pairwise comparisons scales with O(N²). Clearly, it is prohibitive to exhaustively perform or store all dissimilarities for large datasets, and the crucial problem is how to deal with this unavoidable data sparseness. More fundamentally, it is already the data generation process which has to solve the problem of experimental design, by selecting a subset of pairs (i, j) for evaluation. Obviously, a meaningful selection strategy could greatly profit from any knowledge about the grouping structure of the data. This observation leads to the concept of performing a sequential experimental design which interleaves the data clustering with the data acquisition process. We call this technique active data clustering, because it actively selects new data, and uses tentative knowledge to estimate the relevance of missing data. It amounts to inferring from the available data not only a grouping structure, but also learning which future data is most relevant for the clustering problem. This fundamental concept may also be applied to other unsupervised learning problems suffering from data sparseness. The first step in deriving a clustering algorithm is the specification of a suitable objective function. In the case of similarity-based clustering this is not at all a trivial problem, and we have systematically developed an axiomatic approach based on invariance and robustness principles [Hofmann et al., 1997]. Here, we can only give some informal justifications for our choice.
Let us introduce indicator functions to represent data partitionings, M_{iν} being the indicator function for entity o_i belonging to cluster C_ν. For a given number K of clusters, all Boolean functions are summarized in terms of an assignment matrix M ∈ {0, 1}^{N×K}. Each row of M is required to sum to one in order to guarantee a unique cluster membership. To distinguish between known and unknown dissimilarities, index sets or neighborhoods N = (N_1, …, N_N) are introduced. If j ∈ N_i, this means the value of D_ij is available; otherwise it is not known. For simplicity we assume the dissimilarity measure (and in turn the neighborhood relation) to be symmetric, although this is not a necessary requirement. With the help of these definitions, the proposed criterion to assess the quality of a clustering configuration is given by

H(M; D, N) = Σ_{i=1}^{N} Σ_{ν=1}^{K} M_{iν} d_{iν}.   (3)

H additively combines contributions d_{iν} for each entity, where d_{iν} corresponds to the average dissimilarity to entities belonging to cluster C_ν. In the sparse data case, averages are restricted to the fraction of entities with known dissimilarities, i.e., the subset of entities belonging to C_ν ∩ N_i.

3 Expected Value of Information

To motivate our active data selection criterion, consider the simplified sequential problem of inserting a new entity (or object) o_N into a database of N − 1 entities with a given fixed clustering structure. Thus we consider the decision problem of optimally assigning the new object to one of the K clusters. If all dissimilarities between objects o_i and object o_N are known, the optimal assignment only depends on the average dissimilarities to objects in the different clusters, and hence is given by

α = arg min_ν d_{Nν}.   (4)

For incomplete data, the total population averages d_{Nν} are replaced by point estimators d̂_{Nν} obtained by restricting the sums in (4) to N_N, the neighborhood of o_N. Let us furthermore assume we want to compute a fixed number L of dissimilarities before making the terminal decision.
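A direct implementation of the cost function of Eq. 3 is straightforward. The sketch below assumes a hypothetical data layout, not the paper's: M is a vector of cluster indices, D a dissimilarity matrix, and N_i a list of the indices j for which D_ij is known.

```python
def clustering_cost(M, D, N):
    """Sparse-data clustering cost H(M; D, N) of Eq. 3: for each entity i,
    add the average dissimilarity to the known members of its own cluster.
    Entities with no known same-cluster partner contribute zero."""
    cost = 0.0
    for i, nu in enumerate(M):
        known = [j for j in N[i] if M[j] == nu]
        if known:
            cost += sum(D[i][j] for j in known) / len(known)
    return cost
```

On a toy example with two tight pairs, the correct partition scores lower than lumping everything together, as Eq. 3 intends.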
If the entities in each cluster are not further distinguished, we can pick a member at random, once we have decided to sample from a cluster C_ν. The selection problem hence becomes equivalent to the problem of optimally distributing L measurements among K populations, such that the risk of making the wrong decision based on the resulting estimates d̂_{Nν} is minimal. More formally, this risk is given by R = d_{Nα} − d_{Nα*}, where α is the decision based on the subpopulation estimates {d̂_{Nν}} and α* is the true optimum. To model the problem of selecting an optimal experiment we follow the Bayesian approach developed by Raiffa & Schlaifer [Raiffa, Schlaifer, 1961] and compute the so-called Expected Value of Sampling Information (EVSI). As a fundamental step this involves the calculation of distributions for the quantities d_{Nν}. For reasons of computational efficiency we assume that dissimilarities resulting from a comparison with an object in cluster C_ν are normally distributed¹ with mean d_{Nν} and variance σ²_{Nν}. Since the variances are nuisance parameters the risk function R does not depend on, it suffices to calculate the marginal distribution of

¹Other computationally more expensive choices to model within-cluster dissimilarities are skewed distributions like the Gamma-distribution.

Figure 1: (a) Gray-scale visualization of the generated proximity matrix (N = 800). Dark/light gray values correspond to low/high dissimilarities respectively, D_ij being encoded by pixel (i, j). (b) Sampling snapshot for active data clustering after 60000 samples; queried values are depicted in white. (c) Costs evaluated on the complete data for sequential active and random sampling.
d_{Nν}. For the class of statistical models we will consider in the sequel, the empirical mean d̂_{Nν}, the unbiased variance estimator σ̂²_{Nν}, and the sample size m_{Nν} are a sufficient statistic. Depending on these empirical quantities, the marginal posterior distribution of d_{Nν} for uninformative priors is a Student t distribution with t = √m_{Nν}(d_{Nν} − d̂_{Nν})/σ̂_{Nν} and m_{Nν} − 1 degrees of freedom. The corresponding density will be denoted by f_ν(d_{Nν} | d̂_{Nν}, σ̂²_{Nν}, m_{Nν}). With the help of the posterior densities f_ν we define the Expected Value of Perfect Information (EVPI) after having observed (d̂_{Nν}, σ̂²_{Nν}, m_{Nν}) by

EVPI = ∫ … ∫ max_ν {d_{Nα} − d_{Nν}} ∏_{ν=1}^{K} f_ν(d_{Nν} | d̂_{Nν}, σ̂²_{Nν}, m_{Nν}) dd_{N1} … dd_{NK},   (5)

where α = arg min_ν d̂_{Nν}. The EVPI is the loss one expects to incur by making the decision α based on the incomplete information {d̂_{Nν}} instead of the optimal decision α*, or, put the other way round, the expected gain we would obtain if α* were revealed to us. In the case of experimental design, the main quantity of interest is not the EVPI but the Expected Value of Sampling Information (EVSI). The EVSI quantifies how much gain we expect from additional data. The outcome of additional experiments can only be anticipated by making use of the information which is already available. This is known as preposterior analysis. The linearity of the utility measure implies that it suffices to calculate averages with respect to the preposterior distribution [Raiffa, Schlaifer, 1961, Chapter 5.3]. Drawing m⁺_{Nν} additional samples from the ν-th population, and averaging possible outcomes with the (prior) distribution f_ν(d_{Nν} | d̂_{Nν}, σ̂²_{Nν}, m_{Nν}), will not affect the unbiased estimates d̂_{Nν}, σ̂²_{Nν}, but only increase the number of samples m_{Nν} → m_{Nν} + m⁺_{Nν}. Thus, we can compute the EVSI from (5) by replacing the prior densities with their preposterior counterparts.
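A Monte Carlo estimate of the EVPI of Eq. 5 can be sketched as follows. The paper samples the Student t densities with Kinderman's rejection scheme; this sketch instead uses the elementary normal/chi-square representation of a t variate, which is equivalent in distribution. The cluster statistics in the test are synthetic, illustrative values.

```python
import math, random

def sample_posterior_mean(dbar, s2, m, rng):
    """Draw a population mean from its Student-t marginal posterior with
    m - 1 degrees of freedom: dbar + (s/sqrt(m)) * t_{m-1}."""
    df = m - 1
    z = rng.gauss(0.0, 1.0)
    chi2 = rng.gammavariate(df / 2.0, 2.0)   # chi-square with df degrees of freedom
    t = z / math.sqrt(chi2 / df)
    return dbar + math.sqrt(s2 / m) * t

def evpi(stats, n_samples, rng):
    """Monte-Carlo estimate of Eq. 5: stats is a list of (dbar, s2, m) per
    cluster; alpha is the cluster with the smallest empirical mean."""
    alpha = min(range(len(stats)), key=lambda v: stats[v][0])
    total = 0.0
    for _ in range(n_samples):
        draws = [sample_posterior_mean(*s, rng) for s in stats]
        # expected loss of committing to alpha instead of the true optimum
        total += max(draws[alpha] - d for d in draws)
    return total / n_samples
```

When one cluster is clearly best the EVPI vanishes; when posteriors overlap it is strictly positive, which is exactly what steers the sampling toward ambiguous clusters.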
Figure 2: (a) Solution quality for active and random sampling on data generated from a mixture image of 16 Brodatz textures (N = 1024). (b) Cost trajectories and segmentation results for an active and random sampling example run (N = 4096).

To evaluate the K-dimensional integral in (5) or its EVSI variant we apply Monte Carlo techniques, sampling from the Student t densities using Kinderman's rejection sampling scheme, to get an empirical estimate of the random variable ψ_α(d_{N1}, …, d_{NK}) = max_ν {d_{Nα} − d_{Nν}}. Though this enables us in principle to approximate the EVSI of any possible experiment, we cannot efficiently compute it for all possible ways of distributing the L samples among K populations. In the large sample limit, however, the EVSI becomes a concave function of the sampling sizes. This motivates a greedy design procedure of drawing new samples incrementally one by one.

4 Active Data Clustering

So far we have assumed the assignments of all but one entity o_N to be given in advance. This might be realistic in certain on-line applications, but more often we want to simultaneously find assignments for all entities in a dataset. The active data selection procedure hence has to be combined with a recalculation of clustering solutions, because additional data may help us not only to improve our terminal decision, but also with respect to our sampling strategy. A local optimization of H for assignments of a single object o_i can rely on the quantities

g_{iν} = Σ_{j∈N_i} [1/n⁺_{iν} + 1/n⁺_{jν}] M_{jν} D_{ij} − Σ_{j∈N_i} (M_{jν}/(n_{jν} n⁺_{jν})) Σ_{k∈N_j−{i}} M_{kν} D_{jk},   (6)

where n_{iν} = Σ_{j∈N_i}
M_{jν}, n⁻_{iν} = n_{iν} − M_{iν}, and n⁺_{iν} = n⁻_{iν} + 1, by setting M_{iα} = 1 ⟺ α = arg min_ν g_{iν} = arg min_ν H(M | M_{iν} = 1), a claim which can be proved by straightforward algebraic manipulations (cf. [Hofmann et al., 1997]). This effectively amounts to a cluster readjustment by reclassification of objects. For additional evidence arising from new dissimilarities, one thus performs local reassignments, e.g., by cycling through all objects in random order, until no assignment is changing. To avoid unfavorable local minima one may also introduce a computational temperature T and utilize {g_{iν}} for simulated annealing based on the Gibbs sampler [Geman, Geman, 1984],

P{M_{iα} = 1} = exp[−g_{iα}/T] / Σ_{ν=1}^{K} exp[−g_{iν}/T].

Alternatively, Eq. (6) may also serve as the starting point to derive mean-field equations in a deterministic annealing framework, cf. [Hofmann, Buhmann, 1997]. These local
Figure 3: Clustering solution with 20 clusters for 1584 documents on 'clustering'. Clusters are characterized by their 5 most topical and 5 most typical index terms.

optimization algorithms are well-suited for an incremental update after new data has been sampled, as they do not require a complete recalculation from scratch. The probabilistic reformulation in an annealing framework has the further advantage of providing assignment probabilities which can be utilized to improve the randomized 'partner' selection procedure. For any of these algorithms we sequentially update data assignments until a convergence criterion is fulfilled.

5 Results

To illustrate the behavior of the active data selection criterion we have run a series of repeated experiments on artificial data. For N = 800 the data has been divided into 8 groups of 100 entities. Intra-group dissimilarities have been set to zero, while inter-group dissimilarities were defined hierarchically. All values have been corrupted by Gaussian noise. The proximity matrix, the sampling performance, and a sampling snapshot are depicted in Fig. 1.
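The local reassignment sweep of Section 4 can be sketched as follows. For clarity this illustration recomputes the cost H of Eq. 3 from scratch for each candidate move instead of using the incremental quantities g_{iν} of Eq. 6, and it assumes a hypothetical data layout: M a vector of cluster indices, N_i the list of j with D_ij known.

```python
import random

def clustering_cost(M, D, N):
    """Cost H of Eq. 3: per-entity average dissimilarity to the known
    members of its own cluster."""
    cost = 0.0
    for i, nu in enumerate(M):
        known = [j for j in N[i] if M[j] == nu]
        if known:
            cost += sum(D[i][j] for j in known) / len(known)
    return cost

def local_reassignment(M, D, N, K, max_sweeps=50, seed=0):
    """Cycle through objects in random order, moving each to the cluster
    that minimizes H, until no assignment changes (zero-temperature limit
    of the annealing scheme described in the text)."""
    rng = random.Random(seed)
    M = list(M)
    for _ in range(max_sweeps):
        changed = False
        order = list(range(len(M)))
        rng.shuffle(order)
        for i in order:
            old = M[i]
            best, best_cost = old, None
            for nu in range(K):
                M[i] = nu
                c = clustering_cost(M, D, N)
                if best_cost is None or c < best_cost:
                    best, best_cost = nu, c
            M[i] = best
            changed = changed or best != old
        if not changed:
            break
    return M
```

Replacing the greedy choice by a Gibbs draw with probabilities proportional to exp(−g_{iν}/T) recovers the simulated annealing variant.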
The sampling exactly performs as expected: after a short initial phase, the active clustering algorithm spends more samples to disambiguate clusters which possess a higher mean similarity, while fewer dissimilarities are queried for pairs of entities belonging to well separated clusters. For this type of structured data the gain of active sampling increases with the depth of the hierarchy. The final solution variance is due to local minima. Remarkably, the active sampling strategy not only shows a faster improvement, it also finds on average significantly better solutions. Notice that the sampling has been decomposed into stages, refining clustering solutions after sampling of 1000 additional dissimilarities. The results of an experiment for unsupervised texture segmentation are shown in Fig. 2. To obtain a close to optimal solution the active sampling strategy needs roughly less than 50% of the sample size required by random sampling for both a resolution of N = 1024 and N = 4096. At a 64 x 64 resolution, for L = 100K, 150K, 200K actively selected samples the random strategy needs on average L = 120K, 300K, 440K samples, respectively, to obtain a comparable solution quality. Obviously, active sampling can only be successful in an intermediate regime: if too little is known, we cannot infer additional information to improve our sampling; if the sample is large enough to reliably detect clusters, there is no need to sample any more. Yet, this intermediate regime significantly increases with K (and N). Finally, we have clustered 1584 documents containing abstracts of papers with clustering as a title word. For K = 20 clusters² active clustering needed 120000 samples (< 10% of the data) to achieve a solution quality within 1% of the asymptotic solution. A random strategy on average required 230000 samples. Fig.
3 shows the achieved clustering solution, summarizing clusters by topical (most frequent) and typical (most characteristic) index terms. The found solution gives a good overview of areas dealing with clusters and clustering³.

6 Conclusion

As we have demonstrated, the concept of expected value of information fits nicely into an optimization approach to clustering of proximity data, and establishes a sound foundation of active data clustering in statistical decision theory. On the medium-size data sets used for validation, active clustering achieved consistently better performance compared to random selection. This makes it a promising technique for automated structure detection and data mining applications in large databases. Further work has to address stopping rules and speed-up techniques to accelerate the evaluation of the selection criterion, as well as a unification with annealing methods and hierarchical clustering.

Acknowledgments

This work was supported by the Federal Ministry of Education and Science BMBF under grant # 01 M 3021 A/4 and by an M.I.T. Faculty Sponsor's Discretionary Fund.

References

[Geman et al., 1990] Geman, D., Geman, S., Graffigne, C., Dong, P. (1990). Boundary Detection by Constrained Optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7), 609-628.
[Geman, Geman, 1984] Geman, S., Geman, D. (1984). Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6), 721-741.
[Hofmann, Buhmann, 1997] Hofmann, Th., Buhmann, J. M. (1997). Pairwise Data Clustering by Deterministic Annealing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(1), 1-14.
[Hofmann et al., 1997] Hofmann, Th., Puzicha, J., Buhmann, J. M. (1997). Deterministic Annealing for Unsupervised Texture Segmentation.
Pages 213-228 of: Proceedings of the International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition. Lecture Notes in Computer Science, vol. 1223.
[Jain, Dubes, 1988] Jain, A. K., Dubes, R. C. (1988). Algorithms for Clustering Data. Englewood Cliffs, NJ: Prentice Hall.
[Raiffa, Schlaifer, 1961] Raiffa, H., Schlaifer, R. (1961). Applied Statistical Decision Theory. Cambridge, MA: MIT Press.
[Van Rijsbergen, 1979] Van Rijsbergen, C. J. (1979). Information Retrieval. Butterworths, London/Boston.

²The number of clusters was determined by a criterion based on complexity costs.
³Is it by chance that 'fuzzy' techniques are 'softly' distributed over two clusters?
Boltzmann Machine learning using mean field theory and linear response correction

H. J. Kappen, Department of Biophysics, University of Nijmegen, Geert Grooteplein 21, NL 6525 EZ Nijmegen, The Netherlands
F. B. Rodriguez, Instituto de Ingeniería del Conocimiento & Departamento de Ingeniería Informática, Universidad Autónoma de Madrid, Canto Blanco, 28049 Madrid, Spain

Abstract

We present a new approximate learning algorithm for Boltzmann Machines, using a systematic expansion of the Gibbs free energy to second order in the weights. The linear response correction to the correlations is given by the Hessian of the Gibbs free energy. The computational complexity of the algorithm is cubic in the number of neurons. We compare the performance of the exact BM learning algorithm with first order (Weiss) mean field theory and second order (TAP) mean field theory. The learning task consists of a fully connected Ising spin glass model on 10 neurons. We conclude that 1) the method works well for paramagnetic problems, 2) the TAP correction gives a significant improvement over the Weiss mean field theory, both for paramagnetic and spin glass problems, and 3) the inclusion of diagonal weights improves the Weiss approximation for paramagnetic problems, but not for spin glass problems.

1 Introduction

Boltzmann Machines (BMs) [1] are networks of binary neurons with a stochastic neuron dynamics, known as Glauber dynamics. Assuming symmetric connections between neurons, the probability distribution over neuron states s will become stationary and is given by the Boltzmann-Gibbs distribution P(s). The Boltzmann distribution is a known function of the weights and thresholds of the network. However, computation of P(s) or any statistics involving P(s), such as mean firing rates or correlations, requires exponential time in the number of neurons.
This is due to the fact that P(s) contains a normalization term Z, which involves a sum over all states in the network, of which there are exponentially many. This problem is particularly important for BM learning. Using statistical sampling techniques [2], learning can be significantly improved [1]. However, the method has rather poor convergence and can only be applied to small networks. In [3, 4], an acceleration method for learning in BMs is proposed using mean field theory by replacing ⟨s_i s_j⟩ by m_i m_j in the learning rule. It can be shown [5] that such a naive mean field approximation of the learning rules does not converge in general. Furthermore, we argue that the correlations can be computed using the linear response theorem [6]. In [7, 5] the mean field approximation is derived by making use of the properties of convex functions (Jensen's inequality and tangential bounds). In this paper we present an alternative derivation which uses a Legendre transformation and a small coupling expansion [8]. It has the advantage that higher order contributions (TAP and higher) can be computed in a systematic manner and that it may be applicable to arbitrary graphical models.

2 Boltzmann Machine learning

The Boltzmann Machine is defined as follows. The possible configurations of the network can be characterized by a vector s = (s_1, …, s_i, …, s_n), where s_i = ±1 is the state of neuron i, and n the total number of neurons. Neurons are updated using Glauber dynamics. Let us define the energy of a configuration s as

−E(s) = ½ Σ_{i,j} w_ij s_i s_j + Σ_i s_i θ_i.

After long times, the probability to find the network in a state s becomes independent of time (thermal equilibrium) and is given by the Boltzmann distribution

p(s) = (1/Z) exp{−E(s)}.   (1)

Z = Σ_s exp{−E(s)} is the partition function which normalizes the probability distribution.
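For a small network, the Boltzmann statistics can be computed exactly by brute-force enumeration, which makes the exponential cost of the partition function explicit. This is an illustrative sketch, with neuron states s_i ∈ {−1, +1} and the energy of Eq. 1.

```python
import itertools, math

def boltzmann_stats(w, theta):
    """Exact statistics of the Boltzmann distribution (Eq. 1) for a small
    network, summing over all 2^n states s in {-1,+1}^n.
    Returns (Z, means <s_i>, correlations <s_i s_j>)."""
    n = len(theta)
    Z = 0.0
    m = [0.0] * n
    corr = [[0.0] * n for _ in range(n)]
    for s in itertools.product((-1, 1), repeat=n):
        # -E(s) = 1/2 sum_ij w_ij s_i s_j + sum_i s_i theta_i
        mE = 0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
        mE += sum(s[i] * theta[i] for i in range(n))
        p = math.exp(mE)          # unnormalized weight exp{-E(s)}
        Z += p
        for i in range(n):
            m[i] += p * s[i]
            for j in range(n):
                corr[i][j] += p * s[i] * s[j]
    m = [x / Z for x in m]
    corr = [[x / Z for x in row] for row in corr]
    return Z, m, corr
```

The 2^n loop is exactly the intractability described above; it is feasible only for the n = 10 networks used later in the paper, and serves as the ground truth against which mean field approximations can be compared.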
Learning [1] consists of adjusting the weights and thresholds in such a way that the Boltzmann distribution approximates a target distribution q(s) as closely as possible. A suitable measure of the difference between the distributions p(s) and q(s) is the Kullback divergence [9]

K = Σ_s q(s) log (q(s)/p(s)).   (2)

Learning consists of minimizing K using gradient descent [1]:

Δw_ij = η(⟨s_i s_j⟩_c − ⟨s_i s_j⟩),   Δθ_i = η(⟨s_i⟩_c − ⟨s_i⟩).

The parameter η is the learning rate. The brackets ⟨·⟩ and ⟨·⟩_c denote the 'free' and 'clamped' expectation values, respectively. The computation of both the free and the clamped expectation values is intractable, because it consists of a sum over all unclamped states. As a result, the BM learning algorithm cannot be applied to practical problems.

3 The mean field approximation

We derive the mean field free energy using the small γ expansion as introduced by Plefka [8]. The energy of the network is given by E(s, w, θ, γ), with the interaction term scaled by γ; the full model is recovered for γ = 1. The free energy is given by F(w, θ, γ) = −log Tr_s e^{−E(s,w,θ,γ)} and is a function of the independent variables w_ij, θ_i and γ. We perform a Legendre transformation on the variables θ_i by introducing m_i = ∂F/∂θ_i. The Gibbs free energy

G(w, m, γ) = F(w, θ, γ) + Σ_i θ_i m_i

is now a function of the independent variables m_i and w_ij, and θ_i is implicitly given by ⟨s_i⟩_γ = m_i. The expectation ⟨·⟩_γ is with respect to the full model with interaction γ. We expand

G(γ) = G(0) + γG′(0) + ½γ²G″(0) + O(γ³).

We directly obtain from [8]

G′(γ) = ⟨E_int⟩_γ,
G″(γ) = ⟨E²_int⟩_γ − ⟨E_int⟩²_γ + ⟨E_int Σ_i (∂θ_i/∂γ)(s_i − m_i)⟩_γ.

For γ = 0 the expectation values ⟨·⟩_γ become the mean field expectations, which we can directly compute:

G(0) = Σ_i [½(1 + m_i) log ½(1 + m_i) + ½(1 − m_i) log ½(1 − m_i)],
G′(0) = −½ Σ_{ij} w_ij m_i m_j,
G″(0) = −½ Σ_{ij} w²_ij (1 − m_i²)(1 − m_j²).

Thus

G(1) = Σ_i [½(1 + m_i) log ½(1 + m_i) + ½(1 − m_i) log ½(1 − m_i)] − ½ Σ_{ij} w_ij
m ·m· 2 LIJ I J ij -~ L w;j(l - m;)(l- m]) + 0(w3 f(m)) ij (3) Boltvnann Machine Learning Using Mean Field Theory 283 where f(m) is some unknown function of m. The mean field equations are given by the inverse Legendre transformation e. - ae _ h - 1 ( ) ""' ""' 2 2 1 ami - tan mi - L- Wijmj + ~ Wjjmd1 - mj ), (4) J J which we recognize as the mean field equations. The correlations are given by a 2 F ami ( ao ) -1 (SiSj) (Si) (Sj) = - oeiooj = oej = am ij ( 02~) -1 am ij We therefore obtain from Eq. 3 (Si S j ) (Si) (s j) = Aij with (A-')oj = Jij ( 1 _1 ml + ~ wi.(1 - ml)) - Wij - 2mimjW;j (5) Thus, for given Wij and OJ, we obtain the approximate mean firing rates mj by solving Eqs. 4 and the correlations by their linear response approximations Eqs. 5. The inclusion of hidden units is straigthforward. One applies the above approximations in the free and the clamped phase separately [5]. The complexity of the method is O(n3 ), due to the matrix inversion. 4 Learning without hidden units We will assess the accuracy of the above method for networks without hidden units. Let us define Cij = (SjSj)c (Si)c (Sj)c' which can be directly computed from the data. The fixed point equation for D..Oj gives D..Oi = 0 {:} mj = (Si)c . (6) The fixed point equation for D..wij, using Eq. 6, gives D..wij = 0 {:} Aij = Cij ' i =F j. (7) From Eq. 7 and Eq. 5 we can solve for Wij, using a standard least squares method. In our case, we used fsolve from Matlab. Subsequently, we obtain ei from Eq. 4. We refer to this method as the TAP approximation. In order to assess the effect of the TAP term, we also computed the weights and thresholds in the same way as described above, but without the terms of order w 2 in Eqs. 5 and 4. Since this is the standard Weiss mean field expression, we refer to this method as the Weiss approximation. The fixed point equations are only imposed for the off-diagonal elements of D..Wjj because the Boltzmann distribution Eq. 
1, does not depend on the diagonal elements w_ii. In [5], we explored a variant of the Weiss approximation, where we included diagonal weight terms. As is discussed there, if we were to impose Eq. 7 for i = j as well, we would have A = C. If C is invertible, we therefore have A⁻¹ = C⁻¹. However, we now have more constraints than variables. Therefore, we introduce diagonal weights w_ii by adding the term w_ii m_i to the right-hand side of Eq. 4 in the Weiss approximation. Thus,

$w_{ij} = \frac{\delta_{ij}}{1 - m_i^2} - (C^{-1})_{ij}$

and θ_i is given by Eq. 4 in the Weiss approximation. Clearly, this method is computationally simpler because it gives an explicit expression for the solution of the weights involving only one matrix inversion.

5 Numerical results

For the target distribution q(s) in Eq. 2 we chose a fully connected Ising spin glass model with an equilibrium distribution of the form of Eq. 1, with couplings J_ij i.i.d. Gaussian variables with mean J_0/(n−1) and variance J²/(n−1). This model is known as the Sherrington-Kirkpatrick (SK) model [10]. Depending on the values of J and J_0, the model displays a para-magnetic (unordered), ferro-magnetic (ordered) and a spin-glass (frustrated) phase. For J_0 = 0, the para-magnetic (spin-glass) phase is obtained for J < 1 (J > 1). We will assess the effectiveness of our approximations for finite n, for J_0 = 0 and for various values of J. Since this is a realizable task, the optimal KL divergence is zero, which is indeed observed in our simulations. We measure the quality of the solutions by means of the Kullback divergence. Therefore, this comparison is only feasible for small networks. The reason is that the computation of the Kullback divergence requires the computation of the Boltzmann distribution, Eq. 1, which requires exponential time due to the partition function Z. We present results for a network of n = 10 neurons. For J_0 = 0, we generated for each value of 0.1 < J < 3 ten random weight matrices J_ij.
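The explicit diagonal-weights solution above fits in a few lines. This is our illustrative sketch; in particular, the threshold formula assumes the Weiss form of Eq. 4, θ_i = tanh⁻¹(m_i) − Σ_j w_ij m_j, with the diagonal terms included in the sum:

```python
import numpy as np

def weiss_diagonal_solution(C, m):
    """Closed-form Weiss approximation with diagonal weights:
    w_ij = delta_ij / (1 - m_i^2) - (C^{-1})_ij, with C the clamped covariance
    matrix and m = <s>_c the clamped means, both measured from the data.
    Thresholds follow from the Weiss mean field equation (Eq. 4 without the
    TAP term): theta_i = atanh(m_i) - sum_j w_ij m_j."""
    w = np.diag(1.0 / (1.0 - m**2)) - np.linalg.inv(C)
    theta = np.arctanh(m) - w @ m
    return w, theta

# sanity check: for uncorrelated clamped data, C = diag(1 - m^2), so all
# weights vanish and we recover the factorized solution theta = atanh(m)
m = np.array([0.1, -0.2, 0.3])
C = np.diag(1.0 - m**2)
w, theta = weiss_diagonal_solution(C, m)
print(np.abs(w).max())  # ~ 0
```

A single matrix inversion is indeed the only O(n³) step, which is the computational advantage claimed in the text.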
For each weight matrix, we computed q(s) on all 2^n states. For each of the 10 problems, we applied the TAP method, the Weiss method and the Weiss method with diagonal weights. In addition, we applied the exact Boltzmann Machine learning algorithm using conjugate gradient descent and verified that it gives a KL divergence equal to zero, as it should. We also applied a factorized model p(s) = Π_i ½(1 + m_i s_i) with m_i = ⟨s_i⟩_c to assess the importance of correlations in the target distribution. In Fig. 1a, we show the average KL divergence over the 10 problem instances as a function of J for the TAP method, the Weiss method, the Weiss method with diagonal weights and the factorized model. We observe that the TAP method gives the best results, but that its performance deteriorates in the spin-glass phase (J > 1). The behaviour of all approximate methods is highly dependent on the individual problem instance. In Fig. 1b, we show the mean value of the KL divergence of the TAP solution, together with the minimum and maximum values obtained on the 10 problem instances. Despite these large fluctuations, the quality of the TAP solution is consistently better than the Weiss solution. In Fig. 1c, we plot the difference between the TAP and Weiss solutions, averaged over the 10 problem instances. In [5] we concluded that the Weiss solution with diagonal weights is better than the standard Weiss solution when learning a finite number of randomly generated patterns. In Fig. 1d we plot the difference between the Weiss solutions with and without diagonal weights. We observe again that the inclusion of diagonal weights leads to better results in the paramagnetic phase (J < 1), but leads to worse results in the spin-glass phase. For J > 2, we encountered problem instances for which either the matrix C is not invertible or the KL divergence is infinite. This problem becomes more and more severe for increasing J.
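The TAP approximation compared here rests on solving Eq. 4 for the magnetizations and Eq. 5 for the correlations. The following is our sketch of that forward computation (the damped fixed-point iteration is an assumption of ours; for the learning direction the paper uses Matlab's fsolve):

```python
import numpy as np

def tap_magnetizations(w, theta, n_iter=500, damping=0.5):
    """Solve the TAP equations (Eq. 4) by damped fixed-point iteration:
    m_i <- tanh(theta_i + sum_j w_ij m_j - m_i sum_j w_ij^2 (1 - m_j^2))."""
    m = np.zeros(len(theta))
    for _ in range(n_iter):
        onsager = m * ((w**2) @ (1.0 - m**2))  # TAP (Onsager) reaction term
        m_new = np.tanh(theta + w @ m - onsager)
        m = (1.0 - damping) * m + damping * m_new
    return m

def linear_response_correlations(w, m):
    """Correlations <s_i s_j> - <s_i><s_j> = A_ij from linear response, with
    (A^{-1})_ij = delta_ij (1/(1 - m_i^2) + sum_k w_ik^2 (1 - m_k^2))
                  - w_ij - 2 m_i m_j w_ij^2   (Eq. 5)."""
    A_inv = (np.diag(1.0 / (1.0 - m**2) + (w**2) @ (1.0 - m**2))
             - w - 2.0 * np.outer(m, m) * w**2)
    return np.linalg.inv(A_inv)
```

In the zero-coupling limit the iteration reduces to m_i = tanh(θ_i) and the correlations to the factorized values diag(1 − m_i²), which makes for a convenient consistency check.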
We therefore have not presented results for the Weiss approximation with diagonal weights for J > 2.

Figure 1: Mean field learning of paramagnetic (J < 1) and spin glass (J > 1) problems for a network of 10 neurons. a) Comparison of mean KL divergences for the factorized model (fact), the Weiss mean field approximation with and without diagonal weights (weiss+d and weiss), and the TAP approximation, as a function of J. The exact method yields zero KL divergence for all J. b) The mean, minimum and maximum KL divergence of the TAP approximation for the 10 problem instances, as a function of J. c) The mean difference between the KL divergence for the Weiss approximation and the TAP approximation, as a function of J. d) The mean difference between the KL divergence for the Weiss approximation with and without diagonal weights, as a function of J.

6 Discussion

We have presented a derivation of mean field theory and the linear response correction based on a small coupling expansion of the Gibbs free energy. This expansion can in principle be computed to arbitrary order. However, one should expect that the resulting mean field and linear response equations will become more and more difficult to solve numerically. The small coupling expansion should be applicable to other network models such as the sigmoid belief network, Potts networks and higher order Boltzmann Machines. The numerical results show that the method is applicable to paramagnetic problems.
This is intuitively clear, since paramagnetic problems have a unimodal probability distribution, which can be approximated by a mean and correlations around the mean. The method performs worse for spin glass problems. However, it still gives a useful approximation of the correlations when compared to the factorized model, which ignores all correlations. In this regime, the TAP approximation improves significantly on the Weiss approximation. One may therefore hope that higher order approximations may further improve the method for spin glass problems. Therefore, we cannot conclude at this point whether mean field methods are restricted to unimodal distributions. In order to further investigate this issue, one should also study the ferromagnetic case (J_0 > 1, J > 1), which is multimodal as well but less challenging than the spin glass case. It is interesting to note that the performance of the exact method is absolutely insensitive to the value of J. Naively, one might have thought that for highly multi-modal target distributions, any gradient based learning method will suffer from local minima. Apparently, this is not the case: the exact KL divergence has just one minimum, but the mean field approximations of the gradients may have multiple solutions.

Acknowledgement

This research is supported by the Technology Foundation STW, applied science division of NWO, and the technology programme of the Ministry of Economic Affairs.

References

[1] D. Ackley, G. Hinton, and T. Sejnowski. A learning algorithm for Boltzmann Machines. Cognitive Science, 9:147-169, 1985.
[2] C. Itzykson and J.-M. Drouffe. Statistical Field Theory. Cambridge Monographs on Mathematical Physics. Cambridge University Press, Cambridge, UK, 1989.
[3] C. Peterson and J.R. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995-1019, 1987.
[4] G.E. Hinton. Deterministic Boltzmann learning performs steepest descent in weight-space.
Neural Computation, 1:143-150, 1989.
[5] H.J. Kappen and F.B. Rodríguez. Efficient learning in Boltzmann Machines using linear response theory. Neural Computation, 1997. In press.
[6] G. Parisi. Statistical Field Theory. Frontiers in Physics. Addison-Wesley, 1988.
[7] L.K. Saul, T. Jaakkola, and M.I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61-76, 1996.
[8] T. Plefka. Convergence condition of the TAP equation for the infinite-range Ising spin glass model. Journal of Physics A, 15:1971-1978, 1982.
[9] S. Kullback. Information Theory and Statistics. Wiley, New York, 1959.
[10] D. Sherrington and S. Kirkpatrick. Solvable model of a spin-glass. Physical Review Letters, 35:1792-1796, 1975.
1997
9
1,441
Coding of Naturalistic Stimuli by Auditory Midbrain Neurons

H. Attias* and C.E. Schreiner†
Sloan Center for Theoretical Neurobiology and W.M. Keck Foundation Center for Integrative Neuroscience
University of California at San Francisco, San Francisco, CA 94143-0444

* Corresponding author. E-mail: hagai@phy.ucsf.edu.
† E-mail: chris@phy.ucsf.edu.

Abstract

It is known that humans can make finer discriminations between familiar sounds (e.g. syllables) than between unfamiliar ones (e.g. different noise segments). Here we show that a corresponding enhancement is present in early auditory processing stages. Based on previous work which demonstrated that natural sounds had robust statistical properties that could be quantified, we hypothesize that the auditory system exploits those properties to construct efficient neural codes. To test this hypothesis, we measure the information rate carried by auditory spike trains on narrow-band stimuli whose amplitude modulation has naturalistic characteristics, and compare it to the information rate on stimuli with non-naturalistic modulation. We find that naturalistic inputs significantly enhance the rate of transmitted information, indicating that auditory neural responses are matched to characteristics of natural auditory scenes.

1 Natural Scene Statistics and the Neural Code

A primary goal of hearing research is to understand how complex sounds that occur in natural scenes are processed by the auditory system. However, natural sounds are difficult to describe quantitatively, and the complexity of the auditory responses they evoke makes it hard to gain insight into their processing. Hence, most studies of auditory physiology are restricted to pure tones and noise stimuli, resulting in a limited understanding of auditory encoding. In this paper we pursue a novel approach to the study of natural sound encoding in auditory spike trains. Our
method consists of measuring statistical characteristics of natural auditory scenes, and incorporating them into simple stimuli in a systematic manner, thus creating 'naturalistic' stimuli which enable us to study the encoding of natural sounds in a controlled fashion. The first stage of this program has been described in (Attias and Schreiner 1997); the second is reported below.

Figure 1: Left: amplitude modulation stimulus drawn from a naturalistic stimulus set, and the evoked spike train of an inferior colliculus neuron. Right: amplitude modulation from a non-naturalistic set and the evoked spike train of the same neuron.

Fig. 1 shows two segments of long stimuli and the corresponding spike trains of the same neuron, elicited by pure tones that were amplitude-modulated by these stimuli. While both stimuli appear to be random and to have the same mean, and both spike trains have the same firing rate, one may observe that high and low amplitudes are more likely to occur in the stimulus on the left; indeed, these stimuli are drawn from two stimulus sets with different statistical properties. Our present study of auditory coding focuses on assessing the efficiency of this neural code: for a given stimulus set, how well can the animal reconstruct the input sound and discriminate between similar sound segments, based on the evoked spike train, and how are those abilities affected by changing the stimulus statistics. We quantify the discrimination capability of auditory neurons in the inferior colliculus of the cat using concepts from information theory (Bialek et al. 1991; Rieke et al. 1997). This leads to the issue of optimal coding (Atick 1992).
Theoretically, given an auditory scene with particular statistical properties, it is possible to design an encoding scheme that would exploit those properties, resulting in a neural code that is optimal for that scene but consequently less efficient for other scenes. Here we investigate the hypothesis that the auditory system uses a code that is adapted to natural auditory scenes. This question is addressed by comparing the discrimination capability of auditory neurons between sound segments drawn from a naturalistic stimulus set to the one for a non-naturalistic set.

2 Statistics of Natural Sounds

As a first step in investigating the relation between neural responses and auditory inputs, we studied and quantified temporal statistics of natural auditory scenes (Attias and Schreiner 1997). It is well known that different locations on the basilar membrane respond selectively to different frequency components of the incoming sound x(t) (e.g., Pickles 1988); hence the frequency ν corresponds to a spatial coordinate, in analogy with retinal location in vision. We therefore analyzed a large database of sounds, including speech, music, animal vocalizations, and background sounds, using various filter banks comprising 0-10 kHz. In each frequency band ν, the amplitude a(t) ≥ 0 and phase φ(t) of the band-limited signal x_ν(t) = a(t) cos(νt + φ(t)) were extracted, and the amplitude probability distribution p(a) and auto-correlation

Figure 2: Log-amplitude distribution in several sound ensembles (piano music, symphonic music, cat vocalizations, bird songs, background sounds). Different curves for a given ensemble correspond to different frequency bands. The low amplitude peak in the cat plot reflects an abundance of silent segments. The theoretical curve p(ã) of Eq. 1 is plotted for comparison (dashed line).
function c(τ) = ⟨a(t)a(t + τ)⟩ were computed, as well as those of the instantaneous frequency dφ(t)/dt. Those statistics were found to be nearly identical in all bands and across all examined sounds. In particular, the distribution of the log-amplitude ã = log a, normalized to have zero mean and unit variance, could be well fitted to the form

$p(\tilde a) = \beta \exp\left(\beta\tilde a + \alpha - e^{\beta\tilde a + \alpha}\right) \qquad (1)$

(with normalization constants α = −0.578 and β = 1.29), which should, however, be corrected at large amplitudes (> 5σ). Several examples are displayed in Fig. 2. The log-amplitude distribution (1) corresponds mathematically to the amplitude distribution of musical instruments and vocalizations, found to be p(a) = e^{−a} (known as the Laplace distribution in speech signal processing), as well as to that of background sounds, where p(a) ∝ a e^{−a²} (which can be shown to be the band amplitude distribution for a Gaussian signal). The power spectra of a(t) (the Fourier transform of c(τ)) were found to have a modified 1/f form. Together with the results for φ(t), those findings show that natural sounds are distinguished from arbitrary ones by robust characteristics. In the present paper we explore to what extent the auditory system exploits them in constructing efficient neural codes. Another important point made by (Attias and Schreiner 1997), as well as by (Ruderman and Bialek 1994) regarding visual signals, is that natural inputs are very often not Gaussian (e.g., Eq. 1), unlike the signals used by the conventional system-identification methods often applied to the nervous system. In this paper we use non-Gaussian stimuli to study auditory coding.

3 Measuring the Rate of Information Transfer

3.1 Experiment

Based on our results for the temporal statistics of natural auditory scenes, we can construct 'naturalistic' stimuli by starting with a simple signal and systematically incorporating successively more complicated characteristics of natural sounds into it.
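A quick numerical check (our illustration, not from the paper) of why the fitted constants come out near α ≈ −0.578 and β ≈ 1.29: if the band amplitude is exponentially distributed, p(a) = e^{−a}, then log a follows a Gumbel-type law whose mean is −γ (Euler's constant) and whose standard deviation is π/√6:

```python
import numpy as np

# Sample an exponential amplitude and look at its log-amplitude statistics.
# Analytically, E[log a] = -gamma ~ -0.577 and std[log a] = pi/sqrt(6) ~ 1.283,
# which matches the fitted constants alpha = -0.578 and beta = 1.29 of Eq. 1.
rng = np.random.default_rng(1)
a = rng.exponential(size=1_000_000)
log_a = np.log(a)
print(log_a.mean())  # close to -0.577
print(log_a.std())   # close to 1.283
```

This is consistent with Eq. 1 being exactly the distribution of the (normalized) log of a Laplace-distributed amplitude.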
We chose to use narrow-band stimuli consisting of amplitude-modulated carriers a(t) cos(νt) at sound frequencies ν = 2-9 kHz with no phase modulation. Focusing on one-point amplitude statistics, we constructed a white naturalistic amplitude by choosing a(t) from an exponential distribution with a cutoff, p(0 ≤ a ≤ a_c) ∝ e^{−a}, p(a > a_c) = 0, at each time point t independently, using a cutoff modulation frequency of f_c = 100 Hz (i.e., |ã(f ≤ f_c)| = const., |ã(f > f_c)| = 0, where ã(f) is the Fourier transform of a(t)). We also used a non-naturalistic stimulus set where a(t) was chosen from a uniform distribution p(0 ≤ a ≤ b_c) = 1/b_c, p(a > b_c) = 0, with b_c adjusted so that both stimulus sets had the same mean. A short segment from each set is shown in Fig. 1, and the two distributions are plotted in Figs. 3, 4 (right). Stimuli of 15-20 min duration were played to ketamine-anesthetized cats. To minimize adaptation effects we alternated between the two sets using 10 sec long segments. Single-unit recordings were made from the inferior colliculus (IC), a sub-thalamic auditory processing stage (e.g., Pickles 1988). Each IC unit responds best to a narrow range of sound frequencies, the center of which is called its 'best frequency' (BF). Neighboring units have similar BFs, in accord with the topographic frequency organization of the auditory system. For each unit, stimuli with carrier frequency ν at most 500 Hz away from the unit's BF were used. Firing rates in response to those stimuli were between 60-100 Hz. The stimulus and the electrode signal were recorded simultaneously at a sampling rate of 24 kHz. After detecting and sorting the spikes and extracting the stimulus amplitude, both amplitude and spike train were down-sampled to 3 kHz.
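One plausible reading of the two amplitude distributions (the sampling details here are our assumptions, not the paper's exact procedure) is a truncated exponential for the naturalistic set and a mean-matched uniform for the non-naturalistic set:

```python
import numpy as np

def naturalistic_amplitudes(n, a_c, rng):
    """Truncated-exponential amplitudes, p(0 <= a <= a_c) ~ exp(-a),
    drawn independently at each time point by inverse-CDF sampling."""
    u = rng.uniform(size=n)
    return -np.log1p(-u * (1.0 - np.exp(-a_c)))

def matched_uniform_amplitudes(n, a_c, rng):
    """Uniform amplitudes on [0, b_c], with b_c chosen so the uniform set has
    the same mean as the truncated-exponential set (mean of a truncated
    exponential: 1 - a_c exp(-a_c) / (1 - exp(-a_c)))."""
    mean_exp = 1.0 - a_c * np.exp(-a_c) / (1.0 - np.exp(-a_c))
    return rng.uniform(0.0, 2.0 * mean_exp, size=n)

rng = np.random.default_rng(0)
a_nat = naturalistic_amplitudes(100_000, 5.0, rng)
a_uni = matched_uniform_amplitudes(100_000, 5.0, rng)
print(a_nat.mean(), a_uni.mean())  # approximately equal, by construction
```

Matching the means, as the experiment requires, is what forces b_c = 2 × (mean of the truncated exponential), since a uniform variable on [0, b_c] has mean b_c/2.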
3.2 Analysis

In order to assess the ability to discriminate between different inputs based on the observed spike train, we computed the mutual information I_{r,s} between the spike train response r(t) = Σ_i δ(t − t_i), where t_i are the spike times, and the stimulus amplitude s(t). I consists of two terms, I_{r,s} = H_s − H_{s|r}, where H_s is the stimulus entropy (the log-number of different stimuli) and H_{s|r} is the entropy of the stimulus conditioned on the response (the log-number of different stimuli that could elicit a given response, and thus could not be discriminated based on that response, averaged over all responses). Our approach generally follows the ideas of (Bialek et al. 1991; Rieke et al. 1997). To simplify the calculation, we first modified the stimuli s(t) to get s'(t) = f(s(t)), where the function f(s) was chosen so that s' was Gaussian. Hence for exponential stimuli f(s) = √2 erf⁻¹(1 − 2e^{−s}), and for uniform stimuli f(s) = √2 erf⁻¹(2s/b_c − 1), where erf⁻¹ is the inverse error function. This Gaussianization has two advantages: first, the expression for the mutual information I_{r,s'} (= I_{r,s}) is now simpler, being given by the frequency-dependent signal-to-noise ratio SNR(f) (see below), since H_{s'} depends only on the power spectrum of s'(t); second, and more importantly, the noise distribution was observed to become closer to Gaussian following this transformation. To compute H_{s'|r} we bound it from above by ∫₀^{f_c} df H[s'(f) | r̃(f)], the calculation of which requires the conditional distribution p[s'(f) | r̃(f)] (note that these variables are complex, hence this is the joint distribution of the real and imaginary parts). The latter is approximated by a Gaussian with mean s'_r(f) and variance N_r(f). This variance is, in fact, the power spectrum of the noise, N_r(f) = ⟨|n_r(f)|²⟩, which we define by n_r(t) = s'(t) − s'_r(t).
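The Gaussianizing map for exponential stimuli, f(s) = √2 erf⁻¹(1 − 2e^{−s}), is just the standard-normal inverse CDF applied to the exponential CDF F(s) = 1 − e^{−s}. A quick check of this (our illustration, using the stdlib normal inverse CDF rather than erf⁻¹ directly):

```python
import numpy as np
from statistics import NormalDist

# Gaussianize exponential samples: s' = Phi^{-1}(1 - exp(-s)), which equals
# sqrt(2) * erfinv(1 - 2 exp(-s)). The result should be ~ standard normal.
rng = np.random.default_rng(2)
s = rng.exponential(size=200_000)
F = 1.0 - np.exp(-s)                      # exponential CDF; uniform on (0, 1)
inv_cdf = np.vectorize(NormalDist().inv_cdf)
s_prime = inv_cdf(np.clip(F, 1e-12, 1.0 - 1e-12))
print(s_prime.mean(), s_prime.std())      # close to 0 and 1
```

The identity follows from Φ⁻¹(p) = √2 erf⁻¹(2p − 1) with p = 1 − e^{−s}.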
Computing the mutual information for those Gaussian distributions is straightforward and provides a lower bound on the true I_{r,s}:

$I_{r,s} = I_{r,s'} \geq \int_0^{f_c} df \log_2 \mathrm{SNR}(f). \qquad (2)$

Figure 3: Left: signal-to-noise ratio SNR(f) vs. modulation frequency f for naturalistic stimuli. Right: normalized noise distribution (solid line), amplitude distribution of stimuli (dashed line) and of Gaussianized stimuli (dashed-dotted line).

The signal-to-noise ratio is given by SNR(f) = S'(f)/⟨N_r(f)⟩_r, where S'(f) = ⟨|s'(f)|²⟩ is the spectrum of the Gaussianized stimulus and the averaging ⟨·⟩_r is performed over all responses. The main object here is s'_r(f), which is an estimate of the stimulus from the elicited spike train, and would optimally be given by the conditional mean ∫ ds' s' p(s' | r̃) at each f (Kay 1993). For Gaussian p(s', r̃) this estimator, which is generally non-linear, becomes linear in r̃(f) and is given by h(f)r̃(f), where h(f) = ⟨s'(f)r̃*(f)⟩/⟨r̃(f)r̃*(f)⟩ is the Wiener filter. However, since our distributions were only approximately Gaussian, we used the conditional mean, obtained by the kernel estimate of Eq. 3, where k is a Gaussian kernel, R(f) is the spectrum of the spike train, and i indexes the data points obtained by computing FFTs using a sliding window. The scaling by √S' and √R reflects the assumption that the distributions at all f differ only by their variance, which enables us to use the data points at all frequencies to estimate s'_r at a given f. Our estimate produced a slightly higher SNR(f) than the Wiener estimate used by (Bialek et al. 1991; Rieke et al. 1997) and others.

4 Information on Naturalistic Stimuli

The SNR(f) for exponential stimuli is shown in Fig. 3 (left) for one of our units.
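Eq. 2 reduces the information-rate estimate to a one-dimensional integral of log₂ SNR(f) over the stimulus band. A minimal sketch (the SNR curve below is hypothetical, loosely shaped like the one in Fig. 3):

```python
import numpy as np

def information_rate(freqs, snr):
    """Lower bound of Eq. 2: integral of log2 SNR(f) over the stimulus band,
    in bits per second (trapezoidal rule)."""
    y = np.log2(snr)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(freqs)))

# hypothetical SNR curve peaked near a 40 Hz preferred modulation frequency
freqs = np.linspace(0.0, 100.0, 201)
snr = 1.0 + 3.0 * np.exp(-0.5 * ((freqs - 40.0) / 15.0) ** 2)
print(information_rate(freqs, snr), "bit/sec")
```

Note that frequencies where SNR(f) = 1 (stimulus and response independent) contribute nothing, since log₂ 1 = 0, which is why components above roughly 60 Hz add little information for these neurons.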
IC neurons have a preferred modulation frequency f_m (e.g., Pickles 1988), which is about 40 Hz for this unit; notice that generally SNR(f) ≥ 1, with equality when the stimulus and response are completely independent. Thus, stimulus components at frequencies higher than 60 Hz effectively cannot be estimated from the spike train. The stimulus amplitude distribution is shown in Fig. 3 (right, dashed line), together with the noise distribution (normalized to have unit variance; solid line), which is nearly Gaussian.

Figure 4: Left: signal-to-noise ratio SNR(f) vs. modulation frequency f for non-naturalistic stimuli (solid line) compared with naturalistic stimuli (dotted line). Right: normalized noise distribution (solid line), amplitude distribution of stimuli (dashed line) compared with that of naturalistic stimuli (dotted line), and of Gaussianized stimuli (dashed-dotted line).

Using (2) we obtain an information rate of I_{r,s} ≈ 114 bit/sec. For the spike rate of 82 spike/sec measured in this unit, this translates into 1.4 bit/spike. Averaging across units, we have 1.3 ± 0.2 bit/spike for naturalistic stimuli. Although this information rate was computed using the conditional mean estimator (3), it is interesting to examine the Wiener filter h(t), which provides the optimal linear estimator of the stimulus, as discussed in the previous section. This filter is displayed in Fig. 5 (solid line) and has a temporal width of several tens of milliseconds.

5 Information on Non-Naturalistic Stimuli

The SNR(f) for uniform stimuli is shown in Fig. 4 (left, solid line) for the same unit as in Fig. 3, and is significantly lower than the corresponding SNR(f) for exponential stimuli plotted for comparison (dashed line). For the mutual information rate we obtain I_{r,s} ≈ 77 bit/sec, which amounts to 0.94 bit/spike.
Averaging across units, we have 0.8 ± 0.2 bit/spike for non-naturalistic stimuli. The stimulus amplitude distribution is shown in Fig. 4 (right, dashed line), together with the exponential distribution (dotted line) plotted for comparison, as well as the noise distribution (normalized to have unit variance). The noise in this case is less Gaussian than for exponential stimuli, suggesting that our calculated bound on I_{r,s} may be lower for uniform stimuli. Fig. 5 shows the stimulus reconstruction filter (dashed line). It has a similar time course as the filter for exponential stimuli, but the decay is significantly slower and its temporal width is more than 100 msec.

Figure 5: Impulse response of the Wiener reconstruction filter for naturalistic stimuli (solid line) and non-naturalistic stimuli (dashed line).

6 Conclusion

We measured the rate at which auditory neurons carry information on simple stimuli with naturalistic amplitude modulation, and found that it was higher than for stimuli with non-naturalistic modulation. A result along the same lines for the frog was obtained by (Rieke et al. 1995) using Gaussian signals whose spectrum was shaped according to the frog call spectrum. Similarly, work in vision (Laughlin 1981; Field 1987; Atick and Redlich 1990; Ruderman and Bialek 1994; Dong and Atick 1995) suggests that visual receptive field properties are consistent with optimal coding predictions based on characteristics of natural images. Future work will explore coding of stimuli with more complex natural statistical characteristics and will extend to higher processing stages.

Acknowledgements

We thank W. Bialek, K. Miller, S. Nagarajan, and F. Theunissen for useful discussions and B. Bonham, M. Escabi, M. Kvale, L.
Miller, and H. Read for experimental support. Supported by the Office of Naval Research (N00014-94-1-0547), NIDCD (R01-02260), and the Sloan Foundation.

References

J.J. Atick and N. Redlich (1990). Towards a theory of early visual processing. Neural Comput. 2, 308-320.
J.J. Atick (1992). Could information theory provide an ecological theory of sensory processing? Network 3, 213-251.
H. Attias and C.E. Schreiner (1997). Temporal low-order statistics of natural sounds. In Advances in Neural Information Processing Systems 9, MIT Press.
W. Bialek, F. Rieke, R. de Ruyter van Steveninck, and D. Warland (1991). Reading the neural code. Science 252, 1854-1857.
D.W. Dong and J.J. Atick (1995). Temporal decorrelation: a theory of lagged and non-lagged responses in the lateral geniculate nucleus. Network 6, 159-178.
D.J. Field (1987). Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. 4, 2379-2394.
S.M. Kay (1993). Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, New Jersey.
S.B. Laughlin (1981). A simple coding procedure enhances a neuron's information capacity. Z. Naturforsch. 36c, 910-912.
J.O. Pickles (1988). An Introduction to the Physiology of Hearing (2nd Ed.). San Diego, CA: Academic Press.
F. Rieke, D. Bodnar, and W. Bialek (1995). Naturalistic stimuli increase the rate and efficiency of information transmission by primary auditory neurons. Proc. R. Soc. Lond. B 262, 259-265.
F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek (1997). Spikes: Exploring the Neural Code. MIT Press, Cambridge, MA.
D.L. Ruderman and W. Bialek (1994). Statistics of natural images: scaling in the woods. Phys. Rev. Lett. 73, 814-817.
1997
90
1,442
Enhancing Q-Learning for Optimal Asset Allocation

Ralph Neuneier
Siemens AG, Corporate Technology
D-81730 München, Germany
Ralph.Neuneier@mchp.siemens.de

Abstract

This paper enhances the Q-learning algorithm for optimal asset allocation proposed in (Neuneier, 1996 [6]). The new formulation simplifies the approach by using only one value function for many assets and allows model-free policy iteration. After testing the new algorithm on real data, the possibility of risk management within the framework of Markov decision problems is analyzed. The proposed method allows the construction of a multi-period portfolio management system which takes into account transaction costs, the risk preferences of the investor, and several constraints on the allocation.

1 Introduction

Asset allocation and portfolio management deal with the distribution of capital to various investment opportunities like stocks, bonds, foreign exchanges and others. The aim is to construct a portfolio with a maximal expected return for a given risk level and time horizon while simultaneously obeying institutional or legally required constraints. To find such an optimal portfolio the investor has to solve a difficult optimization problem consisting of two phases [4]. First, the expected yields together with a certainty measure have to be predicted. Second, based on these estimates, mean-variance techniques are typically applied to find an appropriate fund allocation. The problem is further complicated if the investor wants to revise her/his decision at every time step and if transaction costs for changing the allocations must be considered.
Figure: the interaction loop between the financial market and the investor. Disturbances act on the financial market; the market reports returns, rates and prices to the investor; the investor feeds investments back into the market. Markov decision problem: state x_t = ($_t, K_t), market $_t and portfolio K_t; policy μ, actions a_t = μ(x_t); transition probabilities p(x_{t+1}|x_t); return function r(x_t, a_t, $_{t+1}).

Within the framework of Markov decision problems, MDPs, the modeling phase and the search for an optimal portfolio can be combined (see the figure above). Furthermore, transaction costs, constraints, and decision revision are naturally integrated. The theory of MDPs formalizes control problems within stochastic environments [1]. If the discrete state space is small and if an accurate model of the system is available, MDPs can be solved by conventional Dynamic Programming, DP. At the other extreme, reinforcement learning methods using function approximators and stochastic approximation for computing the relevant expectation values can be applied to problems with large (continuous) state spaces and without an appropriate model available [2, 10]. In [6], asset allocation is formalized as an MDP under the following assumptions, which clarify the relationship between MDPs and portfolio optimization:

1. The investor may trade at each time step for an infinite time horizon.
2. The investor is not able to influence the market by her/his trading.
3. There are only two possible assets for investing the capital.
4. The investor has no risk aversion and always invests the total amount.

The reinforcement algorithm Q-Learning, QL, has been tested on the task of investing liquid capital in the German stock market DAX, using neural networks as value function approximators for the Q-values Q(x, a). The resulting allocation strategy generated more profit than a heuristic benchmark policy [6]. Here, a new formulation of the QL algorithm is proposed which allows us to relax the third assumption.
Furthermore, in section 3 the possibility of risk control within the MDP framework is analyzed, which relaxes assumption four.

2 Q-Learning with uncontrollable state elements

This section explains how the QL algorithm can be simplified by the introduction of an artificial deterministic transition step. Using real data, the successful application of the new algorithm is demonstrated.

2.1 Q-Learning for asset allocation

The situation of an investor is formalized at time step t by the state vector x_t = ($_t, K_t), which consists of elements $_t describing the financial market (e.g. interest rates, stock indices), and of elements K_t describing the investor's current allocation of the capital (e.g. how much capital is invested in which asset). The investor's decision a_t for a new allocation and the dynamics of the financial market let the state switch to x_{t+1} = ($_{t+1}, K_{t+1}) according to the transition probability p(x_{t+1}|x_t, a_t). Each transition results in an immediate return r_t = r(x_t, x_{t+1}, a_t) which incorporates possible transaction costs depending on the decision a_t and the change of the value of K_t due to the new values of the assets at time t+1. The aim is to maximize the expected discounted sum of the returns, V*(x) = E(Σ_{t=0}^∞ γ^t r_t | x_0 = x), by following an optimal stationary policy μ*(x_t) = a_t. For a discrete finite state space the solution can be stated as the recursive Bellman equation:

    V*(x_t) = max_a [ Σ_{x_{t+1}} p(x_{t+1}|x_t, a) r_t + γ Σ_{x_{t+1}} p(x_{t+1}|x_t, a) V*(x_{t+1}) ] .   (1)

A more useful formulation defines a Q-function Q*(x, a) of state-action pairs (x_t, a_t), to allow the application of an iterative stochastic approximation scheme, called Q-Learning [11]. The Q-value Q*(x_t, a_t) quantifies the expected discounted sum of returns if one executes action a_t in state x_t and follows an optimal policy thereafter, i.e. V*(x_t) = max_a Q*(x_t, a). Observing the tuple (x_t, x_{t+1}, a_t, r_t), the tabulated Q-values are updated
in the (k+1)-th iteration step with learning rate η_k according to:

    Q^(k+1)(x_t, a_t) = Q^(k)(x_t, a_t) + η_k ( r_t + γ max_a Q^(k)(x_{t+1}, a) − Q^(k)(x_t, a_t) ) .

It can be shown that the sequence of Q^(k) converges under certain assumptions to Q*. If the Q-values Q*(x, a) are approximated by separate neural networks with weight vector w_a for different actions a, Q*(x, a) ≈ Q(x; w_a), the adaptations (called NN-QL) are based on the temporal differences d_t:

    d_t := r(x_t, a_t, x_{t+1}) + γ max_{a∈A} Q(x_{t+1}; w_a) − Q(x_t; w_{a_t}) .

Note that although the market dependent part $_t of the state vector is independent of the investor's decisions, the future wealth K_{t+1} and the returns r_t are not. Therefore, asset allocation is a multi-stage decision problem and may not be reduced to pure prediction if transaction costs must be considered. On the other hand, the attractive feature that the decisions do not influence the market allows us to approximate the Q-values using historical data of the financial market. We need not invest real money during the training phase.

2.2 Introduction of an artificial deterministic transition

Now, the Q-values are reformulated in order to make them independent of the actions chosen at time step t. Due to assumption 2, which states that the investor cannot influence the market by the trading decisions, the stochastic process of the dynamics of $_t is an uncontrollable Markov chain. This allows the introduction of a deterministic intermediate step in the transition from x_t to x_{t+1} (see fig. below). After the investor has chosen an action a_t, the capital K_t changes to K'_t because he/she may have paid transaction costs c_t = c(K_t, a_t), and K'_t reflects the new allocation whereas the state of the market, $_t, remains the same. Because the costs c_t are known in advance, this transition is deterministic and controllable. Then, the market switches stochastically to $_{t+1} and generates the immediate return r'_t = r'($_t, K'_t, $_{t+1}), i.e., r_t = c_t + r'_t. The capital changes to K_{t+1} = r'_t + K'_t. This transition is uncontrollable by the investor.
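The split of one transition into a deterministic, controllable cost step and an uncontrollable market step can be sketched as follows; the cost rate, toy market return and all numbers are illustrative assumptions, not the paper's data:

```python
# Sketch of the artificial deterministic transition of section 2.2.
# An action a moves the allocation K_t to K'_t at a known cost c_t;
# only afterwards does the market move stochastically. All numbers are toy.

def transaction_cost(K, a, rate=0.002):
    """Deterministic, known-in-advance cost c(K_t, a_t) of re-allocating."""
    return -rate * abs(a - K)          # e.g. 0.2% of the capital moved

def intermediate_step(K, a):
    """Controllable step: ($_t, K_t) -> ($_t, K'_t), market unchanged."""
    c = transaction_cost(K, a)
    K_prime = a                        # the new allocation after paying c_t
    return c, K_prime

def market_step(K_prime, market_return):
    """Uncontrollable step: the market moves, generating r'_t and K_{t+1}."""
    r_prime = K_prime * market_return
    return r_prime, K_prime + r_prime  # r'_t and K_{t+1} = K'_t + r'_t

c, Kp = intermediate_step(K=0.0, a=1.0)        # move all capital into stocks
rp, Kn = market_step(Kp, market_return=0.01)   # the market gains 1%
total = c + rp                                 # r_t = c_t + r'_t
```

Because c_t is known before the market moves, the controllable and uncontrollable parts can be treated separately, which is what makes the single Q-function of QLU possible.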
V*($, K) = V*(x) is now computed using the costs c_t and returns r'_t (compare also eq. 1).

[Figure: the transition from x_t to x_{t+1} split into a deterministic, controllable step ($_t, K_t) → ($_t, K'_t) with cost c_t, followed by an uncontrollable market step to ($_{t+1}, K_{t+1}) generating r'_t, evaluated by Q($_t, K'_t).]

Defining Q*($_t, K'_t) as the Q-values of the intermediate time step,

    Q*($_t, K'_t) = E[ r'($_t, K'_t, $_{t+1}) + γ V*($_{t+1}, K_{t+1}) ] ,

gives rise to the optimal value function and policy (time indices are suppressed),

    V*($, K) = max_a [ c(K, a) + Q*($, K') ] ,
    μ*($, K) = argmax_a [ c(K, a) + Q*($, K') ] .

Defining the temporal differences d_t for the approximation Q^(k) as

    d_t := r'($_t, K'_t, $_{t+1}) + γ max_a [ c(K_{t+1}, a) + Q^(k)($_{t+1}, K'_{t+1}) ] − Q^(k)($_t, K'_t)

leads to the update equations for the Q-values represented by tables or networks:

    QLU:    Q^(k+1)($_t, K'_t) = Q^(k)($_t, K'_t) + η_k d_t ,
    NN-QLU: w^(k+1) = w^(k) + η_k d_t ∇Q($_t, K'_t; w^(k)) .

The simplification is now obvious, because (NN-)QLU only needs one table or neural network no matter how many assets are concerned. This may lead to faster convergence and better results. The training algorithm boils down to the iteration of the following steps:

QLU for optimal investment decisions
1. draw randomly patterns $_t, $_{t+1} from the data set; draw randomly an asset allocation K'_t
2. for all possible actions a: compute r'_t, c(K_{t+1}, a), Q^(k)($_{t+1}, K'_{t+1})
3. compute the temporal difference d_t
4. compute the new value Q^(k+1)($_t, K'_t), resp. Q($_t, K'_t; w^(k+1))
5. stop if the Q-values have converged, otherwise go to 1

Since QLU is equivalent to Q-Learning, QLU converges to the optimal Q-values under the same conditions as QL (e.g. [2]). The main advantage of (NN-)QLU is that this algorithm only needs one value function no matter how many assets are concerned and how fine the grid of actions is: Q*(($, K), a) = c(K, a) + Q*($, K'). Interestingly, the convergence of QLU to an optimal policy does not rely on an explicit exploration strategy because the randomly chosen capital K'_t in step 1 simulates a random action which was responsible for the transition from K_t.
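A minimal tabular version of the QLU iteration above can be sketched as follows; the two-state market, the cost function and the toy return dynamics are assumptions for illustration only:

```python
import random

# Toy tabular QLU (steps 1-5 above): a single Q-table over ($, K') pairs.
# Market states, returns and costs are invented for illustration.

random.seed(0)
markets = [0, 1]                  # discretized market states $
allocs = [0.0, 1.0]               # allocations K' (all cash / all stock)
gamma, eta = 0.95, 0.05

def cost(K, a):                   # deterministic transaction cost c(K, a)
    return -0.002 * abs(a - K)

def market_return(s_next, K_prime):   # toy r'($_t, K'_t, $_{t+1})
    return K_prime * (0.01 if s_next == 1 else -0.01)

Q = {(s, K): 0.0 for s in markets for K in allocs}
for k in range(5000):
    # step 1: draw market patterns and a random allocation K'_t
    s, s_next = random.choice(markets), random.choice(markets)
    Kp = random.choice(allocs)
    # step 2: return and Q-values for all actions (toy: action a sets K' = a)
    r = market_return(s_next, Kp)
    best_next = max(cost(Kp, a) + Q[(s_next, a)] for a in allocs)
    # steps 3-4: temporal difference and update
    d = r + gamma * best_next - Q[(s, Kp)]
    Q[(s, Kp)] += eta * d
    # step 5 (convergence test) omitted: fixed number of sweeps instead

greedy = {s: max(allocs, key=lambda a: cost(0.0, a) + Q[(s, a)])
          for s in markets}
```

Note how the randomly drawn K'_t in step 1 plays the role of exploration, exactly as the text claims: no explicit exploration policy is needed.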
In combination with the randomly chosen market state $_t, a sufficient exploration of the action and state space is guaranteed.

2.3 Model-free policy iteration

The reformulation also allows the design of a policy iteration algorithm by alternating a policy evaluation phase (PE) and a policy improvement (PI) step. Defining the temporal differences d_t for the approximation Q^(k)_{μ_l} of the policy μ_l in the k-th step of PE,

    d_t := r'($_t, K'_t, $_{t+1}) + γ [ c(K_{t+1}, μ_l($_{t+1}, K_{t+1})) + Q^(k)_{μ_l}($_{t+1}, K'_{t+1}) ] − Q^(k)_{μ_l}($_t, K'_t) ,

leads to the following update equation for tabulated Q-values:

    Q^(k+1)_{μ_l}($_t, K'_t) = Q^(k)_{μ_l}($_t, K'_t) + η_k d_t .

After convergence, one can improve the policy μ_l to μ_{l+1} by

    μ_{l+1}($_t, K_t) = argmax_a [ c(K_t, a) + Q_{μ_l}($_t, K'_t) ] .

By alternating the two steps PE and PI, the sequence of policies [μ_l(x)]_{l=0,...} converges under the typical assumptions to the optimal policy μ*(x) [2]. Note that policy iteration is normally not possible using classical QL if one does not have an appropriate model at hand. The introduction of the deterministic intermediate step allows one to start with an initial strategy (e.g. given by a broker), which can be subsequently optimized by model-free policy iteration trained with historical data of the financial market. Generalization to parameterized value functions is straightforward.

2.4 Experiments on the German Stock Index DAX

The NN-QLU algorithm is now tested on a real world task: assume that an investor wishes to invest her/his capital into a portfolio of stocks which behaves like the German stock index DAX. Her/his alternative is to keep the capital in the certain asset cash, referred to as DM. We compare the resulting strategy with three benchmarks, namely Neuro-Fuzzy, Buy&Hold and the naive prediction. The Buy&Hold strategy invests at the first time step in the DAX and only sells at the end. The naive prediction invests if the past return of the DAX has been positive and vice versa.
The third is based on a Neuro-Fuzzy model which was optimized to predict the daily changes of the DAX [8]. The heuristic benchmark strategy is then constructed by taking the sign of the prediction as a trading signal, such that a positive prediction leads to an investment in stocks. The input vector of the Neuro-Fuzzy model, which consists of the DAX itself and 11 other influencing market variables, was carefully optimized for optimal prediction. These inputs also constitute the $_t part of the state vector which describes the market within the NN-QLU algorithm. The data is split into a training set (from 2. Jan. 1986 to 31. Dec. 1994) and a test set (from 2. Jan. 1993 to 1. Aug. 1996). The transaction costs c_t are 0.2% of the invested capital if K_t is changed from DM to DAX, which is realistic for financial institutions. Referring to an epoch as one loop over all training patterns, the training proceeds as outlined in the previous section for 10000 epochs with η_k = η_0 · 0.999^k and start value η_0 = 0.05.

Table 1: Comparison of the profitability of the strategies, the number of position changes and investments in DAX for the test (training) data.

    strategy         | profit      | investments in DAX | position changes
    NN-QLU           | 1.60 (3.74) | 70 (73)%           | 30 (29)%
    Neuro-Fuzzy      | 1.35 (1.98) | 53 (53)%           | 50 (52)%
    Naive Prediction | 0.80 (1.06) | 51 (51)%           | 51 (48)%
    Buy&Hold         | 1.21 (1.46) | 100 (100)%         | 0 (0)%

The strategy constructed with the NN-QLU algorithm, using a neural network with 8 hidden neurons and a linear output, clearly beats the benchmarks. The capital at the end of the test set (training set) exceeds the second best strategy, Neuro-Fuzzy, by about 18.5% (89%) (fig. 1). One reason for this success is that QLU changes the position less often and thus avoids expensive transaction costs. The Neuro-Fuzzy policy changes almost every second day whereas NN-QLU changes only every third day (see tab. 1).
It is interesting to analyze the learning behavior during training by evaluating the strategies of NN-QLU after each epoch. At the beginning, the policies suggest either never changing the allocation or investing in the DAX at every step. After some thousand epochs, these bang-bang strategies start to differentiate. Simultaneously, the more complex the strategies become, the more profit they generate (fig. 2).

Figure 1: Comparison of the development of the capital for the test set (left) and the training set (right). The NN-QLU strategy clearly beats all the benchmarks.

Figure 2: Training course: percentage of DAX investments (left), profitability measured as the average return over 60 days on the training set (right).

3 Controlling the Variance of the Investment Strategies

3.1 Risk-adjusted MDPs

People are not only interested in maximizing the return, but also in controlling the risk of their investments. This has been formalized in the Markowitz portfolio-selection, which aims for an allocation with the maximal expected return for a given risk level [4]. Given a stationary policy μ(x) with finite state space, the associated value function V^μ(x) and its variance σ²(V^μ(x)) can be defined as

    V^μ(x) = E[ Σ_{t=0}^∞ γ^t r(x_t, μ(x_t), x_{t+1}) | x_0 = x ] ,
    σ²(V^μ(x)) = E[ ( Σ_{t=0}^∞ γ^t r(x_t, μ(x_t), x_{t+1}) − V^μ(x) )² | x_0 = x ] .

Then, an optimal strategy μ*(x; λ) for a risk-adjusted MDP (see [9], p. 410, for variance-penalized MDPs) is

    μ*(x; λ) = argmax_μ [ V^μ(x) − λ σ²(V^μ(x)) ]   for λ > 0.

By variation of λ, one can construct so-called efficient portfolios which have minimal risk for each achievable level of expected return. But in comparison to classical portfolio theory, this approach manages multi-period portfolio management systems including transaction costs.
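The variance-penalized criterion can be illustrated with a small Monte-Carlo sketch; the fixed-fraction policies and the Gaussian toy returns are assumptions for illustration, not the paper's method of computing σ²:

```python
import random

# Monte-Carlo sketch of section 3.1: estimate V^mu and sigma^2(V^mu) by
# rollouts, then maximize V - lambda * sigma^2. Returns are toy Gaussians.

random.seed(1)
gamma = 0.9

def rollout_return(stock_fraction, horizon=30):
    """One discounted return sample for a fixed-fraction policy."""
    G, discount = 0.0, 1.0
    for _ in range(horizon):
        G += discount * stock_fraction * random.gauss(0.005, 0.02)
        discount *= gamma
    return G

def evaluate(policy, n=4000):
    """Monte-Carlo estimates of V^mu(x) and sigma^2(V^mu(x))."""
    samples = [rollout_return(policy) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((g - mean) ** 2 for g in samples) / n
    return mean, var

def risk_adjusted_best(policies, lam):
    """mu*(lam) = argmax over policies of V - lam * sigma^2."""
    def score(p):
        v, var = evaluate(p)
        return v - lam * var
    return max(policies, key=score)

risk_neutral = risk_adjusted_best([0.0, 0.5, 1.0], lam=0.0)   # prefers stocks
risk_averse = risk_adjusted_best([0.0, 0.5, 1.0], lam=50.0)   # prefers cash
```

Sweeping λ traces out the efficient frontier described in the text: each λ selects the policy with minimal risk for its achievable level of expected return.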
Furthermore, typical min-max requirements on the trading volume and other allocation constraints can easily be implemented by constraining the action space.

3.2 Non-linear Utility Functions

In general, it is not possible to compute σ²(V^μ(x)) with (approximate) dynamic programming or reinforcement techniques, because σ²(V^μ(x)) cannot be written as a recursive Bellman equation. One solution to this problem is the use of a return function r_t which penalizes high variance. In financial analysis, the Sharpe ratio, which relates the mean of the single returns to their variance, i.e., r̄/σ(r), is often employed to describe the smoothness of an equity curve. For example, Moody has developed a Sharpe-ratio based error function and combines it with a recursive training procedure [5] (see also [3]). The limitation of the Sharpe ratio is that it also penalizes upside volatility. For this reason, the use of a utility function with a negative second derivative, typical for risk averse investors, seems to be more promising. For such return functions an additional unit increase is less valuable than the last unit increase [4]. An example is r = log(new portfolio value / old portfolio value), which also penalizes losses much more strongly than it rewards gains. The Q-function Q(x, a) may lead to intermediate values of a* as shown in the figure below.

[Figure: a risk-averse utility as a function of the relative change of the portfolio value (left), and Q-values as a function of the percentage of investment in the uncertain asset (right).]

4 Conclusion and Future Work

Two improvements of Q-learning have been proposed to bridge the gap between classical portfolio management and asset allocation with adaptive dynamic programming. It is planned to apply these techniques within the framework of a European Community sponsored research project in order to design a decision support system for strategic asset allocation [7].
Future work includes approximations and variational methods to compute explicitly the risk σ²(V^μ(x)) of a policy.

References

[1] D. P. Bertsekas. Dynamic Programming and Optimal Control, vol. I. Athena Scientific, 1995.
[2] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[3] M. Choey and A. S. Weigend. Nonlinear trading models through Sharpe Ratio maximization. In Proc. of NNCM'96, 1997. World Scientific.
[4] E. J. Elton and M. J. Gruber. Modern Portfolio Theory and Investment Analysis. 1995.
[5] J. Moody, L. Wu, Y. Liao, and M. Saffell. Performance Functions and Reinforcement Learning for Trading Systems and Portfolios. Journal of Forecasting, 1998. Forthcoming.
[6] R. Neuneier. Optimal asset allocation using adaptive dynamic programming. In Proc. of Advances in Neural Information Processing Systems, vol. 8, 1996.
[7] R. Neuneier, H. G. Zimmermann, P. Hierve, and P. Nairn. Advanced Adaptive Asset Allocation. EU Neuro-Demonstrator, 1997.
[8] R. Neuneier, H. G. Zimmermann, and S. Siekmann. Advanced Neuro-Fuzzy in Finance: Predicting the German Stock Index DAX, 1996. Invited presentation at ICONIP'96, Hong Kong; available by email from Ralph.Neuneier@mchp.siemens.de.
[9] M. L. Puterman. Markov Decision Processes. John Wiley & Sons, 1994.
[10] S. P. Singh. Learning to Solve Markovian Decision Processes. CMPSCI TR 93-77, University of Massachusetts, November 1993.
[11] C. J. C. H. Watkins and P. Dayan. Technical Note: Q-Learning. Machine Learning: Special Issue on Reinforcement Learning, 8(3/4):279-292, May 1992.
Nonparametric Model-Based Reinforcement Learning

Christopher G. Atkeson
College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280, USA
ATR Human Information Processing, 2-2 Hikaridai, Seiko-cho, Soraku-gun, 619-02 Kyoto, Japan
cga@cc.gatech.edu, http://www.cc.gatech.edu/fac/Chris.Atkeson/

Abstract

This paper describes some of the interactions of model learning algorithms and planning algorithms we have found in exploring model-based reinforcement learning. The paper focuses on how local trajectory optimizers can be used effectively with learned nonparametric models. We find that trajectory planners that are fully consistent with the learned model often have difficulty finding reasonable plans in the early stages of learning. Trajectory planners that balance obeying the learned model with minimizing cost (or maximizing reward) often do better, even if the plan is not fully consistent with the learned model.

1 INTRODUCTION

We are exploring the use of nonparametric models in robot learning (Atkeson et al., 1997b; Atkeson and Schaal, 1997). This paper describes the interaction of model learning algorithms and planning algorithms, focusing on how local trajectory optimization can be used effectively with nonparametric models in reinforcement learning. We find that trajectory optimizers that are fully consistent with the learned model often have difficulty finding reasonable plans in the early stages of learning. The message of this paper is that a planner should not be entirely consistent with the learned model during model-based reinforcement learning.
Trajectory optimizers that balance obeying the learned model with minimizing cost (or maximizing reward) often do better, even if the plan is not fully consistent with the learned model.

Figure 1: A: Planning in terms of trajectory segments. B: Planning in terms of trajectories all the way to a goal point.

Two kinds of reinforcement learning algorithms are direct (non-model-based) and indirect (model-based). Direct reinforcement learning algorithms learn a policy or value function without explicitly representing a model of the controlled system (Sutton et al., 1992). Model-based approaches learn an explicit model of the system simultaneously with a value function and policy (Sutton, 1990, 1991a,b; Barto et al., 1995; Kaelbling et al., 1996). We will focus on model-based reinforcement learning, in which the learner uses a planner to derive a policy from a learned model and an optimization criterion.

2 CONSISTENT LOCAL PLANNING

An efficient approach to dynamic programming, a form of global planning, is to use local trajectory optimizers (Atkeson, 1994). These local planners find a plan for each starting point in a grid in the state space. Figure 1 compares the output of a traditional cell-based dynamic programming process with the output of a planner based on integrating local plans. Traditional dynamic programming generates trajectory segments from each cell to neighboring cells, while the planner we use generates entire trajectories. These locally optimal trajectories have local policies and local models of the value function along the trajectories (Dyer and McReynolds, 1970; Jacobson and Mayne, 1970).
The locally optimal trajectories are made consistent with their neighbors by using the local value function to predict the value of a neighboring trajectory. If all the local value functions are consistent with their neighbors, the aggregate value function is a unique solution to the Bellman equation and the corresponding trajectories and policy are globally optimal. We would like any local planning algorithm to produce a local model of the value function so we can perform this type of consistency checking. We would also like a local policy from the local planner, so we can respond to disturbances and modeling errors. Differential dynamic programming is a local planner that has these characteristics (Dyer and McReynolds, 1970; Jacobson and Mayne, 1970). Differential dynamic programming maintains a local quadratic model of the value function along the current best trajectory x*(t):

    V(x, t) = V_0(t) + V_x(t)^T (x − x*(t)) + 0.5 (x − x*(t))^T V_xx(t) (x − x*(t))   (1)

as well as a local linear model of the corresponding policy:

    u(x, t) = u*(t) + K(t) (x − x*(t))   (2)

u(x, t) is the local policy at time t, the control signal u as a function of state x. u*(t) is the model's estimate of the control signal necessary to follow the current best trajectory x*(t). K(t) are the feedback gains that alter the control signals in response to deviations from the current best trajectory. These gains are also the first derivative of the policy along the current best trajectory. The first phase of each optimization iteration is to apply the current local policy to the learned model, integrating the modeled dynamics forward in time and seeing where the simulated trajectory goes. The second phase of the differential dynamic programming approach is to calculate the components of the local quadratic model of the value function at each point along the trajectory: the constant term V_0(t), the gradient V_x(t), and the Hessian V_xx(t).
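The two local models (1) and (2) can be written down directly; in the small numerical example below the nominal trajectory, gains and Hessians are purely illustrative:

```python
import numpy as np

# Evaluating DDP's local quadratic value model (eq. 1) and local linear
# policy (eq. 2) around a nominal trajectory x*(t), u*(t). Toy numbers.

def local_value(x, t, V0, Vx, Vxx, x_star):
    """V(x,t) = V0(t) + Vx(t)^T dx + 0.5 dx^T Vxx(t) dx, dx = x - x*(t)."""
    dx = x - x_star[t]
    return V0[t] + Vx[t] @ dx + 0.5 * dx @ Vxx[t] @ dx

def local_policy(x, t, u_star, K, x_star):
    """u(x,t) = u*(t) + K(t) (x - x*(t)): feedback around the nominal plan."""
    return u_star[t] + K[t] @ (x - x_star[t])

T, n, m = 3, 2, 1                       # horizon, state dim, control dim
x_star = np.zeros((T, n))               # nominal trajectory (toy: all zeros)
u_star = np.zeros((T, m))
V0, Vx = np.zeros(T), np.zeros((T, n))
Vxx = np.stack([np.eye(n)] * T)         # positive definite value Hessians
K = np.zeros((T, m, n))
K[:, 0, 0] = -1.0                       # simple proportional feedback gain

x = np.array([0.2, -0.1])               # a state off the nominal trajectory
v = local_value(x, 0, V0, Vx, Vxx, x_star)
u = local_policy(x, 0, u_star, K, x_star)
```

The local policy is what lets the controller respond to disturbances and modeling errors without replanning, as the text notes.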
These terms are constructed by integrating backwards in time along the trajectory. The value function is used to produce a new policy, which is represented using a new x*(t), u*(t), and K(t). The availability of a local value function and policy is an attractive feature of differential dynamic programming. However, we have found several problems when applying this method to model-based reinforcement learning with nonparametric models:
1. Methods that enforce consistency with the learned model need an initial trajectory that obeys that model, which is often difficult to produce.
2. The integration of the learned model forward in time often blows up when the learned model is inaccurate or when the plant is unstable and the current policy fails to stabilize it.
3. The backward integration to produce the value function and a corresponding policy uses derivatives of the learned model, which are often quite inaccurate in the early stages of learning, producing inaccurate value function estimates and ineffective policies.

3 INCONSISTENT LOCAL PLANNING

To avoid the problems of consistent local planners, we developed a trajectory optimization approach that does not integrate the learned model and does not require full consistency with the learned model. Unfortunately, the price of these modifications is that the method does not produce a value function or a policy, just a trajectory (x(t), u(t)). To allow inconsistency with the learned model, we represent the state history x(t) and the control history u(t) separately, rather than calculate x(t) from the learned model and u(t). We also modify the original optimization criterion C = Σ_k C(x_k, u_k) by changing the hard constraint that x_{k+1} = f(x_k, u_k) on each time step into a soft constraint:

    C_new = Σ_k [ C(x_k, u_k) + λ |x_{k+1} − f(x_k, u_k)|² ]   (3)

C(x_k, u_k) is the one-step cost in the original optimization criterion. λ is the penalty on the trajectory being inconsistent with the learned model x_{k+1} = f(x_k, u_k).
|x_{k+1} − f(x_k, u_k)| is the magnitude of the mismatch of the trajectory and the model prediction at time step k in the trajectory. λ provides a way to control the amount of inconsistency. A small λ reflects lack of confidence in the model, and allows the optimized trajectory to be inconsistent with the model in favor of reducing C(x_k, u_k). A large λ reflects confidence in the model, and forces the optimized trajectory to be more consistent with the model. λ can increase with time or with the number of learning trials. If we use a model that estimates the confidence level of a prediction, we can vary λ for each lookup based on x_k and u_k. Locally weighted learning techniques provide exactly this type of local confidence estimate (Atkeson et al., 1997a).

Figure 2: The SARCOS robot arm with a pendulum gripped in the hand. The pendulum axis is aligned with the fingers and with the forearm in this arm configuration.

Now that we are not integrating the trajectory we can use more compact representations of the trajectory, such as splines (Cohen, 1992) or wavelets (Liu et al., 1994). We no longer require that x_{k+1} = f(x_k, u_k), which is a condition difficult to fulfill without having x and u represented as independent values on each time step. We can now parameterize the trajectory using the spline knot points, for example. In this work we used B splines (Cohen, 1992) to represent the trajectory. Other choices for spline basis functions would probably work just as well. We can use any nonlinear programming or function optimization method to minimize the criterion in Eq. 3. In this work we used Powell's method (Press et al., 1988) to optimize the knot points, a method which is convenient to use but not particularly efficient.

4 IMPLEMENTATION ON AN ACTUAL ROBOT

Both local planning methods work well with learned parametric models.
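Equation 3's soft constraint can be sketched as a plain function of independent state and control histories; the toy dynamics f(x, u) = x + u and the costs are assumptions for illustration:

```python
# Sketch of the inconsistent-planning criterion (eq. 3): the hard constraint
# x_{k+1} = f(x_k, u_k) becomes a penalty weighted by lambda. Toy dynamics.

def f(x, u):                       # learned model (toy: x' = x + u)
    return x + u

def step_cost(x, u):               # one-step cost C(x_k, u_k)
    return x * x + 0.1 * u * u

def soft_criterion(xs, us, lam):
    """C_new = sum_k [ C(x_k,u_k) + lam * |x_{k+1} - f(x_k,u_k)|^2 ]."""
    total = 0.0
    for k in range(len(us)):
        mismatch = xs[k + 1] - f(xs[k], us[k])
        total += step_cost(xs[k], us[k]) + lam * mismatch ** 2
    return total

us = [-0.5, -0.25]
xs_consistent = [1.0, 0.5, 0.25]   # obeys the toy model exactly: no penalty
xs_shortcut = [1.0, 0.0, 0.0]      # ignores the model to reach 0 at once

low = soft_criterion(xs_shortcut, us, lam=0.1)      # inconsistency is cheap
high = soft_criterion(xs_shortcut, us, lam=1e4)     # inconsistency is ruinous
consistent = soft_criterion(xs_consistent, us, lam=1e4)
```

With a small λ the optimizer may prefer the model-violating shortcut; with a large λ only model-consistent trajectories remain competitive, which is the trade-off the text describes.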
However, differential dynamic programming did not work at all with learned nonparametric models, for reasons already discussed. This section describes how the inconsistent local planning method was used in an application of model-based reinforcement learning: robot learning from demonstration using a pendulum swing up task (Atkeson and Schaal, 1997). The pendulum swing up task is a more complex version of the pole or broom balancing task (Spong, 1995). The hand holds the axis of the pendulum, and the pendulum rotates about this hinge in an angular movement (Figure 2). Instead of starting with the pendulum vertical and above its rotational joint, the pendulum is hanging down from the hand, and the goal of the swing up task is to move the hand so that the pendulum swings up and is then balanced in the inverted position. The swing up task was chosen for study because it is a difficult dynamic maneuver and requires practice for humans to learn, but it is easy to tell if the task is successfully executed (at the end of the task the pendulum is balanced upright and does not fall down). We implemented learning from demonstration on a hydraulic seven degree of freedom anthropomorphic robot arm (SARCOS Dextrous Arm located at ATR, Figure 2).

Figure 3: The hand and pendulum motion during robot learning from demonstration using a nonparametric model (human demonstration, 1st trial (imitation), 2nd trial, 3rd trial).

The robot observed its own performance with the same stereo vision system that was used to observe the human demonstrations. The robot observed a human swinging up a pendulum using a horizontal hand movement (dotted line in Figure 3).
The most obvious approach to learning from demonstration is to have the robot imitate the human motion, by following the human hand trajectory. The dashed lines in Figure 3 show the robot hand motion as it attempts to follow the human demonstration of the swing up task, and the corresponding pendulum angles. Because of differences in the task dynamics for the human and for the robot, this direct imitation failed to swing the pendulum up: the pendulum did not get even halfway up to the vertical position, and then oscillated about the hanging down position. The approach we used was to apply a planner to finding a swing up trajectory that worked for the robot, based on learning both a model and a reward function and using the human demonstration to initialize the planning process. The data collected during the initial imitation trial and subsequent trials was used to build a model. Nonparametric models were constructed using locally weighted learning as described in (Atkeson et al., 1997a). These models did not use knowledge of the model structure but instead assumed a general relationship mapping the current pendulum state and hand motion to the future pendulum state:

    (4)

where θ is the pendulum angle and x is the hand position. Training data from the demonstrations was stored in a database, and a local model was constructed to answer each query. Meta-parameters such as distance metrics were tuned using cross validation on the training set. For example, cross validation was able to quickly establish that hand position and velocity (x and ẋ) played an insignificant role in predicting future pendulum angular velocities. The planner used a cost function that penalizes deviations from the demonstration trajectory sampled at 60 Hz:

    C(x_k, u_k) = (x_k − x_k^d)^T (x_k − x_k^d) + u_k^T u_k   (5)

where the state is x = (θ, θ̇, x, ẋ), x^d is the demonstrated motion, k is the sample index, and the control is u = (ẍ). Equation 3 was optimized using B splines to represent x and u.
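The tracking criterion (5) is a simple sum over samples; the scalar sketch below, with invented short trajectories rather than robot data, shows its structure:

```python
# Sketch of the swing-up cost (eq. 5): squared deviation from the
# demonstrated states plus control effort. Trajectories here are invented.

def tracking_cost(xs, us, xs_demo):
    """sum_k (x_k - x^d_k)^2 + u_k^2 (scalar stand-in for the vector form)."""
    total = 0.0
    for k, u in enumerate(us):
        err = xs[k] - xs_demo[k]
        total += err * err + u * u
    return total

xs_demo = [0.0, 0.1, 0.2]                                         # demonstration
exact = tracking_cost([0.0, 0.1, 0.2], [0.0, 0.0, 0.0], xs_demo)  # perfect track
off = tracking_cost([0.0, 0.2, 0.4], [0.1, 0.1, 0.1], xs_demo)    # drifts away
```

In the full criterion (eq. 3) this tracking term is traded off against the λ-weighted model-consistency penalty over the spline-parameterized trajectory.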
The knot points for x and u were initially separately optimized to minimize (6) and (7). The tolerated inconsistency λ was kept constant during a set of trials and set at values ranging from 100 to 100000. The exact value of λ did not make much difference. Learning failed when λ was set to zero, as there was no way for the learned model to affect the plan. The planning process failed when λ was set too high, enforcing the learned model too strongly. The next attempt got the pendulum up a little more. Adding this new data to the database and replanning resulted in a movement that succeeded (trial 3 in Figure 3). The behavior shown in Figure 3 is quite repeatable. The balancing behavior at the end of the trial is learned separately and continues for several minutes, at which point the trial is automatically terminated (Schaal, 1997).

5 DISCUSSION AND CONCLUSION

We applied locally weighted regression (Atkeson et al., 1997a) in an attempt to avoid the structural modeling errors of idealized parametric models during model-based reinforcement learning, and also to see if a priori knowledge of the structure of the task dynamics was necessary. In an exploration of the swing up task, we found that these nonparametric models required a planner that ignored the learned model to some extent. The fundamental reason for this is that planners amplify modeling error. Mechanisms for this amplification include:
• The planners take advantage of any modeling error to reduce the cost of the planned trajectory, so the planning process seeks out modeling error that reduces apparent cost.
• Some planners use derivatives of the model, which amplifies any noise in the model.

Models that support fast learning will have errors and noise. For example, in order to learn a model of the complexity necessary to accurately model the full robot dynamics between the commanded and actual hand accelerations, a large amount of data is required, independent of modeling technique.
The input would be 21 dimensional (robot state and command) ignoring actuator dynamics. Because there are few robot trials during learning, there is not enough data to make such a model even just in the vicinity of a successful trajectory. If it were required that enough data be collected during learning to make an accurate model, robot learning would be greatly slowed down. One solution to this error amplification is to bias the nonparametric modeling tools to oversmooth the data. This reduces the benefit of nonparametric modeling, and also ignores the true learned model to some degree. Our solution to this problem is to introduce a controlled amount of inconsistency with the learned model into the planning process. The control parameter λ is explicit and can be changed as a function of time, amount of data, or as a function of confidence in the model at the query point.

References

Atkeson, C. G. (1994). Using local trajectory optimizers to speed up global optimization in dynamic programming. In Cowan, J. D., Tesauro, G., and Alspector, J., editors, Advances in Neural Information Processing Systems 6, pages 663-670. Morgan Kaufmann, San Mateo, CA.
Atkeson, C. G., Moore, A. W., and Schaal, S. (1997a). Locally weighted learning. Artificial Intelligence Review, 11:11-73.
Atkeson, C. G., Moore, A. W., and Schaal, S. (1997b). Locally weighted learning for control. Artificial Intelligence Review, 11:75-113.
Atkeson, C. G. and Schaal, S. (1997). Robot learning from demonstration. In Proceedings of the 1997 International Conference on Machine Learning.
Barto, A. G., Bradtke, S. J., and Singh, S. P. (1995). Learning to act using real-time dynamic programming. Artificial Intelligence, 72(1):81-138.
Cohen, M. F. (1992). Interactive spacetime control for animation. Computer Graphics, 26(2):293-302.
Dyer, P. and McReynolds, S. (1970). The Computational Theory of Optimal Control. Academic, NY.
Jacobson, D. and Mayne, D. (1970).
Differential Dynamic Programming. Elsevier, NY.
Kaelbling, L. P., Littman, M. L., and Moore, A. W. (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285.
Liu, Z., Gortler, S. J., and Cohen, M. F. (1994). Hierarchical spacetime control. Computer Graphics (SIGGRAPH '94 Proceedings), pages 35-42.
Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P. (1988). Numerical Recipes in C. Cambridge University Press, New York, NY.
Schaal, S. (1997). Learning from demonstration. In Mozer, M. C., Jordan, M., and Petsche, T., editors, Advances in Neural Information Processing Systems 9, pages 1040-1046. MIT Press, Cambridge, MA.
Spong, M. W. (1995). The swing up control problem for the acrobot. IEEE Control Systems Magazine, 15(1):49-55.
Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Seventh International Machine Learning Workshop, pages 216-224. Morgan Kaufmann, San Mateo, CA. http://envy.cs.umass.edu/People/sutton/publications.html.
Sutton, R. S. (1991a). Dyna, an integrated architecture for learning, planning and reacting. Working Notes of the 1991 AAAI Spring Symposium on Integrated Intelligent Architectures, pp. 151-155, and SIGART Bulletin 2, pp. 160-163. http://envy.cs.umass.edu/People/sutton/publications.html.
Sutton, R. S. (1991b). Planning by incremental dynamic programming. In Eighth International Machine Learning Workshop, pages 353-357. Morgan Kaufmann, San Mateo, CA. http://envy.cs.umass.edu/People/sutton/publications.html.
Sutton, R. S., Barto, A. G., and Williams, R. J. (1992). Reinforcement learning is direct adaptive optimal control. IEEE Control Systems Magazine, 12:19-22.
1997
On Parallel Versus Serial Processing: A Computational Study of Visual Search

Eyal Cohen
Department of Psychology
Tel-Aviv University
Tel Aviv 69978, Israel
eyalc@devil.tau.ac.il

Eytan Ruppin
Departments of Computer Science & Physiology
Tel-Aviv University
Tel Aviv 69978, Israel
ruppin@math.tau.ac.il

Abstract

A novel neural network model of pre-attention processing in visual-search tasks is presented. Using displays of line orientations taken from Wolfe's experiments [1992], we study the hypothesis that the distinction between parallel versus serial processes arises from the availability of global information in the internal representations of the visual scene. The model operates in two phases. First, the visual displays are compressed via principal-component-analysis. Second, the compressed data is processed by a target detector module in order to identify the existence of a target in the display. Our main finding is that targets in displays which were found experimentally to be processed in parallel can be detected by the system, while targets in experimentally-serial displays cannot. This fundamental difference is explained via variance analysis of the compressed representations, providing a numerical criterion distinguishing parallel from serial displays. Our model yields a mapping of response-time slopes that is similar to Duncan and Humphreys's "search surface" [1989], providing an explicit formulation of their intuitive notion of feature similarity. It presents a neural realization of the processing that may underlie the classical metaphorical explanations of visual search.

1 Introduction

This paper presents a neural model of pre-attentive visual processing. The model explains why certain displays can be processed very fast, "in parallel", while others require slower, "serial" processing, in subsequent attentional systems.
Our approach stems from the observation that the visual environment is overflowing with diverse information, but the biological information-processing systems analyzing it have a limited capacity [1]. This apparent mismatch suggests that data compression should be performed at an early stage of perception, and that via an accompanying process of dimension reduction, only a few essential features of the visual display should be retained. We propose that only parallel displays incorporate global features that enable fast target detection, and hence they can be processed pre-attentively, with all items (target and distractors) examined at once. On the other hand, in serial displays' representations, global information is obscure and target detection requires a serial, attentional scan of local features across the display. Using principal-component-analysis (PCA), our main goal is to demonstrate that neural systems employing compressed, dimensionally reduced representations of the visual information can successfully process only parallel displays and not serial ones. The source of this difference will be explained via variance analysis of the displays' projections on the principal axes. The modeling of visual attention in cognitive psychology involves the use of metaphors, e.g., Posner's beam of attention [2]. A visual attention system of a surviving organism must supply fast answers to burning issues such as detecting a target in the visual field and characterizing its primary features. An attentional system employing a constant-speed beam of attention [3] probably cannot perform such tasks fast enough and a pre-attentive system is required. Treisman's feature integration theory (FIT) describes such a system [4]. According to FIT, features of separate dimensions (shape, color, orientation) are first coded pre-attentively in a locations map and in separate feature maps, each map representing the values of a particular dimension.
Then, in the second stage, attention "glues" the features together, conjoining them into objects at their specified locations. This hypothesis was supported using the visual-search paradigm [4], in which subjects are asked to detect a target within an array of distractors, which differ on given physical dimensions such as color, shape or orientation. As long as the target is significantly different from the distractors in one dimension, the reaction time (RT) is short and shows almost no dependence on the number of distractors (low RT slope). This result suggests that in this case the target is detected pre-attentively, in parallel. However, if the target and distractors are similar, or the target specifications are more complex, reaction time grows considerably as a function of the number of distractors [5, 6], suggesting that the displays' items are scanned serially using an attentional process. FIT and other related cognitive models of visual search are formulated on the conceptual level and do not offer a detailed description of the processes involved in transforming the visual scene from an ordered set of data points into given values in specified feature maps. This paper presents a novel computational explanation of the source of the distinction between parallel and serial processing, progressing from general metaphorical terms to a neural network realization. Interestingly, we also come out with a computational interpretation of some of these metaphorical terms, such as feature similarity.

2 The Model

We focus our study on visual-search experiments of line orientations performed by Wolfe et al. [7], using three set-sizes composed of 4, 8 and 12 items. The number of items equals the number of distractors + target in target displays, and in non-target displays the target was replaced by another distractor, keeping a constant set-size.
Five experimental conditions were simulated: (A) a 20 degrees tilted target among vertical distractors (homogeneous background); (B) a vertical target among 20 degrees tilted distractors (homogeneous background); (C) a vertical target among a heterogeneous background (a mixture of lines with ±20, ±40, ±60, ±80 degrees orientations); (E) a vertical target among two flanking distractor orientations (at ±20 degrees); and (G) a vertical target among two flanking distractor orientations (±40 degrees). The response times (RT) as a function of the set-size measured by Wolfe et al. [7] show that type A, B and G displays are scanned in a parallel manner (1.2, 1.8, 4.8 msec/item for the RT slopes), while type C and E displays are scanned serially (19.7, 17.5 msec/item). The input displays of our system were prepared following Wolfe's prescription: nine images of the basic line orientations were produced as nine matrices of gray-level values. Displays for the various conditions of Wolfe's experiments were produced by randomly assigning these matrices into a 4x4 array, yielding 128x100 display-matrices that were transformed into 12800 display-vectors. A total number of 2400 displays were produced in 30 groups (80 displays in each group): 5 conditions (A, B, C, E, G) x target/non-target x 3 set-sizes (4, 8, 12). Our model is composed of two neural network modules connected in sequence as illustrated in Figure 1: a PCA module which compresses the visual data into a set of principal axes, and a Target Detector (TD) module. The latter module uses the compressed data obtained by the former module to detect a target within an array of distractors. The system is presented with line-orientation displays as described above.
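As a concrete illustration, the display-construction recipe above can be sketched as follows. The tile size, grid geometry, and line-drawing routine here are simplified placeholders, not the paper's code; in particular the paper's actual images yield 12800-dimensional vectors, whereas this toy version produces a different dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)

def line_tile(angle_deg, size=32):
    """Toy gray-level image of a single line at the given orientation
    (a stand-in for the nine basic line-orientation images)."""
    tile = np.zeros((size, size))
    c = (size - 1) / 2.0
    t = np.linspace(-c, c, size)
    a = np.deg2rad(angle_deg)
    cols = np.round(c + t * np.sin(a)).astype(int)
    rows = np.round(c - t * np.cos(a)).astype(int)
    tile[rows, cols] = 1.0
    return tile

def make_display(target_angle, distractor_angles, set_size, grid=4, size=32):
    """Place set_size items (one target, the rest distractors) in random
    cells of a grid x grid array and return the flattened display-vector."""
    display = np.zeros((grid * size, grid * size))
    cells = rng.choice(grid * grid, size=set_size, replace=False)
    angles = [target_angle] + list(rng.choice(distractor_angles, set_size - 1))
    for cell, ang in zip(cells, angles):
        r, col = divmod(cell, grid)
        display[r * size:(r + 1) * size, col * size:(col + 1) * size] = \
            line_tile(ang, size)
    return display.ravel()

# Condition A: a 20-degree tilted target among vertical distractors.
v = make_display(20, [0], set_size=8)
```

Generating target and non-target variants across the five conditions and three set-sizes, as in the text, then amounts to looping over condition parameters.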
Figure 1: General architecture of the model. [The figure shows the display feeding a 12800-unit input layer; a data compression (PCA) module; and a Target Detector (TD) module with a 12-unit intermediate layer and a 1-unit output layer coding TARGET = +1, NO-TARGET = -1.]

For the PCA module we use the neural network proposed by Sanger, with the connections' values updated in accordance with his Generalized Hebbian Algorithm (GHA) [8]. The outputs of the trained system are the projections of the display-vectors along the first few principal axes, ordered with respect to their eigenvalue magnitudes. Compressing the data is achieved by choosing outputs from the first few neurons (maximal variance and minimal information loss). Target detection in our system is performed by a feed-forward (FF) 3-layered network, trained via a standard back-propagation algorithm in a supervised-learning manner. The input layer of the FF network is composed of the first eight output neurons of the PCA module. The transfer function used in the intermediate and output layers is the hyperbolic tangent function.

3 Results

3.1 Target Detection

The performance of the system was examined in two simulation experiments. In the first, the PCA module was trained only with "parallel" task displays, and in the second, only with "serial" task displays. There is an inherent difference in the ability of the model to detect targets in parallel versus serial displays. In parallel task conditions (A, B, G) the target detector module learns the task after a comparatively small number (800 to 2000) of epochs, reaching a performance level of almost 100%. However, the target detector module is not capable of learning to detect a target in serial displays (C, E conditions).
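The GHA weight update used by the compression module can be sketched as follows. This is a generic implementation of Sanger's rule, not the authors' code, and the learning rate is an arbitrary choice:

```python
import numpy as np

def gha_step(W, x, lr=1e-3):
    """One step of Sanger's Generalized Hebbian Algorithm.

    W : (k, d) weight matrix; after training on inputs x, row i of W
        converges to the i-th principal axis of the input distribution.
    The lower-triangular term decorrelates each output from the ones
    above it, which is what orders the axes by eigenvalue magnitude."""
    y = W @ x
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

Feeding display-vectors through gha_step repeatedly leaves the rows of W approximating the leading principal axes, so the projections W @ x serve as the compressed representation handed to the target detector.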
Interestingly, these results hold (1) whether the preceding PCA module was trained to perform data compression using parallel task displays or serial ones, (2) whether the target detector was a linear simple perceptron, or the more powerful, non-linear network depicted in Figure 1, and (3) whether the full set of 144 principal axes (with non-zero eigenvalues) was used.

3.2 Information Span

To analyze the differences between parallel and serial tasks we examined the eigenvalues obtained from the PCA of the training-set displays. The eigenvalues of condition B (parallel) displays in 4 and 12 set-sizes and of condition C (serial-task) displays are presented in Figure 2. Each training set contains a mixture of target and non-target displays.

Figure 2: Eigenvalue spectrum of displays with different set-sizes, for parallel and serial tasks. [Panels (a) PARALLEL and (b) SERIAL plot eigenvalue against the number of the principal axis, for 4-item and 12-item displays.]

Due to the sparseness of the displays (a few black lines on a white background), it takes only 31 principal axes to describe the parallel training-set in full (see Fig. 2a; note that the remaining axes have zero eigenvalues, indicating that they contain no additional information), and 144 axes for the serial set (only the first 50 axes are shown in Fig. 2b). As evident, the eigenvalue distributions of the two display types are fundamentally different: in the parallel task, most of the eigenvalue "mass" is concentrated in the first few (15) principal axes, testifying that indeed, the dimension of the parallel displays space is quite confined. But for the serial task, the eigenvalues are distributed almost uniformly over 144 axes. This inherent difference is independent of set-size: 4 and 12-item displays have practically the same eigenvalue spectra.
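The eigenvalue-spectrum analysis above amounts to diagonalizing the covariance of the display-vectors and asking how much of the total variance the first few axes carry. A minimal sketch (generic PCA bookkeeping, not the authors' code):

```python
import numpy as np

def eigenvalue_spectrum(displays):
    """Eigenvalues of the displays' covariance matrix, largest first.
    displays : (n, d) array of display-vectors, typically with d >> n."""
    X = displays - displays.mean(axis=0)
    # For d >> n, diagonalize the n x n Gram matrix instead of the d x d
    # covariance; the nonzero eigenvalues are the same.
    vals = np.linalg.eigvalsh(X @ X.T / X.shape[0])[::-1]
    return np.clip(vals, 0.0, None)   # clip tiny negative numerical noise

def mass_in_first(vals, k):
    """Fraction of the total eigenvalue 'mass' carried by the first k axes."""
    return vals[:k].sum() / vals.sum()
```

On a sparse, low-dimensional display set (the parallel conditions), mass_in_first is close to 1 for small k; for the serial conditions the mass is spread over many more axes.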
3.3 Variance Analysis

The target detector inputs are the projections of the display-vectors along the first few principal axes. Thus, some insight into the source of the difference between parallel and serial tasks can be gained by performing a variance analysis on these projections. The five different task conditions were analyzed separately, taking a group of 80 target displays and a group of 80 non-target displays for each set-size. Two types of variances were calculated for the projections on the 5th principal axis: the "within groups" variance, which is a measure of the statistical noise within each group of 80 displays, and the "between groups" variance, which measures the separation between target and non-target groups of displays for each set-size. These variances were averaged for each task (condition), over all set-sizes. The resulting ratios Q of within-groups to between-groups standard deviations are: QA = 0.0259, QB = 0.0587, and QG = 0.0114 for parallel displays (A, B, G), and QE = 0.2125 and QC = 0.771 for serial ones (E, C). As evident, for parallel task displays the Q values are smaller by an order of magnitude compared with the serial displays, indicating a better separation between target and non-target displays in parallel tasks. Moreover, using Q as a criterion for the parallel/serial distinction one can predict that displays with Q << 1 will be processed in parallel, and serially otherwise, in accordance with the experimental response time (RT) slopes measured by Wolfe et al. [7]. These differences are further demonstrated in Figure 3, depicting projections of display-vectors on the sub-space spanned by the 5th, 6th and 7th principal axes. Clearly, for the parallel task (condition B), the PCA representations of the target-displays (plus signs) are separated from non-target representations (circles), while for serial displays (condition C) there is no such separation.
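One simple way to operationalize the Q criterion is sketched below. The exact averaging used in the paper (over set-sizes and conditions) is omitted, and the between-groups term here is taken, as an assumption, to be the distance between the two group means:

```python
import numpy as np

def q_ratio(target_proj, nontarget_proj):
    """Ratio of within-groups to between-groups standard deviation for the
    projections of target and non-target displays on one principal axis.
    Q << 1 predicts parallel search; larger Q predicts serial search."""
    within = 0.5 * (target_proj.std() + nontarget_proj.std())
    between = abs(target_proj.mean() - nontarget_proj.mean())
    return within / between
```

Well-separated target and non-target projection clouds give a small Q, while overlapping clouds give Q near or above 1, mirroring the parallel/serial split reported above.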
It should be emphasized that there is no other principal axis along which such a separation is manifested for serial displays.

Figure 3: Projections of display-vectors on the sub-space spanned by the 5th, 6th and 7th principal axes. Plus signs and circles denote target and non-target display-vectors respectively, (a) for a parallel task (condition B), and (b) for a serial task (condition C). Set-size is 8 items.

While Treisman and her co-workers view the distinction between parallel and serial tasks as a fundamental one, Duncan and Humphreys [5] claim that there is no sharp distinction between them, and that search efficiency varies continuously across tasks and conditions. The determining factors according to Duncan and Humphreys are the similarities between the target and the non-targets (T-N similarities) and the similarities between the non-targets themselves (N-N similarity). Displays with a homogeneous background (high N-N similarity) and a target which is significantly different from the distractors (low T-N similarity) will exhibit parallel, low RT slopes, and vice versa. This claim was illustrated by them using a qualitative "search surface" description as shown in Figure 4a. Based on results from our variance analysis, we can now examine this claim quantitatively: we have constructed a "search surface", using actual numerical data of RT slopes from Wolfe's experiments, replacing the N-N similarity axis by its mathematical manifestation, the within-groups standard deviation, and N-T similarity by the between-groups standard deviation (see Footnote 1).
The resulting surface (Figure 4b) is qualitatively similar to Duncan and Humphreys's. This interesting result testifies that the PCA representation succeeds in producing a viable realization of such intuitive terms as input similarity, and is compatible with the way we perceive the world in visual search tasks.

Figure 4: RT rates versus: (a) Input similarities (the search surface, reprinted from Duncan and Humphreys, 1989). (b) Standard deviations (within and between) of the PCA variance analysis. The asterisks denote Wolfe's experimental data.

4 Summary

In this work we present a two-component neural network model of pre-attentional visual processing. The model has been applied to the visual search paradigm performed by Wolfe et al. Our main finding is that when global-feature compression is applied to visual displays, there is an inherent difference between the representations of serial and parallel-task displays: the neural network studied in this paper has succeeded in detecting a target among distractors only for displays that were experimentally found to be processed in parallel. Based on the outcome of the variance analysis performed on the PCA representations of the visual displays, we present a quantitative criterion enabling one to distinguish between serial and parallel displays.

Footnote 1: In general, each principal axis contains information from different features, which may mask the information concerning the existence of a target. Hence, the first principal axis may not be the best choice for a discrimination task. In our simulations, the 5th axis, for example, was primarily dedicated to target information, and was hence used for the variance analysis (obviously, the neural network uses information from all the first eight principal axes).
Furthermore, the resulting 'search-surface' generated by the PCA components is in close correspondence with the metaphorical description of Duncan and Humphreys. The network demonstrates an interesting generalization ability: naturally, it can learn to detect a target in parallel displays from examples of such displays. However, it can also learn to perform this task from examples of serial displays only! On the other hand, we find that it is impossible to learn serial tasks, irrespective of the combination of parallel and serial displays that are presented to the network during the training phase. This generalization ability is manifested not only during the learning phase, but also during the performance phase; displays belonging to the same task have a similar eigenvalue spectrum, irrespective of the actual set-size of the displays, and this result holds true for parallel as well as for serial displays. The role of PCA in perception was previously investigated by Cottrell [9], who designed a neural network which performed tasks such as face identification and gender discrimination. One might argue that PCA, being a global component analysis, is not compatible with the existence of local feature detectors (e.g. orientation detectors) in the cortex. Our work is in line with recent proposals [10] that there exist two pathways for sensory input processing: a fast sub-cortical pathway that contains limited information, and a slow cortical pathway which is capable of providing richer representations of the stimuli. Given this assumption, this paper has presented the first neural realization of the processing that may underlie the classical metaphorical explanations involved in visual search.

References

[1] J. K. Tsotsos. Analyzing vision at the complexity level. Behavioral and Brain Sciences, 13:423-469, 1990.
[2] M. I. Posner, C. R. Snyder, and B. J. Davidson. Attention and the detection of signals. Journal of Experimental Psychology: General, 109:160-174, 1980.
[3] Y. Tsal.
Movement of attention across the visual field. Journal of Experimental Psychology: Human Perception and Performance, 9:523-530, 1983.
[4] A. Treisman and G. Gelade. A feature integration theory of attention. Cognitive Psychology, 12:97-136, 1980.
[5] J. Duncan and G. Humphreys. Visual search and stimulus similarity. Psychological Review, 96:433-458, 1989.
[6] A. Treisman and S. Gormican. Feature analysis in early vision: Evidence from search asymmetries. Psychological Review, 95:15-48, 1988.
[7] J. M. Wolfe, S. R. Friedman-Hill, M. I. Stewart, and K. M. O'Connell. The role of categorization in visual search for orientation. Journal of Experimental Psychology: Human Perception and Performance, 18:34-49, 1992.
[8] T. D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 2:459-473, 1989.
[9] G. W. Cottrell. Extracting features from faces using compression networks: Face, identity, emotion and gender recognition using holons. Proceedings of the 1990 Connectionist Models Summer School, pages 328-337, 1990.
[10] J. L. Armony, D. Servan-Schreiber, J. D. Cohen, and J. E. LeDoux. Computational modeling of emotion: exploration through the anatomy and physiology of fear conditioning. Trends in Cognitive Sciences, 1(1):28-34, 1997.
A Superadditive-Impairment Theory of Optic Aphasia

Michael C. Mozer
Dept. of Computer Science
University of Colorado
Boulder, CO 80309-0430

Mark Sitton
Dept. of Computer Science
University of Colorado
Boulder, CO 80309-0430

Martha Farah
Dept. of Psychology
University of Pennsylvania
Phila., PA 19104-6196

Abstract

Accounts of neurological disorders often posit damage to a specific functional pathway of the brain. Farah (1990) has proposed an alternative class of explanations involving partial damage to multiple pathways. We explore this explanation for optic aphasia, a disorder in which severe performance deficits are observed when patients are asked to name visually presented objects, but surprisingly, performance is relatively normal on naming objects from auditory cues and on gesturing the appropriate use of visually presented objects. We model this highly specific deficit through partial damage to two pathways: one that maps visual input to semantics, and the other that maps semantics to naming responses. The effect of this damage is superadditive, meaning that tasks which require one pathway or the other show little or no performance deficit, but the damage is manifested when a task requires both pathways (i.e., naming visually presented objects). Our model explains other phenomena associated with optic aphasia, and makes testable experimental predictions.

Neuropsychology is the study of disrupted cognition resulting from damage to functional systems in the brain. Generally, accounts of neuropsychological disorders posit damage to a particular functional system or a disconnection between systems. Farah (1990) suggested an alternative class of explanations for neuropsychological disorders: partial damage to multiple systems, which is manifested through interactions among the loci of damage. We explore this explanation for the neuropsychological disorder of optic aphasia.
Optic aphasia, arising from unilateral left posterior lesions, including occipital cortex and the splenium of the corpus callosum (Schnider, Benson, & Scharre, 1994), is marked by a deficit in naming visually presented objects, hereafter referred to as visual naming (Farah, 1990). However, patients can demonstrate recognition of visually presented objects nonverbally, for example, by gesturing the appropriate use of an object or sorting visual items into their proper superordinate categories (hereafter, visual gesturing). Patients can also name objects by nonvisual cues such as a verbal definition or typical sounds made by the objects (hereafter, auditory naming). The highly specific nature of the deficit rules out an explanation in terms of damage to a single pathway in a standard model of visual naming (Figure 1), suggesting that a more complex model is required, involving multiple semantic systems or multiple pathways to visual naming. However, a more parsimonious account is suggested by Farah (1990): optic aphasia might arise from partial lesions to two pathways in the standard model, those connecting visual input to semantics, and semantics to naming, and the effect of damage to these pathways is superadditive, meaning that tasks which require only one of these pathways (e.g., visual gesturing, or auditory naming) will be relatively unimpaired, whereas tasks requiring both pathways (e.g., visual naming) will show a significant deficit.

FIGURE 1. A standard box-and-arrow model of visual naming. The boxes denote levels of representation, and the arrows denote pathways mapping from one level of representation to another. Although optic aphasia cannot be explained by damage to the vision-to-semantics pathway or the semantics-to-naming pathway, Farah (1990) proposed an explanation in terms of partial damage to both pathways (the X's).
1 A MODEL OF SUPERADDITIVE IMPAIRMENTS

We present a computational model of the superadditive-impairment theory of optic aphasia by elaborating the architecture of Figure 1. The architecture has four pathways: visual input to semantics (V→S), auditory input to semantics (A→S), semantics to naming (S→N), and semantics to gesturing (S→G). Each pathway acts as an associative memory. The critical property of a pathway that is required to explain optic aphasia is a speed-accuracy trade-off: the initial output of a pathway appears rapidly, but it may be inaccurate. This "quick and dirty" guess is refined over time, and the pathway output asymptotically converges on the best interpretation of the input. We implement a pathway using the architecture suggested by Mathis and Mozer (1996). In this architecture, inputs are mapped to their best interpretations by means of a two-stage process (Figure 2). First, a quick, one-shot mapping is performed by a multilayer feedforward connectionist network to transform the input directly to its corresponding output. This is followed by a slower iterative clean-up process carried out by a recurrent attractor network. This architecture shows a speed-accuracy trade-off by virtue of the assumption that the feedforward mapping network does not have the capacity to produce exactly the right output to every input, especially when the inputs are corrupted by noise or are otherwise incomplete. Consequently, the clean-up stage is required to produce a sensible interpretation of the noisy output of the mapping network. Fully distributed attractor networks have been used for similar purposes (e.g., Plaut & Shallice, 1993). For simplicity, we adopt a localist attractor network with a layer of state units and a layer of radial basis function (RBF) units, one RBF unit per attractor. Each RBF or attractor unit measures the distance of the current state to the attractor that it represents. The activity of attractor unit i, a_i, is: FIGURE 2.
Connectionist implementation of a processing pathway. The pathway consists of a feedforward mapping network followed by a recurrent clean-up or attractor network. Circles denote connectionist processing units and arrows denote connections between units or between layers of units.

g_i(t) = β_i exp(-|s(t) - ξ_i|²)    (1)

a_i(t) = g_i(t) / Σ_j g_j(t)    (2)

where s(t) is the state unit activity vector at time t, ξ_i is the vector denoting the location of attractor i, and β_i is the strength of the attractor. The strength determines the region of the state space over which an attractor will exert its pull, and also the rate at which the state will converge to the attractor. The state units receive input from the mapping network and from the attractor units and are updated as follows:

s_i(t) = d_i(t) e_i(t) + (1 - d_i(t)) Σ_j a_j(t-1) ξ_ji    (3)

where s_i(t) is the activity of state unit i at time t, e_i is the ith output of the mapping net, ξ_ji is the ith element of attractor j, and d_i is given by

d_i(t) = h[1 - e_i(t-1)/ē_i(t)]    (4)

where h[.] is a linear threshold function that bounds activity between -1 and +1, and ē_i is a weighted time average of the ith output of the mapping net,

ē_i(t) = α e_i(t) + (1 - α) ē_i(t-1)    (5)

In all simulations, α = 0.02. The activity of the state units is governed by two forces: the external input from the feedforward net (first term in Equation 3) and the attractor unit activities (second term). The parameter d_i acts as a kind of attentional mechanism that modulates the relative influence of these two forces. The basic idea is that when the input coming from the mapping net is changing, the system should be responsive to the input and should not yet be concerned with interpreting the input. In this case, the input is copied straight through to the state units and hence d_i should have a value close to 1. When the input begins to stabilize, however, the focus shifts to interpreting the input and following the dynamics of the attractor network.
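A simplified sketch of this clean-up stage is given below. It keeps the strength-weighted RBF competition among attractors but replaces the time-varying d_i schedule with a constant d, so it is an illustration of the dynamics rather than the exact model:

```python
import numpy as np

def cleanup(e, attractors, strengths, steps=50, d=0.2):
    """Iterative localist clean-up. e is the (noisy) mapping-net output,
    attractors is a (k, dim) array of attractor locations, and strengths
    plays the role of beta. Each step blends the external input with a
    convex combination of attractors weighted by strength-scaled RBF
    responses; the weights quickly concentrate on the nearest attractor."""
    s = e.copy()
    a = None
    for _ in range(steps):
        dist2 = ((s - attractors) ** 2).sum(axis=1)
        logits = np.log(strengths) - dist2
        a = np.exp(logits - logits.max())   # numerically stable softmax
        a /= a.sum()
        s = d * e + (1.0 - d) * (a @ attractors)
    return s, a
```

Because the state is a blend of the external input and the winning attractor, a mildly corrupted input is pulled back onto the stored pattern, which is the role the clean-up network plays in each pathway.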
This shift corresponds to d_i being lowered to zero. The weighted time average in the update rule for d_i is what allows for the smooth transition of the function to its new value. For certain constructions of the function d, Zemel and Mozer (in preparation) have proven convergence of the algorithm to an attractor. Apart from the speed-accuracy trade-off, these dynamics have another important consequence for the present model, particularly with respect to cascading pathways. If pathway A feeds into pathway B, such as V→S feeding into S→N, then the state unit activities of A act as the input to B. Because these activities change over time as the state approaches a well-formed state, the dynamics of pathway B can be quite complex as it is forced to deal with an unstable input. This property is important in explaining several phenomena associated with optic aphasia.

1.1 PATTERN GENERATION

Patterns were constructed for each of the five representational spaces: visual and auditory input, semantic, name and gesture responses. Each representational space was arbitrarily made to be 200 dimensional. We generated 200 binary-valued (-1, +1) patterns in each space, which were meant to correspond to known entities of that representational domain. For the visual, auditory, and semantic spaces, patterns were partitioned into 50 similarity clusters with 4 siblings per cluster. Patterns were chosen randomly subject to two constraints: patterns in different clusters had to be at least 80° apart, and siblings had to be between 25° and 50° apart. Because similarity of patterns in the name and gesture spaces was irrelevant to our modeling, we did not impose a similarity structure on these spaces. Instead, we generated patterns in these spaces at random subject to the constraint that every pattern had to be at least 60° from every other.
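The angular constraints above can be enforced by rejection sampling, since for ±1 vectors the cosine between two patterns is just their dot product over the dimensionality. A sketch (the cluster-level bookkeeping of the paper is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 200

def angle(a, b):
    """Angle in degrees between two +-1 pattern vectors of length DIM."""
    cos = np.dot(a, b) / DIM          # both vectors have norm sqrt(DIM)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def random_pattern():
    return rng.choice([-1.0, 1.0], size=DIM)

def make_sibling(prototype, lo=25.0, hi=50.0, max_tries=10000):
    """Sample a pattern whose angle to prototype lies in [lo, hi] degrees
    by flipping a random subset of its elements: flipping k of the DIM
    elements moves the cosine to 1 - 2k/DIM."""
    for _ in range(max_tries):
        k = int(rng.integers(1, DIM // 2))
        flip = rng.choice(DIM, size=k, replace=False)
        cand = prototype.copy()
        cand[flip] *= -1.0
        if lo <= angle(prototype, cand) <= hi:
            return cand
    raise RuntimeError("no sibling found within max_tries")
```

Two unrelated 200-dimensional ±1 patterns are close to 90° apart on average, so the 80° inter-cluster constraint is easy to satisfy, while siblings are deliberately much closer.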
After generating patterns in each of the representational spaces, we established arbitrary correspondences among the patterns such that visual pattern n, auditory pattern n, semantic pattern n, name pattern n, and gesture pattern n all represented the same concept. That is, the appropriate response in a visual-naming task to visual pattern n would be semantic pattern n and name pattern n.

1.2 TRAINING PROCEDURE

The feedforward networks in the four pathways (V→S, A→S, S→N, and S→G) were independently trained on all 200 associations using back propagation. Each of these networks contained a single hidden layer of 150 units, and all units in the network used the symmetric activation function to give activities in the range [-1, +1]. The amount of training was chosen such that performance on the training examples was not perfect; usually several elements in the output would be erroneous, i.e., have the wrong sign, and others would not be exactly correct, i.e., -1 or +1. This was done to embody the architectural assumption that the feedforward net does not have the capacity to map every input to exactly the right output, and hence, the clean-up process is required. Training was not required for the clean-up network. Due to the localist representation of attractors in the clean-up network, it was trivial to hand-wire each clean-up net with the 200 attractors for its domain, along with one rest-state attractor. All attractor strengths were initialized to the same value, β = 15, except the rest-state attractor, for which β = 5. The rest-state attractor required a lower strength so that even a weak external input would be sufficient to kick the attractor network out of the rest state.

1.3 SIMULATION METHODOLOGY

After each pathway had been trained, the model was damaged by "lesioning" or removing a fraction γ of the connections in the V→S and S→N mapping networks. The lesioned connections were chosen at random and an equal fraction was removed from the two pathways.
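The lesioning step can be sketched as follows (a generic implementation, not the authors' code): damage removes, i.e. zeroes, a random fraction γ of a mapping network's weights.

```python
import numpy as np

def lesion(W, gamma, rng):
    """Return a copy of weight matrix W with a random fraction gamma of
    its connections removed (set to zero), leaving the rest intact."""
    mask = rng.random(W.shape) < gamma
    return np.where(mask, 0.0, W)
```

Applying lesion with the same γ to the V→S and S→N mapping weights, while leaving the clean-up nets untouched, produces one simulated patient.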
The clean-up nets were not damaged. The architecture was damaged a total of 30 different times, creating 30 simulated patients who were tested on each of the four tasks and on all 200 input patterns for a task. The results we report come from averaging across simulated patients and input patterns. Responses were determined after the system had been given sufficient time to relax into a name or gesture attractor, which was taken to be the response. Each response was classified as one of the following mutually exclusive response types: correct, perseveration (response is the same as that produced on any of the three immediately preceding trials), visual error (the visual pattern corresponding to the incorrect response is a sibling of the visual pattern corresponding to the correct response), semantic error, visual+semantic error, or other error.

1.4 PRIMING MECHANISM

Priming (the increased availability of recently experienced stimuli) has been found across a wide variety of tasks in normal subjects. We included priming in our model as a strengthening (increasing the β_i parameter) of recently visited attractors (see Mathis & Mozer, 1996, for details, and Becker, Behrmann, & Moscovitch, 1993, for a related approach). In the damaged model, this mechanism often gave rise to perseverations.

2 RESULTS

We have examined the model's behavior as we varied the amount of damage, quantified by the parameter γ. However, we report on the performance of simulated patients with γ = 0.30. This intermediate amount of damage yielded no floor or ceiling effects, and also produced error rates for the visual-naming task in the range of 30-40%, roughly the median performance of patients in the literature.

M. C. Mozer, M. Sitton and M. Farah

TABLE 1. Error rate of the damaged model on various tasks.

    task                  error rate
    auditory gesturing    0.0%
    auditory naming       0.5%
    visual gesturing      8.7%
    visual naming         36.8%

Table 1 presents the error rates of the model on four tasks.
The pattern of errors shows a qualitative fit to human patient data. The model produced no errors on the auditory-gesturing task because the two component pathways (A→S and S→G) were undamaged. Relatively few errors were made on the auditory-naming and visual-gesturing tasks, each of which involved one damaged pathway, because the clean-up nets were able to compensate for the damage. However, the error rate for the visual-naming task was quite large, due to damage on both of its component pathways (V→S and S→N). The error rate for visual naming cannot be accounted for by summing the effects of the damage to the two component pathways, because the sum of the error rates for auditory naming and visual gesturing, each of which involves one of the two partially damaged pathways, is nearly four times smaller. Rather, the effects of damage on these pathways interact, and their interaction leads to superadditive impairments. When a visual pattern is presented to the model, it is mapped by the damaged V→S pathway into a corrupted semantic representation, which is then cleaned up. While the corruption is sufficiently minor that clean-up will eventually succeed, the clean-up process is slowed considerably by the corruption. During the period of time in which the semantic clean-up network is searching for the correct attractor, the corrupted semantic representation is nonetheless fed into the damaged S→N pathway. The combined effect of the (initially) noisy semantic representation serving as input to a damaged pathway leads to corruption of the naming representation past the point where it can be cleaned up properly. Interactions in the architecture are inevitable, and are not merely a consequence of some arbitrary assumption that is built into our model. To argue this point, we consider two modifications to the architecture that might eliminate the interaction in the damaged model.
First, if we allowed the V→S pathway to relax into a well-formed state before feeding its output into the S→N pathway, there would be little interaction: the effects of the damage would be additive. However, cortical pathways do not operate sequentially, with one stage finishing its computation and then turning on the next stage. Moreover, in the undamaged brain, such a processing strategy is maladaptive, as cascading partial results from one pathway to the next can speed processing without the introduction of errors (McClelland, 1979). Second, the interaction might be eliminated by making the S→N pathway continually responsive to changes in the output of the V→S pathway. Then, the rate of convergence of the V→S pathway would be irrelevant to determining the eventual output of the S→N pathway. However, because the output of the S→N pathway depends not only on its input but also on its internal state (the state of the clean-up net), one cannot design a pathway that is continually responsive to changes in the input and is also able to clean up noisy responses. Thus, the two modifications one might consider to eliminate the interactions in the damaged model seriously weaken the computational power of the undamaged model. We therefore conclude that the framework of our model makes it difficult to avoid an interaction of damage in two pathways. A subtle yet significant aspect of the model's performance is that the error rate on the visual-gesturing task was reliably higher than the error rate on the auditory-naming task, despite the fact that each task made use of one damaged pathway, and the pathways were damaged to the same degree. The difference in performance is due to the fact that the damaged pathway for the visual-gesturing task is the first in a cascade of two, while the damaged pathway for the auditory-naming task is the second.
The initially noisy response from a damaged pathway early in the system propagates to subsequent pathways, and although the damaged pathway will eventually produce the correct response, this is not sufficient to ensure that subsequent pathways will do so as well.

2.1 DISTRIBUTION OF ERRORS FOR VISUAL OBJECT NAMING

Figure 2 presents the model's error distribution for the visual-naming task. Consistent with the patient data (Farah, 1990), the model produces many more semantic and perseveration errors than by chance. The chance error proportions were computed by assuming that if the correct response was not made, then all other responses had an equal probability of being chosen. To understand the predominance of semantic errors, consider the effect of damage to the V→S pathway. For relatively small amounts of damage, the mapping produced will be close to the correct mapping. "Close" here means that the Euclidean distance in the semantic output space between the correct and perturbed mapping is small. Most of the time, minor perturbation of the mapping will be compensated for by the clean-up net. Occasionally, the perturbation will land the model in a different attractor basin, and a different response will be made. However, when the wrong attractor is selected, it will be one "close" to the correct attractor, i.e., it will likely be a sibling in the same pattern cluster as the correct attractor. In the case of the V→S pathway, the siblings of the correct attractor are by definition semantically related. A semantic error will be produced by the model when a sibling semantic attractor is chosen, and this pattern is then correctly mapped to a naming response in the S→N pathway. In addition to semantic errors, the other frequent error type in visual naming is perseverations.
The priming mechanism is responsible for the significant number of perseverations, although in the unlesioned model it facilitates processing of repeated stimuli without producing perseverations. Just as important as the presence of perseverative and semantic errors is the absence of visual errors, a feature of optic aphasia that contrasts sharply with visual agnosia (Farah, 1990). The same mechanisms explain why the rate of visual errors is close to its chance value and why visual+semantic errors are above chance. Visual-naming errors occur because there is an error either in the V→S or the S→N mapping, or both. Since the erroneous outputs of these pathways show a strong tendency to be similar to the correct output, and because semantic and name similarity does not imply visual similarity (the patterns were paired randomly), visual errors should only occur by chance. When a visual error does occur, though, there is a high probability that the error is also semantic, because of the strong bias that already exists toward producing semantic errors. This is the reason why more visual+semantic errors occur than by chance, and why the proportion of these errors is only slightly less than the proportion of visual errors.

FIGURE 3. Distribution of error types made by the model on the V→N task (black bars) relative to chance (grey bars).

Plaut and Shallice (1993) have proposed a connectionist model to account for the distribution of errors made by optic aphasics. Although their model was not designed to account for any of the other phenomena associated with the disorder, it has much in common with the model we are proposing. Unlike our model, however, theirs requires the assumption that visually similar objects also share semantic similarity.
This assumption might be questioned, especially because our model does not require it to produce the correct distribution of error responses.

3 DISCUSSION

In demonstrating superadditive effects of damage, we have offered an account of optic aphasia that explains the primary phenomenon: severe impairments in visual naming in conjunction with relatively spared performance on naming from verbal description or gesturing the appropriate use of a visually presented object. The model also explains the distribution of errors on visual naming. Although we did not have the space in this brief report to elaborate, the model accounts for several other distinct characteristics of optic aphasia, including the tendency of patients to "home in" on the correct name for a visually presented object when given sufficient time, and a positive correlation between the error rates on naming and gesturing responses to a visual object (Sitton, Mozer, & Farah, 1998). Further, the model makes several strong predictions which have yet to be tested experimentally. One such prediction, which was apparent in the results presented earlier, is that a higher error rate should be observed on visual gesturing than on auditory naming when the tasks are equated for difficulty, as our simulation does. More generally, we have strengthened the plausibility of Farah's (1990) hypothesis that partial damage to two processing pathways may result in close-to-normal performance on tasks involving one pathway or the other while yielding a severe performance deficit on tasks involving both damaged pathways. The superadditive-impairment theory thus may provide a more parsimonious account of various disorders that were previously believed to require more complex architectures or explanations.

4 ACKNOWLEDGMENTS

This research was supported by grant 97-18 from the McDonnell-Pew Program in Cognitive Neuroscience.

5 REFERENCES

Becker, S., Behrmann, M., & Moscovitch, M. (1993).
Word priming in attractor networks. Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society (pp. 231-236). Hillsdale, NJ: Erlbaum.
Farah, M. J. (1990). Visual agnosia. Cambridge, MA: MIT Press/Bradford Books.
Mathis, D. W., & Mozer, M. C. (1996). Conscious and unconscious perception: A computational theory. In G. Cottrell (Ed.), Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society (pp. 324-328). Hillsdale, NJ: Erlbaum.
McClelland, J. L. (1979). On the time relations of mental processes: An examination of systems of processes in cascade. Psychological Review, 86, 287-330.
Plaut, D., & Shallice, T. (1993). Perseverative and semantic influences on visual object naming errors in optic aphasia: A connectionist approach. Journal of Cognitive Neuroscience, 5, 89-112.
Schnider, A., Benson, D. F., & Scharre, D. W. (1994). Visual agnosia and optic aphasia: Are they anatomically distinct? Cortex, 30, 445-457.
Sitton, M., Mozer, M. C., & Farah, M. (1998). Diffuse lesions in a modular connectionist architecture: An account of optic aphasia. Manuscript submitted for publication.
S-Map: A network with a simple self-organization algorithm for generative topographic mappings

Kimmo Kiviluoto
Laboratory of Computer and Information Science
Helsinki University of Technology
P.O. Box 2200, FIN-02015 HUT, Espoo, Finland
Kimmo.Kiviluoto@hut.fi

Erkki Oja
Laboratory of Computer and Information Science
Helsinki University of Technology
P.O. Box 2200, FIN-02015 HUT, Espoo, Finland
Erkki.Oja@hut.fi

Abstract

The S-Map is a network with a simple learning algorithm that combines the self-organization capability of the Self-Organizing Map (SOM) and the probabilistic interpretability of the Generative Topographic Mapping (GTM). The simulations suggest that the S-Map algorithm has a stronger tendency to self-organize from a random initial configuration than the GTM. The S-Map algorithm can be further simplified to employ pure Hebbian learning, without changing the qualitative behaviour of the network.

1 Introduction

The self-organizing map (SOM; for a review, see [1]) forms a topographic mapping from the data space onto a (usually two-dimensional) output space. The SOM has been successfully used in a large number of applications [2]; nevertheless, there are some open theoretical questions, as discussed in [1, 3]. Most of these questions arise because of the following two facts: the SOM is not a generative model, i.e. it does not generate a density in the data space, and it does not have a well-defined objective function that the training process would strictly minimize. Bishop et al. [3] introduced the generative topographic mapping (GTM) as a solution to these problems. However, it seems that the GTM requires a careful initialization to self-organize. Although this can be done in many practical applications, from a theoretical point of view the GTM does not yet offer a fully satisfactory model for natural or artificial self-organizing systems.
In this paper, we first briefly review the SOM and GTM algorithms (section 2); then we introduce the S-Map, which may be regarded as a crossbreed of SOM and GTM (section 3); finally, we present some simulation results with the three algorithms (section 4), showing that the S-Map manages to combine the computational simplicity and the ability to self-organize of the SOM with the probabilistic framework of the GTM.

2 SOM and GTM

2.1 The SOM algorithm

The self-organizing map associates each data vector $\xi^t$ with the map unit that has its weight vector closest to the data vector. The activations $\eta_i^t$ of the map units are given by
$$\eta_i^t = \begin{cases} 1, & \text{when } \|\mu_i - \xi^t\| < \|\mu_j - \xi^t\| \;\; \forall j \neq i \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$
where $\mu_i$ is the weight vector of the $i$th map unit $\zeta_i$, $i = 1, \ldots, K$. Using these activations, the SOM weight vector update rule can be written as
$$\mu_j := \mu_j + \delta^t \sum_{i=1}^{K} \eta_i^t \, h(\zeta_i, \zeta_j; \beta^t)(\xi^t - \mu_j) \qquad (2)$$
Here the parameter $\delta^t$ is a learning rate that decreases with time. The neighborhood function $h(\zeta_i, \zeta_j; \beta^t)$ is a decreasing function of the distance between map units $\zeta_i$ and $\zeta_j$; $\beta^t$ is a width parameter that makes the neighborhood function get narrower as learning proceeds. One popular choice for the neighborhood function is a Gaussian with inverse variance $\beta^t$.

2.2 The GTM algorithm

In the GTM algorithm, the map is considered as a latent space, from which a nonlinear mapping to the data space is first defined. Specifically, a point $\zeta$ in the latent space is mapped to the point $v$ in the data space according to the formula
$$v(\zeta; M) = M\phi(\zeta) = \sum_{j=1}^{L} \phi_j(\zeta)\,\mu_j \qquad (3)$$
where $\phi$ is a vector consisting of $L$ Gaussian basis functions, and $M$ is a $D \times L$ matrix that has the vectors $\mu_j$ as its columns, $D$ being the dimension of the data space. The probability density $p(\zeta)$ in the latent space generates a density on the manifold that lies in the data space and is defined by (3).
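The SOM rules (1)-(2) above can be sketched in a few lines, using the Gaussian neighborhood mentioned in the text (the array layout and function name are our own assumptions):

```python
import numpy as np

def som_step(W, Z, xi, delta, beta):
    """One online SOM update following rules (1)-(2).

    W: (K, D) weight vectors mu_i; Z: (K, d) map-unit coordinates
    zeta_i; xi: (D,) data vector; delta: learning rate; beta:
    inverse variance of the Gaussian neighborhood.
    """
    winner = np.argmin(np.linalg.norm(W - xi, axis=1))  # rule (1)
    d2 = np.sum((Z - Z[winner]) ** 2, axis=1)           # map-space distances
    h = np.exp(-0.5 * beta * d2)                        # Gaussian neighborhood
    return W + delta * h[:, None] * (xi - W)            # rule (2)
```

Every weight vector moves toward the data vector, by an amount that falls off with map-space distance from the winner.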
If the latent space is of lower dimension than the data space, the manifold would be singular, so a Gaussian noise model is added. A single point in the latent space thus generates the following density in the data space:
$$p(\xi|\zeta; M, \beta) = \left(\frac{\beta}{2\pi}\right)^{D/2} \exp\left[-\frac{\beta}{2}\,\|v(\zeta; M) - \xi\|^2\right] \qquad (4)$$
where $\beta$ is the inverse of the variance of the noise. The key point of the GTM is to approximate the density in the data space by assuming the latent space prior $p(\zeta)$ to consist of equiprobable delta functions that form a regular lattice in the latent space. The centers $\zeta_i$ of the delta functions are called the latent vectors of the GTM, and they are the GTM equivalent to the SOM map units. The approximation of the density generated in the data space is thus given by
$$p(\xi|M, \beta) = \frac{1}{K}\sum_{i=1}^{K} p(\xi|\zeta_i; M, \beta) \qquad (5)$$
The parameters of the GTM are determined by minimizing the negative log-likelihood error
$$\ell(M, \beta) = -\sum_{t=1}^{T} \ln\left[\frac{1}{K}\sum_{i=1}^{K} p(\xi^t|\zeta_i; M, \beta)\right] \qquad (6)$$
over the set of sample vectors $\{\xi^t\}$. The batch version of the GTM uses the EM algorithm [4]; for details, see [3]. One may also resort to an on-line gradient descent procedure that yields the GTM update steps
$$\mu_j^{t+1} := \mu_j^t + \delta^t \beta^t \sum_{i=1}^{K} \eta_i^t(M^t, \beta^t)\,\phi_j(\zeta_i)\,[\xi^t - v(\zeta_i; M^t)] \qquad (7)$$
$$\beta^{t+1} := \beta^t + \delta^t \left[\frac{D}{2\beta^t} - \frac{1}{2}\sum_{i=1}^{K} \eta_i^t(M^t, \beta^t)\,\|\xi^t - v(\zeta_i; M^t)\|^2\right] \qquad (8)$$
where $\eta_i^t(M, \beta)$ is the GTM counterpart of the SOM unit activation, the posterior probability $p(\zeta_i|\xi^t; M, \beta)$ of the latent vector $\zeta_i$ given the data vector $\xi^t$:
$$\eta_i^t(M, \beta) = p(\zeta_i|\xi^t; M, \beta) = \frac{p(\xi^t|\zeta_i; M, \beta)}{\sum_{i'=1}^{K} p(\xi^t|\zeta_{i'}; M, \beta)} = \frac{\exp\left[-\frac{\beta}{2}\|v(\zeta_i; M) - \xi^t\|^2\right]}{\sum_{i'=1}^{K} \exp\left[-\frac{\beta}{2}\|v(\zeta_{i'}; M) - \xi^t\|^2\right]} \qquad (9)$$

2.3 Connections between SOM and GTM

Let us consider a GTM that has an equal number of latent vectors and basis functions¹, each latent vector $\zeta_i$ being the center of one Gaussian basis function $\phi_i(\zeta)$. Latent vector locations may be viewed as units of the SOM, and consequently the basis functions may be interpreted as connection strengths between the units. Let us use the shorthand notation $\phi_j^i \equiv \phi_j(\zeta_i)$.
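The softmax responsibilities of equation (9) can be sketched as follows (a numerically stable log-sum-exp version; the names are ours, and the images $v(\zeta_i; M)$ are taken as precomputed):

```python
import numpy as np

def gtm_responsibilities(V, xi, beta):
    """Posterior p(zeta_i | xi) of equation (9).

    V: (K, D) images v(zeta_i; M) of the latent vectors in data
    space; xi: (D,) data vector; beta: inverse noise variance.
    Subtracting the max log-probability avoids overflow for
    large beta without changing the normalized result.
    """
    log_p = -0.5 * beta * np.sum((V - xi) ** 2, axis=1)
    log_p -= log_p.max()
    p = np.exp(log_p)
    return p / p.sum()
```

As beta grows, the responsibilities concentrate on the latent vector whose image is closest to the data vector, recovering the winner-take-all behavior discussed below.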
Note that $\phi_j^i = \phi_i^j$, and assume that the basis functions are normalized so that $\sum_{i=1}^{K} \phi_j^i = \sum_{j=1}^{K} \phi_j^i = 1$. At the zero-noise limit, or when $\beta \to \infty$, the softmax activations of the GTM given in (9) approach the winner-take-all function (1) of the SOM. The winner unit $\zeta_{c(t)}$ for the data vector $\xi^t$ is the map unit that has its image closest to the data vector, so that the index $c(t)$ is given by
$$c(t) = \arg\min_i \left\|v(\zeta_i) - \xi^t\right\| = \arg\min_i \left\|\left(\sum_{j=1}^{K} \phi_j^i \mu_j\right) - \xi^t\right\| \qquad (10)$$
The GTM weight update step (7) then becomes
$$\mu_j^{t+1} := \mu_j^t + \delta^t \phi_j^{c(t)}\,[\xi^t - v(\zeta_{c(t)}; M^t)] \qquad (11)$$
This resembles the variant of the SOM in which the winner is searched with the rule (10) and the weights are updated as
$$\mu_j^{t+1} := \mu_j^t + \delta^t \phi_j^{c(t)}\,(\xi^t - \mu_j^t) \qquad (12)$$
Unlike the original SOM rules (1) and (2), the modified SOM with rules (10) and (12) does minimize a well-defined objective function: the SOM distortion measure [5, 6, 7, 1]. However, there is a difference between the GTM and SOM learning rules (11) and (12). With the SOM, each individual weight vector moves towards the data vector, but with the GTM, the image of the winner latent vector $v(\zeta_{c(t)}; M)$ moves towards the data vector, and all weight vectors $\mu_j$ move in the same direction. For nonzero noise, when $0 < \beta < \infty$, there is more difference between GTM and SOM: with GTM, not only the winner unit but activations from other units as well contribute to the weight update.

3 S-Map

Combining the softmax activations of the GTM and the learning rule of the SOM, we arrive at a new algorithm: the S-Map.

3.1 The S-Map algorithm

The S-Map resembles a GTM with an equal number of latent vectors and basis functions. The position of the $i$th unit on the map is given by the latent vector $\zeta_i$; the connection strength of the unit to another unit $j$ is $\phi_j^i$, and a weight vector $\mu_i$ is associated with the unit.

¹Note that this choice serves the purpose of illustration only; to use the GTM properly, one should choose many more latent vectors than basis functions.
The activation of the unit is obtained using rule (9). The S-Map weights learn proportionally to the activation of the unit that the weight is associated with, and the activations of the neighboring units:
$$\mu_j^{t+1} := \mu_j^t + \delta^t \left(\sum_{i=1}^{K} \phi_j^i \eta_i^t\right)(\xi^t - \mu_j^t) \qquad (13)$$
which can be further simplified to a fully Hebbian rule, updating each weight proportionally to the activation of the corresponding unit only, so that
$$\mu_j^{t+1} := \mu_j^t + \delta^t \eta_j^t\,(\xi^t - \mu_j^t) \qquad (14)$$
The value of the parameter $\beta$ may be adjusted in the following way: start with a small value, slowly increase it so that the map unfolds and spreads out, and then keep increasing the value as long as the error (6) decreases. The parameter adjustment scheme could also be connected with the topographic error of the mapping, as proposed in [9] for the SOM. Assuming normalized input and weight vectors, the "dot-product metric" forms of the learning rules (13) and (14) may be written as
$$\mu_j^{t+1} := \mu_j^t + \delta^t \left(\sum_{i=1}^{K} \phi_j^i \eta_i^t\right)(I - \mu_j^t \mu_j^{tT})\,\xi^t \qquad (15)$$
and
$$\mu_j^{t+1} := \mu_j^t + \delta^t \eta_j^t\,(I - \mu_j^t \mu_j^{tT})\,\xi^t \qquad (16)$$
respectively; the matrix in the second parenthesis keeps the weight vectors normalized to unit length, assuming a small value for the learning rate parameter $\delta^t$ [8]. The dot-product metric form of a unit activity is
$$\eta_i^t = \frac{\exp\left[\beta \left(\sum_{j=1}^{K} \phi_j^i \mu_j\right)^T \xi^t\right]}{\sum_{i'=1}^{K} \exp\left[\beta \left(\sum_{j=1}^{K} \phi_j^{i'} \mu_j\right)^T \xi^t\right]} \qquad (17)$$
which approximates the posterior probability $p(\zeta_i|\xi^t; M, \beta)$ that the data vector was generated by that specific unit. This is based on the observation that if the data vectors $\{\xi^t\}$ are normalized to unit length, the density generated in the data space (the unit sphere in $\mathbb{R}^D$) becomes
$$p(\xi|\zeta_i; M, \beta) = (\text{normalizing constant}) \times \exp\left[\beta \left(\sum_{j=1}^{K} \phi_j^i \mu_j\right)^T \xi\right] \qquad (18)$$
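One online step of the simplified dot-product rule (16), with activations computed as in (17), can be sketched as follows (array conventions and the function name are our own; the connection strengths phi are passed in as a precomputed matrix):

```python
import numpy as np

def smap_step(W, Phi, xi, delta, beta):
    """One S-Map update in dot-product metric, rules (16)-(17).

    W: (K, D) unit-norm weight vectors mu_i; Phi: (K, K) connection
    strengths phi_j^i with normalized rows; xi: (D,) unit-norm data
    vector. The (I - mu mu^T) factor keeps the weights approximately
    unit length when delta is small.
    """
    logits = beta * (Phi @ W) @ xi           # beta * (sum_j phi_j^i mu_j)^T xi
    logits -= logits.max()                   # numerical stability
    eta = np.exp(logits)
    eta /= eta.sum()                         # softmax activations, eq. (17)
    proj = xi - W * (W @ xi)[:, None]        # (I - mu_j mu_j^T) xi, per unit
    return W + delta * eta[:, None] * proj   # Hebbian rule, eq. (16)
```

Because the projection is orthogonal to each weight vector, a small step changes the weight norm only to second order in delta, which is the normalization argument cited from [8].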
3.2 S-Map algorithm minimizes the GTM error function in dot-product metric

The GTM error function is the negative log-likelihood, which is given by (6) and is reproduced here:
$$\ell(M, \beta) = -\sum_{t=1}^{T} \ln\left[\frac{1}{K}\sum_{i=1}^{K} p(\xi^t|\zeta_i; M, \beta)\right] \qquad (19)$$
When the weights are updated using a batch version of (15), accumulating the updates for one epoch, the expected value of the error [4] for the unit $\zeta_i$ is
$$E(\ell_i^{\text{new}}) = -\sum_{t=1}^{T} \underbrace{p^{\text{old}}(\zeta_i|\xi^t; M, \beta)}_{\eta_i^{\text{old},t}} \ln\big[\underbrace{p^{\text{new}}(\zeta_i)}_{=1/K}\, p^{\text{new}}(\xi^t|\zeta_i; M, \beta)\big] = -\sum_{t=1}^{T} \eta_i^{\text{old},t}\,\beta\left(\sum_{j=1}^{K} \phi_j^i \mu_j^{\text{new}}\right)^{\!T} \xi^t + \text{terms not involving the weight vectors} \qquad (20)$$
The change of the error for the whole map after one epoch is thus
$$E(\ell^{\text{new}} - \ell^{\text{old}}) = -\sum_{i=1}^{K}\sum_{t=1}^{T}\sum_{j=1}^{K} \eta_i^{\text{old},t}\,\beta\,\phi_j^i\,(\mu_j^{\text{new}} - \mu_j^{\text{old}})^T \xi^t = -\beta\delta \sum_{j=1}^{K} \underbrace{\left(\sum_{t=1}^{T}\sum_{i=1}^{K} \eta_i^{\text{old},t}\,\phi_j^i\,\xi^t\right)^{\!T}}_{u_j^T} (I - \mu_j^{\text{old}}\mu_j^{\text{old}\,T}) \underbrace{\left(\sum_{t'=1}^{T}\sum_{i'=1}^{K} \eta_{i'}^{\text{old},t'}\,\phi_j^{i'}\,\xi^{t'}\right)}_{u_j} = -\beta\delta \sum_{j=1}^{K} \left[u_j^T u_j - (u_j^T \mu_j^{\text{old}})^2\right] \leq 0 \qquad (21)$$
with equality only when the weights are already at the error minimum.

4 Experimental results

The self-organization ability of the SOM, the GTM, and the S-Map was tested on an artificial data set: 500 points from a uniform random distribution in the unit square. The initial weight vectors for all models were set to random values, and the final configuration of the map was plotted on top of the data (figure 1). For all the algorithms, the batch version was used. The SOM was trained as recommended in [1] in two phases, the first starting with a wide neighborhood function, the second with a narrow neighborhood. The GTM was trained using the Matlab implementation by Svensen, following the recommendations given in [10]. The S-Map was trained in two ways: using the "full" rule (13), and the simplified rule (14). In both cases, the value of the parameter $\beta$ was slowly increased every epoch; by monitoring the error (6) of the S-Map (see the error plot in the figure), a suitable value for $\beta$ can be found.
In the GTM simulations, we experimented with many different choices for the basis function width and number, both with normalized and unnormalized basis functions. It turned out that the GTM is somewhat sensitive to these choices: it had difficulty unfolding after a random initialization, unless the basis functions were set so wide (with respect to the weight matrix prior) that the map was well-organized already in its initial configuration. On the other hand, using very wide basis functions with the GTM resulted in a map that was too rigid to adapt well to the data. We also tried to update the parameter $\beta$ according to an annealing schedule, as with the S-Map, but this did not seem to solve the problem.

Figure 1: Random initialization (top left), SOM (top middle), GTM (top right), "full" S-Map (bottom left), simplified S-Map (bottom middle). On the bottom right, the S-Map error as a function of epochs is displayed; the parameter $\beta$ was slightly increased every epoch, which causes the error to increase in the early (unfolding) phase of the learning, as the weight update only minimizes the error for a given $\beta$.

5 Conclusions

The S-Map and SOM seem to have a stronger tendency to self-organize from random initialization than the GTM. In data analysis applications, when the GTM can be properly initialized, SOM, S-Map, and GTM yield comparable results; those obtained using the latter two algorithms are also straightforward to interpret in probabilistic terms. In Euclidean metric, the GTM has the additional advantage of guaranteed convergence to some error minimum; the convergence of the S-Map in Euclidean metric is still an open question.
On the other hand, the batch GTM is computationally clearly heavier per epoch than the S-Map, while the S-Map is somewhat heavier than the SOM. The SOM has an impressive record of proven applications in a variety of different tasks, and much more experimenting is needed for any alternative method to reach the same level of practicality. The SOM is also the basic bottom-up procedure of self-organization in the sense that it starts from a minimum of functional principles realizable in parallel neural networks. This makes it hard to analyze, however. A probabilistic approach like the GTM stems from the opposite point of view by emphasizing the statistical model, but as a trade-off, the resulting algorithm may not share all the desirable properties of the SOM. Our new approach, the S-Map, seems to have succeeded in inheriting the strong self-organization capability of the SOM, while offering a sound probabilistic interpretation like the GTM.

References

[1] T. Kohonen, Self-Organizing Maps. Springer Series in Information Sciences 30, Berlin Heidelberg New York: Springer, 1995.
[2] T. Kohonen, E. Oja, O. Simula, A. Visa, and J. Kangas, "Engineering applications of the self-organizing map," Proceedings of the IEEE, vol. 84, pp. 1358-1384, Oct. 1996.
[3] C. M. Bishop, M. Svensen, and C. K. I. Williams, "GTM: A principled alternative to the self-organizing map," in Advances in Neural Information Processing Systems (to appear) (M. C. Mozer, M. I. Jordan, and T. Petsche, eds.), vol. 9, MIT Press, 1997.
[4] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, vol. B 39, no. 1, pp. 1-38, 1977.
[5] S. P. Luttrell, "Code vector density in topographic mappings," Memorandum 4669, Defence Research Agency, Malvern, UK, 1992.
[6] T. M. Heskes and B. Kappen, "Error potentials for self-organization," in Proceedings of the International Conference on Neural Networks (ICNN'93), vol.
3, (Piscataway, New Jersey, USA), pp. 1219-1223, IEEE Neural Networks Council, Apr. 1993.
[7] S. P. Luttrell, "A Bayesian analysis of self-organising maps," Neural Computation, vol. 6, pp. 767-794, 1994.
[8] E. Oja, "A simplified neuron model as a principal component analyzer," Journal of Mathematical Biology, vol. 15, pp. 267-273, 1982.
[9] K. Kiviluoto, "Topology preservation in self-organizing maps," in Proceedings of the International Conference on Neural Networks (ICNN'96), vol. 1, (Piscataway, New Jersey, USA), pp. 294-299, IEEE Neural Networks Council, June 1996.
[10] M. Svensen, The GTM toolbox - user's guide. Neural Computing Research Group / Aston University, Birmingham, UK, 1.0 ed., Oct. 1996. Available at URL http://neural-server.aston.ac.uk/GTM/MATLAB_Impl.html.
Bayesian model of surface perception

William T. Freeman
MERL, Mitsubishi Electric Res. Lab.
201 Broadway
Cambridge, MA 02139
freeman@merl.com

Paul A. Viola
Artificial Intelligence Lab
Massachusetts Institute of Technology
Cambridge, MA 02139
viola@ai.mit.edu

Abstract

Image intensity variations can result from several different object surface effects, including shading from 3-dimensional relief of the object, or paint on the surface itself. An essential problem in vision, which people solve naturally, is to attribute the proper physical cause, e.g. surface relief or paint, to an observed image. We addressed this problem with an approach combining psychophysical and Bayesian computational methods. We assessed human performance on a set of test images, and found that people made fairly consistent judgements of surface properties. Our computational model assigned simple prior probabilities to different relief or paint explanations for an image, and solved for the most probable interpretation in a Bayesian framework. The ratings of the test images by our algorithm compared surprisingly well with the mean ratings of our subjects.

1 Introduction

When people study a picture, they can judge whether it depicts a shaded, 3-dimensional surface, or simply a flat surface with markings or paint on it. The two images shown in Figure 1 illustrate this distinction [1]. To many observers Figure 1a appears to be a raised plateau lit from the left. Figure 1b is simply a re-arrangement of the local features of 1a, yet it does not give an impression of shape or depth. There is no simple correct answer for this problem; either of these images could be explained as marks on paper, or as illuminated shapes. Nevertheless people tend to make particular judgements of shape or reflectance. We seek an algorithm to arrive at those same judgements. There are many reasons to study this problem.
Disentangling shape and reflectance is a prototypical underdetermined vision problem, which biological vision systems routinely solve. Insights into this problem may apply to other vision problems as well. A machine that could interpret images as people do would have many applications, such as the interactive editing and manipulation of images. Finally, there is a large body of computer vision work on "shape from shading": inferring the 3-dimensional shape of a shaded object [4]. Virtually every algorithm assumes that all image intensity changes are caused by shading; these algorithms fail for any image with reflectance changes. To bring this body of work into practical use, we need to be able to disambiguate shading from reflectance changes. There has been very little work on this problem. Sinha and Adelson [9] examined a world of painted polyhedra, and used consistency constraints to identify regions of shape and reflectance changes. Their consistency constraints involved specific assumptions which need not always hold and may be better described in a probabilistic framework. In addition, we seek a solution for more general, greyscale images. Our approach combines psychophysics and computational modeling. First we will review the physics of image formation and describe the under-constrained surface perception problem. We then describe an experiment to measure the interpretations of surface shading and reflectance among different individuals. We will see that the judgements are fairly consistent across individuals and can be averaged to define "ground truth" for a set of test images. Our approach to modeling the human judgements is Bayesian. We begin by formulating prior probabilities for shapes and reflectance images, in the spirit of recent work on the statistical modeling of images [5, 8, 11].
Using these priors, the algorithm then determines whether an image is more likely to have been generated by a 3D shape or as a pattern of reflectance. We compare our algorithm's performance to that of the human subjects. Figure 1: Images (a) and (b), designed by Adelson [1], are nearly the same everywhere, yet give different percepts of shading and reflectance. (a) looks like a plateau, lit from the left; (b) looks like marks on paper. Illustrating the under-constrained nature of perception, both images can be explained either by reflectance changes on paper (they are), or, under appropriate lighting conditions, by the shapes (c) and (d), respectively (vertical scale exaggerated). 2 Physics of Imaging One simple model for the generation of an image from a three-dimensional shape is the Lambertian model: I(x, y) = R(x, y) (î · n(x, y)), (1) where I(x, y) is an image indexed by pixel location, n(x, y) is the surface normal at every point on the surface, conveniently indexed by the pixel to which that surface patch projects, î is a unit vector that points in the direction of the light source, and R(x, y) is the reflectance at every point on the surface.¹ A patch of surface is brighter if the light shines onto it directly and darker if the light shines on it obliquely. A patch can also be dark simply because it is painted with a darker pigment. The shape of the object is more easily described as a depth map z(x, y), from which n(x, y) is computed. The classical "shape from shading" task attempts to compute z from I given knowledge of î and assuming R is everywhere constant. Notice that the problem is ill-posed; while I(x, y) does constrain n(x, y), it is not sufficient to uniquely determine the surface normal at each pixel. Some assumption about global properties of z is necessary to condition the problem. If R is allowed to vary, the problem becomes even more under-constrained.
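The Lambertian model above is easy to sketch in code. The following is an illustrative NumPy rendering (our own sketch, not from the paper; the function and variable names are invented): it computes I(x, y) = R(x, y)(î · n(x, y)) over a pixel grid and confirms that a flat surface lit head-on reproduces its reflectance image exactly, which is the "all reflectance" case discussed in the next section.

```python
import numpy as np

def lambertian_render(normals, reflectance, light):
    """Render I(x, y) = R(x, y) * (light . n(x, y)).

    normals:     (H, W, 3) array of unit surface normals n(x, y)
    reflectance: (H, W) array R(x, y)
    light:       length-3 unit vector toward the light source
    """
    shading = np.tensordot(normals, light, axes=([2], [0]))  # light . n per pixel
    return reflectance * np.clip(shading, 0.0, None)         # clip self-shadowed patches

# A flat surface (all normals pointing at the light) reproduces the
# reflectance image exactly: the "all reflectance" interpretation.
H, W = 4, 4
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0
R = np.random.rand(H, W)
I = lambertian_render(normals, R, np.array([0.0, 0.0, 1.0]))
assert np.allclose(I, R)
```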
For example, R = I and n(x, y) = ĵ is a valid solution for every image. This is the "all reflectance" hypothesis, where the inferred surface is flat and all of the image variation is due to reflectance. Interestingly, there is also an "all shape" solution for every image, where R = 1 and I(x, y) = î · n(x, y) (see Figure 1 for examples of such shapes). Since the relationship between z and I is non-linear, "shape from shading" cannot be solved directly and requires a time-consuming search procedure. For our computational experiments we seek a rendering model for shapes which simplifies the mathematics, yet maintains the essential ambiguities of the problem. We use the approximations of linear shading [6]. This involves two sets of approximations. First, that the rendered image I(x, y) is some function, G(∂z/∂x, ∂z/∂y), only of the surface slope at any point: I(x, y) = G(∂z/∂x, ∂z/∂y). (2) The second approximation is that the rendering function G itself is a linear function of the surface slopes: G(∂z/∂x, ∂z/∂y) ≈ k1 + k2 ∂z/∂x + k3 ∂z/∂y. (3) Under linear shading, finding a shape which explains a given image is a trivial integration along the direction of the assumed light source. Despite this simplicity, images rendered under linear shading appear fairly realistically shaded [6]. 3 Psychophysics We used a survey to assess subjects' image judgements. We made a set of 60 test images, using the Canvas and Photoshop programs to generate and manipulate the images. Our goal was to create a set of images with varying degrees of shadedness. We sought to assess to what extent each subject saw each image as created by shading changes or reflectance changes. (¹Note: we assume orthographic projection, a distant light source, and no shadowing.) Each of our 18 naive observers was given a 4-page survey showing the images in a different random order.
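Linear shading makes the forward model and its inversion almost trivial. As a hedged sketch (our own code, not the paper's implementation; the constants k1, k2, k3 and the discrete one-sided derivatives are illustrative choices), here is the render of Eq. (3) and its inversion by a running sum along the light direction:

```python
import numpy as np

def linear_shade(z, k1, k2, k3):
    """Render via Eq. (3): I ~ k1 + k2 dz/dx + k3 dz/dy."""
    dzdx = np.diff(z, axis=1, prepend=z[:, :1])
    dzdy = np.diff(z, axis=0, prepend=z[:1, :])
    return k1 + k2 * dzdx + k3 * dzdy

def shape_from_linear_shading(I, k1, k2):
    """Invert the render for light along x (k3 = 0): a running sum of
    (I - k1) / k2 along x recovers z up to an integration constant."""
    return np.cumsum((I - k1) / k2, axis=1)

z = np.outer(np.ones(5), np.arange(6.0))     # a ramp climbing in x
I = linear_shade(z, k1=0.5, k2=0.3, k3=0.0)
z_hat = shape_from_linear_shading(I, 0.5, 0.3)
# recovered shape matches the original up to a per-row constant
assert np.allclose(z_hat - z_hat[:, :1], z - z[:, :1])
```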
To explain the problem of image interpretation quickly to naive subjects, we used a concrete story (Adelson's Theater Set Shop analogy [2] is a related didactic example). The survey instructions were as follows: Pretend that each of the following pictures is a photograph of work made by either a painter or a sculptor. The painter could use paint, markers, air brushes, computer, etc., to make any kind of mark on a flat canvas. The paint had no 3-dimensionality; everything was perfectly flat. The sculptor could make 3-dimensional objects, but could make no markings on them. She could mold, sculpt, and scrape her sculptures, but could not draw or paint. All the objects were made out of a uniform plaster material and were made visible by lighting and shading effects. The subjects used a 5-point rating scale to indicate whether each image was made by the painter (P) or sculptor (S): S, S?, ?, P?, P. 3.1 Survey Results We examined a non-parametric comparison of the image ratings, the rank order correlation (the linear correlation of image rankings in order of shapeness by each observer) [7]. Over all possible pairings of subjects, the rank order correlations ranged from 0.3 to 0.9, averaging 0.65. All of these correlations were statistically significant, most at the 0.0001 level. We concluded that for our set of test images, people do give a very similar set of interpretations of shading and reflectance. We assigned a numerical value to each of the 5 survey responses (S=2; S?=1; ?=0; P?=-1; P=-2) and found the average numerical "shadedness" score for each image. Figure 2 shows a histogram of the survey responses for each image, ordered in decreasing order of shadedness. The two images of Figure 1 had average scores of 1.7 and -1.6, respectively, confirming the impressions of shading and reflectance. There was good consensus for the rankings of the most paint-like and most sculpture-like images; the middle images showed a higher score variance.
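The rank order correlation used above is straightforward to compute. A minimal sketch (our own code; tie handling is omitted for brevity, and the SCORE mapping follows the S=2 ... P=-2 assignment in the text):

```python
# Map the 5-point survey responses to numbers (S=2, S?=1, ?=0, P?=-1, P=-2)
# and compute a rank-order correlation between two raters.
SCORE = {"S": 2, "S?": 1, "?": 0, "P?": -1, "P": -2}

def ranks(values):
    """Rank positions of each value (no tie handling in this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def rank_order_correlation(a, b):
    """Pearson correlation of the two rank vectors (Spearman-style)."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

rater1 = [SCORE[s] for s in ["S", "S?", "?", "P?", "P"]]
rater2 = [SCORE[s] for s in ["S", "?", "S?", "P", "P?"]]
assert rank_order_correlation(rater1, rater1) == 1.0   # identical rankings
assert rank_order_correlation(rater1, rater2) < 1.0    # partially shuffled
```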
The rankings by each individual showed a strong correlation with the rankings by the average of the remaining subjects, ranging from 0.6 to 0.9. Figure 4 shows the histogram of those correlations. The ordering of the images by the average of the subjects' responses provides a "ground truth" with which to compare the rankings of our algorithm. Figure 3, left, shows a randomly chosen subset of the sorted images, in decreasing order of assessed sculptureness. 4 Algorithm We will assume that people are choosing the most probable interpretation of the observed image. We will adopt a Bayesian approach and calculate the most probable interpretation for each image under a particular set of prior probabilities for images and shapes. To parallel the choices we gave our subjects, we will choose between interpretations that account for the image entirely by shape changes, or entirely by reflectance changes. Thus, our images are either a rendered shape, multiplied by a uniform reflectance image, or a flat shape, multiplied by some non-uniform reflectance image. Figure 2: Histogram of survey responses. Intensity shows the number of responses of each score (vertical scale) for each image (horizontal, sorted in increasing order of shapeness). To find the most probable interpretation, given an image, we need to assign prior probabilities to shape and reflectance configurations. There has been recent interest in characterizing the probabilities of images by the expected distributions of subband coefficient values [5, 8, 11]. The statistical distribution of bandpass linear filter outputs, for natural images, is highly kurtotic; the output is usually small, but in rare cases it takes on very large values. This non-gaussian behavior is not a property of the filter operation, because filtered "random" images appear gaussian.
Rather, it is a property of the structure of natural images. An exponential distribution, P(c) ∝ e^(−|c|), where c is the filter coefficient value, is a reasonable model. These priors have been used in texture synthesis, noise removal, and receptive field modeling. Here, we apply them to the task of scene interpretation. We explored using a very simple image prior: P(I) ∝ exp(−Σ_{x,y} √((∂I(x,y)/∂x)² + (∂I(x,y)/∂y)²)). (4) Here we treat the image derivative as an image subband corresponding to a very simple filter. We applied this image prior to both reflectance images, I(x, y), as well as range images, z(x, y). For any given picture, we seek to decide whether a shape or a reflectance explanation is more probable. The proper Bayesian approach would be to integrate the prior probabilities of all shapes which could explain the image in order to arrive at the total probability of a shape explanation. (The reflectance explanation, R, is unique: the image itself.) We employed a computationally simpler procedure, a very rough approximation to the proper calculation: we evaluated the prior probability, P(S), of the single most probable shape explanation, S, for the image. Using the ratio test of a binary hypothesis, we formed a shapeness index, J, by the ratio of the probabilities for the shape and reflectance explanations, J = P(S)/P(R). The index J was used to rank the test images by shapeness. We need to find the most probable shape explanation. The overall log likelihood of a shape, z, given an image is, using the linear shading approximations of Eq. (3): log P(z, k1, k2, k3 | I) = log P(I | z, k1, k2, k3) + log P(z) + c = −Σ_{x,y} (I − (k1 + k2 ∂z/∂x + k3 ∂z/∂y))² − Σ_{x,y} √((∂z/∂x)² + (∂z/∂y)²) + c, (5) where c is a normalization constant. We use a multi-scale gradient descent algorithm that simultaneously determines the optimal shape and illumination parameters for an image (similar to that used by [10]).
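A rough sketch of the shapeness comparison (our own simplification, not the paper's multi-scale optimizer): evaluate the gradient prior of Eq. (4) for the reflectance explanation R = I and for a candidate shape explanation, then form the log of the shapeness index J = P(S)/P(R). Here `z_best` is an invented stand-in for the shape that gradient descent would find.

```python
import numpy as np

def neg_log_gradient_prior(im):
    """-log P(im) up to a constant, under the prior of Eq. (4):
    P(im) proportional to exp(-sum_{x,y} |grad im(x, y)|)."""
    gx = np.diff(im, axis=1, prepend=im[:, :1])
    gy = np.diff(im, axis=0, prepend=im[:1, :])
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2)))

def log_shapeness_index(z_best, image):
    """log J = log P(S) - log P(R), with R = image (the all-reflectance
    explanation) and z_best the most probable shape explanation."""
    return neg_log_gradient_prior(image) - neg_log_gradient_prior(z_best)

# A gentle ramp explaining a high-contrast striped image is penalized
# far less than the stripes themselves, so the index favors "shape".
image = np.tile([0.0, 1.0], (8, 8))                      # (8, 16) stripes
z_best = np.linspace(0, 1, 16)[None, :] * np.ones((8, 1))
assert log_shapeness_index(z_best, image) > 0
```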
The optimization procedure has three stages, starting with a quarter-resolution version of I, and moving to the half and then full resolution. The solution found at the low resolution is interpolated up to the next level and is used as a starting point for the next step in the optimization. In our experiments images are 128x128 pixels. The optimization procedure takes 4000 descent steps at each resolution level. Figure 3: 28 of the 60 test images, arranged in decreasing order of subjects' shapeness ratings. Left: Subjects' rankings. Right: Algorithm's rankings. 5 Results Surprisingly, the simple prior probability of Eq. (4) accounts for much of the ratings of shading or paint by our human subjects. Figure 3 compares the rankings (shown in raster scan order) of a subset of the test images for our algorithm and the average of our subjects. The overall agreement is good. Figure 4 compares two measures: (1) the correlations (dark bars) of the subjects' individual ratings to the mean subject rating with (2) the correlation of our algorithm's ratings to the mean subject rating. Subjects show correlations between 0.6 and 0.9; our Bayesian algorithm showed a correlation of 0.64. Treating the mean subjects' ratings as the right answer, our algorithm did worse than most subjects but not as badly as some subjects. Figure 1 illustrates how our algorithm chooses an interpretation for an image. If a simple shape explains an image, such as the shape explanation (c) for image (a), the shape gradient penalties will be small, assigning a high prior probability to that shape. If a complicated shape (d) is required to explain a simple image (b), the low prior probability of the shape and the high prior probability of the reflectance image will favor a "paint" explanation. We noted that many of the shapes inferred from paint-like images showed long ridges coincidentally aligned with the assumed light direction.
The assumption of generic light direction can be applied in a Bayesian framework [3] to penalize such coincidental alignments. We speculate that such a term would further penalize those unlikely shape interpretations and may improve algorithm performance. Figure 4: Correlation of individual subjects' image ratings with the mean rating (bars), compared with the correlation of the algorithm's rating with the mean rating (dashed line). Acknowledgements We thank E. Adelson, D. Brainard, and J. Tenenbaum for helpful discussions. References [1] E. H. Adelson, 1995. Personal communication. [2] E. H. Adelson and A. P. Pentland. The perception of shading and reflectance. In B. Blum, editor, Channels in the Visual Nervous System: Neurophysiology, Psychophysics, and Models, pages 195-207. Freund Publishing, London, 1991. [3] W. T. Freeman. The generic viewpoint assumption in a framework for visual perception. Nature, 368(6471):542-545, April 7, 1994. [4] B. K. P. Horn and M. J. Brooks. Shape from Shading. MIT Press, Cambridge, MA, 1989. [5] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996. [6] A. P. Pentland. Linear shape from shading. Intl. J. Comp. Vis., 1(4):153-162, 1990. [7] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C. Cambridge Univ. Press, 1992. [8] E. P. Simoncelli and E. H. Adelson. Noise removal via Bayesian wavelet coring. In 3rd Annual Intl. Conf. on Image Processing, Lausanne, Switzerland, 1996. IEEE. [9] P. Sinha and E. H. Adelson. Recovering reflectance and illumination in a world of painted polyhedra. In Proc. 4th Intl. Conf. Computer Vision, pages 156-163. IEEE, 1993. [10] D. Terzopoulos. Multilevel computational processes for visual surface reconstruction. Comp. Vis., Graphics, Image Proc., 24:52-96, 1983. [11] S. C. Zhu and D. Mumford.
Learning generic prior models for visual computation. Submitted to IEEE Trans. PAMI, 1997.
Training Methods for Adaptive Boosting of Neural Networks Holger Schwenk Dept. IRO, Universite de Montreal 2920 Chemin de la Tour, Montreal, Qc, Canada, H3C 3J7 schwenk@iro.umontreal.ca Yoshua Bengio Dept. IRO, Universite de Montreal and AT&T Laboratories, NJ bengioy@iro.umontreal.ca Abstract "Boosting" is a general method for improving the performance of any learning algorithm that consistently generates classifiers which need to perform only slightly better than random guessing. A recently proposed and very promising boosting algorithm is AdaBoost [5]. It has been applied with great success to several benchmark machine learning problems using rather simple learning algorithms [4], and decision trees [1, 2, 6]. In this paper we use AdaBoost to improve the performance of neural networks. We compare training methods based on sampling the training set and weighting the cost function. Our system achieves about 1.4% error on a data base of online handwritten digits from more than 200 writers. Adaptive boosting of a multi-layer network achieved 1.5% error on the UCI Letters and 8.1% error on the UCI satellite data set. 1 Introduction AdaBoost [4, 5] (for Adaptive Boosting) constructs a composite classifier by sequentially training classifiers, while putting more and more emphasis on certain patterns. AdaBoost has been applied to rather weak learning algorithms (with low capacity) [4] and to decision trees [1, 2, 6], but not yet, to the best of our knowledge, to artificial neural networks. These experiments displayed rather intriguing generalization properties, such as a continued decrease in generalization error after training error reaches zero. Previous workers also disagree on the reasons for the impressive generalization performance displayed by AdaBoost on a large array of tasks.
One issue raised by Breiman [1] and the authors of AdaBoost [4] is whether some of this effect is due to a reduction in variance similar to the one obtained from the Bagging algorithm. In this paper we explore the application of AdaBoost to Diabolo (auto-associative) networks and multi-layer neural networks (MLPs). In doing so, we also compare three different versions of AdaBoost: (R) training each classifier with a fixed training set obtained by resampling with replacement from the original training set (as in [1]), (E) training by resampling after each epoch a new training set from the original training set, and (W) training by directly weighting the cost function (here the squared error) of the neural network. Note that the second version (E) is a better approximation of the weighted cost function than the first one (R), in particular when many epochs are performed. If the variance reduction induced by averaging the hypotheses from very different models explains a good part of the generalization performance of AdaBoost, then the weighted training version (W) should perform worse than the resampling versions, and the fixed sample version (R) should perform better than the continuously resampled version (E). 2 AdaBoost AdaBoost combines the hypotheses generated by a set of classifiers trained one after the other. The t-th classifier is trained with more emphasis on certain patterns, using a cost function weighted by a probability distribution D_t over the training data (D_t(i) is positive and Σ_i D_t(i) = 1). Some learning algorithms don't permit training with respect to a weighted cost function. In this case sampling with replacement (using the probability distribution D_t) can be used to approximate a weighted cost function. Examples with high probability would then occur more often than those with low probability, while some examples may not occur in the sample at all although their probability is not zero.
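The three variants can be stated concretely. A minimal sketch (our own code, not the authors') of imposing the distribution D_t on a squared-error learner, either by direct weighting (version W) or by resampling with replacement (versions R and E; R draws one fixed sample before training, E redraws every epoch):

```python
import random

def weighted_squared_error(preds, targets, D):
    """Version W: cost = sum_i D_t(i) * (pred_i - target_i)^2."""
    return sum(d * (p - t) ** 2 for d, p, t in zip(D, preds, targets))

def resample_training_set(examples, D, n, rng):
    """Versions R and E: draw n examples with replacement under D_t.
    R draws once before training; E redraws at every epoch."""
    return rng.choices(examples, weights=D, k=n)

D = [0.1, 0.2, 0.7]                          # emphasis on the third pattern
assert abs(weighted_squared_error([0, 0, 0], [1, 1, 1], D) - 1.0) < 1e-12
assert len(resample_training_set(list("abc"), D, 5, random.Random(0))) == 5
```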
This is particularly true in the simple resampling version (labeled "R" earlier), and unlikely when a new training set is resampled after each epoch (the "E" version). Neural networks can be trained directly with respect to a distribution over the learning data by weighting the cost function (this is the "W" version): the squared error on the i-th pattern is weighted by the probability D_t(i). The result of training the t-th classifier is a hypothesis h_t : X → Y, where Y = {1, ..., k} is the space of labels and X is the space of input features. After the t-th round, the weighted error ε_t of the resulting classifier is calculated and the distribution D_{t+1} is computed from D_t by increasing the probability of incorrectly labeled examples. The global decision f is obtained by weighted voting. Figure 1 (left) summarizes the basic AdaBoost algorithm. It converges (learns the training set) if each classifier yields a weighted error that is less than 50%, i.e., better than chance in the 2-class case. There is also a multi-class version, called pseudoloss-AdaBoost, that can be used when the classifier computes confidence scores for each class. Due to lack of space, we give only the algorithm (see Figure 1, right) and we refer the reader to the references for more details [4, 5]. AdaBoost has very interesting theoretical properties; in particular, it can be shown that the error of the composite classifier on the training data decreases exponentially fast to zero [5] as the number of combined classifiers is increased. More importantly, however, bounds on the generalization error of such a system have been formulated [7]. These are based on a notion of margin of classification, defined as the difference between the score of the correct class and the strongest score of a wrong class. In the case in which there are just two possible labels {-1, +1}, this is y f(x), where f is the composite classifier and y the correct label.
Obviously, the classification is correct if the margin is positive. We now state the theorem bounding the generalization error of AdaBoost [7] (and any classifier obtained by a convex combination of a set of classifiers). Let H be a set of hypotheses (from which the h_t are chosen), with VC-dimension d. Let f be any convex combination of hypotheses from H. Let S be a sample of N examples chosen independently at random according to a distribution D. Then with probability at least 1 − δ over the random choice of the training set S from D, the following bound is satisfied for all θ > 0: P_D[y f(x) ≤ 0] ≤ P_S[y f(x) ≤ θ] + O(√((1/N)(d log²(N/d)/θ² + log(1/δ)))). (1) Note that this bound is independent of the number of combined hypotheses and how they are chosen from H. The distribution of the margins, however, plays an important role. Figure 1: AdaBoost algorithm (left), multi-class extension using confidence scores (right). Basic AdaBoost (left): Input: sequence of N examples (x_1, y_1), ..., (x_N, y_N) with labels y_i ∈ Y = {1, ..., k}. Init: D_1(i) = 1/N for all i. Repeat: 1. Train the neural network with respect to distribution D_t and obtain hypothesis h_t : X → Y. 2. Calculate the weighted error of h_t: ε_t = Σ_{i: h_t(x_i) ≠ y_i} D_t(i); abort the loop if ε_t > 1/2. 3. Set β_t = ε_t/(1 − ε_t). 4. Update the distribution: D_{t+1}(i) = (D_t(i)/Z_t) β_t^{δ_i}, with δ_i = 1 if h_t(x_i) = y_i (else 0), and Z_t a normalization constant. Output: final hypothesis f(x) = argmax_{y ∈ Y} Σ_{t: h_t(x) = y} log(1/β_t). Pseudoloss-AdaBoost (right): Init: let B = {(i, y) : i ∈ {1, ..., N}, y ≠ y_i}; D_1(i, y) = 1/|B| for all (i, y) ∈ B. Repeat: 1. Train the neural network with respect to distribution D_t and obtain hypothesis h_t : X × Y → [0, 1]. 2. Calculate the pseudo-loss of h_t: ε_t = (1/2) Σ_{(i,y) ∈ B} D_t(i, y)(1 − h_t(x_i, y_i) + h_t(x_i, y)). 3. Set β_t = ε_t/(1 − ε_t). 4. Update the distribution: D_{t+1}(i, y) = (D_t(i, y)/Z_t) β_t^{(1/2)(1 + h_t(x_i, y_i) − h_t(x_i, y))}, where Z_t is a normalization constant. Output: final hypothesis f(x) = argmax_{y ∈ Y} Σ_t (log(1/β_t)) h_t(x, y).
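The basic loop of Figure 1 (left) can be sketched directly in code. In this illustrative implementation (our own, not the paper's system; the decision-stump weak learner stands in for the neural network, and the 1e-9 floor on ε_t is a guard we added for perfect weak hypotheses):

```python
import math

def adaboost(X, y, weak_learn, rounds):
    """Basic AdaBoost of Figure 1 (left) with a pluggable weak learner."""
    N = len(X)
    D = [1.0 / N] * N                          # D_1(i) = 1/N
    ensemble = []                              # (h_t, log 1/beta_t) pairs
    for _ in range(rounds):
        h = weak_learn(X, y, D)                # train w.r.t. D_t
        eps = sum(D[i] for i in range(N) if h(X[i]) != y[i])
        if eps >= 0.5:                         # no better than chance: abort
            break
        eps = max(eps, 1e-9)                   # guard for a perfect h_t (our addition)
        beta = eps / (1.0 - eps)
        ensemble.append((h, math.log(1.0 / beta)))
        D = [D[i] * (beta if h(X[i]) == y[i] else 1.0) for i in range(N)]
        Z = sum(D)                             # renormalize so D_{t+1} sums to 1
        D = [d / Z for d in D]
    def f(x):                                  # final hypothesis: weighted vote
        votes = {}
        for h, w in ensemble:
            votes[h(x)] = votes.get(h(x), 0.0) + w
        return max(votes, key=votes.get)
    return f

def stump_learner(X, y, D):
    """Toy weak learner (stands in for the neural net): best threshold stump."""
    best_err, best_h = None, None
    for thr in sorted(set(X)):
        for lo, hi in ((0, 1), (1, 0)):
            h = lambda x, t=thr, lo=lo, hi=hi: lo if x < t else hi
            err = sum(D[i] for i in range(len(X)) if h(X[i]) != y[i])
            if best_err is None or err < best_err:
                best_err, best_h = err, h
    return best_h

X, y = [0, 1, 2, 3], [0, 0, 1, 1]
f = adaboost(X, y, stump_learner, rounds=3)
assert all(f(x) == t for x, t in zip(X, y))
```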
It can be shown that the AdaBoost algorithm is especially well suited to the task of maximizing the number of training examples with large margin [7]. 3 The Diabolo Classifier Normally, neural networks used for classification are trained to map an input vector to an output vector that encodes the classes directly, usually by the so-called "1-out-of-N encoding". An alternative approach with interesting properties is to use auto-associative neural networks, also called autoencoders or Diabolo networks, to learn a model of each class. In the simplest case, each autoencoder network is trained only with examples of the corresponding class, i.e., it learns to reconstruct all examples of one class at its output. The distance between the input vector and the reconstructed output vector expresses the likelihood that a particular example is part of the corresponding class. Therefore classification is done by choosing the best-fitting model. Figure 2 summarizes the basic architecture. It also shows typical classification behavior for an online character recognition task. The input and output vectors are (x, y)-coordinate sequences of a character. The visual representation in the figure is obtained by connecting these points. In this example the "1" is correctly classified since the network for this class has the smallest reconstruction error. The Diabolo classifier uses a distributed representation of the models which is much more compact than the enumeration of references often used by distance-based classifiers like nearest-neighbor or RBF networks. Furthermore, one has to calculate only one distance measure for each class to recognize. This makes it possible to incorporate knowledge via a domain-specific distance measure at a very low computational cost. In previous work [8], we have shown that the well-known tangent distance [11] can be used in the objective function of the autoencoders. This Diabolo classifier has achieved state-of-the-art results in handwritten OCR [8, 9].
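The Diabolo decision rule reduces to picking the class whose model reconstructs the input best. A toy sketch (our own; the prototype "autoencoder" below is an invented stand-in for the paper's tangent-distance networks):

```python
class PrototypeAutoencoder:
    """Toy model of one class: reconstruction error is distance to a
    stored prototype (a stand-in for a trained autoencoder)."""
    def __init__(self, prototype):
        self.prototype = prototype

    def reconstruction_error(self, x):
        return sum((a - b) ** 2 for a, b in zip(x, self.prototype))

def diabolo_classify(x, models):
    """models: dict label -> autoencoder; pick the best-fitting model."""
    return min(models, key=lambda c: models[c].reconstruction_error(x))

models = {"0": PrototypeAutoencoder([0.0, 0.0]),
          "1": PrototypeAutoencoder([1.0, 1.0])}
assert diabolo_classify([0.9, 1.1], models) == "1"
assert diabolo_classify([0.1, -0.1], models) == "0"
```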
Recently, we have also extended the idea of a transformation-invariant distance measure to online character recognition [10]. Figure 2: Architecture of a Diabolo classifier. One autoencoder alone, however, cannot learn efficiently the model of a character if it is written in many different stroke orders and directions. The architecture can be extended by using several autoencoders per class, each one specializing in a particular writing style (subclass). For the class "0", for instance, we would have one Diabolo network that learns a model for zeros written clockwise and another one for zeros written counterclockwise. The assignment of the training examples to the different subclass models should ideally be done in an unsupervised way. However, this can be quite difficult since the number of writing styles is not known in advance and the number of examples in each subclass usually varies a lot. Our training data base contains, for instance, 100 zeros written counterclockwise, but only 3 written clockwise (there are also some more examples written in other unusual styles). Classical clustering algorithms would probably tend to ignore subclasses with very few examples since they aren't responsible for much of the error, but this may result in poor generalization behavior. Therefore, in previous work we have manually assigned the subclass labels [10]. Of course, this is not a generally satisfactory approach, and it is certainly infeasible when the training set is large. In the following, we will show that the emphasizing algorithm of AdaBoost can be used to train multiple Diabolo classifiers per class, performing a soft assignment of examples of the training set to each network.
4 Results with Diabolo and MLP Classifiers Experiments have been performed on three data sets: a data base of online handwritten digits, the UCI Letters database of offline machine-printed alphabetical characters, and the UCI satellite database that is generated from Landsat Multi-Spectral Scanner image data. All data sets have a pre-defined training and test set. The Diabolo classifier was only applied to the online data set (since it takes advantage of the structure of the input features). The online data set was collected at Paris 6 University [10]. It is writer-independent (different writers in training and test sets) and there are 203 writers, 1200 training examples and 830 test examples. Each writer gave only one example per class. Therefore, there are many different writing styles, with very different frequencies. We only applied a simple pre-processing: the characters were resampled to 11 points, centered, and size-normalized to an (x, y)-coordinate sequence in [-1, 1]². Since the Diabolo classifier with tangent distance [10] is invariant to small transformations, we don't need to extract further features. Table 1 summarizes the results on the test set of different approaches before using AdaBoost. The Diabolo classifier with hand-selected sub-classes in the training set performs best since it is invariant to transformations and since it can deal with the different writing styles. The experiments suggest that fully connected neural networks are not well suited for this task: small nets do poorly on both training and test sets, while large nets overfit.
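The pre-processing described above can be sketched as follows (our own reconstruction; the equal-arc-length resampling and the bounding-box normalization are assumptions about details the text leaves open):

```python
import math

def resample(points, n=11):
    """Linearly interpolate n points at equal arc-length spacing."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1]
    out, j = [], 0
    for k in range(n):
        s = total * k / (n - 1)             # target arc length
        while j < len(d) - 2 and d[j + 1] < s:
            j += 1
        seg = d[j + 1] - d[j]
        t = 0.0 if seg == 0 else (s - d[j]) / seg
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def center_and_scale(points):
    """Center the bounding box and scale the longer side to [-1, 1]."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    half = max(max(xs) - min(xs), max(ys) - min(ys)) / 2 or 1.0
    return [((x - cx) / half, (y - cy) / half) for x, y in points]

pts = center_and_scale(resample([(0, 0), (10, 0), (10, 5)]))
assert len(pts) == 11
assert all(-1.0 <= x <= 1.0 and -1.0 <= y <= 1.0 for x, y in pts)
```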
An Analog VLSI Model of the Fly Elementary Motion Detector Reid R. Harrison and Christof Koch Computation and Neural Systems Program, 139-74 California Institute of Technology Pasadena, CA 91125 [harrison,koch]@klab.caltech.edu Abstract Flies are capable of rapidly detecting and integrating visual motion information in behaviorally relevant ways. The first stage of visual motion processing in flies is a retinotopic array of functional units known as elementary motion detectors (EMDs). Several decades ago, Reichardt and colleagues developed a correlation-based model of motion detection that described the behavior of these neural circuits. We have implemented a variant of this model in a 2.0-μm analog CMOS VLSI process. The result is a low-power, continuous-time analog circuit with integrated photoreceptors that responds to motion in real time. The responses of the circuit to drifting sinusoidal gratings qualitatively resemble the temporal frequency response, spatial frequency response, and direction selectivity of motion-sensitive neurons observed in insects. In addition to its possible engineering applications, the circuit could potentially be used as a building block for constructing hardware models of higher-level insect motion integration. 1 INTRODUCTION Flies rely heavily on visual motion information to survive. In the fly, motion information is known to underlie many important behaviors including stabilization during flight, orienting towards small, rapidly-moving objects (Egelhaaf and Borst 1993), and estimating time-to-contact for safe landings (Borst and Bahde 1988). Some motion-related tasks, like extending the legs for landing, can be executed less than 70 milliseconds after stimulus presentation. The computational machinery performing this sensory processing is fast, small, low-power, and robust. There is good evidence that motion information is first extracted by local elementary motion detectors (see Egelhaaf et al. 1988 and references therein).
These EMDs are arranged retinotopically, and receive input from adjacent photoreceptors. Figure 1: Elementary Motion Detector. a) A simplified version of our EMD circuit architecture. In the actual circuit implementation, there are separate ON and OFF channels that operate in parallel. These two channels are summed after the multiplication. b) The measured response of the EMD test circuit to a drifting sinusoidal grating. Notice that the output is phase dependent, but has a positive mean response. If the grating were drifting in the opposite direction, the circuit would give a negative mean response. The properties of these motion-sensitive units have been studied extensively during the past 30 years. Direct recording from individual EMDs is difficult due to the small size of the cells, but much work has been done recording from large tangential cells that integrate the outputs of many EMDs over large portions of the visual field. From these studies, the behavior of individual EMDs has been inferred. If we wish to study models of motion integration in the fly, we first need a model of the EMD. Since many motion integration neurons in the fly are only a few synapses away from muscles, it may be possible in the near future to construct models that complete the sensorimotor loop. If we wish to include the real world in the loop, we need a mobile system that works in real time. In the pursuit of such a system, we follow the neuromorphic engineering approach pioneered by Mead (Mead 1989) and implement a hardware model of the fly EMD in a commercially available 2.0-μm CMOS VLSI process. All data presented in this paper are from one such chip.
2 ALGORITHM AND ARCHITECTURE Figure 1a shows a simplified version of the motion detector. This is an elaborated version of the correlation-based motion detector first proposed by Reichardt and colleagues (see Reichardt 1987 and references therein). The Reichardt motion detector works by correlating (by means of a multiplication) the response of one photoreceptor with the delayed response of an adjacent photoreceptor. Our model uses the phase lag inherent in a low-pass filter to supply the delay. The outputs from two mirror-symmetric correlators are subtracted to remove any response to full-field flicker (ω_s = 0, ω_t > 0). Correlation-based EMDs are not pure velocity sensors. Their response is strongly affected by the contrast and the spatial frequency components of the stimulating pattern. They can best be described as direction-selective spatiotemporal filters. The mean steady-state response R of the motion detector shown in Figure 1a to a sinusoidal grating drifting in one direction can be expressed as a separable function of stimulus amplitude (ΔI), temporal Figure 2: EMD Subcircuits. a) Temporal derivative circuit. In combination with the first-order low-pass filter inherent in the photoreceptor, this forms the high-pass filter with time constant τ_H. The feedback amplifier enforces V = V_in, and the output is the current needed for the nFET or pFET source follower to charge or discharge the capacitor C. b) Current-mode low-pass filter. The time constant τ_L is determined by the bias current I_τ (which is set by a bias voltage supplied from off-chip), the capacitance C, and the thermal voltage U_T = kT/q. c) Current-mode one-quadrant multiplier. The devices shown are floating-gate nFETs. Two control gates capacitively couple to the floating node, forming a capacitive divider.
frequency (Wt = 21r It), and spatial frequency (ws = 21r is): (1) (2) where D.cp is the angular separation of the photoreceptors, TH is the time constant of the high-pass filter, and TL is the time constant of the low-pass filter (see Figure 1 a). (Note that this holds only for motion in a particular direction. Motion detectors are not linearly separable overall, but the single-direction analysis is useful for making comparisons.) 3 CIRCUIT DESCRIPTION In addition to the basic Reichardt model described above, we include a high-pass filter in series with the photoreceptor. This amplifies transient responses and removes the DC component of the photoreceptor signal. We primarily use the high-pass filter as a convenient circuit to switch from a voltage-mode to a current-mode representation (see Figure 2a). For the photoreceptor, we use an adaptive circuit developed by Delbruck (Delbruck and Mead 1996) that produces an output voltage proportional to log intensity. We bias the photoreceptor very weakly to attenuate high temporal frequencies. This is directly followed by a temporal derivative circuit (Mead 1989) (see Figure 2a), the result being a high-pass filter with the dominant pole TH being set by the photoreceptor cutoff frequency. The outputs of the temporal derivati·,re circuit are two unidirectional currents that represent the positive and negative components of a high-pass filtered version of the photoreceptor output. This resembles the ON and OFF channels found in many biological visual systems. Some studies suggest ON and OFF channels are present in the fly (Franceschini et al. 1989) but the evidence is mixed (Egelhaaf and Borst 1992). This two-channel representation is useful for current-mode circuits, since the following translinear circuits work only with unidirectional An Analog VLSI Model of the Fly Elementary Motion Detector 0.8 0.2 0.1 1 10 Temporal Frequency J, [Hz] 883 100 Figure 3: Temporal Frequency Response. 
Circuit data was taken with is = 0.05 cyclesldeg and 86% contrast. Theory trace is Rt(Wt) from Equation 2, where TH = 360 ms and TL = 25 ms were directly measured in separate experiments - these terms were not fit to the data. Insect data was taken from a wide-field motion neuron in the blowfly Calliphora erythrocephala (O'Carroll et al. 1996). All three curves were normalized by their peak response. currents. It should be noted that the use of ON and OFF channels introduces nonlinearities into the circuit that are not accounted for in the simple model described by Equation 2. The current-mode low-pass filter is shown in Figure 2b. The time constant TL is set by the bias current 11'. This is a log-domain filter that takes advantage of the exponential behavior of field-effect transistors (FETs) in the subthreshold region of operation (Minch, personal communication). The current-mode multiplier is shown in Figure 2c. This circuit is also translinear, using a diode-connected FET to convert the input currents into log-encoded voltages. A weighted sum of the voltages is computed with a capacitive divider, and the resulting voltage is exponentiated by the output FET into the output current. The capacitive divider creates a floating node, and the charge on all these nodes must be equalized to ensure matching across independent multipliers. This is easily accomplished by exposing the chip to UV light for several minutes. This circuit represents one of a family of floating-gate MaS translinear circuits developed by Minch that are capable of computing arbitrary power laws in current mode (Minch et al. 1996). After the multiplication stage, the currents from the ON and OFF channels are summed, and the final subtraction of the left and right channels is done off-chip. There is a gain mismatch of approximately 2.5 between the left and right channels that is now compensated for manually. This mismatch must be lowered before large on-chip arrays of EMDs are practical. 
A new circuit designed to lessen this gain mismatch is currently being tested. It is interesting to note that there is no significant offset error in the output currents from each channel. This is a consequence of using translinear circuits, which typically have gain errors due to transistor mismatch, but no fixed offset errors.

4 EXPERIMENTS

As we showed in Equation 2, the motion detector's response to a drifting sinusoidal grating of a particular direction should be a separable function of ΔI, temporal frequency, and spatial frequency. We tested the circuit along these axes using printed sinusoidal gratings mounted on a rotating drum. A lens with an 8-mm focal length was mounted over the chip. Each stimulus pattern had a fixed contrast ΔI/2Ī and spatial frequency fs. The temporal frequency was set by the pattern's angular velocity v as seen by the chip, where ft = fs·v. The response of the circuit to a drifting sine wave grating is phase dependent (see Figure 1b). In flies, this phase dependency is removed by integrating over large numbers of EMDs (spatial integration). In order to evaluate the performance of our circuit, we measured the mean response over time.

Figure 4: Spatial Frequency Response. Circuit data was taken with ft = 4 Hz and 86% contrast. Theory trace is Rs(ωs) from Equation 2 multiplied by exp(−ωs²/K²) to account for blurring in the optics. The photoreceptor spacing Δφ = 1.9° was directly measured in a separate experiment. Only K and the overall magnitude were varied to fit the data. Insect data was taken from a wide-field motion neuron in the hoverfly Volucella pelluscens (O'Carroll et al. 1996). Circuit and insect data were normalized by their peak response.
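The phase-dependent output with a direction-dependent mean can be reproduced with a few lines of simulation. The sketch below is an illustrative discrete-time Reichardt model, not the chip itself: the filter time constant, the temporal frequency, and the phase offset between the two photoreceptor signals are arbitrary choices, and the high-pass stage and ON/OFF channels are omitted.

```python
import math

def lowpass(x, tau, dt):
    """First-order low-pass filter; its phase lag supplies the delay."""
    y, a, out = 0.0, dt / (tau + dt), []
    for v in x:
        y += a * (v - y)
        out.append(y)
    return out

def grating(phase, ft=4.0, dt=0.001, n=4000):
    """Photoreceptor signal from a drifting sinusoid; `phase` is the
    spatial phase offset seen by the second photoreceptor."""
    return [math.sin(2 * math.pi * ft * k * dt + phase) for k in range(n)]

def emd_mean(ph1, ph2, tau=0.025, dt=0.001):
    """Mean output of two mirror-symmetric delay-and-correlate units."""
    d1, d2 = lowpass(ph1, tau, dt), lowpass(ph2, tau, dt)
    out = [a * db - b * da for a, b, da, db in zip(ph1, ph2, d1, d2)]
    return sum(out) / len(out)

# Opposite drift directions give opposite phase offsets between adjacent
# photoreceptors, and hence mean responses of opposite sign (Figure 1b).
fwd = emd_mean(grating(0.0), grating(+0.5))
rev = emd_mean(grating(0.0), grating(-0.5))
print(fwd, rev)   # one positive, one negative
```

The opponent subtraction of the two correlators is what cancels full-field flicker: if both photoreceptors see the same signal, the two products are identical and the output vanishes.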
Figure 3 shows the temporal frequency response of the circuit as compared to theory, and to a wide-field motion neuron in the fly. The circuit exhibits temporal frequency tuning. The point of peak response is largely determined by τL, and can be changed by altering the low-pass filter bias current. The deviation of the circuit behavior from theory at low frequencies is thought to be a consequence of crossover distortion in the temporal derivative circuit. At high temporal frequencies, parasitic capacitances in current mirrors are a likely candidate for the discrepancy. The temporal frequency response of the blowfly Calliphora is broader than both the theory and circuit curves. This might be a result of time-constant adaptation found in blowfly motion-sensitive neurons (de Ruyter van Steveninck et al. 1986).

Figure 4 shows the spatial frequency response of the circuit. The response goes toward zero as ωs approaches zero, indicating that the circuit greatly attenuates full-field flicker. The circuit begins aliasing at ωs = 1/(2Δφ), giving a response in the wrong direction. Spatial aliasing has also been observed in flies (Gotz 1965). The optics used in the experiment act as an antialiasing filter, so aliasing could be avoided by defocusing the lens slightly.

Figure 5 shows the directional tuning of the circuit. It can be shown that as long as the spatial wavelength is large compared to Δφ, the directional sensitivity of a correlation-based motion detector should approximate a cosine function (Zanker 1990). The circuit's performance matches this quite well. Motion-sensitive neurons in the fly show cosine-like direction selectivity.

Figure 5: Directional Response. Circuit data was taken with ft = 6 Hz, fs = 0.05 cycles/deg and 86% contrast. Theory trace is cos α, where α is the direction of motion relative to the axis along the two photoreceptors. Insect data was taken from the H1 neuron in the blowfly Calliphora erythrocephala (van Hateren 1990). H1 is a spiking neuron with a low spontaneous firing rate. The flattened negative responses visible in the graph are a result of the cell's limited dynamic range in this region. All three curves were normalized by their peak response.

Figure 6 shows the contrast response of the circuit. Insect EMDs show a saturating contrast response curve, which can be accounted for by introducing saturating nonlinearities before the multiplication stage (Egelhaaf and Borst 1989). We did not attempt to model contrast saturation in our circuit, though it could be added in future versions.

Figure 6: Contrast Response. Circuit data was taken with ft = 6 Hz and fs = 0.1 cycles/deg. Theory trace is RI(ΔI) from Equation 2 with its magnitude scaled to fit the circuit data. Insect data was taken from the HS neuron in the blowfly Calliphora erythrocephala (Egelhaaf and Borst 1989). Circuit and insect data were normalized by their peak response.

5 CONCLUSIONS

We implemented and tested an analog VLSI model of the fly elementary motion detector. The circuit's spatiotemporal frequency response and directional selectivity are qualitatively similar to the responses of motion-sensitive neurons in the fly. This circuit could be a useful building block for constructing analog VLSI models of motion integration in flies. As an integrated, low-power, real-time sensory processor, the circuit may also have engineering applications.

Acknowledgements

This work was supported by the Center for Neuromorphic Systems Engineering as a part of NSF's Engineering Research Center program, and by ONR. Reid Harrison is supported by an NDSEG fellowship from ONR. We thank Bradley Minch, Holger Krapp, and Rainer Deutschmann for invaluable discussions.

References

A. Borst and S. Bahde (1988) Visual information processing in the fly's landing system. J. Comp. Physiol. A 163: 167-173.
T. Delbruck and C. Mead (1996) Analog VLSI phototransduction by continuous-time, adaptive, logarithmic photoreceptor circuits. CNS Memo No. 30, Caltech.
M. Egelhaaf, K. Hausen, W. Reichardt, and C. Wehrhahn (1988) Visual course control in flies relies on neuronal computation of object and background motion. TINS 11: 351-358.
M. Egelhaaf and A. Borst (1989) Transient and steady-state response properties of movement detectors. J. Opt. Soc. Am. A 6: 116-127.
M. Egelhaaf and A. Borst (1992) Are there separate ON and OFF channels in fly motion vision? Visual Neuroscience 8: 151-164.
M. Egelhaaf and A. Borst (1993) A look into the cockpit of the fly: Visual orientation, algorithms, and identified neurons. J. Neurosci. 13: 4563-4574.
N. Franceschini, A. Riehle, and A. le Nestour (1989) Directionally selective motion detection by insect neurons. In Stavenga/Hardie (eds.), Facets of Vision, Berlin: Springer-Verlag.
K.G. Gotz (1965) Die optischen Übertragungseigenschaften der Komplexaugen von Drosophila. Kybernetik 2: 215-221.
J.H. van Hateren (1990) Directional tuning curves, elementary movement detectors, and the estimation of the direction of visual movement. Vision Res. 30: 603-614.
C. Mead (1989) Analog VLSI and Neural Systems. Reading, Mass.: Addison-Wesley.
B.A. Minch, C. Diorio, P. Hasler, and C. Mead (1996) Translinear circuits using subthreshold floating-gate MOS transistors. Analog Int. Circuits and Signal Processing 9: 167-179.
D.C. O'Carroll, N.J. Bidwell, S.B. Laughlin, and E.J. Warrant (1996) Insect motion detectors matched to visual ecology. Nature 382: 63-66.
W. Reichardt (1987) Evaluation of optical motion information by movement detectors. J. Comp. Physiol. A 161: 533-547.
R.R. de Ruyter van Steveninck, W.H. Zaagman, and H.A.K. Mastebroek (1986) Adaptation of transient responses of a movement-sensitive neuron in the visual system of the blowfly Calliphora erythrocephala. Biol. Cybern. 54: 223-236.
J.M. Zanker (1990) On the directional sensitivity of motion detectors. Biol. Cybern. 62: 177-183.
1997
98
1,450
Experiences with Bayesian Learning in a Real World Application

Peter Sykacek, Georg Dorffner
Austrian Research Institute for Artificial Intelligence
Schottengasse 3, A-1010 Vienna, Austria
peter, georg@ai.univie.ac.at

Peter Rappelsberger
Institute for Neurophysiology at the University Vienna
Währinger Straße 17, A-1090 Wien
Peter.Rappelsberger@univie.ac.at

Josef Zeitlhofer
Department of Neurology at the AKH Vienna
Währinger Gürtel 18-20, A-1090 Wien
Josef.Zeitlhofer@univie.ac.at

Abstract

This paper reports about an application of Bayes' inferred neural network classifiers in the field of automatic sleep staging. The reason for using Bayesian learning for this task is two-fold. First, Bayesian inference is known to embody regularization automatically. Second, a side effect of Bayesian learning leads to larger variance of network outputs in regions without training data. This results in well known moderation effects, which can be used to detect outliers. In a 5-fold cross-validation experiment the full Bayesian solution found with R. Neal's hybrid Monte Carlo algorithm was not better than a single maximum a-posteriori (MAP) solution found with D.J. MacKay's evidence approximation. In a second experiment we studied the properties of both solutions in rejecting classification of movement artefacts.

1 Introduction

Sleep staging is usually based on rules defined by Rechtschaffen and Kales (see [8]). Rechtschaffen and Kales rules define 4 sleep stages, stage one to four, as well as rapid eye movement (REM) and wakefulness. In [1] J. Bentrup and S. Ray report that every year nearly one million US citizens consulted their physicians concerning their sleep. Since sleep staging is a tedious task (one all-night recording on average takes about 3 hours to score manually), much effort was spent in designing automatic sleep stagers.
Sleep staging is a classification problem which was solved using classical statistical techniques or techniques that emerged from the field of artificial intelligence (AI). Among classical techniques especially the k nearest neighbor technique was used. In [1] J. Bentrup and S. Ray report that the classical technique outperformed their AI approaches. Among techniques from the field of AI, researchers used inductive learning to build tree based classifiers (e.g. ID3, C4.5) as reported by M. Kubat et al. in [4]. Neural networks have also been used to build a classifier from training examples. Among those who used multi layer perceptron networks to build the classifier, the work of N. Schaltenbrand et al. seems most interesting. In [10] they use a separate network to refuse classification of too distant input vectors. The performance usually reported is in the range of 75 to 85 percent.

Which enhancements to these approaches can be made to get a reliable system with hopefully better performance? According to S. Roberts et al. in [9], outlier detection is important to get reliable results in a critical (e.g. medical) environment. To get reliable results one must refuse classification of dubious inputs. Those inputs are marked separately for further inspection by a human expert. To be able to detect such dubious inputs, we use Bayesian inference to calculate a distribution over the neural network weights. This approach automatically incorporates the calculation of confidence for each network estimate. Bayesian inference has the further advantage that regularization is part of the learning algorithm. Additional methods like weight decay penalty and cross validation for decay parameter tuning are no longer needed. Bayesian inference for neural networks was among others investigated by D.J. MacKay (see [5]), Thodberg (see [11]) and Buntine and Weigend (see [3]).

The aim of this paper is to study how Bayesian inference leads to probabilities for classes, which together with doubt levels allow to refuse classification of outliers. As we are interested in evaluating the resulting performance, we use a comparative method on the same data set and use a significance test, such that the effect of the method can easily be evaluated.

2 Methods

In this section we give a short description of the inference techniques used to perform the experiments. We have used two approaches using neural networks as classifiers and an instance based approach in order to make the performance estimates comparable to other methods.

2.1 Architecture for polychotomous classification

For polychotomous classification problems usually a 1-of-c target coding scheme is used. Usually it is sufficient to use a network architecture with one hidden layer. In [2], pp. 237-240, C. Bishop gives a general motivation for the softmax data model, which should be used if one wants the network outputs to be probabilities for classes. If we assume that the class conditional densities, p(x | Ck), of the hidden unit activation vector, x, are from the general family of exponential distributions, then using the transformation in (1) allows to interpret the network outputs as probabilities for classes. This transformation is known as normalized exponential or softmax activation function.

    p(Ck | x) = exp(ak) / Σk' exp(ak')    (1)

In (1) the value ak is the value at output node k before applying softmax activation. Softmax transformation of the activations in the output layer is used for both network approaches used in this paper.

2.2 Bayesian Inference

In [6] D.J. MacKay uses Bayesian inference and marginalization to get moderated probabilities for classes in regions where the network is uncertain about the class label. In conjunction with doubt levels this allows to suppress a classification of such patterns.
A closer investigation of this approach showed that marginalization leads to moderated probabilities, but the degree of moderation heavily depends on the direction in which we move away from the region with sufficient training data. Therefore one has to be careful about whether the moderation effect should be used for outlier detection. A Bayesian solution for neural networks is a posterior distribution over weight space calculated via Bayes' theorem using a prior over weights:

    p(w | D) = p(D | w) p(w) / p(D)    (2)

In (2), w is the weight vector of the network and D represents the training data. Two different possibilities are known to calculate the posterior in (2). In [5] D.J. MacKay derives an analytical expression assuming a Gaussian distribution. In [7] R. Neal uses a hybrid Monte Carlo method to sample from the posterior. For one input pattern, the posterior over weight space will lead to a distribution of network outputs. For a classification problem, following MacKay [6], the network estimate is calculated by marginalization over the output distribution:

    P(C1 | x, D) = ∫ P(C1 | x, w) p(w | D) dw = ∫ y(x, w) p(w | D) dw    (3)

In general, the distribution over output activations will have small variance in regions well represented in the training data and large variance everywhere else. The reason for that is the influence of the likelihood term p(D | w), which forces the network mapping to lie close to the desired one in regions with training data, but which has no influence on the network mapping in regions without training data. At least for generalized linear models applied to regression, this property is quantifiable. In [12] C. Williams et al. showed that the error bar is proportional to the inverse input data density p(x)^(-1). A similar relation is also plausible for the output activation in classification problems.
Due to the nonlinearity of the softmax transformation, marginalization will moderate probabilities for classes. Moderation will be larger in regions with large variance of the output activation. Compared to a decision made with the most probable weight, the network guess for the class label will be less certain. This moderation effect allows to reject classification of outlying patterns. Since the integral in (3) cannot be solved analytically for classification problems, there are two possibilities to solve it. In [6] D.J. MacKay uses an approximation. Using hybrid Monte Carlo sampling as an implementation of Bayesian inference (see R. Neal in [7]), there is no need to perform the integration analytically. The hybrid Monte Carlo algorithm samples from the posterior, and the integral is calculated as a finite sum:

    P(C1 | x, D) ≈ (1/L) Σ_{i=1}^{L} y(x, w_i)    (4)

Assuming that the posterior over weights is represented exactly by the sampled weights, there is no need to limit the number of hidden units if a correct (scaled) prior is used. Consequently in the experiments the network size was chosen to be large. We used 25 hidden units. Implementation details of the hybrid Monte Carlo algorithm may be found in [7].

2.3 The Competitor

The classifier used to give performance estimates to compare to is built as a two layer perceptron network with softmax transformation applied to the outputs. As an error function we use the cross entropy error including a consistent weight decay penalty, as it is e.g. proposed by C. Bishop in [2], pp. 338. The decay parameters are estimated with D.J. MacKay's evidence approximation (see [5] for details). Note that the restriction of D.J. MacKay's implementation of Bayesian learning, which has no solution to arrive at moderated probabilities in 1-of-c classification problems, does not apply here since we use only one MAP value.
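The finite-sum estimate of Equation (4) is just an average of softmax outputs over sampled networks. The sketch below fakes the posterior samples with random output activations (a stand-in for networks drawn by hybrid Monte Carlo, not R. Neal's code) to show the moderation effect that motivates outlier rejection: when the samples disagree, the averaged probabilities are pulled toward uniform.

```python
import math
import random

def softmax(a):
    m = max(a)
    e = [math.exp(v - m) for v in a]
    s = sum(e)
    return [v / s for v in e]

def predictive(activation_samples):
    """Equation (4): average the softmax outputs of L sampled networks."""
    probs = [softmax(a) for a in activation_samples]
    L = len(probs)
    return [sum(p[k] for p in probs) / L for k in range(len(probs[0]))]

random.seed(0)
# Near the training data the sampled networks agree: low output variance.
agree = [[3.0 + random.gauss(0, 0.1), 0.0, 0.0] for _ in range(100)]
# For an outlier the sampled networks disagree: high output variance.
disagree = [[random.gauss(0, 5) for _ in range(3)] for _ in range(100)]
p_in, p_out = predictive(agree), predictive(disagree)
print(max(p_in), max(p_out))   # the outlier's probabilities are moderated
```

A MAP classifier, by contrast, evaluates a single network and can report high confidence for an outlier, which is why the moderated probabilities are the quantity of interest here.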
The key problem with this approach is the Gaussian approximation of the posterior over weights, which is used to derive the most probable decay parameters. This approximation is certainly only valid if the number of network parameters is small compared to the number of training samples. One consequence is that the size of the network has to be restricted. Our model uses 6 hidden units. To make the performance of the Bayes inferred classifier also comparable to other methods, we decided to include performance estimates of a k nearest neighbor algorithm. This algorithm is easy to implement and from [1] we have some evidence that its performance is good.

3 Experiments and Results

In this section we discuss the results of a sleep staging experiment based on the techniques described in the "Methods" section.

3.1 Data

All experiments are performed with spectral features calculated from a database of 5 different healthy subjects. All recordings were scored according to the Rechtschaffen & Kales rules. The data pool consisted of data calculated for all electrodes available, which were horizontal eye movement, vertical eye movement and 18 EEG electrodes placed with respect to the international 10-20 system. The data were transformed into the frequency domain. We used power density values as well as coherency between different electrodes, which is a correlation coefficient expressed as a function of frequency, as input features. All data were transformed to zero mean and unit variance. From the resulting feature space we selected 10 features, which were used as inputs for classification. Feature selection was done with a suboptimal search algorithm which used the performance of a k nearest neighbor classifier for evaluation. We used more than 2300 samples during training and about 580 for testing.
3.2 Analysis of Both Classifiers

The analysis of both classifiers described in the "Methods" section should reveal whether, besides good classification performance, the Bayes' inferred classifier is also capable of refusing outlying test patterns. Increasing the doubt level should lead to better results of the classifier trained by Bayesian inference if the test data contains outlying patterns. We performed two experiments. During the first experiment we calculated results from a 5 fold cross validation, where training is done with 4 subjects and tests are performed with one independent test person. In a second test we examine the differences of both algorithms on patterns which are definitely outliers. We used the same classifiers as in the first experiment. Test patterns for this experiment were classified movement artefacts, which should not be classified as one of the sleep stages.

The classifier used in conjunction with Bayesian inference was a 2-layer neural network with 10 inputs, 25 hidden units with sigmoid activation and five output units with softmax activation. The large number of hidden units is motivated by the results reported by R. Neal in [7]. R. Neal studied the properties of neural networks in a Bayesian framework when using Gaussian priors over weights. He concluded that there is no need for limiting the complexity of the network when using a correct Bayesian approach. The standard deviation of the Gaussian prior is scaled by the number of hidden units. For the comparative approach we used a neural network with 10 inputs, 6 hidden units and 5 outputs with softmax activation. Optimization was done via the BFGS algorithm (see C. Bishop in [2]) with automatic weight decay parameter tuning (D.J. MacKay's evidence approximation). As described in the methods section, the smaller network used here is motivated by the Gaussian approximation of the posterior over weights, which is used in the expression for the most probable decay parameters.
The third result was achieved with a k nearest neighbor classifier with k set to three. All results are summarized in Table 1. Each column summarizes the results achieved with one of the algorithms and a certain doubt level during the cross validation run. As the k nearest neighbor classifier gives only coarse probability estimates, we give only the performance estimate when all test patterns are classified. An examination of Table 1 shows that the differences between the MAP solution and the Bayesian solution are extremely small. Consequently, using a t-test, the 0-hypothesis could not be rejected at any reasonable significance level. On the other hand, compared to the Bayesian solution, the performance of the k nearest neighbor classifier is significantly lower (the significance level is 0.001).

Table 1: Classification Performance

                     Doubt Cases:    0      5%     10%    15%
  MAP     Mean Perf.               78.6%  80.4%  81.6%  83.2%
          Std. Dev.                 9.1%   9.4%   9.4%   9.1%
  Bayes   Mean Perf.               78.4%  80.2%  82.2%  83.6%
          Std. Dev.                 8.6%   9.0%   9.4%   9.1%
  k-NN    Mean Perf.               74.6%
          Std. Dev.                 8.4%

Table 2: Rejection of Movement Periods

  recognized outliers     MAP             Bayes
                       No.    %        No.    %
                        0     0%        1    7.1%
                        1    7.7%       6   46.2%
                        2   15.4%       5   38.5%
                        0     0%        5   38.5%
                        1    7.7%       3   23.1%

The last experiment revealed that both training algorithms lead to comparable performance estimates when clean data is used. When using the classifier in practice there is no guarantee that the data are clean. One common problem of all-night recordings are the so-called movement periods, which are periods with muscle activity due to movements of the sleeping subject. During a second experiment we tried to assess the robustness of both neural classifiers against such inputs. During this experiment we used a fixed doubt level, for which approximately 5% of the clean test data from the last experiment were rejected.
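The doubt levels used above correspond to a simple reject rule: refuse to classify whenever the predictive probability of the best class falls below a threshold chosen so that a given fraction of clean test data is rejected. The sketch below is our reading of that procedure (quantile-based threshold selection is an assumption, not code from the paper), with made-up confidence values:

```python
def reject_threshold(clean_max_probs, doubt_level):
    """Pick the confidence threshold that rejects `doubt_level` of clean data."""
    s = sorted(clean_max_probs)
    k = int(doubt_level * len(s))
    return s[k]

def classify(probs, threshold):
    """Return the argmax class, or None ('doubt') if confidence is too low."""
    p = max(probs)
    return None if p < threshold else probs.index(p)

# Max class probabilities observed on clean held-out data (invented values).
clean = [0.95, 0.90, 0.88, 0.80, 0.70, 0.99, 0.85, 0.60, 0.92, 0.97]
t = reject_threshold(clean, 0.10)            # reject ~10% of clean patterns
print(classify([0.2, 0.5, 0.3], t))          # dubious input -> doubt
print(classify([0.05, 0.9, 0.05], t))        # confident input -> class 1
```

Because marginalization moderates the probabilities of outlying inputs, the same threshold rejects more genuine outliers under the full Bayesian solution than under the MAP solution, which is what Table 2 measures.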
With this doubt level we classified 13 movement periods, which should not be assigned to any of the other stages. The numbers of correctly refused outlying patterns are shown in Table 2. Analysis of the results with a t-test showed a significantly higher rate of removed outliers for the full Bayesian approach. Nevertheless, as the number of misclassified outliers is large, one has to be careful in using this side-effect of Bayesian inference.

4 Conclusion

Using Bayesian inference for neural network training is an approach which leads to better classification results compared with simpler training procedures. Comparing with the "one MAP" solution, we observed significantly larger reliability in detecting dubious patterns. The large amount of remaining misclassified patterns, which were obviously outlying, shows that we should not rely blindly on the moderating effect of marginalization. Despite the large amount of time which is required to calculate the solution, Bayesian inference has relevance for practical applications. On one hand the Bayesian solution shows good performance. But the main reason is the ability to encode a validity region of the model into the solution. Compared to all methods which do not aim at a predictive distribution, this is a clear advantage for Bayesian inference.
This work was sponsored by the Austrian Federal Ministry of Science and Transport. It was done in the framework of the BIOMED 1 concerted action ANNDEE, financed by the European Commission, DG. XII.

References

[1] J.A. Bentrup and S.R. Ray. An examination of inductive learning algorithms for the classification of sleep signals. Technical Report UIUCDCS-R-93-1792, Dept. of Computer Science, University of Illinois, Urbana-Champaign, 1993.
[2] C. M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, 1995.
[3] W. L. Buntine and A. S. Weigend. Bayesian back-propagation. Complex Systems, 5:603-643, 1991.
[4] M. Kubat, G. Pfurtscheller, and D. Flotzinger. Discrimination and classification using both binary and continuous variables. Biological Cybernetics, 70:443-448, 1994.
[5] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4:415-447, 1992.
[6] D. J. C. MacKay. The evidence framework applied to classification networks. Neural Computation, 4:720-736, 1992.
[7] R. M. Neal. Bayesian Learning for Neural Networks. Springer, New York, 1996.
[8] A. Rechtschaffen and A. Kales. A manual of standardized terminology, techniques and scoring system for sleep stages of human subjects. NIH Publication No. 204, US Government Printing Office, Washington, DC, 1968.
[9] S. Roberts, L. Tarassenko, J. Pardey, and D. Siegwart. A confidence measure for artificial neural networks. In International Conference Neural Networks and Expert Systems in Medicine and Healthcare, pages 23-30, Plymouth, UK, 1994.
[10] N. Schaltenbrand, R. Lengelle, and J.P. Macher. Neural network model: application to automatic analysis of human sleep. Computers and Biomedical Research, 26:157-171, 1993.
[11] H. H. Thodberg. A review of Bayesian neural networks with an application to near infrared spectroscopy. IEEE Transactions on Neural Networks, 7(1):56-72, January 1996.
[12] C. K. I. Williams, C. Quazaz, C. M. Bishop, and H. Zhu.
On the relationship between Bayesian error bars and the input data density. In Fourth International Conference on Artificial Neural Networks, Churchill College, University of Cambridge, UK. IEE Conference Publication No. 409, pages 160-165, 1995.
1997
99
1,451
Computational Differences between Asymmetrical and Symmetrical Networks

Zhaoping Li    Peter Dayan
Gatsby Computational Neuroscience Unit
17 Queen Square, London, England, WC1N 3AR.
zhaoping@gatsby.ucl.ac.uk    dayan@gatsby.ucl.ac.uk

Abstract

Symmetrically connected recurrent networks have recently been used as models of a host of neural computations. However, because of the separation between excitation and inhibition, biological neural networks are asymmetrical. We study characteristic differences between asymmetrical networks and their symmetrical counterparts, showing that they have dramatically different dynamical behavior and also how the differences can be exploited for computational ends. We illustrate our results in the case of a network that is a selective amplifier.

1 Introduction

A large class of non-linear recurrent networks, including those studied by Grossberg,9 the Hopfield net,10,11 and many more recent proposals for the head direction system,27 orientation tuning in primary visual cortex,25,1,3,18 eye position,20 and spatial location in the hippocampus,9 make a key simplifying assumption that the connections between the neurons are symmetric. Analysis is relatively straightforward in this case, since there is a Lyapunov (or energy) function4,11 that often guarantees the convergence of the motion trajectory to an equilibrium point. However, the assumption of symmetry is broadly false. Networks in the brain are almost never symmetrical, if for no other reason than the separation between excitation and inhibition. In fact, the question of whether ignoring the polarity of the cells is simplification or over-simplification has yet to be fully answered.
Networks with excitatory and inhibitory cells (EI systems, for short) have long been studied,6 for instance from the perspective of pattern generation in invertebrates,23 and oscillations in the thalamus7,24 and the olfactory system.17,13 Further, since the discovery of 40 Hz oscillations or synchronization amongst cells in primary visual cortex of anesthetised cat,8,5 oscillatory models of V1 involving separate excitatory and inhibitory cells have also been popular, mainly from the perspective of how the oscillations can be created and sustained and how they can be used for feature linking or binding.26,22,12 However, the scope for computing with dynamically stable behaviors such as limit cycles is not yet clear.

In this paper, we study the computational differences between a family of EI systems and their symmetric counterparts (which we call S systems). One inspiration for this work is Li's nonlinear EI system modeling how the primary visual cortex performs contour enhancement and pre-attentive region segmentation.14,15 Studies by Braun2 had suggested that an S system model of the cortex cannot perform contour enhancement unless additional (and biologically questionable) mechanisms are used. This posed a question about the true differences between EI and S systems that we answer. We show that EI systems can take advantage of dynamically stable modes that are not available to S systems. The computational significance of this result is discussed and demonstrated in the context of models of orientation selectivity.
More details of this work, especially its significance for models of the primary visual cortical system, can be found in Li & Dayan (1999).16

2 Theory and Experiment

Consider a simple, but biologically significant, EI system in which excitatory and inhibitory cells come in pairs and there are no 'long-range' connections from the inhibitory cells14,15 (to which the Lyapunov theory4,21 does not yet apply):

$$\dot{x}_i = -x_i + \sum_j J_{ij} g(x_j) - h(y_i) + I_i \qquad \tau_y \dot{y}_i = -y_i + \sum_j W_{ij} g(x_j) \quad (1)$$

where $x_i$ are the principal excitatory cells, which receive external or sensory input $I_i$ and generate the network outputs $g(x_i)$; $y_i$ are the inhibitory interneurons (which are taken here as having no external input); the function $g(x) = [x - T]^+$ is the threshold non-linear activation function for the excitatory cells; $h(y)$ is the activation function for the inhibitory cells (for analytical convenience, we use the linear form $h(y) = y - T_y$, although the results are similar with the non-linear $h(y) = [y - T_y]^+$); $\tau_y$ is a time-constant for the inhibitory cells; and $J_{ij}$ and $W_{ij}$ are the output connections of the excitatory cells. Excitatory and inhibitory cells can also be perturbed by Gaussian noise. In the limit that the inhibitory cells are made infinitely fast ($\tau_y = 0$), we have $y_i = \sum_j W_{ij} g(x_j)$, leaving the excitatory cells to interact directly with each other:

$$\dot{x}_i = -x_i + \sum_j J_{ij} g(x_j) - h\Big(\sum_j W_{ij} g(x_j)\Big) + I_i \quad (2)$$
$$\phantom{\dot{x}_i} = -x_i + \sum_j (J_{ij} - W_{ij}) g(x_j) + I_i + \kappa_i \quad (3)$$

where $\kappa_i$ are constants. In this network, the neural connections $J_{ij} - W_{ij}$ between any two cells can be either excitatory or inhibitory, as in many abstract neural network models. When $J_{ij} = J_{ji}$ and $W_{ij} = W_{ji}$, the network has symmetric connections. This paper compares EI systems with such connections and the corresponding S systems.
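Equation (1) can be explored numerically. The following is a minimal Euler-integration sketch of our own (the function name, step size, and default parameters are our choices, not from the paper), using the threshold-linear $g$ and the linear $h$ defined above:

```python
import numpy as np

def simulate_ei(J, W, I, T=1.0, Ty=1.0, tau_y=1.0, dt=0.01, steps=4000):
    """Euler integration of the EI system of equation (1):
    x' = -x + J g(x) - h(y) + I,   tau_y y' = -y + W g(x),
    with g(x) = [x - T]^+ and the linear choice h(y) = y - Ty."""
    n = len(I)
    x, y = np.zeros(n), np.zeros(n)
    g = lambda v: np.maximum(v - T, 0.0)   # threshold-linear excitatory output
    traj = np.empty((steps, n))
    for t in range(steps):
        gx = g(x)
        x = x + dt * (-x + J @ gx - (y - Ty) + I)
        y = y + (dt / tau_y) * (-y + W @ gx)
        traj[t] = x
    return traj

# decoupled sanity check (J = W = 0): x simply relaxes to Ty + I
traj = simulate_ei(np.zeros((1, 1)), np.zeros((1, 1)), np.array([2.0]))
```

With the two-pair weights quoted later in the paper ($j_0 = 2.1$, $j = 0.4$, $w_0 = 1.11$, $w = 0.9$), the same routine should reproduce the limit-cycle regime discussed in Section 3.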
Since there are many ways of setting $J_{ij}$ and $W_{ij}$ in the EI system whilst keeping constant $J_{ij} - W_{ij}$, which is the effective weight in the S system, one may intuitively expect the EI system to have a broader computational range. The response of either system to given inputs is governed by the location and linear stability of their fixed points. The S network is so defined as to have fixed points $\bar{x}$ (where $\dot{x} = 0$ in equation 3) that are the same as those $(\bar{x}, \bar{y})$ of the EI network. In particular, $\bar{x}$ depends on inputs $I$ (the input-output sensitivity) via

$$d\bar{x} = (\mathbf{1} - J D_g + W D_g)^{-1}\, dI,$$

where $\mathbf{1}$ is the identity matrix, $J$ and $W$ are the connection matrices, and $D_g$ is a diagonal matrix with elements $[D_g]_{ii} = g'(\bar{x}_i)$. However, although the locations of the fixed points are the same for the EI and S systems, the dynamical behavior of the systems about those fixed points is quite different, and this is what leads to their differing computational power. To analyse the stability of the fixed points, consider, for simplicity, the case that $\tau_y = 1$ in the EI system, and that the matrices $J D_g$ and $W D_g$ commute, with eigenvalues $\lambda^J_k$ and $\lambda^W_k$ respectively for $k = 1, \ldots, N$, where $N$ is the dimension of $x$. The local deviations near the fixed points along each of the $N$ modes will grow in time if the real parts of the following values are positive:

$$\gamma^{EI}_k = -1 + \lambda^J_k/2 \pm \big((\lambda^J_k)^2/4 - \lambda^W_k\big)^{1/2} \quad \text{for the EI system}$$
$$\gamma^{S}_k = -1 - \lambda^W_k + \lambda^J_k \quad \text{for the S system}$$

In the case that $\lambda^J$ and $\lambda^W$ are real, then if the S system is unstable, the EI system is also unstable. For if $-1 + \lambda^J - \lambda^W > 0$ then $(\lambda^J)^2 - 4\lambda^W > (\lambda^J - 2)^2$, and so $2\gamma^{EI} = -2 + \lambda^J + \big((\lambda^J)^2 - 4\lambda^W\big)^{1/2} > 0$. However, if the EI system is oscillatory, $4\lambda^W > (\lambda^J)^2$, then the S system is stable, since $-1 + \lambda^J - \lambda^W < -1 + \lambda^J - (\lambda^J)^2/4 = -(1 - \lambda^J/2)^2 \le 0$. Hence the EI system can be unstable and oscillatory while the S system is stable. We are interested in the capacity of both systems to be selective amplifiers.
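The regime in which the EI system is unstable and oscillatory while its S counterpart is stable is easy to exhibit numerically. The sketch below is our own illustration (the eigenvalue pair is a hypothetical choice satisfying $4\lambda^W > (\lambda^J)^2$); it compares the two growth rates for a single mode:

```python
import cmath

def gamma_ei(lam_j, lam_w):
    """Root of gamma^EI = -1 + lam_j/2 +- sqrt(lam_j^2/4 - lam_w)
    with the larger real part."""
    root = cmath.sqrt(lam_j ** 2 / 4 - lam_w)
    return max(-1 + lam_j / 2 + root,
               -1 + lam_j / 2 - root, key=lambda z: z.real)

def gamma_s(lam_j, lam_w):
    """gamma^S = -1 - lam_w + lam_j."""
    return -1 - lam_w + lam_j

lam_j, lam_w = 2.4, 1.6          # hypothetical commuting-mode eigenvalues
assert 4 * lam_w > lam_j ** 2    # the EI mode is oscillatory
print(gamma_s(lam_j, lam_w))     # negative: S system stable
print(gamma_ei(lam_j, lam_w))    # positive real part: EI unstable, oscillatory
```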
This means that there is a class of inputs $I$ that should be comparatively boosted by the system, whereas others should be comparatively suppressed. For instance, if the cells represent the orientation of a bar at a point, then the mode containing a unimodal, well-tuned 'bump' in orientation space should be enhanced compared with poorly tuned inputs.2,1,18 However, if the cells represent oriented small bars at multiple points in visual space, then isolated smooth and straight contours should be enhanced compared with extended homogeneous textures.14,15 The quality of the systems will be judged according to how much selective amplification they can stably deliver. The critical trade-off is that the more the selected mode is amplified, the more likely it is that, when the input is non-specific, the system will be unstable to fluctuations in the direction of the selected mode, and therefore will hallucinate spurious answers.

3 The Two Point System

A particularly simple case to consider has just two neurons (for the S system; two pairs of neurons for the EI system) and weights

$$J = \begin{pmatrix} j_0 & j \\ j & j_0 \end{pmatrix} \qquad W = \begin{pmatrix} w_0 & w \\ w & w_0 \end{pmatrix}$$

The idea is that each node coarsely models a group of neurons, and the interactions between neurons within a group ($j_0$ and $w_0$) are qualitatively different from interactions between neurons in different groups ($j$ and $w$). The form of selective amplification here is that symmetric or ambiguous inputs $I^a = I(1,1)$ should be suppressed compared with asymmetric inputs $I^b = I(1,0)$ (and, equivalently, $I(0,1)$). In particular, given $I^a$, the system should not spontaneously generate a response with $x_1$ significantly different from $x_2$. Define the fixed points to be $\bar{x}^a_1 = \bar{x}^a_2 > T$ under $I^a$ and $\bar{x}^b_1 > T > \bar{x}^b_2$ under $I^b$, where $T$ is the threshold of the excitatory neurons. These relationships will be true across a wide range of input levels $I$.
The ratio

$$R = \frac{d\bar{x}^b_1/dI}{d\bar{x}^a_1/dI} = \frac{1 + (w_0 + w) - (j_0 + j)}{1 + (w_0 - j_0)} = 1 + \frac{w - j}{1 + (w_0 - j_0)} \quad (4)$$

of the average relative responses as the input level $I$ changes is a measure of how the system selectively amplifies the preferred or consistent inputs against ambiguous ones. This measure is appropriate only when the fluctuations of the system from the fixed points $\bar{x}^a$ and $\bar{x}^b$ are well behaved. We will show that this requirement permits larger values of $R$ in the EI system than the S system, suggesting that the EI system can be a more powerful selective amplifier.

Figure 1: Phase portraits for the S system in the 2 point case. A;B) Evolution in response to $I^a \propto (1,1)$ and $I^b \propto (1,0)$ for parameters for which the response to $I^a$ is stably symmetric. C;D) Evolution in response to $I^a$ and $I^b$ for parameters for which the symmetric response to $I^a$ is unstable, inducing two extra equilibrium points. The dotted lines show the thresholds $T$ for $g(x)$.

In the S system, the stabilities are governed by $\gamma^S = -(1 + w_0 - j_0)$ for the single mode of deviation $x_1 - \bar{x}^b_1$ around fixed point $b$, and $\gamma^S_\pm = -(1 + (w_0 \pm w) - (j_0 \pm j))$ for the two modes of deviation $X_\pm \equiv (x_1 - \bar{x}^a_1) \pm (x_2 - \bar{x}^a_2)$ around fixed point $a$. Since we only consider cases when the input-output relationship $d\bar{x}/dI$ of the fixed points is well defined, this means $\gamma^S < 0$ and $\gamma^S_+ < 0$. However, for some interaction parameters, there are two extra (uneven) fixed points $\bar{x}^a_1 \neq \bar{x}^a_2$ for (the even) input $I^a$. Dynamic systems theory dictates that these two uneven fixed points will be stable, and that they will appear when the '$-$' mode of the perturbation around the even fixed point $\bar{x}^a_1 = \bar{x}^a_2$ is unstable.
The system breaks symmetry in inputs, i.e. the motion trajectory diverges from the (unstable) even fixed point to one of the (stable) uneven ones. To avoid such cases, it is necessary that $\gamma^S_- < 0$. Combining this condition with equation 4 and $\gamma^S < 0$ leads to an upper bound on the amplification ratio: $R^S < 2$. Figure 1 shows phase portraits and the equilibrium points of the S system under input $I^a$ and $I^b$ for the two different system parameter regions.

As we have described, the EI system has exactly the same fixed points as the S system, but they are more unstable. The stability around the symmetric fixed point under $I^a$ is governed by $\gamma^{EI}_\pm = -1 + (j_0 \pm j)/2 \pm \sqrt{(j_0 \pm j)^2/4 - (w_0 \pm w)}$, while that of the asymmetric fixed point under $I^b$ or $I^a$ by $\gamma^{EI} = -1 + j_0/2 \pm \sqrt{j_0^2/4 - w_0}$. Consequently, when there are three fixed points under $I^a$, all of them can be unstable in the EI system, and the motion trajectory cannot converge to any of them. In this case, when both the '$+$' and '$-$' modes around the symmetric fixed point $\bar{x}^a_1 = \bar{x}^a_2$ are unstable, the global dynamics constrains the motion trajectory to a limit cycle around the fixed points. If $x_1 \approx x_2$ on this limit cycle, then the EI system will not break symmetry, even though the selective amplification ratio $R > 2$. Figure 2 demonstrates the performance of the EI system in this regime. Figures 2A;B show various aspects of the response to input $I^a$, which should be comparatively suppressed. The system oscillates in such a way that $x_1$ and $x_2$ tend to be extremely similar (including being synchronised). Figures 2C;D show the same aspects of the response to $I^b$, which should be amplified. Again the network oscillates, and, although $g(x_2)$ is not driven completely to 0 (it peaks at 15), it is very strongly dominated by $g(x_1)$, and further, the overall response is much stronger than in figures 2A;B.
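The $R^S < 2$ bound can be checked by brute force. This sketch is our own (the parameter grid is arbitrary); it verifies that whenever all three S-system growth rates are negative, the ratio of equation (4) stays below 2:

```python
import itertools
import numpy as np

def amp_ratio(j0, j, w0, w):
    """Selective amplification ratio R of equation (4)."""
    return 1 + (w - j) / (1 + (w0 - j0))

def s_stable(j0, j, w0, w):
    """All three S-system growth rates negative: stability of both fixed
    points plus no symmetry breaking under the even input."""
    g_b = -(1 + w0 - j0)
    g_plus = -(1 + (w0 + w) - (j0 + j))
    g_minus = -(1 + (w0 - w) - (j0 - j))
    return max(g_b, g_plus, g_minus) < 0

grid = np.linspace(0.0, 3.0, 13)
for j0, j, w0, w in itertools.product(grid, repeat=4):
    if s_stable(j0, j, w0, w):
        assert amp_ratio(j0, j, w0, w) < 2   # the R^S < 2 bound holds
```

Note that the EI parameters used for figure 2 fail the S-stability test while giving a linearized ratio far above 2, consistent with the text.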
Figure 2: Projections of the response of the EI system. A;B) Evolution of the response to $I^a$: A) $x_1$ vs $y_1$, and B) $g(x_1) - g(x_2)$ (solid) and $g(x_1) + g(x_2)$ (dotted) across time, showing that the $x_1 = x_2$ mode dominates and the growth of $x_1 - x_2$ is strongly suppressed. C;D) Evolution of the response to $I^b$. Here, the response of $x_1$ always dominates that of $x_2$ over oscillations. The difference between $g(x_1) + g(x_2)$ and $g(x_1) - g(x_2)$ is too small to be evident on the figure. Note the difference in scales between A;B and C;D. Here $j_0 = 2.1$; $j = 0.4$; $w_0 = 1.11$; $w = 0.9$.

The pertinent difference between the EI and S systems is that while the S system (when $h(y)$ is linear) can only roll down the energy landscape to a stable fixed point and break the input symmetry, the EI system can resort to global limit cycles $x_1(t) \approx x_2(t)$ between unstable fixed points and maintain input symmetry. This is often (robustly over a large range of parameters) the case even when the '$-$' mode is locally more unstable (at the symmetric fixed point) than the '$+$' mode, because the '$-$' mode is much more strongly suppressed when the motion trajectory enters the subthreshold region $x_1 < T$ and $x_2 < T$. As we can see in figure 2A;B, this acts to suppress any overall growth in the '$-$' mode. Since the asymmetric fixed point under $I^b$ is just as unstable as that under $I^a$, the EI system responds to asymmetric input $I^b$ also by a stable limit cycle, around the asymmetric fixed point. Since the response of the system to either pattern is oscillatory, there are various reasonable ways of evaluating the relative response ratio. Using the mean responses of the system during a cycle to define $\bar{x}$, the selective amplification ratio in figure 2 is $R^{EI} = 97$, which is significantly higher than the $R^S = 2$ available from the S system.
This is a simple existence proof of the superiority of the EI system for amplification, albeit at the expense of oscillations. In fact, in this two point case, it can be shown that any meaningful behavior of the S system (including symmetry breaking) can be qualitatively replicated in the EI system, but not vice-versa.

4 The Orientation System

Symmetric recurrent networks have recently been investigated in great depth for representing and calculating a wide variety of quantities, including orientation tuning. The idea behind the recurrent networks is that they should take noisy (and perhaps weakly tuned) input and selectively amplify the component that represents an orientation $\theta$ in the input, leaving a tuned pattern of excitation across the population that faithfully represents the underlying input. Based on the analysis above, we can expect that if an S network amplifies a tuned input enough, then it will break input symmetry given an untuned input and thus hallucinate a tuned response. However, an EI system, in the same oscillatory regime as for the two point system, can maintain an untuned and suppressed response to untuned inputs. We designed a particular EI system with a high selective amplification factor for tuned inputs $I(\theta)$. In this case, units $x_i, y_i$ have preferred orientations $\theta_i = (i - N/2)\pi/N$ for $i = 1 \ldots N$, the connection matrix $J$ is Toeplitz with Gaussian tuning, and, for simplicity, $[W]_{ij}$ does not depend on $i,j$. Figure 3B (and inset) shows the output of two units in the network in response to a tuned input, showing the nature of the oscillations and the way that selectivity builds up over the course of each period. Figure 3C shows the activities of all the units at three particular phases of the oscillation. Figure 3A shows how the mean activity of the most
activated unit scales with the levels of tuned and untuned input. The network amplifies the tuned inputs dramatically more (note the logarithmic scale in figure 3A).

Figure 3: The Gaussian orientation network. A) Mean response of the $\theta_i = 0°$ unit in the network as a function of $a$ (untuned) or $b$ (tuned), on a log scale. B) Activity of the $\theta_i = 0°$ (solid) and $\theta_i = 30°$ (dashed) units in the network over the course of the positive part of an oscillation. Inset: activity of these units over all time. C) Activity of all the units at the three times shown as (i), (ii) and (iii) in (B): (i) (dashed) is in the rising phase of the oscillation; (ii) (solid) is at the peak; and (iii) (dotted) is during the falling phase. Here, the input is $I_i = a + b e^{-\theta_i^2/2\sigma^2}$, with $\sigma = 13°$, and the Toeplitz weights are $J_{ij} = (3 + 21 e^{-(\theta_i - \theta_j)^2/2\sigma'^2})/N$, with $\sigma' = 20°$ and $W_{ij} = 23.8/N$.

The S system breaks symmetry to the untuned input ($b = 0$) for these weights. If the weights are scaled uniformly by a factor of 0.22, then the S system is appropriately stable. However, the magnification ratio is then 4.2, rather than something greater than 1000 in the EI system. The orientation system can be understood to a large qualitative degree by looking at its two-point cousins. Many of the essential constraints on the system are determined by the behavior of the system when the mode with $x_i = x_j$ dominates, in which case the complex non-linearities induced by orientation tuning cut-off and its equivalents are irrelevant. Let $J(f)$ and $W(f)$ for (angular) frequency $f$ be the Fourier transforms of $J(i-j) \equiv [J]_{ij}$ and $W(i-j) \equiv [W]_{ij}$, and define $\lambda(f) = \mathrm{Re}\{-1 + J(f)/2 + i\sqrt{W(f) - J^2(f)/4}\}$. Then, let $f^* > 0$ be the frequency such that $\lambda(f^*) \ge \lambda(f)$ for all $f > 0$. This is the non-translation-invariant mode that is most likely to cause instabilities for translation invariant behavior.
A two point system that closely corresponds to the full system can be found by solving the simultaneous equations:

$$j_0 + j = J(0) \qquad w_0 + w = W(0)$$
$$j_0 - j = J(f^*) \qquad w_0 - w = W(f^*)$$

This design equates the $x_1 = x_2$ mode in the two point system with the $f = 0$ mode in the orientation system, and the $x_1 = -x_2$ mode with the $f = f^*$ mode. For smooth $J$ and $W$, $f^*$ is often the smallest, or one of the smallest, non-zero spatial frequencies. It is easy to see that the two systems are exactly equivalent in the translation invariant mode $x_i = x_j$ under translation invariant input $I_i = I_j$, in both the linear and nonlinear regimes. The close correspondence between the two systems in other dynamic regimes is supported by simulation results.16 Quantitatively, however, the amplification ratio differs between the two systems.

5 Conclusions

We have studied the dynamical behavior of networks with symmetrical and asymmetrical connections and have shown that the extra degrees of dynamical freedom of the latter can be put to good computational use, e.g. global dynamic stability via local instability. Many applications of recurrent networks involve selective amplification, and the selective amplification factors for asymmetrical networks can greatly exceed those of symmetrical networks. We showed this in the case of orientation selectivity. However, it was originally inspired by a similar result in contour enhancement and texture segregation, for which the activity of isolated oriented line elements should be enhanced if they form part of a smooth contour in the input and suppressed if they form part of an extended homogeneous texture. Further, the output should be homogeneous if the input is homogeneous (in the same way that the orientation network should not hallucinate orientations from untuned input).
In this case, similar analysis16 shows that stable contour enhancement is limited to just a factor of 3.0 for the S system (but not for the EI system), suggesting an explanation for the poor performance of a slew of S systems in the literature designed for this purpose. We used a very simple system with just two pairs of neurons to develop analytical intuitions which are powerful enough to guide our design of the more complex systems. We expect that the details of our model, with the exact pairing of excitatory and inhibitory cells and the threshold non-linearity, are not crucial for the results. Inhibition in the cortex is, of course, substantially more complicated than we have suggested. In particular, inhibitory cells do have somewhat faster (though finite) time constants than excitatory cells, and are also not so subject to short term plasticity effects such as spike rate adaptation. Nevertheless, oscillations of various sorts can certainly occur, suggesting the relevance of the computational regime that we have studied.

References
[1] Ben-Yishai, R, Bar-Or, RL & Sompolinsky, H (1995) PNAS 92:3844-3848.
[2] Braun, J, Neibur, E, Schuster, HG & Koch, C (1994) Society for Neuroscience Abstracts 20:1665.
[3] Carandini, M & Ringach, DL (1997) Vision Research 37:3061-3071.
[4] Cohen, MA & Grossberg, S (1983) IEEE Transactions on Systems, Man and Cybernetics 13:815-826.
[5] Eckhorn, R, et al (1988) Biological Cybernetics 60:121-130.
[6] Ermentrout, GB & Cowan, JD (1979) Journal of Mathematical Biology 7:265-280.
[7] Golomb, D, Wang, XJ & Rinzel, J (1996) Journal of Neurophysiology 75:750-769.
[8] Gray, CM, Konig, P, Engel, AK & Singer, W (1989) Nature 338:334-337.
[9] Grossberg, S (1988) Neural Networks 1:17-61.
[10] Hopfield, JJ (1982) PNAS 79:2554-2558.
[11] Hopfield, JJ (1984) PNAS 81:3088-3092.
[12] Konig, P, Janosch, B & Schillen, TB (1992) Neural Computation 4:666-681.
[13] Li, Z (1995) In JL van Hemmen et al, eds, Models of Neural Networks, Vol. 2.
NY: Springer.
[14] Li, Z (1997) In KYM Wong, I King & DY Yeung, eds, Theoretical Aspects of Neural Computation. Hong Kong: Springer-Verlag.
[15] Li, Z (1998) Neural Computation 10:903-940.
[16] Li, Z & Dayan, P (1999) To be published in Network: Computation in Neural Systems.
[17] Li, Z & Hopfield, JJ (1989) Biological Cybernetics 61:379-392.
[18] Pouget, A, Zhang, KC, Deneve, S & Latham, PE (1998) Neural Computation 10:373-401.
[19] Samsonovich, A & McNaughton, BL (1997) Journal of Neuroscience 17:5900-5920.
[20] Seung, HS (1996) PNAS 93:13339-13344.
[21] Seung, HS, et al (1998) NIPS 10.
[22] Sompolinsky, H, Golomb, D & Kleinfeld, D (1990) PNAS 87:7200-7204.
[23] Stein, PSG, et al (1997) Neurons, Networks, and Motor Behavior. Cambridge, MA: MIT Press.
[24] Steriade, M, McCormick, DA & Sejnowski, TJ (1993) Science 262:679-685.
[25] Suarez, H, Koch, C & Douglas, R (1995) Journal of Neuroscience 15:6700-6719.
[26] von der Malsburg, C (1988) Neural Networks 1:141-148.
[27] Zhang, K (1996) Journal of Neuroscience 16:2112-2126.
1998
1
1,452
Replicator Equations, Maximal Cliques, and Graph Isomorphism Marcello Pelillo Dipartimento di Informatica Universita Ca' Foscari di Venezia Via Torino 155, 30172 Venezia Mestre, Italy E-mail: pelillo@dsi.unive.it

Abstract

We present a new energy-minimization framework for the graph isomorphism problem which is based on an equivalent maximum clique formulation. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid-1960s, and recently expanded in various ways, which allows us to formulate the maximum clique problem in terms of a standard quadratic program. To solve the program we use "replicator" equations, a class of simple continuous- and discrete-time dynamical systems developed in various branches of theoretical biology. We show how, despite their inability to escape from local solutions, they nevertheless provide experimental results which are competitive with those obtained using more elaborate mean-field annealing heuristics.

1 INTRODUCTION

The graph isomorphism problem is one of those few combinatorial optimization problems which still resist any computational complexity characterization [6]. Despite decades of active research, no polynomial-time algorithm for it has yet been found. At the same time, while clearly belonging to NP, no proof has been provided that it is NP-complete. Indeed, there is strong evidence that this cannot be the case for, otherwise, the polynomial hierarchy would collapse [5]. The current belief is that the problem lies strictly between the P and NP-complete classes. Because of its theoretical as well as practical importance, the problem has attracted much attention in the neural network community, and various powerful heuristics have been developed [11, 18, 19, 20].
Following Hopfield and Tank's seminal work [10], the typical approach has been to write down a (continuous) energy function whose minimizers correspond to the (discrete) solutions being sought, and then construct a dynamical system which converges toward them. Almost invariably, all the algorithms developed so far are based on techniques borrowed from statistical mechanics, in particular mean field theory, which allow one to escape from poor local solutions. In this paper, we develop a new energy-minimization framework for the graph isomorphism problem which is based on the idea of reducing it to the maximum clique problem, another well-known combinatorial optimization problem. Central to our approach is a powerful result originally proved by Motzkin and Straus [13], and recently extended in various ways [3, 7, 16], which allows us to formulate the maximum clique problem in terms of an indefinite quadratic program. We then present a class of straightforward continuous- and discrete-time dynamical systems known in mathematical biology as replicator equations, and show how, thanks to their dynamical properties, they naturally suggest themselves as a useful heuristic for solving the proposed graph isomorphism program. The extensive experimental results presented show that, despite their simplicity and their inherent inability to escape from local optima, replicator dynamics are nevertheless competitive with more sophisticated deterministic annealing algorithms. The proposed formulation seems therefore a promising framework within which powerful continuous-based graph matching heuristics can be developed, and is in fact being employed for solving practical computer vision problems [17]. More details on the work presented here can be found in [15].
2 A QUADRATIC PROGRAM FOR GRAPH ISOMORPHISM

2.1 GRAPH ISOMORPHISM AS CLIQUE SEARCH

Let $G = (V, E)$ be an undirected graph, where $V$ is the set of vertices and $E \subseteq V \times V$ is the set of edges. The order of $G$ is the number of its vertices, and its size is the number of edges. Two vertices $i, j \in V$ are said to be adjacent if $(i,j) \in E$. The adjacency matrix of $G$ is the $n \times n$ symmetric matrix $A = (a_{ij})$ defined as follows: $a_{ij} = 1$ if $(i,j) \in E$, $a_{ij} = 0$ otherwise. Given two graphs $G' = (V', E')$ and $G'' = (V'', E'')$ having the same order and size, an isomorphism between them is any bijection $\phi : V' \to V''$ such that $(i,j) \in E' \Leftrightarrow (\phi(i), \phi(j)) \in E''$, for all $i,j \in V'$. Two graphs are said to be isomorphic if there exists an isomorphism between them. The graph isomorphism problem is therefore to decide whether two graphs are isomorphic and, in the affirmative, to find an isomorphism.

Barrow and Burstall [1] introduced the notion of an association graph as a useful auxiliary graph structure for solving general graph/subgraph isomorphism problems. The association graph derived from $G'$ and $G''$ is the undirected graph $G = (V, E)$, where $V = V' \times V''$ and

$$E = \big\{ ((i,h),(j,k)) \in V \times V : i \neq j,\ h \neq k,\ \text{and } (i,j) \in E' \Leftrightarrow (h,k) \in E'' \big\}.$$

Given an arbitrary undirected graph $G = (V, E)$, a subset of vertices $C$ is called a clique if all its vertices are mutually adjacent, i.e., for all $i,j \in C$ we have $(i,j) \in E$. A clique is said to be maximal if it is not contained in any larger clique, and maximum if it is the largest clique in the graph. The clique number, denoted by $\omega(G)$, is defined as the cardinality of the maximum clique. The following result establishes an equivalence between the graph isomorphism problem and the maximum clique problem (see [15] for proof).

Theorem 2.1 Let $G'$ and $G''$ be two graphs of order $n$, and let $G$ be the corresponding association graph. Then, $G'$ and $G''$ are isomorphic if and only if $\omega(G) = n$.
In this case, any maximum clique of $G$ induces an isomorphism between $G'$ and $G''$, and vice versa.

2.2 CONTINUOUS FORMULATION OF MAX-CLIQUE

Let $G = (V, E)$ be an arbitrary undirected graph of order $n$, and let $S_n$ denote the standard simplex of $\mathbb{R}^n$:

$$S_n = \Big\{ x \in \mathbb{R}^n : x_i \ge 0 \text{ for all } i = 1 \ldots n, \text{ and } \sum_{i=1}^n x_i = 1 \Big\}.$$

Given a subset of vertices $C$ of $G$, we will denote by $x^C$ its characteristic vector, which is the point in $S_n$ defined as $x^C_i = 1/|C|$ if $i \in C$, $x^C_i = 0$ otherwise, where $|C|$ denotes the cardinality of $C$. Now, consider the following quadratic function:

$$f(x) = x^T A x \quad (1)$$

where "$T$" denotes transposition. The Motzkin-Straus theorem [13] establishes a remarkable connection between global (local) maximizers of $f$ in $S_n$ and maximum (maximal) cliques of $G$. Specifically, it states that a subset of vertices $C$ of a graph $G$ is a maximum clique if and only if its characteristic vector $x^C$ is a global maximizer of the function $f$ in $S_n$. A similar relationship holds between (strict) local maximizers and maximal cliques [7, 16]. One drawback associated with the original Motzkin-Straus formulation relates to the existence of spurious solutions, i.e., maximizers of $f$ which are not in the form of characteristic vectors [16]. In principle, spurious solutions represent a problem since, while providing information about the order of the maximum clique, they do not allow us to extract the vertices comprising the clique. Fortunately, there is a straightforward solution to this problem, which has recently been introduced and studied by Bomze [3]. Consider the following regularized version of function $f$:

$$\hat{f}(x) = x^T A x + \tfrac{1}{2} x^T x. \quad (2)$$

The following is the spurious-free counterpart of the original Motzkin-Straus theorem (see [3] for proof).

Theorem 2.2 Let $C$ be a subset of vertices of a graph $G$, and let $x^C$ be its characteristic vector. Then the following statements hold:
(a) $C$ is a maximum clique of $G$ if and only if $x^C$ is a global maximizer of $\hat{f}$ over the simplex $S_n$.
Its order is then given by $|C| = \frac{1}{2(1 - \hat{f}(x^C))}$.
(b) $C$ is a maximal clique of $G$ if and only if $x^C$ is a local maximizer of $\hat{f}$ in $S_n$.
(c) All local (and hence global) maximizers of $\hat{f}$ over $S_n$ are strict.

Unlike the Motzkin-Straus formulation, the previous result guarantees that all maximizers of $\hat{f}$ on $S_n$ are strict, and are characteristic vectors of maximal/maximum cliques in the graph. In an exact sense, therefore, a one-to-one correspondence exists between maximal cliques and local maximizers of $\hat{f}$ in $S_n$ on the one hand, and maximum cliques and global maximizers on the other hand.

2.3 A QUADRATIC PROGRAM FOR GRAPH ISOMORPHISM

Let $G'$ and $G''$ be two arbitrary graphs of order $n$, and let $A$ denote the adjacency matrix of the corresponding association graph, whose order is assumed to be $N$. The graph isomorphism problem is equivalent to the following program:

$$\text{maximize } \hat{f}(x) = x^T \big(A + \tfrac{1}{2} I_N\big) x \quad \text{subject to } x \in S_N \quad (3)$$

More precisely, the following result holds, which is a straightforward consequence of Theorems 2.1 and 2.2.

Theorem 2.3 Let $G'$ and $G''$ be two graphs of order $n$, and let $x^*$ be a global solution of program (3), where $A$ is the adjacency matrix of the association graph of $G'$ and $G''$. Then, $G'$ and $G''$ are isomorphic if and only if $\hat{f}(x^*) = 1 - \frac{1}{2n}$. In this case, any global solution to (3) induces an isomorphism between $G'$ and $G''$, and vice versa.

In [15] we discuss the analogies between our objective function and those proposed in the literature (e.g., [18, 19]).

3 REPLICATOR EQUATIONS AND GRAPH ISOMORPHISM

Let $W$ be a non-negative $n \times n$ matrix, and consider the following dynamical system:

$$\frac{d}{dt} x_i(t) = x_i(t) \Big( \pi_i(t) - \sum_{j=1}^n x_j(t) \pi_j(t) \Big), \quad i = 1 \ldots n \quad (4)$$

where $\pi_i(t) = \sum_{j=1}^n W_{ij} x_j(t)$, $i = 1 \ldots n$, and its discrete-time counterpart:

$$x_i(t+1) = \frac{x_i(t) \pi_i(t)}{\sum_{j=1}^n x_j(t) \pi_j(t)}, \quad i = 1 \ldots n. \quad (5)$$
It is readily seen that the simplex $S_n$ is invariant under these dynamics, which means that every trajectory starting in $S_n$ will remain in $S_n$ for all future times. Both (4) and (5) are called replicator equations in theoretical biology, since they are used to model the evolution over time of relative frequencies of interacting, self-replicating entities [9]. The discrete-time dynamical equations also turn out to be a special case of a general class of dynamical systems introduced by Baum and Eagon [2] in the context of Markov chain theory.

Theorem 3.1 If $W$ is symmetric, then the quadratic polynomial $F(x) = x^T W x$ is strictly increasing along any non-constant trajectory of both the continuous-time (4) and discrete-time (5) replicator equations. Furthermore, any such trajectory converges to a (unique) stationary point. Finally, a vector $x \in S_n$ is asymptotically stable under (4) and (5) if and only if $x$ is a strict local maximizer of $F$ on $S_n$.

The previous result is known in mathematical biology as the Fundamental Theorem of Natural Selection [9, 21]. As far as the discrete-time model is concerned, it can be regarded as a straightforward implication of the more general Baum-Eagon theorem [2]. The fact that all trajectories of the replicator dynamics converge to a stationary point is proven in [12]. Recently, there has been much interest in evolutionary game theory around the following exponential version of the replicator equations, which arises as a model of evolution guided by imitation [8, 21]:

$$\frac{d}{dt} x_i(t) = x_i(t) \Big( \frac{e^{\kappa \pi_i(t)}}{\sum_{j=1}^n x_j(t) e^{\kappa \pi_j(t)}} - 1 \Big), \quad i = 1 \ldots n \quad (6)$$

where $\kappa$ is a positive constant. As $\kappa$ tends to 0, the orbits of this dynamics approach those of the standard, first-order replicator model (4), slowed down by the factor $\kappa$. Hofbauer [8] has recently proven that when the matrix $W$ is symmetric, the quadratic polynomial $F$ defined in Theorem 3.1 is also strictly increasing, as in the first-order case.
After discussing various properties of this and more general dynamics, he concluded that the model behaves essentially in the same way as the standard replicator equations, the only difference being the size of the basins of attraction around stable equilibria. A customary way of discretizing equation (6) is given by the following difference equations:

x_i(t+1) = x_i(t) e^{κ π_i(t)} / Σ_{j=1}^n x_j(t) e^{κ π_j(t)},  i = 1, ..., n    (7)

which enjoys many of the properties of the first-order system (5); e.g., they have the same set of equilibria.

The properties discussed above naturally suggest using replicator equations as a useful heuristic for the graph isomorphism problem. Let G' and G'' be two graphs of order n, and let A denote the adjacency matrix of the corresponding N-vertex association graph G. By letting

W = A + (1/2) I_N

we know that the replicator dynamical systems, starting from an arbitrary initial state, will iteratively maximize the function f(x) = x^T (A + (1/2) I_N) x in S_N, and will eventually converge to a strict local maximizer which, by virtue of Theorem 2.2, will then correspond to the characteristic vector of a maximal clique in the association graph. This will in turn induce an isomorphism between two subgraphs of G' and G'' which is "maximal," in the sense that there is no other isomorphism between subgraphs of G' and G'' which includes the one found. Clearly, in theory there is no guarantee that the converged solution will be a global maximizer of f, and therefore that it will induce an isomorphism between the two original graphs. Previous work on the maximum clique problem [4, 14], and also the results presented in this paper, however, suggest that the basins of attraction of global maximizers are quite large, and very frequently the algorithm converges to one of them.

4 EXPERIMENTAL RESULTS

In the experiments reported here, the discrete-time replicator equation (5) and its exponential counterpart (7) with κ = 10 were used.
The algorithms were started from the barycenter of the simplex and they were stopped when either a maximal clique was found or the distance between two successive points was smaller than a fixed threshold, which was set to 10^{-17}. In the latter case the converged vector was randomly perturbed, and the algorithm restarted from the perturbed point. Because of the one-to-one correspondence between local maximizers and maximal cliques, this situation corresponds to convergence to a saddle point. All the experiments were run on a Sparc20.

Undirected 100-vertex random graphs were generated with expected connectivities ranging from 1% to 99%. For each connectivity value, 100 graphs were produced and each of them had its vertices randomly permuted so as to obtain a pair of isomorphic graphs. Overall, therefore, 1500 pairs of isomorphic graphs were used. Each pair was given as input to the replicator models and, after convergence, a success was recorded when the cardinality of the returned clique was equal to the order of the graphs given as input (i.e., 100).¹ Because of the stopping criterion employed, this guarantees that a maximum clique, and therefore a correct isomorphism, was found. The proportion of successes as a function of the expected connectivity for both replicator models is plotted in Fig. 1, whereas Fig. 2 shows the average CPU time taken by the two algorithms to converge (in logarithmic scale). Notice how the exponential replicator system (7) is dramatically faster and also performs better than the first-order model (5).

¹ Due to the high computational time required, in the 1% and 99% cases the first-order replicator algorithm (5) was tested only on 10 pairs, instead of 100.

Figure 1: Percentage of correct isomorphisms obtained using the first-order (left) and the exponential (right) replicator equations, as a function of the expected connectivity.

Figure 2: Average computational time taken by the first-order (left) and the exponential (right) replicator equations, as a function of the expected connectivity. The vertical axes are in logarithmic scale, and the numbers in parentheses represent the standard deviations.

These results are significantly superior to those reported by Simic [20], who obtained poor results at connectivities less than 40% even on smaller graphs (i.e., up to 75 vertices). They also compare favorably with the results obtained more recently by Rangarajan et al. [18] on 100-vertex random graphs for connectivities up to 50%. Specifically, at 1% and 3% connectivities they report a percentage of correct isomorphisms of about 30% and 0%, respectively. Using our approach we obtained, on the same kind of graphs, a percentage of success of 80% and 11%, respectively. Rangarajan and Mjolsness [19] also ran experiments on 100-vertex random graphs with various connectivities, using a powerful Lagrangian relaxation network. Except for a few instances, they always obtained a correct solution. The computational time required by their model, however, turns out to largely exceed ours. As an example, the average time taken by their algorithm to match two 100-vertex 50%-connectivity graphs was about 30 minutes on an SGI workstation. As shown in Fig.
2, we obtained identical results in about 3 seconds. It should be emphasized that all the algorithms mentioned above incorporate sophisticated annealing mechanisms to escape from poor local minima. By contrast, in the presented work no attempt was made to prevent the algorithms from converging to such solutions.

Acknowledgments. This work was done while the author was visiting the Department of Computer Science at Yale University. Funding for this research has been provided by the Consiglio Nazionale delle Ricerche, Italy. The author would like to thank I. M. Bomze, A. Rangarajan, K. Siddiqi, and S. W. Zucker for many stimulating discussions.

References

[1] H. G. Barrow and R. M. Burstall, "Subgraph isomorphism, matching relational structures and maximal cliques," Inform. Process. Lett., vol. 4, no. 4, pp. 83-84, 1976.
[2] L. E. Baum and J. A. Eagon, "An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model for ecology," Bull. Amer. Math. Soc., vol. 73, pp. 360-363, 1967.
[3] I. M. Bomze, "Evolution towards the maximum clique," J. Global Optim., vol. 10, pp. 143-164, 1997.
[4] I. M. Bomze, M. Pelillo, and R. Giacomini, "Evolutionary approach to the maximum clique problem: Empirical evidence on a larger scale," in Developments in Global Optimization, I. M. Bomze et al., eds., Kluwer, The Netherlands, 1997, pp. 95-108.
[5] R. B. Boppana, J. Hastad, and S. Zachos, "Does co-NP have short interactive proofs?" Inform. Process. Lett., vol. 25, pp. 127-132, 1987.
[6] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, CA, 1979.
[7] L. E. Gibbons, D. W. Hearn, P. M. Pardalos, and M. V. Ramana, "Continuous characterizations of the maximum clique problem," Math. Oper. Res., vol. 22, no. 3, pp. 754-768, 1997.
[8] J. Hofbauer, "Imitation dynamics for games," Collegium Budapest, preprint, 1995.
[9] J.
Hofbauer and K. Sigmund, The Theory of Evolution and Dynamical Systems. Cambridge University Press, Cambridge, UK, 1988.
[10] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.
[11] R. Kree and A. Zippelius, "Recognition of topological features of graphs and images in neural networks," J. Phys. A: Math. Gen., vol. 21, pp. L813-L818, 1988.
[12] V. Losert and E. Akin, "Dynamics of games and genes: Discrete versus continuous time," J. Math. Biol., vol. 17, pp. 241-251, 1983.
[13] T. S. Motzkin and E. G. Straus, "Maxima for graphs and a new proof of a theorem of Turán," Canad. J. Math., vol. 17, pp. 533-540, 1965.
[14] M. Pelillo, "Relaxation labeling networks for the maximum clique problem," J. Artif. Neural Networks, vol. 2, no. 4, pp. 313-328, 1995.
[15] M. Pelillo, "Replicator equations, maximal cliques, and graph isomorphism," Neural Computation, to appear.
[16] M. Pelillo and A. Jagota, "Feasible and infeasible maxima in a quadratic program for maximum clique," J. Artif. Neural Networks, vol. 2, no. 4, pp. 411-420, 1995.
[17] M. Pelillo, K. Siddiqi, and S. W. Zucker, "Matching hierarchical structures using association graphs," in Computer Vision - ECCV'98, Vol. II, H. Burkhardt and B. Neumann, eds., Springer-Verlag, Berlin, 1998, pp. 3-16.
[18] A. Rangarajan, S. Gold, and E. Mjolsness, "A novel optimizing network architecture with applications," Neural Computation, vol. 8, pp. 1041-1060, 1996.
[19] A. Rangarajan and E. Mjolsness, "A Lagrangian relaxation network for graph matching," IEEE Trans. Neural Networks, vol. 7, no. 6, pp. 1365-1381, 1996.
[20] P. D. Simic, "Constrained nets for graph matching and other quadratic assignment problems," Neural Computation, vol. 3, pp. 268-281, 1991.
[21] J. W. Weibull, Evolutionary Game Theory. MIT Press, Cambridge, MA, 1995.
1998
10
1,453
Support Vector Machines Applied to Face Recognition

P. Jonathon Phillips
National Institute of Standards and Technology
Bldg 225/Rm A216, Gaithersburg, MD 20899
Tel 301.975.5348; Fax 301.975.5287
jonathon@nist.gov

Abstract

Face recognition is a K class problem, where K is the number of known individuals, and support vector machines (SVMs) are a binary classification method. By reformulating the face recognition problem and reinterpreting the output of the SVM classifier, we developed an SVM-based face recognition algorithm. The face recognition problem is formulated as a problem in difference space, which models dissimilarities between two facial images. In difference space we formulate face recognition as a two class problem. The classes are: dissimilarities between faces of the same person, and dissimilarities between faces of different people. By modifying the interpretation of the decision surface generated by SVM, we generated a similarity metric between faces that is learned from examples of differences between faces. The SVM-based algorithm is compared with a principal component analysis (PCA) based algorithm on a difficult set of images from the FERET database. Performance was measured for both verification and identification scenarios. The identification performance for SVM is 77-78% versus 54% for PCA. For verification, the equal error rate is 7% for SVM and 13% for PCA.

1 Introduction

Face recognition has developed into a major research area in pattern recognition and computer vision. Face recognition is different from classical pattern-recognition problems such as character recognition. In classical pattern recognition, there are relatively few classes, and many samples per class. With many samples per class, algorithms can classify samples not previously seen by interpolating among the training samples. On the other hand, in
face recognition, there are many individuals (classes), and only a few images (samples) per person, and algorithms must recognize faces by extrapolating from the training samples. In numerous applications there can be only one training sample (image) of each person.

Support vector machines (SVMs) are formulated to solve a classical two class pattern recognition problem. We adapt SVM to face recognition by modifying the interpretation of the output of a SVM classifier and devising a representation of facial images that is concordant with a two class problem. Traditional SVM returns a binary value, the class of the object. To train our SVM algorithm, we formulate the problem in a difference space, which explicitly captures the dissimilarities between two facial images. This is a departure from traditional face space or view-based approaches, which encode each facial image as a separate view of a face. In difference space, we are interested in the following two classes: the dissimilarities between images of the same individual, and dissimilarities between images of different people. These two classes are the input to a SVM algorithm. A SVM algorithm generates a decision surface separating the two classes. For face recognition, we re-interpret the decision surface to produce a similarity metric between two facial images. This allows us to construct face-recognition algorithms. The work of Moghaddam et al. [3] uses a Bayesian method in a difference space, but they do not derive a similarity distance from both positive and negative samples.

We demonstrate our SVM-based algorithm on both verification and identification applications. In identification, the algorithm is presented with an image of an unknown person. The algorithm reports its best estimate of the identity of the unknown person from a database of known individuals. In a more general response, the algorithm will report a list of the most similar individuals in the database.
In verification (also referred to as authentication), the algorithm is presented with an image and a claimed identity of the person. The algorithm either accepts or rejects the claim. Or, the algorithm can return a confidence measure of the validity of the claim.

To provide a benchmark for comparison, we compared our algorithm with a principal component analysis (PCA) based algorithm. We report results on images from the FERET database of images, which is the de facto standard in the face recognition community. From our experience with the FERET database, we selected harder sets of images on which to test the algorithms. Thus, we avoided saturating performance of either algorithm and provided a robust comparison between the algorithms. To test the ability of our algorithm to generalize to new faces, we trained and tested the algorithms on separate sets of faces.

2 Background

In this section we give a brief overview of SVM to present the notation used in this paper. For details of SVM see Vapnik [7], or for a tutorial see Burges [1]. SVM is a binary classification method that finds the optimal linear decision surface based on the concept of structural risk minimization. The decision surface is a weighted combination of elements of the training set. These elements are called support vectors and characterize the boundary between the two classes. The input to a SVM algorithm is a set {(x_i, y_i)} of labeled training data, where x_i is the data and y_i = -1 or 1 is the label. The output of a SVM algorithm is a set of N_s support vectors s_i, coefficient weights α_i, class labels y_i of the support vectors, and a constant term b. The linear decision surface is

w · z + b = 0,  where  w = Σ_{i=1}^{N_s} α_i y_i s_i.

SVM can be extended to nonlinear decision surfaces by using a kernel K(·,·) that satisfies Mercer's condition [1, 7]. The nonlinear decision surface is

Σ_{i=1}^{N_s} α_i y_i K(s_i, z) + b = 0.
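As an illustrative aside (not part of the paper), the notation above can be checked with scikit-learn's SVC as a stand-in SVM implementation: after fitting a linear-kernel SVM, `dual_coef_` holds the products α_i y_i and `support_vectors_` holds the s_i, so w can be reassembled explicitly:

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated Gaussian clusters labeled -1 and +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, size=(20, 2)),
               rng.normal(+2.0, 0.5, size=(20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
w = (clf.dual_coef_ @ clf.support_vectors_).ravel()   # sum_i alpha_i y_i s_i
b = clf.intercept_[0]

# The reassembled (w, b) reproduce the classifier's decision values w.z + b.
print(np.allclose(clf.decision_function(X), X @ w + b))   # True
```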
A facial image is represented as a vector p ∈ R^N, where R^N is referred to as face space. Face space can be the original pixel values vectorized or another feature space; for example, projecting the facial image onto the eigenvectors generated by performing PCA on a training set of faces [6] (also referred to as eigenfaces). We write p_1 ~ p_2 if p_1 and p_2 are images of the same face, and p_1 ≁ p_2 if they are images of different faces.

To avoid confusion we adopted the following terminology for identification and verification. The gallery is the set of images of known people, and a probe is an unknown face that is presented to the system. In identification, the face in a probe is identified. In verification, a probe is the facial image presented to the system whose identity is to be verified. The set of unknown faces is called the probe set.

3 Verification as a two class problem

Verification is fundamentally a two class problem. A verification algorithm is presented with an image p and a claimed identity. Either the algorithm accepts or rejects the claim. A straightforward method for constructing a classifier for person X is to feed a SVM algorithm a training set with one class consisting of facial images of person X and the other class consisting of facial images of other people. A SVM algorithm will generate a linear decision surface, and the identity of the face in image p is accepted if

w · p + b ≤ 0;

otherwise the claim is rejected.

This classifier is designed to minimize the structural risk. Structural risk is an overall measure of classifier performance. However, verification performance is usually measured by two statistics: the probability of correct verification, P_V, and the probability of false acceptance, P_F. There is a tradeoff between P_V and P_F: at one extreme all claims are rejected and P_V = P_F = 0, and at the other extreme all claims are accepted and P_V = P_F = 1. The operating values for P_V and P_F are dictated by the application.
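A minimal sketch of how such a (P_F, P_V) tradeoff curve, and the equal error rate used later in the experiments, can be traced by sweeping an acceptance threshold over similarity scores. The function and the sample scores are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical helper: sweep the acceptance threshold Delta over all observed
# similarity scores. A claim is accepted when its score delta <= Delta, so
# P_V is the fraction of genuine scores accepted and P_F the fraction of
# impostor scores accepted.
def roc_and_eer(genuine, impostor):
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    pv = np.array([(genuine <= d).mean() for d in thresholds])
    pf = np.array([(impostor <= d).mean() for d in thresholds])
    i = int(np.argmin(np.abs(pf - (1.0 - pv))))   # point where P_F ~ 1 - P_V
    return pv, pf, (pf[i] + (1.0 - pv[i])) / 2.0

# Perfectly separated toy scores give an equal error rate of zero.
pv, pf, eer = roc_and_eer(genuine=[-3.0, -2.5, -2.0], impostor=[1.0, 1.5, 2.0])
print(eer)   # 0.0
```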
Unfortunately, the decision surface generated by a SVM algorithm produces a single performance point for P_V and P_F. To allow for adjusting P_V and P_F, we parameterize the SVM decision surface by Δ. The parametrized decision surface is

w · z + b = Δ,

and the identity of the face in image p is accepted if

w · p + b ≤ Δ.

If Δ = -∞, then all claims are rejected and P_V = P_F = 0; if Δ = +∞, all claims are accepted and P_V = P_F = 1. By varying Δ between negative and positive infinity, all possible combinations of P_V and P_F are found. Nonlinear parametrized decision surfaces are described by

Σ_{i=1}^{N_s} α_i y_i K(s_i, z) + b = Δ.

4 Representation

In a canonical face recognition algorithm, each individual is a class and the distribution of each face is estimated or approximated. In this method, for a gallery of K individuals, the identification problem is a K class problem, and the verification problem is K instances of a two class problem. To reduce face recognition to a single instance of a two class problem, we introduce a new representation: we model the dissimilarities between faces. Let T = {t_1, ..., t_M} be a training set of faces of K individuals, with multiple images of each of the K individuals. From T, we generate two classes. The first is the within-class differences set, which are the dissimilarities in facial images of the same person. Formally, the within-class difference set is

C_1 = {t_i - t_j | t_i ~ t_j}.

The set C_1 contains within-class differences for all K individuals in T, not dissimilarities for one of the K individuals in the training set. The second is the between-class differences set, which are the dissimilarities among images of different individuals in the training set. Formally,

C_2 = {t_i - t_j | t_i ≁ t_j}.

Classes C_1 and C_2 are the inputs to our SVM algorithm, which generates a decision surface. In the pure SVM paradigm, given the difference between facial images p_1 and p_2, the classifier estimates if the faces in the two images are from the same person.
In the modification described in section 3, the classification returns a measure of similarity

δ = w · (p_1 - p_2) + b.

This similarity measure is the basis for the SVM-based verification and identification algorithms presented in this paper.

5 Verification

In verification, there is a gallery {g_j} of m known individuals. The algorithm is presented with a probe p and a claim to be person j in the gallery. The first step of the verification algorithm computes the similarity score

δ = Σ_{i=1}^{N_s} α_i y_i K(s_i, g_j - p) + b.

The second step accepts the claim if δ ≤ Δ; otherwise, the claim is rejected. The value of Δ is set to meet the desired tradeoff between P_V and P_F.

6 Identification

In identification, there is a gallery {g_j} of m known individuals. The algorithm is presented with a probe p to be identified. The first step of the identification algorithm computes a similarity score between the probe and each of the gallery images. The similarity score between p and g_j is

δ_j = Σ_{i=1}^{N_s} α_i y_i K(s_i, g_j - p) + b.

In the second step, the probe is identified as the person j that has the minimum similarity score δ_j. An alternative method of reporting identification results is to order the gallery by the similarity measure δ_j.

Figure 1: (a) Original image from the FERET database. (b) Image after preprocessing.

7 Experiments

We demonstrate our SVM-based verification and identification algorithms on 400 frontal images from the FERET database of facial images [5]. To provide a benchmark for algorithm performance, we provide performance for a PCA-based algorithm on the same set of images. The PCA algorithm identifies faces with an L2 nearest neighbor classifier. For the SVM-based algorithms, a radial basis kernel was used. The 400 images consisted of two images of 200 individuals, and were divided into disjoint training and testing sets. Each set consisted of two images of 100 people.
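The pipeline of sections 4-6 can be sketched end to end. This is a hedged illustration: synthetic feature vectors stand in for preprocessed face images, and scikit-learn's SVC (RBF kernel) stands in for the paper's SVM implementation; all names and parameter choices here are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic "people": each identity is a mean vector; an "image" is the
# mean plus noise. This replaces the real face images of the paper.
rng = np.random.default_rng(0)
n_train, n_gallery, dim = 10, 10, 16
means = rng.normal(scale=5.0, size=(n_train + n_gallery, dim))
image = lambda k: means[k] + rng.normal(scale=1.0, size=dim)

# Training classes: C1 = within-class differences (label -1),
#                   C2 = between-class differences (label +1).
t1 = [image(k) for k in range(n_train)]
t2 = [image(k) for k in range(n_train)]
C1 = [t1[k] - t2[k] for k in range(n_train)]
C2 = [t1[a] - t2[b] for a in range(n_train) for b in range(n_train) if a != b]
X = np.vstack(C1 + C2)
y = np.array([-1] * len(C1) + [+1] * len(C2))
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Identification over unseen identities: compute the similarity score
# delta_j for each gallery image and identify the probe as the argmin.
gallery_ids = range(n_train, n_train + n_gallery)
gallery = {j: image(j) for j in gallery_ids}
correct = 0
for k in gallery_ids:
    probe = image(k)
    delta = {j: clf.decision_function((g - probe)[None, :])[0]
             for j, g in gallery.items()}
    correct += (min(delta, key=delta.get) == k)
print(correct / n_gallery)
```

With identities this well separated the identification rate should be near 1; on the real, harder FERET images the paper reports 77-78%.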
All 400 images were preprocessed to normalize geometry and illumination, and to remove background and hair (figure 1). The preprocessing procedure consisted of manually locating the centers of the eyes; translating, rotating, and scaling the faces to place the centers of the eyes on specific pixels; masking the faces to remove background and hair; histogram equalizing the non-masked facial pixels; and scaling the non-masked facial pixels to have zero mean and unit variance.

PCA was performed on 100 preprocessed images (one image of each person in the training set). This produced 99 eigenvectors {e_i} and eigenvalues {λ_i}. The eigenvectors were ordered so that λ_i > λ_j when i < j. Thus, the low order eigenvectors encode the majority of the variance in the training set. The faces were represented by projecting them on a subset of the eigenvectors, and this is the face space. We varied the dimension of face space by changing the number of eigenvectors in the representation.

In all experiments, the SVM training set consisted of the same images. The SVM training set T consisted of two images of 50 individuals from the general training set of 100 individuals. The set C_1 consisted of all 50 within-class differences from faces of the same individuals. The set C_2 consisted of 50 randomly selected between-class differences. The verification and identification algorithms were tested on a gallery consisting of 100 images from the test set, with one image per person. The probe set consisted of the remaining images in the test set (100 individuals, with one image per person).

We report results for verification on a face space that consisted of the first 30 eigenfeatures (an eigenfeature is the projection of the image onto an eigenvector). The results are reported as a receiver operator curve (ROC) in figure 2. The ROC in figure 2 was computed
by averaging the ROC for each of the 100 individuals in the gallery. For person g_j, the probe set consisted of one image of person g_j and 99 faces of different people. A summary statistic for verification is the equal error rate. The equal error rate is the point where the probability of false acceptance is equal to the probability of false verification, or mathematically, P_F = 1 - P_V. For the SVM-based algorithm the equal error rate is 0.07, and for the PCA-based algorithm it is 0.13.

Figure 2: ROC for verification (using the first 30 eigenfeatures).

For identification, the algorithm estimated the identity of each of the probes in the probe set. We compute the probability of correctly identifying the probes for a set of face spaces parametrized by the number of eigenfeatures. We always use the first n eigenfeatures; thus we are slowly increasing the amount of information, as measured by variance, available to the classifier. Figure 3 shows the probability of identification as a function of representing faces by the first n eigenfeatures. PCA achieves a correct identification rate of 54% and SVM achieves an identification rate of 77-78%. (The PCA results we report are significantly lower than those reported in the literature [2, 3]. This is because we selected a set of images that are more difficult to recognize. The results are consistent with experimentation in our group with PCA-based algorithms on the FERET database [4]. We selected this set of images so that performance of neither the PCA nor the SVM algorithm is saturated.)

8 Conclusion

We introduced a new technique for applying SVM to face recognition. We demonstrated the algorithm on both verification and identification applications. We compared the performance of our algorithm to a PCA-based algorithm. For verification, the equal error rate of our algorithm was almost half that of the PCA algorithm, 7% versus 13%.
For identification, the error of SVM was half that of PCA, 22-23% versus 46%. This indicates that SVM is making more efficient use of the information in face space than the baseline PCA algorithm.

Figure 3: Probability of identification as a function of the number of eigenfeatures.

One of the major concerns in practical face recognition applications is the ability of the algorithm to generalize from a training set of faces to faces outside of the training set. We demonstrated the ability of the SVM-based algorithm to generalize by training and testing on separate sets. Future research directions include varying the kernel K, changing the representation space, and expanding the size of the gallery and probe set. There is nothing in our method that is specific to faces, and it should generalize to other biometrics such as fingerprints.

References

[1] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, (submitted), 1998.
[2] B. Moghaddam and A. Pentland. Face recognition using view-based and modular eigenspaces. In Proc. SPIE Conference on Automatic Systems for the Identification and Inspection of Humans, volume SPIE Vol. 2277, pages 12-21, 1994.
[3] B. Moghaddam, W. Wahid, and A. Pentland. Beyond eigenfaces: probabilistic matching for face recognition. In 3rd International Conference on Automatic Face and Gesture Recognition, pages 30-35, 1998.
[4] H. Moon and P. J. Phillips. Analysis of PCA-based face recognition algorithms. In K. W. Bowyer and P. J. Phillips, editors, Empirical Evaluation Techniques in Computer Vision. IEEE Computer Society Press, Los Alamitos, CA, 1998.
[5] P. J. Phillips, H. Wechsler, J. Huang, and P. Rauss.
The FERET database and evaluation procedure for face-recognition algorithms. Image and Vision Computing Journal, 16(5):295-306, 1998.
[6] M. Turk and A. Pentland. Eigenfaces for recognition. J. Cognitive Neuroscience, 3(1):71-86, 1991.
[7] V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
1998
100
1,454
A Theory of Mean Field Approximation

T. Tanaka
Department of Electronics and Information Engineering
Tokyo Metropolitan University
1-1, Minami-Osawa, Hachioji, Tokyo 192-0397, Japan

Abstract

I present a theory of mean field approximation based on information geometry. This theory includes in a consistent way the naive mean field approximation, as well as the TAP approach and the linear response theorem in statistical physics, giving clear information-theoretic interpretations to them.

1 INTRODUCTION

Many problems of neural networks, such as learning and pattern recognition, can be cast into the framework of a statistical estimation problem. How difficult it is to solve a particular problem depends on the statistical model one employs in solving the problem. For Boltzmann machines [1], for example, it is computationally very hard to evaluate expectations of state variables from the model parameters. Mean field approximation [2], which originated in statistical physics, has frequently been used in practical situations in order to circumvent this difficulty. In the context of statistical physics several advanced theories are known, such as the TAP approach [3], the linear response theorem [4], and so on. For neural networks, application of mean field approximation has mostly been confined to the so-called naive mean field approximation, but there are also attempts to utilize those advanced theories [5, 6, 7, 8].

In this paper I present an information-theoretic formulation of mean field approximation. It is based on information geometry [9], which has been successfully applied to several problems in neural networks [10]. This formulation includes the naive mean field approximation as well as the advanced theories in a consistent way. I give the formulation for Boltzmann machines, but its extension to wider classes of statistical models is possible, as described elsewhere [11].
2 BOLTZMANN MACHINES

A Boltzmann machine is a statistical model with N binary random variables s_i ∈ {-1, 1}, i = 1, ..., N. The vector s = (s_1, ..., s_N) is called the state of the Boltzmann machine. The state s is also a random variable, and its probability law is given by the Boltzmann-Gibbs distribution

p(s) = e^{-E(s) - ψ(p)},    (1)

where E(s) is the "energy" defined by

E(s) = - Σ_i h_i s_i - Σ_{(ij)} w_ij s_i s_j    (2)

with h_i and w_ij the parameters; -ψ(p) is determined by the normalization condition and is called the Helmholtz free energy of p. The notation (ij) means that the summation should be taken over all distinct pairs.

Let η_i(p) ≡ ⟨s_i⟩_p and η_ij(p) ≡ ⟨s_i s_j⟩_p, where ⟨·⟩_p means the expectation with respect to p. The following problem is essential for Boltzmann machines:

Problem 1 Evaluate the expectations η_i(p) and η_ij(p) from the parameters h_i and w_ij of the Boltzmann machine p.

3 INFORMATION GEOMETRY

3.1 ORTHOGONAL DUAL FOLIATIONS

The whole set M of the Boltzmann-Gibbs distributions (1) realizable by a Boltzmann machine is regarded as an exponential family. Let us use the shorthand notations I, J, ..., to represent distinct pairs of indices, such as ij. The parameters h_i and w^I constitute a coordinate system of M, called the canonical parameters of M. The expectations η_i and η_I constitute another coordinate system of M, called the expectation parameters of M.

Let F_0 be the subset of M on which the w^I are all equal to zero. I call F_0 the factorizable submodel of M, since p(s) ∈ F_0 can be factorized with respect to the s_i. On F_0 the problem is easy: since the w^I are all zero, the s_i are statistically independent of each other, and therefore η_i = tanh h_i and η_ij = η_i η_j hold. Mean field approximation systematically reduces the problem onto the factorizable submodel F_0. For this reduction, I introduce dual foliations F and A onto M.
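The factorized case on F_0 can be verified by brute-force enumeration for small N (a quick numerical check, not from the paper):

```python
import itertools
import numpy as np

# On the factorizable submodel F0 (all w^I = 0), p(s) ∝ exp(sum_i h_i s_i),
# so the s_i are independent, eta_i = tanh h_i, and eta_ij = eta_i * eta_j.
rng = np.random.default_rng(1)
N = 4
h = rng.normal(scale=0.7, size=N)

states = np.array(list(itertools.product([-1, 1], repeat=N)))  # all 2^N states
logits = states @ h
p = np.exp(logits - logits.max())
p /= p.sum()                                # exact Boltzmann-Gibbs distribution on F0

eta_i = p @ states                          # <s_i>
eta_ij = states.T @ (p[:, None] * states)   # <s_i s_j>

print(np.allclose(eta_i, np.tanh(h)))                      # True
print(np.allclose(eta_ij[0, 1], eta_i[0] * eta_i[1]))      # True
```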
The foliation F = {F(w)}, M = ∪_w F(w), is parametrized by w ≡ (w^I), and each leaf F(w) is defined as

F(w) = {p(s) | w^I(p) = w^I}.    (3)

The leaf F(0) is the same as F_0, the factorizable submodel. Each leaf F(w) is again an exponential family, with h_i and η_i the canonical and the expectation parameters, respectively. A pair of dual potentials is defined on each leaf: one is the Helmholtz free energy ψ̄(p) ≡ ψ(p), and another is its Legendre transform, or the Gibbs free energy,

φ̄(p) = Σ_i h_i(p) η_i(p) - ψ̄(p),    (4)

and the parameters of p ∈ F(w) are given by

η_i(p) = ∂_i ψ̄(p),  h_i(p) = ∂^i φ̄(p),    (5)

where ∂_i ≡ ∂/∂h_i and ∂^i ≡ ∂/∂η_i.

Another foliation A = {A(m)}, M = ∪_m A(m), is parametrized by m ≡ (m_i), and each leaf A(m) is defined as

A(m) = {p(s) | η_i(p) = m_i}.    (6)

Each leaf A(m) is not an exponential family, but again a pair of dual potentials ψ̃ and φ̃ is defined on each leaf; the former is given by

ψ̃(p) = ψ(p) - Σ_i h_i(p) η_i(p),    (7)

and the latter by its Legendre transform as

φ̃(p) = Σ_I w^I(p) η_I(p) - ψ̃(p),    (8)

and the parameters of p ∈ A(m) are given by

η_I(p) = ∂_I ψ̃(p),  w^I(p) = ∂^I φ̃(p),    (9)

where ∂_I ≡ ∂/∂w^I and ∂^I ≡ ∂/∂η_I.

These two foliations form orthogonal dual foliations, since the leaves F(w) and A(m) are orthogonal at their intersecting point. I introduce still another coordinate system on M, called the mixed coordinate system, on the basis of the orthogonal dual foliations. It uses a pair (m, w) of the expectation and the canonical parameters to specify a single element p ∈ M. The m part specifies the leaf A(m) on which p resides, and the w part specifies the leaf F(w).

3.2 REFORMULATION OF PROBLEM

Assume that a target Boltzmann machine q is given by specifying its parameters h_i(q) and w^I(q). Problem 1 is restated as follows: evaluate its expectations η_i(q) and η_I(q) from those parameters. To evaluate η_i, mean field approximation translates the problem into the following one:

Problem 2 Let F(w) be the leaf on which q resides.
Find $p \in F(w)$ which is the closest to $q$.

At first sight this problem is trivial, since one immediately finds the solution $p = q$. However, solving this problem with respect to $\eta_i(p)$ is nontrivial, and it is the key to understanding mean field approximation, including advanced theories. Let us measure the proximity of $p$ to $q$ by the Kullback divergence

$$D(p\|q) = \sum_s p(s) \log \frac{p(s)}{q(s)}; \qquad (10)$$

then solving Problem 2 reduces to finding a minimizer $p \in F(w)$ of $D(p\|q)$ for a given $q$. For $p, q \in F(w)$, $D(p\|q)$ is expressed in terms of the dual potentials $\bar\psi$ and $\bar\phi$ of $F(w)$ as

$$D(p\|q) = \bar\psi(q) + \bar\phi(p) - \sum_i h_i(q)\,\eta_i(p). \qquad (11)$$

The minimization problem is thus equivalent to minimizing

$$G(p) = \bar\phi(p) - \sum_i h_i(q)\,\eta_i(p), \qquad (12)$$

since $\bar\psi(q)$ in eq. (11) does not depend on $p$. Solving the stationary condition $\partial^i G(p) = 0$ with respect to $\eta_i(p)$ will give the correct expectations $\eta_i(q)$, since the true minimizer is $p = q$. However, this scenario is in general intractable, since $\bar\phi(p)$ cannot be given explicitly as a function of $\eta_i(p)$.

3.3 PLEFKA EXPANSION

The problem is easy if $w_I = 0$. In this case $\bar\phi(p)$ is given explicitly as a function of $m_i \equiv \eta_i(p)$ as

$$\bar\phi(p) = \frac{1}{2} \sum_i \left[ (1 + m_i) \log \frac{1 + m_i}{2} + (1 - m_i) \log \frac{1 - m_i}{2} \right]. \qquad (13)$$

Minimization of $G(p)$ with respect to $m_i$ gives the solution $m_i = \tanh h_i$, as expected. When $w_I \neq 0$ the expression (13) is no longer exact, but to compensate for the error one may use, leaving the convergence problem aside, the Taylor expansion of $\bar\phi(w) \equiv \bar\phi(p)$ around $w = 0$,

$$\bar\phi(w) = \bar\phi(0) + \sum_I (\partial_I \bar\phi(0))\, w_I + \frac{1}{2} \sum_{IJ} (\partial_I \partial_J \bar\phi(0))\, w_I w_J + \frac{1}{6} \sum_{IJK} (\partial_I \partial_J \partial_K \bar\phi(0))\, w_I w_J w_K + \cdots. \qquad (14)$$

This expansion has been called the Plefka expansion [12] in the literature of spin glasses. Note that in considering the expansion one should temporarily assume that $m$ is fixed: one can rely on the solution $m$ evaluated from the stationary condition $\partial^i G(p) = 0$ only if the expansion does not change the value of $m$. The coefficients of the expansion can be efficiently computed by fully utilizing the orthogonal dual structure of the foliations.
First, we have the following theorem:

Theorem 1  The coefficients of the expansion (14) are given by the cumulant tensors of the corresponding orders, defined on $A(m)$.

Because $\bar\phi = -\tilde\psi$ holds, one can consider derivatives of $\tilde\psi$ instead of those of $\bar\phi$. The first-order derivatives $\partial_I \tilde\psi$ are immediately given by the property of the potential of the leaf $A(m)$ (eq. (9)), yielding

$$\partial_I \tilde\psi(0) = \eta_I(p_0), \qquad (15)$$

where $p_0$ denotes the distribution on $A(m)$ corresponding to $w = 0$. The coefficients of the lowest orders, including the first-order one, are given by the following theorem.

Theorem 2  The first-, second-, and third-order coefficients of the expansion (14) are given by

$$\partial_I \tilde\psi(0) = \eta_I(p_0), \qquad \partial_I \partial_J \tilde\psi(0) = \langle (\partial_I \ell)(\partial_J \ell) \rangle_{p_0}, \qquad \partial_I \partial_J \partial_K \tilde\psi(0) = \langle (\partial_I \ell)(\partial_J \ell)(\partial_K \ell) \rangle_{p_0}, \qquad (16)$$

where $\ell \equiv \log p_0$.

The proofs can be found in [11]. It should be noted that, although these results happen to be the same as the ones which would be obtained by regarding $A(m)$ as an exponential family, they are not the same in general, since $A(m)$ is actually not an exponential family; for example, they are different for the fourth-order coefficients. The explicit formulas for these coefficients for Boltzmann machines are as follows:

• For the first order,

$$\partial_I \tilde\psi(0) = m_i m_{i'} \qquad (I = ii'). \qquad (17)$$

• For the second order,

$$(\partial_I)^2 \tilde\psi(0) = (1 - m_i^2)(1 - m_{i'}^2) \qquad (I = ii'), \qquad (18)$$

and

$$\partial_I \partial_J \tilde\psi(0) = 0 \qquad (I \neq J). \qquad (19)$$

• For the third order,

$$(\partial_I)^3 \tilde\psi(0) = 4 m_i m_{i'} (1 - m_i^2)(1 - m_{i'}^2) \qquad (I = ii'), \qquad (20)$$

and, for $I = ij$, $J = jk$, $K = ik$ with three distinct indices $i$, $j$, and $k$,

$$\partial_I \partial_J \partial_K \tilde\psi(0) = (1 - m_i^2)(1 - m_j^2)(1 - m_k^2). \qquad (21)$$

For other combinations of $I$, $J$, and $K$,

$$\partial_I \partial_J \partial_K \tilde\psi(0) = 0. \qquad (22)$$

4 MEAN FIELD APPROXIMATION

4.1 MEAN FIELD EQUATION

Truncating the Plefka expansion (14) at the $n$-th order term gives the $n$-th order approximations $\bar\phi_n(p)$ and $G_n(p) \equiv \bar\phi_n(p) - \sum_i h_i(q)\, m_i$. The Weiss free energy, which is used in the naive mean field approximation, is given by $\bar\phi_1(p)$. The TAP approach picks up all relevant terms of the Plefka expansion [12], and for the SK model it gives the second-order approximation $\bar\phi_2(p)$.
The stationary condition $\partial^i G_n(p) = 0$ gives the so-called mean field equation, from which a solution of the approximate minimization problem is determined. For $n = 1$ it takes the familiar form

$$\tanh^{-1} m_i - h_i - \sum_{j \neq i} w_{ij} m_j = 0, \qquad (23)$$

and for $n = 2$ it includes the so-called Onsager reaction term,

$$\tanh^{-1} m_i - h_i - \sum_{j \neq i} w_{ij} m_j + \sum_{j \neq i} (w_{ij})^2 (1 - m_j^2)\, m_i = 0. \qquad (24)$$

Note that all of these are expressed as functions of the $m_i$. Geometrically, the mean field equation approximately represents the "surface" $h_i(p) = h_i(q)$ in terms of the mixed coordinate system of $M$, since for the exact Gibbs free energy $G$ the stationary condition $\partial^i G(p) = 0$ gives $h_i(p) - h_i(q) = 0$. Accordingly, the approximate relation $h_i(p) = \partial^i \bar\phi_n(p)$, for fixed $m$, represents the $n$-th order approximate expression of the leaf $A(m)$ in the canonical coordinate system. The fit of this expression to the true leaf $A(m)$ around the point $w = 0$ becomes better as the order of approximation gets higher, as seen in Fig. 1. Such behavior is to be expected, since the Plefka expansion is essentially a Taylor expansion.

4.2 LINEAR RESPONSE

For estimating $\eta_I(p)$ one can utilize the linear response theorem. In the information geometrical framework it is represented as a trivial identity relation for the Fisher information on the leaf $F(w)$. The Fisher information matrix $(g_{ij})$, or the Riemannian metric tensor, on the leaf $F(w)$, and its inverse $(g^{ij})$, are given by

$$g_{ij}(p) = \partial_i \partial_j \bar\psi(p) = \eta_{ij}(p) - \eta_i(p)\,\eta_j(p) \qquad (25)$$

356 T. Tanaka

Figure 1: Approximate expressions of $A(m)$ by mean field approximations of several orders for a 2-unit Boltzmann machine, with $(m_1, m_2) = (0.5, 0.5)$ (left), and their magnified view (right).
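As a concrete sketch, the mean field equations (23) and (24) can be solved by damped fixed-point iteration. Function and variable names below are mine, and convergence of the iteration is assumed rather than guaranteed:

```python
import numpy as np

def mean_field(h, W, order=2, damping=0.5, tol=1e-10, max_iter=10000):
    """Solve the n-th order mean field equation by damped fixed-point iteration.
    order=1: naive (Weiss) equation (23); order=2: TAP equation (24).
    W is symmetric with zero diagonal."""
    m = np.tanh(h)                       # start from the independent solution
    for _ in range(max_iter):
        field = h + W @ m                # h_i + sum_{j != i} w_ij m_j
        if order == 2:                   # Onsager reaction term of eq. (24)
            field -= (W**2 @ (1 - m**2)) * m
        m_new = np.tanh(field)
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = (1 - damping) * m + damping * m_new
    return m

h = np.array([0.5, -0.3, 0.2])
W = np.array([[0.0, 0.10, 0.05],
              [0.10, 0.0, -0.08],
              [0.05, -0.08, 0.0]])
m_tap = mean_field(h, W, order=2)
```

For $W = 0$ both orders return $\tanh h_i$ exactly, as the text notes; for weak couplings the TAP solution differs from the naive one only slightly.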
Figure 2: Relation between the "naive" approximation and the present theory.

and

$$g^{ij}(p) = \partial^i \partial^j \bar\phi(p), \qquad (26)$$

respectively. In the framework used here, the linear response theorem states the trivial fact that these matrices are inverses of each other. In mean field approximation, one substitutes an approximation $\bar\phi_n(p)$ in place of $\bar\phi(p)$ in eq. (26) to get an approximate inverse $(\hat g^{ij})$ of the metric. The derivatives in eq. (26) can be calculated analytically, and therefore $(\hat g^{ij})$ can be evaluated numerically by substituting into it a solution $\hat m_i$ of the mean field equation. Equating its inverse to $(g_{ij})$ gives an estimate of $\eta_{ij}(p)$ by using eq. (25). Problem 1 has thus been solved within the framework of mean field approximation, with the $\eta_i$ and $\eta_{ij}$ obtained by the mean field equation and the linear response theorem, respectively.

5 DISCUSSION

Following the framework presented so far, one can in principle construct mean field approximation algorithms of any desired order. The first-order algorithm with linear response was first proposed and examined by Kappen and Rodríguez [7, 8]. Tanaka [13] has formulated second- and third-order algorithms and explored them by computer simulations. It is also possible to extend the present formulation so that it is applicable to higher-order Boltzmann machines. Tanaka [14] discusses an extension of the present formulation to third-order Boltzmann machines: it is possible to extend the linear response theorem to higher orders, which allows us to treat higher-order correlations within the framework of mean field approximation.

The common understanding about the "naive" mean field approximation is that it minimizes the Kullback divergence $D(p_0\|q)$ with respect to $p_0 \in F_0$ for a given $q$. It can be shown that this view is consistent with the theory presented in this paper. Assume that $q \in F(w)$ and $p_0 \in A(m)$, and let $p$ be the distribution corresponding to the intersecting point of the leaves $F(w)$ and $A(m)$.
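The linear response step described in Section 4.2 can be sketched numerically. For the first-order (Weiss) free energy, the approximate inverse metric is $\hat g^{ij} = \delta_{ij}/(1 - m_i^2) - w_{ij}$, and inverting it yields the covariances. This follows the spirit of the first-order algorithm of Kappen and Rodríguez [7, 8]; the code below is my own sketch and assumes a mean field solution $m$ is already available:

```python
import numpy as np

def linear_response_eta(m, W):
    """First-order linear response estimate of eta_ij = <s_i s_j>.
    Approximate inverse metric from the Weiss free energy:
        g^ij ~ delta_ij / (1 - m_i^2) - w_ij.
    Its inverse approximates g_ij = eta_ij - eta_i eta_j (eq. (25))."""
    A = np.diag(1.0 / (1.0 - m**2)) - W
    C = np.linalg.inv(A)            # approximate covariance matrix g_ij
    eta = C + np.outer(m, m)        # eta_ij = g_ij + m_i m_j
    np.fill_diagonal(eta, 1.0)      # <s_i^2> = 1 exactly for binary spins
    return eta
```

For $W = 0$ the inverse metric is diagonal and the estimate reduces to the exact factorized result $\eta_{ij} = m_i m_j$.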
Because of the orthogonality of the two foliations $F$ and $A$, the following "Pythagorean law" [9] holds (Fig. 2):

$$D(p_0\|q) = D(p_0\|p) + D(p\|q). \qquad (27)$$

Intuitively, $D(p_0\|p)$ measures the squared distance between $F(w)$ and $F_0$, and is a second-order quantity in $w$. It should be ignored in the first-order approximation, and thus $D(p_0\|q) \approx D(p\|q)$ holds. Under this approximation, minimization of the former with respect to $p_0$ is equivalent to minimization of the latter with respect to $p$, which establishes the relation between the "naive" approximation and the present theory. It can also be checked directly that the first-order approximation of $D(p\|q)$ exactly gives $D(p_0\|q)$, the Weiss free energy.

The present theory provides an alternative view of the validity of mean field approximation. As opposed to the common "belief" that mean field approximation is good when $N$ is sufficiently large, one can state from the present formulation that it is good whenever the higher-order contributions of the Plefka expansion vanish, regardless of whether $N$ is large or not. This provides a theoretical basis for the observation that mean field approximation often works well for small networks.

The author would like to thank the Telecommunications Advancement Foundation for financial support.

References

[1] Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. (1985) A learning algorithm for Boltzmann machines. Cognitive Science 9: 147-169.
[2] Peterson, C., and Anderson, J. R. (1987) A mean field theory learning algorithm for neural networks. Complex Systems 1: 995-1019.
[3] Thouless, D. J., Anderson, P. W., and Palmer, R. G. (1977) Solution of 'Solvable model of a spin glass'. Phil. Mag. 35 (3): 593-601.
[4] Parisi, G. (1988) Statistical Field Theory. Addison-Wesley.
[5] Galland, C. C. (1993) The limitations of deterministic Boltzmann machine learning. Network 4 (3): 355-379.
[6] Hofmann, T. and Buhmann, J. M. (1997) Pairwise data clustering by deterministic annealing. IEEE Trans. Pattern Anal.
& Machine Intell. 19 (1): 1-14; Errata, ibid. 19 (2): 197 (1997).
[7] Kappen, H. J. and Rodríguez, F. B. (1998) Efficient learning in Boltzmann machines using linear response theory. Neural Computation 10 (5): 1137-1156.
[8] Kappen, H. J. and Rodríguez, F. B. (1998) Boltzmann machine learning using mean field theory and linear response correction. In M. I. Jordan, M. J. Kearns, and S. A. Solla (Eds.), Advances in Neural Information Processing Systems 10, pp. 280-286. The MIT Press.
[9] Amari, S.-I. (1985) Differential-Geometrical Methods in Statistics. Lecture Notes in Statistics 28, Springer-Verlag.
[10] Amari, S.-I., Kurata, K., and Nagaoka, H. (1992) Information geometry of Boltzmann machines. IEEE Trans. Neural Networks 3 (2): 260-271.
[11] Tanaka, T. Information geometry of mean field approximation. Preprint.
[12] Plefka, T. (1982) Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model. J. Phys. A: Math. Gen. 15 (6): 1971-1978.
[13] Tanaka, T. (1998) Mean field theory of Boltzmann machine learning. Phys. Rev. E 58 (2): 2302-2310.
[14] Tanaka, T. (1998) Estimation of third-order correlations within mean field approximation. In S. Usui and T. Omori (Eds.), Proc. Fifth International Conference on Neural Information Processing, vol. 1, pp. 554-557.

PART IV
ALGORITHMS AND ARCHITECTURE
1998
Unsupervised and supervised clustering: the mutual information between parameters and observations

Didier Herschkowitz    Jean-Pierre Nadal
Laboratoire de Physique Statistique de l'E.N.S.*
Ecole Normale Superieure
24, rue Lhomond - 75231 Paris cedex 05, France
herschko@lps.ens.fr    nadal@lps.ens.fr
http://www.lps.ens.frrrisc/rescomp

Abstract

Recent works in parameter estimation and neural coding have demonstrated that optimal performance is related to the mutual information between parameters and data. We consider the mutual information in the case where the dependency of the conditional p.d.f. of each observation (a vector $\xi$) on the parameter (a vector $\theta$) is through the scalar product $\theta\cdot\xi$ only. We derive bounds and asymptotic behaviour for the mutual information and compare with results obtained on the same model with the "replica technique".

1 INTRODUCTION

In this contribution we consider an unsupervised clustering task. Recent results on neural coding and parameter estimation (supervised and unsupervised learning tasks) show that the mutual information between data and parameters (equivalently, between neural activities and stimulus) is a relevant tool for deriving optimal performances (Clarke and Barron, 1990; Nadal and Parga, 1994; Opper and Kinzel, 1995; Haussler and Opper, 1995; Opper and Haussler, 1995; Rissanen, 1996; Brunel and Nadal, 1998).

*Laboratory associated with C.N.R.S. (U.R.A. 1306), ENS, and Universities Paris VI and Paris VII.

Mutual Information between Parameters and Observations 233

With this tool we analyze a particular case which has been studied extensively with the "replica technique" in the framework of statistical mechanics (Watkin and Nadal, 1994; Reimann and Van den Broeck, 1996; Buhot and Gordon, 1998). After introducing the model in the next section, we consider the mutual information between the patterns and the parameter. We derive a bound on it which is of interest for not too large $p$.
We show how the "free energy" associated with Gibbs learning is related to the mutual information. We then compare the exact results with replica calculations. We show that the asymptotic behaviour ($p \gg N$) of the mutual information is in agreement with the exact result, which is known to be related to the Fisher information (Clarke and Barron, 1990; Rissanen, 1996; Brunel and Nadal, 1998). However, for moderate values of $\alpha = p/N$, we can eliminate false solutions of the replica calculation. Finally, we give bounds related to the mutual information between the parameter and its estimators, and discuss common features of parameter estimation and neural coding.

2 THE MODEL

We consider the problem where a direction $\theta$ (a unit vector) of dimension $N$ has to be found based on the observation of $p$ patterns. The probability distribution of the patterns is uniform except in the unknown symmetry-breaking direction $\theta$. Various instances of this problem have been studied recently within the statistical mechanics framework, making use of the replica technique (Watkin and Nadal, 1994; Reimann and Van den Broeck, 1996; Buhot and Gordon, 1998). More specifically, it is assumed that a set of patterns $D = \{\xi^\mu\}_{\mu=1}^p$ is generated by $p$ independent samplings from a non-uniform probability distribution $P(\xi|\theta)$, where $\theta = \{\theta_1, \dots, \theta_N\}$ represents the symmetry-breaking orientation. The probability is written in the form

$$P(\xi|\theta) = \frac{1}{Z_N} \exp\left(-\frac{\xi^2}{2} - V(\lambda)\right), \qquad (1)$$

where $N$ is the dimension of the space, $\lambda = \theta\cdot\xi$ is the overlap, $V(\lambda)$ characterizes the structure of the data in the breaking direction, and $Z_N$ is a normalization constant. As justified within the Bayesian and statistical physics frameworks, one has to consider a prior distribution on the parameter space, $\rho(\theta)$, e.g. the uniform distribution on the sphere.
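To make the model concrete, patterns can be generated by drawing the overlap $\lambda$ from its distribution along $\theta$ (here the two-cluster mixture considered later in the paper) and adding isotropic unit Gaussian noise in the subspace orthogonal to $\theta$. This sampler is my own illustrative sketch, with hypothetical parameter values, not code from the paper:

```python
import numpy as np

def sample_patterns(theta, p, rho=1.2, sigma=0.5, rng=None):
    """Draw p patterns whose overlap lambda = theta.xi follows a mixture of
    two Gaussians N(+/-rho, sigma^2) (two clusters along theta), while the
    N-1 orthogonal directions carry unit Gaussian noise."""
    rng = np.random.default_rng(rng)
    theta = theta / np.linalg.norm(theta)
    N = theta.size
    eps = rng.choice([-1.0, 1.0], size=p)             # cluster label
    lam = eps * rho + sigma * rng.standard_normal(p)  # overlap lambda
    noise = rng.standard_normal((p, N))
    noise -= np.outer(noise @ theta, theta)           # project out theta
    return lam[:, None] * theta + noise

theta = np.ones(50) / np.sqrt(50)
xi = sample_patterns(theta, p=20000, rng=0)
```

Here the empirical overlaps $\xi^\mu\cdot\theta$ have mean $0$ and variance $\rho^2 + \sigma^2$, which is the structure that the estimators discussed below must detect.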
The mutual information $I(D|\theta)$ between the data and $\theta$ is defined by

$$I(D|\theta) = \int d\theta\, \rho(\theta) \int dD\, P(D|\theta) \ln \frac{P(D|\theta)}{P(D)}. \qquad (2)$$

It can be rewritten as

$$\frac{I(D|\theta)}{N} = -\alpha \langle V(\lambda) \rangle - \frac{\langle\langle \ln Z \rangle\rangle}{N}, \qquad (3)$$

where

$$Z = \int d\theta\, \rho(\theta) \exp\left(-\sum_{\mu=1}^p V(\lambda^\mu)\right). \qquad (4)$$

In the statistical physics literature $-\ln Z$ is a "free energy". The brackets $\langle\langle \cdot \rangle\rangle$ stand for the average over the pattern distribution, and $\langle \cdot \rangle$ is the average over the resulting overlap distribution. We will consider properties valid for any $N$ and any $p$, and others valid for $p \gg N$; the replica calculations are valid for $N$ and $p$ large at any given value of $\alpha = \frac{p}{N}$.

234 D. Herschkowitz and J.-P. Nadal

3 LINEAR BOUND

The mutual information, a positive quantity, cannot grow faster than linearly in the amount of data $p$. We derive the simple linear bound

$$I(D|\theta) \le -p \langle V(\lambda) \rangle. \qquad (5)$$

We prove the inequality for the case $\langle \lambda \rangle = 0$; the extension to the case $\langle \lambda \rangle \neq 0$ is straightforward. The mutual information can be written as $I = H(D) - H(D|\theta)$. The calculation of $H(D|\theta)$ is straightforward:

$$H(D|\theta) = \frac{pN}{2} \ln(2\pi e) + \frac{p}{2}\left(\langle \lambda^2 \rangle - 1\right) + p \langle V \rangle. \qquad (6)$$

Now, the entropy of the data, $H(D) = -\int dD\, P(D) \ln P(D)$, is lower than or equal to the entropy of a Gaussian distribution with the same variance. We thus calculate the covariance matrix of the data,

$$\langle\langle \xi_i^\mu \xi_j^\nu \rangle\rangle = \delta_{\mu\nu}\left(\delta_{ij} + \left(\langle \lambda^2 \rangle - 1\right)\overline{\theta_i \theta_j}\right), \qquad (7)$$

where the overline denotes the average over the parameter distribution. We then have

$$H(D) \le \frac{pN}{2} \ln(2\pi e) + \frac{p}{2} \sum_{i=1}^N \ln\left(1 + \left(\langle \lambda^2 \rangle - 1\right)\gamma_i\right), \qquad (8)$$

where the $\gamma_i$ are the eigenvalues of the matrix $\overline{\theta_i \theta_j}$. Using $\sum_{i=1}^N \theta_i^2 = 1$ and the property $\ln(1+x) \le x$, we obtain

$$H(D) \le \frac{pN}{2} \ln(2\pi e) + \frac{p}{2}\left(\langle \lambda^2 \rangle - 1\right). \qquad (9)$$

Putting (9) and (6) together, we find the inequality (5). From this and (3) it follows also that

$$p \langle V \rangle \le -\langle\langle \ln Z \rangle\rangle \le 0. \qquad (10)$$

4 REPLICA CALCULATIONS

In the limit $N \to \infty$ with $\alpha$ finite, the free energy becomes self-averaging, that is, equal to its average, and its calculation can be performed by the standard replica technique.
This calculation is the same as the calculations related to Gibbs learning done in (Reimann and Van den Broeck, 1996; Buhot and Gordon, 1998), but the interpretation of the order parameters is different. Assuming replica symmetry, we reproduce in fig.2 results from (Buhot and Gordon, 1998) for the behaviour with $\alpha$ of $Q$, which is the typical overlap between two directions compatible with the data. The overlap distribution $P(\lambda)$ was chosen to get patterns distributed according to two clusters along the symmetry-breaking direction:

$$P(\lambda) = \frac{1}{2\sigma\sqrt{2\pi}} \sum_{\epsilon = \pm 1} \exp\left(-\frac{(\lambda - \epsilon\rho)^2}{2\sigma^2}\right). \qquad (11)$$

In fig.2 and fig.1 we show the corresponding behaviour of the average free energy and of the mutual information.

4.1 Discussion

Up to $\alpha_1$, $Q = 0$ and the mutual information is in a purely linear phase, $I(D|\theta)/N = -\alpha \langle V(\lambda) \rangle$. This corresponds to a regime where the data have no correlations. For $\alpha \ge \alpha_1$, the replica calculation admits up to three different solutions. In view of the fact that the mutual information can never decrease with $\alpha$ and that the average free energy cannot be positive, it follows that only two behaviours are acceptable. In the first, $Q$ leaves the solution $Q = 0$ at $\alpha_1$ and follows the lower branch until $\alpha_3$, where it jumps to the upper branch. This is the stable way. The second possibility is that $Q = 0$ until $\alpha_2$, where it directly jumps to the upper branch. In (Buhot and Gordon, 1998) it has been suggested that one can reach the upper branch well before $\alpha_3$. Here we have thus shown that it is only possible from $\alpha_2$. There also remains the possibility of a replica symmetry breaking phase in this range of $\alpha$.
In the limit $\alpha \to \infty$ the replica calculus gives for the behaviour of the mutual information

$$I(D|\theta) \simeq \frac{N}{2} \ln\left(\alpha \left\langle \left(\frac{dV(\lambda)}{d\lambda}\right)^2 \right\rangle\right). \qquad (12)$$

The r.h.s. can be shown to be equal to half the logarithm of the determinant of the Fisher information matrix, which is the exact asymptotic behaviour (Clarke and Barron, 1990; Brunel and Nadal, 1998). It can be shown that this behaviour for $p \gg N$ implies that the best possible estimator based on the data will saturate the Cramer-Rao bound (see e.g. Blahut, 1988). It has already been noted that the asymptotic performance in estimating the direction, as computed by the replica technique, saturates this bound (Van den Broeck, 1997). What we have checked here is that this manifests itself in the behaviour of the mutual information for large $\alpha$.

4.2 Bounds for specific estimators

Given the data $D$, one wants to find an estimate $J$ of the parameter. The amount of information $I(D|\theta)$ limits the performance of the estimator; indeed, one has $I(J|\theta) \le I(D|\theta)$. This basic relationship allows one to derive interesting bounds based on the choice of particular estimators. We consider first Gibbs learning, which consists in sampling a direction $J$ from the 'a posteriori' probability $P(J|D) = P(D|J)\rho(J)/P(D)$. In this particular case, the differential entropies of the estimator $J$ and of the parameter $\theta$ are equal: $H(J) = H(\theta)$. If $1 - Q_g^2$ is the variance of the Gibbs estimator, one gets, for a Gaussian prior on $\theta$, the relations

$$-\frac{N}{2} \ln\left(1 - Q_g^2\right) \le I(J|\theta) \le I(D|\theta). \qquad (13)$$

These relations, together with the linear bound (5), allow one to bound the order parameter $Q_g$ for small $\alpha$, where this bound is of interest. The Bayes estimator consists in taking for $J$ the center of mass of the 'a posteriori' probability. In the limit $\alpha \to \infty$ this distribution becomes Gaussian, centered at its most probable value. We can thus assume $P_{\mathrm{Bayes}}(J|\theta)$ to be Gaussian with mean $Q_b \theta$ and variance $1 - Q_b^2$; then the first inequality in (13) (with $Q_g$ replaced by $Q_b$ and Gibbs by Bayes) is an equality.
Then, using the Cramer-Rao bound on the variance of the estimator, that is,

$$\frac{1 - Q_b^2}{Q_b^2} \ge \left(\alpha \left\langle (dV/d\lambda)^2 \right\rangle\right)^{-1},$$

one can bound the mutual information for the Bayes estimator:

$$I_{\mathrm{Bayes}}(J|\theta) \le \frac{N}{2} \ln\left(1 + \alpha \left\langle \left(\frac{dV(\lambda)}{d\lambda}\right)^2 \right\rangle\right). \qquad (14)$$

These different quantities are shown in fig.1.

5 CONCLUSION

We have studied the mutual information between data and parameter in a problem of unsupervised clustering: we derived bounds and asymptotic behaviour, and compared these results with replica calculations. Most of the results concerning the behaviour of the mutual information, observed for this particular clustering task, are "universal", in that they will be qualitatively the same for any problem which can be formulated as either a parameter estimation task or a neural coding/signal processing task. In particular, there is a linear regime for a small enough amount of data (number of coding cells), up to a maximal value related to the VC dimension of the system. For large data size, the behaviour is logarithmic, that is, $I \sim \ln p$ (Nadal and Parga, 1994; Opper and Haussler, 1995) or $\sim \frac{N}{2} \ln p$ (Clarke and Barron, 1990; Opper and Haussler, 1995; Brunel and Nadal, 1998), depending on the smoothness of the model. A more detailed review with more such universal features, exact bounds and relations between unsupervised and supervised learning will be presented elsewhere (Nadal and Herschkowitz, to appear in Phys. Rev. E).

Acknowledgements

We thank Arnaud Buhot and Mirta Gordon for stimulating discussions. This work has been partly supported by the French contract DGA 96 2557 A/DSP.

References

[B88] R. E. Blahut. Addison-Wesley, Cambridge MA, 1988.
[BG98] A. Buhot and M. Gordon. Phys. Rev. E, 57(3):3326-3333, 1998.
[BN98] N. Brunel and J.-P. Nadal. Neural Computation, to appear, 1998.
[CB90] B. S. Clarke and A. R. Barron. IEEE Trans. on Information Theory, 36(3):453-471, 1990.
[HO95] D. Haussler and M. Opper.
conditionally independent observations. In VIIIth Ann. Workshop on Computational Learning Theory (COLT95), pages 402-411, Santa Cruz, 1995 (ACM, New York).
[OH95] M. Opper and D. Haussler. supervised learning. Phys. Rev. Lett., 75:3772-3775, 1995.
[NP94a] J.-P. Nadal and N. Parga. unsupervised learning. Neural Computation, 6:489-506, 1994.
[OK95] M. Opper and W. Kinzel. In E. Domany, J. L. van Hemmen and K. Schulten, editors, Physics of Neural Networks, pages 151-. Springer, 1995.
[Ris] J. Rissanen. IEEE Trans. on Information Theory, 42(1):40-47, 1996.
[RVdB96] P. Reimann and C. Van den Broeck. Phys. Rev. E, 53(4):3989-3998, 1996.
[VdB98] C. Van den Broeck. In proceedings of the TANC workshop (Hong Kong, May 26-28, 1997).
[WN94] T. Watkin and J.-P. Nadal. J. Phys. A: Math. and Gen., 27:1899-1915, 1994.

Figure 1: Dashed line is the linear bound on the mutual information $I(D|\theta)/N$. The latter, calculated with the replica technique, saturates the bound for $\alpha \le \alpha_1$, and is the (lower) solid line for $\alpha > \alpha_1$. The special structure of fig.2 is not visible here due to the graph scale. The curve $-\frac{1}{2}\ln(1 - Q_g^2)$ is a lower bound on the mutual information between the Gibbs estimator and $\theta$ (which would be equal to this bound if the conditional probability distribution of the estimator were Gaussian with mean $Q_g \theta$ and variance $1 - Q_g^2$). Shown also is the analogous curve $-\frac{1}{2}\ln(1 - Q_b^2)$ for the Bayes estimator. In the limit $\alpha \to \infty$ these two latter Gaussian curves and the replica information $I(D|\theta)$ all converge toward the exact asymptotic behaviour, which can be expressed as $\frac{1}{2}\ln(1 + \alpha \langle (dV(\lambda)/d\lambda)^2 \rangle)$ (upper solid line).
This latter expression is, for any $p$, an upper bound for the two Gaussian curves.

Figure 2: In the lower figure, the optimal learning curve $Q_b(\alpha)$ for $\rho = 1.2$ and $\sigma = 0.5$, as computed in (Buhot and Gordon, 1998) under the replica symmetric ansatz. We have also plotted the Cramer-Rao bound for this quantity. In the upper figure, the average free energy $-\langle\langle \ln Z \rangle\rangle / N$. All the part above zero has to be rejected. $\alpha_1 = 2.10$, $\alpha_2 = 2.515$ and $\alpha_3 = 2.527$.
1998
Stationarity and Stability of Autoregressive Neural Network Processes

Friedrich Leisch¹, Adrian Trapletti² & Kurt Hornik¹

¹ Institut für Statistik, Technische Universität Wien, Wiedner Hauptstraße 8-10/1071, A-1040 Wien, Austria, firstname.lastname@ci.tuwien.ac.at
² Institut für Unternehmensführung, Wirtschaftsuniversität Wien, Augasse 2-6, A-1090 Wien, Austria, adrian.trapletti@wu-wien.ac.at

Abstract

We analyze the asymptotic behavior of autoregressive neural network (AR-NN) processes using techniques from Markov chains and non-linear time series analysis. It is shown that standard AR-NNs without shortcut connections are asymptotically stationary. If linear shortcut connections are allowed, only the shortcut weights determine whether the overall system is stationary, hence standard conditions for linear AR processes can be used.

1 Introduction

In this paper we consider the popular class of nonlinear autoregressive processes driven by additive noise, which are defined by stochastic difference equations of the form

$$\xi_t = g(\xi_{t-1}, \dots, \xi_{t-p}) + \epsilon_t, \qquad (1)$$

where $\epsilon_t$ is an iid noise process. If $g(\dots; \theta)$ is a feedforward neural network with parameter ("weight") vector $\theta$, we call Equation 1 an autoregressive neural network process of order $p$, short AR-NN($p$) in the following. AR-NNs are a natural generalization of the classic linear autoregressive AR($p$) process

$$\xi_t = \theta_1 \xi_{t-1} + \dots + \theta_p \xi_{t-p} + \epsilon_t. \qquad (2)$$

See, e.g., Brockwell & Davis (1987) for a comprehensive introduction to AR and ARMA (autoregressive moving average) models.

268 F. Leisch, A. Trapletti and K. Hornik

One of the most central questions in linear time series theory is the stationarity of the model, i.e., whether the probabilistic structure of the series is constant over time or at least asymptotically constant (when not started in equilibrium). Surprisingly, this question has not gained much interest in the NN literature; especially, there are, up to our knowledge, no results giving conditions for the stationarity of AR-NN models.
There are results on the stationarity of Hopfield nets (Wang & Sheng, 1996), but these nets cannot be used to estimate conditional expectations for time series prediction. The rest of this paper is organized as follows: In Section 2 we recall some results from time series analysis and Markov chain theory defining the relationship between a time series and its associated Markov chain. In Section 3 we use these results to establish that standard AR-NN models without shortcut connections are stationary. We also give conditions for AR-NN models with shortcut connections to be stationary. Section 4 examines the NN modeling of an important class of non-stationary time series, namely integrated series. All proofs are deferred to the appendix.

2 Some Time Series and Markov Chain Theory

2.1 Stationarity

Let $\xi_t$ denote a time series generated by a (possibly nonlinear) autoregressive process as defined in (1). If $\mathrm{E}\epsilon_t = 0$, then $g$ equals the conditional expectation $\mathrm{E}(\xi_t \mid \xi_{t-1}, \dots, \xi_{t-p})$, and $g(\xi_{t-1}, \dots, \xi_{t-p})$ is the best prediction for $\xi_t$ in the mean square sense. If we are interested in the long term properties of the series, we may ask whether certain features such as mean or variance change over time or remain constant. The time series is called weakly stationary if $\mathrm{E}\xi_t = \mu$ and $\mathrm{cov}(\xi_t, \xi_{t+h}) = \gamma_h$, $\forall t$, i.e., mean and covariances do not depend on the time $t$. A stronger criterion is that the whole distribution (and not only mean and covariance) of the process does not depend on the time; in this case the series is called strictly stationary. Strict stationarity implies weak stationarity if the second moments of the series exist. For details see standard time series textbooks such as Brockwell & Davis (1987). If $\xi_t$ is strictly stationary, then $\mathrm{P}(\xi_t \in A) = \pi(A)$, $\forall t$, and $\pi(\cdot)$ is called the stationary distribution of the series. Obviously the series can only be stationary from the beginning if it is started with the stationary distribution, such that $\xi_0 \sim \pi$.
If it is not started with $\pi$, e.g., because $\xi_0$ is a constant, then we call the series asymptotically stationary if it converges to its stationary distribution:

$$\lim_{t \to \infty} \mathrm{P}(\xi_t \in A) = \pi(A).$$

2.2 Time Series as Markov Chains

Using the notation

$$x_{t-1} = (\xi_{t-1}, \dots, \xi_{t-p})', \qquad (3)$$
$$F(x_{t-1}) = (g(x_{t-1}), \xi_{t-1}, \dots, \xi_{t-p+1})', \qquad (4)$$
$$e_t = (\epsilon_t, 0, \dots, 0)', \qquad (5)$$

we can write scalar autoregressive models of order $p$ such as (1) or (2) as a first order vector model

$$x_t = F(x_{t-1}) + e_t \qquad (6)$$

with $x_t, e_t \in \mathbb{R}^p$ (e.g., Chan & Tong, 1985). If we write

$$P^n(x, A) = \mathrm{P}\{x_{t+n} \in A \mid x_t = x\}, \qquad P(x, A) = P^1(x, A)$$

for the probability of going from point $x$ to set $A \in \mathcal{B}$ in $n$ steps, then $\{x_t\}$ with $P(x, A)$ forms a Markov chain with state space $(\mathbb{R}^p, \mathcal{B}, \lambda)$, where $\mathcal{B}$ are the Borel sets on $\mathbb{R}^p$ and $\lambda$ is the usual Lebesgue measure. The Markov chain $\{x_t\}$ is called $\varphi$-irreducible if, for some $\sigma$-finite measure $\varphi$ on $(\mathbb{R}^p, \mathcal{B}, \lambda)$,

$$\sum_{n=1}^{\infty} P^n(x, A) > 0$$

whenever $\varphi(A) > 0$. This means essentially that all parts of the state space can be reached by the Markov chain, irrespective of the starting point. Another important property of Markov chains is aperiodicity, which loosely speaking means that there are no (infinitely often repeated) cycles. See, e.g., Tong (1990) for details. The Markov chain $\{x_t\}$ is called geometrically ergodic if there exists a probability measure $\pi(A)$ on $(\mathbb{R}^p, \mathcal{B}, \lambda)$ and a $\rho > 1$ such that

$$\forall x \in \mathbb{R}^p: \quad \lim_{n \to \infty} \rho^n \, \| P^n(x, \cdot) - \pi(\cdot) \| = 0,$$

where $\| \cdot \|$ denotes the total variation. Then $\pi$ satisfies the invariance equation

$$\pi(A) = \int P(x, A)\, \pi(dx), \qquad \forall A \in \mathcal{B}.$$

There is a close relationship between a time series and its associated Markov chain. If the Markov chain is geometrically ergodic, then its distribution will converge to $\pi$ and the time series is asymptotically stationary. If the time series is started with distribution $\pi$, i.e., $x_0 \sim \pi$, then the series $\{\xi_t\}$ is strictly stationary.
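A quick simulation illustrates asymptotic stationarity for an AR-NN(1) with a bounded tanh unit: trajectories started far apart forget their initial condition. This is my own illustrative sketch; for convenience the network slope is chosen below 1, so that two runs driven by the same noise couple geometrically fast (a stronger property than Theorem 1 requires):

```python
import numpy as np

def simulate_arnn1(x0, T, rng):
    """AR-NN(1) without shortcut: xi_t = g(xi_{t-1}) + eps_t, where g is a
    single bounded tanh unit. Because g is bounded, |xi_t| <= 2.5 + |eps_t|
    for t >= 1 regardless of the starting value."""
    g = lambda x: 0.5 + 2.0 * np.tanh(0.4 * x)   # bounded network output
    xi = np.empty(T)
    xi[0] = x0
    eps = rng.standard_normal(T)
    for t in range(1, T):
        xi[t] = g(xi[t - 1]) + eps[t]
    return xi

# Same noise realization, initial conditions 200 apart:
a = simulate_arnn1(100.0, 2000, np.random.default_rng(0))
b = simulate_arnn1(-100.0, 2000, np.random.default_rng(0))
```

Since $g$ has Lipschitz constant $2 \cdot 0.4 = 0.8 < 1$, the gap between the two runs shrinks like $0.8^t$, so after a few hundred steps they are numerically identical: the process has forgotten where it started.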
3 Stationarity of AR-NN Models

We now apply the concepts defined in Section 2 to the case where $g$ is defined by a neural network. Let $x$ denote a $p$-dimensional input vector; then we consider the following standard network architectures:

Single hidden layer perceptrons:

$$g(x) = \gamma_0 + \sum_i \beta_i \,\sigma(a_i + \mathbf{a}_i'x), \qquad (7)$$

where the $a_i$, $\beta_i$ and $\gamma_0$ are scalar weights, the $\mathbf{a}_i$ are $p$-dimensional weight vectors, and $\sigma(\cdot)$ is a bounded sigmoid function such as $\tanh(\cdot)$.

Single hidden layer perceptrons with shortcut connections:

$$g(x) = \gamma_0 + c'x + \sum_i \beta_i \,\sigma(a_i + \mathbf{a}_i'x), \qquad (8)$$

where $c$ is an additional weight vector for shortcut connections between inputs and output. In this case we define the characteristic polynomial $c(z)$ associated with the linear shortcuts as $c(z) = 1 - c_1 z - c_2 z^2 - \dots - c_p z^p$, $z \in \mathbb{C}$.

Radial basis function networks:

$$g(x) = \gamma_0 + \sum_i \beta_i \,\phi(\|x - m_i\|), \qquad (9)$$

where the $m_i$ are center vectors and $\phi(\cdot)$ is one of the usual bounded radial basis functions such as $\phi(x) = \exp(-x^2)$.

Lemma 1  Let $\{x_t\}$ be defined by (6), let $\mathrm{E}|\epsilon_t| < \infty$, and let the PDF of $\epsilon_t$ be positive everywhere in $\mathbb{R}$. Then, if $g$ is defined by any of (7), (8) or (9), the Markov chain $\{x_t\}$ is $\varphi$-irreducible and aperiodic.

Lemma 1 basically says that the state space of the Markov chain, i.e., the set of points that can be reached, cannot be reduced depending on the starting point. An example of a reducible Markov chain would be a series that is always positive if only $x_0 > 0$ (and negative otherwise). This cannot happen in the AR-NN($p$) case, due to the unbounded additive noise term.

Theorem 1  Let $\{\xi_t\}$ be defined by (1) and $\{x_t\}$ by (6); further, let $\mathrm{E}|\epsilon_t| < \infty$ and the PDF of $\epsilon_t$ be positive everywhere in $\mathbb{R}$. Then:

1. If $g$ is a network without linear shortcuts as defined in (7) and (9), then $\{x_t\}$ is geometrically ergodic and $\{\xi_t\}$ is asymptotically stationary.

2. If $g$ is a network with linear shortcuts as defined in (8) and additionally $c(z) \neq 0$, $\forall z \in \mathbb{C}: |z| \le 1$, then $\{x_t\}$ is geometrically ergodic and $\{\xi_t\}$ is asymptotically stationary.
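Part 2 of Theorem 1 gives a practical post-training check: an AR-NN with shortcut weights $c_1, \dots, c_p$ is stationary if all roots of $c(z) = 1 - c_1 z - \dots - c_p z^p$ lie outside the unit circle. A small sketch (the helper name is mine):

```python
import numpy as np

def shortcut_stationary(c):
    """Check the shortcut condition of Theorem 1: c(z) = 1 - c_1 z - ... - c_p z^p
    must have no root with |z| <= 1, i.e. all roots strictly outside the
    unit circle."""
    coeffs = np.concatenate(([1.0], -np.asarray(c, dtype=float)))  # 1, -c_1, ..., -c_p
    roots = np.polynomial.polynomial.polyroots(coeffs)             # roots of c(z)
    return bool(np.all(np.abs(roots) > 1.0))

print(shortcut_stationary([0.5, 0.3]))  # stable AR(2) shortcut part -> True
print(shortcut_stationary([1.0]))       # random walk shortcut, root at z = 1 -> False
```

This is exactly the usual stability test for a linear AR($p$) process, applied to the shortcut weights alone; the bounded nonlinear part of the network plays no role in the check.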
The time series $\{\xi_t\}$ remains stationary if we allow for more than one hidden layer (multi-layer perceptron, MLP) or non-linear output units, as long as the overall mapping has bounded range. An MLP with shortcut connections combines a (possibly non-stationary) linear AR(p) process with a non-linear stationary NN part. Thus, the NN part can be used to model non-linear fluctuations around a linear process such as a random walk. The only part of the network that controls whether the overall process is stationary is the set of linear shortcut connections (if present). If there are no shortcuts, then the process is always stationary. With shortcuts, the usual test for stability of a linear system applies.

4 Integrated Models

An important method in classic time series analysis is to first transform a non-stationary series into a stationary one and then model the remainder by a stationary process. Probably the most popular models of this kind are autoregressive integrated moving average (ARIMA) models, which can be transformed into stationary ARMA processes by simple differencing. Let $\Delta^k$ denote the $k$-th order difference operator:
$\Delta\xi_t = \xi_t - \xi_{t-1}$ (10)
$\Delta^2\xi_t = \Delta(\xi_t - \xi_{t-1}) = \xi_t - 2\xi_{t-1} + \xi_{t-2}$ (11)
$\Delta^k\xi_t = \sum_{n=0}^{k} (-1)^n \binom{k}{n} \xi_{t-n}$ (12)
with $\Delta^1 = \Delta$. E.g., a standard random walk $\xi_t = \xi_{t-1} + \varepsilon_t$ is non-stationary because of its growing variance, but can be transformed into the iid (and hence stationary) noise process $\varepsilon_t$ by taking first differences. If a time series is non-stationary, but can be transformed into a stationary series by taking $k$-th differences, we call the series integrated of order $k$. Standard MLPs or RBFs without shortcuts are asymptotically stationary. It is therefore important to take care that these networks are only used to model stationary processes.
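The random-walk example can be checked directly. A small sketch (synthetic data, not from the paper) differences an order-1 integrated series with numpy and recovers the stationary noise:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.normal(size=10_000)  # iid noise: stationary by construction
xi = np.cumsum(eps)            # random walk xi_t = xi_{t-1} + eps_t, integrated of order 1

# First differences recover the stationary noise (Eq. 10),
# up to the first observation lost by differencing:
d1 = np.diff(xi, n=1)
assert np.allclose(d1, eps[1:])

# Second differences implement Eq. 11: xi_t - 2 xi_{t-1} + xi_{t-2}
d2 = np.diff(xi, n=2)
assert np.allclose(d2, xi[2:] - 2 * xi[1:-1] + xi[:-2])
```

Training a shortcut-free network on `d1` rather than `xi` is exactly the transformation recommended in the text.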
Of course the network can be trained to mimic a non-stationary process on a finite time interval, but the out-of-sample or prediction performance will be poor, because the network inherently cannot capture some important features of the process. One way to overcome this problem is to first transform the process into a stationary series (e.g., by differencing an integrated series) and train the network on the transformed series (Chng et al., 1996). As differencing is a linear operation, this transformation can also easily be incorporated into the network by choosing the shortcut connections and the weights from input to hidden units accordingly.

Assume we want to model an integrated series of integration order $k$, such that
$\Delta^k\xi_t = g(\Delta^k\xi_{t-1}, \ldots, \Delta^k\xi_{t-p}) + \varepsilon_t$
where $\Delta^k\xi_t$ is stationary. By (12) this is equivalent to
$\xi_t = \sum_{n=1}^{k} (-1)^{n-1}\binom{k}{n}\xi_{t-n} + g(\Delta^k\xi_{t-1}, \ldots, \Delta^k\xi_{t-p}) + \varepsilon_t$
$\phantom{\xi_t} = \sum_{n=1}^{k} (-1)^{n-1}\binom{k}{n}\xi_{t-n} + \tilde g(\xi_{t-1}, \ldots, \xi_{t-p-k}) + \varepsilon_t$
which (for $p > k$) can be modeled by an MLP with shortcut connections as defined by (8), where the shortcut weight vector $c$ is fixed to $c_n = (-1)^{n-1}\binom{k}{n}$, with $\binom{k}{n} := 0$ for $n > k$, and $\tilde g$ is such that $\tilde g(\xi_{t-1}, \ldots, \xi_{t-p-k}) = g(\Delta^k x_{t-1})$. This is always possible and can basically be obtained by adding $c$ to all weights between the input and the first hidden layer of $g$.

An AR-NN(p) can thus model integrated series up to integration order $p$. If the order of integration is known, the shortcut weights can either be fixed, or the differenced series can be used as input. If the order is unknown, we can also train the complete network including the shortcut connections and implicitly estimate the order of integration. After training, the final model can be checked for stationarity by looking at the characteristic roots of the polynomial defined by the shortcut connections.

4.1 Fractional Integration

Up to now we have only considered integrated series with positive integer order of integration, i.e., $k \in \mathbb{N}$. In recent years, models with fractional integration order have (again) become very popular.
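The fixed shortcut weights $c_n = (-1)^{n-1}\binom{k}{n}$ encode $k$-th order differencing exactly. The sketch below (the helper name `integration_shortcuts` and the toy series are my own illustrations, not from the paper) verifies the identity $\xi_t = c'(\xi_{t-1},\ldots,\xi_{t-p})' + \Delta^k\xi_t$ numerically:

```python
import numpy as np
from math import comb

def integration_shortcuts(k, p):
    """Shortcut weights c_n = (-1)**(n-1) * C(k, n), zero-padded to length p.

    With these weights the linear shortcut part of network (8) reproduces
    xi_t - Delta^k xi_t, so the hidden layer only has to model the
    stationary k-th differenced series.
    """
    return np.array([(-1) ** (n - 1) * comb(k, n) if n <= k else 0.0
                     for n in range(1, p + 1)])

# Sanity check against direct k-th differencing on a toy order-2 series
k, p = 2, 5
c = integration_shortcuts(k, p)                   # [2, -1, 0, 0, 0]
rng = np.random.default_rng(2)
xi = np.cumsum(np.cumsum(rng.normal(size=50)))    # integrated of order 2

for t in range(p, len(xi)):
    lags = xi[t - 1::-1][:p]                      # (xi_{t-1}, ..., xi_{t-p})
    delta_k = np.diff(xi[: t + 1], n=k)[-1]       # Delta^k xi_t
    assert np.isclose(c @ lags + delta_k, xi[t])
```

For $k = 2$ the weights are $(2, -1, 0, \ldots)$, matching $\xi_t = 2\xi_{t-1} - \xi_{t-2} + \Delta^2\xi_t$ from Eq. (11).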
Series with integration order $0.5 < k < 1$ can be shown to exhibit self-similar or fractal behavior, and have long memory. These types of processes were introduced by Mandelbrot in a series of papers modeling river flows; see, e.g., Mandelbrot & Ness (1968). More recently, self-similar processes were used to model Ethernet traffic by Leland et al. (1994). Some financial time series, such as foreign exchange data, also exhibit long memory and self-similarity.

The fractional differencing operator $\Delta^k$, $k \in [-1, 1]$, is defined by the series expansion
$\Delta^k\xi_t = \sum_{n=0}^{\infty} \frac{\Gamma(-k+n)}{\Gamma(-k)\,\Gamma(n+1)}\,\xi_{t-n}$ (13)
which is obtained from the Taylor series of $(1-z)^k$. For $k > 1$ we first use Equation (12) and then the above series for the fractional remainder. For practical computation, the series (13) is of course truncated at some term $n = N$. An AR-NN(p) model with shortcut connections can approximate the series up to the first $p$ terms.

5 Summary

We have shown that AR-NN models using standard NN architectures without shortcuts are asymptotically stationary. If linear shortcuts between inputs and outputs are included (which many popular software packages have already implemented), then only the weights of the shortcut connections determine whether the overall system is stationary. It is also possible to model many integrated time series with this kind of network.

The asymptotic behavior of AR-NNs is especially important for parameter estimation, for predictions over larger intervals of time, and when using the network to generate artificial time series. Limiting (normal) distributions of parameter estimates are only guaranteed for stationary series. We therefore recommend always transforming a non-stationary series into a stationary one if possible (e.g., by differencing) before training a network on it. Another important aspect of stationarity is that a single trajectory displays the complete probability law of the process.
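The coefficients of the expansion (13) are usually computed with the Gamma-function recursion $w_0 = 1$, $w_n = w_{n-1}(n-1-k)/n$, which follows from the ratio of successive Gamma terms and avoids overflow for large $n$. A sketch (truncation at $N$, as the text prescribes; the convolution-based helper is my own illustration):

```python
import numpy as np

def frac_diff_weights(k, N):
    """First N+1 coefficients of Delta^k from the expansion of (1-z)^k (Eq. 13).

    Uses w_0 = 1, w_n = w_{n-1} * (n - 1 - k) / n, equivalent to
    Gamma(-k+n) / (Gamma(-k) * Gamma(n+1)) but numerically safer.
    """
    w = np.empty(N + 1)
    w[0] = 1.0
    for n in range(1, N + 1):
        w[n] = w[n - 1] * (n - 1 - k) / n
    return w

def frac_diff(xi, k, N):
    """Truncated fractional differencing: sum_{n=0}^{N} w_n * xi_{t-n}."""
    w = frac_diff_weights(k, N)
    return np.convolve(xi, w, mode="valid")  # defined for t >= N only
```

For the integer case $k = 1$ the weights collapse to $(1, -1, 0, \ldots)$, i.e., ordinary first differencing, while $k = 0.5$ gives the slowly decaying long-memory weights $(1, -0.5, -0.125, \ldots)$.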
If we have observed one long enough trajectory of the process, we can (in theory) estimate all interesting quantities of the process by averaging over time. This need not be true for non-stationary processes in general, where some quantities may only be estimated by averaging over several independent trajectories. For example, one might train the network on an available sample and then use the trained network afterwards, driven by artificial noise from a random number generator, to generate new data with properties similar to the training sample. Asymptotic stationarity guarantees that the AR-NN model cannot show "explosive" behavior or growing variance over time.

We are currently working on extensions of this paper in several directions. AR-NN processes can be shown to be strong mixing (the memory of the process vanishes exponentially fast) and to have autocorrelations going to zero at an exponential rate. Another question is a thorough analysis of the properties of parameter estimates (weights) and tests for the order of integration. Finally, we want to extend the univariate results to the multivariate case, with a special interest in cointegrated processes.

Acknowledgement

This research was supported by the Austrian Science Foundation (FWF) under grant SFB#010 ('Adaptive Information Systems and Modeling in Economics and Management Science').

Appendix: Mathematical Proofs

Proof of Lemma 1. It can easily be shown that $\{x_t\}$ is $\varphi$-irreducible if the support of the probability density function (PDF) of $\varepsilon_t$ is the whole real line, i.e., the PDF is positive everywhere in $\mathbb{R}$ (Chan & Tong, 1985). In this case every non-null $p$-dimensional hypercube is reached in $p$ steps with positive probability (and hence every non-null Borel set $A$).
A necessary and sufficient condition for $\{x_t\}$ to be aperiodic is that there exist a set $A$ and a positive integer $n$ such that $P^n(x, A) > 0$ and $P^{n+1}(x, A) > 0$ for all $x \in A$ (Tong, 1990, p. 455). In our case this is true for all $n$ due to the unbounded additive noise.

Proof of Theorem 1. We use the following result from nonlinear time series theory:

Theorem 2 (Chan & Tong, 1985). Let $\{x_t\}$ be defined by (1), (6) and let $G$ be compact, i.e., preserve compact sets. If $G$ can be decomposed as $G = G_h + G_d$, where $G_d(\cdot)$ is of bounded range, $G_h(\cdot)$ is continuous and homogeneous, i.e., $G_h(ax) = aG_h(x)$, the origin is a fixed point of $G_h$ and $G_h$ is uniformly asymptotically stable, $\mathbb{E}|\varepsilon_t| < \infty$ and the PDF of $\varepsilon_t$ is positive everywhere in $\mathbb{R}$, then $\{x_t\}$ is geometrically ergodic.

The noise process $\varepsilon_t$ fulfills the conditions by assumption. Clearly all networks are continuous compact functions. Standard MLPs without shortcut connections and RBFs have a bounded range, hence $G_h \equiv 0$ and $G \equiv G_d$, and the series $\{\xi_t\}$ is asymptotically stationary. If we allow for linear shortcut connections between the inputs and the output, we get
$G_h = c'x, \qquad G_d = \gamma_0 + \sum_i \beta_i \sigma(\alpha_i + a_i'x)$
i.e., $G_h$ is the linear shortcut part of the network, and $G_d$ is a standard MLP without shortcut connections. Clearly, $G_h$ is continuous, homogeneous and has the origin as a fixed point. Hence, the series $\{\xi_t\}$ is asymptotically stationary if $G_h$ is asymptotically stable, i.e., when all characteristic roots of $G_h$ have magnitude less than unity. Obviously the same holds for RBFs with shortcut connections. Note that the model reduces to a standard linear AR(p) model if $G_d \equiv 0$.

References

Brockwell, P. J. & Davis, R. A. (1987). Time Series: Theory and Methods. Springer Series in Statistics. New York, USA: Springer Verlag.

Chan, K. S. & Tong, H. (1985). On the use of the deterministic Lyapunov function for the ergodicity of stochastic difference equations. Advances in Applied Probability, 17, 666-678.

Chng, E. S., Chen, S., & Mulgrew, B.
(1996). Gradient radial basis function networks for nonlinear and nonstationary time series prediction. IEEE Transactions on Neural Networks, 7(1), 190-194.

Husmeier, D. & Taylor, J. G. (1997). Predicting conditional probability densities of stationary stochastic time series. Neural Networks, 10(3), 479-497.

Jones, D. A. (1978). Nonlinear autoregressive processes. Proceedings of the Royal Society London A, 360, 71-95.

Leland, W. E., Taqqu, M. S., Willinger, W., & Wilson, D. V. (1994). On the self-similar nature of ethernet traffic (extended version). IEEE/ACM Transactions on Networking, 2(1), 1-15.

Mandelbrot, B. B. & Ness, J. W. V. (1968). Fractional brownian motions, fractional noises and applications. SIAM Review, 10(4), 422-437.

Tong, H. (1990). Non-linear time series: A dynamical system approach. New York, USA: Oxford University Press.

Wang, T. & Sheng, Z. (1996). Asymptotic stationarity of discrete-time stochastic neural networks. Neural Networks, 9(6), 957-963.
1998
A Principle for Unsupervised Hierarchical Decomposition of Visual Scenes

Michael C. Mozer
Dept. of Computer Science, University of Colorado, Boulder, CO 80309-0430

ABSTRACT

Structure in a visual scene can be described at many levels of granularity. At a coarse level, the scene is composed of objects; at a finer level, each object is made up of parts, and the parts of subparts. In this work, I propose a simple principle by which such hierarchical structure can be extracted from visual scenes: Regularity in the relations among different parts of an object is weaker than in the internal structure of a part. This principle can be applied recursively to define part-whole relationships among elements in a scene. The principle does not make use of object models, categories, or other sorts of higher-level knowledge; rather, part-whole relationships can be established based on the statistics of a set of sample visual scenes. I illustrate with a model that performs unsupervised decomposition of simple scenes. The model can account for the results from a human learning experiment on the ontogeny of part-whole relationships.

1 INTRODUCTION

The structure in a visual scene can be described at many levels of granularity. Consider the scene in Figure 1a. At a coarse level, the scene might be said to consist of stick man and stick dog. However, stick man and stick dog themselves can be decomposed further. One might describe stick man as having two components, a head and a body. The head in turn can be described in terms of its parts: the eyes, nose, and mouth. This sort of scene decomposition can continue recursively down to the level of the primitive visual features. Figure 1b shows a partial decomposition of the scene in Figure 1a. A scene decomposition establishes part-whole relationships among objects. For example, the mouth (a whole) consists of two parts, the teeth and the lips.
If we assume that any part can belong to only one whole, the decomposition imposes a hierarchical structure over the elements in the scene. Where does this structure come from? What makes an object an object, a part a part? I propose a simple principle by which such hierarchical structure can be extracted from visual scenes and incorporate the principle in a simulation model. The principle is based on the statistics of the visual environment, not on object models or other sorts of higher-level knowledge, or on a teacher to classify objects or their parts.

2 WHAT MAKES A PART A PART?

Parts combine to form objects. Parts are combined in different ways to form different objects and different instances of an object. Consequently, the structural relations among different parts of an object are less regular than is the internal structure of a part. To illustrate, consider Figure 2, which depicts four instances of a box shell and lid. The components of the lid (the top and the handle) appear in a regular configuration, as do the components of the shell (the sides and base), but the relation of the lid to the shell is variable. Thus, configural regularity is an indication that components should be grouped together to form a unit. I call this the regularity principle. Other variants of the regularity principle have been suggested by Becker (1995) and Tenenbaum (1994).

The regularity depicted in Figure 2 is quite rigid: one component of a part always occurs in a fixed spatial position relative to another. The regularity principle can also be cast in terms of abstract relationships such as containment and encirclement. The only difference is the featural representation that subserves the regularity discovery process. In this paper, however, I address primarily regularities that are based on physical features and fixed spatial relationships.
Another generalization of the regularity principle is that it can be applied recursively to suggest not only parts of wholes, but subparts of parts. According to the regularity principle, information is implicit in the environment that can be used to establish part-whole relationships. This information comes in the form of statistical regularities among features in a visual scene. The regularity principle does not depend on explicit labeling of parts or objects.

In contrast, Schyns and Murphy (1992, 1993) have suggested a theory of part ontogeny that presupposes explicit categorization of objects. They propose a homogeneity principle which states that "if a fragment of a stimulus plays a consistent role in categorization, the perceptual parts composing the fragment are instantiated as a single unit in the stimulus representation in memory." Their empirical studies with human subjects find support for the homogeneity principle. Superficially, the homogeneity and regularity principles seem quite different: while the homogeneity principle applies to supervised category learning (i.e., with a teacher to classify instances), the regularity principle applies to unsupervised discovery. But it is possible to transform one learning paradigm into the other. For example, in a category learning task, if only one category is to be learned and if the training examples are all positive instances of the category, then inducing the defining characteristics of the category is equivalent to extracting regularities in the stimulus environment. Thus, category learning in a diverse stimulus environment can be conceptualized as unsupervised regularity extraction in multiple, narrow stimulus environments (each environment being formed by taking all positive instances of a given class).

[Figure 1b diagram: scene -> {stick man, stick dog}; stick man -> {head, body}; head -> {eyes, nose, mouth}; body -> {arm, torso, leg}; mouth -> {lips, teeth}]

FIGURE 1.
(a) A graphical depiction of stick man and his faithful companion, stick dog; (b) a partial decomposition of the scene into its parts.

FIGURE 2. Four different instances of a box with a lid.

There are several other differences between the regularity principle proposed here and the homogeneity principle of Schyns and Murphy, but they are minor. Schyns and Murphy seem to interpret "fragment" more narrowly as spatially contiguous perceptual features. They also don't address the hierarchical nature of part-whole relationships. Nonetheless, the two principles share the notion of using the statistical structure of the visual environment to establish part-whole relations.

3 A FLAT REPRESENTATION OF STRUCTURE

I have incorporated the regularity principle into a neural net that discovers part-whole relations in its environment. Neural nets, having powerful learning paradigms for unsupervised discovery, are well suited for this task. However, they have a fundamental difficulty representing complex, articulated data structures of the sort necessary to encode hierarchies (but see Pollack, 1988, and Smolensky, 1990, for promising advances). I thus begin by describing a novel representation scheme for hierarchical structures that can readily be integrated into a neural net.

The tree structure in Figure 1b depicts one representation of a hierarchical decomposition. The complete tree has as its leaf nodes the primitive visual features of the scene. The tree specifies the relationships among the visual features. There is another way of capturing these relationships, more connectionist in spirit than the tree structure. The idea is to assign to each primitive feature a tag (a scalar in [0, 1]) such that features within a subtree have similar values. For the features of stick man, possible tags might be: eyes .1, nose .2, lips .28, teeth .32, arm .6, torso .7, leg .8.
Denoting the set of all features having tags in $[a, b]$ by $S(a, b)$, one can specify any subtree of the stick man representation. For example, S(0, 1) includes all features of stick man; S(0, .5) includes all features in the subtree whose root is stick man's head, S(.5, 1) his body; S(.25, .35) indicates the parts of the mouth. By a simple algorithm, tags can be assigned to the leaf nodes of any tree such that any subtree can be selected by specifying an appropriate tag range. The only requirement for this algorithm is knowledge of the maximum branching factor. There is no fixed limit to the depth of the tree that can be thus represented; however, the deeper the tree, the finer the tag resolution that will be needed.

The tags provide a "flat" way of representing hierarchical structure. Although the tree is implicit in the representation, the tags convey all information in the tree, and thus can capture complex, articulated structures. The tags in fact convey additional information. For example, in the above feature list, note that lips is closer to nose than teeth is to nose. This information can easily be ignored, but it is still worth observing that the tags carry extra baggage not present in the symbolic tree structure.

It is convenient to represent the tags on a range [0, 2π) rather than [0, 1]. This allows the tag to be identified with a directional, or angular, value. Viewed as part of a cyclic continuum, the directional tags are homogeneous, in contrast to the linear tags, where tags near 0 and 1 have special status by virtue of being at endpoints of the continuum. Homogeneity results in a more elegant model, as described below. The directional tags also permit a neurophysiological interpretation, albeit speculative.
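The tag-assignment algorithm is only sketched in the text. One way to realize it (the nested-tuple tree encoding is my own illustration, not the paper's implementation) is to split each node's interval into b equal parts, where b is the maximum branching factor:

```python
def max_branching(tree):
    """Maximum branching factor of a nested-tuple tree (leaves are strings)."""
    if not isinstance(tree, tuple):
        return 1
    return max(len(tree), *(max_branching(c) for c in tree))

def assign_tags(tree, lo=0.0, hi=1.0, b=None, tags=None):
    """Assign each leaf a scalar tag in [lo, hi) so that every subtree
    occupies a contiguous tag range and can be selected as S(lo, hi)."""
    if tags is None:
        tags = {}
    if b is None:
        b = max_branching(tree)
    if isinstance(tree, tuple):          # internal node: recurse into children
        step = (hi - lo) / b
        for i, child in enumerate(tree):
            assign_tags(child, lo + i * step, lo + (i + 1) * step, b, tags)
    else:                                # leaf: tag = midpoint of its interval
        tags[tree] = (lo + hi) / 2
    return tags

# Stick-man fragment from Figure 1b: head -> (eyes, nose, mouth), mouth -> (lips, teeth)
stick_man = (("eyes", "nose", ("lips", "teeth")), ("arm", "torso", "leg"))
tags = assign_tags(stick_man)

def S(a, b_):
    """Select all features whose tags fall in [a, b_]."""
    return {f for f, t in tags.items() if a <= t <= b_}
```

With branching factor 3, the head subtree occupies [0, 1/3) and the body [1/3, 2/3), so `S(0, 1/3)` returns exactly the head's features; the exact tag values differ from the illustrative .1/.2/.28 list in the text, but the range-selection property is the same.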
It has been suggested that synchronized oscillatory activities in the nervous system can be used to convey information above and beyond that contained in the average firing rate of individual neurons (e.g., Eckhorn et al., 1988; Gray et al., 1989; von der Malsburg, 1981). These oscillations vary in their phase, the relative offset of the bursts. The directional tags could map directly to phases of oscillations, providing a means of implementing the tagging in neocortex.

4 REGULARITY DISCOVERY

Many learning paradigms allow for the discovery of regularity. I have used an autoencoder architecture (Plaut, Nowlan, & Hinton, 1986) that maps an input pattern (a representation of visual features in a scene) to an output pattern via a small layer of hidden units. The goal of this type of architecture is for the network to reproduce the input pattern over the output units. The task requires discovery of regularities because the hidden layer serves as an encoding bottleneck that limits the representational capacity of the system. Consequently, stronger regularities (the most common patterns) will be encoded over the weaker.

5 MAGIC

We now need to combine the autoencoder architecture with the notion of tags such that regularity of feature configurations in the input will increase the likelihood that the features will be assigned the same tags. This goal can be achieved using a model we developed for segmenting an image into different objects using supervised learning. The model, MAGIC (Mozer, Zemel, Behrmann, & Williams, 1992), was trained on images containing several visual objects, and its task was to tag features according to which object they belonged. A teacher provided the target tags.

Each unit in MAGIC conveys two distinct values: a probability that a feature is present, which I will call the feature activity, and a tag associated with the feature. The tag is a directional (angular) value, of the sort suggested earlier.
(The tag representation is in reality a complex number whose direction corresponds to the directional value and whose magnitude is related to the unit's confidence in the direction. As this latter aspect of the representation is not central to the present work, I discuss it no further.)

The architecture is a two-layer recurrent net. The input or feature layer is a set of spatiotopic arrays (in most simulations having dimensions 25x25), each array containing detectors for features of a given type: oriented line segments at 0°, 45°, 90°, and 135°. In addition, there is a layer of hidden units. Each hidden unit is reciprocally connected to input from a local spatial patch of the input array; in the current simulations, the patch has dimensions 4x4. For each patch there is a corresponding fixed-size pool of hidden units. To achieve a translation-invariant response across the image, the pools are arranged in a spatiotopic array in which neighboring pools respond to neighboring patches, and the patch-to-pool weights are constrained to be the same at all locations in the array. There are interlayer connections, but no intralayer connections.

The images presented to MAGIC consist of an arrangement of features over the input array. The feature activity is clamped on (i.e., the feature is present), and the initial directional tag of the feature is set at random. Feature unit activities and tags feed to the hidden units, which in turn feed back to the feature units. Through a relaxation process, the system settles on an assignment of tags to the feature units (as well as to the hidden units, although read-out from the model concerns only the feature units). MAGIC is a mean-field approximation to a stochastic network of directional units with binary-gated outputs (Zemel, Williams, & Mozer, 1995).
This means that a mean-field energy functional can be written that expresses the network state and controls the dynamics; consequently, MAGIC is guaranteed to converge to a stable pattern of tags.

Each hidden unit detects a spatially local configuration of features, and it acts to reinstate a pattern of tags over the configuration. By adjusting its incoming and outgoing weights during training, the hidden unit is made to respond to configurations that are consistently tagged in the training set. For example, if the training set contains many corner junctions where horizontal and vertical lines come to a point, and if the teacher tags all features composing these lines as belonging to the same object, then a hidden unit might learn to detect this configuration, and when it does so, to force the tags of the component features to be the same.

In our earlier work, MAGIC was trained to map the feature activity pattern to a target pattern of feature tags, where there was a distinct tag for each object in the image. In the present work, the training objective is rather to impose uniform tags over the features. Additionally, the training objective encourages MAGIC to reinstate the feature activity pattern over the feature units; that is, the hidden units must encode and propagate information back to the feature units that is sufficient to specify the feature activities (if the feature activities weren't clamped). With this training criterion, MAGIC becomes a type of autoencoder.

[Figure 3 panels: network state at iterations 1, 2, 4, 6, and 11, with a directional tag spectrum]

FIGURE 3. The state of MAGIC as processing proceeds for an image composed of a pair of lines made out of horizontal and vertical line segments. The coloring of a segment represents the directional tag. The segments belonging to a line are randomly tagged initially; over processing iterations, these tags are brought into alignment.
The key property of MAGIC is that it can assign a feature configuration the same tag only if it learns to encode the configuration. If an arrangement is not encoded, there will be no force to align the feature tags. Further, fixed weak inhibitory connections between every pair of feature units serve to spread the tags apart if the force to align them is not strong enough. Note that this training paradigm does not require a teacher to tag features as belonging to one part or another. MAGIC will try to tag all features as belonging to the same part, but it is able to do so only for configurations of features that it is able to encode. Consequently, highly regular and recurring configurations will be grouped together, and irregular configurations will be pulled apart. The strength of grouping will be proportional to the degree of regularity.

6 SIMULATION EXPERIMENTS

To illustrate the behavior of the model, I show a simple simulation in which MAGIC is trained on pairs of lines, one vertical and one horizontal. Each line is made up of 6 collinear line segments. The segments are primitive input features of the model. The two lines may appear in different positions relative to one another. Hence, the strongest regularity is in the segments that make up a line, not in the junction between the lines. When trained with two hidden units, MAGIC has sufficient resources to encode the structure within each line, but not the relationships among the lines; because this structure is not encoded, the features of the two lines are not assigned the same tags (Figure 3). Although each "part" is made up of features having a uniform orientation and in a collinear arrangement, the composition and structure of the parts is immaterial; MAGIC's performance depends only on the regularity of the configurations. In the next set of simulations, MAGIC discovers regularities of a more arbitrary nature.
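The qualitative behavior in the two-lines simulation can be caricatured in a few lines of code. The sketch below is emphatically not MAGIC's mean-field dynamics: the groups are given explicitly (standing in for configurations the hidden units have learned to encode), and the alignment and inhibition forces are hand-written Kuramoto-style terms.

```python
import numpy as np

def relax_tags(groups, n, steps=200, align=0.5, repel=0.02, seed=0):
    """Toy phase relaxation in the spirit of MAGIC's settling process.

    groups: lists of feature indices; members of a group are pulled
    toward their group's circular mean direction, while a weak global
    repulsion spreads unaligned tags apart. Illustrative caricature only.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)  # random initial directional tags
    for _ in range(steps):
        force = np.zeros(n)
        for g in groups:
            # alignment force toward the group's circular mean direction
            mean = np.angle(np.exp(1j * theta[g]).mean())
            force[g] += align * np.sin(mean - theta[g])
        # weak inhibition: push tags away from the global mean direction
        gmean = np.angle(np.exp(1j * theta).mean())
        force += repel * np.sin(theta - gmean)
        theta = (theta + force) % (2 * np.pi)
    return theta

# Two "lines" of four segments each: tags align within a line,
# but need not agree across lines.
theta = relax_tags([[0, 1, 2, 3], [4, 5, 6, 7]], n=8)
```

After settling, each group's tags are tightly clustered, mirroring the alignment within each line shown in Figure 3.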
6.1 MODELING HUMAN LEARNING OF PART-WHOLE RELATIONS

Schyns and Murphy (1992) studied the ontogeny of part-whole relationships by training human subjects on a novel class of objects and then examining how the subjects decomposed the objects into their parts. I briefly describe their experiment, followed by a simulation that accounts for their results.

In the first phase of the experiment, subjects were shown 3-D gray-level "martian rocks" on a CRT screen. The rocks were constructed by deforming a sphere, resulting in various bumps or protrusions. Subjects watched the rocks rotating on the screen, allowing them to view each rock from all sides. Subjects were shown six instances, all of which were labeled "M1 rocks", and were then tested to determine whether they could distinguish M1 rocks from other rocks. Subjects continued training until they performed correctly on this task. Every M1 rock was divided into octants; the protrusions on seven of the octants were generated randomly, and the protrusions on the last octant were the same for all M1 rocks. Two groups of subjects were studied. The A group saw M1 rocks all having part A; the B group saw M1 rocks all having part B. Following training, subjects were asked to delineate the parts they thought were important on various exemplars. Subjects selected the target part from the category on which they were trained 93% of the time, and the alternative target (the target from the other category) only 8% of the time, indicating that the learning task made a part dramatically more salient.

To model this phase of the experiment, I generated two-dimensional contours of the same flavor as Schyns and Murphy's martian rocks (Figure 4). Each rock (call it a "venusian rock" for distinction) can be divided into four quadrants or parts. Two groups of venusian rocks were generated.
Rocks of category A all contained part A (left panel, Figure 4); rocks of category B contained part B (center panel, Figure 4). One network was trained on six exemplars of category A rocks, another network was trained on six exemplars of category B rocks. Then, with learning turned off, both networks were tested on five presentations each of twelve new exemplars, six each of categories A and B.

Just as the human subjects were instructed to delineate parts, we must ask MAGIC to do the same. One approach would be to run the model with a test stimulus and, once it settles, select all features having directional tags clustered tightly together as belonging to the same part. However, this requires specifying and tuning a clustering procedure. To avoid this additional step, I simply compared how tightly clustered the tags of the target part were relative to those of the alternative target. I used a directional variance measure that yields a value of 0 if all tags are identical and 1 if the tags are distributed uniformly over the directional spectrum. By this measure, the variance was .30 for the target part and .68 for the alternative target (F(1,118) = 322.0, p < .001), indicating that the grouping of features of the target part was significantly stronger. This replicates, at least qualitatively, the finding of Schyns and Murphy.

In a second phase of Schyns and Murphy's experiment, subjects were trained on category C rocks, which were formed by adjoining parts A and B and generating the remaining six octants at random. Following training, subjects were again asked to delineate parts. All subjects delineated A and B as distinct parts. In contrast, a naive group of subjects who were trained on category C alone always grouped A and B together as a single part. To model this phase, I generated six category C venusian rocks that had both parts A and B (right panel, Figure 4).
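The text does not spell out the exact formula of the directional variance measure; the standard circular-statistics quantity with exactly the stated endpoints (0 for identical tags, 1 for a uniform spread) is one minus the mean resultant length. A sketch under that assumption:

```python
import numpy as np

def directional_variance(theta):
    """Circular variance of directional tags theta (radians).

    Returns 0 when all tags are identical and approaches 1 when the
    tags are spread uniformly over the directional spectrum. This is
    the standard circular-statistics measure V = 1 - |mean resultant|;
    the paper's exact formula is unspecified, so treat this as one
    plausible instantiation.
    """
    return 1.0 - np.abs(np.exp(1j * np.asarray(theta)).mean())

print(directional_variance([0.7] * 8))  # identical tags: variance 0
print(directional_variance(np.linspace(0, 2 * np.pi, 8, endpoint=False)))  # uniform spread: variance ~1
```

A tightly grouped part (like the trained target part) thus scores near 0, and a part whose feature tags were pulled apart scores near 1.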
The versions of MAGIC that had been trained on category A and B rocks alone were now trained on category C rocks. As a control condition, a third version of MAGIC was trained from scratch on category C rocks alone. I compared the tightness of clustering of the combined A-B part for the first two nets to that of the third. Using the same variance measure as above, the nets that first received training on parts A and B alone yielded a variance of .57, and the net that was trained only on the combined A-B part yielded a variance of .47 (F(1,88) = 7.02, p < .02). One cannot directly compare the variance of the A-B part to that of the A and B parts alone, because the measure is structured such that parts with more features always yield larger variances. However, one can compare the two conditions using the relative variance of the combined A-B part to the A and B parts alone. This yielded the same outcome as before (.21 for the first two nets, .12 for the third net, F(1,88) = 5.80, p < .02). Thus, MAGIC is also able to account for the effects of prior learning on part ontogeny.

FIGURE 4. Three examples of the martian rock stimuli used to train MAGIC. From left to right, the rocks are of categories A, B, and C. The lighter regions are the contours that define rocks of a given category.

7 CONCLUSIONS

The regularity principle proposed in this work seems consistent with the homogeneity principle proposed earlier by Schyns and Murphy (1992, 1993). Indeed, MAGIC is able to model Schyns and Murphy's data using an unsupervised training paradigm, although Schyns and Murphy framed their experiment as a classification task. This work is but a start at modeling the development of part-whole hierarchies based on perceptual experience. MAGIC requires further elaboration, and I am somewhat skeptical that it is sufficiently powerful in its present form to be pushed much further.
The main issue restricting it is the representation of input features. The oriented-line-segment features are certainly too primitive and inflexible a representation. For example, MAGIC could not be trained to recognize the lid and shell of Figure 2 because it encodes the orientation of the features with respect to the image plane, not with respect to one another. Minimally, the representation requires some version of scale and rotation invariance. Perhaps the most interesting computational issue raised by MAGIC is how the pattern of feature tags is mapped into an explicit part-whole decomposition. This involves clustering together the similar tags as a unit, or possibly selecting all tags in a given range. To do so requires specification of additional parameters that are external to the model (e.g., how tight the cluster should be, how broad the range should be, around what tag direction it should be centered). These parameters are deeply related to attentional issues, and a current direction of research is to explore this relationship.

8 ACKNOWLEDGEMENTS

This research was supported by NSF PYI award IRI-9058450 and grant 97-18 from the McDonnell-Pew Program in Cognitive Neuroscience.

9 REFERENCES

Becker, S. (1995). JPMAX: Learning to recognize moving objects as a model-fitting problem. In G. Tesauro, D. S. Touretzky, & T. K. Leen (Eds.), Advances in Neural Information Processing Systems 7 (pp. 933-940). Cambridge, MA: MIT Press.

Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M., & Reitboeck, H. J. (1988). Coherent oscillations: A mechanism of feature linking in the visual cortex? Biological Cybernetics, 60, 121-130.

Gray, C. M., Koenig, P., Engel, A. K., & Singer, W. (1989). Oscillatory responses in cat visual cortex exhibit intercolumnar synchronization which reflects global stimulus properties. Nature (London), 338, 334-337.

Mozer, M. C., Zemel, R. S., Behrmann, M., & Williams, C. K. I. (1992).
Learning to segment images using dynamic feature binding. Neural Computation, 4, 650-666.

Plaut, D. C., Nowlan, S., & Hinton, G. E. (1986). Experiments on learning by back propagation (Technical Report CMU-CS-86-126). Pittsburgh, PA: Carnegie-Mellon University, Department of Computer Science.

Pollack, J. B. (1988). Recursive auto-associative memory: Devising compositional distributed representations. In Proceedings of the Tenth Annual Conference of the Cognitive Science Society (pp. 33-39). Hillsdale, NJ: Erlbaum.

Schyns, P. G., & Murphy, G. L. (1992). The ontogeny of units in object categories. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society (pp. 197-202). Hillsdale, NJ: Erlbaum.

Schyns, P. G., & Murphy, G. L. (1993). The ontogeny of transformable part representations in object concepts. In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society (pp. 917-922). Hillsdale, NJ: Erlbaum.

Smolensky, P. (1990). Tensor product variable binding and the representation of symbolic structures in connectionist networks. Artificial Intelligence, 46, 159-216.

Tenenbaum, J. B. (1994). Functional parts. In A. Ram & K. Eiselt (Eds.), Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society (pp. 864-869). Hillsdale, NJ: Erlbaum.

von der Malsburg, C. (1981). The correlation theory of brain function (Internal Report 81-2). Goettingen: Department of Neurobiology, Max Planck Institute for Biophysical Chemistry.

Zemel, R. S., Williams, C. K. I., & Mozer, M. C. (1995). Lending direction to neural networks. Neural Networks, 8, 503-512.
1998
Gradient Descent for General Reinforcement Learning

Leemon Baird leemon@cs.cmu.edu www.cs.cmu.edu/~leemon Computer Science Department 5000 Forbes Avenue Carnegie Mellon University Pittsburgh, PA 15213-3891

Andrew Moore awm@cs.cmu.edu www.cs.cmu.edu/~awm Computer Science Department 5000 Forbes Avenue Carnegie Mellon University Pittsburgh, PA 15213-3891

Abstract

A simple learning rule is derived, the VAPS algorithm, which can be instantiated to generate a wide range of new reinforcement-learning algorithms. These algorithms solve a number of open problems, define several new approaches to reinforcement learning, and unify different approaches to reinforcement learning under a single theory. These algorithms all have guaranteed convergence, and include modifications of several existing algorithms that were known to fail to converge on simple MDPs. These include Q-learning, SARSA, and advantage learning. In addition to these value-based algorithms it also generates pure policy-search reinforcement-learning algorithms, which learn optimal policies without learning a value function. In addition, it allows policy-search and value-based algorithms to be combined, thus unifying two very different approaches to reinforcement learning into a single Value and Policy Search (VAPS) algorithm. And these algorithms converge for POMDPs without requiring a proper belief state. Simulation results are given, and several areas for future research are discussed.

1 CONVERGENCE OF GREEDY EXPLORATION

Many reinforcement-learning algorithms are known that use a parameterized function approximator to represent a value function, and adjust the weights incrementally during learning. Examples include Q-learning, SARSA, and advantage learning. There are simple MDPs where the original form of these algorithms fails to converge, as summarized in Table 1.
For the cases with ✓, the algorithms are guaranteed to converge under reasonable assumptions such as decaying learning rates.

Table 1. Current convergence results for incremental, value-based RL algorithms. The first two columns are for a fixed training distribution (Markov chain and MDP); the rightmost column is for a usually-greedy exploration distribution (MDP or POMDP). ✓ = convergence guaranteed; X = a counterexample is known that either diverges or oscillates between the best and worst possible policies. Residual algorithms changed every X in the first two columns to ✓. The new algorithms in this paper change every X to a ✓.

For the cases with X, there are known counterexamples where the algorithm will either diverge or oscillate between the best and worst possible policies, which have very different values. This can happen even with infinite training time and slowly-decreasing learning rates (Baird, 95; Gordon, 96). Each X in the first two columns can be changed to a ✓ and made to converge by using a modified form of the algorithm, the residual form (Baird, 95). But this is only possible when learning with a fixed training distribution, and that is rarely practical. For most large problems, it is useful to explore with a policy that is usually-greedy with respect to the current value function, and that changes as the value function changes. In that case (the rightmost column of the chart), the current convergence guarantees are not very good. One way to guarantee convergence in all three columns is to modify the algorithm so that it is performing stochastic gradient descent on some average error function, where the average is weighted by state-visitation frequencies for the current usually-greedy policy. Then the weighting changes as the policy changes. It might appear that this gradient is difficult to compute. Consider Q-learning exploring with a Boltzmann distribution that is usually greedy with respect to the learned Q function.
It seems difficult to calculate gradients, since changing a single weight will change many Q values, changing a single Q value will change many action-choice probabilities in that state, and changing a single action-choice probability may affect the frequency with which every state in the MDP is visited. Although this might seem difficult, it is not. Surprisingly, unbiased estimates of the gradients of visitation distributions with respect to the weights can be calculated quickly, and the resulting algorithms can put a ✓ in every case in Table 1.

2 DERIVATION OF THE VAPS EQUATION

Consider a sequence of transitions observed while following a particular stochastic policy on an MDP. Let s_t = {x_0, u_0, R_0, x_1, u_1, R_1, ..., x_{t-1}, u_{t-1}, R_{t-1}, x_t, u_t, R_t} be the sequence of states, actions, and reinforcements up to time t, where performing action u_t in state x_t yields reinforcement R_t and a transition to state x_{t+1}. The stochastic policy may be a function of a vector of weights w. Assume the MDP has a single start state named x_0. If the MDP has terminal states, and x_t is a terminal state, then x_{t+1} = x_0. Let S_t be the set of all possible sequences from time 0 to t. Let e(s_t) be a given error function that calculates an error on each time step, such as the squared Bellman residual at time t, or some other error occurring at time t. If e is a function of the weights, then it must be a smooth function of the weights. Consider a period of time starting at time 0 and ending with probability P(end | s_t) after the sequence s_t occurs. The probabilities must be such that the expected squared period length is finite.
Let B be the expected total error during that period, where the expectation is weighted according to the state-visitation frequencies generated by the given policy:

B = Σ_{T=0}^{∞} Σ_{s_T ∈ S_T} P(period ends at time T after trajectory s_T) Σ_{t=0}^{T} e(s_t)    (1)

  = Σ_{t=0}^{∞} Σ_{s_t ∈ S_t} e(s_t) P(s_t)    (2)

where:

P(s_t) = P(u_t | s_t) P(R_t | s_t) Π_{i=0}^{t-1} P(u_i | s_i) P(R_i | s_i) P(x_{i+1} | s_i) [1 − P(end | s_i)]    (3)

Note that on the first line, for a particular s_t, the error e(s_t) will be added in to B once for every sequence that starts with s_t. Each of these terms will be weighted by the probability of a complete trajectory that starts with s_t. The sum of the probabilities of all trajectories that start with s_t is simply the probability of s_t being observed, since the period is assumed to end eventually with probability one. So the second line equals the first. The third line is the probability of the sequence, of which only the P(u_t | x_t) factor might be a function of w. If so, this probability must be a smooth function of the weights and nonzero everywhere. The partial derivative of B with respect to w, a particular element of the weight vector w, is:

∂B/∂w = Σ_{t=0}^{∞} Σ_{s_t ∈ S_t} [ (∂e(s_t)/∂w) P(s_t) + e(s_t) ∂P(s_t)/∂w ]    (4)

      = Σ_{t=0}^{∞} Σ_{s_t ∈ S_t} P(s_t) [ ∂e(s_t)/∂w + e(s_t) Σ_{i=1}^{t} ∂/∂w ln P(u_{i-1} | s_{i-1}) ]    (5)

Space here is limited, and it may not be clear from this short sketch of the derivation, but summing (5) over an entire period does give an unbiased estimate of the gradient of B, the expected total error during a period. An incremental algorithm to perform stochastic gradient descent on B is the weight update given on the left side of Table 2, where the summation over previous time steps is replaced with a trace T_t for each weight. This algorithm is more general than previously-published algorithms of this form, in that e can be a function of all previous states, actions, and reinforcements, rather than just the current reinforcement. This is what allows VAPS to do both value and policy search. Every algorithm proposed in this paper is a special case of the VAPS equation on the left side of Table 2. Note that no model is needed for this algorithm.
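The incremental update just described — replace the sum over previous time steps with a per-weight trace — can be sketched in a few lines. This is an illustrative scalar-weight sketch; the function name and callback arguments are hypothetical, not from the paper:

```python
def vaps_step(w, trace, grad_e, e_value, grad_log_policy, lr):
    """One incremental VAPS-style update for a single scalar weight.

    trace accumulates d/dw ln P(u_{t-1} | s_{t-1}) over the current
    period, so e_value * trace is the policy-gradient part of the
    estimate and grad_e is the value-based part.
    """
    trace = trace + grad_log_policy        # the Delta T_t trace update
    w = w - lr * (grad_e + e_value * trace)
    return w, trace

# Traces are reset to zero at the start of each period (e.g. each trial).
w, trace = 0.0, 0.0
w, trace = vaps_step(w, trace, grad_e=2.0, e_value=1.0,
                     grad_log_policy=0.5, lr=0.1)
```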
The only probability needed in the algorithm is the policy, not the transition probability from the MDP. This is stochastic gradient descent on B, and the update rule is only correct if the observed transitions are sampled from trajectories found by following the current, stochastic policy.

Table 2. The general VAPS algorithm (left), and several instantiations of it (right). This single algorithm includes both value-based and policy-search approaches and their combination, and gives guaranteed convergence in every case.

Δw_t = −α [ ∂/∂w e(s_t) + e(s_t) T_t ]
ΔT_t = ∂/∂w ln P(u_{t-1} | s_{t-1})

e_SARSA(s_t) = ½ E²[ R_{t-1} + γ Q(x_t, u_t) − Q(x_{t-1}, u_{t-1}) ]
e_Q-learning(s_t) = ½ E²[ R_{t-1} + γ max_u Q(x_t, u) − Q(x_{t-1}, u_{t-1}) ]
e_advantage(s_t) = ½ E²[ (R_{t-1} + γ max_u A(x_t, u)) / K + (1 − 1/K) max_u A(x_{t-1}, u) − A(x_{t-1}, u_{t-1}) ]
e_value-iteration(s_t) = ½ [ max_u E[R_{t-1} + γ V(x_t)] − V(x_{t-1}) ]²
e_SARSA-policy(s_t) = (1 − β) e_SARSA(s_t) + β (b − γ^t R_t)

Both e and P should be smooth functions of w, and for any given w vector, e should be bounded. The algorithm is simple, but actually generates a large class of different algorithms depending on the choice of e and when the trace is reset to zero. For a single sequence, sampled by following the current policy, the sum of Δw along the sequence will give an unbiased estimate of the true gradient, with finite variance. Therefore, during learning, if weight updates are made at the end of each trial, and if the weights stay within a bounded region, and the learning rate approaches zero, then B will converge with probability one. Adding a weight-decay term (a constant times the 2-norm of the weight vector) onto B will prevent weight divergence for small initial learning rates. There is no guarantee that a global minimum will be found when using general function approximators, but at least it will converge. This is true for backprop as well.
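The right-hand column of Table 2 is easy to instantiate once Q estimates are in hand. A sketch of three of the error functions, using single-sample estimates in place of the expectations (so these are noisy versions of the tabled quantities, and the function names are mine):

```python
def e_sarsa(r_prev, gamma, q_next, q_prev):
    """Half the squared SARSA Bellman residual."""
    return 0.5 * (r_prev + gamma * q_next - q_prev) ** 2

def e_q_learning(r_prev, gamma, q_next_all, q_prev):
    """Q-learning form: bootstrap from the greedy next action."""
    return 0.5 * (r_prev + gamma * max(q_next_all) - q_prev) ** 2

def e_sarsa_policy(e_sarsa_val, beta, b, gamma, t, r_t):
    """Blend of the Bellman residual with a pure policy-search term:
    beta = 0 is purely value-based, beta = 1 is pure policy search."""
    return (1 - beta) * e_sarsa_val + beta * (b - gamma ** t * r_t)
```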
3 INSTANTIATING THE VAPS ALGORITHM

Many reinforcement-learning algorithms are value-based; they try to learn a value function that satisfies the Bellman equation. Examples are Q-learning, which learns a value function, actor-critic algorithms, which learn a value function and the policy which is greedy with respect to it, and TD(1), which learns a value function based on future rewards. Other algorithms are pure policy-search algorithms; they directly learn a policy that returns high rewards. These include REINFORCE (Williams, 1988), backprop through time, learning automata, and genetic algorithms. The algorithms proposed here combine the two approaches: they perform Value And Policy Search (VAPS). The general VAPS equation is instantiated by choosing an expression for e. This can be a Bellman residual (yielding value-based algorithms), the reinforcement (yielding policy-search algorithms), or a linear combination of the two (yielding Value And Policy Search). The single VAPS update rule on the left side of Table 2 generates a variety of different types of algorithms, some of which are described in the following sections.

3.1 REDUCING MEAN SQUARED RESIDUAL PER TRIAL

If the MDP has terminal states, and a trial is the time from the start until a terminal state is reached, then it is possible to minimize the expected total error per trial by resetting the trace to zero at the start of each trial. Then, a convergent form of SARSA, Q-learning, incremental value iteration, or advantage learning can be generated by choosing e to be the squared Bellman residual, as shown on the right side of Table 2. In each case, the expected value is taken over all possible (x_t, u_t, R_t) triplets, given s_{t-1}. The policy must be a smooth, nonzero function of the weights. So it could not be an ε-greedy policy that chooses the greedy action with probability (1 − ε) and chooses uniformly otherwise. That would cause a discontinuity in the gradient when two Q values in a state were equal.
But the policy could be something that approaches ε-greedy as a positive temperature c approaches zero:

P(u | x) = ε/n + (1 − ε) e^{Q(x,u)/c} / Σ_{u'} e^{Q(x,u')/c}    (6)

where n is the number of possible actions in each state. For each instance in Table 2 other than value iteration, the gradient of e can be estimated using two independent, unbiased estimates of the expected value. For example:

∂/∂w e_SARSA(s_t) ≈ (R_{t-1} + γ Q(x_t, u_t) − Q(x_{t-1}, u_{t-1})) (φγ ∂/∂w Q(x'_t, u'_t) − ∂/∂w Q(x_{t-1}, u_{t-1}))    (7)

When φ = 1, this is an estimate of the true gradient. When φ < 1, this is a residual algorithm, as described in (Baird, 95), and it retains guaranteed convergence, but may learn more quickly than pure gradient descent for some values of φ. Note that the gradient of Q(x,u) at time t uses primed variables. That means a new state and action at time t were generated independently from the state and action at time t−1. Of course, if the MDP is deterministic, then the primed variables are the same as the unprimed. If the MDP is nondeterministic but the model is known, then the model must be evaluated one additional time to get the other state. If the model is not known, then there are three choices. First, a model could be learned from past data, and then evaluated to give this independent sample. Second, the issue could be ignored, simply reusing the unprimed variables in place of the primed variables. This may affect the quality of the learned function (depending on how random the MDP is), but doesn't stop convergence, and may be an acceptable approximation in practice. Third, all past transitions could be recorded, and the primed variables could be found by searching for all the times (x_{t-1}, u_{t-1}) has been seen before, and randomly choosing one of those transitions and using its successor state and action as the primed variables. This is equivalent to learning the certainty-equivalence model and sampling from it, and so is a special case of the first choice.
For extremely large state-action spaces with many starting states, this is likely to give the same result in practice as simply reusing the unprimed variables as the primed variables. Note that when the weights do not affect the policy at all, these algorithms reduce to standard residual algorithms (Baird, 95). It is also possible to reduce the mean squared residual per step, rather than per trial. This is done by making period lengths independent of the policy, so minimizing error per period will also minimize the error per step. For example, a period might be defined to be the first 100 steps, after which the traces are reset, and the state is returned to the start state. Note that if every state-action pair has a positive chance of being seen in the first 100 steps, then this will not just be solving a finite-horizon problem. It will actually be solving the discounted, infinite-horizon problem, by reducing the Bellman residual in every state. But the weighting of the residuals will be determined only by what happens during the first 100 steps. Many different problems can be solved by the VAPS algorithm by instantiating the definition of "period" in different ways.

3.2 POLICY-SEARCH AND VALUE-BASED LEARNING

Figure 1. A POMDP and the number of trials needed to learn it vs. β. A combination of policy-search and value-based RL outperforms either alone.

It is also possible to add a term that tries to maximize reinforcement directly. For example, e could be defined to be e_SARSA-policy rather than e_SARSA from Table 2, and
When P=I, it directly learns a policy that will minimize the expected total discounted reinforcement. The resulting "Q function" may not even be close to containing true Q values or to satisfying the Bellman equation, it will just give a good policy. When P is in between, this algorithm tries to both satisfy the Bellman equation and give good greedy policies. A similar modification can be made to any of the algorithms in Table 2. In the special case where P=I, this algorithm reduces to the REINFORCE algorithm (Williams, 1988). REINFORCE has been rederived for the special case of gaussian action distributions (Tresp & Hofman, 1995), and extensions of it appear in (Marbach, 1998). This case of pure policy search is particularly interesting, because for P=I , there is no need for any kind of model or of generating two independent successors. Other algorithms have been proposed for finding policies directly, such as those given in (Gullapalli, 92) and the various algorithms from learning automata theory summarized in (Narendra & Thathachar, 89). The V APS algorithms proposed here appears to be the first one unifying these two approaches to reinforcement learning, finding a value function that both approximates a Bellman-equation solution and directly optimizes the greedy policy. Figure 1 shows simulation results for the combined algorithm. A run is said to have learned when the greedy policy is optimal for 1000 consecutive trials. The graph shows the average plot of 100 runs, with different initial random weights between ± 10.6 . The learning rate was optimized separately for each p value. R= 1 when leaving state A, R=2 when leaving state B or entering end, and R=O otherwise. y=0.9. The algorithm used was the modified Q-Iearning from Table 2, with exploration as in equation 13, and q>=c= l, b=O, c=O.1. States A and B share the same parameters, so ordinary SARSA or greedy Q-Iearning could never converge, as shown in (Gordon, 96). 
When β = 0 (pure value-based), the new algorithm converges, but of course it cannot learn the optimal policy in the start state, since those two Q values learn to be equal. When β = 1 (pure policy-search), learning converges to optimality, but slowly, since there is no value function caching the results in the long sequence of states near the end. By combining the two approaches, the new algorithm learns much more quickly than either alone. It is interesting that the VAPS algorithms described in the last three sections can be applied directly to a Partially Observable Markov Decision Process (POMDP), where the true state is hidden, and all that is available on each time step is an ambiguous "observation", which is a function of the true state. Normally, an algorithm such as SARSA only has guaranteed convergence when applied to an MDP. The VAPS algorithms will converge even in such cases.

4 CONCLUSION

A new algorithm has been presented. Special cases of it give new algorithms similar to Q-learning, SARSA, and advantage learning, but with guaranteed convergence for a wider range of problems than was previously possible, including POMDPs. For the first time, these can be guaranteed to converge, even when the exploration policy changes during learning. Other special cases allow new approaches to reinforcement learning, where there is a tradeoff between satisfying the Bellman equation and improving the greedy policy. For one MDP, simulation showed that this combined algorithm learned more quickly than either approach alone. This unified theory, combining for the first time both value-based and policy-search reinforcement learning, is of theoretical interest, and was also of practical value for the simulations performed. Future research with this unified framework may be able to empirically or analytically address the old question of when it is better to learn value functions and when it is better to learn the policy directly.
It may also shed light on the new question of when it is best to do both at once.

Acknowledgments

This research was sponsored in part by the U.S. Air Force.

References

Baird, L. C. (1995). Residual algorithms: Reinforcement learning with function approximation. In A. Prieditis & S. Russell (Eds.), Machine Learning: Proceedings of the Twelfth International Conference, 9-12 July. San Francisco, CA: Morgan Kaufmann.

Gordon, G. (1996). Stable fitted reinforcement learning. In G. Tesauro, M. Mozer, & M. Hasselmo (Eds.), Advances in Neural Information Processing Systems 8 (pp. 1052-1058). Cambridge, MA: MIT Press.

Gullapalli, V. (1992). Reinforcement learning and its application to control. Dissertation and COINS Technical Report 92-10, University of Massachusetts, Amherst, MA.

Kaelbling, L. P., Littman, M. L., & Cassandra, A. Planning and acting in partially observable stochastic domains. Artificial Intelligence, to appear. Available at http://www.cs.brown.edu/people/lpk.

Marbach, P. (1998). Simulation-based optimization of Markov decision processes. Thesis LIDS-TH 2429, Massachusetts Institute of Technology.

McCallum, A. (1995). Reinforcement learning with selective perception and hidden state. Dissertation, Department of Computer Science, University of Rochester, Rochester, NY.

Narendra, K., & Thathachar, M. A. L. (1989). Learning automata: An introduction. Englewood Cliffs, NJ: Prentice Hall.

Tresp, V., & Hofman, R. (1995). Missing and noisy data in nonlinear time-series prediction. In F. Girosi, J. Makhoul, E. Manolakos, & E. Wilson (Eds.), Proceedings of Neural Networks for Signal Processing 5 (pp. 1-10). New York: IEEE Signal Processing Society.

Williams, R. J. (1988). Toward a theory of reinforcement-learning connectionist systems. Technical Report NU-CCS-88-3, Northeastern University, Boston, MA.
1998
Vertex Identification in High Energy Physics Experiments

Gideon Dror* Department of Computer Science The Academic College of Tel-Aviv-Yaffo, Tel Aviv 64044, Israel

Halina Abramowicz† David Horn‡ School of Physics and Astronomy Raymond and Beverly Sackler Faculty of Exact Sciences Tel-Aviv University, Tel Aviv 69978, Israel

Abstract

In High Energy Physics experiments one has to sort through a high flux of events, at a rate of tens of MHz, and select the few that are of interest. One of the key factors in making this decision is the location of the vertex where the interaction that led to the event took place. Here we present a novel solution to the problem of finding the location of the vertex, based on two feedforward neural networks with fixed architectures, whose parameters are chosen so as to obtain a high accuracy. The system is tested on simulated data sets, and is shown to perform better than conventional algorithms.

1 Introduction

An event in High Energy Physics (HEP) is the experimental result of an interaction during the collision of particles in an accelerator. The result of this interaction is the production of tens of particles, each of which is ejected in a different direction and energy. Due to the quantum mechanical effects involved, the events differ from one another in the number of particles produced, the types of particles, and their energies. The trajectories of produced particles are detected by a very large and sophisticated detector.

*gideon@server.mta.ac.il †halina@post.tau.ac.il ‡horn@neuron.tau.ac.il

Events are typically produced at a rate of 10 MHz, in conjunction with a data volume of up to 500 kBytes per event. The signal is very small, and is selected from the background by multilevel triggers that perform filtering either through hardware or software.
In the present paper we confront one problem that is of interest in these experiments and is part of the triggering considerations. This is the location of the vertex of the interaction. To be specific, we will use a simulation of data collected by the central tracking detector [1] of the ZEUS experiment [2] at the HEP laboratory DESY in Hamburg, Germany. This detector, placed in a magnetic field, surrounds the interaction point and is sensitive to the path of charged particles. It has a cylindrical shape around the axis, z, where the interaction between the incoming particles takes place. The challenge is to find an efficient and fast method to extract the exact location of the vertex along this axis.

2 The Input Data

An example of an event, projected onto the z = 0 plane, is shown in Figure 1. Only the information relevant to triggering is used and displayed. The relevant points, which denote hits by the outgoing particles on wires in the detector, form five rings due to the concentric structure of the detector. Several slightly curved particle tracks emanating from the origin, which is marked with a + sign, and crossing all five rings, can easily be seen. Each track is made of 30-40 data points. All tracks appear in this projection as arcs, and indeed, when viewed in 3 dimensions, every particle follows a helical trajectory due to the solenoidal magnetic field in the detector.

Figure 1: A typical event projected onto the z = 0 plane. The dots, or hits, have a two-fold ambiguity in the determination of the xy coordinates through which the particle has moved. The correct solutions lie on curved tracks that emanate from the origin.

Each physical hit is represented twice in Fig. 1 due to an inherent two-fold ambiguity in the determination of its xy coordinates. The correct solutions form curved tracks emanating from the origin. Some of those can be readily seen in the data.
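The claim that tracks through the origin project to circular arcs fixed by a curvature and an initial slope can be made concrete with a little geometry. This is an illustrative sketch only, not the detector software; the parametrization by arc length s is my own:

```python
import math

def arc_point(kappa, theta, s):
    """Point at arc length s along a circular track that passes through
    the origin with curvature kappa = 1/R and direction theta there."""
    R = 1.0 / kappa
    # Centre of the circle sits at distance R, perpendicular to the
    # tangent direction theta at the origin.
    cx, cy = -R * math.sin(theta), R * math.cos(theta)
    x = cx + R * math.sin(theta + kappa * s)
    y = cy - R * math.cos(theta + kappa * s)
    return x, y
```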
Due to the limited time available for decision making at the trigger level, the z coordinate is obtained from the difference in arrival times of a pulse at both ends of the CTD and is available for only a fraction of these points. The hit resolution in xy is ≈ 230 μm, while that of z-by-timing is ≈ 4 cm. The quality of the z coordinate information is exemplified in Figure 2. Figure 2(a) shows points forming a track of a single particle in the z = 0 projection. Since the corresponding track forms a helix with small curvature, one expects a linear dependence of the z coordinate of the hits on their radial position, r = √(x² + y²). Figure 2(b) compares the values of r with the measured z values for these points. The scatter of the data around the linear regression fit is considerable.

Figure 2: A typical example of uncertainties in the measured z values: (a) a single track taken from the event shown in Figure 1, (b) the z coordinate vs. r = √(x² + y²), the distance from the z axis, for the data points shown in (a). The full line is a linear regression fit.

3 The Network

Our network is based on step-wise changes in the representation of the data, moving from the input points, to local line segments, and to global arcs. The nature of the data and the problem suggest it is best to separate the treatment of the xy coordinates from that of the z coordinate. Two parallel networks, which perform entirely different computations, form our final system. The first network, which handles the xy information, is responsible for constructing arcs that correctly identify some of the particle tracks in the event. The second network uses this information to evaluate the z location of the point where all tracks meet.
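The linear fit in Figure 2(b) is ordinary least squares of z against r. A sketch of that fit (illustrative; the experiment's actual fitting code is not described here):

```python
def fit_z_vs_r(r_vals, z_vals):
    """Least-squares line z = a + b*r relating a hit's radial position
    to its noisy z-by-timing measurement, as in Figure 2(b)."""
    n = len(r_vals)
    mr = sum(r_vals) / n
    mz = sum(z_vals) / n
    sxx = sum((r - mr) ** 2 for r in r_vals)
    sxz = sum((r - mr) * (z - mz) for r, z in zip(r_vals, z_vals))
    b = sxz / sxx
    return mz - b * mr, b          # intercept a, slope b
```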
3.1 Arc Identification Network

The arc identification network processes information in a fashion akin to the way visual information is processed by the primary visual system [3]. The input layer for this network is made of a large number of neurons (several tens of thousands) and corresponds to the function of the retina. Each input neuron has its distinct receptive field. The sum of all fields covers completely the relevant domain in the xy plane. This domain has 5 concentric rings, which show up in Figure 1. The total area of the rings is about 5000 cm², and covering it with 100000 input neurons leads to satisfactory resolution. A neuron in the input level fires when a hit is present in its receptive field. We shall label each input neuron by the (xy) coordinates of the center of its receptive field. Neurons of the second layer are line segment detectors. Each second layer neuron is labeled by (XYα), where (X, Y) are the coordinates of the center of the segment and α denotes its orientation. The activation of second layer neurons is given by

V_{XYα} = g( Σ_{xy} J_{XYα,xy} V_{xy} − θ₂ )    (1)

J_{XYα,xy} = { 1 if r_⊥ < 0.5 cm ∧ r_∥ < 2 cm;  −1 if 0.5 cm < r_⊥ < 1 cm ∧ r_∥ < 2 cm;  0 otherwise }    (2)

where g(x) is the standard Heaviside step function. r_∥ and r_⊥ are the parallel and perpendicular distances between (X, Y) and (x, y) with respect to the axis of the line segment, defined by α. It is important to note that at this level, values of the threshold θ₂ which are slightly lower than optimum are preferable, taking the risk of obtaining superfluous line segments in order to reduce the probability of missing one. Superfluous line segments are filtered out very efficiently in higher layers. Figure 3 represents the output of the second layer neurons for the input illustrated by the event of Figure 1.
An active second layer neuron (XYα) is represented in this figure by a line segment centered at the point (X, Y) making an angle α with the x axis. The length of the line segments is immaterial and was chosen only for the purpose of visual clarity.

Figure 3: Representation of the activity of second layer neurons XYα for the input of figure 1, obtained by plotting the appropriate line segments in the xy plane. At some XY locations several line segments with different directions occur due to the rather low threshold parameter used, θ₂ = 4.

Neurons of the third layer transform the representation of local line segments into local arc segments. An arc which passes through the origin is uniquely defined by its radius of curvature R and its slope at the origin. Thus, each third layer neuron is labeled by κθi, where |κ| = 1/R is the curvature, the sign of κ determines the orientation of the arc, and 1 ≤ i ≤ 5 is an index which relates each arc segment to the ring it belongs to. The mapping between second and third layers is based on a winner-take-all mechanism. Namely, for a given arc segment, we take the line segment which is closest to being tangent to it. Denoting the average radius of ring i (i = 1, 2, ..., 5) by rᵢ and using βᵢ = sin⁻¹(κrᵢ/2), the final expression for the activation of the third layer neurons is

V_κθi = max_XYα e^(−δ²) cos²(θ − 2βᵢ − α),    (3)

where δ = δ(X, Y, κ, θ, i) = √( (X − rᵢ cos(θ − βᵢ))² + (Y − rᵢ sin(θ − βᵢ))² ) is simply the distance of the center of the receptive field of the (XYα) neuron from the (κθ) arc.

The fourth layer is the last one in the arc identification network. Neurons belonging to this layer are global arc detectors. In other words, they detect projected tracks in the z = 0 plane.
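The geometry behind equation (3) can be checked numerically: for an arc through the origin with radius R = 1/|κ| and slope θ at the origin, the point of the arc at ring radius rᵢ sits at polar angle θ − βᵢ, with βᵢ = sin⁻¹(rᵢ/2R). A small self-contained verification (the sign and orientation conventions here are one consistent choice, not necessarily the authors'; the numbers are illustrative):

```python
import numpy as np

def arc_point(kappa, theta, r_i):
    """Point of the arc (kappa, theta) at distance r_i from the origin."""
    beta = np.arcsin(kappa * r_i / 2.0)     # beta_i = asin(r_i / 2R)
    return r_i * np.array([np.cos(theta - beta), np.sin(theta - beta)])

kappa, theta, r_i = 1.0 / 50.0, 0.7, 30.0   # R = 50 cm, ring at 30 cm
R = 1.0 / kappa
p = arc_point(kappa, theta, r_i)
# The arc's circle passes through the origin; its centre lies perpendicular
# to the slope direction theta (one of the two possible orientations):
centre = R * np.array([np.sin(theta), -np.cos(theta)])
print(round(float(np.linalg.norm(p - centre)), 6))   # 50.0: p lies on the circle
```

Since the distance from the reconstructed point to the circle's centre equals R exactly, the δ of equation (3) measures how far a line-segment neuron's receptive field sits from the candidate arc.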
A fourth layer neuron is denoted by κθ, where κ and θ have the previous meaning, now describing global arcs. Fourth layer neurons are connected to third layer neurons in a simple fashion,

V_κθ = g( Σ_κ'θ'i δ_κκ' δ_θθ' V_κ'θ'i − θ₄ ).    (4)

Figure 4 represents the activity of fourth layer neurons. Each active neuron κθ is equivalent in the xy plane to one arc appearing in the figure.

Figure 4: Representation of the activity of fourth layer neurons κθ for the input of figure 1, obtained by plotting the appropriate arcs in the xy plane. The arcs are not precisely congruent with the activity of the input layer, which is also shown, due to the finite widths which were used, Δκ = 0.004 and Δθ = π/20. This figure was produced with θ₄ = 3.

3.2 z Location Network

The architecture of the second network is identical in structure to the first one, although its computational task is different. We will use an identical labeling system for its neurons, but denote their activities by v_xy; the latter assume continuous values in this network. A first layer neuron of the z-location network receives its input from the same receptive field as its corresponding neuron in the first network. Its value, v_xy, is the mean of the z values of the points within its receptive field. If no z values are available for these points, a null value is assigned to it. The second layer neurons compute the mean value v_XYα = ⟨v_xy⟩ of the z coordinate of the first layer neurons in their receptive field, averaging over all neurons within the section {xy : |(x − X) sin α − (y − Y) cos α| < 0.5 cm ∧ (x − X)² + (y − Y)² < 4 cm²}, which corresponds to the excitatory part of the synaptic connections of equation (2). If null values appear within that section they are disregarded by the averaging procedure.
If all values are null, v_XYα is assigned a null value too. This z averaging procedure is similarly propagated to the third layer neurons. The fourth layer neurons evaluate the z value of the origin of each arc identified by the first network. This is performed by a simple linear extrapolation. The final z estimate of the vertex, z_net, which should be the common origin of all arcs, is calculated by averaging the outputs of all active fourth layer neurons.

4 Results

In order to test the network, we ran it over a set of 1000 events generated by a Monte-Carlo simulator as well as over a sample of physical events taken from the ZEUS experiment at the HEP laboratory DESY in Hamburg. For the former set we compared the estimate of the net, z_net, with the nominal location of the vertex z, whereas for the real events in the latter set, we compared it with an estimate z_rec obtained by the full reconstruction algorithm, which runs off-line and uses all available data. Results of the two tests can be compared since it is well established that the result of the full reconstruction algorithm is within 1 mm of the exact location of the vertex.

Figure 5: Distribution of Δz = z_estimate − z_exact values for two types of estimates: (a) the one proposed in this paper (⟨Δz⟩ = −2.7 ± 0.2, σ = 6.1 ± 0.2) and (b) the one based on a commonly used histogram method (⟨Δz⟩ = 1.9 ± 0.3, σ = 8.4 ± 0.3).

We also compared our results with those of an algorithmic method used for triggering at ZEUS [4]. We shall refer to this method as the 'histogram method'. The performance of the two methods was compared on a sample of 1000 Monte-Carlo events. The network was unable to obtain an estimate for 16 events from the set, as compared with 15 for the histogram method (15 of those events were common failures).
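The two ingredients of the z network described above, null-tolerant averaging and per-arc linear extrapolation back to the origin, can be sketched as follows. NaN stands in for the null value, and the hit lists are made up for illustration:

```python
import numpy as np

def null_mean(values):
    """Mean over non-null entries; null (NaN) if every entry is null."""
    v = np.asarray(values, dtype=float)
    ok = ~np.isnan(v)
    return v[ok].mean() if ok.any() else np.nan

def vertex_z(arcs):
    """z_net: average of the linearly extrapolated z(r=0) over all arcs.
    Each arc is a list of (r, z) pairs along the track."""
    z0 = []
    for pts in arcs:
        r, z = np.asarray(pts, dtype=float).T
        slope, intercept = np.polyfit(r, z, 1)   # z ~ slope * r + intercept
        z0.append(intercept)                     # extrapolated z at origin
    return float(np.mean(z0))

print(null_mean([10.0, np.nan, 14.0]))           # 12.0: nulls are disregarded
arcs = [[(10, 7), (20, 9), (30, 11)],            # lies on z = 5 + 0.2 r
        [(10, 6), (30, 10)]]                     # lies on z = 4 + 0.2 r
print(vertex_z(arcs))                            # close to (5 + 4) / 2 = 4.5
```

Because the final estimate is an average of continuous extrapolations rather than a histogram bin, it naturally yields the continuous-valued output noted in the comparison below.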
In figure 5 we compare the distributions of Δz = z_net − z_exact and Δz = z_hist − z_exact for the sample of Monte-Carlo events, where z_exact is the generated location of the vertex. Both methods lead to small biases, −2.7 cm for z_net and 1.9 cm for z_hist. The resolution, as obtained from a Gaussian fit, was found to be better for the network approach (σ = 6.1 cm) than for the histogram method (σ = 8.4 cm). In addition, it should be noted that the histogram method yields discrete results, with a step of 10 cm, whereas the current method gives continuous values. This can be of great advantage for further processing. Note that off-line, after using the whole CTD information, the resolution is better than 1 mm.

5 Discussion

We have described a feed-forward double neural network that performs a task of pattern identification by thresholding and selecting subsets of data on which a simple computation can lead to the final answer. The network uses a fixed architecture, which allows for its implementation in hardware, crucial for fast triggering purposes. The basic idea of using a fixed architecture inspired by the way our brain processes visual information is similar to the raison d'être of the orientation selective neural network employed by [5]. The latter was based on orientation selective cells only, which were sufficient to select the linear tracks that are of interest in HEP experiments. Here we develop an arc identification method, following similar steps. Both methods can also be viewed as generalizations of the Hough transform [6], which was originally proposed for straight line identification and may be regarded as a basic element of pattern recognition problems [7]. Neither [5] nor our present proposal was considered by previous neural network analyses of HEP data [8]. The results that we have obtained are very promising. We hope that they open the possibility for a new type of neural network implementation in triggering devices of HEP experiments.
Acknowledgments

We are indebted to the ZEUS Collaboration whose data were used for this study. This research was partially supported by the Israel National Science Foundation.

References

[1] B. Foster et al., Nuclear Instrum. and Methods in Phys. Res. A338 (1994) 254.
[2] ZEUS Collab., The ZEUS Detector, Status Report 1993, DESY 1993; M. Derrick et al., Phys. Lett. B 293 (1992) 465.
[3] D. H. Hubel and T. N. Wiesel, J. Physiol. 195 (1968) 215.
[4] A. Quadt, MSc thesis, University of Oxford (1997).
[5] H. Abramowicz, D. Horn, U. Naftaly and C. Sahar-Pikielny, Nuclear Instrum. and Methods in Phys. Res. A378 (1996) 305; Advances in Neural Information Processing Systems 9, eds. M. C. Mozer, M. I. Jordan and T. Petsche, MIT Press 1997, pp. 925-931.
[6] P. V. Hough, "Methods and means to recognize complex patterns", U.S. Patent 3,069,654.
[7] R. O. Duda and P. E. Hart, "Pattern Classification and Scene Analysis", Wiley, New York, 1973.
[8] B. Denby, Neural Computation 5 (1993) 505.
1998
Probabilistic Visualisation of High-dimensional Binary Data

Michael E. Tipping
Microsoft Research, St George House, 1 Guildhall Street, Cambridge CB2 3NH, U.K.
mtipping@microsoft.com

Abstract

We present a probabilistic latent-variable framework for data visualisation, a key feature of which is its applicability to binary and categorical data types, for which few established methods exist. A variational approximation to the likelihood is exploited to derive a fast algorithm for determining the model parameters. Illustrations of application to real and synthetic binary data sets are given.

1 Introduction

Visualisation is a powerful tool in the exploratory analysis of multivariate data. The rendering of high-dimensional data in two dimensions, while generally implying loss of information, often reveals interesting structure to the human eye. Standard dimensionality-reduction methods from multivariate analysis, notably the principal component projection, are often utilised for this purpose, while techniques such as 'projection pursuit' have been tailored specifically to this end. With the current trend for larger databases and the need for effective 'data mining' methods, visualisation is becoming increasingly topical, and recent novel developments include nonlinear topographic methods (Lowe and Tipping 1997; Bishop, Svensen, and Williams 1998) and hierarchical combinations of linear models (Bishop and Tipping 1998). However, a disadvantageous aspect of many proposed techniques is their applicability only to continuous variables; there are very few such methods proposed specifically for the visualisation of discrete binary data types, which are commonplace in real-world datasets. We approach this difficulty by proposing a probabilistic framework for the visualisation of arbitrary data types, based on an underlying latent variable density model.
This leads to an algorithm which permits the visualisation of structure within data, while also defining a generative observation probability model. A further, and intuitively pleasing, result is that the specialisation of the model to continuous variables recovers principal component analysis. Continuous, binary and categorical data types may thus be combined and visualised together within this framework, but for reasons of space, we concentrate on binary types alone in this paper. In the next section we outline the proposed latent variable approach, and in Section 3 consider the difficulties involved in estimating the parameters in this model, giving an efficient variational scheme to this end in Section 4. In Section 5 we illustrate the application of the model and consider the accuracy of the variational approximation.

2 Latent Variable Models for Visualisation

In an ideal visualisation model, we would wish all of the dependencies between variables to be evident in the visualisation space, while the information that we lose in the dimensionality-reduction process should represent "noise", independent for each variable. This principle is captured by the following probability density model for a dataset comprising d-dimensional observation vectors t = (t₁, t₂, ..., t_d):

p(t) = ∫ { ∏ᵢ₌₁^d P(tᵢ | x, θ) } p(x) dx,    (1)

where x is a two-dimensional latent variable vector, the distribution of which must be specified a priori, and θ are the model parameters. Now, for a given value of x (or location in the visualisation space), the observations are independent under the model. (In general, of course, the model and conditional independence assumption will only hold approximately.) However, the unconditional observation model p(t) does not, in general, factorise and so can still capture dependencies between the d variables, given the constraint implied by the use of just two underlying latent variables.
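The marginal in equation (1) can always be approximated by sampling the latent prior, even when the integral has no closed form. A minimal sketch, using a Bernoulli conditional with a sigmoid link (anticipating the binary model introduced in the next section; the loading values are made up):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def marginal_prob(t, W, b, L=20000, rng=None):
    """Monte-Carlo estimate of p(t) = int prod_i P(t_i|x) p(x) dx, eq. (1),
    with x ~ N(0, I_2) and P(t_i = 1 | x) = sigmoid(w_i^T x + b_i)."""
    rng = rng or np.random.default_rng(0)
    X = rng.standard_normal((L, 2))               # samples from p(x)
    P = sigmoid(X @ W.T + b)                      # (L, d) bit probabilities
    like = np.prod(np.where(t == 1, P, 1.0 - P), axis=1)
    return float(like.mean())

W = np.zeros((3, 2))                              # no latent dependence ...
b = np.zeros(3)                                   # ... so every bit is fair
p = marginal_prob(np.array([1, 0, 1]), W, b)
print(round(p, 3))                                # exactly 1/8 = 0.125 here
```

With nonzero loadings the marginal no longer factorises over bits, which is exactly the dependence-capturing property noted above.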
So, having estimated the parameters θ, data could be visualised by 'inverting' the generative model using Bayes' rule: p(x|t) = p(t|x)p(x)/p(t). Each data point then induces a distribution in the latent space, which for the purposes of visualisation we might summarise by the conditional mean value ⟨x|t⟩. That this form of model can be appropriate for visualisation was demonstrated by Bishop and Tipping (1998), who showed that if the latent variables are defined to be independent and Gaussian, x ~ N(0, I), and the conditional observation model is also Gaussian, tᵢ|x ~ N(wᵢᵀx + μᵢ, σᵢ²), then maximum-likelihood estimation of the model parameters {wᵢ, μᵢ, σᵢ²} leads to a model where the posterior mean ⟨x|t⟩ is equivalent to a probabilistic principal component projection.

A visualisation method for binary variables now follows naturally. Retaining the Gaussian latent distribution x ~ N(0, I), we specify an appropriate conditional distribution for P(tᵢ | x, θ). Given that principal component analysis corresponds to a linear model for continuous data types, we adopt the appropriate generalised linear model in the binary case:

P(tᵢ | x, θ) = σ(Aᵢ)^tᵢ [1 − σ(Aᵢ)]^(1−tᵢ),    (2)

where σ(A) = {1 + exp(−A)}⁻¹ and Aᵢ = wᵢᵀx + bᵢ, with parameters wᵢ and bᵢ.

3 Maximum-likelihood Parameter Estimation

The proposed model for binary data already exists in the literature under various guises, most historically as a latent trait model (Bartholomew 1987), although it is not utilised for data visualisation. While in the case of probabilistic principal component analysis ML parameter estimates can be obtained in closed form, a disadvantageous feature of the binary model is that, with P(tᵢ|x) defined by (2), the integral of (1) is analytically intractable and p(t) cannot be computed directly.
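Before turning to estimation, note that equations (1)-(2) give a complete generative recipe, which is easy to state in code. A small sketch (the loadings W and the dataset size are illustrative values, not from the paper):

```python
import numpy as np

def sample_binary(W, b, n, rng):
    """Draw n binary vectors from the latent trait model of eqs. (1)-(2):
    x ~ N(0, I_2), then t_i | x ~ Bernoulli(sigmoid(w_i^T x + b_i))."""
    x = rng.standard_normal((n, 2))                  # latent positions
    p = 1.0 / (1.0 + np.exp(-(x @ W.T + b)))         # (n, d) sigmoid probs
    return (rng.random(p.shape) < p).astype(int), x

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 2))                     # illustrative loadings
b = np.zeros(16)
t, x = sample_binary(W, b, 500, rng)
print(t.shape)                                       # (500, 16)
```

Visualisation then amounts to running this recipe in reverse: recovering a summary of x from an observed t.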
Fitting a latent trait model thus necessitates a numerical integration, and recent papers have considered both Gauss-Hermite (Moustaki 1996) and Monte-Carlo sampling approximations (Mackay 1995; Sammel, Ryan, and Legler 1997). In this latter case, the log-likelihood for a dataset of N observation vectors {t₁, ..., t_N} would be approximated by

L ≈ Σₙ₌₁^N ln { (1/L) Σₗ₌₁^L ∏ᵢ₌₁^d P(tᵢₙ | xₗ, wᵢ, bᵢ) },    (3)

where xₗ, l = 1 ... L, are samples from the two-dimensional latent distribution. To obtain parameter estimates we may utilise an expectation-maximisation (EM) approach by noting that (3) is equivalent in form to an L-component latent class model (Bartholomew 1987) where the component probabilities are mutually constrained from (2). Applying standard methodology leads to an E-step which requires computation of N × L posterior 'responsibilities' P(xₗ|tₙ), and a logistic regression M-step which is unfortunately iterative, although it can be performed relatively efficiently by an iteratively re-weighted least-squares algorithm. Because of these difficulties in implementation, in the next section we describe a variational approximation to the likelihood which can be maximised more efficiently.

4 A Variational Approximation to the Likelihood

Jaakkola and Jordan (1997) introduced a variational approximation for the predictive likelihood in a Bayesian logistic regression model and also briefly considered the "dual" problem, which is closely related to the proposed visualisation model. In this approach, the integral in (1) is approximated by

P̃(t) = ∫ { ∏ᵢ₌₁^d P̃(tᵢ | x, ξᵢ) } p(x) dx,    (4)

where

P̃(tᵢ | x, ξᵢ) = σ(ξᵢ) exp{ (Aᵢ − ξᵢ)/2 + λ(ξᵢ)(Aᵢ² − ξᵢ²) },    (5)

with Aᵢ = (2tᵢ − 1)(wᵢᵀx + bᵢ) and λ(ξᵢ) = {0.5 − σ(ξᵢ)}/2ξᵢ. The parameters ξᵢ are the 'variational' parameters, and this approximation has the property that P̃(tᵢ|x, ξᵢ) ≤ P(tᵢ|x), with equality at ξᵢ = Aᵢ, and thus it follows that P̃(t) ≤ P(t). Now because the exponential in (5) is quadratic in x, the integral in (4), and also the likelihood, can be computed in closed form.
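The bound of equation (5) is easy to verify numerically (written here with the sign convention under which λ(ξ) is negative for ξ > 0): it never exceeds σ(A) and touches it exactly at ξ = A.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def var_bound(A, xi):
    """Variational lower bound of eq. (5) on P(t_i | x) = sigmoid(A_i)."""
    lam = (0.5 - sigmoid(xi)) / (2.0 * xi)        # lambda(xi), negative
    return sigmoid(xi) * np.exp((A - xi) / 2.0 + lam * (A**2 - xi**2))

A = 1.3                                           # some fixed activation
print(round(float(sigmoid(A) - var_bound(A, A)), 12))   # 0.0: tight at xi = A
print(all(var_bound(A, xi) <= sigmoid(A) + 1e-12
          for xi in (0.1, 0.5, 2.0, 5.0)))              # True: always a lower bound
```

Because the exponent is quadratic in A (and hence in x), replacing each sigmoid factor by this bound is what makes the Gaussian integral in (4) tractable.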
This suggests an alternative algorithm for finding parameter estimates in which we iteratively maximise the variational approximation to the likelihood. Each iteration of this algorithm is guaranteed to increase a lower bound on, but will not necessarily maximise, the true likelihood. Nevertheless, we would hope that it is a close approximation, the accuracy of which is investigated later. At each step in the algorithm, then, we:

1. Obtain the sufficient statistics for the approximated posterior distribution of latent variables given each observation, p̃(xₙ | tₙ, ξₙ).
2. Optimise the variational parameters ξᵢₙ in order to make the approximation P̃(tₙ) as close as possible to P(tₙ) for all tₙ.
3. Update the model parameters wᵢ and bᵢ to increase P̃(t).

Jaakkola and Jordan (1997) give formulae for the above computations, but these do not include provision for the 'biases' bᵢ, and so the necessary expressions are re-derived below. Note that although we have introduced N × d additional variational parameters, it is no longer necessary to sample from p(x) and compute responsibilities, and no iterative logistic regression step is needed.

Computing the Latent Posterior Statistics. From Bayes' rule, the posterior approximation p̃(xₙ | tₙ, ξₙ) is Gaussian with covariance and mean given by

Cₙ = [ I − 2 Σᵢ₌₁^d λ(ξᵢₙ) wᵢwᵢᵀ ]⁻¹,    (6)

μₙ = Cₙ Σᵢ₌₁^d ( tᵢₙ − 1/2 + 2λ(ξᵢₙ)bᵢ ) wᵢ.    (7)

Optimising the Variational Parameters. Because P̃(t) ≤ P(t), the variational approximation can be optimised by maximising P̃(tₙ) with respect to each ξᵢₙ. We use the EM methodology to obtain the updates

ξᵢₙ² = wᵢᵀ⟨xₙxₙᵀ⟩wᵢ + 2bᵢwᵢᵀ⟨xₙ⟩ + bᵢ²,    (8)

where the angle brackets ⟨·⟩ denote expectations with respect to p̃(xₙ | tₙ, ξₙ^old) and where, from (6) and (7) earlier, the necessary posterior statistics are given by

⟨xₙ⟩ = μₙ,    (9)
⟨xₙxₙᵀ⟩ = Cₙ + μₙμₙᵀ.    (10)

Since (6) and (7) depend on the variational parameters, Cₙ and μₙ are computed followed by the update for each ξᵢₙ from (8).
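The two-stage iteration of equations (6)-(10) can be sketched directly. The parameter values below are made up, and the matrix expressions follow the quadratic form of the bound, so treat the exact signs as one consistent choice rather than the authors' code:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def posterior_stats(t_n, W, b, xi):
    """Gaussian posterior statistics C_n, mu_n of eqs. (6)-(7)."""
    lam = (0.5 - sigmoid(xi)) / (2.0 * xi)            # lambda(xi_in) < 0
    C = np.linalg.inv(np.eye(2) - 2.0 * (W.T * lam) @ W)   # eq. (6)
    mu = C @ (W.T @ (t_n - 0.5 + 2.0 * lam * b))           # eq. (7)
    return C, mu

def update_xi(C, mu, W, b):
    """Variational update, eq. (8): xi^2 = <(w^T x + b)^2> under the posterior."""
    S = C + np.outer(mu, mu)                          # <x x^T>, eq. (10)
    return np.sqrt(np.sum((W @ S) * W, axis=1) + 2.0 * b * (W @ mu) + b**2)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 2))                       # illustrative parameters
b = np.zeros(8)
t_n = (rng.random(8) < 0.5).astype(float)             # one made-up data point
xi = np.ones(8)
for _ in range(5):                                    # a few sweeps suffice
    C, mu = posterior_stats(t_n, W, b, xi)
    xi = update_xi(C, mu, W, b)
print(np.allclose(C, C.T), bool(np.all(xi > 0)))      # True True
```

Since λ(ξ) < 0, the matrix inverted in (6) is the identity plus a positive semi-definite term, so Cₙ is always a valid (symmetric positive definite) covariance.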
Iteration of this two-stage process is guaranteed to improve monotonically the approximation of P(tₙ), and typically only two iterations are necessary for convergence.

Optimising the Model Parameters. We again use EM to increase the variational likelihood approximation with respect to wᵢ and bᵢ. Defining ŵᵢ = (wᵢᵀ, bᵢ)ᵀ and x̂ = (xᵀ, 1)ᵀ leads to updates for both wᵢ and bᵢ given by

ŵᵢ = −[ Σₙ₌₁^N 2λ(ξᵢₙ)⟨x̂ₙx̂ₙᵀ⟩ ]⁻¹ [ Σₙ₌₁^N (tᵢₙ − 1/2)⟨x̂ₙ⟩ ],    (11)

where

⟨x̂ₙ⟩ = [ μₙ ; 1 ],   ⟨x̂ₙx̂ₙᵀ⟩ = [ Cₙ + μₙμₙᵀ, μₙ ; μₙᵀ, 1 ].    (12)

5 Visualisation

Synthetic clustered data. We firstly give an example of visualisation of artificially-generated data to illustrate the operation and features of the method. Binary data was synthesised by first generating three random 16-bit prototype vectors, where each bit was set with probability 0.5. Next, a 600-point dataset was generated by taking 200 examples of each prototype and inverting each bit with probability 0.05. We generated a second dataset in the same manner, but where the probability of bit inversion was 0.15, simulating more "noise" about each prototype. The final values of μₙ from (7) for each data point are plotted in Figure 1. In the left plot, for the low-noise dataset, the three clusters are clear, as are the prototype vectors. On the right, the bit-noise is sufficiently high that the clusters now overlap to a degree and the prototypes are no longer evident. However, we can extract further information from the model by drawing lines representing P(tᵢ|x) = 0.5, or wᵢᵀx + bᵢ = 0, which may be considered to be 'decision boundaries' for each bit. These offer more convincing evidence of the presence of three clusters.

Figure 1: Visualisation of two synthetic clustered datasets. The three clusters have been denoted by separate glyphs, the size of which reflects the number of examples whose posterior means are located at that point in the latent space.
In the right plot, lines corresponding to P(tᵢ|x) = 0.5 have been drawn.

Handwritten digit data. On the left of Figure 2, a visualisation is given of 1000 examples derived from 16 × 16 images of handwritten digit '2's. There is visual evidence of the natural variability of writing styles in the plot, as the posterior latent means in Figure 2 describe an approximate 'horseshoe' structure. On the right of the figure we examine the nature of this by plotting gray-scale images of the vectors P(t|xⱼ), where xⱼ are four numbered samples in the visualisation space. These images illustrate the expected value of each bit given the latent-space location, and demonstrate that the location is indeed indicative of the style of the digit, notably the presence of a loop.

Figure 2: Left: visualisation of 256-dimensional digit '2' data. Right: gray-scale images of the conditional probability of each bit at the latent space locations marked.

Accuracy of the variational approximation. To investigate the accuracy of the approximation, the sampling algorithm of Section 3 for likelihood maximisation was implemented and applied to the above two datasets. The evolution of error (negative log-likelihood per data-point) was plotted against time for both algorithms, using identical initialisations. The 'true' error for the variational approach was estimated using the same 500-point Monte-Carlo sample. Typical results are shown in Figure 3, and the final running time and error (using a sensible stopping criterion) are given for both datasets in Table 1. For these two example datasets, the variational algorithm converges considerably more quickly than in the sampling case, and the difference in final error is relatively small, particularly so for the larger-dimensionality dataset.

Figure 3: Error vs. time for the synthetic data (left) and the digit '2' data (right).

The approximation of the posterior distributions p(xₙ|tₙ) is the key factor in the accuracy of the algorithm. In Figure 4, contours of the posterior distribution in the latent space induced by a typical data point are shown for both algorithms and datasets. This approximation is more accurate as dimensionality increases (a phenomenon observed with other datasets too), as the true posterior becomes more Gaussian in form.

6 Conclusions

We have outlined a variational approximation for parameter estimation in a probabilistic visualisation model, and although we have only considered its application to binary variables here, the extension to mixtures of arbitrary data types is readily implemented. For the two comparisons shown (and others not illustrated here), the approximation appears acceptably accurate, particularly so for data of higher dimensionality. The algorithm is considerably faster than a sampling approach, which would permit the incorporation of multiple models in a more complex hierarchical architecture, of a sort that has been effectively implemented for visualisation of continuous variables (Bishop and Tipping 1998).
              Synthetic-16          Digit-256
              Time     Error        Time      Error
Variational      7.8    5.14         25.6     30.23
Sampling       331.1    4.93       1204.5     30.19

Table 1: Comparison of final error and running time (in seconds) for the two algorithms.

Figure 4: True and approximated posteriors for a single example from the synthetic data set (top) and the digit '2' data (bottom).

7 References

Bartholomew, D. J. (1987). Latent Variable Models and Factor Analysis. London: Charles Griffin & Co. Ltd.
Bishop, C. M., M. Svensen, and C. K. I. Williams (1998). GTM: the Generative Topographic Mapping. Neural Computation 10(1), 215-234.
Bishop, C. M. and M. E. Tipping (1998). A hierarchical latent variable model for data visualization. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(3), 281-293.
Jaakkola, T. S. and M. I. Jordan (1997). Bayesian logistic regression: a variational approach. In D. Madigan and P. Smyth (Eds.), Proceedings of the 1997 Conference on Artificial Intelligence and Statistics, Ft Lauderdale, FL.
Lowe, D. and M. E. Tipping (1997). Neuroscale: Novel topographic feature extraction with radial basis function networks. In M. Mozer, M. Jordan, and T. Petsche (Eds.), Advances in Neural Information Processing Systems 9, pp. 543-549. Cambridge, Mass: MIT Press.
Mackay, D. J. C. (1995). Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research, Section A 354(1), 73-80.
Moustaki, I. (1996). A latent trait and a latent class model for mixed observed variables. British Journal of Mathematical and Statistical Psychology 49, 313-334.
Sammel, M. D., L. M. Ryan, and J. M. Legler (1997). Latent variable models for mixed discrete and continuous outcomes. Journal of the Royal Statistical Society, Series B 59, 667-678.
1998
Computation of Smooth Optical Flow in a Feedback Connected Analog Network

Alan Stocker*
Institute of Neuroinformatics
University and ETH Zürich
Winterthurerstrasse 190
8057 Zürich, Switzerland

Rodney Douglas
Institute of Neuroinformatics
University and ETH Zürich
Winterthurerstrasse 190
8057 Zürich, Switzerland

Abstract

In 1986, Tanner and Mead [1] implemented an interesting constraint satisfaction circuit for global motion sensing in aVLSI. We report here a new and improved aVLSI implementation that provides smooth optical flow as well as global motion in a two-dimensional visual field. The computation of optical flow is an ill-posed problem, which expresses itself as the aperture problem. However, the optical flow can be estimated by the use of regularization methods, in which additional constraints are introduced in terms of a global energy functional that must be minimized. We show how the algorithmic constraints of Horn and Schunck [2] on computing smooth optical flow can be mapped onto the physical constraints of an equivalent electronic network.

1 Motivation

The perception of apparent motion is crucial for navigation. Knowledge of the local motion of the environment relative to the observer simplifies the calculation of important tasks such as time-to-contact or focus-of-expansion. There are several methods to compute optical flow. They have the common problem that their computational load is large. This is a severe disadvantage for autonomous agents, whose computational power is restricted by energy, size and weight. Here we show how the global regularization approach, which is necessary to resolve the ill-posed nature of computing optical flow, can be formulated as a local feedback constraint and implemented as a physical analog device that is computationally efficient.
* correspondence to: alan@ini.phys.ethz.ch

2 Smooth Optical Flow

Horn and Schunck [2] defined optical flow in relation to the spatial and temporal changes in image brightness. Their model assumes that the total image brightness E(x, y, t) does not change over time:

d/dt E(x, y, t) = 0.    (1)

Expanding equation (1) according to the chain rule of differentiation leads to

F ≡ ∂E/∂x · u + ∂E/∂y · v + ∂E/∂t = 0,    (2)

where u = dx/dt and v = dy/dt represent the two components of the local optical flow vector. Since there is one equation for two unknowns at each spatial location, the problem is ill-posed, and there are an infinite number of possible solutions lying on the constraint line for every location (x, y). However, by introducing an additional constraint the problem can be regularized and a unique solution can be found. For example, Horn and Schunck require the optical flow field to be smooth. As a measure of smoothness they choose the squares of the spatial derivatives of the flow vectors,

S² = (∂u/∂x)² + (∂u/∂y)² + (∂v/∂x)² + (∂v/∂y)².    (3)

One can also view this constraint as introducing a priori knowledge: the closer two points are in the image space, the more likely they belong to the projection of the same object. Under the assumption of rigid objects undergoing translational motion, this constraint implies that the points have the same, or at least very similar, motion vectors. This assumption is obviously not valid at the boundaries of moving objects, and so this algorithm fails to detect motion discontinuities [3]. The computation of smooth optical flow can now be formulated as the minimization problem of a global energy functional,

∬ L dx dy → min,   L = F² + λS²,    (4)

with F and S² as in equations (2) and (3) respectively. Thus, we exactly apply the approach of standard regularization theory [4]:

Ax = y                          y: data
x = A⁻¹y                        inverse problem, ill-posed
‖Ax − y‖ + λ‖Px‖ = min          regularization

The regularization parameter λ controls the degree of smoothing of the solution and its closeness to the data. The norm ‖·‖ is quadratic. A difference in our case is that A is not constant but depends on the data. However, if we consider motion on a discrete time-axis and look at snapshots rather than continuously changing images, A is quasi-stationary.¹ The energy functional (4) is convex, and so a simple numerical technique like gradient descent would be able to find the global minimum. To compute optical flow while preserving motion discontinuities, one can modify the energy functional to include a binary line process that prevents smoothing over discontinuities [4]. However, such a functional will not be convex. Gradient descent methods would probably fail to find the global minimum amongst all the local minima, and other methods have to be applied.

¹ In the aVLSI implementation this requires a much shorter settling time constant for the network than the brightness changes in the image.
Thus, we exactly apply the approach of standard regularization theory [4]: Ax=y x = A -Iy II Ax - y II +.x II P 11= min y: data inverse problem, ill-posed regularization The regularization parameter, .x, controls the degree of smoothing of the solution and its closeness to the data. The norm, II . II, is quadratic. A difference in our case is that A is not constant but depends on the data. However, if we consider motion on a discrete time-axis and look at snapshots rather than continuously changing images, A is quasistationary.1 The energy functional (4) is convex and so, a simple numerical technique like gradient descent would be able to find the global minimum. To compute optical flow while preserving motion discontinuities one can modify the energy functional to include a binary line process that prevents smoothing over discontinuities [4]. However, such an functional will not be convex. Gradient descent methods would probably fail to find the global amongst all local minima and other methods have to be applied. 1 In the a VLSI implementation this requires a much shorter settling time constant for the network than the brightness changes in the image. 708 A. Stocker and R. Doug/as 3 A Physical Analog Model 3.1 Continuous space Standard regularization problems can be mapped onto electronic networks consisting of conductances and capacitors [5]. Hutchinson et al. [6] showed how resistive networks can be used to compute optical flow and Poggio et al. [7] introduced electronic network solutions for second-order-derivative optic flow computation. However, these proposed network architectures all require complicated and sometimes negative conductances although Harris et al. [8] outlined a similar approach as proposed in this paper independently. Furthennore, such networks were not implemented practically, whereas our implementation with constant nearest neighbor conductances is intuitive and straightforward. Consider equation (4): L = L(u, v, '\lu, '\lv, x, y). 
The Lagrange function L is sufficiently regular (L E C 2 ), and thus it follows from calculus of variation that the solution of equation (4) also suffices the linear Euler-Lagrange equations A '\l2u - Ex (Exu + Eyv + E t ) A'\l2v - Ey(Exu + Eyv + E t ) o O. (5) The Euler-Lagrange equations are only necessary conditions for equation (4). The sufficient condition for solutions of equations (5) to be a weak minimum is the strong Legendrecondition, that is L'ilu'ilu > 0 which is easily shown to be true. and L'ilv'ilv > 0, 3.2 Discrete Space - Mapping to Resistive Network By using a discrete five-point approximation of the Laplacian \7 2 on a regular grid, equations (5) can be rewritten as A(Ui+1 )' + Ui-1 )' + Ui )'+1 + Ui )-1 - 4Ui )') - Ex, ,(Ex ,Ui)' + E y' Vi)' + E t ,) =0 (6) , , , , , t,] l,J' ' . ] ' 1, J A(Vi+1)' +Vi- 1)' +Vi)'+1 +Vi)'-1 - 4Vi)') -Ey' (Ex, ,Ui)' +Ey' ,Vi)' +Et, ,)=0 , , , , , 1,)'.J ' 1 ,1' 1,] where i and j are the indices for the sampling nodes. Consider a single node of the resistive network shown in Figure 1: Figure 1: Single node of a resistive network. From Kirchhoff's law it follows that dV,· , C d~') = G(Vi+1 ,j + Vi-I ,j + Vi,HI + Vi,j-1 - 4Vi,j) + lini.j (7) Computation of Optical Flow in an Analog Network 709 where Vi,j represents the voltage and l in',i the input current. G is the conductance between two neighboring nodes and C the node capacitance. In steady state, equation (7) becomes G(Vi+I ,j + Vi - I,j + Vi,j+! + Vi ,j- I - 4Vi,j) + lini" = O. (8) The analogy with equations (6) is obvious: G ~ .A lUin ·· ~ -Ex· .(Ex · UiJ' +Ey , ViJ' +Et · ) t t ] t. ) t t ) ' t ,]' 1 , ) lVin " ~ -Ey. ,(Ex " UiJ, +Ey" Vi),+Et , ) (9) t , } t , } 1 , ) ' 1 , ) ' I , J To create the full system we use two parallel resistive networks in which the node voltages Ui,j and Vi,j represent the two components of the optical flow vector U and v . 
The input currents I_{u,in_{i,j}} and I_{v,in_{i,j}} are computed by a negative recurrent feedback loop modulated by the input data, which are the spatial and temporal intensity gradients. Notice that the input currents are proportional to the deviation from the local brightness constraint: the less the local optical flow solution fits the data, the higher the current I_{in_{i,j}} will be to correct the solution, and vice versa. Stability and convergence of the network are guaranteed by Maxwell's minimum power principle [4, 9].

4 The Smooth Optical Flow Chip

4.1 Implementation

Figure 2: A single motion cell within the three-layer network. For simplicity only one resistive network is shown.

The circuitry consists of three functional layers (Figure 2). The input layer includes an array of adaptive photoreceptors [10] and provides the derivatives of the image brightness to the second layer. The spatial gradients are the first-order linear approximation obtained by subtracting the two neighboring photoreceptor outputs. The second layer computes the input current to the third layer according to equations (9). Finally these currents are fed into the two resistive networks that report the optical flow components. The schematics of the core of a single motion cell are drawn in Figure 3. The photoreceptor and the temporal differentiator are not shown, nor is the other half of the circuitry that computes the y-component of the flow vector.

A few remarks are appropriate here: First, the two components of the optical flow vector have to be able to take on positive and negative values with respect to some reference potential. Therefore, a symmetrical circuit scheme is applied in which the positive and negative (reference voltage) values are carried on separate signal lines. Thus, the actual value is encoded as the difference of the two potentials.
Figure 3: Cell core schematics; only the circuitry related to the computation of the x-component of the flow vector is shown.

Second, the limited linear range of the Gilbert multipliers leads to a narrow span of flow velocities that can be computed reliably. However, the tuning can be such that the operational range is either at high or at very low velocities. Newer implementations use modified multipliers with a larger linear range. Third, consider a single motion cell (Figure 2). In principle, this cell would be able to satisfy the local constraint perfectly. In practice (see Figure 3), the finite output impedance of the p-type Gilbert multiplier slightly degrades this ideal solution by imposing an effective conductance G_load. Thus, a constant voltage on the capacitor representing a non-zero motion signal requires a net output current of the multiplier to maintain it. This requirement has two interesting consequences:

i) The reported optical flow depends on the spatial gradients (contrast). A single uncoupled cell according to Figure 2 has a steady-state solution with

    u_{i,j} ≈ −E_{t_{i,j}} E_{x_{i,j}} / (G_load + E²_{x_{i,j}} + E²_{y_{i,j}})   and   v_{i,j} ≈ −E_{t_{i,j}} E_{y_{i,j}} / (G_load + E²_{x_{i,j}} + E²_{y_{i,j}})

respectively. For the same object speed, the chip reports higher velocity signals for higher spatial gradients. Preferably, G_load should be as low as possible to minimize its influence on the solution.

ii) On the other hand, the locally ill-posed problem is now well-posed because G_load imposes a second constraint. Thus, the chip behaves sensibly in the case of low-contrast input (small gradients), reporting zero motion where otherwise unreliable high values would occur. This is convenient because the signal-to-noise ratio at low contrast is very poor. Furthermore, a single cell is forced to report the velocity on the constraint line with smallest absolute value, which is normal to the spatial gradient.
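The contrast dependence in point (i) is easy to check numerically. All values below are illustrative, not measured chip parameters:

```python
# Steady state of a single uncoupled cell (point (i) above): the load
# conductance G_load makes the local problem well-posed, so the reported
# speed grows with contrast and vanishes for weak gradients.
def cell_steady_state(Ex, Ey, Et, G_load):
    d = G_load + Ex**2 + Ey**2
    return -Et * Ex / d, -Et * Ey / d

G_LOAD = 0.1
# Same true speed u = 1 (brightness constancy: Et = -Ex * u), two contrasts:
u_low, _ = cell_steady_state(0.1, 0.0, -0.1, G_LOAD)   # low contrast
u_high, _ = cell_steady_state(1.0, 0.0, -1.0, G_LOAD)  # high contrast
```

The high-contrast cell reports a speed close to the true value 1, while the low-contrast cell is pulled toward zero by G_load, as the text predicts.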
That means that the chip reports normal flow when there is no neighbor connection. Since there is a trade-off between the robustness of the optical flow computation and a low conductance G_load, the follower-connected transconductance amplifier in our implementation allows us to control G_load above its small intrinsic value.

4.2 Results

The results reported below were obtained from a MOSIS tinychip containing a 7x7 array of motion cells, each 325x325 λ² in size. The chip was fabricated in 1.2 µm technology at AMI.

Figure 4: Smooth optical flow response of the chip to a left-upwards moving edge. a: photoreceptor output; the arrow indicates the actual motion direction. b: weak coupling (small conductance G). c: strong coupling.

Figure 5: Response of the optical flow chip to a plaid stimulus moving towards the left. a: photoreceptor output; b shows the normal flow computation with disabled coupling between the motion cells in the network, while in c the coupling strength is at maximum.

The chip is able to compute smooth optical flow in a qualitative manner. The smoothness can be set by adjusting the coupling conductances (Figure 4). Figure 5b presents the normal flow computation that occurs when the coupling between the motion cells is disabled. The limited resolution of this prototype chip together with the small size of the stimulus leads to a noisy response. However, it is clear that the chip perceives the two gratings as separate moving objects with motion normal to their edge orientation. When the network conductance is set very high, the chip performs a collective computation solving the aperture problem under the assumption of single object motion. Figure 5c shows how the chip can compute the correct motion of a plaid pattern.

5 Conclusion

We have presented here an aVLSI implementation of a network that computes 2D smooth optical flow. The strength of the resistive coupling can be varied continuously to obtain different degrees of smoothing, from a purely local up to a single global motion signal. The chip ideally computes smooth optical flow in the classical definition of Horn and Schunck. Instead of using negative and complex conductances, we implemented a network solution in which each motion cell performs a local constraint satisfaction task in a recurrent negative feedback loop. It is significant that the solution of a global energy minimization task can be achieved within a network of local constraint-solving cells that do not have explicit access to the global computational goal.

Acknowledgments

This article is dedicated to Misha Mahowald. We would like to thank Eric Vittoz, Jörg Kramer, Giacomo Indiveri and Tobi Delbrück for fruitful discussions. We thank the Swiss National Foundation for supporting this work and MOSIS for chip fabrication.

References

[1] J. Tanner and C.A. Mead. An integrated analog optical motion sensor. In S.-Y. Kung, R. Owen, and G. Nash, editors, VLSI Signal Processing, 2, page 59 ff. IEEE Press, 1986.
[2] B.K. Horn and B.G. Schunck. Determining optical flow. Artificial Intelligence, 17:185-203, 1981.
[3] A. Yuille. Energy functions for early vision and analog networks. Biological Cybernetics, 61:115-123, 1989.
[4] T. Poggio, V. Torre, and C. Koch. Computational vision and regularization theory. Nature, 317(26):314-319, September 1985.
[5] B.K. Horn. Parallel networks for machine vision. Technical Report 1071, MIT AI Lab, December 1988.
[6] J. Hutchinson, C. Koch, J. Luo, and C. Mead. Computing motion using analog and binary resistive networks. Computer, 21:52-64, March 1988.
[7] T. Poggio, W. Yang, and V. Torre. Optical flow: Computational properties and networks, biological and analog. The Computing Neuron, pages 355-370, 1989.
[8] J.G. Harris, C. Koch, E. Staats, and J. Luo. Analog hardware for detecting discontinuities in early vision. Int. Journal of Computer Vision, 4:211-223, 1990.
[9] J. Wyatt. Little-known properties of resistive grids that are useful in analog vision chip designs. In C. Koch and H. Li, editors, Vision Chips: Implementing Vision Algorithms with Analog VLSI Circuits, pages 72-89. IEEE Computer Society Press, 1995.
[10] S.C. Liu. Silicon retina with adaptive filtering properties. In Advances in Neural Information Processing Systems 10, November 1997.
1998
Applications of multi-resolution neural networks to mammography

Clay D. Spence and Paul Sajda
Sarnoff Corporation
CN5300 Princeton, NJ 08543-5300
{cspence, psajda}@sarnoff.com

Abstract

We have previously presented a coarse-to-fine hierarchical pyramid/neural network (HPNN) architecture which combines multi-scale image processing techniques with neural networks. In this paper we present applications of this general architecture to two problems in mammographic Computer-Aided Diagnosis (CAD). The first application is the detection of microcalcifications. The coarse-to-fine HPNN was designed to learn large-scale context information for detecting small objects like microcalcifications. Receiver operating characteristic (ROC) analysis suggests that the hierarchical architecture improves detection performance of a well established CAD system by roughly 50%. The second application is to detect mammographic masses directly. Since masses are large, extended objects, the coarse-to-fine HPNN architecture is not suitable for this problem. Instead we construct a fine-to-coarse HPNN architecture which is designed to learn small-scale detail structure associated with the extended objects. Our initial results applying the fine-to-coarse HPNN to mass detection are encouraging, with detection performance improvements of about 36%. We conclude that the ability of the HPNN architecture to integrate information across scales, both coarse-to-fine and fine-to-coarse, makes it well suited for detecting objects which may have contextual clues or detail structure occurring at scales other than the natural scale of the object.

1 Introduction

In a previous paper [8] we presented a coarse-to-fine hierarchical pyramid/neural network (HPNN) architecture that combines multi-scale image processing techniques with neural networks to search for small targets in images (see figure 1A).
To search an image we apply the network at a position and use its output as an estimate of the probability that a target (an object of the class we wish to find) is present there. We then repeat this at each position in the image. In the coarse-to-fine HPNN, the hidden units of networks operating at low resolution or coarse scale learn associated context information, since the targets themselves are difficult to detect at low resolution. The context is then passed to networks searching at higher resolution. The use of context can significantly improve detection performance since small objects have few distinguishing features. In the HPNN each of the networks receives information directly from only a small part of several feature images, and so the networks can be relatively simple. The network at the highest resolution integrates the contextual information learned at coarser resolutions to detect the object of interest. The HPNN architecture can be extended by considering the implications of inverting the information flow in the coarse-to-fine architecture. This fine-to-coarse HPNN would have networks extracting detail structure at fine resolutions of the image and then passing this detail information to networks operating at coarser scales (see figure 1B). For many types of objects, information about the fine structure is important for discriminating between different classes. The fine-to-coarse HPNN is therefore a natural architecture for exploiting fine detail information for detecting extended objects. In this paper, we present our experiences in applying the HPNN framework to two problems in mammographic Computer-Aided Diagnosis (CAD): that of detecting microcalcifications in mammograms and that of detecting malignant masses in mammograms. The coarse-to-fine HPNN architecture is well-suited for the microcalcification problem, while the fine-to-coarse HPNN is suited for mass detection.
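The per-pixel application of a network described above can be sketched as follows. The logistic "network" and its weights here are illustrative stand-ins for a trained HPNN output unit, not the paper's model:

```python
import numpy as np

# Sliding-window detection: apply a per-pixel "network" to a local patch
# of the image and collect the outputs into a probability map.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def probability_map(image, weights, bias, radius=1):
    H, W = image.shape
    out = np.zeros((H, W))
    p = np.pad(image, radius, mode="edge")
    for i in range(H):
        for j in range(W):
            patch = p[i:i + 2 * radius + 1, j:j + 2 * radius + 1].ravel()
            out[i, j] = sigmoid(patch @ weights + bias)
    return out

img = np.zeros((12, 12))
img[6, 6] = 1.0                    # one bright "target"
w = np.ones(9)                     # hypothetical weights favouring bright patches
b = -4.0
pmap = probability_map(img, w, b)
```

The output map peaks near the bright target and stays near zero on the empty background, which is the form of output the HPNN produces at each pyramid level.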
We evaluate the performance and utility of the HPNN framework by considering its effects on reducing false positive rates in a well characterized CAD system. The University of Chicago (UofC) has been actively developing mammographic CAD systems for microcalcification and mass detection [6] and has been evaluating their performance clinically. A general block diagram showing the basic processing elements of these CAD systems is shown in figure 2. First, a pre-processing step is used to segment the breast area and increase the overall signal-to-noise levels in the image. Regions of interest (ROIs) are defined at this stage, representing local areas of the breast which potentially contain a cluster of microcalcifications or a mass. The next stage typically involves feature extraction and rule-based/heuristic analysis, in order to prune false positives. The remaining ROIs are classified as positive or negative by a statistical classifier or neural network. The CAD system is used as a "second reader", aiding the radiologist by pointing out spots to double check. One of the key requirements of CAD is that false positive rates be low enough that radiologists will not ignore the CAD system output. Therefore it is critical to reduce false positive rates of CAD systems without significant reductions in sensitivity. In this paper we evaluate the HPNN framework within the context of reducing the false positive rates of the UofC CAD systems for microcalcification and mass detection. In both cases the HPNN acts as a post-processor of the UofC CAD system.

2 Microcalcification detection

Microcalcifications are calcium deposits in breast tissue that appear as very small bright dots in mammograms. Clusters of microcalcifications frequently occur around tumors. Unfortunately microcalcification clusters are sometimes missed, since they
Figure 1: Hierarchical pyramid/neural network architectures for (A) detecting microcalcifications and (B) detecting masses. In (A) context is propagated from low to high resolution via the hidden units of low resolution networks. In (B) small scale detail information is propagated from high to low resolution. In both cases the output of the last integration network is an estimate of the probability that a target is present.

Figure 2: Block diagram for a typical CAD detection system (mammogram → pre-processing → feature extraction and rule-based/heuristic analysis → statistical/NN classifier → mass or cluster locations).

can be quite subtle and the radiologists can only spend about a minute evaluating a patient's mammograms. Data used for the microcalcification experiments was provided by The University of Chicago. The first set of data consists of 50 true positive and 86 false positive ROIs. These ROIs are 99x99 pixels and digitized at 100 micron resolution. A second set of data, from the UofC clinical testing database, included 47 true positives and 103 false positives, also 99x99 and sampled at 100 micron resolution. We trained the coarse-to-fine HPNN architecture in figure 1A as a detector for individual calcifications. For each level in the pyramid a network is trained, beginning with the network at low resolution. The network at a particular pyramid level is applied to one pixel at a time in the image at that resolution, and so produces an output at each pixel. All of the networks are trained to detect microcalcifications; however, at low resolutions the microcalcifications are not directly detectable. To achieve better than chance performance, the networks at those levels must learn something about the context in which microcalcifications appear. To integrate context information with the other features, the outputs of hidden units from low resolution networks are propagated hierarchically as inputs to networks operating at higher resolutions.
Input to the neural networks comes from an integrated feature pyramid (IFP) [1]. To construct the IFP, we used steerable filters [3] to compute local orientation energy. The steering properties of these filters enable the direct computation of the orientation having maximum energy. We constructed features which represent, at each pixel location, the maximum energy (energy at θ_max), the energy at the orientation perpendicular to θ_max (energy at θ_max − 90°), and the energy at the diagonal (energy at θ_max − 45°).¹ The resulting features are input into the coarse-to-fine network hierarchy.

    cc    HPNN                                 Chicago NN
          Az    σ_Az   FPF      σ_FPF         Az    σ_Az   FPF      σ_FPF
                       @TPF=1.0                            @TPF=1.0
    1     .93   .03    .24      .11           .88   .04    .50      .11
    2     .94   .02    .21      .11           .91   .02    .43      .10
    3     .94   .03    .39      .19           .91   .03    .48      .19
    4     .93   .03    .48      .15           .90   .05    .56      .21
    5     .93   .03    .51      .06           .88   .05    .68      .21

Table 1: Comparison of HPNN and Chicago networks.

In examining the truth data for the ROI data set, we found that the experts who specified the microcalcification positions often made errors of up to ±2 pixels in these positions. To take this uncertainty in position into account, we used the following error function

    E_uop = − Σ_{p ∈ Pos} log(1 − Π_{x ∈ p} (1 − y(x))) − Σ_{x ∈ Neg} log(1 − y(x))    (1)

which we have called the Uncertain Object Position (UOP) error function [7].² (y(x) is the network's output when applied to position x.) It is essentially the cross-entropy error, but for positive examples the probability of generating a positive output (y(x), in this case) has been replaced by the probability of generating at least one positive output in a region or set of pixels p in the image. In our case each p is a five-by-five pixel square centered on the location specified by the expert. To this we added the standard weight decay regularization term. The regularization constant was adjusted to minimize the ten-fold cross-validation error.
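Equation (1) is straightforward to implement. A minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

# The Uncertain Object Position (UOP) error of equation (1): a positive
# example is a *set* of candidate pixels p, and the model is credited
# with the probability that at least one pixel in p fires.
def uop_error(pos_regions, neg_outputs):
    """pos_regions: list of arrays of outputs y(x) over each region p;
    neg_outputs: array of outputs y(x) at negative pixels."""
    e = 0.0
    for y_p in pos_regions:
        p_at_least_one = 1.0 - np.prod(1.0 - np.asarray(y_p))  # P(>=1 hit in p)
        e -= np.log(p_at_least_one)
    e -= float(np.sum(np.log(1.0 - np.asarray(neg_outputs))))
    return e

# With single-pixel regions the UOP error reduces to ordinary cross-entropy:
single = uop_error([np.array([0.8])], np.array([0.1]))
xent = -(np.log(0.8) + np.log(1.0 - 0.1))
```

Note that spreading the same per-pixel output over a larger region lowers the loss, since any one detection in the region suffices.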
The coarse-to-fine HPNN was applied to each input ROI, and an image was constructed from the output of the Level 0 network at each pixel. Each of these pixel values is the network's estimate of the probability that a microcalcification is present there. Training and testing were done using a jackknife protocol [5], whereby one half of the data (25 TPs and 43 FPs) was used for training and the other half for testing. We used five different random splits of the data into training and test sets. For a given ROI, the probability map produced by the network was thresholded at a given value to produce a binary detection map. Region growing was used to count the number of distinct detected regions. The ROI was classified as a positive if the number of regions was greater than or equal to a certain cluster criterion. Table 1 compares ROC results for the HPNN and another network that had been used in the University of Chicago CAD system [9] using five different cluster criteria (cc). Reported are the area under the ROC curve (Az), the standard deviation of Az across the subsets of the jackknife (σ_Az), the false positive fraction at a true positive fraction of 1.0 (FPF@TPF=1.0) and the standard deviation of the FPF across the subsets of the jackknife (σ_FPF). Az and FPF@TPF=1.0 represent the averages over the subsets of the jackknife. Note that both networks operate best when the cluster criterion is set to two. For this case the HPNN has a higher Az than the Chicago network while also halving the false positive rate.

¹ We found that the energies in the two diagonal directions were nearly identical.
² Keeler et al. [4] developed a network for object recognition that had some similarities to the UOP error. In fact the way in which the outputs of units are combined for their error function can be shown to be an approximation to the UOP error.
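The threshold-then-count post-processing just described can be sketched directly. The map values, threshold, and 4-connectivity below are illustrative choices; the paper's region-growing implementation may differ in detail:

```python
import numpy as np

# Post-processing sketch: threshold the per-pixel probability map, count
# connected detected regions, and call the ROI positive when the count
# reaches the cluster criterion (the paper's best setting is cc = 2).
def count_regions(binary):
    """Count 4-connected components with a simple flood fill."""
    seen = np.zeros_like(binary, dtype=bool)
    H, W = binary.shape
    n = 0
    for i in range(H):
        for j in range(W):
            if binary[i, j] and not seen[i, j]:
                n += 1
                stack = [(i, j)]
                seen[i, j] = True
                while stack:
                    a, b = stack.pop()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < H and 0 <= y < W and binary[x, y] and not seen[x, y]:
                            seen[x, y] = True
                            stack.append((x, y))
    return n

def classify_roi(pmap, threshold=0.5, cc=2):
    return count_regions(pmap > threshold) >= cc

pmap = np.zeros((10, 10))
pmap[1, 1] = pmap[1, 2] = 0.9      # one detected region (two touching pixels)
pmap[7, 7] = 0.8                   # a second, separate region
```

With two distinct regions this toy ROI is positive at cc = 2 but negative at cc = 3.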
This difference between the two networks' Az and FPF values is statistically significant (z-test; p_Az = .0018, p_FPF = .00001). A second set of data was also tested. 150 ROIs taken from a clinical prospective study and classified as positive by the full Chicago CAD system (including the Chicago neural network) were used to test the HPNN. Though the Chicago CAD system classified all 150 ROIs as positive, only 47 were in fact positive while 103 were negative. We applied the HPNN trained on the entire previous data set to this new set of ROIs. The HPNN was able to reclassify 47 of the 103 negatives as negative, without loss in sensitivity (no false negatives were introduced). On examining the negative examples rejected by the coarse-to-fine HPNN, we found that many of these ROIs contained linear, high-contrast structure which would otherwise be false positives for the Chicago network. The Chicago neural network presumably interprets the "peaks" on the linear structure as calcifications. However, because the coarse-to-fine HPNN also integrates information from low resolution, it can associate these "peaks" with the low-resolution linear structure and reject them.

3 Mass detection

Although microcalcifications are an important cue for malignant masses in mammograms, they are not visible or even present in all cases. Thus mammographic CAD systems include algorithms to directly detect the presence of masses. We have started to apply a fine-to-coarse HPNN architecture to detect malignant masses in digitized mammograms. Radiologists often distinguish malignant from benign masses based on the detailed shape of the mass border and the presence of spicules along the border. Thus, to integrate this high resolution information to detect malignant masses, which are extended objects, we apply the fine-to-coarse HPNN of figure 1B. As for microcalcifications, we apply the HPNN as a post-processor, but here it processes the output of the mass-detection component of the UofC CAD system.
The data in our study consists of 72 positive and 100 negative ROIs. These are 256-by-256 pixels and are sampled at 200 micron resolution. At each level of the fine-to-coarse HPNN several hidden units process the feature images. The outputs of each unit at all of the positions in an image make up a new feature image. This is reduced in resolution by the usual pyramid blur-and-subsample operation to make an input feature image for the network units at the next lower resolution. We trained the entire fine-to-coarse HPNN as one network, instead of training a network for each level, one level at a time. This training is quite straightforward. Back-propagating error through the network units is the same as in conventional networks. We must also back-propagate through the pyramid reduction operation, but this is linear and therefore quite simple. In addition we use the same UOP error function (Equation 1) used to train the coarse-to-fine architecture. The rationale for this application of the UOP error function is that the truth data specifies the location of the center of the mass at the highest resolution. However, because of the sub-sampling, the center cannot be unambiguously assigned to a particular pixel at low resolution. The features input to the fine-to-coarse HPNN are filtered versions of the image, with filter kernels given by

    ψ_{q,p}(r, θ) = (q! / (π (q + |p|)!))^{1/2} r^{|p|} e^{−r²/2} L_q^{|p|}(r²) e^{ipθ}

in polar coordinates, with (q, p) ∈ {(0, 1), (1, 0), (0, 2)}. These are combinations of derivatives of Gaussians, and can be written as combinations of separable filter kernels (products of purely horizontal and vertical filters), so they can be computed at relatively low cost.

    Sensitivity    Coarse-to-Fine HPNN        Fine-to-Coarse HPNN
                   (Microcalcification)       (Mass)
    100%           45%                        32%
    95%            47%                        36%
    90%            63%                        40%
    80%            69%                        78%

Table 2: Detector specificity (% reduction in false positive rate of the UofC CAD system).
They are also easy to steer, since steering is just multiplication by a complex phase factor. We steered these in the radial and tangential directions relative to the tentative mass centers, and used the real and imaginary parts and their squares and products as features. The coordinates of the tentative mass centers are generated by the earlier stages of the CAD system. These features were extracted at each level of the Gaussian pyramid representation of the mass ROI, and used as inputs only to the network units at the same level. The fine-to-coarse HPNN is quite similar to the convolution network proposed by Le Cun et al. [2], however with a few notable differences. The fine-to-coarse HPNN receives as inputs preset features extracted from the image (in this case radial and tangential gradients) at each resolution, whereas the convolution network's inputs are the original pixel values at the highest resolution. Secondly, in the fine-to-coarse HPNN, the inputs to a hidden unit at a particular position are the pixel values at that position in each of the feature images, one pixel value per feature image. Thus the HPNN's hidden units do not learn linear filters, except as linear combinations of the filters used to form the features. Finally, the fine-to-coarse HPNN is trained using the UOP error function, which is not used in the Le Cun network. Currently our best performing fine-to-coarse HPNN system for mass detection has two hidden units per pyramid level. This gives an ROC area of Az = 0.85 and eliminates 36% of the false positives at a cost of missing 5% of the actual positives. To improve performance further, we are investigating different regularizers, richer feature sets, and more complex architectures, i.e., more hidden units.
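The separability claim above is easy to verify numerically: an x-derivative-of-Gaussian kernel factors exactly into the outer product of a 1-D vertical Gaussian and a 1-D horizontal derivative kernel. Kernel size and σ below are arbitrary choices for the demo:

```python
import numpy as np

# Separability of a derivative-of-Gaussian filter: the 2-D kernel
# d/dx G(x, y) equals the outer product of 1-D vertical and horizontal
# kernels, which is what makes these features cheap to compute.
def gauss_1d(x, s):
    return np.exp(-x**2 / (2.0 * s**2))

def dgauss_1d(x, s):
    return -x / s**2 * gauss_1d(x, s)

x = np.arange(-4, 5, dtype=float)     # 9-tap support
s = 1.2
# 2-D kernel built directly ...
X, Y = np.meshgrid(x, x, indexing="xy")
K2d = -X / s**2 * np.exp(-(X**2 + Y**2) / (2.0 * s**2))
# ... and as an outer product of 1-D kernels (separable form).
K_sep = np.outer(gauss_1d(x, s), dgauss_1d(x, s))
```

Convolving with the two short 1-D kernels in sequence therefore gives the same result as the full 2-D convolution at a fraction of the cost.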
4 Conclusion

We have presented the application of multi-resolution neural network architectures to two problems in computer-aided diagnosis: the detection of microcalcifications in mammograms and the direct detection of malignant masses in mammograms. A summary of the performance of these architectures is given in Table 2. In the case of microcalcifications, the coarse-to-fine HPNN architecture successfully discovered large-scale context information that improves the system's performance in detecting small objects. A coarse-to-fine HPNN has been directly integrated with the UofC CAD system for microcalcification detection and the complete system is undergoing clinical evaluation. In the case of malignant masses, a fine-to-coarse HPNN architecture was used to exploit information from fine resolution detail which could be used to differentiate malignant from benign masses. The results of this network are encouraging, but additional improvement is needed. In general, we have found that the multi-resolution HPNNs are a useful class of network architecture for exploiting and integrating information at multiple scales.

5 Acknowledgments

This work was funded by the National Information Display Laboratory, DARPA through ONR contract No. N00014-93-C-0202, and the Murray Foundation. We would like to thank Drs. Robert Nishikawa and Maryellen Giger of The University of Chicago for useful discussions and providing the data.

References

[1] Peter Burt. Smart sensing within a pyramid vision machine. Proceedings IEEE, 76(8):1006-1015, 1988. Also in Neuro-Vision Systems, Gupta and Knopf, eds., 1994.
[2] Y. Le Cun, B. Boser, J. S. Denker, and D. Henderson. Handwritten digit recognition with a back-propagation network. In David S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 396-404, San Mateo, CA, 1991. Morgan-Kaufmann Publishers.
[3] William T. Freeman and Edward H. Adelson.
The design and use of steerable filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-13(9):891-906, 1991.
[4] James D. Keeler, David E. Rumelhart, and Wee-Keng Leow. Integrated segmentation and recognition of hand-printed numerals. In Richard P. Lippmann, John E. Moody, and David S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 557-563, San Mateo, CA, 1991. Morgan-Kaufmann Publishers.
[5] Charles Metz. Current problems in ROC analysis. In Proceedings of the Chest Imaging Conference, pages 315-33, Madison, WI, November 1988.
[6] R. M. Nishikawa, R. C. Haldemann, J. Papaioannou, M. L. Giger, P. Lu, R. A. Schmidt, D. E. Wolverton, U. Bick, and K. Doi. Initial experience with a prototype clinical intelligent mammography workstation for computer-aided diagnosis. In Murray H. Loew and Kenneth M. Hanson, editors, Medical Imaging 1995, volume 2434, pages 65-71, Bellingham, WA, 1995. SPIE.
[7] Clay D. Spence. Supervised learning of detection and classification tasks with uncertain training data. In Image Understanding Workshop. ARPA, 1996.
[8] Clay D. Spence, John C. Pearson, and Jim Bergen. Coarse-to-fine image search using neural networks. In Gerald Tesauro, David S. Touretzky, and Todd K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 981-988, Cambridge, MA, 1994. MIT Press.
[9] W. Zhang, K. Doi, M. L. Giger, Y. Wu, R. M. Nishikawa, and R. Schmidt. Computerized detection of clustered microcalcifications in digital mammograms using a shift-invariant artificial neural network. Medical Physics, 21(4):517-524, April 1994.
1998
Orientation, Scale, and Discontinuity as Emergent Properties of Illusory Contour Shape

Karvel K. Thornber
NEC Research Institute
4 Independence Way
Princeton, NJ 08540

Lance R. Williams
Dept. of Computer Science
University of New Mexico
Albuquerque, NM 87131

Abstract

A recent neural model of illusory contour formation is based on a distribution of natural shapes traced by particles moving with constant speed in directions given by Brownian motions. The input to that model consists of pairs of position and direction constraints and the output consists of the distribution of contours joining all such pairs. In general, these contours will not be closed and their distribution will not be scale-invariant. In this paper, we show how to compute a scale-invariant distribution of closed contours given position constraints alone and use this result to explain a well known illusory contour effect.

1 INTRODUCTION

It has been proposed by Mumford [3] that the distribution of illusory contour shapes can be modeled by particles travelling with constant speed in directions given by Brownian motions. More recently, Williams and Jacobs [7, 8] introduced the notion of a stochastic completion field, the distribution of particle trajectories joining pairs of position and direction constraints, and showed how it could be computed in a local parallel network. They argued that the mode, magnitude and variance of the completion field are related to the observed shape, salience, and sharpness of illusory contours. Unfortunately, the Williams and Jacobs model, as described, has some shortcomings. Recent psychophysics suggests that contour salience is greatly enhanced by closure [2]. Yet, in general, the distribution computed by the Williams and Jacobs model does not consist of closed contours. Nor is it scale-invariant: doubling the distances between the constraints does not produce a comparable completion field of double the size without a corresponding doubling of the particles' speeds. However, the Williams and Jacobs model contains no intrinsic mechanism for speed selection. The speeds (like the directions) must be specified a priori. In this paper, we show how to compute a scale-invariant distribution of closed contours given position constraints alone.

2 TECHNICAL DETAILS

2.1 SHAPE DISTRIBUTION

Consistent with our earlier work [5, 6], in this paper we do not use the same distribution described by Mumford [3] but instead assume a distribution of completion shapes consisting of straight-line base-trajectories modified by random impulses drawn from a mixture of two limiting distributions. The first distribution consists of weak but frequently acting impulses (we call this the Gaussian-limit). The distribution of these weak impulses has zero mean and variance equal to σ_g². The weak impulses act at Poisson times with rate R_g. The second distribution consists of strong but infrequently acting impulses (we call this the Poisson-limit). Here, the magnitude of the random impulses is Gaussian distributed with zero mean. However, the variance is equal to σ_p² (where σ_p² >> σ_g²). The strong impulses act at Poisson times with rate R_p << R_g. Particles decay with half-life equal to a parameter τ. The effect is that particles tend to travel in smooth, short paths punctuated by occasional orientation discontinuities. See [5, 6].

2.2 EIGENSOURCES

Let i and j be position and velocity constraints, (x_i, ẋ_i) and (x_j, ẋ_j). Then P(j | i) is the conditional probability that a particle beginning at i will reach j. Note that these transition probabilities are not symmetric, i.e., P(j | i) ≠ P(i | j). However, by time-reversal symmetry, P(j | i) = P(I | J) where I = (x_i, −ẋ_i) and J = (x_j, −ẋ_j). Given only the matrix of transition probabilities, P, we would like to compute the relative number of closed contours satisfying a given position and velocity constraint. We begin by noting that, due to their randomness, only smaller and smaller fractions of contours are likely to satisfy increasing numbers of constraints. Suppose we let s_i^(1) contours start at x_i with ẋ_i. Then

    s_j^(2) = Σ_i P(j | i) s_i^(1)

is the relative number of contours through x_j with ẋ_j, i.e., which satisfy two constraints. In general,

    s_j^(n+1) = Σ_i P(j | i) s_i^(n).

Now suppose we compute the eigenvector with largest real positive eigenvalue and take s_i^(1) = s_i. Then clearly s_i^(n+1) = λ^n s_i. This implies that as the number of constraints satisfied increases by one, the number of contours remaining in the sample of interest decreases by λ. However, the ratios of the s_i remain invariant. Letting n pass to infinity, we see that the s_i are just the relative numbers of contours through i. To summarize, having started with all possible contours, we are now left with only those bridging pairs of constraints at all past times. By solving λs = Ps for s we know their relative numbers. We refer to the components of s as the eigensources of the stochastic completion field.

2.3 STOCHASTIC COMPLETION FIELDS

Note that the eigensources alone do not represent a distribution of closed contours. In fact, the majority of contours contributing to s will not satisfy a single additional constraint. However, the following recurrence equation gives the number of contours which begin at constraint i, end at constraint j, and satisfy n − 1 intermediate constraints:

    p^(n+1)(j | i) = Σ_k P(j | k) p^(n)(k | i)   where   p^(1)(j | i) = P(j | i).
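The eigensources can be computed by simple power iteration on P. In the sketch below, P is a small random positive matrix standing in for the model's transition probabilities; by the Perron-Frobenius theorem its dominant eigenvalue is real and positive and the corresponding eigenvector can be taken strictly positive, as the eigensource interpretation requires.

```python
import numpy as np

# Power iteration for the eigensources: the right eigenvector of the
# transition matrix P with largest positive real eigenvalue (lam s = P s).
rng = np.random.default_rng(7)
P = rng.uniform(0.0, 1.0, (6, 6)) * 0.2      # positive stand-in for P(j|i)

def dominant_eigvec(M, iters=500):
    s = np.ones(M.shape[0])
    for _ in range(iters):
        s = M @ s
        s /= np.linalg.norm(s)
    lam = s @ (M @ s)                         # Rayleigh quotient at convergence
    return lam, s

lam, s = dominant_eigvec(P)
```

At convergence the iterate satisfies the eigenvalue equation, and the components of s are all positive, so they can be read as relative contour counts.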
We begin by noting that, due to their randomness, only increasingly small fractions of contours are likely to satisfy increasing numbers of constraints. Suppose we let s_i^(1) contours start at x_i with velocity ẋ_i. Then

s_j^(2) = Σ_i P(j | i) s_i^(1)

is the relative number of contours through x_j with ẋ_j, i.e., which satisfy two constraints. In general,

s_j^(n+1) = Σ_i P(j | i) s_i^(n).

Now suppose we compute the eigenvector, s, of P with largest real positive eigenvalue, λ, and take s_i^(1) = s_i. Then clearly s_i^(n+1) = λ^n s_i. This implies that as the number of constraints satisfied increases by one, the number of contours remaining in the sample of interest decreases by λ. However, the ratios of the s_i remain invariant. Letting n pass to infinity, we see that the s_i are just the relative number of contours through i. To summarize, having started with all possible contours, we are now left with only those bridging pairs of constraints at all past-times. By solving λs = Ps for s we know their relative numbers. We refer to the components of s as the eigensources of the stochastic completion field.

2.3 STOCHASTIC COMPLETION FIELDS Note that the eigensources alone do not represent a distribution of closed contours. In fact, the majority of contours contributing to s will not satisfy a single additional constraint. However, the following recurrence equation gives the number of contours which begin at constraint i and end at constraint j and satisfy n - 1 intermediate constraints:

P^(n+1)(j | i) = Σ_k P(j | k) P^(n)(k | i), where P^(1)(j | i) = P(j | i).
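The eigensource computation λs = Ps amounts to finding the dominant eigenvector of the transition matrix, which power iteration does directly. A minimal sketch, using a made-up 3 x 3 matrix in place of the 576 x 576 matrix of the experiments:

```python
import numpy as np

def eigensources(P, n_iter=500):
    """Power iteration: return (lambda_max, s) with lambda_max * s = P @ s."""
    s = np.ones(P.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        s_new = P @ s
        lam = np.linalg.norm(s_new)   # for a positive matrix this converges
        s = s_new / lam               # to the Perron (largest real) eigenvalue
    return lam, s

# Toy transition matrix with made-up probabilities (illustrative only).
P = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.1, 0.4],
              [0.2, 0.3, 0.3]])
lam, s = eigensources(P)
```

By the Perron-Frobenius theorem, a matrix with positive entries has a unique largest real positive eigenvalue with a positive eigenvector, so the iteration above is well behaved.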
Given the above recurrence equation, we can define an expression for the relative number of contours of any length which begin and end at constraint i:

c_i = lim_{n→∞} P^(n)(i | i) / Σ_j P^(n)(j | j).

Using a result from the theory of positive matrices[1], it is possible to show that the above expression is simply

c_i = s_i s̄_i / Σ_j s_j s̄_j,

where s and s̄ are the right and left eigenvectors of P with largest positive real eigenvalue, i.e., λs = Ps and λs̄ = P^T s̄. Because of the time-reversal symmetry of P, the right and left eigenvectors are related by a permutation which exchanges opposite directions, i.e., s̄_i = s_I (where I is the constraint i with its direction reversed). Finally, given s and s̄, it is possible to compute the relative number of closed contours through an arbitrary position and velocity in the plane, i.e., to compute the stochastic completion field. If ω = (x, ẋ) is an arbitrary position and velocity in the plane, then

C(ω) = (1 / (λ s̄·s)) Σ_i P(ω | i) s_i · Σ_j P(j | ω) s̄_j

gives the relative probability that a closed contour will pass through ω. Note that this is a natural generalization of the Williams and Jacobs[7] factorization of the completion field into the product of source and sink fields.

2.4 SCALE-INVARIANCE Under the restriction that particles have constant speed, the transition probability matrix, P, becomes block-diagonal. Each block corresponds to a different possible speed, γ. Since the components of any given eigenvector will be confined to a single block, we can consider P to be a function of γ and solve:

λ(γ) s(γ) = P(γ) s(γ).

Let λ_max(γ) be the largest positive real eigenvalue of P(γ) and let γ_max be the speed where λ_max(γ) is maximized. Then s_max(γ_max), i.e., the eigenvector of P(γ_max) associated with λ_max(γ_max), is the limiting distribution over all spatial scales.
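The closed-contour weights c_i can be computed directly from the right and left eigenvectors of P. A sketch with a toy positive matrix standing in for the real transition matrix:

```python
import numpy as np

# Sketch: c_i = s_i * sbar_i / sum_j s_j * sbar_j, where s and sbar are the
# right and left eigenvectors of P for the largest positive real eigenvalue.
# The 3x3 matrix below is illustrative only.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.4, 0.3]])

w, V = np.linalg.eig(P)           # right eigenvectors of P
wl, U = np.linalg.eig(P.T)        # left eigenvectors (right eigenvectors of P^T)
s = np.abs(V[:, np.argmax(w.real)].real)      # Perron vector, taken positive
sbar = np.abs(U[:, np.argmax(wl.real)].real)

c = s * sbar / np.sum(s * sbar)   # closed-contour weights, normalized to sum 1
```

Since P and P^T share the same eigenvalues, the same λ_max is picked out on both sides; for a positive matrix the Perron eigenvector can always be scaled to have positive entries, which the `np.abs` above exploits.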
3 EXPERIMENTS

3.1 EIGHT POINT CIRCLE Given eight points spaced uniformly around the perimeter of a circle of diameter d = 16, we would like to find the distribution of directions through each point and the corresponding completion field (Figure 1 (left)). Neither the order of traversal, the directions, i.e., ẋ_i/|ẋ_i|, nor the speed, i.e., γ = |ẋ_i|, are specified a priori. In all of our experiments, we sample direction at 5° intervals. Consequently, there are 72 discrete directions and 576 position-direction pairs, i.e., P(γ) is of size 576 x 576.¹

¹The parameters defining the distribution of completion shapes are T = R_g σ_g² = 0.0005 and τ = 9.5. For simplicity, we assume the pure Gaussian-limit case described in [6].

Figure 1: Left: (a) The eight position constraints. Neither the order of traversal, directions, or speed are specified a priori. (b) The eigenvector s_max(γ_max) represents the limiting distribution over all spatial scales. (c) The product of s_max(γ_max) and s̄_max(γ_max). Orientations tangent to the circle dominate the distribution of closed contours. (d) The stochastic completion field, C, due to s_max(γ_max). Right: Plot of magnitude of maximum positive real eigenvalue, λ_max, vs. log_1.1(1/γ) for the eight point circle with d = 16.0 (solid) and d = 32.0 (dashed).

Figure 2: Observers report that as the width of the arms increases, the shape of the illusory contour changes from a circle to a square[4].

First, we evaluated λ_max(γ) over the velocity interval [1.1^(-1), 1.1^(-30)] using standard numerical routines and plotted the magnitude of the largest real positive eigenvalue, λ_max, vs. log_1.1(1/γ). The function reaches its maximum value at γ_max ≈ 1.1^(-20).
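The sweep just described (evaluate λ_max over a grid of speeds, keep the argmax) can be sketched as follows. The family P(γ) below is a toy stand-in with an interior maximum, not the contour model; only the selection loop itself is faithful to the text.

```python
import numpy as np

def lambda_max(P):
    """Largest real part among the eigenvalues; for a nonnegative matrix this
    is the Perron eigenvalue."""
    return np.max(np.linalg.eigvals(P).real)

rng = np.random.default_rng(0)
P0 = rng.random((6, 6))                      # toy nonnegative base matrix

speeds = 1.1 ** -np.arange(1, 31)            # gamma = 1.1^-1 ... 1.1^-30

def P_of_gamma(g, g0=1.1 ** -15):
    # Toy dependence on speed, peaking at g0; a stand-in for the true
    # speed-indexed block of the transition matrix (an assumption).
    return P0 * np.exp(-np.log(g / g0) ** 2)

lams = np.array([lambda_max(P_of_gamma(g)) for g in speeds])
gamma_max = speeds[np.argmax(lams)]          # selected speed / spatial scale
```

For the real model, `P_of_gamma` would be the 576 x 576 transition matrix built from the particle process at speed γ; everything else carries over unchanged.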
Consequently, the eigenvector s_max(1.1^(-20)) represents the limiting distribution over all spatial scales (Figure 1 (right)). Next, we scaled the test figure by a factor of two, i.e., d' = 32.0, and plotted λ'_max(γ) over the same interval (Figure 1 (right)). We observe that λ'_max(1.1^(-x+7)) ≈ λ_max(1.1^(-x)), i.e., when plotted using a logarithmic x-axis, the functions are identical except for a translation. It follows that γ'_max ≈ 1.1^7 x γ_max ≈ 2.0 x γ_max. This confirms the scale-invariance of the system: doubling the size of the figure results in a doubling of the selected speed.

3.2 KOFFKA CROSS The Koffka Cross stimulus (Figure 2) has two basic degrees of freedom which we call diameter (i.e., d) and arm width (i.e., w) (Figure 3 (a)). We are interested in how the stochastic completion field changes as these parameters are varied. Observers report that as the width of the arms increases, the shape of the illusory contour changes from a circle to a square[4].

Figure 3: (a) Koffka Cross showing diameter, d, and width, w. (b) Orientation and position constraints in terms of d and w; the line endpoints lie at (±0.5w, ±0.5d) and (±0.5d, ±0.5w). The normal orientation at each endpoint is indicated by the solid lines while the dashed lines represent plus or minus one standard deviation (i.e., 12.8°) of the Gaussian weighting function. (c) Typically perceived as a square. (d) Typically perceived as a circle. The positions of the line endpoints are the same.

The endpoints of the lines comprising the Koffka Cross can be used to define a set of position and orientation constraints (Figure 3 (b)). The position constraints are specified in terms of the parameters d and w.
The orientation constraints take the form of a Gaussian weighting function which assigns higher probabilities to contours passing through the endpoints with orientations normal to the lines.² The prior probabilities assigned to each position-direction pair by the Gaussian weighting function form a diagonal matrix, D, and the eigenproblem becomes

λ(γ) s(γ) = Q(γ) s(γ), with Q(γ) = D P(γ),

where P(γ) is the transition probability matrix for the random process at scale γ, λ(γ) is an eigenvalue of Q(γ), and s(γ) is the corresponding eigenvector. Let λ_max(γ) be the largest positive real eigenvalue of Q(γ) and let γ_max be the scale where λ_max(γ) is maximized. Then s_max(γ_max), i.e., the eigenvector of Q(γ_max) associated with λ_max(γ_max), is the limiting distribution over all spatial scales. First, we used a Koffka Cross where d = 2.0 and w = 0.5 and evaluated λ_max(γ) over the velocity interval [8.0 x 1.1^(-1), 8.0 x 1.1^(-80)] using standard numerical routines.³ The function reaches its maximum value at γ_max ≈ 8.0 x 1.1^(-62) (Figure 4 (left)). Observe that the completion field due to the eigenvector s_max(8.0 x 1.1^(-62)) is dominated by contours of a predominantly circular shape (Figure 4 (right)). We then uniformly scaled the Koffka Cross figure by a factor of two, i.e., d' = 4.0 and

²Observe that Figure 3 (c) is perceived as a square while Figure 3 (d) is perceived as a circle. Yet the positions of the line endpoints are the same. It follows that the orientations of the lines affect the percept. We have chosen to model this dependence through the use of a Gaussian weighting function which favors contours passing through the endpoints of the lines in the normal direction. It is possible to motivate this based on the statistics of natural scenes. The distribution of relative orientations at contour crossings is maximum at 90° and drops to nearly zero at 0° and 180°.

³The parameters defining the distribution of completion shapes were: T = R_g σ_g² = 0.0005, τ = 9.5, ε_p = σ_p²/τ = 100.0 and R_p = 1.0 x 10^(-8).
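The Gaussian orientation prior can be folded into the eigenproblem as a diagonal matrix. In the sketch below, `Q = D @ P` is one reading of the garbled equation in the text and should be treated as an assumption; the toy uniform transition matrix is illustrative only.

```python
import numpy as np

# Sketch: build the diagonal prior D from a Gaussian weighting (std 12.8
# degrees) around the normal orientation at a line endpoint, then weight
# the transition matrix with it before solving the eigenproblem.
sigma = np.deg2rad(12.8)
directions = np.deg2rad(np.arange(0, 360, 5))   # 72 discrete directions
normal = np.deg2rad(90.0)                       # normal orientation (assumed)

# smallest signed angular difference to the normal, wrapped to (-pi, pi]
diff = np.angle(np.exp(1j * (directions - normal)))
prior = np.exp(-0.5 * (diff / sigma) ** 2)

D = np.diag(prior)
P = np.full((72, 72), 1.0 / 72)                 # toy transition matrix
Q = D @ P                                       # weighted eigenproblem matrix
lam = np.max(np.linalg.eigvals(Q).real)         # largest positive real eigenvalue
```

In the full model there would be one prior entry per position-direction pair (576 of them), with each endpoint contributing its own normal direction.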
As an anti-aliasing measure, the transition probabilities, P(j | i), were averaged over initial conditions modeled as Gaussians of variance σ_x² = σ_y² = 0.00024 and σ_θ² = 0.0019. See [6].

Figure 4: Left: Plot of magnitude of maximum positive real eigenvalue, λ_max, vs. log_1.1(1/γ) for Koffka Crosses with d = 2.0 and w = 0.5 (solid) and d = 4.0 and w = 1.0 (dashed). Right: The completion field due to the eigenvector, s_max(8.0 x 1.1^(-62)).

w' = 1.0, and plotted λ'_max(γ) over the same interval (Figure 4 (left)). Observe that λ'_max(8.0 x 1.1^(-x+7)) ≈ λ_max(8.0 x 1.1^(-x)). As before, this confirms the scale-invariance of the system. Next, we studied how the relative magnitudes of the local maxima of λ_max(γ) change as the parameter w is varied. We begin with a Koffka Cross where d = 2.0 and w = 0.5 and observe that λ_max(γ) has two local maxima (Figure 5 (left)). We refer to the larger of these maxima as γ_circle. As previously noted, this maximum is located at approximately 8.0 x 1.1^(-62). The second maximum is located at approximately 8.0 x 1.1^(-32). When the completion field due to the eigenvector s_max(8.0 x 1.1^(-32)) is rendered, we observe that the distribution is dominated by contours of predominantly square shape (Figure 5 (a)). For this reason, we refer to this local maximum as γ_square. Now consider a Koffka Cross where the widths of the arms are doubled but the diameter remains the same, i.e., d' = 2.0 and w' = 1.0. We observe that λ'_max(γ) still has two local maxima, one at approximately 8.0 x 1.1^(-63) and a second at approximately 8.0 x 1.1^(-29) (Figure 5 (left)).
When we render the completion fields due to the eigenvectors s'_max(8.0 x 1.1^(-63)) and s'_max(8.0 x 1.1^(-29)), we find that the completion fields have the same general character as before: the contours associated with the smaller spatial scale (i.e., lower speed) are approximately circular and those associated with the larger spatial scale (i.e., higher speed) are approximately square (Figure 5 (d) and (c)). Accordingly, we refer to the locations of the respective local maxima as γ'_circle and γ'_square. However, what is most interesting is that the relative magnitudes of the local maxima have reversed. Whereas we previously observed that λ_max(γ_circle) > λ_max(γ_square), we now observe that λ'_max(γ'_square) > λ'_max(γ'_circle). Therefore, the completion field due to the eigenvector s'_max(γ'_square) [not s'_max(γ'_circle)!] represents the limiting distribution over all spatial scales. This is consistent with the transition from circle to square reported by human observers when the widths of the arms of the Koffka Cross are increased.

Figure 5: Plot of magnitude of maximum positive real eigenvalue, λ_max, vs. log_1.1(1/γ) for Koffka Crosses with d = 2.0 and w = 0.5 (solid) and d = 2.0 and w = 1.0 (dashed). Stochastic completion fields for the Koffka Cross due to (a) s_max(γ_square), a local optimum for w = 0.5; (b) s_max(γ_circle), the global optimum for w = 0.5; (c) s'_max(γ'_square), the global optimum for w = 1.0; (d) s'_max(γ'_circle), a local optimum for w = 1.0. These results are consistent with the circle-to-square transition perceived by human subjects when the width of the arms of the Koffka Cross is increased.

4 CONCLUSION We have improved upon a previous model of illusory contour formation by showing how to compute a scale-invariant distribution of closed contours given position constraints alone.
We also used our model to explain a previously unexplained perceptual effect.

References
[1] Horn, R.A., and C.R. Johnson, Matrix Analysis, Cambridge Univ. Press, p. 500, 1985.
[2] Kovacs, I. and B. Julesz, A Closed Curve is Much More than an Incomplete One: Effect of Closure in Figure-Ground Segmentation, Proc. Natl. Acad. Sci. USA, 90, pp. 7495-7497, 1993.
[3] Mumford, D., Elastica and Computer Vision, Algebraic Geometry and Its Applications, Chandrajit Bajaj (ed.), Springer-Verlag, New York, 1994.
[4] Sambin, M., Angular Margins without Gradients, Italian Journal of Psychology 1, pp. 355-361, 1974.
[5] Thornber, K.K. and L.R. Williams, Analytic Solution of Stochastic Completion Fields, Biological Cybernetics 75, pp. 141-151, 1996.
[6] Thornber, K.K. and L.R. Williams, Characterizing the Distribution of Completion Shapes with Corners Using a Mixture of Random Processes, Intl. Workshop on Energy Minimization Methods in Computer Vision, Venice, Italy, 1997.
[7] Williams, L.R. and D.W. Jacobs, Stochastic Completion Fields: A Neural Model of Illusory Contour Shape and Salience, Neural Computation 9(4), pp. 837-858, 1997.
[8] Williams, L.R. and D.W. Jacobs, Local Parallel Computation of Stochastic Completion Fields, Neural Computation 9(4), pp. 859-881, 1997.
Divisive Normalization, Line Attractor Networks and Ideal Observers Sophie Deneve¹, Alexandre Pouget¹, and P.E. Latham² ¹Georgetown Institute for Computational and Cognitive Sciences, Georgetown University, Washington, DC 20007-2197 ²Dept. of Neurobiology, UCLA, Los Angeles, CA 90095-1763, U.S.A.

Abstract Gain control by divisive inhibition, a.k.a. divisive normalization, has been proposed to be a general mechanism throughout the visual cortex. We explore in this study the statistical properties of this normalization in the presence of noise. Using simulations, we show that divisive normalization is a close approximation to a maximum likelihood estimator, which, in the context of population coding, is the same as an ideal observer. We also demonstrate analytically that this is a general property of a large class of nonlinear recurrent networks with line attractors. Our work suggests that divisive normalization plays a critical role in noise filtering, and that every cortical layer may be an ideal observer of the activity in the preceding layer.

Information processing in the cortex is often formalized as a sequence of linear stages followed by a nonlinearity. In the visual cortex, the nonlinearity is best described by squaring combined with a divisive pooling of local activities. The divisive part of the nonlinearity has been extensively studied by Heeger and colleagues [1], and several authors have explored the role of this normalization in the computation of high order visual features such as orientation of edges or first and second order motion[4]. We show in this paper that divisive normalization can also play a role in noise filtering. More specifically, we demonstrate through simulations that networks implementing this normalization come close to performing maximum likelihood estimation.
We then demonstrate analytically that the ability to perform maximum likelihood estimation, and thus efficiently extract information from a population of noisy neurons, is a property exhibited by a large class of networks. Maximum likelihood estimation is a framework commonly used in the theory of ideal observers. A recent example comes from the work of Itti et al., 1998, who have shown that it is possible to account for the behavior of human subjects in simple discrimination tasks. Their model comprised two distinct stages: 1) a network which models the noisy response of neurons with tuning curves to orientation and spatial frequency combined with divisive normalization, and 2) an ideal observer (a maximum likelihood estimator) to read out the population activity of the network. Our work suggests that there is no need to distinguish between these two stages, since, as we will show, divisive normalization comes close to providing a maximum likelihood estimate. More generally, we propose that there may not be any part of the cortex that acts as an ideal observer for patterns of activity in sensory areas but, instead, that each cortical layer acts as an ideal observer of the activity in the preceding layer.

1 The network Our network is a simplified model of a cortical hypercolumn for spatial frequency and orientation. It consists of a two dimensional array of units in which each unit is indexed by its preferred orientation, θ_i, and spatial frequency, λ_j.

1.1 LGN model Units in the cortical layer are assumed to receive direct inputs from the lateral geniculate nucleus (LGN). Here we do not model the LGN explicitly, but focus instead on the pooled LGN input onto each cortical unit.
The input to each unit is denoted a_ij. We distinguish between the mean pooled LGN input, f_ij(θ, λ), as a function of orientation, θ, and spatial frequency, λ, and the noise distribution around this mean, P(a_ij | θ, λ). In response to a stimulus of orientation θ, spatial frequency λ, and contrast C, the mean LGN input onto unit ij is a circular Gaussian with a small amount of spontaneous activity, ν:

f_ij(θ, λ) = KC exp( (cos(λ - λ_j) - 1)/σ_λ² + (cos(θ - θ_i) - 1)/σ_θ² ) + ν,  (1)

where K is a constant. Note that spatial frequency is treated as a periodic variable; this was done for convenience only and should have negligible effects on our results as long as we keep λ far from 2πn, n an integer. On any given trial the LGN input to cortical unit ij, a_ij, is sampled from a Gaussian noise distribution with variance σ_ij²:

P(a_ij | θ, λ) ∝ exp( -(a_ij - f_ij(θ, λ))² / 2σ_ij² ).  (2)

In our simulations, the variance of the noise was either kept fixed (σ_ij² = σ²) or set to the mean activity (σ_ij² = f_ij(θ, λ)). The latter is more consistent with the noise that has been measured experimentally in the cortex. We show in Figure 1-A an example of a noisy LGN pattern of activity.

1.2 Cortical Model: Divisive Normalization Activities in the cortical layer are updated over time according to:

Figure 1: A- LGN input (bottom) and stable hill in the cortical network after relaxation (top). The position of the stable hill can be used to estimate orientation (θ̂) and spatial frequency (λ̂). B- Inverse of the variance of the network estimate for orientation using Gaussian noise with variance equal to the mean, as a function of contrast and number of iterations (0, dashed; 1, diamond; 2, circle; and 3, square). The continuous curve corresponds to the theoretical upper bound on the inverse of the variance (i.e., an ideal observer).
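Equations 1 and 2 can be sketched directly. The grid size below is an illustrative assumption; K, C, and the tuning widths follow the simulation parameters given later in the text, and the small spontaneous activity ν is set arbitrarily.

```python
import numpy as np

# Sketch of the pooled LGN input (equation 1) plus noise with
# variance equal to the mean (one of the two cases in equation 2).
P = 20                                        # units per dimension (assumed)
theta_pref = 2 * np.pi * np.arange(P) / P     # preferred orientations
lam_pref = 2 * np.pi * np.arange(P) / P       # preferred spatial frequencies
K, C, nu = 74.0, 1.0, 0.1
sig_t = sig_l = 1 / np.sqrt(8)

def mean_input(theta0, lam0):
    th = (np.cos(theta0 - theta_pref) - 1) / sig_t ** 2
    la = (np.cos(lam0 - lam_pref) - 1) / sig_l ** 2
    return K * C * np.exp(th[:, None] + la[None, :]) + nu

rng = np.random.default_rng(1)
f = mean_input(np.pi, np.pi)                  # mean LGN input, equation 1
a = f + rng.normal(scale=np.sqrt(f))          # noisy trial, variance = mean
```

The resulting `a` plays the role of the noisy LGN pattern shown in Figure 1-A.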
C- Gain curve for contrast for the cortical units after 1, 2 and 3 iterations.

o_ij(t+1) = u_ij(t)² / (S + μ Σ_kl u_kl(t)²), with u_ij(t) = Σ_kl w_ij,kl o_kl(t),  (3)

where {w_ij,kl} are the filtering weights, o_ij(t) is the activity of unit ij at time t, S is a constant, and μ is what we call the divisive inhibition weight. The filtering weights implement a two dimensional Gaussian filter:

w_ij,kl = w_{i-k,j-l} = K_w exp( (cos[2π(i - k)/P] - 1)/σ_wθ² + (cos[2π(j - l)/P] - 1)/σ_wλ² ),  (4)

where K_w is a constant, σ_wθ and σ_wλ control the width of the filtering weights, and there are P² units. On each iteration the activity is filtered by the weights, squared, and then normalized by the total local activity. Divisive normalization per se only involves the squaring and division by local activity. We have added the filtering weights to obtain a local pooling of activity between cells with similar preferred orientations and spatial frequencies. This pooling can easily be implemented with cortical lateral connections and it is reasonable to think that such a pooling takes place in the cortex.

2 Simulation Results Our simulations consist of iterating equation 3 with initial conditions determined by the presentation orientation and spatial frequency. The initial conditions are chosen as follows: For a given presentation angle, θ_0, and spatial frequency, λ_0, determine the mean cortical activity, f_ij(θ_0, λ_0), via equation 1. Then generate the actual cortical activity, {a_ij}, by sampling from the distribution given in equation 2. This serves as our set of initial conditions: o_ij(t = 0) = a_ij. Iterating equation 3 with the above initial conditions, we found that for very low contrast the activity of all cortical units decayed to zero. Above some contrast threshold, however, the activities converged to a smooth stable hill (see Figure 1-A for an example with parameters σ_wθ = σ_wλ = σ_θ = σ_λ = 1/√8, K = 74, C = 1, μ = 0.01).
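The filter-square-normalize update can be sketched with an FFT-based circular convolution (the weights in equation 4 are periodic in both indices). The exact update rule o ← u²/(S + μ Σ u²) follows the text's description of equation 3 and should be read as our assumption; S and the grid size are illustrative.

```python
import numpy as np

# Sketch of one divisive-normalization step: filter with the Gaussian
# weights of equation 4, square, and divide by the pooled squared activity.
P = 20
S, mu, Kw = 0.1, 0.01, 1.0
sw = 1 / np.sqrt(8)
idx = np.arange(P)
g = np.exp((np.cos(2 * np.pi * idx / P) - 1) / sw ** 2)  # 1-d circular kernel
W = Kw * np.outer(g, g)                                  # separable 2-d kernel

def step(o):
    # circular convolution with w_{i-k, j-l}, via the FFT
    u = np.real(np.fft.ifft2(np.fft.fft2(o) * np.fft.fft2(W)))
    u2 = u ** 2
    return u2 / (S + mu * u2.sum())

rng = np.random.default_rng(2)
o = rng.random((P, P))     # stand-in for the noisy initial conditions a_ij
for _ in range(3):         # 2-3 iterations suffice, per the text
    o = step(o)
```

The FFT route is just a convenience: because the weights depend only on (i-k, j-l) modulo P, filtering is a circular convolution, which the product of 2-D Fourier transforms computes exactly.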
The width of the hill is controlled by the width of the filtering weights. Its peak, on the other hand, depends on the orientation and spatial frequency of the LGN input, θ_0 and λ_0. The peak can thus be used to estimate these quantities (see Figure 1-A). To compute the position of the final hill, we used a population vector estimator [3], although any unbiased estimator would work as well. In all cases we looked at, the network produced an unbiased estimate of θ_0 and λ_0. In our simulations we adjusted σ_wθ and σ_wλ so that the stable hill had the same profile as the mean LGN input (equation 1). As a result, the tuning curves of the cortical units match the tuning curves specified by the pooled LGN input. For this case, we found that the estimate obtained from the network has a variance close to the theoretical minimum, known as the Cramer-Rao bound [3]. For Gaussian noise of fixed variance, the variance of the estimate was 16.6% above this bound, compared to 3833% for the population vector applied directly to the LGN input. In a 1-D network (orientation alone), these numbers go to 12.9% for the network versus 613% for the population vector. For Gaussian noise with variance proportional to the mean, the network was 8.8% above the bound, compared to 722% for the population vector applied directly to the input. These numbers are respectively 9% and 108% for the 1-D network. The network is therefore a close approximation to a maximum likelihood estimator, i.e., it is close to being an ideal observer of the LGN activity with respect to orientation and spatial frequency. As long as the contrast, C, was superthreshold, large variations in contrast did not affect our results (Figure 1-B). However, the tuning of the network units to contrast after reaching the stable state was found to follow a step function whereas, for real neurons, the curves are better described by a sigmoid [2].
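The population-vector read-out used to locate the hill's peak can be sketched in a few lines: each unit votes with a unit vector at its preferred angle, weighted by its activity, and the angle of the resultant is the estimate.

```python
import numpy as np

# Sketch of the population-vector estimator for a 1-D (orientation) network;
# the grid size and hill width are illustrative assumptions.
P = 20
theta_pref = 2 * np.pi * np.arange(P) / P

def population_vector(activity):
    z = np.sum(activity * np.exp(1j * theta_pref))
    return np.angle(z) % (2 * np.pi)

# A noiseless circular-Gaussian hill centred on theta0 = pi should be
# decoded exactly, since the weighted votes are symmetric about pi.
theta0 = np.pi
act = np.exp((np.cos(theta0 - theta_pref) - 1) * 8)
est = population_vector(act)
```

Applied directly to the noisy LGN input this estimator is far from optimal (the 613%-3833% figures above); applied to the hill after a few normalization steps it comes close to the Cramer-Rao bound.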
Improved agreement with experiment was achieved by taking only 2-3 iterations, at which point the performance of the network is close to optimal (Figure 1-B) and the tuning curves to contrast are more realistic and closer to sigmoids (Figure 1-C). Therefore, reaching a stable state is not required for optimal performance, and in fact leads to contrast tuning curves that are inconsistent with experiment.

3 Mathematical Analysis We first prove that line attractor networks with sufficiently small noise are close approximations to a maximum likelihood estimator. We then show how this result applies to our simulations with divisive normalization.

3.1 General Case: Line Attractor Networks Let o_n be the activity vector (denoted by bold type) at discrete time, n, for a set of P interconnected units. We consider a one dimensional network, i.e., only one feature is encoded; generalization to multidimensional networks is straightforward. A generic mapping for this network may be written

o_{n+1} = H(o_n),  (5)

where H is a nonlinear function. We assume that this mapping admits a line attractor, which we denote G(θ), for which G(θ) = H(G(θ)), where θ is a continuous variable.¹ Let the initial state of the network be a function of the presentation parameter, θ_0, plus noise,

o_0 = F(θ_0) + N,  (6)

where F(θ_0) is the function used to generate the data (in our simulations this would correspond to the mean LGN input, equation 1). Iterating the mapping, equation 5, leads eventually to a point on the line attractor. Consequently, as n → ∞, o_n → G(θ). The parameter θ provides an estimate of θ_0. To determine how well the network does we need to find δθ ≡ θ - θ_0 as a function of the noise, N, then average over the noise to compute the mean and variance of δθ. Because the mapping, equation 5, is nonlinear, this cannot be done exactly. For small noise, however, we can take a perturbative approach and expand around a point on the attractor.
For line attractors there is no general method for choosing which point on the attractor to expand around. Our approach will be to expand around an arbitrary point, G(θ), and choose θ by requiring that the quadratic terms be finite. Keeping terms up to quadratic order, equation 6 may be written

o_n = G(θ) + δo_n,  (7)

δo_n = J^n · δo_0 + (1/2) Σ_{m=0}^{n-1} (J^m · δo_0) · H″ · (J^m · δo_0),  (8)

where J(θ) ≡ [∂_{G(θ)} H(G(θ))]^T is the Jacobian (the subscript T means transpose), H″ is the Hessian of H evaluated at G(θ), and a "·" represents the standard dot product. Because the mapping, equation 5, admits a line attractor, J has one eigenvalue equal to 1 and all others less than 1. Denote the eigenvector with eigenvalue 1 as v and its adjoint v†: J · v = v and J^T · v† = v†. It is not hard to show that v = ∂_θ G(θ), up to a multiplicative constant. Since J has an eigenvalue equal to 1, to avoid the quadratic term in Eq. 8 approaching infinity as n → ∞ we require that

lim_{n→∞} J^n · δo_0 = 0.  (9)

¹The line attractor is, in fact, an idealization; for P units the attractors associated with equation 5 consist of P isolated points. However, for P large, the attractors are spaced closely enough that they may be considered a line.

This equation has an important consequence: it implies that, to linear order, lim_{n→∞} δo_n = 0 (see equation 8), which in turn implies that o_∞ = G(θ), which, finally, implies that θ̂ = θ. Consequently we can find the network estimator of θ_0, θ̂, by computing θ. We now turn to that task. It is straightforward to show that J^∞ = v v†. Combining this expression for J with equation 9, using equation 7 to express δo_0 in terms of o_0 and G(θ), and, finally, using equation 6 to express o_0 in terms of the initial mean activity, F(θ_0), and the noise, N, we find that

v†(θ) · [F(θ_0) - G(θ) + N] = 0.  (10)

Using θ_0 = θ - δθ and expanding F(θ_0) to first order in δθ then yields

δθ = v†(θ) · [N + F(θ) - G(θ)] / (v†(θ) · F′(θ)).  (11)

As long as v† is orthogonal to F(θ) - G(θ), ⟨δθ⟩ = 0 and the estimator is unbiased. This must be checked on a case by case basis, but for the circularly symmetric networks we considered, orthogonality is satisfied. We can now calculate the variance of the network estimate, ⟨(δθ)²⟩. Assuming v† · [F(θ) - G(θ)] = 0, equation 11 implies that

⟨(δθ)²⟩ = (v† · R · v†) / [v† · F′]²,  (12)

where a prime denotes a derivative with respect to θ and R is the covariance matrix of the noise, R = ⟨NN⟩. The network is equivalent to maximum likelihood when this variance is equal to the Cramer-Rao bound [3], ⟨(δθ)²⟩_CR. If the noise, N, is Gaussian with a covariance matrix independent of θ, this bound is equal to:

⟨(δθ)²⟩_CR = 1 / (F′ · R⁻¹ · F′).  (13)

For independent Gaussian noise of fixed variance, σ², and zero covariance, the variance of the network estimate, equation 12, becomes σ² / (|F′|² cos² μ), where μ is the angle between v† and F′. The Cramer-Rao bound, on the other hand, is equal to σ² / |F′|². These expressions differ only by cos² μ, which is 1 if F′ ∝ v†. In addition, it is close to 1 for networks that have identical input and output tuning curves, F(θ) = G(θ), and a Jacobian, J, that is nearly symmetric, so that v ≈ v† (recall that v = G′). If these last two conditions are satisfied, the network comes close to being a maximum likelihood estimator.

3.2 Application to Divisive Normalization Divisive normalization is a particular example of the general case considered above. For simplicity, in our simulations we chose the input and output tuning curves to be equal (F = G in the above notation), which led to a value of 0.87 for cos² μ (evaluated numerically). This predicted a variance 15% above the Cramer-Rao
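The variance comparison described above (equations 12 and 13) reduces, for fixed-variance independent Gaussian noise, to a single factor cos² μ. A numerical sketch, with an illustrative tuning-curve derivative F′ and a made-up adjoint vector v†:

```python
import numpy as np

# Sketch: network-estimator variance (eq. 12) vs. the Cramer-Rao bound
# (eq. 13) for R = sigma^2 * I. F' and v_dag below are illustrative.
P = 40
theta = 2 * np.pi * np.arange(P) / P
sigma2 = 0.5
Fp = -np.sin(theta - np.pi) * np.exp(np.cos(theta - np.pi))  # stand-in F'(theta0)
R = sigma2 * np.eye(P)

cr_bound = 1.0 / (Fp @ np.linalg.solve(R, Fp))               # equation 13

v_dag = Fp + 0.1 * np.cos(3 * theta)                         # adjoint near F'
net_var = (v_dag @ R @ v_dag) / (v_dag @ Fp) ** 2            # equation 12

# The ratio of bound to network variance is exactly cos^2(mu):
cos2_mu = (v_dag @ Fp) ** 2 / ((v_dag @ v_dag) * (Fp @ Fp))
```

With v† exactly proportional to F′ the factor is 1 and the network saturates the bound; the perturbation added above makes cos² μ slightly less than 1, mirroring the 0.87 value reported for the divisive-normalization network.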
bound for independent Gaussian noise with fixed variance, consistent with the 16% we obtained in our simulations. The network also handles other noise distributions fairly well, such as Gaussian noise with variance proportional to the mean, as illustrated by our simulations.

4 Conclusions We have recently shown that a subclass of line attractor networks can be used as maximum likelihood estimators [3]. This paper extends this conclusion to a much wider class of networks, namely, any network that admits a line (or, by straightforward extension of the above analysis, a higher dimensional) attractor. This is true in particular for networks using divisive normalization, a normalization which is thought to match quite closely the nonlinearity found in the primary visual cortex and MT. Although our analysis relies on the existence of an attractor, this is not a requirement for obtaining near optimal noise filtering. As we have seen, 2-3 iterations are enough to achieve asymptotic performance (except at contrasts barely above threshold). What matters most is that our network implements a sequence of low pass filtering to filter out the noise, followed by a squaring nonlinearity to compensate for the widening of the tuning curve due to the low pass filter, and a normalization to weaken contrast dependence. It is likely that this process would still clean up noise efficiently in the first 2-3 iterations even if activity decayed to zero eventually, that is to say, even if the hills of activity were not stable states. This would allow us to apply our approach to other types of networks, including those lacking circular symmetry and networks with continuously clamped inputs. To conclude, we propose that each cortical layer may read out the activity in the preceding layer in an optimal way thanks to the nonlinear pooling properties of divisive normalization, and, as a result, may behave like an ideal observer.
It is therefore possible that the ability to read out neuronal codes in the sensory cortices in an optimal way may not be confined to a few areas like the parietal or frontal cortex, but may instead be a general property of every cortical layer.

References
[1] D. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9:181-197, 1992.
[2] L. Itti, C. Koch, and J. Braun. A quantitative model for human spatial vision threshold on the basis of non-linear interactions among spatial filters. In R. Lippman, J. Moody, and D. Touretzky, editors, Advances in Neural Information Processing Systems, volume 11. Morgan-Kaufmann, San Mateo, 1998.
[3] A. Pouget, K. Zhang, S. Deneve, and P. Latham. Statistically efficient estimation using population coding. Neural Computation, 10:373-401, 1998.
[4] E. Simoncelli and D. Heeger. A model of neuronal responses in visual area MT. Vision Research, 38(5):743-761, 1998.
1998
Robot Docking using Mixtures of Gaussians

Matthew Williamson*  Roderick Murray-Smith†  Volker Hansen‡

Abstract

This paper applies the Mixture of Gaussians probabilistic model, combined with Expectation-Maximization optimization, to the task of summarizing three-dimensional range data for a mobile robot. This provides a flexible way of dealing with uncertainties in sensor information, and allows the introduction of prior knowledge into low-level perception modules. Problems with the basic approach were solved in several ways: the mixture of Gaussians was reparameterized to reflect the types of objects expected in the scene, and priors on model parameters were included in the optimization process. Both approaches force the optimization to find 'interesting' objects, given the sensor and object characteristics. A higher level classifier was used to interpret the results provided by the model, and to reject spurious solutions.

1 Introduction

This paper concerns an application of the Mixture of Gaussians (MoG) probabilistic model (Titterington et al., 1985) for a robot docking application. We use the Expectation-Maximization (EM) approach (Dempster et al., 1977) to fit Gaussian sub-models to a sparse 3d representation of the robot's environment, finding walls, boxes, etc. We have modified the MoG formulation in three ways to incorporate prior knowledge about the task and the sensor characteristics: the parameters of the Gaussians are recast to constrain how they fit the data, priors on these parameters are calculated and incorporated into the EM algorithm, and a higher level processing stage is included which interprets the fit of the Gaussians on the data, detects misclassifications, and provides prior information to guide the model-fitting. The robot is equipped with a LIDAR 3d laser range-finder (PIAP, 1995) which it uses to identify possible docking objects. The range-finder calculates the time of flight for a light pulse reflected off objects in the scene.
The particular LIDAR used is not very powerful, making objects with poor reflectance (e.g., dark, shiny, or surfaces not perpendicular to the laser beam) invisible. The scan pattern is also very sparse, especially in the vertical direction, as shown in the scan of a wall in Figure 1. However, if an object is detected, the range returned is accurate (±1-2 cm). When the range data is plotted in Cartesian space it forms a number of sparse clusters, leading naturally to the use of MoG clustering algorithms to make sense of the scene. While the Gaussian assumption is not an ideal model of the data, the generality of MoG, and its ease of implementation and analysis, motivated its use over a more specialized approach. The sparse nature of the data inspired the modifications to the MoG formulation described in this paper. Model-based object recognition from dense range images has been widely reported (see (Arman and Aggarwal, 1993) for a review), but is not relevant in this case given the sparseness of the data. Denser range images could be collected by combining multiple scans, but the poor visibility of the sensor hampers the application of these techniques. The advantage of the MoG technique is that the segmentation is "soft", and perception proceeds iteratively during learning. This is especially useful for mobile robots where evidence accumulates over time, and the allocation of attention is time- and state-dependent. The EM algorithm is useful since it is guaranteed to converge to a local maximum.

*Corresponding author: MIT AI Lab, Cambridge, MA, USA. matt@ai.mit.edu
†Dept. of Mathematical Modelling, Technical University of Denmark. rod@imm.dtu.dk
‡DaimlerChrysler, Alt-Moabit 96a, Berlin, Germany. hansen@dbag.bln.daimlerbenz.com
The following sections of the paper describe the re-parameterization of the Gaussians to model plane-like clusters, the formulation of the priors, and the higher level processing which interprets the clustered data in order to both move the robot and provide prior information to the model-fitting algorithm.

Figure 1: Plot showing data from a LIDAR scan of a wall, plotted in Cartesian space. The robot is located at the origin, with the y axis pointing forward, x to the right, and z up. The sparse scan pattern is visible, as well as the visibility constraint: the wall extends beyond where the scan ends, but is invisible to the LIDAR due to the orientation of the wall.

2 Mixture of Gaussians model

The range-finder returns a set of data, each of which is a position in Cartesian space $x_i = (x_i, y_i, z_i)$. The complete set of data $D = \{x_1, \ldots, x_N\}$ is modeled as being generated by a mixture density

$$P(x_n) = \sum_{i=1}^{M} P(x_n \mid i, \mu_i, \Sigma_i, \pi_i) P(i),$$

where we use a Gaussian as the sub-model, with mean $\mu_i$, variance $\Sigma_i$ and weight $\pi_i$, which makes the probability of a particular data point:

$$P(x_n \mid \mu, \Sigma, \pi) = \sum_{i=1}^{M} \frac{\pi_i}{(2\pi)^{3/2} |\Sigma_i|^{1/2}} \exp\left(-\frac{1}{2}(x_n - \mu_i)^T \Sigma_i^{-1}(x_n - \mu_i)\right).$$

Given a set of data $D$, the most likely set of parameters is found using the EM algorithm. This algorithm has a number of advantages, such as guaranteed convergence to a local maximum of the likelihood, and efficient computational performance. In 3D Cartesian space, the Gaussian sub-models form ellipsoids, where the size and orientation are determined by the covariance matrix $\Sigma_i$. In the general case, the EM algorithm can be used to learn all the parameters of $\Sigma_i$. The sparseness of the LIDAR data makes this parameterization inappropriate, as various odd collections of points could be clustered together. By changing the parameterization of $\Sigma_i$ to better model plane-like structures, the system can be improved.
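The mixture density above, and the posterior probability that a point came from a given component (the quantity EM iterates on), can be sketched with illustrative parameter values (the means, covariances, and weights below are toy assumptions, not fitted models from the paper):

```python
import numpy as np

def gauss(x, mu, Sigma):
    """3-D Gaussian density, matching the sub-model term in the mixture."""
    d = x - mu
    return np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / \
        np.sqrt((2 * np.pi) ** 3 * np.linalg.det(Sigma))

def responsibilities(X, mus, Sigmas, pis):
    """h[i, n] = posterior probability of component i given point x_n."""
    h = np.array([[pi_i * gauss(x, mu_i, S_i) for x in X]
                  for mu_i, S_i, pi_i in zip(mus, Sigmas, pis)])
    return h / h.sum(axis=0)   # normalize over components for each point

# Two well-separated toy components and three range points.
X = np.array([[0.1, 0.0, 0.2], [3.0, 3.1, 0.1], [2.9, 3.0, -0.1]])
mus = [np.zeros(3), np.array([3.0, 3.0, 0.0])]
Sigmas = [np.eye(3), np.eye(3)]
pis = [0.5, 0.5]
h = responsibilities(X, mus, Sigmas, pis)
```

Each column of `h` sums to one, so each range point distributes its evidence softly across the components, which is the "soft segmentation" property noted above.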
The reparameterization is most readily expressed in terms of the eigenvalues $\Lambda_i$ and eigenvectors $V_i$ of the covariance matrix, $\Sigma_i = V_i \Lambda_i V_i^{-1}$. The covariance matrix of a normal approximation to a plane-like vertical structure will have a large eigenvalue in the z direction, and in the x-y plane one large and one small eigenvalue. Since $\Sigma_i$ is symmetrical, the eigenvectors are orthogonal, $V_i^{-1} = V_i^T$, and $\Sigma_i$ can be written as a rotation of a diagonal matrix, where $\theta_i$ is the angle of orientation of the $i$th sub-model in the x-y plane, $a_i$ scales the cluster in the x and y directions, and $b_i$ scales in the z direction. The constant $\gamma$ controls the aspect ratio of the ellipsoid in the x-y plane.¹ The optimal values of these parameters $(\theta_i, a_i, b_i)$ are found using EM, first calculating the probability that data point $x_n$ is modeled by Gaussian $i$, for every data point $x_n$ and every Gaussian $i$:

$$h_{in} = \frac{\pi_i |\Sigma_i|^{-1/2} \exp\left(-\frac{1}{2}(x_n - \mu_i)^T \Sigma_i^{-1}(x_n - \mu_i)\right)}{\sum_{j=1}^{M} \pi_j |\Sigma_j|^{-1/2} \exp\left(-\frac{1}{2}(x_n - \mu_j)^T \Sigma_j^{-1}(x_n - \mu_j)\right)}.$$

This "responsibility" is then used as a weighting for the updates to the other parameters:²

$$\mu_i = \frac{\sum_n h_{in} x_n}{\sum_n h_{in}}, \qquad \tan 2\theta_i = \frac{2\sum_n h_{in}(x_{n1} - \mu_{i1})(x_{n2} - \mu_{i2})}{\sum_n h_{in}\left[(x_{n1} - \mu_{i1})^2 - (x_{n2} - \mu_{i2})^2\right]},$$

$$a_i^2 = \frac{\sum_n h_{in} \zeta_n}{2\sum_n h_{in}}, \qquad b_i^2 = \frac{\sum_n h_{in}(x_{n3} - \mu_{i3})^2}{\sum_n h_{in}},$$

where $x_{n1}$ is the first element of $x_n$ etc., and $\zeta_n = (\gamma - 1)\left((x_{n1} - \mu_{i1})\sin\theta_i + (x_{n2} - \mu_{i2})\cos\theta_i\right)^2 + (x_{n1} - \mu_{i1})^2 + (x_{n2} - \mu_{i2})^2$ corresponds to the projection of the data into the plane of the cluster. It is important to update the means $\mu_i$ first, and use the new values to update the other parameters. Figure 2 shows a typical model response on real LIDAR data.

2.1 Practicalities of application, and results

Starting values for the model parameters are important, as EM is only guaranteed to find a local optimum. The Gaussian mixture components are initialized with a large covariance, allowing them to pick up data and move to the correct positions.
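The plane-like covariance parameterization of this section can be sketched directly. The exact placement of the aspect-ratio constant is an assumption read off the garbled source: here $\Lambda_i = \mathrm{diag}(a_i^2,\, \gamma a_i^2,\, b_i^2)$, with the rotation confined to the x-y plane:

```python
import numpy as np

def plane_covariance(theta, a, b, gamma=0.01):
    """Sigma = V diag(a^2, gamma*a^2, b^2) V^T, with V a rotation by theta in
    the x-y plane: one long in-plane axis, one thin axis normal to the plane
    (gamma small), and an independent vertical scale b."""
    V = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    Lam = np.diag([a * a, gamma * a * a, b * b])
    return V @ Lam @ V.T

Sigma = plane_covariance(theta=0.3, a=2.0, b=1.5)
```

Only three free parameters (plus the fixed constant `gamma`) determine the full 3x3 covariance, which is the dimensionality reduction the text credits for faster convergence.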
We found that initializing the means $\mu_i$ to random data points, rather than randomly in the input space, tended to work better, especially given the sensor characteristics: if the LIDAR returned a range measurement, it was likely to be part of an interesting object.

¹By experimentation, a value of $\gamma$ of 0.01 was found to be reasonable for this application.
²Intuition for the $\theta_i$ update can be obtained by considering that $(x_{n1} - \mu_{i1})$ is the x component of the distance between $x_n$ and $\mu_i$, which is $|x_n - \mu_i|\cos\theta$, and similarly $(x_{n2} - \mu_{i2})$ is $|x_n - \mu_i|\sin\theta$, so $\tan 2\theta = \frac{\sin 2\theta}{\cos 2\theta} = \frac{2\sin\theta\cos\theta}{\cos^2\theta - \sin^2\theta} = \frac{2(x_{n1} - \mu_{i1})(x_{n2} - \mu_{i2})}{(x_{n1} - \mu_{i1})^2 - (x_{n2} - \mu_{i2})^2}$.

Figure 2: Example of clustering of the 3d data points. The left hand graph shows the view from above (the x-y plane), and the right graph shows the view from the side (the y-z plane), with the robot positioned at the origin. The scene shows a box at an oblique angle, with a wall behind. The extent of the plane-like Gaussian sub-models is illustrated using the ellipses, which are drawn at a probability of 0.5.

Despite the accuracy of measurement, there are still outlying data points, and it is impossible to fully segment the data into separate objects. One simple solution we found was to define a "junk" Gaussian. This is a sub-model placed in the center of the data, with a large covariance $\Sigma$. This Gaussian then becomes responsible for the outliers in the data (i.e. sparsely distributed data over the whole scene, none of which are associated with a specific object), allowing the object-modeling Gaussians to work undistracted. The use of EM with the $a, b, \theta$ parameterization found and represented plane-like data clusters better than models where all the elements of the covariance matrix were free to adapt.
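The initialization strategy described above (means seeded at data points, broad starting covariances, one "junk" component at the data center) can be sketched as follows. The component count, the choice of which data points seed the means, and the scale factors are all illustrative assumptions:

```python
import numpy as np

def init_mixture(data, M, big=4.0, junk_scale=25.0):
    """Seed M object components at data points with large covariances, plus
    one very broad junk component at the data center to absorb outliers."""
    means = [data[i].copy() for i in range(M)]   # seed at data points (here: the first M)
    covs = [big * np.eye(3) for _ in range(M)]   # large: free to migrate toward clusters
    means.append(data.mean(axis=0))              # junk component at the data center
    covs.append(junk_scale * np.eye(3))          # very broad, soaks up stray returns
    weights = np.full(M + 1, 1.0 / (M + 1))
    return means, covs, weights

data = np.array([[0.0, 1.0, 0.5], [2.0, 2.0, 0.0], [9.0, 9.0, 9.0]])
means, covs, weights = init_mixture(data, M=2)
```

Because the junk component's density is nearly flat over the scene, isolated outliers assign most of their responsibility to it, leaving the plane-like components undistracted.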
It also tended to converge faster, probably due to the reduced number of parameters in the covariance matrix (3 as opposed to 6). Although the algorithm is constrained to find planes, the parameterization was flexible enough to model other objects such as thin vertical lines (say from a table leg). The only problem with the algorithm was that it occasionally found poor local minimum solutions, such as illustrated in Figure 3. This is a common problem with least squares based clustering methods (Duda and Hart, 1973).

Figure 3: Two examples of 'undesirable' local minimum solutions found by EM. Both graphs show the top view of a scene of a box in front of a wall. The algorithm has incorrectly clustered the box with the left hand side of the wall.

3 Incorporating prior information

As well as reformulating the Gaussian models to suit our application, we also incorporated prior knowledge on the parameters of the sub-models. Sensor characteristics are often well-defined, and it makes sense to use these as early as possible in perception, rather than dealing with their side-effects at higher levels of reasoning. Here, e.g., the visibility constraint, by which only planes which are almost perpendicular to the LIDAR rays are visible, could be included by writing

$$P(x_n) = \sum_{i=1}^{M} P(x_n \mid i, \beta_i) P(i) P(\text{visible} \mid \beta_i),$$

the updates could be recalculated, and the feature immediately brought into the modeling process. In addition, prior knowledge about the locations and sizes of objects, maybe from other sensors, can be used to influence the modeling procedure. This allows the sensor to make better use of the sparse data.
For a model with parameters $\beta$ and data $D$, Bayes rule gives:

$$P(\beta \mid D) = \frac{P(\beta)}{P(D)} \prod_n P(x_n \mid \beta).$$

Normally the logarithm of this is taken, to give the log-likelihood, which in the case of mixtures of Gaussians is

$$L(D \mid \beta) = \log p(\{\mu_i, \pi_i, a_i, b_i, \theta_i\}) - \log p(D) + \sum_n \log \sum_i p(x_n \mid i, \mu_i, \pi_i, a_i, b_i, \theta_i).$$

To include the parameter priors in the EM algorithm, distributions for the different parameters are chosen, then the log-likelihood is differentiated as usual to find the updates to the parameters (McMichael, 1995). The calculations are simplified if the priors on all the parameters are assumed to be independent, $p(\{\mu_i, \pi_i, a_i, b_i, \theta_i\}) = \prod_i p(\mu_i) p(\pi_i) p(a_i) p(b_i) p(\theta_i)$. The exact form of the prior distributions varies for different parameters, both to capture different behavior and for ease of implementation. For the element means ($\mu_i$), a flat distribution over the data is used, specifying that the means should be among the data points. For the element weights, a multinomial Dirichlet prior can be used, $p(\pi \mid \alpha) \propto \prod_{i=1}^{M} \pi_i^\alpha$. When the hyperparameter $\alpha > 0$, the algorithm favours weights around $1/M$, and when $-1 < \alpha < 0$, weights close to 0 or 1.³ The expected value of $a_i$ (written as $\bar{a}_i$) can be encoded using a truncated inverse exponential prior (McMichael, 1995), setting $p(a_i \mid \bar{a}_i) = K \exp(-\bar{a}_i/(2a_i))$, where $K$ is a normalizing factor.⁴ The prior for $b_i$ has the same form. Priors for $\theta_i$ were not used, but could be useful to capture the visibility constraint. Given these distributions, the updates to the parameters become

$$\pi_i = \frac{\sum_n h_{in} + \alpha}{\sum_n \sum_j h_{jn} + M\alpha}, \qquad a_i^2 = \frac{\sum_n h_{in} \zeta_n + \bar{a}_i}{2\sum_n h_{in}}, \qquad b_i^2 = \frac{\sum_n h_{in}(x_{n3} - \mu_{i3})^2 + \bar{b}_i}{\sum_n h_{in}},$$

where $\zeta_n$ again denotes the projection of the data into the plane of the cluster. The update for $\mu_i$ is the same as before, the prior having no effect. The update for $a_i$ and $b_i$ forces them to be near $\bar{a}_i$ and $\bar{b}_i$, and the update for $\pi_i$ is affected by the hyperparameter $\alpha$. The priors on $a_i$ and $b_i$ had noticeable effects on the models obtained. Figure 4 shows the results from two fits, starting from identical initial conditions.
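The effect of the prior term on the $b_i^2$ update can be seen in a few lines. Note that the MAP update form below is reconstructed from the garbled source, and the responsibilities, coordinates, and prior value are toy assumptions:

```python
import numpy as np

def b2_updates(h, z, mu3, b_bar):
    """Maximum-likelihood vs. MAP update for one component's vertical scale
    b_i^2; the prior term b_bar pulls the MAP estimate upward."""
    ml = np.sum(h * (z - mu3) ** 2) / np.sum(h)
    map_ = (np.sum(h * (z - mu3) ** 2) + b_bar) / np.sum(h)
    return ml, map_

h = np.array([1.0, 1.0, 1.0, 1.0])     # responsibilities h_in for this component
z = np.array([0.1, -0.1, 0.2, -0.2])   # vertical coordinates x_n3
ml, map_ = b2_updates(h, z, mu3=0.0, b_bar=2.0)
```

With a large `b_bar` the MAP estimate is pulled well above the data-only value, which is exactly the behavior used in Figure 4 to prefer a component spanning the whole wall rather than just its top.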
By adjusting the size of the prior, the algorithm can be guided into finding different sized clusters. Large values of the prior are shown here to demonstrate its effect.

³In this paper we make little use of the $\alpha$ priors, but introducing separate $\alpha_i$'s for each object could be a useful next step for scenes with varying object sizes.
⁴To deal with the case when $a_i = 0$, the prior is truncated, setting $p(a_i \mid \bar{a}_i) = 0$ when $a_i < P_{crit}$.

Figure 4: Example of the action of the priors on $a_i$ and $b_i$. The photograph shows a visual image of the scene: a box in front of a wall, and the priors were chosen to prefer a distribution matching the wall. The two left hand graphs show the top and side view of the scene clustered without priors, while the two right hand graphs use priors on $a_i$ and $b_i$. The priors give a preference for large values of $a_i$ and $b_i$, so biasing the optimization to find a mixture component matching the whole wall as opposed to just the top of it.

4 Classification and diagnosis

Figure 5: Schematic of system (sensor data feeds the EM model-fitting algorithm; higher level processing interprets the fitted features, issues move commands for the robot, and feeds prior information back to the model fitting).

This section describes how higher-level processing can be used to not only interpret the clusters fitted by the EM algorithm, but also affect the model-fitting using prior information. The processes of model-fitting and analysis are thus coupled, and not sequential. The results of the model fitting are primarily processed to steer the robot. Once the cluster has been recognized as a box/wall/etc., the location and orientation are used to calculate a move command. To perform the object-recognition, we used a simple classifier on a feature vector extracted from the clustered data.
The labels used were specific to docking and commonly clustered objects (boxes, walls, thin vertical lines), but also included labels for clustering errors (like those shown in Figure 3). The features used were the values of the parameters $a_i, b_i$, giving the size of the clusters, but also measures of the visibility of the clusters, and the skewness of the within-cluster data. The classification used simple models of the probability distributions of the features $f_i$ given the objects $O_j$ (i.e. $P(f_i \mid O_j)$), using a set of training data. In addition to moving the robot, the classifier can modify the behavior of the model fitting algorithm. If a poor clustering solution is found, EM can be re-run with slightly different initial conditions. If the probable locations or sizes of objects are known from previous scans, or indeed from other sensors, then these can constrain the clustering through priors, or provide initial means.

5 Summary

This paper shows that the Mixture of Gaussians architecture combined with EM optimization and the use of parameter priors can be used to segment and analyze real data from the 3D range-finder of a mobile robot. The approach was successfully used to guide a mobile robot towards a docking object, using only its range-finder for perception. For the learning community this provides more than an example of the application of a probabilistic model to a real task. We have shown how the usual Mixture of Gaussians model can be parameterized to include expectations about the environment in a way which can be readily extended. We have included prior knowledge at three different levels:

1. The use of problem-specific parameterization of the covariance matrix to find expected patterns (e.g. planes at particular angles).
2. The use of problem-specific parameter priors to automatically rule out unlikely objects at the lowest level of perception.
3.
The results of the clustering process were post-processed by higher-level classification algorithms which interpreted the parameters of the mixture components, diagnosed typical misclassifications, provided new priors for future perception, and gave the robot control system new targets.

It is expected that the basic approach can be fruitfully applied to other sensors, to problems which track dynamically changing scenes, or to problems which require relationships between objects in the scene to be accounted for and interpreted. A problem common to all modeling approaches is that it is not trivial to determine the number and types of clusters needed to represent a given scene. Recent work with Markov-Chain Monte-Carlo approaches has been successfully applied to mixtures of Gaussians (Richardson and Green, 1997), allowing a Bayesian solution to this problem, which could provide control systems with even richer probabilistic information (a series of models conditioned on the number of clusters).

Acknowledgements

All authors were employed by Daimler-Benz AG during stages of the work. R. Murray-Smith gratefully acknowledges the support of Marie Curie TMR grant FMBICT961369.

References

Arman, F. and Aggarwal, J. K. (1993). Model-based object recognition in dense-range images: a review. ACM Computing Surveys, 25(1), 5-43.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society Series B, 39, 1-38.
Duda, R. O. and Hart, P. E. (1973). Pattern Classification and Scene Analysis. New York, Wiley.
McMichael, D. W. (1995). Bayesian growing and pruning strategies for MAP-optimal estimation of Gaussian mixture models. In 4th IEE International Conf. on Artificial Neural Networks, pp. 364-368.
PIAP (1995). PIAP impact report on TRC lidar performance. Technical Report 1, Industrial Research Institute for Automation and Measurements, 02-486 Warszawa, Al. Jerozolimskie 202, Poland.
Richardson, S.
and Green, P. J. (1997). On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society B, 59(4), 731-792.
Titterington, D., Smith, A., and Makov, U. (1985). Statistical Analysis of Finite Mixture Distributions. Chichester, John Wiley & Sons.
1998
Semi-Supervised Support Vector Machines

Kristin P. Bennett, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180, bennek@rpi.edu
Ayhan Demiriz, Department of Decision Sciences and Engineering Systems, Rensselaer Polytechnic Institute, Troy, NY 12180, demira@rpi.edu

Abstract

We introduce a semi-supervised support vector machine (S3VM) method. Given a training set of labeled data and a working set of unlabeled data, S3VM constructs a support vector machine using both the training and working sets. We use S3VM to solve the transduction problem using overall risk minimization (ORM) posed by Vapnik. The transduction problem is to estimate the value of a classification function at the given points in the working set. This contrasts with the standard inductive learning problem of estimating the classification function at all possible values and then using the fixed function to deduce the classes of the working set data. We propose a general S3VM model that minimizes both the misclassification error and the function capacity based on all the available data. We show how the S3VM model for 1-norm linear support vector machines can be converted to a mixed-integer program and then solved exactly using integer programming. Results of S3VM and the standard 1-norm support vector machine approach are compared on ten data sets. Our computational results support the statistical learning theory results showing that incorporating working data improves generalization when insufficient training information is available. In every case, S3VM either improved or showed no significant difference in generalization compared to the traditional approach.

1 INTRODUCTION

In this work we propose a method for semi-supervised support vector machines (S3VM). S3VM are constructed using a mixture of labeled data (the training set) and unlabeled data (the working set).
The objective is to assign class labels to the working set such that the "best" support vector machine (SVM) is constructed. If the working set is empty the method becomes the standard SVM approach to classification [20, 9, 8]. If the training set is empty, then the method becomes a form of unsupervised learning. Semi-supervised learning occurs when both training and working sets are nonempty. Semi-supervised learning for problems with small training sets and large working sets is a form of semi-supervised clustering. There are successful semi-supervised algorithms for k-means and fuzzy c-means clustering [4, 18]. Clustering is a potential application for S3VM as well. When the training set is large relative to the working set, S3VM can be viewed as a method for solving the transduction problem according to the principle of overall risk minimization (ORM) posed by Vapnik at the NIPS 1998 SVM Workshop and in [19, Chapter 10]. S3VM for ORM is the focus of this paper. In classification, the transduction problem is to estimate the class of each given point in the unlabeled working set. The usual support vector machine (SVM) approach estimates the entire classification function using the principle of structural risk minimization (SRM). In transduction, one estimates the classification function at points within the working set using information from both the training and working set data. Theoretically, if there is adequate training data to estimate the function satisfactorily, then SRM will be sufficient. We would expect transduction to yield no significant improvement over SRM alone. If, however, there is inadequate training data, then ORM may improve generalization on the working set. Intuitively, we would expect ORM to yield improvements when the training sets are small or when there is a significant deviation between the training and working set subsamples of the total population. Indeed, the theoretical results in [19] support these hypotheses.
In Section 2, we briefly review the standard SVM model for structural risk minimization. According to the principles of structural risk minimization, SVM minimize both the empirical misclassification rate and the capacity of the classification function [19, 20] using the training data. The capacity of the function is determined by the margin of separation between the two classes based on the training set. ORM also minimizes both the empirical misclassification rate and the function capacity. But the capacity of the function is determined using both the training and working sets. In Section 3, we show how SVM can be extended to the semi-supervised case and how mixed integer programming can be used practically to solve the resulting problem. We compare support vector machines constructed by structural risk minimization and overall risk minimization computationally on ten problems in Section 4. Our computational results support past theoretical results that improved generalization can be obtained by incorporating working set information during training when there is a deviation between the working set and training set sample distributions. In three of ten real-world problems the semi-supervised approach, S3VM, achieved a significant increase in generalization. In no case did S3VM ever obtain a significant decrease in generalization. We conclude with a discussion of more general S3VM algorithms.

Figure 1: Optimal plane maximizes margin (the supporting planes $w \cdot x = b + 1$ and $w \cdot x = b - 1$ bound the two classes, with the separating plane $w \cdot x = b$ between them).

2 SVM using Structural Risk Minimization

The basic SRM task is to estimate a classification function $f : \mathbb{R}^N \to \{\pm 1\}$ using input-output training data from two classes

$$(x_1, y_1), \ldots, (x_\ell, y_\ell) \in \mathbb{R}^N \times \{\pm 1\}. \qquad (1)$$

The function $f$ should correctly classify unseen examples $(x, y)$, i.e. $f(x) = y$ if $(x, y)$ is generated from the same underlying probability distribution as the training data.
In this work we limit discussion to linear classification functions. We will discuss extensions to the nonlinear case in Section 5. If the points are linearly separable, then there exist an n-vector $w$ and scalar $b$ such that

$$w \cdot x_i - b \ge 1 \text{ if } y_i = 1, \quad \text{and} \quad w \cdot x_i - b \le -1 \text{ if } y_i = -1, \quad i = 1, \ldots, \ell, \qquad (2)$$

or equivalently

$$y_i [w \cdot x_i - b] \ge 1, \quad i = 1, \ldots, \ell. \qquad (3)$$

The "optimal" separating plane, $w \cdot x = b$, is the one which is furthest from the closest points in the two classes. Geometrically this is equivalent to maximizing the separation margin or distance between the two parallel planes $w \cdot x = b + 1$ and $w \cdot x = b - 1$ (see Figure 1). The "margin of separation" in Euclidean distance is $2/\|w\|_2$, where $\|w\|_2^2 = \sum_{j=1}^{n} w_j^2$ is the squared 2-norm. To maximize the margin, we minimize $\|w\|_2^2 / 2$ subject to the constraints (3). According to structural risk minimization, for a fixed empirical misclassification rate, larger margins should lead to better generalization and prevent overfitting in high-dimensional attribute spaces. The classifier is called a support vector machine because the solution depends only on the points (called support vectors) located on the two supporting planes $w \cdot x = b - 1$ and $w \cdot x = b + 1$. In general the classes will not be separable, so the generalized optimal plane (GOP) problem (4) [9, 20] is used. A slack term $\eta_i$ is added for each point such that if the point is misclassified, $\eta_i \ge 1$. The final GOP formulation is:

$$\min_{w,b,\eta} \; C \sum_{i=1}^{\ell} \eta_i + \frac{1}{2}\|w\|_2^2 \quad \text{s.t.} \quad y_i[w \cdot x_i - b] + \eta_i \ge 1, \;\; \eta_i \ge 0, \;\; i = 1, \ldots, \ell \qquad (4)$$

where $C > 0$ is a fixed penalty parameter. The capacity control provided by the margin maximization is imperative to achieve good generalization [21, 19].
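The GOP objective (4) can be minimized on toy data with a plain subgradient loop, folding the constraints into a hinge loss. This is a sketch only: the paper solves (4) exactly as a quadratic program, and the step size, iteration count, and data below are assumptions:

```python
import numpy as np

def train_gop(X, y, C=1.0, lr=0.01, iters=500):
    """Subgradient descent on 0.5*||w||^2 + C * sum_i max(0, 1 - y_i(w.x_i - b))."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        margins = y * (X @ w - b)
        viol = margins < 1                # points whose slack eta_i would be positive
        # d/dw: w - C * sum over violators of y_i * x_i
        w -= lr * (w - C * (y[viol][:, None] * X[viol]).sum(axis=0))
        # d/db: +C * sum over violators of y_i
        b -= lr * C * y[viol].sum()
    return w, b

X = np.array([[2.0, 2.0], [3.0, 1.0], [2.0, 3.0],
              [-2.0, -2.0], [-3.0, -1.0], [-2.0, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_gop(X, y)
preds = np.sign(X @ w - b)
```

On this separable toy set the loop recovers a plane satisfying (3) for all training points, with the 0.5*||w||^2 term keeping the margin wide.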
The Robust Linear Programming (RLP) approach to SVM is identical to GOP except the margin term is changed from the 2-norm $\|w\|_2$ to the 1-norm, $\|w\|_1 = \sum_{j=1}^{n} |w_j|$. The problem becomes the following robust linear program (RLP) [2, 7, 1]:

$$\min_{w,b,s,\eta} \; C \sum_{i=1}^{\ell} \eta_i + \sum_{j=1}^{n} s_j \quad \text{s.t.} \quad y_i[w \cdot x_i - b] + \eta_i \ge 1, \;\; \eta_i \ge 0, \;\; i = 1, \ldots, \ell; \quad -s_j \le w_j \le s_j, \;\; j = 1, \ldots, n. \qquad (5)$$

The RLP formulation is a useful variation of SVM with some nice characteristics. The 1-norm weight reduction still provides capacity control. The results in [13] can be used to show that minimizing $\|w\|_1$ corresponds to maximizing the separation margin using the infinity norm. Statistical learning theory could potentially be extended to incorporate alternative norms. One major benefit of RLP over GOP is dimensionality reduction. Both RLP and GOP minimize the magnitude of the weights $w$. But RLP forces more of the weights to be 0 due to the properties of the 1-norm. Another benefit of RLP over GOP is that it can be solved using linear programming instead of quadratic programming. Both approaches can be extended to handle nonlinear discrimination using kernel functions [8, 12]. Empirical comparisons of the approaches have not found any significant difference in generalization between the formulations [5, 7, 3, 12].

3 Semi-supervised support vector machines

To formulate the S3VM, we start with either SVM formulation, (4) or (5), and then add two constraints for each point in the working set. One constraint calculates the misclassification error as if the point were in class 1 and the other constraint calculates the misclassification error as if the point were in class -1. The objective function calculates the minimum of the two possible misclassification errors. The final class of the points corresponds to the one that results in the smallest error. Specifically, we define the semi-supervised support vector machine problem (S3VM) as:
$$\min_{w,b,\eta,\xi,z} \; C\left[\sum_{i=1}^{\ell} \eta_i + \sum_{j=\ell+1}^{\ell+k} \min(\xi_j, z_j)\right] + \|w\|$$
$$\text{subject to} \quad y_i(w \cdot x_i - b) + \eta_i \ge 1, \;\; \eta_i \ge 0, \;\; i = 1, \ldots, \ell$$
$$w \cdot x_j - b + \xi_j \ge 1, \;\; \xi_j \ge 0, \qquad -(w \cdot x_j - b) + z_j \ge 1, \;\; z_j \ge 0, \qquad j = \ell+1, \ldots, \ell+k \qquad (6)$$

where $C > 0$ is a fixed misclassification penalty. Integer programming can be used to solve this problem. The basic idea is to add a 0 or 1 decision variable, $d_j$, for each point $x_j$ in the working set. This variable indicates the class of the point. If $d_j = 1$ then the point is in class 1 and if $d_j = 0$ then the point is in class -1. This results in the following mixed integer program:

$$\min_{w,b,\eta,\xi,z,d} \; C\left[\sum_{i=1}^{\ell} \eta_i + \sum_{j=\ell+1}^{\ell+k} (\xi_j + z_j)\right] + \|w\|$$
$$\text{subject to} \quad y_i(w \cdot x_i - b) + \eta_i \ge 1, \;\; \eta_i \ge 0, \;\; i = 1, \ldots, \ell$$
$$w \cdot x_j - b + \xi_j + M(1 - d_j) \ge 1, \;\; \xi_j \ge 0, \;\; j = \ell+1, \ldots, \ell+k$$
$$-(w \cdot x_j - b) + z_j + M d_j \ge 1, \;\; z_j \ge 0, \;\; d_j \in \{0, 1\} \qquad (7)$$

The constant $M > 0$ is chosen sufficiently large such that if $d_j = 0$ then $\xi_j = 0$ is feasible for any optimal $w$ and $b$. Likewise if $d_j = 1$ then $z_j = 0$. A globally optimal solution to this problem can be found using CPLEX or other commercial mixed integer programming codes [10] provided computer resources are sufficient for the problem size. Using the mathematical programming modeling language AMPL [11], we were able to express the problem in thirty lines of code plus a data file and solve it using CPLEX.

Figure 2: Left = solution found by RLP; Right = solution found by S3VM.

4 S3VM and Overall Risk Minimization

An integer S3VM can be used to solve the Overall Risk Minimization problem. Consider the simple problem given in Figure 20 of [19]. Using RLP alone on the training data results in the separation shown in Figure 1. Figure 2 illustrates what happens when working set data is added. The training set points are shown as transparent triangles and hexagons.
The working set points are shown as filled circles. The left picture in Figure 2 shows the solution found by RLP. Note that when the working set points are added, the resulting separation has a very small margin. The right picture shows the S3VM solution constructed using the unlabeled working set. Note that a much larger and clearer separation margin is found. These computational solutions are identical to those presented in [19]. We also tested S3VM on ten real-world data sets (eight from [14] and the bright and dim galaxy sets from [15]). There have been many algorithms applied successfully to these problems without incorporating working set information. Thus it was not clear a priori that S3VM would improve generalization on these data sets. For the data sets where no improvement is possible, we would like transduction using ORM not to degrade the performance of the induction via SRM approach. For each data set, we performed 10-fold cross-validation. For the three starred data sets, our integer programming solver failed due to excessive branching required within the CPLEX algorithm. On those data sets we randomly extracted 50 point working sets for each trial. The same C parameter was used for each data set in both the RLP and S3VM problems.¹ In all ten problems, S3VM never performed significantly worse than RLP. In three of the problems, S3VM performed significantly better. So ORM did not hurt generalization and in some cases it helped significantly. We would expect this based on ORM theory. The generalization bounds for ORM depend on the difference between the training and working sets. If there is little difference, we would not expect any improvement using ORM.

¹The value of C was computed from λ = .001, the size of the training set ℓ, and the size of the working set k. This formula was chosen because it worked well empirically for both methods.
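For tiny working sets, the global optimum that the mixed-integer program (7) computes can be mimicked by brute-force enumeration: fit a soft-margin machine for every candidate labeling of the working set and keep the labeling with the lowest objective. The subgradient fit, the use of $\frac{1}{2}\|w\|_2^2$ as the capacity term, and the toy data below are assumptions made to keep the sketch dependency-free (the paper solves one exact MIP in CPLEX instead):

```python
import itertools
import numpy as np

def fit_objective(X, y, C=1.0, lr=0.01, iters=500):
    """Fit a soft-margin machine by subgradient descent; return the objective
    C * sum(hinge) + 0.5 * ||w||^2 at the fitted solution."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        viol = y * (X @ w - b) < 1
        w -= lr * (w - C * (y[viol][:, None] * X[viol]).sum(axis=0))
        b -= lr * C * y[viol].sum()
    hinge = np.maximum(0.0, 1 - y * (X @ w - b)).sum()
    return C * hinge + 0.5 * w @ w

def s3vm_enumerate(Xt, yt, Xw, C=1.0):
    """Try every labeling d of the working set; keep the one with min objective."""
    best = None
    for d in itertools.product([1, -1], repeat=len(Xw)):
        obj = fit_objective(np.vstack([Xt, Xw]), np.concatenate([yt, d]), C)
        if best is None or obj < best[0]:
            best = (obj, d)
    return best[1]

Xt = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-3.0, -1.0]])
yt = np.array([1, 1, -1, -1])
Xw = np.array([[2.5, 2.5], [-2.5, -2.5]])   # unlabeled working set
labels = s3vm_enumerate(Xt, yt, Xw)
```

The labeling that places each working point with its nearby cluster keeps the data separable, so its objective is far smaller than any labeling that plants a point deep inside the opposite class. Enumeration costs 2^k fits, which is exactly why the paper uses branch-and-bound integer programming instead.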
Semi-Supervised Support Vector Machines 373

Data Set             Dim  Points  CV-size    RLP   S3VM  p-value
Bright                14    2462      50*  0.020  0.018    0.343
Cancer                 9     699       70  0.036  0.034    0.591
Cancer (Prognostic)   30     569       57  0.035  0.033    0.678
Dim                   14    4192      50*  0.064  0.054    0.096
Heart                 13     297       30  0.173  0.160    0.104
Housing               13     506       51  0.155  0.151    0.590
Ionosphere            34     351       35  0.109  0.106    0.590
Musk                 166     476       48  0.173  0.173    0.999
Pima                   8     769      50*  0.220  0.222    0.678
Sonar                 60     208       21  0.281  0.219    0.045

5 Conclusion

We introduced a semi-supervised SVM model. S3VM constructs a support vector machine using all the available data from both the training and working sets. We show how the S3VM model for 1-norm linear support vector machines can be converted to a mixed-integer program. One great advantage of solving S3VM using integer programming is that the globally optimal solution can be found using packages such as CPLEX. Using the integer S3VM we performed an empirical investigation of transduction using overall risk minimization, a problem posed by Vapnik. Our results support the statistical learning theory results that incorporating working data improves generalization when insufficient training information is available. In every case, S3VM either improved or showed no significant difference in generalization compared to the usual structural risk minimization approach. Our empirical results, combined with the theoretical results in [19], indicate that transduction via ORM constitutes a very promising research direction. Many research questions remain. Since transduction via overall risk minimization is not always better than basic induction via structural risk minimization, can we identify a priori the problems likely to benefit from transduction? The best methods of constructing S3VM for the 2-norm case and for nonlinear functions are still open questions. Kernel-based methods can be incorporated into S3VM. The practical scalability of the approach needs to be explored.
We were able to solve moderately sized problems with on the order of 50 working set points using a general-purpose integer programming code. The recent success of special-purpose algorithms for support vector machines [16, 17, 6] indicates that such approaches may produce improvements for S3VM as well.

References

[1] K. P. Bennett and E. J. Bredensteiner. Geometry in learning. In C. Gorini, E. Hart, W. Meyer, and T. Phillips, editors, Geometry at Work, Washington, D.C., 1997. Mathematical Association of America. To appear.

[2] K. P. Bennett and O. L. Mangasarian. Robust linear programming discrimination of two linearly inseparable sets. Optimization Methods and Software, 1:23-34, 1992.

[3] K. P. Bennett, D. H. Wu, and L. Auslender. On support vector decision trees for database marketing. R.P.I. Math Report No. 98-100, Rensselaer Polytechnic Institute, Troy, NY, 1998.

[4] A. M. Bensaid, L. O. Hall, J. C. Bezdek, and L. P. Clarke. Partially supervised clustering for image segmentation. Pattern Recognition, 29(5):859-871, 1996.

[5] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. Mathematical Programming Technical Report 98-03, University of Wisconsin-Madison, 1998. To appear in ICML-98.

[6] P. S. Bradley and O. L. Mangasarian. Massive data discrimination via linear support vector machines. Mathematical Programming Technical Report 98-05, University of Wisconsin-Madison, 1998. Submitted for publication.

[7] E. J. Bredensteiner and K. P. Bennett. Feature minimization within decision trees. Computational Optimization and Applications, 10:110-126, 1997.

[8] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 1998. To appear.

[9] C. Cortes and V. N. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995.

[10] CPLEX Optimization Incorporated, Incline Village, Nevada.
Using the CPLEX Callable Library, 1994.

[11] R. Fourer, D. Gay, and B. Kernighan. AMPL: A Modeling Language for Mathematical Programming. Boyd & Fraser, Danvers, Massachusetts, 1993.

[12] T.-T. Friess and R. Harrison. Linear programming support vector machines for pattern classification and regression estimation and the SR algorithm. Research Report 706, University of Sheffield, 1998.

[13] O. L. Mangasarian. Parsimonious least norm approximation. Mathematical Programming Technical Report 97-03, University of Wisconsin-Madison, 1997. To appear in Computational Optimization and Applications.

[14] P. M. Murphy and D. W. Aha. UCI repository of machine learning databases. Department of Information and Computer Science, University of California, Irvine, California, 1992.

[15] S. Odewahn, E. Stockwell, R. Pennington, R. Humphreys, and W. Zumach. Automated star/galaxy discrimination with neural networks. Astronomical Journal, 103(1):318-331, 1992.

[16] E. Osuna, R. Freund, and F. Girosi. Support vector machines: Training and applications. AI Memo 1602, Massachusetts Institute of Technology, 1997.

[17] J. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Technical Report 98-14, Microsoft Research, 1998.

[18] M. Vaidyanathan, R. P. Velthuizen, P. Venugopal, L. P. Clarke, and L. O. Hall. Tumor volume measurements using supervised and semi-supervised MRI segmentation. In Artificial Neural Networks in Engineering Conference (ANNIE '94), 1994.

[19] V. N. Vapnik. Estimation of Dependencies Based on Empirical Data. Springer, New York, 1982. English translation; Russian version 1979.

[20] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.

[21] V. N. Vapnik and A. Ja. Chervonenkis. Theory of Pattern Recognition. Nauka, Moscow, 1974. In Russian.
1998
112
1,467
A Phase Space Approach to Minimax Entropy Learning and the Minutemax Approximations

James M. Coughlan
Smith-Kettlewell Inst.
San Francisco, CA 94115

A. L. Yuille
Smith-Kettlewell Inst.
San Francisco, CA 94115

Abstract

There has been much recent work on measuring image statistics and on learning probability distributions on images. We observe that the mapping from images to statistics is many-to-one and show it can be quantified by a phase space factor. This phase space approach throws light on the Minimax Entropy technique for learning Gibbs distributions on images with potentials derived from image statistics and elucidates the ambiguities that are inherent to determining the potentials. In addition, it shows that if the phase factor can be approximated by an analytic distribution then this approximation yields a swift "Minutemax" algorithm that vastly reduces the computation time for Minimax Entropy learning. An illustration of this concept, using a Gaussian to approximate the phase factor, gives a good approximation to the results of Zhu and Mumford (1997) in just seconds of CPU time. The phase space approach also gives insight into the multi-scale potentials found by Zhu and Mumford (1997) and suggests that the forms of the potentials are influenced greatly by phase space considerations. Finally, we prove that probability distributions learned in feature space alone are equivalent to Minimax Entropy learning with a multinomial approximation of the phase factor.

1 Introduction

Bayesian probability theory gives a powerful framework for visual perception (Knill and Richards 1996). This approach, however, requires specifying prior probabilities and likelihood functions. Learning these probabilities is difficult because it requires estimating distributions on random variables of very high dimensions (for example, images with 200 x 200 pixels, or shape curves of length 400 pixels). An important
recent advance is the Minimax Entropy Learning theory. This theory was developed by Zhu, Wu and Mumford (1997 and 1998) and enables them to learn probability distributions for the intensity properties and shapes of natural stimuli and clutter. In addition, when applied to real world images it has an interesting link to the work on natural image statistics (Field 1987), (Ruderman and Bialek 1994), (Olshausen and Field 1996). We wish to simplify Minimax and make the learning easier, faster and more transparent.

In this paper we present a phase space approach to Minimax Entropy learning. This approach is based on the observation that the mapping from images to statistics is many-to-one and can be quantified by a phase space factor. If this phase space factor can be approximated by an analytic function then we obtain approximate "Minutemax" algorithms which greatly speed up the learning process. In one version of this approximation, the unknown parameters of the distribution to be learned are related linearly to the empirical statistics of the image data set, and may be solved for in seconds or less. Independent of this approximation, the Minutemax framework also illuminates an important combinatoric aspect of Minimax, namely the fact that many different images can give rise to the same image statistics. This "phase space" factor explains the ambiguities inherent in learning the parameters of the unknown distribution, and motivates the approximation that reduces the problem to linear algebra. Finally, we prove that probability distributions learned in feature space alone are equivalent to Minimax Entropy learning with a multinomial approximation of the phase factor.

2 A Phase Space Perspective on Minimax

We wish to learn a distribution P(I) on images, where I denotes the set of pixel values I(x, y) on a finite image lattice, and each value I(x, y) is quantized to a finite set of intensity values.
(In fact, this approach is general and applies to any patterns, not just images.) We define a set of image statistics φ_1(I), φ_2(I), ..., φ_s(I), which we concatenate as a single vector function φ(I). If these statistics have empirical mean d = <φ(I)> on a dataset of images (we assume a large enough dataset for the law of large numbers to apply; see Zhu and Mumford (1997) for an analysis of the errors inherent in this assumption) then the maximum entropy distribution P_M(I) with these empirical statistics is an exponential (Gibbs) distribution of the form

P_M(I) = e^{λ·φ(I)} / Z(λ),  where the potential λ is set so that <φ(I)>_M = d.    (1)

In summary, the goal of Minimax Learning is to find an appropriate set of image filters for the domain of interest (i.e. maximally informative filters) and to estimate λ given d. Extensive computation is required to determine λ; the phase space approach to Minimax Learning motivates approximations that make λ easy to estimate.

2.1 Image Histogram Statistics

The statistics we consider (following Zhu, Wu and Mumford (1997, 1998)) are defined as histograms of the responses of one or more filters applied across an entire image. Consider a single filter f (linear or non-linear) with response f_x(I) centered at position x in the image. Without loss of generality, we will assume the filter has quantized integer responses from 1 through f_max. For notational convenience we transform the filter response f_x(I) to a binary representation b_x(I), defined as a column vector with f_max components: b_{x,z}(I) = δ_{z, f_x(I)}, where the index z ranges from 1 through f_max. This vector is composed of all zeros except for the entry corresponding to the filter response, which is set to one. The image statistics vector is then a histogram vector defined as the average of the b_x(I)'s over all N pixels: φ(I) = (1/N) Σ_x b_x(I). The entries in φ(I) then sum to 1.
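A toy numeric check of this definition (our sketch, not code from the paper): the statistic φ(I) is simply the normalized histogram of the quantized filter responses.

```python
def histogram_statistic(responses, f_max):
    """Average of the binary vectors b_x(I) over all N pixel positions:
    the normalized histogram of quantized filter responses in 1..f_max."""
    n = len(responses)
    phi = [0.0] * f_max
    for r in responses:          # r is a quantized response in 1..f_max
        phi[r - 1] += 1.0 / n
    return phi

# Five filter responses quantized to the range 1..3.
phi = histogram_statistic([1, 1, 2, 3, 3], f_max=3)
print(phi)                       # approx [0.4, 0.2, 0.4]; entries sum to 1
```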
(We can generalize to the case of multiple filters f^(1), f^(2), ..., f^(m), as detailed in Coughlan and Yuille (1999).)

2.2 The Phase Factor

The original Minimax distribution P_M(I) induces a distribution P_M(φ) on the statistics themselves, without reference to a particular image:

P_M(φ) = g(φ) e^{λ·φ} / Z(λ),    (2)

where g(φ) is a combinatoric phase space factor, with a corresponding normalized combinatoric distribution ĝ(φ), defined by:

g(φ_0) = Σ_I δ_{φ_0, φ(I)},  and  ĝ(φ) = g(φ)/Q^N,    (3)

where the phase space factor g(φ) counts the number of images I having statistics φ. N is the number of pixels and Q is the number of pixel intensity levels, i.e. Q^N is the total number of possible images I. It should be emphasized that the phase factor depends only on the set of filters chosen and is independent of the true distribution P(I). Thus the phase factor can be computed offline, independent of the image data set. In this paper we will discuss two useful approximations to g(φ): a Gaussian approximation, which yields the swift approximation for learning, and a multinomial approximation, which establishes a connection between Minimax and standard feature learning.

2.3 The Non-Uniqueness of the Potential λ

Given a set of filters and their empirical mean statistics d, is the potential λ uniquely specified? Clearly, any solution for λ may be shifted by an additive constant (λ_i → λ'_i = λ_i + k for all i), yielding a different normalization constant Z(λ') but preserving P_M(I). In this section we show that other, non-trivial ambiguities in λ which preserve P_M(I) can exist, stemming from the fact that some values of φ are inconsistent with every possible image I and hence never arise (in any possible image dataset). These "intrinsic" ambiguities are inherent to Minimax and are independent of the true distribution P(I). We will also discuss a second type of possible ambiguity which depends on the characteristics of the image dataset used for learning.
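Before examining these ambiguities, the counting definition of g(φ) in (3) can be verified by brute force on a toy problem (our sketch; for simplicity we take the quantized filter response at a pixel to be the pixel value itself).

```python
from itertools import product
from collections import Counter

# Enumerate all Q^N images on a tiny 1-D lattice and count how many
# images share each statistics vector phi -- the phase factor g(phi).
N, Q = 4, 3
g = Counter()
for image in product(range(1, Q + 1), repeat=N):
    phi = tuple(image.count(v) / N for v in range(1, Q + 1))  # histogram phi(I)
    g[phi] += 1                                               # g counts images per phi

assert sum(g.values()) == Q ** N   # every image counted exactly once
print(max(g.values()))             # many images map to the same statistic
```

Even on this 4-pixel lattice the map is strongly many-to-one, which is exactly the degeneracy that the phase factor quantifies.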
We can uncover the intrinsic ambiguities in λ by examining the covariance C of g(φ). (See Coughlan and Yuille (1999) for details on calculating the mean c and covariance C for any set of linear filters or non-linear filters that are scalar functions of linear filters.) Defining the set of all possible statistics values Φ = {φ : g(φ) ≠ 0}, the null space of C reflects degeneracy (i.e. flatness) in Φ. The following theorem, proved in Coughlan and Yuille (1999), shows that λ is determined only up to a hyperplane whose dimension is the nullity of C.

Theorem 1 (Intrinsic Ambiguity in λ). Cū = 0 if and only if e^{(λ+ū)·φ(I)}/Z(λ+ū) and e^{λ·φ(I)}/Z(λ) are identical distributions on I.

In addition to this intrinsic ambiguity in λ, it is also possible that different values of λ may yield distinct distributions which nevertheless have the same mean statistics <φ> on the image dataset. (As shown in Coughlan and Yuille (1999), there is a convex set of distributions, of which the true distribution P(I) is a member, which share the same mean statistics <φ>.) This second kind of ambiguity stems from the fact that the mean statistics convey only a fraction of the information that is contained in the true distribution P(I). To resolve this second ambiguity it is necessary to extract more information from the image data set. The simplest way to achieve this is to use a larger (or more informative) set of filters to lower the entropy of P_M(I) (this topic is discussed in more detail in Zhu, Wu and Mumford (1997, 1998), Coughlan and Yuille (1999)). Alternatively, one can extend Minimax to include second-order statistics, i.e. the covariance of φ in addition to its mean d. This is an important topic for future research.

3 The Minutemax Approximations

We now illustrate the phase space approach by showing that suitable approximations of the phase space factor g(φ) make it easy to estimate the potential λ given the empirical mean d.
The resulting fast approximations to Minimax Learning are called "Minutemax" algorithms.

3.1 The Gaussian Approximation of g(φ)

If the phase space factor g(φ) may be approximated as a multi-variate Gaussian (see Coughlan and Yuille (1999) for a justification of this approximation) then the probability distribution P_M(φ) = g(φ)e^{λ·φ}/Z(λ) reduces to another multi-variate Gaussian. (Note that we are making the Gaussian approximation in φ space, the space of all possible image statistics histograms, and not in filter response (feature) space.) As we will see, this result greatly simplifies the problem of estimating the potential λ.

Recall that the mean and covariance of g(φ) are denoted by c and C, respectively. The null space of C has dimension n and is spanned by vectors u^(1), u^(2), ..., u^(n). As discussed in Theorem 1, for all feasible values of φ (i.e. all φ ∈ Φ) and all u^(i) in the null space, u^(i)·φ is a constant k_i. Thus we have that

g_gauss(φ) ∝ {∏_{i=1}^n δ_{u^(i)·φ, k_i}} e^{−(1/2)(φ_r − c_r)^T C_r^{−1} (φ_r − c_r)},    (4)

where the subscript r denotes projection onto the range of C. Thus P_gauss(φ) ∝ g_gauss(φ)e^{λ·φ} ∝ {∏_{i=1}^n δ_{u^(i)·φ, k_i}} e^{−(1/2)(φ_r − c_r)^T C_r^{−1}(φ_r − c_r) + λ·φ}. Completing the square in the exponent yields P_gauss(φ) ∝ {∏_{i=1}^n δ_{u^(i)·φ, k_i}} e^{−(1/2)(φ_r − ψ_r)^T C_r^{−1}(φ_r − ψ_r)}, where ψ_r is the projection of any ψ that satisfies ψ = c + Cλ. Since P_gauss(φ) is a Gaussian we have <φ>_gauss = ψ = d, and so we can write a linear equation relating λ and d: d = c + Cλ. It can be shown (Zhu, private communication) that solving this equation is equivalent to one step of Newton-Raphson for minimization of an appropriate cost function. This will fail to be a good approximation if the cost function is highly nonquadratic.

Figure 1: From left to right: d, c and −λ (as computed by the Gaussian Minutemax approximation) for the first filter alone.
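As a toy numeric sketch of the Minutemax step (the numbers below are made up for illustration; a real run would use the measured c, C and d), the linear system d = c + Cλ can be solved directly when the projected covariance is invertible.

```python
def solve_2x2(C, rhs):
    """Solve C x = rhs for an invertible 2x2 matrix C via Cramer's rule."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    x0 = (rhs[0] * C[1][1] - C[0][1] * rhs[1]) / det
    x1 = (C[0][0] * rhs[1] - rhs[0] * C[1][0]) / det
    return [x0, x1]

C = [[2.0, 0.5], [0.5, 1.0]]   # combinatoric covariance (assumed toy values)
c = [0.3, 0.7]                 # combinatoric mean (assumed toy values)
d = [0.4, 0.6]                 # empirical mean statistics (assumed toy values)

# Minutemax estimate: solve d = c + C lambda for lambda.
lam = solve_2x2(C, [d[0] - c[0], d[1] - c[1]])

# Check: c + C lambda reproduces d.
recon = [c[i] + sum(C[i][j] * lam[j] for j in range(2)) for i in range(2)]
print(recon)                   # recovers d up to rounding
```

The point of the approximation is exactly this: estimating λ becomes a single linear solve instead of an iterative Gibbs-sampling fit.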
As explained in Coughlan and Yuille (1999), the Gaussian approximation is also equivalent to a second-order perturbation expansion of the partition function Z(λ); higher-order corrections can be made by computing higher-order moments of g(φ).

3.2 Experimental Results

We tested the Gaussian Minutemax procedure on two sets of filters: a single (fine scale) image gradient filter ∂I/∂x, and a set of multi-scale image gradient filters defined at three scales, similar to those used by Zhu and Mumford (1997). In both sets, the fine scale gradient filter is linear with kernel (1, −1), representing a discretization of ∂/∂x. In the second set, the medium scale filter kernel is (U_2, −U_2)/4 and the coarse scale kernel is (U_4, −U_4)/16, where U_n denotes the n × n matrix of all ones. The responses of the medium and coarse filters were rounded (i.e. quantized) to the nearest integer, thus adding a non-linearity to these filters. Finally, d was measured on a data set of over 100 natural images; the fine scale components of d are shown in the first panel of Figure 1 and were empirically very similar to the medium and coarse scale components.

A λ that solves d = c + Cλ is shown in the third panel of Figure 1 for the first filter (along with c in the second panel) and in the three panels of Figure 2 for the multi-scale filter set. The form of λ is qualitatively similar to that obtained by Zhu and Mumford (1997) (bearing in mind that Zhu disregarded any filter responses with magnitude above Q/2, i.e. his filter response range is half of ours). In addition, the eigenvectors of C with small eigenvalues are large away from the origin, so one should not trust the values of the potentials there (obtained by any algorithm).

Zhu and Mumford (1997) report interactions between filters applied at different scales. This is because the resulting potentials appear different than the potential at the fine scale even though the histograms appear similar at all scales.
We argue, however, that some of this "interaction" is due to the different phase factors at different scales. In other words, the potentials would look different at different scales even if the empirical histograms were identical, because of differing phase factors.

Figure 2: From left to right: the fine, medium and coarse components of −λ as computed by the Gaussian Minutemax approximation.

Figure 3: Left to right: d, c, and −λ as given by the multinomial approximation for the ∂/∂x filter at fine scale.

3.3 The Multinomial Approximation of g(φ)

Many learning theories simply make probability distributions on feature space. How do they differ from Minimax Entropy Learning, which works on image space? By examining the phase factor we will show that the two approaches are not identical in general. The feature space learning ignores the coupling between the filters which arises due to how the statistics are obtained. More precisely, the probability distribution obtained on feature space, P_F, is equivalent to the Minimax distribution P_M if, and only if, the phase factor is multinomial.

We begin the analysis by considering a single filter. As before we define the combinatoric mean c = Σ_φ ĝ(φ)φ. The multinomial approximation of g(φ) is equivalent to assuming that the combinatoric frequencies of filter responses are independent from pixel to pixel. Since the combinatoric frequency of filter response j ∈ {1, 2, ..., f_max} is c_j and there are Nφ_j pixels with response j, we have:

g_mult(φ) ∝ ∏_{j=1}^{f_max} c_j^{Nφ_j} · N!/∏_{j=1}^{f_max}(Nφ_j)!,  and  P_mult(φ) ∝ ∏_{j=1}^{f_max} (c_j e^{λ_j/N})^{Nφ_j} · N!/∏_{j=1}^{f_max}(Nφ_j)!,    (5)

using P_mult(φ) ∝ g_mult(φ)e^{λ·φ}. Therefore P_mult(φ) is also a multinomial. Shifting the λ_j's by an appropriate additive constant, we can make the constant of proportionality in the above equation equal to 1. In this case we have <φ_j>_mult = c_j e^{λ_j/N} and λ_j = N log(d_j/c_j) by setting <φ_j>_mult to the empirical mean d_j.
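The closed form λ_j = N log(d_j/c_j) makes the multinomial Minutemax essentially free to compute; a toy sketch (with made-up histograms) follows.

```python
import math

def multinomial_potential(d, c, n_pixels):
    """Multinomial-approximation potential: lambda_j = N * log(d_j / c_j)."""
    return [n_pixels * math.log(dj / cj) for dj, cj in zip(d, c)]

d = [0.5, 0.3, 0.2]   # empirical mean histogram (assumed toy values)
c = [0.4, 0.4, 0.2]   # combinatoric mean histogram (assumed toy values)
lam = multinomial_potential(d, c, n_pixels=100)

print(lam[2])          # where d_j == c_j the potential component is 0.0
```

Components where the data histogram exceeds the combinatoric baseline get positive potential, and vice versa.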
Note that if any component d_j of the empirical mean is close to 0 then by the previous equation any small perturbations in d_j (e.g. from sampling error) will yield large changes in λ_j, making the estimate of that component unstable. We can generalize the multinomial approximation of g(φ) to the multiple filter case merely by factoring g_mult(φ) into separate multinomials, one for each filter. Of course, this approximation neglects all interactions among filters (and among pixels).

3.4 The Multinomial Approximation and Feature Learning

The connection between the multinomial approximation and feature learning is straightforward once we consider a distribution on the feature vector f. This distribution (denoted P_F for "feature") is constructed assuming independent filter responses from pixel to pixel and with statistics matching the empirical mean d: P_F(f) = ∏_{i=1}^N d(f_i), where f_i denotes the filter response at pixel i. Then it follows that P_F(φ) is a multinomial: P_F(φ) = ∏_{j=1}^{f_max} d_j^{Nφ_j} · N!/∏_{j=1}^{f_max}(Nφ_j)!. Since d_j = c_j e^{λ_j/N}, we have our main result that P_F(φ) = P_mult(φ).

4 Conclusion

The main point of this paper is to introduce the phase space factor to quantify the mapping between images and their feature statistics. This phase space approach can: (i) provide fast approximate "Minutemax" algorithms, (ii) clarify the relationship between probability distributions learned in feature and image space, and (iii) determine intrinsic ambiguities in the λ potentials.

Acknowledgements

We acknowledge stimulating discussions with Song Chun Zhu. Funding was provided by the Smith-Kettlewell Institute Core Grant and the Center for Imaging Sciences ARO grant DAAN04-95-1-0494.

References

Coughlan, J.M. and Yuille, A.L. "The Phase Space of Minimax Entropy Learning". In preparation. 1999.

Field, D. J.
"Relations between the statistics of natural images and the response properties of cortical cells". Journal of the Optical Society of America A, 4(12), 2379-2394. 1987.

D.C. Knill and W. Richards (Eds). Perception as Bayesian Inference. Cambridge University Press. 1996.

Olshausen, B. A. and Field, D. J. "Emergence of simple-cell receptive field properties by learning a sparse code for natural images". Nature. 381, 607-609. 1996.

B.D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press. 1996.

Ruderman, D. and Bialek, W. "Statistics of Natural Images: Scaling in the Woods". Physical Review Letters. 73(6), 814-817. 8 August 1994.

S.C. Zhu, Y. Wu, and D. Mumford. "Minimax Entropy Principle and Its Application to Texture Modeling". Neural Computation. Vol. 9, no. 8. Nov. 1997.

S.C. Zhu and D. Mumford. "Prior Learning and Gibbs Reaction-Diffusion". IEEE Trans. on PAMI, vol. 19, no. 11. Nov. 1997.

S.-C. Zhu, Y.-N. Wu and D. Mumford. "FRAME: Filters, Random fields And Maximum Entropy: Towards a Unified Theory for Texture Modeling". Int'l Journal of Computer Vision, 27(2), 1-20, March/April 1998.
1998
113
1,468
Almost Linear VC Dimension Bounds for Piecewise Polynomial Networks

Peter L. Bartlett
Department of System Engineering
Australian National University
Canberra, ACT 0200, Australia
Peter.Bartlett@anu.edu.au

Vitaly Maiorov
Department of Mathematics
Technion, Haifa 32000, Israel

Ron Meir
Department of Electrical Engineering
Technion, Haifa 32000, Israel
rmeir@dumbo.technion.ac.il

Abstract

We compute upper and lower bounds on the VC dimension of feedforward networks of units with piecewise polynomial activation functions. We show that if the number of layers is fixed, then the VC dimension grows as W log W, where W is the number of parameters in the network. This result stands in opposition to the case where the number of layers is unbounded, in which case the VC dimension grows as W^2.

1 MOTIVATION

The VC dimension is an important measure of the complexity of a class of binary-valued functions, since it characterizes the amount of data required for learning in the PAC setting (see [BEHW89, Vap82]). In this paper, we establish upper and lower bounds on the VC dimension of a specific class of multi-layered feedforward neural networks. Let F be the class of binary-valued functions computed by a feedforward neural network with W weights and k computational (non-input) units, each with a piecewise polynomial activation function. Goldberg and Jerrum [GJ95] have shown that VCdim(F) ≤ c_1(W^2 + Wk) = O(W^2), where c_1 is a constant. Moreover, Koiran and Sontag [KS97] have demonstrated such a network that has VCdim(F) ≥ c_2 W^2 = Ω(W^2), which would lead one to conclude that the bounds are in fact tight up to a constant. However, the proof used in [KS97] to establish the lower bound made use of the fact that the number of layers can grow with W. In practical applications, this number is often a small constant. Thus, the question remains as to whether it is possible to obtain a better bound in the realistic scenario where the number of layers is fixed.

The contribution of this work is the proof of upper and lower bounds on the VC dimension of piecewise polynomial nets. The upper bound behaves as O(WL^2 + WL log WL), where L is the number of layers. If L is fixed, this is O(W log W), which is superior to the previous best result, which behaves as O(W^2). Moreover, using ideas from [KS97] and [GJ95] we are able to derive a lower bound on the VC dimension which is Ω(WL) for L = O(W). Maass [Maa94] shows that three-layer networks with threshold activation functions and binary inputs have VC dimension Ω(W log W), and Sakurai [Sak93] shows that this is also true for two-layer networks with threshold activation functions and real inputs. It is easy to show that these results imply similar lower bounds if the threshold activation function is replaced by any piecewise polynomial activation function f that has bounded and distinct limits lim_{x→−∞} f(x) and lim_{x→∞} f(x). We thus conclude that if the number of layers L is fixed, the VC dimension of piecewise polynomial networks with L ≥ 2 layers and real inputs, and of piecewise polynomial networks with L ≥ 3 layers and binary inputs, grows as W log W. We note that for the piecewise polynomial networks considered in this work, it is easy to show that the VC dimension and pseudo-dimension are closely related (see e.g. [Vid96]), so that similar bounds (with different constants) hold for the pseudo-dimension. Independently, Sakurai has obtained similar upper bounds and improved lower bounds on the VC dimension of piecewise polynomial networks (see [Sak99]).

2 UPPER BOUNDS

We begin the technical discussion with precise definitions of the VC-dimension and the class of networks considered in this work.

Definition 1 Let X be a set, and A a system of subsets of X. A set S = {x_1, ...
x_n} is shattered by A if, for every subset B ⊆ S, there exists a set A ∈ A such that S ∩ A = B. The VC-dimension of A, denoted by VCdim(A), is the largest integer n such that there exists a set of cardinality n that is shattered by A.

Intuitively, the VC dimension measures the size, n, of the largest set of points for which all possible 2^n labelings may be achieved by sets A ∈ A. It is often convenient to talk about the VC dimension of classes of indicator functions F. In this case we simply identify the sets of points x ∈ X for which f(x) = 1 with the subsets of A, and use the notation VCdim(F).

A feedforward multi-layer network is a directed acyclic graph that represents a parametrized real-valued function of d real inputs. Each node is called either an input unit or a computation unit. The computation units are arranged in L layers. Edges are allowed from input units to computation units. There can also be an edge from a computation unit to another computation unit, but only if the first unit is in a lower layer than the second. There is a single unit in the final layer, called the output unit. Each input unit has an associated real value, which is one of the components of the input vector x ∈ R^d. Each computation unit has an associated real value, called the unit's output value. Each edge has an associated real parameter, as does each computation unit. The output of a computation unit is given by σ(Σ_{e∈E} w_e z_e + w_0), where the sum ranges over the set E of edges leading to the unit, w_e is the parameter (weight) associated with edge e, z_e is the output value of the unit from which edge e emerges, w_0 is the parameter (bias) associated with the unit, and σ : R → R is called the activation function of the unit. The argument of σ is called the net input of the unit. We suppose that in each unit except the output unit, the activation function is a fixed piecewise polynomial function of the form σ(u) = φ_i(u) when t_{i−1} ≤ u < t_i,
,p+ 1 (and set to = -00 and tp+1 = 00), where each cPi is a polynomial of degree no more than l. We say that a has p break-points, and degree l. The activation function in the output unit is the identity function. Let ki denote the number of computational units in layer i and suppose there is a total of W parameters (weights and biases) and k computational units (k = k1 + k2 + ... + kL - 1 + 1). For input x and parameter vector a E A = R w, let f(x, a) denote the output of this network, and let F = {x f-t f(x,a) : a E RW} denote the class of functions computed by such an architecture, as we vary the W parameters. We first discuss the computation of the VC dimension, and thus consider the class of functions sgn(F) = {x f-t sgn(f(x, a)) : a E RW}. Before giving the main theorem of this section, we present the following result, which is a slight improvement of a result due to Warren (see [ABar], Chapter 8). Lemma 2.1 Suppose II (.), h (.), .. , ,f m (-) are fixed polynomials of degree at most 1 in n ~ m variables. Then the number of distinct sign vectors {sgn(Jl (a)), ... ,sgn(J m (a))} that can be generated by varying a ERn is at most 2(2eml/n)n. We then have our main result: Theorem 2.1 For any positive integers W, k ~ W, L ~ W, l, and p, consider a network with real inputs, up to W parameters, up to k computational units arranged in L layers, a single output unit with the identity activation function, and all other computation units with piecewise polynomial activation functions of degree 1 and with p break-points. Let F be the class of real-valued functions computed by this network. Then VCdim(sgn(F)) ~ 2WLlog(2eWLpk) + 2WL2log(1 + 1) + 2L. Since Land k are O(W), for fixed 1 and p this implies that VCdim(sgn(F)) = O(WLlogW + WL2). Before presenting the proof, we outline the main idea in the construction. 
For any fixed input $x$, the output of the network $f(x, a)$ corresponds to a piecewise polynomial function in the parameters $a$, of degree no larger than $(l + 1)^{L-1}$ (recall that the last layer is linear). Thus, the parameter domain $A = \mathbb{R}^W$ can be split into regions, in each of which the function $f(x, \cdot)$ is polynomial. From Lemma 2.1, it is possible to obtain an upper bound on the number of sign assignments that can be attained by varying the parameters of a set of polynomials. The theorem will be established by combining this bound with a bound on the number of regions.

PROOF OF THEOREM 2.1 For an arbitrary choice of $m$ points $x_1, x_2, \ldots, x_m$, we wish to bound
$$K = \left|\left\{\left(\mathrm{sgn}(f(x_1, a)), \ldots, \mathrm{sgn}(f(x_m, a))\right) : a \in A\right\}\right|.$$

Almost Linear VC Dimension Bounds for Piecewise Polynomial Networks 193

Fix these $m$ points, and consider a partition $\{S_1, S_2, \ldots, S_N\}$ of the parameter domain $A$. Clearly
$$K \le \sum_{i=1}^{N} \left|\left\{\left(\mathrm{sgn}(f(x_1, a)), \ldots, \mathrm{sgn}(f(x_m, a))\right) : a \in S_i\right\}\right|.$$
We choose the partition so that within each region $S_i$, $f(x_1, \cdot), \ldots, f(x_m, \cdot)$ are all fixed polynomials of degree no more than $(l + 1)^{L-1}$. Then, by Lemma 2.1, each term in the sum above is no more than
$$2\left(\frac{2em(l + 1)^{L-1}}{W}\right)^W. \qquad (1)$$
The only remaining point is to construct the partition and determine an upper bound on its size. The partition is constructed recursively, using the following procedure. Let $\mathcal{S}_1$ be a partition of $A$ such that, for all $S \in \mathcal{S}_1$, there are constants $b_{h,i,j} \in \{0, 1\}$ for which
$$\mathrm{sgn}\left(p_{h,x_j}(a) - t_i\right) = b_{h,i,j} \quad \text{for all } a \in S,$$
where $j \in \{1, \ldots, m\}$, $h \in \{1, \ldots, k_1\}$ and $i \in \{1, \ldots, p\}$. Here the $t_i$ are the break-points of the piecewise polynomial activation functions, and $p_{h,x_j}$ is the affine function describing the net input to the $h$-th unit in the first layer, in response to $x_j$. That is, $p_{h,x_j}(a) = a_h \cdot x_j + a_{h,0}$, where $a_h \in \mathbb{R}^d$, $a_{h,0} \in \mathbb{R}$ are the weights of the $h$-th unit in the first layer. Note that the partition $\mathcal{S}_1$ is determined solely by the parameters corresponding to the first hidden layer, as the input to this layer is unaffected by the other parameters.
Clearly, for $a \in S$, the output of any first layer unit in response to an $x_j$ is a fixed polynomial in $a$. Now, let $W_1, \ldots, W_L$ be the number of variables used in computing the unit outputs up to layer $1, \ldots, L$ respectively (so $W_L = W$), and let $k_1, \ldots, k_L$ be the number of computation units in layer $1, \ldots, L$ respectively (recall that $k_L = 1$). Then we can choose $\mathcal{S}_1$ so that $|\mathcal{S}_1|$ is no more than the number of sign assignments possible with $mk_1p$ affine functions in $W_1$ variables. Lemma 2.1 shows that
$$|\mathcal{S}_1| \le 2\left(\frac{2emk_1p}{W_1}\right)^{W_1}.$$
Now, we define $\mathcal{S}_n$ (for $n > 1$) as follows. Assume that for all $S$ in $\mathcal{S}_{n-1}$ and all $x_j$, the net input of every unit in layer $n$ in response to $x_j$ is a fixed polynomial function of $a \in S$, of degree no more than $(l + 1)^{n-1}$. Let $\mathcal{S}_n$ be a partition of $A$ that is a refinement of $\mathcal{S}_{n-1}$ (that is, for all $S \in \mathcal{S}_n$, there is an $S' \in \mathcal{S}_{n-1}$ with $S \subseteq S'$), such that for all $S \in \mathcal{S}_n$ there are constants $b_{h,i,j} \in \{0, 1\}$ such that
$$\mathrm{sgn}\left(p_{h,x_j}(a) - t_i\right) = b_{h,i,j} \quad \text{for all } a \in S, \qquad (2)$$
where $p_{h,x_j}$ is the polynomial function describing the net input of the $h$-th unit in the $n$-th layer, in response to $x_j$, when $a \in S$. Since $S \subseteq S'$ for some $S' \in \mathcal{S}_{n-1}$, (2) implies that the output of each $n$-th layer unit in response to an $x_j$ is a fixed polynomial in $a$ of degree no more than $l(l + 1)^{n-1}$, for all $a \in S$. Finally, we can choose $\mathcal{S}_n$ such that, for all $S' \in \mathcal{S}_{n-1}$, $|\{S \in \mathcal{S}_n : S \subseteq S'\}|$ is no more than the number of sign assignments of $mk_np$ polynomials in $W_n$ variables of degree no more than $(l + 1)^{n-1}$, and by Lemma 2.1 this is no more than
$$2\left(\frac{2emk_np(l + 1)^{n-1}}{W_n}\right)^{W_n}.$$
Notice also that the net input of every unit in layer $n + 1$ in response to $x_j$ is a fixed polynomial function of $a \in S \in \mathcal{S}_n$ of degree no more than $(l + 1)^n$. Proceeding in this way we get a partition $\mathcal{S}_{L-1}$ of $A$ such that for $S \in \mathcal{S}_{L-1}$ the network output in response to any $x_j$ is a fixed polynomial of $a \in S$ of degree no more than $l(l + 1)^{L-2}$.
Furthermore,
$$|\mathcal{S}_{L-1}| \le 2\left(\frac{2emk_1p}{W_1}\right)^{W_1} \prod_{i=2}^{L-1} 2\left(\frac{2emk_ip(l + 1)^{i-1}}{W_i}\right)^{W_i} \le \prod_{i=1}^{L-1} 2\left(\frac{2emk_ip(l + 1)^{i-1}}{W_i}\right)^{W_i}.$$
Multiplying by the bound (1) gives the result
$$K \le \prod_{i=1}^{L} 2\left(\frac{2emk_ip(l + 1)^{i-1}}{W_i}\right)^{W_i}.$$
Since the points $x_1, \ldots, x_m$ were chosen arbitrarily, this gives a bound on the maximal number of dichotomies induced by $a \in A$ on $m$ points. An upper bound on the VC-dimension is then obtained by computing the largest value of $m$ for which this number is at least $2^m$, yielding
$$m < L + \sum_{i=1}^{L} W_i \log\left(\frac{2empk_i(l + 1)^{i-1}}{W_i}\right) \le L\left[1 + (L - 1)W\log(l + 1) + W\log(2empk)\right],$$
where all logarithms are to the base 2. We conclude (see for example [Vid96], Lemma 4.4) that
$$\mathrm{VCdim}(\mathrm{sgn}(F)) \le 2L\left[(L - 1)W\log(l + 1) + W\log(2eWLpk) + 1\right]. \qquad \blacksquare$$

We briefly mention the application of this result to the problem of learning a regression function $E[Y|X = x]$ from $n$ input/output pairs $\{(X_i, Y_i)\}_{i=1}^{n}$, drawn independently at random from an unknown distribution $P(X, Y)$. In the case of quadratic loss, $L(f) = E(Y - f(X))^2$, one can show that there exist constants $c_1 \ge 1$ and $c_2$ such that
$$E L(\hat{f}_n) \le \delta^2 + c_1 \inf_{f \in F} \bar{L}(f) + c_2\,\frac{\mathrm{Pdim}(F)\log n}{n},$$
where $\delta^2 = E\left[Y - E[Y|X]\right]^2$ is the noise variance, $\bar{L}(f) = E\left[(E[Y|X] - f(X))^2\right]$ is the approximation error of $f$, and $\hat{f}_n$ is a function from the class $F$ that approximately minimizes the sample average of the quadratic loss. Making use of recently derived bounds [MM97] on the approximation error $\inf_{f \in F} \bar{L}(f)$, which are equal, up to logarithmic factors, to those obtained for networks of units with the standard sigmoidal function $\sigma(u) = (1 + e^{-u})^{-1}$, and combining with the considerably lower pseudo-dimension bounds for piecewise polynomial networks, we obtain much better error rates than are currently available for sigmoid networks.

3 LOWER BOUND

We now compute a lower bound on the VC dimension of neural networks with continuous activation functions. This result generalizes the lower bound in [KS97], since it holds for any number of layers.
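For concrete architectures, the upper bound just derived and the lower bound of the next section can be compared numerically. A small helper of ours (logs base 2, as in the proof; $l = p = 1$ corresponds to piecewise-linear units with one break-point):

```python
import math

def vc_upper_bound(W, L, k, l, p):
    """Theorem 2.1: VCdim(sgn(F)) <= 2WL log2(2eWLpk) + 2WL^2 log2(l+1) + 2L."""
    return (2 * W * L * math.log2(2 * math.e * W * L * p * k)
            + 2 * W * L ** 2 * math.log2(l + 1)
            + 2 * L)

def vc_lower_bound(W, L):
    """Theorem 3.1: VCdim(sgn(F)) >= floor(L/2) * floor(W/2), valid for W >= 10L - 14."""
    return (L // 2) * (W // 2)
```

For example, with $W = 1000$, $L = 4$, $k = 100$ and $l = p = 1$, the lower bound is $2 \cdot 500 = 1000$, of order $WL$, while the upper bound grows like $WL\log W + WL^2$, so the two match up to a logarithmic factor in $W$ for fixed depth.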
Theorem 3.1 Suppose $f : \mathbb{R} \to \mathbb{R}$ has the following properties:
1. $\lim_{\alpha \to \infty} f(\alpha) = 1$ and $\lim_{\alpha \to -\infty} f(\alpha) = 0$, and
2. $f$ is differentiable at some point $x_0$ with derivative $f'(x_0) \ne 0$.
Then for any $L \ge 1$ and $W \ge 10L - 14$, there is a feedforward network with the following properties: the network has $L$ layers and $W$ parameters, the output unit is a linear unit, all other computation units have activation function $f$, and the set $\mathrm{sgn}(F)$ of functions computed by the network has
$$\mathrm{VCdim}(\mathrm{sgn}(F)) \ge \left\lfloor \frac{L}{2} \right\rfloor \left\lfloor \frac{W}{2} \right\rfloor,$$
where $\lfloor u \rfloor$ is the largest integer less than or equal to $u$.

PROOF As in [KS97], the proof follows that of Theorem 2.5 in [GJ95], but we show how the functions described in [GJ95] can be computed by a network, and keep track of the number of parameters and layers required. We first prove the lower bound for a network containing linear threshold units and linear units (with the identity activation function), and then show that all except the output unit can be replaced by units with activation function $f$, and the resulting network still shatters the same set. For further details of the proof, see the full paper [BMM98].

Fix positive integers $M, N \in \mathbb{N}$. We now construct a set of $MN$ points, which may be shattered by a network with $O(N)$ weights and $O(M)$ layers. Let $\{a_i\}$, $i = 1, 2, \ldots, N$, denote a set of $N$ parameters, where each $a_i \in [0, 1)$ has an $M$-bit binary representation $a_i = \sum_{j=1}^{M} 2^{-j}a_{i,j}$, $a_{i,j} \in \{0, 1\}$, i.e. the $M$-bit base-two representation of $a_i$ is $a_i = 0.a_{i,1}a_{i,2}\cdots a_{i,M}$. We will consider inputs in $B_N \times B_M$, where $B_N = \{e_i : 1 \le i \le N\}$, $e_i \in \{0, 1\}^N$ has $i$-th bit 1 and all other bits 0, and $B_M$ is defined similarly. We show how to extract the bits of the $a_i$, so that for input $x = (e_l, e_m)$ the network outputs $a_{l,m}$. Since there are $NM$ inputs of the form $(e_l, e_m)$, and the $a_{l,m}$ can take on all possible $2^{MN}$ values, the result will follow.
There are three stages to the computation of $a_{l,m}$: (1) computing $a_l$, (2) extracting $a_{l,k}$ from $a_l$ for every $k$, and (3) selecting $a_{l,m}$ among the $a_{l,k}$. Suppose the network input is $x = ((u_1, \ldots, u_N), (v_1, \ldots, v_M)) = (e_l, e_m)$. Using one linear unit we can compute $\sum_{i=1}^{N} u_ia_i = a_l$. This involves $N + 1$ parameters and one computation unit in one layer. In fact, we only need $N$ parameters, but we need the extra parameter when we show that this linear unit can be replaced by a unit with activation function $f$.

Consider the parameter $c_k = 0.a_{l,k}\cdots a_{l,M}$, that is, $c_k = \sum_{j=k}^{M} 2^{k-1-j}a_{l,j}$ for $k = 1, \ldots, M$. Since $c_k \ge 1/2$ iff $a_{l,k} = 1$, clearly $\mathrm{sgn}(c_k - 1/2) = a_{l,k}$ for all $k$. Also, $c_1 = a_l$ and $c_k = 2c_{k-1} - a_{l,k-1}$. Thus, consider the recursion
$$c_k = 2c_{k-1} - a_{l,k-1}, \qquad a_{l,k} = \mathrm{sgn}(c_k - 1/2),$$
with initial conditions $c_1 = a_l$ and $a_{l,1} = \mathrm{sgn}(a_l - 1/2)$. Clearly, we can compute $a_{l,1}, \ldots, a_{l,M-1}$ and $c_2, \ldots, c_{M-1}$ in another $2(M - 2) + 1$ layers, using $5(M - 2) + 2$ parameters in $2(M - 2) + 1$ computational units. We could compute $a_{l,M}$ in the same way, but the following approach gives fewer layers. Set $b = \mathrm{sgn}\left(2c_{M-1} - a_{l,M-1} - \sum_{i=1}^{M-1} v_i\right)$. If $m \ne M$ then $b = 0$. If $m = M$ then the input vector $(v_1, \ldots, v_M) = e_M$, and thus $\sum_{i=1}^{M-1} v_i = 0$, implying that $b = \mathrm{sgn}(c_M) = \mathrm{sgn}(0.a_{l,M}) = a_{l,M}$.

In order to conclude the proof, we need to show how the variables $a_{l,m}$ may be recovered, depending on the inputs $(v_1, v_2, \ldots, v_M)$. We then have $a_{l,m} = b \vee \bigvee_{i=1}^{M-1}(a_{l,i} \wedge v_i)$. Since for boolean $x$ and $y$, $x \wedge y = \mathrm{sgn}(x + y - 3/2)$, and $\bigvee_{i=1}^{M} x_i = \mathrm{sgn}\left(\sum_{i=1}^{M} x_i - 1/2\right)$, we see that the computation of $a_{l,m}$ involves an additional $5M$ parameters in $M + 1$ computational units, and adds another 2 layers. In total, there are $2M$ layers and $10M + N - 7$ parameters, and the network shatters a set of size $NM$. Clearly, we can add parameters and layers without affecting the function of the network. So for any $L, W \in \mathbb{N}$, we can set $M = \lfloor L/2 \rfloor$ and $N = W + 7 - 10M$, which is at least $\lfloor W/2 \rfloor$ provided $W \ge 10L - 14$.
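Outside the network, the bit-extraction recursion above is a two-line loop. This sketch (ours, with plain comparisons standing in for the sgn units) recovers the bits of a dyadic rational $a_l \in [0, 1)$, for which floating-point doubling is exact:

```python
def extract_bits(a, M):
    """Recover the M-bit binary expansion of a in [0, 1) via the proof's
    shift-and-threshold recursion: bit_k = [c_k >= 1/2], c_{k+1} = 2*c_k - bit_k,
    starting from c_1 = a."""
    bits, c = [], a
    for _ in range(M):
        bit = 1 if c >= 0.5 else 0   # a_{l,k} = sgn(c_k - 1/2)
        bits.append(bit)
        c = 2 * c - bit              # c_{k+1} = 2*c_k - a_{l,k}
    return bits
```

Each iteration costs a constant number of parameters and units, which is where the $2M$-layer, $O(M)$-parameter count in the proof comes from.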
In that case, the VC-dimension is at least $\lfloor L/2 \rfloor \lfloor W/2 \rfloor$. The network just constructed uses linear threshold units and linear units. However, it is easy to show (see [KS97], Theorem 5) that each unit except the output unit can be replaced by a unit with activation function $f$ so that the network still shatters the set of size $MN$. For linear units, the input and output weights are scaled so that the linear function can be approximated to sufficient accuracy by $f$ in the neighborhood of the point $x_0$. For linear threshold units, the input weights are scaled so that the behavior of $f$ at infinity accurately approximates a linear threshold function. $\blacksquare$

References
[ABar] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999 (to appear).
[BEHW89] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. J. ACM, 36(4):929-965, 1989.
[BMM98] P. L. Bartlett, V. Maiorov, and R. Meir. Almost linear VC-dimension bounds for piecewise polynomial networks. Neural Computation, 10:2159-2173, 1998.
[GJ95] P. W. Goldberg and M. R. Jerrum. Bounding the VC dimension of concept classes parameterized by real numbers. Machine Learning, 18:131-148, 1995.
[KS97] P. Koiran and E. D. Sontag. Neural networks with quadratic VC dimension. Journal of Computer and System Science, 54:190-198, 1997.
[Maa94] W. Maass. Neural nets with superlinear VC-dimension. Neural Computation, 6(5):877-884, 1994.
[MM97] V. Maiorov and R. Meir. On the near optimality of the stochastic approximation of smooth functions by neural networks. Submitted for publication, 1997.
[Sak93] A. Sakurai. Tighter bounds on the VC-dimension of three-layer networks. In World Congress on Neural Networks, volume 3, pages 540-543, Hillsdale, NJ, 1993. Erlbaum.
[Sak99] A. Sakurai. Tight bounds for the VC-dimension of piecewise polynomial networks. In Advances in Neural Information Processing Systems, volume 11.
MIT Press, 1999.
[Vap82] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, New York, 1982.
[Vid96] M. Vidyasagar. A Theory of Learning and Generalization. Springer-Verlag, New York, 1996.
1998
Bayesian Modeling of Facial Similarity

Baback Moghaddam
Mitsubishi Electric Research Laboratory
201 Broadway, Cambridge, MA 02139, USA
baback@merl.com

Tony Jebara and Alex Pentland
Massachusetts Institute of Technology
20 Ames St., Cambridge, MA 02139, USA
{jebara,sandy}@media.mit.edu

Abstract

In previous work [6, 9, 10], we advanced a new technique for direct visual matching of images for the purposes of face recognition and image retrieval, using a probabilistic measure of similarity based primarily on a Bayesian (MAP) analysis of image differences, leading to a "dual" basis similar to eigenfaces [13]. The performance advantage of this probabilistic matching technique over standard Euclidean nearest-neighbor eigenface matching was recently demonstrated using results from DARPA's 1996 "FERET" face recognition competition, in which this probabilistic matching algorithm was found to be the top performer. We have further developed a simple method of replacing the costly computation of nonlinear (online) Bayesian similarity measures by the relatively inexpensive computation of linear (offline) subspace projections and simple (online) Euclidean norms, thus resulting in a significant computational speed-up for implementation with very large image databases as typically encountered in real-world applications.

1 Introduction

Current approaches to image matching for visual object recognition and image database retrieval often make use of simple image similarity metrics such as Euclidean distance or normalized correlation, which correspond to a template-matching approach to recognition [2, 5]. For example, in its simplest form, the similarity measure $S(I_1, I_2)$ between two images $I_1$ and $I_2$ can be set to be inversely proportional to the norm $\|I_1 - I_2\|$. Such a simple formulation suffers from a major drawback: it does not exploit knowledge of which types of variation are critical (as opposed to incidental) in expressing similarity.
In this paper, we formulate a probabilistic similarity measure which is based on the probability that the image intensity differences, denoted by $\Delta = I_1 - I_2$, are characteristic of typical variations in appearance of the same object. For example, for purposes of face recognition, we can define two classes of facial image variations: intrapersonal variations $\Omega_I$ (corresponding, for example, to different facial expressions of the same individual) and extrapersonal variations $\Omega_E$ (corresponding to variations between different individuals). Our similarity measure is then expressed in terms of the probability
$$S(I_1, I_2) = P(\Omega_I \,|\, \Delta), \qquad (1)$$
where $P(\Omega_I \,|\, \Delta)$ is the a posteriori probability given by Bayes rule, using estimates of the likelihoods $P(\Delta \,|\, \Omega_I)$ and $P(\Delta \,|\, \Omega_E)$. The likelihoods are derived from training data using an efficient subspace method for density estimation of high-dimensional data [7, 8]. This Bayesian (MAP) approach can also be viewed as a generalized nonlinear extension of Linear Discriminant Analysis (LDA) [12, 3] or "Fisher Face" techniques [1] for face recognition. Moreover, our nonlinear generalization has distinct computational/storage advantages over some of these linear methods for large databases.

2 Difference Density Modeling

Consider the problem of characterizing the type of intensity differences which occur when matching two images in a face recognition task. We have two classes (intrapersonal $\Omega_I$ and extrapersonal $\Omega_E$) which we will assume form Gaussian distributions whose likelihoods can be estimated as $P(\Delta \,|\, \Omega_I)$ and $P(\Delta \,|\, \Omega_E)$ for a given intensity difference $\Delta = I_1 - I_2$. Given these likelihoods we can evaluate a similarity score $S(I_1, I_2)$ between a pair of images directly in terms of the intrapersonal a posteriori probability as given by Bayes rule:
$$S = \frac{P(\Delta \,|\, \Omega_I)P(\Omega_I)}{P(\Delta \,|\, \Omega_I)P(\Omega_I) + P(\Delta \,|\, \Omega_E)P(\Omega_E)}, \qquad (2)$$
where the priors $P(\Omega)$ can be set to reflect specific operating conditions (e.g., number of test images vs. the size of the database) or other sources of a priori knowledge regarding the two images being matched.
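As a toy illustration of Equations 1 and 2 (ours, in a low-dimensional space; the paper estimates the two class-conditional densities in PCA subspaces of the high-dimensional $\Delta$, where both are zero-mean):

```python
import numpy as np

def gaussian_density(x, cov):
    """Zero-mean Gaussian density N(0, cov) evaluated at x."""
    k = len(x)
    quad = x @ np.linalg.solve(cov, x)
    norm = np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))
    return np.exp(-0.5 * quad) / norm

def similarity(delta, cov_I, cov_E, prior_I=0.5):
    """S(I1, I2) = P(Omega_I | Delta), Bayes' rule over the two
    class-conditional likelihoods P(Delta | Omega_I), P(Delta | Omega_E)."""
    lI = gaussian_density(delta, cov_I) * prior_I
    lE = gaussian_density(delta, cov_E) * (1.0 - prior_I)
    return lI / (lI + lE)
```

With a tight intrapersonal covariance and a broad extrapersonal one, a small difference image scores above $1/2$ (same individual under the MAP rule) and a large one scores below it.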
Additionally, this particular Bayesian formulation casts the standard face recognition task (essentially an $M$-ary classification problem for $M$ individuals) into a binary pattern classification problem with $\Omega_I$ and $\Omega_E$. This much simpler problem is then solved using the maximum a posteriori (MAP) rule: two images are determined to belong to the same individual if $P(\Omega_I \,|\, \Delta) > P(\Omega_E \,|\, \Delta)$, or equivalently, if $S(I_1, I_2) > \frac{1}{2}$.

To deal with the high-dimensionality of $\Delta$, we make use of the efficient density estimation method proposed by Moghaddam & Pentland [7, 8] which divides the vector space $\mathbb{R}^N$ into two complementary subspaces using an eigenspace decomposition. This method relies on a Principal Components Analysis (PCA) [4] to form a low-dimensional estimate of the complete likelihood which can be evaluated using only the first $M$ principal components, where $M \ll N$.

3 Efficient Similarity Computation

Consider now a feature space of $\Delta$ vectors, the differences between two images ($I_j$ and $I_k$). The two classes of interest in this space correspond to intrapersonal and extrapersonal variations and each is modeled as a high-dimensional Gaussian density as in Equation 3. The densities are zero-mean since for each $\Delta = I_j - I_k$ there exists a $\Delta = I_k - I_j$.
$$P(\Delta \,|\, \Omega) = \frac{e^{-\frac{1}{2}\Delta^T\Sigma^{-1}\Delta}}{(2\pi)^{D/2}|\Sigma|^{1/2}}. \qquad (3)$$
By PCA, the Gaussians are known to only occupy a subspace of image space (face-space) and thus, only the top few eigenvectors of the Gaussian densities are relevant for modeling. These densities are used to evaluate the similarity score in Equation 2. Computing the similarity score involves first subtracting a candidate image $I_j$ from a database entry $I_k$. The resulting $\Delta$ image is then projected onto the eigenvectors of the extrapersonal Gaussian and also the eigenvectors of the intrapersonal Gaussian. The exponentials are computed, normalized and then combined as in Equation 2.
This operation is iterated over all members of the database (many $I_k$ images) until the maximum score is found (i.e. the match). Thus, for large databases, this evaluation is expensive but can be simplified by offline transformations. To compute the likelihoods $P(\Delta \,|\, \Omega_I)$ and $P(\Delta \,|\, \Omega_E)$ we pre-process the $I_k$ images with whitening transformations. Each image is converted and stored as whitened subspace coefficients: $\mathbf{i}$ for intrapersonal space and $\mathbf{e}$ for extrapersonal space (see Equation 4). Here, $\Lambda$ and $V$ are matrices of the largest eigenvalues and eigenvectors of $\Sigma_E$ or $\Sigma_I$. Typically, we have used $M_I = 100$ and $M_E = 100$ for $\Omega_I$ and $\Omega_E$ respectively.
$$\mathbf{i}_k = \Lambda_I^{-1/2}V_I^T I_k, \qquad \mathbf{e}_k = \Lambda_E^{-1/2}V_E^T I_k. \qquad (4)$$
After this pre-processing, evaluating the Gaussians can be reduced to simple Euclidean distances as in Equation 5. Denominators are of course pre-computed. These likelihoods are evaluated and used to compute the MAP similarity $S$ in Equation 2. Euclidean distances are computed between the 100-dimensional $\mathbf{i}$ vectors as well as the 100-dimensional $\mathbf{e}$ vectors. Thus, roughly $2 \times (M_E + M_I) = 400$ arithmetic operations are required for each similarity computation, avoiding repeated image differencing and projections.
$$P(\Delta \,|\, \Omega_E) = \frac{e^{-\frac{1}{2}\|\mathbf{e}_j - \mathbf{e}_k\|^2}}{(2\pi)^{D/2}|\Sigma_E|^{1/2}}, \qquad P(\Delta \,|\, \Omega_I) = \frac{e^{-\frac{1}{2}\|\mathbf{i}_j - \mathbf{i}_k\|^2}}{(2\pi)^{D/2}|\Sigma_I|^{1/2}}. \qquad (5)$$
The ML similarity matching is even simpler since only the intra-personal class is evaluated, leading to the following modified form for the similarity measure:
$$S' = P(\Delta \,|\, \Omega_I). \qquad (6)$$

Figure 1: Examples of FERET frontal-view image pairs used for (a) the Gallery set (training) and (b) the Probe set (testing).

Figure 2: Face alignment system [7].

4 Experimental Results

To test our recognition strategy we used a collection of images from the ARPA FERET face database. The set of images consists of pairs of frontal-views (FA/FB) and are divided into two subsets: the "gallery" (training set) and the "probes" (testing set).
The gallery images consisted of 74 pairs of images (2 per individual) and the probe set consisted of 38 pairs of images, corresponding to a subset of the gallery members. The probe and gallery datasets were captured a week apart and exhibit differences in clothing, hair and lighting (see Figure 1). Each of these images was affine normalized with a canonical model using an automatic face-processing system which normalizes for translation, scale as well as slight rotations (both in-plane and out-of-plane). This system is described in detail in [7, 8] and uses maximum-likelihood estimation of object location (in this case the position and scale of a face and the location of individual facial features) to geometrically align faces into standard normalized form as shown in Figure 2. All the faces in our experiments were geometrically aligned and normalized in this manner prior to further analysis.

4.1 Eigenface Matching

As a baseline comparison, we first used an eigenface matching technique for recognition [13]. The normalized images from the gallery and the probe sets were projected onto a 100-dimensional eigenspace similar to that shown in Figure 3 and a nearest-neighbor rule based on a Euclidean distance measure was used to match each probe image to a gallery image.

Figure 3: Standard Eigenfaces.

Figure 4: "Dual" Eigenfaces: (a) Intrapersonal, (b) Extrapersonal.

We note that this method corresponds to a generalized template-matching method which uses a Euclidean norm measure of similarity which is, however, restricted to the principal subspace of the data. The rank-1 recognition rate obtained with this method was found to be 84%.
4.2 Bayesian Matching

For our probabilistic algorithm, we first gathered training data by computing the intensity differences for a training subset of 74 intrapersonal differences (by matching the two views of every individual in the gallery) and a random subset of 296 extrapersonal differences (by matching images of different individuals in the gallery), corresponding to the classes $\Omega_I$ and $\Omega_E$, respectively, and performing a separate PCA analysis on each. We note that the two mutually exclusive classes $\Omega_I$ and $\Omega_E$ correspond to a "dual" set of eigenfaces as shown in Figure 4. Note that the intrapersonal variations shown in Figure 4-(a) represent subtle variations due mostly to expression changes (and lighting) whereas the extrapersonal variations in Figure 4-(b) are more representative of general eigenfaces which code variations such as hair color, facial hair and glasses. These extrapersonal eigenfaces are qualitatively similar to the standard normalized intensity eigenfaces shown in Figure 3.

We next computed the likelihood estimates $P(\Delta \,|\, \Omega_I)$ and $P(\Delta \,|\, \Omega_E)$ using the PCA-based method [7, 8], using subspace dimensions of $M_I = 10$ and $M_E = 30$ for $\Omega_I$ and $\Omega_E$, respectively. These density estimates were then used with a default setting of equal priors, $P(\Omega_I) = P(\Omega_E)$, to evaluate the a posteriori intrapersonal probability $P(\Omega_I \,|\, \Delta)$ for matching probe images to those in the gallery. Therefore, for each probe image we computed probe-to-gallery differences and sorted the matching order, this time using the a posteriori probability $P(\Omega_I \,|\, \Delta)$ as the similarity measure. This probabilistic ranking yielded an improved rank-1 recognition rate of 90%.
Figure 5: Cumulative recognition rates for frontal FA/FB views for the competing algorithms in the FERET 1996 test. The top curve (labeled "MIT Sep 96") corresponds to our Bayesian matching technique. Note that second place is standard eigenface matching (labeled "MIT Mar 95").

4.3 The 1996 FERET Competition

Our Bayesian approach to recognition has yielded even more significant improvement over simple eigenface techniques with very large face databases. The probabilistic similarity measure was tested in the September 1996 ARPA FERET face recognition competition and yielded a surprising 95% recognition accuracy (on nearly 1200 individuals), making it the top-performing system by a typical margin of 10-20% over the other competing algorithms [11] (see Figure 5). A comparison between standard eigenfaces and the Bayesian method from this test shows a 10% gain in performance afforded by the new similarity measure. Thus we note that, in this particular case, the probabilistic similarity measure has effectively halved the error rate of eigenface matching.

Note that we can also use the simplified similarity measure based on the intrapersonal eigenfaces for a maximum likelihood (ML) matching technique using
$$S' = P(\Delta \,|\, \Omega_I) \qquad (7)$$
instead of the maximum a posteriori (MAP) approach defined by Equation 2. Although this simplified measure has not been officially FERET tested, our own internal experiments with a database of size 2000 have shown that using $S'$ instead of $S$ results in only a minor (2-3%) deficit in the recognition rate while at the same time cutting the computational cost by a further factor of 2.
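The offline whitening speed-up of Section 3 is easy to verify numerically. In this sketch (ours; `V` and `lam` are the eigenvectors and eigenvalues of a toy class covariance), storing the whitened coefficients once per image reduces the Gaussian exponent for a difference $\Delta = I_1 - I_2$ to a plain squared Euclidean distance between stored vectors:

```python
import numpy as np

def whiten(images, V, lam):
    """Stored whitened coefficients Lambda^{-1/2} V^T x, one row per image."""
    return (images @ V) / np.sqrt(lam)

def gaussian_exponent(c1, c2):
    """-(1/2)||c1 - c2||^2, the Mahalanobis exponent of Delta = I1 - I2
    recovered from pre-stored whitened coefficients."""
    return -0.5 * np.sum((c1 - c2) ** 2)

# Check against the direct exponent -(1/2) Delta' Sigma^{-1} Delta,
# using the full eigenbasis of a toy covariance.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 4))
Sigma = A.T @ A / 50
lam, V = np.linalg.eigh(Sigma)
imgs = rng.normal(size=(2, 4))
c = whiten(imgs, V, lam)
delta = imgs[0] - imgs[1]
direct = -0.5 * delta @ np.linalg.solve(Sigma, delta)
```

In the truncated-subspace setting of the paper, only the top $M_I$ and $M_E$ coefficients would be kept per class, which is how each match drops to roughly $2(M_E + M_I)$ arithmetic operations.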
5 Conclusions

The performance advantage of our probabilistic matching technique has been demonstrated using both a small database (internally tested) as well as a large (800+) database with an independent double-blind test as part of ARPA's September 1996 "FERET" competition, in which Bayesian similarity out-performed all competing algorithms (at least one of which was using an LDA/Fisher type method). We believe that these results clearly demonstrate the superior performance of probabilistic matching over eigenface, LDA/Fisher and other existing techniques.

The results obtained with the simplified ML similarity measure ($S'$ in Eq. 7) suggest a computationally equivalent yet superior alternative to standard eigenface matching. In other words, a likelihood similarity based on the intrapersonal density $P(\Delta \,|\, \Omega_I)$ alone is far superior to nearest-neighbor matching in eigenspace while essentially requiring the same number of projections. For completeness (and a slightly better performance), however, one should use the a posteriori similarity $S$ in Eq. 2, at twice the computational cost of standard eigenfaces.

This probabilistic framework is particularly advantageous in that the intra/extra density estimates explicitly characterize the type of appearance variations which are critical in formulating a meaningful measure of similarity. For example, the deformations corresponding to facial expression changes (which may have high image-difference norms) are, in fact, irrelevant when the measure of similarity is to be based on identity. The subspace density estimation method used for representing these classes thus corresponds to a learning method for discovering the principal modes of variation important to the classification task.

References
[1] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection.
IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-19(7):711-720, July 1997.
[2] R. Brunelli and T. Poggio. Face recognition: Features vs. templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(10), October 1993.
[3] K. Etemad and R. Chellappa. Discriminant analysis for recognition of human faces. In Proc. of Int'l Conf. on Acoustics, Speech and Signal Processing, pages 2148-2151, 1996.
[4] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[5] M. J. Jones and T. Poggio. Model-based matching by linear combination of prototypes. AI Memo No. 1583, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, November 1996.
[6] B. Moghaddam, C. Nastar, and A. Pentland. Bayesian face recognition using deformable intensity differences. In Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, June 1996.
[7] B. Moghaddam and A. Pentland. Probabilistic visual learning for object detection. In IEEE Proceedings of the Fifth International Conference on Computer Vision (ICCV'95), Cambridge, USA, June 1995.
[8] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-19(7):696-710, July 1997.
[9] B. Moghaddam, W. Wahid, and A. Pentland. Beyond eigenfaces: Probabilistic matching for face recognition. In Proc. of Int'l Conf. on Automatic Face and Gesture Recognition, pages 30-35, Nara, Japan, April 1998.
[10] C. Nastar, B. Moghaddam, and A. Pentland. Generalized image matching: Statistical learning of physically-based deformations. In Proceedings of the Fourth European Conference on Computer Vision (ECCV'96), Cambridge, UK, April 1996.
[11] P. J. Phillips, H. Moon, P. Rauss, and S. Rizvi. The FERET evaluation methodology for face-recognition algorithms. In IEEE Proceedings of Computer Vision and Pattern Recognition, pages 137-143, June 1997.
[12] D. Swets and J. Weng.
Using discriminant eigenfeatures for image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-18(8):831-836, August 1996.
[13] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 1991.
1998
The Bias-Variance Tradeoff and the Randomized GACV

Grace Wahba, Xiwu Lin and Fangyu Gao
Dept of Statistics, Univ of Wisconsin
1210 W Dayton Street, Madison, WI 53706
wahba,xiwu,fgao@stat.wisc.edu

Dong Xiang
SAS Institute, Inc.
SAS Campus Drive, Cary, NC 27513
sasdxx@unx.sas.com

Ronald Klein, MD and Barbara Klein, MD
Dept of Ophthalmology
610 North Walnut Street, Madison, WI 53706
kleinr,kleinb@epi.ophth.wisc.edu

Abstract

We propose a new in-sample cross validation based method (randomized GACV) for choosing smoothing or bandwidth parameters that govern the bias-variance or fit-complexity tradeoff in 'soft' classification. Soft classification refers to a learning procedure which estimates the probability that an example with a given attribute vector is in class 1 vs class 0. The target for optimizing the tradeoff is the Kullback-Leibler distance between the estimated probability distribution and the 'true' probability distribution, representing knowledge of an infinite population. The method uses a randomized estimate of the trace of a Hessian and mimics cross validation at the cost of a single relearning with perturbed outcome data.

1 INTRODUCTION

We propose and test a new in-sample cross-validation based method for optimizing the bias-variance tradeoff in 'soft classification' (Wahba et al. 1994), called ranGACV (randomized Generalized Approximate Cross Validation). Summarizing from Wahba et al. (1994), we are given a training set consisting of $n$ examples, where for each example we have a vector $t \in \mathcal{T}$ of attribute values, and an outcome $y$, which is either 0 or 1. Based on the training data it is desired to estimate the probability $p$ of the outcome 1 for any new examples in the future.
In 'soft' classification the estimate $p_\lambda(t)$ of $p(t)$ is of particular interest, and might be used by a physician to tell patients how they might modify their risk $p$ by changing (some component of) $t$, for example, cholesterol as a risk factor for heart attack. Penalized likelihood estimates are obtained for $p$ by assuming that the logit $f(t)$, $t \in \mathcal{T}$, which satisfies $p(t) = e^{f(t)}/(1 + e^{f(t)})$, is in some space $\mathcal{H}$ of functions. Technically $\mathcal{H}$ is a reproducing kernel Hilbert space, but you don't need to know what that is to read on. Let the training set be $\{y_i, t_i, i = 1, \ldots, n\}$. Letting $f_i = f(t_i)$, the negative log likelihood $\mathcal{L}\{y_i, t_i, f_i\}$ of the observations, given $f$, is
$$\mathcal{L}\{y_i, t_i, f_i\} = \sum_{i=1}^{n}[-y_if_i + b(f_i)], \qquad (1)$$
where $b(f) = \log(1 + e^f)$. The penalized likelihood estimate of the function $f$ is the solution to: Find $f \in \mathcal{H}$ to minimize $I_\lambda(f)$:
$$I_\lambda(f) = \sum_{i=1}^{n}[-y_if_i + b(f_i)] + J_\lambda(f), \qquad (2)$$
where $J_\lambda(f)$ is a quadratic penalty functional depending on parameter(s) $\lambda = (\lambda_1, \ldots, \lambda_q)$ which govern the so-called bias-variance tradeoff. Equivalently, the components of $\lambda$ control the tradeoff between the complexity of $f$ and the fit to the training data. In this paper we sketch the derivation of the ranGACV method for choosing $\lambda$, and present some preliminary but favorable simulation results, demonstrating its efficacy. This method is designed for use with penalized likelihood estimates, but it is clear that it can be used with a variety of other methods which contain bias-variance parameters to be chosen, and for which minimizing the Kullback-Leibler ($KL$) distance is the target. In the work of which this is a part, we are concerned with $\lambda$ having multiple components. Thus, it will be highly convenient to have an in-sample method for selecting $\lambda$, if one that is accurate and computationally convenient can be found. Let $p_\lambda$ be the estimate and $p$ be the 'true' but unknown probability function, and let $p_i = p(t_i)$, $p_{\lambda i} = p_\lambda(t_i)$.
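The objective in (1)-(2) can be written down directly. A minimal numerical sketch of ours, with a simple ridge-type penalty $J_\lambda(f) = \lambda f'f$ standing in for the general quadratic penalty functional:

```python
import numpy as np

def neg_log_lik(y, f):
    """Equation (1): sum_i [ -y_i f_i + b(f_i) ], with b(f) = log(1 + e^f),
    computed stably via logaddexp."""
    return np.sum(-y * f + np.logaddexp(0.0, f))

def penalized_objective(y, f, lam):
    """Equation (2) with the illustrative quadratic penalty J_lambda(f) = lam * f'f."""
    return neg_log_lik(y, f) + lam * np.dot(f, f)

def prob(f):
    """p = e^f / (1 + e^f), the inverse logit."""
    return 1.0 / (1.0 + np.exp(-f))
```

Larger `lam` pulls the minimizing $f$ toward zero (simpler fits, more bias, less variance), which is exactly the tradeoff the components of $\lambda$ control.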
For in-sample tuning, our criterion for a good choice of λ is the KL distance

KL(p, p_λ) = (1/n) Σ_{i=1}^n [p_i log(p_i / p_{λi}) + (1 - p_i) log((1 - p_i)/(1 - p_{λi}))].

We may replace KL(p, p_λ) by the comparative KL distance (CKL), which differs from KL by a quantity which does not depend on λ. Letting f_{λi} = f_λ(t_i), the CKL is given by

CKL(p, p_λ) ≡ CKL(λ) = (1/n) Σ_{i=1}^n [-p_i f_{λi} + b(f_{λi})].   (3)

CKL(λ) depends on the unknown p, and it is desired to have a good estimate or proxy for it, which can then be minimized with respect to λ. It is known (Wong 1992) that no exact unbiased estimate of CKL(λ) exists in this case, so that only approximate methods are possible. A number of authors have tackled this problem, including Utans and Moody (1993), Liu (1993), Gu (1992). The iterative UBR method of Gu (1992) is included in GRKPACK (Wang 1997), which implements general smoothing spline ANOVA penalized likelihood estimates with multiple smoothing parameters. It has been successfully used in a number of practical problems; see, for example, Wahba et al (1994, 1995). The present work represents an approach in the spirit of GRKPACK but which employs several approximations, and may be used with any data set, no matter how large, provided that an algorithm for solving the penalized likelihood equations, either exactly or approximately, can be implemented.

2 THE GACV ESTIMATE

In the general penalized likelihood problem the minimizer f_λ(·) of (2) has a representation

f_λ(t) = Σ_{ν=1}^M d_ν φ_ν(t) + Σ_{i=1}^n c_i Q_λ(t_i, t)   (4)

where the φ_ν span the null space of J_λ, Q_λ(s, t) is a reproducing kernel (positive definite function) for the penalized part of H, and c = (c_1, ..., c_n)' satisfies M linear conditions, so that there are (at most) n free parameters in f_λ. Typically the unpenalized functions φ_ν are low degree polynomials. Examples of Q(t_i, ·) include radial basis functions and various kinds of splines; minor modifications include sigmoidal basis functions, tree basis functions and so on.
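The claim above that the CKL differs from the KL distance by a quantity not depending on λ can be checked numerically. The sketch below uses made-up probabilities (not the paper's data) and verifies that CKL − KL equals the entropy of the true p, whatever the candidate estimate f_λ:

```python
import numpy as np

def b(f):
    return np.log1p(np.exp(f))

def kl(p, p_lam):
    """KL distance between true p and estimate p_lambda (averaged over the sample)."""
    return np.mean(p * np.log(p / p_lam) + (1 - p) * np.log((1 - p) / (1 - p_lam)))

def ckl(p, f_lam):
    """Comparative KL distance (3): (1/n) sum_i [-p_i f_{lambda,i} + b(f_{lambda,i})]."""
    return np.mean(-p * f_lam + b(f_lam))

rng = np.random.default_rng(0)
p = rng.uniform(0.1, 0.9, size=50)      # made-up 'true' probabilities
for _ in range(3):                      # three different candidate estimates f_lambda
    f_lam = rng.normal(size=50)
    p_lam = 1 / (1 + np.exp(-f_lam))
    const = ckl(p, f_lam) - kl(p, p_lam)
    print(round(const, 6))              # the same constant each time
```

The printed constant is the entropy of p, which is fixed by the unknown truth; minimizing CKL over λ is therefore equivalent to minimizing KL.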
See, for example, Wahba (1990, 1995), Girosi, Jones and Poggio (1995). If f_λ(·) is of the form (4) then J_λ(f_λ) is a quadratic form in c. Substituting (4) into (2) results in I_λ, a convex functional in c and d, and c and d are obtained numerically via a Newton-Raphson iteration, subject to the conditions on c. For large n, the second sum on the right of (4) may be replaced by Σ_{k=1}^K c_{i_k} Q_λ(t_{i_k}, t), where the t_{i_k} are chosen via one of several principled methods. To obtain the GACV we begin with the ordinary leaving-out-one cross validation function CV(λ) for the CKL:

CV(λ) = (1/n) Σ_{i=1}^n [-y_i f_{λi}^{[-i]} + b(f_{λi})],   (5)

where f_λ^{[-i]} is the solution to the variational problem of (2) with the ith data point left out and f_{λi}^{[-i]} is the value of f_λ^{[-i]} at t_i. Although f_λ(·) is computed by solving for c and d, the GACV is derived in terms of the values (f_1, ..., f_n)' of f at the t_i. Where there is no confusion between the function f(·) and the vector (f_1, ..., f_n)' of values of f at t_1, ..., t_n, we let f = (f_1, ..., f_n)'. For any f(·) of the form (4), J_λ(f) also has a representation as a non-negative definite quadratic form in (f_1, ..., f_n)'. Letting Σ_λ be twice the matrix of this quadratic form, we can rewrite (2) as

I_λ(f, y) = Σ_{i=1}^n [-y_i f_i + b(f_i)] + (1/2) f' Σ_λ f.   (6)

Let W = W(f) be the n × n diagonal matrix with w_ii ≡ p_i(1 - p_i) in the iith position. Using the fact that w_ii is the second derivative of b(f_i), we have that H = [W + Σ_λ]^{-1} is the inverse Hessian of the variational problem (6). In Xiang and Wahba (1996), several Taylor series approximations, along with a generalization of the leaving-out-one lemma (see Wahba 1990), are applied to (5) to obtain an approximate cross validation function ACV(λ), which is a second order approximation to CV(λ). Letting h_ii be the iith entry of H, the result is

CV(λ) ≈ ACV(λ) = (1/n) Σ_{i=1}^n [-y_i f_{λi} + b(f_{λi})] + (1/n) Σ_{i=1}^n h_ii y_i (y_i - p_{λi}) / [1 - h_ii w_ii].   (7)

Then the GACV is obtained from the ACV by replacing h_ii by (1/n) Σ_{i=1}^n h_ii ≡ (1/n) tr(H) and replacing 1 - h_ii w_ii by (1/n) tr[I - (W^{1/2} H W^{1/2})], giving

GACV(λ) = (1/n) Σ_{i=1}^n [-y_i f_{λi} + b(f_{λi})] + (tr(H)/n) · Σ_{i=1}^n y_i (y_i - p_{λi}) / tr[I - (W^{1/2} H W^{1/2})],   (8)

where W is evaluated at f_λ. Numerical results based on an exact calculation of (8) appear in Xiang and Wahba (1996). The exact calculation is limited to small n, however.

3 THE RANDOMIZED GACV ESTIMATE

Given any 'black box' which, given λ and a training set {y_i, t_i}, produces f_λ(·) as the minimizer of (2), and thence f_λ = (f_{λ1}, ..., f_{λn})', we can produce randomized estimates of tr(H) and tr[I - W^{1/2} H W^{1/2}] without any explicit calculation of these matrices. This is done by running the 'black box' on perturbed data {y_i + δ_i, t_i}. For y_i Gaussian, randomized trace estimates of the Hessian of the variational problem (the 'influence matrix') have been studied extensively and shown to be essentially as good as exact calculations for large n; see for example Girard (1998). Randomized trace estimates are based on the fact that if A is any square matrix and δ is a zero mean random n-vector with independent components with variance σ_δ², then E δ'Aδ = σ_δ² tr A. See Gong et al (1998) and references cited there for experimental results with multiple regularization parameters. Returning to the 0-1 data case, it is easy to see that the minimizer f_λ(·) of I_λ is continuous in y, notwithstanding the fact that in our training set the y_i take on only the values 0 or 1. Letting f_λ^y = (f_{λ1}, ..., f_{λn})' be the minimizer of (6) given y = (y_1, ..., y_n)', and f_λ^{y+δ} be the minimizer given data y + δ = (y_1 + δ_1, ..., y_n + δ_n)' (the t_i remain fixed), Xiang and Wahba (1997) show, again using Taylor series expansions, that f_λ^{y+δ} - f_λ^y ≈ [W(f_λ^y) + Σ_λ]^{-1} δ. This suggests that (1/σ_δ²) δ'(f_λ^{y+δ} - f_λ^y) provides an estimate of tr[W(f_λ^y) + Σ_λ]^{-1}.
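The identity E δ'Aδ = σ_δ² tr A behind these randomized trace estimates can be demonstrated directly. In the sketch below the matrix A is an arbitrary stand-in (not an actual influence matrix), and averaging over replicate perturbation vectors recovers its trace:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
B = rng.normal(size=(n, n))
A = B @ B.T / n                      # an arbitrary square matrix standing in for [W + Sigma]^(-1)

sigma2 = 0.001                       # variance of the perturbation components
R = 2000                             # replicate perturbation vectors
estimates = []
for _ in range(R):
    delta = rng.normal(scale=np.sqrt(sigma2), size=n)
    # E[delta' A delta] = sigma_delta^2 * tr(A), so delta'A delta / sigma2 estimates tr(A)
    estimates.append(delta @ A @ delta / sigma2)

print(np.trace(A), np.mean(estimates))   # the two values should be close
```

In the paper only matrix-vector products of the form f_λ^{y+δ} − f_λ^y are available from the black box, so the quadratic form δ'(f_λ^{y+δ} − f_λ^y) is the practical version of δ'Aδ.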
However, if we take the solution f_λ^y to the nonlinear system for the original data y as the initial value for a Newton-Raphson calculation of f_λ^{y+δ}, things become even simpler. Applying a one-step Newton-Raphson iteration gives

f_λ^{y+δ,1} = f_λ^y - [∂²I_λ/∂f² (f_λ^y, y+δ)]^{-1} ∂I_λ/∂f (f_λ^y, y+δ).   (9)

Since ∂I_λ/∂f (f_λ^y, y+δ) = -δ + ∂I_λ/∂f (f_λ^y, y) = -δ, and ∂²I_λ/∂f² (f_λ^y, y+δ) = ∂²I_λ/∂f² (f_λ^y, y), we have f_λ^{y+δ,1} - f_λ^y = [W(f_λ^y) + Σ_λ]^{-1} δ. The result is the following ranGACV function:

ranGACV(λ) = (1/n) Σ_{i=1}^n [-y_i f_{λi} + b(f_{λi})] + [δ'(f_λ^{y+δ,1} - f_λ^y) / n] · Σ_{i=1}^n y_i (y_i - p_{λi}) / [δ'δ - δ'W(f_λ^y)(f_λ^{y+δ,1} - f_λ^y)].   (10)

To reduce the variance in the term after the '+' in (10), we may draw R independent replicate vectors δ_1, ..., δ_R, and replace the term after the '+' in (10) by

(1/R) Σ_{r=1}^R [δ_r'(f_λ^{y+δ_r,1} - f_λ^y) / n] · Σ_{i=1}^n y_i (y_i - p_{λi}) / [δ_r'δ_r - δ_r'W(f_λ^y)(f_λ^{y+δ_r,1} - f_λ^y)]

to obtain an R-replicated ranGACV(λ) function.

4 NUMERICAL RESULTS

In this section we present simulation results which are representative of more extensive simulations to appear elsewhere. In each case, K << n basis functions were chosen by a sequential clustering algorithm: the t_i were grouped into K clusters and one member of each cluster selected at random, and the model fit; then the number of clusters is doubled and the model fit again, continuing until the fit does not change. In the randomized trace estimates the random variates were Gaussian. Penalty functionals were (multivariate generalizations of) the cubic spline penalty functional λ ∫_0^1 (f''(x))² dx, and smoothing spline ANOVA models were fit.

4.1 EXPERIMENT 1. SINGLE SMOOTHING PARAMETER

In this experiment t ∈ [0,1], f(t) = 2 sin(10t), t_i = (i - .5)/500, i = 1, ..., 500. A random number generator produced 'observations' y_i = 1 with probability p_i = e^{f_i}/(1 + e^{f_i}), to get the training set. Q_λ is given in Wahba (1990) for this cubic spline case; K = 50. Since the true p is known, the true CKL can be computed. Fig.
1(a) gives a plot of CKL(λ) and 10 replicates of ranGACV(λ). In each replicate R was taken as 1, and δ was generated anew as a Gaussian random vector with σ_δ² = .001. Extensive simulations with different σ_δ² showed that the results were insensitive to σ_δ² from 1.0 to 10^{-6}. The minimizer of CKL is at the filled-in circle and the 10 minimizers of the 10 replicates of ranGACV are the open circles. Any one of these 10 provides a rather good estimate of the λ that goes with the filled-in circle. Fig. 1(b) gives the same experiment, except that this time R = 5. It can be seen that the minimizers of ranGACV become even more reliable estimates of the minimizer of CKL, and the CKL at all of the ranGACV estimates are actually quite close to its minimum value.

4.2 EXPERIMENT 2. ADDITIVE MODEL WITH λ = (λ_1, λ_2)

Here t ∈ [0,1] ⊗ [0,1]. n = 500 values of t_i were generated randomly according to a uniform distribution on the unit square and the y_i were generated according to p_i = e^{f_i}/(1 + e^{f_i}) with t = (x_1, x_2) and f(t) = 5 sin 2πx_1 - 3 sin 2πx_2. An additive model, a special case of the smoothing spline ANOVA model (see Wahba et al, 1995), of the form f(t) = μ + f_1(x_1) + f_2(x_2) with cubic spline penalties on f_1 and f_2, was used. K = 50, σ_δ² = .001, R = 5. Figure 1(c) gives a plot of CKL(λ_1, λ_2) and Figure 1(d) gives a plot of ranGACV(λ_1, λ_2). The open circles mark the minimizer of ranGACV in both plots and the filled-in circle marks the minimizer of CKL. The inefficiency, as measured by CKL(λ̂)/min_λ CKL(λ), is 1.01. Inefficiencies near 1 are typical of our other similar simulations.

4.3 EXPERIMENT 3. COMPARISON OF ranGACV AND UBR

This experiment used a model similar to the model fit by GRKPACK for the risk of progression of diabetic retinopathy given t = (x_1, x_2, x_3) = (duration, glycosylated hemoglobin, body mass index) in Wahba et al (1995) as 'truth'.
A training set of 669 examples was generated according to that model, which had the structure f(x_1, x_2, x_3) = μ + f_1(x_1) + f_2(x_2) + f_3(x_3) + f_{1,3}(x_1, x_3). This (synthetic) training set was fit by GRKPACK and also using K = 50 basis functions with ranGACV. Here there are p = 6 smoothing parameters (there are 3 smoothing parameters in f_{13}) and the ranGACV function was searched by a downhill simplex method to find its minimizer. Since the 'truth' is known, the CKL for λ̂ and for the GRKPACK fit using the iterative UBR method were computed. This was repeated 100 times, and the 100 pairs of CKL values appear in Figure 1(e). It can be seen that UBR and ranGACV give similar CKL values about 90% of the time, while ranGACV has lower CKL for most of the remaining cases.

4.4 DATA ANALYSIS: AN APPLICATION

Figure 1(f) represents part of the results of a study of association at baseline of pigmentary abnormalities with various risk factors in 2585 women between the ages of 43 and 86 in the Beaver Dam Eye Study, R. Klein et al (1995). The attributes are: x_1 = age, x_2 = body mass index, x_3 = systolic blood pressure, x_4 = cholesterol. x_5 and x_6 are indicator variables for taking hormones and history of drinking. The smoothing spline ANOVA model fitted was f(t) = μ + d_1 x_1 + d_2 x_2 + f_3(x_3) + f_4(x_4) + f_{34}(x_3, x_4) + d_5 I(x_5) + d_6 I(x_6), where I is the indicator function. Figure 1(f) represents a cross section of the fit for x_5 = no, x_6 = no, with x_2, x_3 fixed at their medians and x_1 fixed at the 75th percentile. The dotted lines are the Bayesian confidence intervals; see Wahba et al (1995). There is a suggestion of a borderline inverse association of cholesterol. The reason for this association is uncertain. More details will appear elsewhere. Principled soft classification procedures can now be implemented in much larger data sets than previously possible, and the ranGACV should be applicable in general learning.
References

Girard, D. (1998), 'Asymptotic comparison of (partial) cross-validation, GCV and randomized GCV in nonparametric regression', Ann. Statist. 26, 315-334.

Girosi, F., Jones, M. & Poggio, T. (1995), 'Regularization theory and neural networks architectures', Neural Computation 7, 219-269.

Gong, J., Wahba, G., Johnson, D. & Tribbia, J. (1998), 'Adaptive tuning of numerical weather prediction models: simultaneous estimation of weighting, smoothing and physical parameters', Monthly Weather Review 125, 210-231.

Gu, C. (1992), 'Penalized likelihood regression: a Bayesian analysis', Statistica Sinica 2, 255-264.

Klein, R., Klein, B. & Moss, S. (1995), 'Age-related eye disease and survival. The Beaver Dam Eye Study', Arch Ophthalmol 113, 1995.

Liu, Y. (1993), Unbiased estimate of generalization error and model selection in neural network, manuscript, Department of Physics, Institute of Brain and Neural Systems, Brown University.

Utans, J. & Moody, J. (1993), Selecting neural network architectures via the prediction risk: application to corporate bond rating prediction, in 'Proc. First Int'l Conf. on Artificial Intelligence Applications on Wall Street', IEEE Computer Society Press.

Wahba, G. (1990), Spline Models for Observational Data, SIAM. CBMS-NSF Regional Conference Series in Applied Mathematics, v. 59.

Wahba, G. (1995), Generalization and regularization in nonlinear learning systems, in M. Arbib, ed., 'Handbook of Brain Theory and Neural Networks', MIT Press, pp. 426-430.

Wahba, G., Wang, Y., Gu, C., Klein, R. & Klein, B. (1994), Structured machine learning for 'soft' classification with smoothing spline ANOVA and stacked tuning, testing and evaluation, in J. Cowan, G. Tesauro & J. Alspector, eds, 'Advances in Neural Information Processing Systems 6', Morgan Kaufmann, pp. 415-422.

Wahba, G., Wang, Y., Gu, C., Klein, R. & Klein, B.
(1995), 'Smoothing spline ANOVA for exponential families, with application to the Wisconsin Epidemiological Study of Diabetic Retinopathy', Ann. Statist. 23, 1865-1895.

Wang, Y. (1997), 'GRKPACK: Fitting smoothing spline analysis of variance models to data from exponential families', Commun. Statist. Sim. Comp. 26, 765-782.

Wong, W. (1992), Estimation of the loss of an estimate, Technical Report 356, Dept. of Statistics, University of Chicago, Chicago, IL.

Xiang, D. & Wahba, G. (1996), 'A generalized approximate cross validation for smoothing splines with non-Gaussian data', Statistica Sinica 6, 675-692. Preprint TR 930 available via www.stat.wisc.edu/~wahba -> TRLIST.

Xiang, D. & Wahba, G. (1997), Approximate smoothing spline methods for large data sets in the binary case, Technical Report 982, Department of Statistics, University of Wisconsin, Madison WI. To appear in the Proceedings of the 1997 ASA Joint Statistical Meetings, Biometrics Section, pp. 94-98 (1998). Also in TRLIST as above.

Figure 1: (a) and (b): Single smoothing parameter comparison of ranGACV and CKL. (c) and (d): Two smoothing parameter comparison of ranGACV and CKL. (e): Comparison of ranGACV and UBR. (f): Probability estimate from Beaver Dam Study.
1998
The Role of Lateral Cortical Competition in Ocular Dominance Development

Christian Piepenbrock and Klaus Obermayer, Dept. of Computer Science, Technical University of Berlin, FR 2-1; Franklinstr. 28-29; 10587 Berlin, Germany; {piep,oby}@cs.tu-berlin.de; http://www.ni.cs.tu-berlin.de

Abstract

Lateral competition within a layer of neurons sharpens and localizes the response to an input stimulus. Here, we investigate a model for the activity dependent development of ocular dominance maps which allows us to vary the degree of lateral competition. For weak competition, it resembles a correlation-based learning model, and for strong competition, it becomes a self-organizing map. Thus, in the regime of weak competition the receptive fields are shaped by the second order statistics of the input patterns, whereas in the regime of strong competition, the higher moments and "features" of the individual patterns become important. When correlated localized stimuli from two eyes drive the cortical development we find (i) that a topographic map and binocular, localized receptive fields emerge when the degree of competition exceeds a critical value and (ii) that receptive fields exhibit eye dominance beyond a second critical value. For anti-correlated activity between the eyes, the second order statistics drive the system to develop ocular dominance even for weak competition, but no topography emerges. Topography is established only beyond a critical degree of competition.

1 Introduction

Several models have been proposed in the past to explain the activity dependent development of ocular dominance (OD) in the visual cortex. Some models make the ansatz of linear interactions between cortical model neurons [2, 7]; other approaches assume competitive winner-take-all dynamics with intracortical interactions [3, 5]. The mechanisms that lead to ocular dominance critically depend on this choice.
In linear activity models, second order correlations of the input patterns determine the receptive fields. Nonlinear competitive models like the self-organizing map, however, use higher order statistics of the input stimuli and map their features. In this contribution, we introduce a general nonlinear Hebbian development rule which interpolates the degree of lateral competition and allows us to systematically study the role of non-linearity in the lateral interactions on pattern formation and the transition between two classes of models.

Figure 1: Model for OD development: the input patterns P_i^{Lμ} and P_i^{Rμ} in the LGN drive the Hebbian modification of the cortical afferent synaptic weights S_{xi}^L and S_{xi}^R. Cortical neurons are in competition and interact with effective strengths I_{xy}. Locations in the LGN are indexed i or j; cortical locations are labeled x or y.

2 Ocular Dominance Map Development by Hebbian Learning

Figure 1 shows our basic model framework for ocular dominance development. We consider two input layers in the lateral geniculate nucleus (LGN). The input patterns μ = 1, ..., U on these layers originate from the two eyes and completely characterize the input statistics (the mean activity P̄ is identical for all input neurons). The afferent synaptic connection strengths of cortical cells develop according to a generalized Hebbian learning rule with learning rate η:

ΔS_{xi}^{Lμ} = η Σ_y I_{xy} O_y^μ P_i^{Lμ}.   (1)

An analogous rule is used for the connections from the right eye, S_{xi}^R. We use ν = 2 in the following and rescale the length of each neuron's receptive field weight vector to a constant length after a learning step.
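A minimal sketch of one update of the Hebbian rule (1), with made-up dimensions and a placeholder activity vector for the cortical outputs; for brevity only the left-eye weights are renormalized here, whereas the paper constrains the concatenated left/right weight vector:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 16, 25                        # cortical neurons x, LGN inputs i (illustrative sizes)
S_L = rng.uniform(0.4, 0.6, (N, M))  # left-eye afferent weights S_xi^L
# Gaussian effective cortical interactions I_xy (1-D index for simplicity)
I = np.exp(-np.subtract.outer(np.arange(N), np.arange(N)) ** 2 / 4.5)
P_L = rng.uniform(0, 1, M)           # one left-eye input pattern P_i^L
O = rng.dirichlet(np.ones(N))        # placeholder cortical output activities O_y

eta = 0.01
# Hebbian step of equation (1): Delta S_xi = eta * sum_y I_xy O_y * P_i
S_L += eta * (I @ O)[:, None] * P_L[None, :]
# rescale each receptive field vector to constant length (nu = 2 normalization)
S_L /= np.linalg.norm(S_L, axis=1, keepdims=True)
print(np.linalg.norm(S_L, axis=1))   # each row now has unit length
```

The interaction term I @ O spreads a neuron's activity to its neighbors, which is what produces smooth maps in the full simulations.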
The model includes effective cortical interactions I_{xy} for the development of smooth cortical maps that spread the output activities O_y^μ in the neighborhood of neuron x (with a mean Ī = (1/N) Σ_x I_{xy} for N output neurons). The cortical output signals are connectionist neurons with a nonlinear activation function g(·),

O_x^μ = g(H_x^μ) = exp(βH_x^μ) / Σ_z exp(βH_z^μ)  with  H_x^μ = Σ_j (S_{xj}^L P_j^{Lμ} + S_{xj}^R P_j^{Rμ}),   (2)

which models the effect of cortical response sharpening and competition for an input stimulus. The degree of competition is determined by the parameter β. Such dynamics may result as an effect of local excitation and long range inhibition within the cortical layer [6, 1], and in the limits of weak and strong competition, we recover two known types of developmental models: the correlation based learning model and the self-organizing map.

2.1 From Linear Neurons to Winner-take-all Networks

In the limit β → 0 of weak cortical competition, the output O_x^μ becomes a linear function of the input. A Taylor series expansion around β = 0 yields a correlation-based-learning (CBL) rule in the average over all patterns,

ΔS_{xi}^L ∝ ηβ Σ_{z,j} (I_{xz} - Ī)(S_{zj}^L C_{ji}^{LL} + S_{zj}^R C_{ji}^{RL}) + const.,

where C_{ji}^{RL} = (1/U) Σ_μ P_j^{Rμ} P_i^{Lμ} is the correlation function of the input patterns. Ocular dominance development under this rule requires correlated activity between inputs from within one eye and anti-correlated activity (or uncorrelated activity with synaptic competition) between the two eyes [2, 4]. It is important to note, however, that CBL models cannot explain the emergence of a topographic projection. The topography has to be hard-wired from the outset of the development process, which is usually implemented by an "arbor function" that forces all non-topographic synaptic weights to zero.

Figure 2: The network response for different degrees of cortical competition (CBL limit, β = 2.5, β = 32, SOM limit): the plots show the activity rates Σ_y I_{xy} O_y^μ for a network of cortical output neurons (the plots are scaled to have equal maxima). Each gridpoint represents the activity of one neuron on a 16 × 16 grid. The interactions I_{xy} are Gaussian (variance 2.25 grid points) and all neurons are stimulated with the same Gaussian stimulus (variance 2.25). The neurons have Gaussian receptive fields (variance σ² = 4.5) in a topographic map with additive noise (uniformly distributed with amplitude 10 times the maximum weight value).

Strong competition with β → ∞, on the other hand, leads to a self-organizing map [3, 5],

ΔS_{xi}^μ = η I_{x q(μ)} P_i^{Lμ}  with  q(μ) = argmax_y Σ_j (S_{yj}^L P_j^{Lμ} + S_{yj}^R P_j^{Rμ}).

Models of this type use the higher order statistics of the input patterns and map the important features of the input. In the SOM limit, the output activity pattern is identical in shape for all input stimuli. The input influences only the location of the activity on the output layer but does not affect its shape. For intermediate values of β, the shape of the output activity patterns depends on the input. The activity of neurons with receptive fields that match the input stimulus better than others is amplified, whereas the activity of poorly responding neurons is further suppressed, as shown in figure 2. On the one hand, the resulting output activity profiles for intermediate β may be biologically more realistic than the winner-take-all limit case. On the other hand, the difference between the linear response case (low β) and the nonlinear competition (intermediate β) is important in the Hebbian development process: it yields qualitatively different results, as we show in the next section.

2.2 Simulations of Ocular Dominance Development

In the following, we study the transition from linear CBL models to winner-take-all SOM networks for intermediate values of β.
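The two limits of the competitive output (2) can be seen directly from the softmax; the inputs H_x below are arbitrary illustrative numbers, not simulation values:

```python
import numpy as np

def cortical_output(H, beta):
    """Softmax competition of equation (2): O_x = exp(beta*H_x) / sum_z exp(beta*H_z)."""
    e = np.exp(beta * (H - H.max()))   # shift by the max for numerical stability
    return e / e.sum()

H = np.array([1.0, 1.2, 0.9, 1.1])     # illustrative total afferent inputs H_x^mu

print(cortical_output(H, beta=0.01))   # weak competition: nearly uniform (linear/CBL regime)
print(cortical_output(H, beta=100.0))  # strong competition: winner-take-all (SOM limit)
```

For small β the output is approximately linear in H (a Taylor expansion around β = 0 gives the CBL rule), while for large β all activity concentrates on the best-matching neuron.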
We consider input patterns that are localized and show ocular dominance,

P_i^{Lμ} = (0.5 + eye^L(μ)) · (1/(2πσ²)) exp(-(i - loc(μ))² / (2σ²))  with  eye^L(μ) = -eye^R(μ).   (3)

Each stimulus μ is of Gaussian shape centered on a random position loc(μ) within the input layer, and the neuron index i is interpreted as a two-dimensional location vector in the input layer. The parameter eye(μ) sets the eye dominance for each stimulus; eye = 0 produces binocular stimuli and eye = ±½ results in uncorrelated left and right eye activities. We have simulated the development of receptive fields and cortical maps according to equations 1 and 2 (see figure 3) for square grids of model neurons with periodic boundary conditions, Gaussian cortical interactions, and OD stimuli (equation 3). The learning rate is set at the first stimulus presentation to change the weights of the best responding neuron by half a percent. After each learning step the weights are rescaled to enforce the constraint from equation 1.

Figure 3: Simulation of ocular dominance development for a varying degree of cortical competition β in a network of 16 × 16 neurons in each layer. The figure shows receptive field sizes (left) and mean OD value (right) as a function of cortical competition β. Each point in the figure represents one simulation with 30000 pattern presentations. The cortical interactions are Gaussian with a variance of γ² = 2.25 grid points. The Gaussian input stimuli are 5.66 times stronger in one eye than in the other (equation 3 with σ² = 2.25, eye(μ) = ±0.35). The synaptic weights are initialized with a noisy topographic map (curves labeled "no OD") and additionally with ocular dominance stripes (curves labeled "with OD"). To determine the receptive field size we have applied a Gaussian fit to all receptive field profiles S_{xi}^L and S_{xi}^R and averaged the standard deviation (in grid points) over all neurons x. The mean OD value is given by (1/N) Σ_x |Σ_i (S_{xi}^L - S_{xi}^R) / Σ_i (S_{xi}^L + S_{xi}^R)|.

The simulations yield the results expected in the CBL and SOM limit cases (small and large β) for initially constant synaptic weight values with 5 percent additional noise. In the CBL limit, our particular choice of input patterns does not lead to the development of ocular dominance, because the necessary conditions for the input pattern correlations are not satisfied: the pattern correlations and interactions are all positive. Instead, the learning rule has only one fixpoint with uniform synaptic weights, i.e. unstructured receptive fields that cover the whole input layer. In the SOM limit, our set of stimuli leads to the emergence of a topographic projection with localized receptive fields and ocular dominance stripes. The topographic maps often develop defects which can be avoided by an annealing scheme. Instead of annealing β or the cortical interaction range, however, we initialize the weights with a topographic projection and some additive noise. This is a common assumption in cortical development models [2], because the fibers from the LGN first innervate the visual cortex already in a coarsely topographic order. For intermediate degrees of cortical competition, we find sharp transitions between the CBL and SOM states and distinguish three parameter regimes (see figure 3). For weak competition (A) all receptive fields are unstructured and cover the whole input layer. At some critical β*, the receptive fields begin to form a topographic projection from the geniculate to the cortical layer.
This projection (B) has no stable ocular dominance stripes, but a small degree of ocular dominance that fluctuates continuously. For yet stronger competition (C), a cortical map with stable ocular dominance stripes emerges. The simulations, however, show that a topographic map without ocular dominance remains a stable attractor of the learning dynamics (C). For increasing competition its basin of attraction becomes smaller, and smaller learning rates are necessary in order to remain within the binocular state. On the one hand, simulations with slowly increasing β lead to a topographic map, and ocular dominance stripes suddenly pop up somewhere in regime C, for small learning rates later than for large ones. On the other hand, in simulations with decreasing β and an initially topographic map with ocular dominance, we find a second critical β+ at which the OD map becomes unstable. To understand the system's properties better, we analytically predict the value β* (the point where structured receptive fields emerge) and discuss the relation to cost functions to get some intuition about the value β+ in the following paragraph.

Figure 4: Simulations for the learning equation 5. The figure shows the mean ocular dominance (left) and the cost (right) as a function of β. The parameters are identical to figure 3 and eye(μ) = ±0.425.
2.3 Analysis of the Emergence of Structured Receptive Fields

For β < β* the system shows basically CBL properties, in our case constant weights and unstructured receptive fields. It is possible to study the stability of this state analytically. We consider the learning equation 1 under a hard renormalization constraint that enforces Σ_{i=1}^M (S_{xi}^L)² + (S_{xi}^R)² = 2M S̄² by rescaling the weights after each learning step. A linear perturbation analysis of the learning rule around constant weights yields a critical degree of competition

β* = (S̄ λ_max^K λ_max^I)^{-1},

where S̄ is the strength of the constant synaptic weights. λ_max^K is the largest eigenvalue of the input covariance matrix

(1/P̄) ([C_{ji}^{LL}, C_{ji}^{LR}; C_{ji}^{RL}, C_{ji}^{RR}] - P̄²),

which has to be diagonalized with respect to L and R, as well as with respect to i and j. The input correlation functions for the patterns from equation 3 are given by C_{ji}^{LL} = C_{ji}^{RR} = (¼ + 2 eye²) G(i - j, 2σ²) and C_{ji}^{LR} = C_{ji}^{RL} = (¼ - 2 eye²) G(i - j, 2σ²), where G(x, σ²) is a two-dimensional Gaussian with variance σ². The eigenvalues with respect to L and R in this symmetric case are the sum and difference terms of the correlation functions, K_{ij}^{sum} = (1/P̄)(C_{ij}^{LL} + C_{ij}^{LR} - P̄²) and K_{ij}^{diff} = (1/P̄)(C_{ij}^{LL} - C_{ij}^{LR}). The term K^{sum} is larger for positive input correlations, and in the next step we have to find the eigenvalues of this matrix. For periodic boundary conditions and in the limit of large networks, we can approximate the eigenvalue by the Fourier transform of the Gaussian and finally obtain λ_max^K = exp(-(σ2π/m)²) (for a square grid of M = m × m neurons). λ_max^I is the largest eigenvalue of (I_{xz} - Ī), and Gaussian cortical interactions I_{xy} with variance γ² on N = n × n output neurons yield λ_max^I = exp(-½(γ2π/n)²). Stronger competition beyond the point β* leads to the formation of structured receptive fields. It is interesting to note that the critical β* does not depend on eye(μ), the strength of ocularity in the input patterns.
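The Fourier-transform approximation used above rests on the fact that, with periodic boundary conditions, the interaction matrix is circulant and its eigenvalues are the DFT of the interaction profile. A one-dimensional sketch of this fact (illustrative m and γ², not the paper's 2-D computation):

```python
import numpy as np

m, gamma2 = 64, 2.25
x = np.arange(m)
d = np.minimum(x, m - x)                 # periodic distance on a ring of m neurons
g = np.exp(-d**2 / (2 * gamma2))         # Gaussian interaction profile
g = g - g.mean()                         # subtract the mean interaction (I_xy - Ibar)

# For a circulant matrix built from g, the eigenvalues are the DFT of g
# (real-valued here because g is symmetric under d -> m - d).
eigs = np.fft.fft(g).real
print(eigs.max())                        # largest eigenvalue, cf. lambda_max^I
```

In the paper this DFT is then approximated by the continuous Fourier transform of the Gaussian, giving the closed forms for λ_max^K and λ_max^I.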
The predicted value for β* is plotted in figure 3 and matches the transition found in the simulations.

2.4 Hebbian Development With a Global Objective Function

The learning equation 1 does not optimize a global cost function [5]. To understand the dynamics of the OD development better and to interpret the transition at β*, we derive a learning rule very similar to equation 1 that minimizes the global cost function E,

E = (1/U) Σ_{μ,x} O_x^μ cost_x^μ  with  cost_x^μ = -Σ_{y,j} I_{xy} (S_{yj}^L P_j^{Lμ} + S_{yj}^R P_j^{Rμ}).   (4)

We minimize this cost function in a stochastic network of binary output neurons O_x^μ that compete for the input stimuli, i.e. one output neuron is active at any given time. The probability for a neuron y to become active in response to pattern μ depends on its advantage in cost over the currently active neuron x:

P(O_x^μ = 1 → O_y^μ = 1) = exp[-β(cost_y^μ - cost_x^μ)] / Σ_z exp[-β(cost_z^μ - cost_x^μ)].

This type of output dynamics leads to a Boltzmann probability distribution for the state of the system. We marginalize over all possible network outputs and derive a learning rule by gradient descent on the log likelihood of a particular set of synaptic connections (subject to Σ_i (S_{xi}^L)^ν + (S_{xi}^R)^ν = const.):

ΔS_{xi}^L = η (∂/∂S_{xi}^L) log Prob({S_{xi}^L, S_{xi}^R}) = η (∂/∂S_{xi}^L) log Σ_{{O_x^μ}} (1/Z) exp(-βE).

Finally, we obtain a learning rule that contains the expectation values Ō_x^μ (or mean fields) of the binary outputs,

ΔS_{xi}^{Lμ} = η Σ_y I_{xy} Ō_y^μ P_i^{Lμ}  with  Ō_x^μ = exp(β Σ_{y,j} I_{xy}(S_{yj}^L P_j^{Lμ} + S_{yj}^R P_j^{Rμ})) / Σ_z exp(β Σ_{y,j} I_{zy}(S_{yj}^L P_j^{Lμ} + S_{yj}^R P_j^{Rμ})).   (5)

This learning rule is almost identical to equation 1; it only contains an additional cortical interaction inside the output term Ō_x^μ, but it has the advantage of an underlying cost function. Figure 4 shows the development of ocular dominance according to equation 5, and the associated cost is plotted for each state of the system.
The value $\beta^*$ of the first transition is calculated analogously to the previous section, and $\lambda_{\max}$ becomes the maximum eigenvalue of the matrix $\left(\sum_y I_{xy} I_{yz} - 1\right)$, which is $\lambda_{\max} = \exp(-(2\pi\gamma/m)^2)$. Around $\beta^+$ a topographic map without ocular dominance is a stable state, and it remains stable for larger $\beta$. In addition, a different minimum of the cost function equation 4 emerges at $\beta^+$: an ocular dominance map with a lower associated cost. This shows that an ocular dominance map becomes the preferred state of the system beyond $\beta^+$, although the binocular topographic map is still stable. In the SOM limit $\beta \to \infty$ the binocular topographic map becomes unstable and ocular dominance stripes develop. The value $\beta^+$ marks the first emergence of an ocular dominance map. For the simulations in figures 3 and 4 we have used positive correlations between the two eyes, a realistic assumption for OD map development. For weaker correlations ($\mathrm{eye}(\mu)$ approaches $\pm\frac{1}{2}$), $\beta^+$ decreases. For anti-correlated stimuli, an ocular dominance map develops even in the CBL limit [4] (this, however, requires additional model assumptions like inhibition between the layers within the LGN). Such a map has no topographic structure (if not imposed by an arbor function) but mostly monocular receptive fields. The value $\beta^*$ is not affected directly by those changes, and the monocular receptive fields localize if $\beta^*$ is exceeded. Consequently, the "feature" OD emerges if it is dominant in the relevant pattern statistics: for anti-correlated eyes around $\beta = 0$, and for positive between-eye correlations only in the regime of higher order moments at $\beta^+$.

Role of Lateral Cortical Competition in Ocular Dominance Development

3 Conclusions

We have introduced a model for cortical development with a variable degree of cortical competition. It has CBL models as a limit case for weak competition and the SOM for strong competition.
Localized stimuli with ocular dominance require a minimum degree of cortical competition to develop a topographic map, and a stronger degree of competition for the emergence of ocular dominance stripes. Anti-correlated activity between the two eyes lets OD emerge for weak competition, and localized fields only beyond a critical degree of competition. A Taylor series expansion of the learning equation 1 yields a CBL model that uses only second order input statistics. For increasing $\beta$ the higher order terms, which consist of the higher moments of the input patterns, become significant. In this contribution we have used only simple activity blobs in two eyes, but it is well known that in the winner-take-all limit features like orientation selectivity can emerge as well [3]. The soft cortical competition in our model implements a mechanism of response sharpening in which the input patterns still influence the output pattern shape. This should relax the biologically implausible assumption of winner-take-all dynamics of SOM models, and yields similar ocular dominance maps. Cortical microcircuits (local cortical amplifiers) have been proposed as a cortical module of computation [6]. Our model suggests that such circuits may be important to sharpen the responses during development and to permit the emergence of feature maps and simple cell receptive fields. Our model shows that small changes in the degree of cortical competition may result in qualitative changes of the emerging receptive fields and cortical maps. Such changes in competition could be a result of the maturation of the intra-cortical connectivity. A slowly increasing degree of cortical competition could make the cortical neurons sensitive to more and more complex features of the input stimuli. Acknowledgements This work was supported by the Boehringer Ingelheim Fonds (C. Piepenbrock) and by DFG grant Ob 102/2-1. References [1] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields.
Biol. Cyb., 27:77-87, 1977. [2] K. D. Miller, J. B. Keller, and M. P. Stryker. Ocular dominance column development: Analysis and simulation. Science, 245:605-615, 1989. [3] K. Obermayer, H. Ritter, and K. Schulten. A principle for the formation of the spatial structure of cortical feature maps. Proc. Nat. Acad. Sci. USA, 87:8345-8349, 1990. [4] C. Piepenbrock, H. Ritter, and K. Obermayer. The joint development of orientation and ocular dominance: Role of constraints. Neur. Comp., 9:959-970, 1997. [5] M. Riesenhuber, H.-U. Bauer, and T. Geisel. Analyzing phase transitions in high-dimensional self-organizing maps. Biol. Cyb., 75:397-407, 1996. [6] D. C. Somers, S. B. Nelson, and M. Sur. An emergent model of orientation selectivity in cat visual cortical simple cells. J. Neurosci., 15:5448-5465, 1995. [7] A. L. Yuille, J. A. Kolodny, and C. W. Lee. Dimension reduction, generalized deformable models and the development of ocularity and orientation. Neur. Netw., 9:309-319, 1996.
1998
117
1,472
General Bounds on Bayes Errors for Regression with Gaussian Processes Manfred Opper Neural Computing Research Group Dept. of Electronic Engineering and Computer Science, Aston University, Birmingham, B4 7ET United Kingdom opperm@aston.ac.uk Francesco Vivarelli Centro Ricerche Ambientali Montecatini, via Ciro Menotti, 48 48023 Marina di Ravenna, Italy fvivarelli@cramont.it Abstract Based on a simple convexity lemma, we develop bounds for different types of Bayesian prediction errors for regression with Gaussian processes. The basic bounds are formulated for a fixed training set. Simpler expressions are obtained for sampling from an input distribution which equals the weight function of the covariance kernel, yielding asymptotically tight results. The results are compared with numerical experiments. 1 Introduction Nonparametric Bayesian models which are based on Gaussian priors on function spaces are becoming increasingly popular in the Neural Computation community (see e.g. [2, 3, 4, 7, 1]). Since the model classes considered in this approach are infinite dimensional, the application of Vapnik-Chervonenkis type methods to determine bounds for the learning curves is nontrivial and has not been performed so far (to our knowledge). In these methods, the target function to be learnt is fixed and input data are drawn independently at random from a fixed (unknown) distribution. The approach of this paper is different. Here, we assume that the target is actually drawn at random from a known prior distribution, and we are interested in developing simple bounds on the average prediction performance (with respect to the prior) which hold for a fixed set of inputs. Only at a later stage, an average over the input distribution is made.
2 Regression with Gaussian processes

To explain the Gaussian process scenario for regression problems [4], we assume that observations $y \in \mathbb{R}$ at input points $x \in \mathbb{R}^D$ are corrupted values of a function $\theta(x)$ by an independent Gaussian noise with variance $\sigma^2$. The appropriate stochastic model is given by the likelihood

$p_\theta(y|x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(y-\theta(x))^2}{2\sigma^2}} \quad (1)$

The goal of a learner is to give an estimate of the function $\theta(x)$, based on a set of observed example data $D_t = ((x_1, y_1), \ldots, (x_t, y_t))$. As the prior information about the unknown function $\theta(x)$ we assume that $\theta$ is a realization of a Gaussian random field with zero mean and covariance

$C(x, x') = \mathbb{E}[\theta(x)\theta(x')]. \quad (2)$

It is useful to expand the random functions as

$\theta(x) = \sum_{k=0}^{\infty} w_k \phi_k(x) \quad (3)$

in a complete set of deterministic functions $\phi_k(x)$ with random Gaussian coefficients $w_k$. As is well known, if the $\phi_k$ are chosen as orthonormal eigenfunctions of the integral equation

$\int C(x, x')\, \phi_k(x')\, p(x')\, dx' = \lambda_k \phi_k(x), \quad (4)$

with eigenvalues $\lambda_k$ and a nonnegative weight function $p(x)$, the a priori statistics of the $w_k$ is simple. They are independent Gaussian variables which satisfy $\mathbb{E}[w_k w_l] = \lambda_k \delta_{kl}$.

3 Prediction and Bayes error

Usually, the posterior mean of $\theta(x)$ is chosen as the prediction $\hat\theta(x)$ on a new point $x$, based on a dataset $D_n = ((x_1, y_1), \ldots, (x_n, y_n))$. Its explicit form can be easily derived by using the expansion $\theta(x) = \sum_k w_k \phi_k(x)$, and the fact that for Gaussian random variables, their mean coincides with their most probable value. Maximizing the log posterior with respect to the $w_k$, one finds for the infinite dimensional vector $\hat w \doteq (\hat w_k)_{k=0,\ldots}$
the result $\hat w = (\sigma^2 I + \Lambda V)^{-1} b$, where $V_{kl} = \sum_{i=1}^n \phi_k(x_i)\phi_l(x_i)$, $\Lambda_{kl} = \lambda_k \delta_{kl}$ and $b_k = \sum_{i=1}^n \lambda_k y_i \phi_k(x_i)$. Fixing the set of inputs $x^n$, the Bayesian prediction error at a point $x$ is given by

$\varepsilon(x|x^n) \doteq \mathbb{E}\left(\theta(x) - \hat\theta(x)\right)^2 \quad (5)$

Evaluating (5) yields, after some work, the expression

$\varepsilon(x|x^n) = \sigma^2\, \mathrm{Tr}\left\{ (\sigma^2 I + \Lambda V)^{-1} \Lambda U(x) \right\} \quad (6)$

with the matrix $U_{kl}(x) = \phi_k(x)\phi_l(x)$. $U$ has the properties that $\sum_{i=1}^n U(x_i) = V$ and $\int dx\, p(x)\, U(x) = I$. We define the Bayesian training error as the empirical average of the error (5) at the $n$ datapoints of the training set, and the Bayesian generalization error as the average error over all $x$ weighted by the function $p(x)$. We get

$\varepsilon_t = \frac{1}{n}\, \mathrm{Tr}\left\{ \Lambda V \left(I + \Lambda V/\sigma^2\right)^{-1} \right\} \quad (7)$

$\varepsilon_g = \mathrm{Tr}\left\{ \Lambda \left(I + \Lambda V/\sigma^2\right)^{-1} \right\} \quad (8)$

4 Entropic error

In order to understand the next type of error [9], we assume that the data arrive sequentially, one after the other. The predictive distribution after $t-1$ training data at the new input $x_t$ is the posterior expectation of the likelihood (1), i.e. $\hat p(y_t|x_t, D_{t-1})$. Let $L_t$ be the Bayesian average of the relative entropy (or Kullback-Leibler divergence) between the predictive distribution and the true distribution $p_\theta$ from which the data were generated, i.e. $L_t = \mathbb{E}\left[ D_{KL}(p_\theta \,\|\, \hat p) \right]$. It can also be shown that $L_t = \frac{1}{2} \ln\left(1 + \frac{\varepsilon(x_t|x^{t-1})}{\sigma^2}\right)$. Hence, when the prediction error is small, we will have

$L_t \approx \frac{\varepsilon(x_t|x^{t-1})}{2\sigma^2}. \quad (9)$

The cumulative entropic error $E(x^n)$ is defined by summing up all the losses (which gives an integrated learning curve) from $t = 1$ up to time $n$, and one can show that

$E(x^n) = \sum_{t=1}^n L_t(x_t, D_{t-1}) = \mathbb{E}\, D_{KL}\left(P^n_\theta \,\|\, P^n\right) = \frac{1}{2}\, \mathrm{Tr} \ln\left(I + \Lambda V/\sigma^2\right) \quad (10)$

where $P^n_\theta = \prod_{i=1}^n p_\theta(y_i|x_i)$ and $P^n = \mathbb{E}[\prod_{i=1}^n p_\theta(y_i|x_i)]$. The first equality may be found e.g. in [9], and the second follows from direct calculation.

5 Bounds for fixed set of inputs

In order to get bounds on (7), (8) and (10), we use a lemma which has been used in Quantum Statistical Mechanics to get bounds on the free energy.
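As a sanity check, equations (6) and (7) can be verified against each other numerically in a truncated eigenbasis: averaging the pointwise error (6) over the training inputs must reproduce the training error (7). The sketch below is our illustration, not the authors' code, and uses random numbers as stand-ins for the eigenvalues $\lambda_k$ and feature values $\phi_k(x_i)$.

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, s2 = 8, 20, 0.1                    # truncated basis size, inputs, noise var.
lam = 0.5 ** np.arange(K)                # stand-in eigenvalues lambda_k
Phi = rng.normal(size=(K, n))            # stand-in feature values phi_k(x_i)
V = Phi @ Phi.T                          # V_kl = sum_i phi_k(x_i) phi_l(x_i)
Lam = np.diag(lam)                       # Lambda_kl = lambda_k delta_kl
A = np.linalg.inv(s2 * np.eye(K) + Lam @ V)

def eps_pointwise(phi_x):
    """Equation (6): eps(x|x^n) = s2 * Tr[(s2 I + Lambda V)^-1 Lambda U(x)]."""
    return s2 * np.trace(A @ Lam @ np.outer(phi_x, phi_x))

# Training error two ways: average of (6) over the inputs vs. equation (7).
eps_t_avg = np.mean([eps_pointwise(Phi[:, i]) for i in range(n)])
eps_t_eq7 = np.trace(Lam @ V @ np.linalg.inv(np.eye(K) + Lam @ V / s2)) / n
print(eps_t_avg, eps_t_eq7)
```

The two numbers agree because $\sum_i U(x_i) = V$ and the matrices $\Lambda V$ and $(I + \Lambda V/\sigma^2)^{-1}$ commute under the trace.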
The lemma (for the special function $f(x) = e^{-\beta x}$) was proved by Sir Rudolf Peierls in 1938 [10]. In order to keep the paper self contained, we have included the proof in the appendix.

Lemma 1 Let $H$ be a real symmetric matrix and $f$ a convex real function. Then $\mathrm{Tr}\, f(H) \geq \sum_k f(H_{kk})$.

By noting that for concave functions the bound goes in the other direction, we immediately get

$\varepsilon_t \leq \frac{\sigma^2}{n} \sum_k \frac{\lambda_k V_{kk}}{\sigma^2 + \lambda_k V_{kk}} \leq \sigma^2 \sum_k \frac{\lambda_k v_k}{\sigma^2 + n\lambda_k v_k} \quad (11)$

$\varepsilon_g \geq \sum_k \frac{\sigma^2 \lambda_k}{\sigma^2 + \lambda_k V_{kk}} \geq \sum_k \frac{\sigma^2 \lambda_k}{\sigma^2 + n\lambda_k v_k} \quad (12)$

$E(x^n) \leq \frac{1}{2} \sum_k \ln\left(1 + V_{kk}\lambda_k/\sigma^2\right) \leq \frac{1}{2} \sum_k \ln\left(1 + n v_k \lambda_k/\sigma^2\right) \quad (13)$

where in the rightmost inequalities we assume that all $n$ inputs are in a compact region $\mathcal{D}$, and we define $v_k = \sup_{x\in\mathcal{D}} \phi_k^2(x)$. (The entropic case may also be proved by Hadamard's inequality.)

6 Average case bounds

Next, we assume that the input data are drawn at random and denote by $\langle \cdots \rangle$ the expectations with respect to the distribution. We do not have to assume independence here, but only the fact that all marginal distributions for the $n$ inputs are identical! Using Jensen's inequality,

$\langle \varepsilon_t \rangle \leq \sigma^2 \sum_k \frac{\lambda_k u_k}{\sigma^2 + n\lambda_k u_k} \quad (14)$

$\langle \varepsilon_g \rangle \geq \sum_k \frac{\sigma^2 \lambda_k}{\sigma^2 + n\lambda_k u_k} \quad (15)$

$\langle E \rangle \leq \frac{1}{2} \sum_k \ln\left(1 + n u_k \lambda_k/\sigma^2\right) \quad (16)$

where now $u_k = \langle \phi_k^2(x) \rangle$. This result is especially simple when the weighting function $p(x)$ is a probability density and the inputs have the marginal distribution $p(x)$. In this case, we simply have $u_k = 1$, and training and generalization error sandwich the bound

$\varepsilon_b = \sigma^2 \sum_k \frac{\lambda_k}{\sigma^2 + n\lambda_k}. \quad (17)$

We expect that the bound $\varepsilon_b$ becomes asymptotically exact when $n \to \infty$. This should be intuitively clear, because training and generalization error approach each other asymptotically. This fact may also be understood from (9), which shows that the cumulative entropic error is asymptotically equal, up to a factor of $\frac{1}{2}$, to the cumulative generalization error.
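Lemma 1 is easy to probe numerically before reading the proof in the appendix. The sketch below (our illustration) tests the convex special case $f(x) = e^{-x}$ on random symmetric matrices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Lemma 1 (Peierls): for real symmetric H and convex f, Tr f(H) >= sum_k f(H_kk).
# We probe the convex special case f(x) = exp(-x), the one used in statistical
# mechanics, on random symmetric matrices.
for _ in range(100):
    B = rng.normal(size=(6, 6))
    H = (B + B.T) / 2                              # random real symmetric matrix
    lhs = np.exp(-np.linalg.eigvalsh(H)).sum()     # Tr f(H), via the eigenvalues
    rhs = np.exp(-np.diag(H)).sum()                # sum of f over the diagonal
    assert lhs >= rhs * (1 - 1e-10)
print("Tr f(H) >= sum_k f(H_kk) held in all trials")
```

Replacing `np.exp(-x)` by a concave function (e.g. $x \mapsto \sigma^2 x/(\sigma^2+x)$) flips the inequality, which is exactly how the upper bound (11) on the training error is obtained.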
By integrating the lower bound (17) over $n$, we obtain precisely the upper bound on $E$ up to a factor of 2, showing that upper and lower bounds display the same behaviour.

7 Simulations

We have compared our bounds with simulations for the average training error and generalization error for the case that the data are drawn from $p(x)$. Results for the entropic error will be given elsewhere. We have specialized to the case where the covariance kernel is of the RBF form $C(x,x') = \exp[-(x-x')^2/\lambda^2]$ and $p(x) = (2\pi)^{-\frac{1}{2}} e^{-\frac{1}{2}x^2}$, for which, following Zhu et al. (1997), the $k$-th eigenvalue of the spectrum ($k = 0 \ldots \infty$) can be written as $\lambda_k = a b^k$, where $a = \sqrt{c}$, $b = c/\lambda^2$, $c = 2\left(1 + 2/\lambda^2 + \sqrt{1 + 4/\lambda^2}\right)^{-1}$, and $\lambda$ is the lengthscale of the process. We estimated the average generalisation error for each training set based on the exact analytical expressions (8) and (7) over the distribution of the datasets by using a Monte Carlo approximation. To begin with, let us consider $x \in \mathbb{R}$. We sampled the 1-dimensional input space generating 100 training sets whose data points were normally distributed around zero with unit variance. For each generation, the expected training and generalisation errors for a GP have been evaluated using up to 1000 data points. We set the value of the lengthscale $\lambda$ to 0.1 and we let the noise level $\sigma^2$ assume several values ($\sigma^2 = 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1$). (The value of the lengthscale $\lambda$ has the effect of stretching the training and learning curves; thus the results of the experiments performed with different $\lambda$ are qualitatively similar to those presented.) Figure 1 shows the results we obtained when $\sigma^2 = 0.1$ (Figure 1(a)) and $\sigma^2 = 1$ (Figure 1(b)).

Figure 1: The graphs of the training and learning curves with their bound $\varepsilon_b(n)$, obtained with $\lambda = 0.1$; the noise level is set to 0.1 in Figure 1(a) and to 1 in Figure 1(b). In all the graphs, $\varepsilon_t(n)$ and $\varepsilon_g(n)$ are drawn by the solid lines and their 95% confidence intervals are marked by the dotted curves. The bound $\varepsilon_b(n)$ is drawn by the dash-dotted lines.

The bound $\varepsilon_b(n)$ lies within the training and learning curves, being an upper bound for $\varepsilon_t(n)$ and a lower bound for $\varepsilon_g(n)$. This bound is tighter for the processes with higher noise level; in particular, for large datasets the error bars on the curves $\varepsilon_t(n)$ and $\varepsilon_g(n)$ overlap the bound $\varepsilon_b(n)$. The curves $\varepsilon_t(n)$, $\varepsilon_g(n)$ and $\varepsilon_b(n)$ approach zero as $O(\log(n)/n)$. Our bounds can also be applied to higher dimensions $D > 1$ using the covariance

$C(x, x') = \exp\left(-\|x - x'\|^2/\lambda^2\right) \quad (18)$

for $x, x' \in \mathbb{R}^D$. Obviously the integral kernel $C$ is just a direct product of RBF kernels, one for each coordinate of $x$ and $x'$. The eigenvalue problem (4) can be immediately reduced to the one for a single variable. Eigenfunctions and eigenvalues are simply products of those for the single coordinate problems. Hence, using a bit of combinatorics, the bound $\varepsilon_b$ can be written as

$\varepsilon_b = \sum_{k=0}^{\infty} \binom{k+D-1}{k} \frac{\sigma^2 a^D b^k}{\sigma^2 + n a^D b^k}, \quad (19)$

where $a$ and $b$ have been defined above. We performed experiments for $x \in \mathbb{R}^2$ and $x \in \mathbb{R}^5$. The correlation lengths along each direction of the input space have been set to 1 and the noise level was $\sigma^2 = 1.0$. The graphs of the curves, with their error bars, are reported in Figure 2(a) (for $x \in \mathbb{R}^2$) and in Figure 2(b) (for $x \in \mathbb{R}^5$).

8 Discussion

Based on the minimal requirements on training inputs and covariances, we conjecture that our bounds cannot be improved much without making more detailed assumptions on models and distributions. We can observe from the simulations that the tightness of the bound $\varepsilon_b(n)$ depends on the dimension of the input space.
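Equation (19) is cheap to evaluate directly. The sketch below (our illustration, not the paper's simulation code) computes $\varepsilon_b(n)$ for the $D$-dimensional RBF kernel with Gaussian inputs, using the Zhu et al. eigenvalues $\lambda_k = a b^k$ and truncating the infinite sum at a hypothetical cutoff `K`.

```python
import numpy as np
from math import comb

def eps_b(n, lam_len=1.0, s2=1.0, D=1, K=200):
    """Sandwich bound eps_b(n) of equations (17)/(19) for the RBF kernel
    C(x, x') = exp(-|x - x'|^2 / lam_len^2) with Gaussian N(0, I) inputs.
    Single-coordinate eigenvalues lambda_k = a*b^k follow Zhu et al. (1997);
    K truncates the sum (b < 1, so the tail is negligible for large K)."""
    c = 2.0 / (1.0 + 2.0 / lam_len**2 + np.sqrt(1.0 + 4.0 / lam_len**2))
    a, b = np.sqrt(c), c / lam_len**2
    total = 0.0
    for k in range(K):
        mult = comb(k + D - 1, k)       # degeneracy of the product eigenvalue
        lam_k = a**D * b**k             # D-fold product eigenvalue a^D b^k
        total += mult * s2 * lam_k / (s2 + n * lam_k)
    return total

for n in (10, 100, 1000):
    print(n, eps_b(n, lam_len=1.0, s2=1.0, D=2))
```

The bound decreases monotonically in $n$, and for fixed $n$ it grows with the input dimension $D$, mirroring the looser sandwich observed in Figure 2(b).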
In particular, for large datasets $\varepsilon_b(n)$ is tighter for small dimension of the input space; Figure 2(a) shows this quite clearly, since $\varepsilon_b(n)$ overlaps the error bars of the training and learning curves for large $n$.

Figure 2: The graphs of the training and learning curves with their bound $\varepsilon_b(n)$, obtained with the squared exponential covariance function with $\lambda = 1$ and $\sigma^2 = 1$; the input space is $\mathbb{R}^2$ (Figure 2(a), $d = 2$) and $\mathbb{R}^5$ (Figure 2(b), $d = 5$). In all the figures, $\varepsilon_t(n)$ and $\varepsilon_g(n)$ are drawn by the solid lines and their 95% confidence intervals are marked by the dotted curves. The bound $\varepsilon_b(n)$ is drawn by the dash-dotted lines.

Numerical simulations performed using modified Bessel covariance functions of order $r$ (describing random processes $r-1$ times mean square differentiable) have shown that the bound $\varepsilon_b(n)$ becomes tighter for smoother processes.

Acknowledgement: We are grateful for many inspiring discussions with C. K. I. Williams. M. O. would like to thank Peter Sollich for his conjecture that (17) is an exact lower bound on the generalization error, which motivated part of this work. F. V. was supported by a studentship of British Aerospace.

9 Appendix: Proof of Lemma 1

Let $\{\xi^{(i)}\}$ be a complete set of orthonormal eigenvectors and $\{E_i\}$ the corresponding set of eigenvalues of $H$, i.e. we have the properties $\sum_l H_{kl}\xi^{(i)}_l = E_i \xi^{(i)}_k$, $\sum_k \xi^{(i)}_k \xi^{(j)}_k = \delta_{ij}$, and $\sum_i \xi^{(i)}_k \xi^{(i)}_l = \delta_{kl}$. Then we get

$\mathrm{Tr}\, f(H) = \sum_i f(E_i) = \sum_k \sum_i \left(\xi^{(i)}_k\right)^2 f(E_i) \geq \sum_k f\left(\sum_i \left(\xi^{(i)}_k\right)^2 E_i\right) = \sum_k f\left(\sum_i \xi^{(i)}_k \sum_l H_{kl}\, \xi^{(i)}_l\right) = \sum_k f(H_{kk})$

The second equality follows from orthonormality, because $\sum_k (\xi^{(i)}_k)^2 = 1$. The inequality uses the fact that, by completeness, for any $k$ we have $\sum_i (\xi^{(i)}_k)^2 = 1$, and we may regard the $(\xi^{(i)}_k)^2$ as probabilities, such that by convexity Jensen's inequality can be used.
After using the eigenvalue equation, the sum over $i$ was carried out with the help of the completeness relation, in order to obtain the last line.

References

[1] D. J. C. MacKay, Gaussian Processes: A Replacement for Neural Networks, NIPS tutorial 1997. May be obtained from http://wol.ra.phy.cam.ac.uk/pub/mackay/. [2] R. Neal, Bayesian Learning for Neural Networks, Lecture Notes in Statistics, Springer (1996). [3] C. K. I. Williams, Computing with Infinite Networks, in Neural Information Processing Systems 9, M. C. Mozer, M. I. Jordan and T. Petsche, eds., 295-301, MIT Press (1997). [4] C. K. I. Williams and C. E. Rasmussen, Gaussian Processes for Regression, in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer and M. E. Hasselmo, eds., 514-520, MIT Press (1996). [5] R. M. Neal, Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification, Technical Report CRG-TR-97-2, Dept. of Computer Science, University of Toronto (1997). [6] M. N. Gibbs and D. J. C. MacKay, Variational Gaussian Process Classifiers, Preprint, Cambridge University (1997). [7] D. Barber and C. K. I. Williams, Gaussian Processes for Bayesian Classification via Hybrid Monte Carlo, in Neural Information Processing Systems 9, M. C. Mozer, M. I. Jordan and T. Petsche, eds., 340-346, MIT Press (1997). [8] C. K. I. Williams and D. Barber, Bayesian Classification with Gaussian Processes, Preprint, Aston University (1997). [9] D. Haussler and M. Opper, Mutual Information, Metric Entropy and Cumulative Relative Entropy Risk, The Annals of Statistics, Vol 25, No 6, 2451 (1997). [10] R. Peierls, Phys. Rev. 54, 918 (1938). [11] H. Zhu, C. K. I. Williams, R. Rohwer and M. Morciniec, Gaussian Regression and Optimal Finite Dimensional Linear Models, Technical Report NCRG/97/011, Aston University (1997).
1998
118
1,473
Bayesian PCA Christopher M. Bishop Microsoft Research St. George House, 1 Guildhall Street Cambridge CB2 3NH, U.K. cmbishop@microsoft.com Abstract The technique of principal component analysis (PCA) has recently been expressed as the maximum likelihood solution for a generative latent variable model. In this paper we use this probabilistic reformulation as the basis for a Bayesian treatment of PCA. Our key result is that the effective dimensionality of the latent space (equivalent to the number of retained principal components) can be determined automatically as part of the Bayesian inference procedure. An important application of this framework is to mixtures of probabilistic PCA models, in which each component can determine its own effective complexity. 1 Introduction Principal component analysis (PCA) is a widely used technique for data analysis. Recently Tipping and Bishop (1997b) showed that a specific form of generative latent variable model has the property that its maximum likelihood solution extracts the principal sub-space of the observed data set. This probabilistic reformulation of PCA permits many extensions, including a principled formulation of mixtures of principal component analyzers, as discussed by Tipping and Bishop (1997a). A central issue in maximum likelihood (as well as conventional) PCA is the choice of the number of principal components to be retained. This is particularly problematic in a mixture modelling context, since ideally we would like the components to have potentially different dimensionalities. However, an exhaustive search over the choice of dimensionality for each of the components in a mixture distribution can quickly become computationally intractable. In this paper we develop a Bayesian treatment of PCA, and we show how this leads to an automatic selection of the appropriate model dimensionality.
Our approach avoids a discrete model search, involving instead the use of continuous hyper-parameters to determine an effective number of principal components.

2 Maximum Likelihood PCA

Consider a data set $D$ of observed $d$-dimensional vectors $D = \{t_n\}$ where $n \in \{1, \ldots, N\}$. Conventional principal component analysis is obtained by first computing the sample covariance matrix given by

$S = \frac{1}{N} \sum_{n=1}^{N} (t_n - \bar{t})(t_n - \bar{t})^T \quad (1)$

where $\bar{t} = N^{-1} \sum_n t_n$ is the sample mean. Next the eigenvectors $u_i$ and eigenvalues $\lambda_i$ of $S$ are found, where $S u_i = \lambda_i u_i$ and $i = 1, \ldots, d$. The eigenvectors corresponding to the $q$ largest eigenvalues (where $q < d$) are retained, and a reduced-dimensionality representation of the data set is defined by $x_n = U_q^T (t_n - \bar{t})$ where $U_q = (u_1, \ldots, u_q)$. It is easily shown that PCA corresponds to the linear projection of a data set under which the retained variance is a maximum, or equivalently the linear projection for which the sum-of-squares reconstruction cost is minimized. A significant limitation of conventional PCA is that it does not define a probability distribution. Recently, however, Tipping and Bishop (1997b) showed how PCA can be reformulated as the maximum likelihood solution of a specific latent variable model, as follows. We first introduce a $q$-dimensional latent variable $x$ whose prior distribution is a zero mean Gaussian $p(x) = \mathcal{N}(0, I_q)$, where $I_q$ is the $q$-dimensional unit matrix. The observed variable $t$ is then defined as a linear transformation of $x$ with additive Gaussian noise, $t = Wx + \mu + \epsilon$, where $W$ is a $d \times q$ matrix, $\mu$ is a $d$-dimensional vector and $\epsilon$ is a zero-mean Gaussian-distributed vector with covariance $\sigma^2 I_d$. Thus $p(t|x) = \mathcal{N}(Wx + \mu, \sigma^2 I_d)$. The marginal distribution of the observed variable is then given by the convolution of two Gaussians and is itself Gaussian,

$p(t) = \int p(t|x)\, p(x)\, dx = \mathcal{N}(\mu, C) \quad (2)$

where the covariance matrix $C = WW^T + \sigma^2 I_d$.
The model (2) represents a constrained Gaussian distribution governed by the parameters $\mu$, $W$ and $\sigma^2$. The log probability of the parameters under the observed data set $D$ is then given by

$L(\mu, W, \sigma^2) = -\frac{N}{2} \left\{ d \ln(2\pi) + \ln |C| + \mathrm{Tr}\left[C^{-1}S\right] \right\} \quad (3)$

where $S$ is the sample covariance matrix given by (1). The maximum likelihood solution for $\mu$ is easily seen to be $\mu_{ML} = \bar{t}$. It was shown by Tipping and Bishop (1997b) that the stationary points of the log likelihood with respect to $W$ satisfy

$W_{ML} = U_q \left(\Lambda_q - \sigma^2 I_q\right)^{1/2} \quad (4)$

where the columns of $U_q$ are eigenvectors of $S$, with corresponding eigenvalues in the diagonal matrix $\Lambda_q$. It was also shown that the maximum of the likelihood is achieved when the $q$ largest eigenvalues are chosen, so that the columns of $U_q$ correspond to the principal eigenvectors, with all other choices of eigenvalues corresponding to saddle points. The maximum likelihood solution for $\sigma^2$ is then given by

$\sigma^2_{ML} = \frac{1}{d-q} \sum_{i=q+1}^{d} \lambda_i \quad (5)$

which has a natural interpretation as the average variance lost per discarded dimension. The density model (2) thus represents a probabilistic formulation of PCA. It is easily verified that conventional PCA is recovered in the limit $\sigma^2 \to 0$. Probabilistic PCA has been successfully applied to problems in data compression, density estimation and data visualization, and has been extended to mixture and hierarchical mixture models. As with conventional PCA, however, the model itself provides no mechanism for determining the value of the latent-space dimensionality $q$. For $q = d-1$ the model is equivalent to a full-covariance Gaussian distribution, while for $q < d-1$ it represents a constrained Gaussian in which the variance in the remaining $d-q$ directions is modelled by the single parameter $\sigma^2$. Thus the choice of $q$ corresponds to a problem in model complexity optimization. If data is plentiful, then cross-validation to compare all possible values of $q$ offers a possible approach.
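The closed-form solution (4)-(5) is straightforward to implement from an eigendecomposition of $S$. Below is a minimal sketch (our illustration, not the paper's code), run on synthetic data mirroring the 300-point example used later in the paper:

```python
import numpy as np

def ppca_ml(T, q):
    """Maximum likelihood probabilistic PCA, equations (4) and (5).

    T is an (N, d) data matrix; returns mu_ML, W_ML (d x q) and sigma^2_ML."""
    N, d = T.shape
    mu = T.mean(axis=0)                            # mu_ML = sample mean
    S = (T - mu).T @ (T - mu) / N                  # sample covariance, eq. (1)
    evals, evecs = np.linalg.eigh(S)               # ascending eigenvalue order
    evals, evecs = evals[::-1], evecs[:, ::-1]     # reorder to descending
    s2 = evals[q:].sum() / (d - q)                 # eq. (5): mean discarded variance
    W = evecs[:, :q] @ np.diag(np.sqrt(evals[:q] - s2))   # eq. (4)
    return mu, W, s2

rng = np.random.default_rng(3)
# 300 points in 10-d: std 1.0 in 3 directions, std 0.5 in the remaining 7
std = np.array([1.0] * 3 + [0.5] * 7)
T = rng.normal(size=(300, 10)) * std
mu, W, s2 = ppca_ml(T, q=3)
print(W.shape, round(s2, 3))
```

With this data $\sigma^2_{ML}$ comes out close to the true discarded variance $0.5^2 = 0.25$, illustrating the "average variance lost per discarded dimension" interpretation of (5).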
However, this can quickly become intractable for mixtures of probabilistic PCA models if we wish to allow each component to have its own $q$ value.

3 Bayesian PCA

The issue of model complexity can be handled naturally within a Bayesian paradigm. Armed with the probabilistic reformulation of PCA defined in Section 2, a Bayesian treatment of PCA is obtained by first introducing a prior distribution $p(\mu, W, \sigma^2)$ over the parameters of the model. The corresponding posterior distribution $p(\mu, W, \sigma^2|D)$ is then obtained by multiplying the prior by the likelihood function, whose logarithm is given by (3), and normalizing. Finally, the predictive density is obtained by marginalizing over the parameters, so that

$p(t|D) = \iiint p(t|\mu, W, \sigma^2)\, p(\mu, W, \sigma^2|D)\, d\mu\, dW\, d\sigma^2. \quad (6)$

In order to implement this framework we must address two issues: (i) the choice of prior distribution, and (ii) the formulation of a tractable algorithm. Our focus in this paper is on the specific issue of controlling the effective dimensionality of the latent space (corresponding to the number of retained principal components). Furthermore, we seek to avoid discrete model selection and instead use continuous hyper-parameters to determine automatically an appropriate effective dimensionality for the latent space as part of the process of Bayesian inference. This is achieved by introducing a hierarchical prior $p(W|\alpha)$ over the matrix $W$, governed by a $q$-dimensional vector of hyper-parameters $\alpha = \{\alpha_1, \ldots, \alpha_q\}$. The dimensionality of the latent space is set to its maximum possible value $q = d-1$, and each hyper-parameter controls one of the columns of the matrix $W$ through a conditional Gaussian distribution of the form

$p(W|\alpha) = \prod_{i=1}^{q} \left(\frac{\alpha_i}{2\pi}\right)^{d/2} \exp\left(-\frac{1}{2}\alpha_i \|w_i\|^2\right) \quad (7)$

where $\{w_i\}$ are the columns of $W$. This form of prior is motivated by the framework of automatic relevance determination (ARD) introduced in the context of neural networks by Neal and MacKay (see MacKay, 1995).
Each $\alpha_i$ controls the inverse variance of the corresponding $w_i$, so that if a particular $\alpha_i$ has a posterior distribution concentrated at large values, the corresponding $w_i$ will tend to be small, and that direction in latent space will be effectively 'switched off'. The probabilistic structure of the model is displayed graphically in Figure 1. In order to make use of this model in practice we must be able to marginalize over the posterior distribution of $W$. Since this is analytically intractable we have developed three alternative approaches based on (i) type-II maximum likelihood using a local Gaussian approximation to a mode of the posterior distribution (MacKay, 1995), (ii) Markov chain Monte Carlo using Gibbs sampling, and (iii) variational inference using a factorized approximation to the posterior distribution. Here we describe the first of these in more detail.

Figure 1: Representation of Bayesian PCA as a probabilistic graphical model showing the hierarchical prior over $W$ governed by the vector of hyper-parameters $\alpha$. The box denotes a 'plate' comprising a data set of $N$ independent observations of the visible vector $t_n$ (shown shaded) together with the corresponding hidden variables $x_n$.

The location $W_{MP}$ of the mode can be found by maximizing the log posterior distribution given, from Bayes' theorem, by

$\ln p(W|D) = L - \frac{1}{2} \sum_{i=1}^{d-1} \alpha_i \|w_i\|^2 + \mathrm{const.} \quad (8)$

where $L$ is given by (3). For the purpose of controlling the effective dimensionality of the latent space, it is sufficient to treat $\mu$, $\sigma^2$ and $\alpha$ as parameters whose values are to be estimated, rather than as random variables. In this case there is no need to introduce priors over these variables, and we can determine $\mu$ and $\sigma^2$ by maximum likelihood. To estimate $\alpha$ we use type-II maximum likelihood, corresponding to maximizing the marginal likelihood $p(D|\alpha)$ in which we have integrated over $W$ using the quadratic approximation.
It is easily shown (Bishop, 1995) that this leads to a re-estimation formula for the hyper-parameters $\alpha_i$ of the form

$\alpha_i := \frac{\gamma_i}{\|w_i\|^2} \quad (9)$

where $\gamma_i = d - \alpha_i \mathrm{Tr}_i(H^{-1})$ is the effective number of parameters in $w_i$, $H$ is the Hessian matrix given by the second derivatives of $\ln p(W|D)$ with respect to the elements of $W$ (evaluated at $W_{MP}$), and $\mathrm{Tr}_i(\cdot)$ denotes the trace of the sub-matrix corresponding to the vector $w_i$. For the results presented in this paper, we make the further simplification of replacing $\gamma_i$ in (9) by $d$, corresponding to the assumption that all model parameters are 'well-determined'. This significantly reduces the computational cost since it avoids evaluation and manipulation of the Hessian matrix. An additional consequence is that vectors $w_i$ for which there is insufficient support from the data will be driven to zero, with the corresponding $\alpha_i \to \infty$, so that un-used dimensions are switched off completely. We define the effective dimensionality of the model to be the number of vectors $w_i$ whose values remain non-zero. The solution for $W_{MP}$ can be found efficiently using the EM algorithm, in which the E-step involves evaluation of the expected sufficient statistics of the latent-space posterior distribution, given by

$\langle x_n \rangle = M^{-1} W^T (t_n - \mu) \quad (10)$

$\langle x_n x_n^T \rangle = \sigma^2 M^{-1} + \langle x_n \rangle \langle x_n \rangle^T \quad (11)$
The result of fitting both maximum likelihood and Bayesian PCA models is shown in Figure 2. In this case the Bayesian model has an effective dimensionality of qeff = 3. • • • • · • • · • • • • • • • • • • • • • • • • • • • • · • • • • • Figure 2: Hinton diagrams of the matrix W for a data set in 10 dimensions having m = 3 directions with larger variance than the remaining 7 directions. The left plot shows W from maximum likelihood peA while the right plot shows WMP from the Bayesian approach, showing how the model is able to discover the appropriate dimensionality by suppressing the 6 surplus degrees of freedom. The effective dimensionality found by Bayesian PCA will be dependent on the number N of points in the data set. For N ~ 00 we expect qeff ~ d -1, and in this limit the maximum likelihood framework and the Bayesian approach will give identical results. For finite data sets the effective dimensionality may be reduced, with degrees of freedom for which there is insufficient evidence in the data set being suppressed. The variance of the data in the remaining d - qeff directions is then accounted for by the single degree of freedom defined by a2 . This is illustrated by considering data in 10 dimensions generated from a Gaussian distribution with standard deviations given by {1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1}. In Figure 3 we plot qeff (averaged over 50 independent experiments) versus the number N of points in the data set. These results indicate that Bayesian PCA is able to determine automatically a suitable effective dimensionality qeff for the principal component subspace, and therefore offers a practical alternative to exhaustive comparison of dimensionalities using techniques such as cross-validation. 
As an illustration of the generalization capability of the resulting model we consider a data set of 20 points in 10 dimensions generated from a Gaussian distribution having standard deviations in 5 directions given by (1.0, 0.8, 0.6, 0.4, 0.2) and standard deviation 0.04 in the remaining 5 directions. We fit maximum likelihood PCA models to this data having q values in the range 1-9 and compare their log likelihoods on both the training data and on an independent test set, with the results (averaged over 10 independent experiments) shown in Figure 4. Also shown are the corresponding results obtained from Bayesian PCA.

Figure 3: Plot of the average effective dimensionality of the Bayesian PCA model versus the number N of data points for data in a 10-dimensional space.

Figure 4: Plot of the log likelihood for the training set (dashed curve) and the test set (solid curve) for maximum likelihood PCA models having q values in the range 1-9, showing that the best generalization is achieved for q = 5, which corresponds to the number of directions of significant variance in the data set. Also shown are the training (circle) and test (cross) results from a Bayesian PCA model, plotted at the average effective q value given by q_eff = 5.2.

We see that the Bayesian PCA model automatically discovers the appropriate dimensionality for the principal component subspace, and furthermore that it has a generalization performance which is close to that of the optimal fixed-q model.

4 Mixtures of Bayesian PCA Models

Given a probabilistic formulation of PCA it is straightforward to construct a mixture distribution comprising a linear superposition of principal component analyzers. In the case of maximum likelihood PCA we have to choose both the number M of components and the latent space dimensionality q for each component.
For moderate numbers of components and data spaces of several dimensions it quickly becomes intractable to explore the exponentially large number of combinations of q values for a given value of M. Here Bayesian PCA offers a significant advantage in allowing the effective dimensionalities of the models to be determined automatically. As an illustration we consider a density estimation problem involving hand-written digits from the CEDAR database. The data set comprises 8 × 8 scaled and smoothed gray-scale images of the digits '2', '3' and '4', partitioned randomly into 1500 training, 900 validation and 900 test points. For mixtures of maximum likelihood PCA the model parameters can be determined using the EM algorithm, in which the M-step uses (4) and (5), with eigenvectors and eigenvalues obtained from the weighted covariance matrices in which the weighting coefficients are the posterior probabilities for the components determined in the E-step. Since, for maximum likelihood PCA, it is computationally impractical to explore independent q values for each component, we consider mixtures in which every component has the same dimensionality. We therefore train mixtures having M ∈ {2, 4, 6, 8, 10, 12, 14, 16, 18} for all values q ∈ {2, 4, 8, 12, 16, 20, 25, 30, 40, 50}. In order to avoid singularities associated with the more complex models we omit any component from the mixture for which the value of σ^2 goes to zero during the optimization. The highest log likelihood on the validation set (−295) is obtained for M = 6 and q = 50. For mixtures of Bayesian PCA models we need only explore alternative values for M, which are taken from the same set as for the mixtures of maximum likelihood PCA. Again, the best performance on the validation set (−293) is obtained for M = 6. The values of the log likelihood for the test set were −295 (maximum likelihood PCA) and −293 (Bayesian PCA).
The mean vectors μ_i for each of the 6 components of the Bayesian PCA mixture model are shown in Figure 5.

Figure 5: The mean vectors for each of the 6 components in the Bayesian PCA mixture model, displayed as 8 × 8 images, together with the corresponding values of the effective dimensionality (62, 54, 63, 60, 62 and 59).

The Bayesian treatment of PCA discussed in this paper can be particularly advantageous for small data sets in high dimensions as it can avoid the singularities associated with maximum likelihood (or conventional) PCA by suppressing unwanted degrees of freedom in the model. This is especially helpful in a mixture modelling context, since the effective number of data points associated with specific 'clusters' can be small even when the total number of data points appears to be large.

References

Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press.

MacKay, D. J. C. (1995). Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks. Network: Computation in Neural Systems 6 (3), 469-505.

Tipping, M. E. and C. M. Bishop (1997a). Mixtures of principal component analysers. In Proceedings IEE Fifth International Conference on Artificial Neural Networks, Cambridge, U.K., July, pp. 13-18.

Tipping, M. E. and C. M. Bishop (1997b). Probabilistic principal component analysis. Accepted for publication in the Journal of the Royal Statistical Society, B.
1998
Active Noise Canceling using Analog Neuro-Chip with On-Chip Learning Capability

Jung-Wook Cho and Soo-Young Lee
Computation and Neural Systems Laboratory
Department of Electrical Engineering
Korea Advanced Institute of Science and Technology
373-1 Kusong-dong, Yusong-gu, Taejon 305-701, Korea
sylee@ee.kaist.ac.kr

Abstract

A modular analogue neuro-chip set with on-chip learning capability is developed for active noise canceling. The analogue neuro-chip set incorporates the error backpropagation learning rule for practical applications, and allows pin-to-pin interconnections for multi-chip boards. The developed neuro-board demonstrated active noise canceling without any digital signal processor. Multi-path fading of acoustic channels, random noise, and nonlinear distortion of the loudspeaker are compensated by the adaptive learning circuits of the neuro-chips. Experimental results are reported for cancellation of car noise in real time.

1 INTRODUCTION

Both analog and digital implementations of neural networks have been reported. Digital neuro-chips can be designed and fabricated with the help of well-established CAD tools and digital VLSI fabrication technology [1]. Although analogue neuro-chips have potential advantages in integration density and speed over digital chips [2], they suffer from non-ideal characteristics of the fabricated chips such as offset and nonlinearity, and the fabricated chips are not flexible enough to be used for many different applications. Also, much careful design is required, and the fabricated chip characteristics are fairly dependent upon fabrication processes. For the implementation of analog neuro-chips there exist two different approaches, i.e., with and without on-chip learning capability [3,4]. Currently the majority of analog neuro-chips do not have learning capability, while many practical applications require on-line adaptation to continuously changing environments and thus demand on-line learning capability.
Therefore neuro-chips with on-chip learning capability are essential for such practical applications. A modular architecture is also advantageous, providing the flexibility to implement many large complex systems from the same chips.

Although many applications have been studied for analog neuro-chips, it is very important to find proper problems where analog neuro-chips may have potential advantages over popular DSPs. We believe applications with analog input/output signals and high computational requirements are good candidates. For example, active noise control [5] and adaptive equalizers [6,7] are good applications for analog neuro-chips. In this paper we report a demonstration of active noise canceling, which may have many applications in the real world. A modular analog neuro-chip set is developed with on-chip learning capability, and a neuro-board is fabricated from multiple chips with PC interfaces for input and output measurements. Unlike our previous implementations for adaptive equalizers with binary outputs [7], both input and output values are analogue in this noise canceling task.

Figure 1: Block diagram of a synapse cell. Figure 2: Block diagram of a neuron cell.

2 ANALOG NEURO-CHIP WITH ON-CHIP LEARNING

We had developed analog neuro-chips with error backpropagation learning capability. With the modular architecture, the developed analog neuro-chip set consists of a synapse chip and a neuron chip [8]. The basic cell of the synapse chip is shown in Figure 1. Each synapse cell receives two inputs, i.e., the pre-synaptic neural activation x and the error correction term δ, and generates two outputs, i.e., the feed-forward signal wx and the back-propagated error wδ. Also it updates the stored weight w by the amount xδ. Therefore, a synapse cell consists of three multiplier circuits and one analogue storage for the synaptic weight.
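In software, the arithmetic performed by one synapse cell per presentation can be sketched as follows; the learning-rate factor eta is a hypothetical stand-in for the chip's weight-update gain, not a parameter from the paper:

```python
def synapse_cell(w, x, delta, eta=0.1):
    """Signals produced by one synapse cell, mirroring the three on-chip
    multipliers described above (eta is an illustrative learning rate)."""
    forward = w * x              # feed-forward contribution w*x
    backward = w * delta         # back-propagated error contribution w*delta
    w_new = w + eta * x * delta  # capacitor-stored weight updated by x*delta
    return forward, backward, w_new

f, b, w_new = synapse_cell(w=0.5, x=2.0, delta=0.1)
```

The forward products of all cells in a column are summed (as currents on the chip) to form the next neuron's input, and the backward products are summed to form the previous layer's error.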
Figure 2 shows the basic cell of the neuron chip, which collects signals from synapses in the previous layer and distributes signals to synapses in the following layer. Each neuron body receives two inputs, i.e., the post-synaptic neural activation o and the back-propagated error δ from the following layer, and generates two outputs, i.e., the sigmoid-squashed neural activation o and a new back-propagated error δ multiplied by a bell-shaped sigmoid derivative. The back-propagated error may be input to the synapse cells in the previous layer.

To provide easy connectivity with other chips, the two inputs of the synapse cell are represented as voltages, while the two outputs are currents, for simple current summation. On the other hand, the inputs and outputs of the neuron cell are represented as currents and voltages, respectively. For simple pin-to-pin connections between chips, one package pin is maintained for each input and output of the chip. No time-multiplexing is introduced, and no other control is required for multi-chip and multi-layer systems. However, this makes the number of package pins the main limiting factor for the number of synapse and neuron cells in the developed chip sets.

Although many simplified multipliers have been reported for high-density integration, their performance is limited in linearity, resolution, and speed. For on-chip learning it is desirable to have high precision, and a faithful implementation of the 4-quadrant Gilbert multiplier is used. Especially, the multiplier for weight updates in the synapse cell requires high precision [9]. The synaptic weight is stored on a capacitor, and an MOS switch is used to allow current flow from the multiplier to the capacitor during a short time interval for weight adaptation. For applications like active noise control [5] and telecommunications [6,7], tapped analog delay lines are also designed and integrated in the synapse chip.
To reduce offset accumulation, a parallel analog delay line is adopted; the same offset voltage is introduced for the operational amplifiers at all nodes [10]. Diffusion capacitors of 2.2 pF are used for the storage of the tapped analog delay line. In a synapse chip, 250 synapse cells are integrated in a 25 × 10 array with a 25-tap analog delay line. Inputs may be applied either from the analog delay line or from external pins in parallel. To select a capacitor in a cell for refresh, decoders are placed in columns and rows. The actual size of the synapse cell is 141 μm × 179 μm, and the size of the synapse chip is 5.05 mm × 5.05 mm. The chip is fabricated in a 0.8 μm single-poly CMOS process. On the other hand, the neuron chip has a very simple structure, which consists of 20 neuron cells without additional circuits. The sigmoid circuit [3] in the neuron cell uses a differential pair, and the slope and amplitude are controlled by a voltage-controlled resistor [11]. The sigmoid-derivative circuit also uses a differential pair with a min-select circuit. The size of the neuron cell is 177.2 μm × 62.4 μm.

Figure 3: Block diagram of the analog neuro-board (host PC, GDAB, and ANN board with DSP TMS320C51, synapse chips, and neuron chips).

Using these chip sets, an analog neuro-system is constructed. Figure 3 shows a brief block diagram of the analog neuro-system, where an analogue neuro-board is interfaced to a host computer through a GDAB (General Data Acquisition Board). The GDAB board is specially designed for the data interface with the analogue neuro-chips. The neuro-board has 6 synapse chips and 2 neuron chips with a 2-layer Perceptron architecture.
For test and development purposes, a DSP, ADC and DAC are installed on the neuro-board to refresh and adjust weights. The forward propagation time of the 2-layer Perceptron is measured as about 30 μs. Therefore the computation speed of the neuro-board is about 266 MCPS (Mega Connections Per Second) for recall and about 200 MCUPS (Mega Connection Updates Per Second) for error backpropagation learning. To achieve this speed with a DSP, about 400 MIPS is required for recall and at least 600 MIPS for error backpropagation learning.

Figure 4: Structure of a feedforward active noise canceling system (noise source, acoustic channel C1(z), adaptive filter or multilayer Perceptron, and error signal).

3 ACTIVE NOISE CANCELING USING NEURO-CHIP

The basic architecture of feedforward active noise canceling is shown in Figure 4. An area near the microphone is called the "quiet zone," which actually means that noise should be small in this area. Noise propagates from a source to the quiet zone through a dispersive medium, whose characteristics are modeled as a finite impulse response (FIR) filter with additional random noise. An active noise canceller should generate electric signals for a loudspeaker, which creates acoustic signals to cancel the noise at the quiet zone. In general the electric-to-acoustic transfer characteristic of the loudspeaker is nonlinear, and the overall active noise canceling (ANC) system also becomes nonlinear. Therefore, a multilayer Perceptron has a potential advantage over popular transversal adaptive filters based on least-mean-square (LMS) error minimization.

Experiments were conducted on car noise canceling. The reference signal for the noise source was extracted from an engine room while a compact car was running at 60 km/hour. The difference of the two acoustic channels, i.e., H(z) = C1(z) / C2(z), the additive noise n, and the nonlinear characteristics of the loudspeaker need to be compensated. Two different acoustic channels are used for the experiments.
The first channel, H1(z) = 0.894 + 0.447 z^{−1}, is a minimum-phase channel, while the second, non-minimum-phase channel, H2(z) = 0.174 + 0.6 z^{−1} + 0.6 z^{−2} + 0.174 z^{−3}, characterizes frequency-selective multipath fading with a deep spectral amplitude null. A simple cubic distortion model was used for the characteristics of the loudspeaker [12]. To compare the performance of the neuro-chip with digital processors, computer simulation was first conducted with the error backpropagation algorithm for a single-hidden-layer Perceptron as well as the LMS algorithm for a transversal adaptive filter. Then the same experimental data were provided to the developed neuro-board by a personal computer through the GDAB.

Figure 5: Noise Reduction Ratio (dB) versus Signal-to-Distortion Ratio (dB) for (a) a simple acoustic channel H1(z) and (b) a multi-path fading acoustic channel H2(z). Here, '+', '*', 'x', and 'o' denote results of the LMS algorithm, neural network simulation, neural network simulation with 8-bit input quantization, and neuro-chips, respectively.

Results for the channels H1(z) and H2(z) are shown in Figures 5(a) and 5(b), respectively. Each point in these figures denotes the result of one experiment with different parameters. The horizontal axes represent the Signal-to-Distortion Ratio (SDR) of the speaker nonlinear characteristics. The vertical axes represent the Noise Reduction Ratio (NRR) of the active noise canceling systems. As expected, severe nonlinear distortion of the loudspeaker resulted in poor noise canceling for the LMS canceller.
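The qualitative effect reported here, that LMS cancellation degrades as loudspeaker distortion grows, can be reproduced in a small simulation. The sketch below uses the paper's H1(z) and a cubic distortion term, but the filter length, step size, and the cubic_gain knob (standing in for the SDR axis) are illustrative assumptions, not the authors' experimental settings:

```python
import numpy as np

def simulate_anc(n_steps=8000, cubic_gain=0.0, seed=0):
    """Toy feedforward ANC loop with an LMS transversal canceller (a sketch
    of the baseline the paper compares against, not the neuro-chip).  Noise
    reaches the quiet zone through H1(z) = 0.894 + 0.447 z^-1; the
    loudspeaker applies the cubic distortion s = y + cubic_gain * y^3."""
    rng = np.random.default_rng(seed)
    h1 = np.array([0.894, 0.447])          # minimum-phase acoustic channel
    taps, mu = 8, 0.01                     # canceller length, LMS step size
    w = np.zeros(taps)
    buf = np.zeros(taps)                   # reference-signal delay line
    res_pow = raw_pow = 0.0
    for t in range(n_steps):
        x_t = rng.standard_normal()        # reference noise sample
        buf = np.concatenate(([x_t], buf[:-1]))
        d = h1 @ buf[:2]                   # noise arriving at the quiet zone
        y = w @ buf                        # canceller (anti-noise) output
        s = y + cubic_gain * y ** 3        # loudspeaker nonlinearity
        e = d - s                          # residual at the microphone
        w = w + mu * e * buf               # LMS weight update
        if t >= n_steps // 2:              # measure after convergence
            res_pow += e ** 2
            raw_pow += d ** 2
    return 10.0 * np.log10(res_pow / raw_pow)  # noise reduction ratio (dB)

nrr_linear = simulate_anc(cubic_gain=0.0)
nrr_distorted = simulate_anc(cubic_gain=0.5)
```

With a linear speaker the FIR canceller can model the channel exactly and the residual collapses; with the cubic term switched on, the achievable NRR saturates, which is the gap the multilayer Perceptron is meant to close.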
However, the performance degradation was greatly reduced by the neural network canceller. With the neuro-chips the performance was worse than that of the computer simulation. Although the neuro-chip demonstrated active noise canceling and worked better than LMS cancellers for very small SDRs, i.e., very high nonlinear distortions, its performance saturated at −8 dB and −5 dB NRR, respectively. The performance saturation was more severe for the harder problem with the complicated H2(z) channel. The performance degradation with neuro-chips may come from inherent limitations of analogue chips such as limited dynamic ranges of synaptic weights and signals, unwanted offsets and nonlinearity, and limited resolution of the learning rate and sigmoid slope [9]. However, other side effects of the GDAB board, i.e., the fixed resolution of the A/D and D/A converters for data I/O, also contributed to the performance degradation. The input and output resolutions of the GDAB were 16 bit and 8 bit, respectively. Unlike actual real-world systems, the input values of the experimental analogue neuro-chips are these 8-bit quantized values. As shown in Figure 5, results of the computer simulation with 8-bit quantized target values showed much degraded performance compared to the floating-point simulations. Therefore, a significant portion of the poor performance in the experimental analogue system may be attributed to the A/D converters, and the analogue system may work better in real-world systems. Actual acoustic signals are plotted in Figure 6. The top, middle, and bottom signals denote the noise, the negated speaker signal, and the residual noise at the quiet zone, respectively.

Figure 6: Examples of noise, negated loudspeaker canceling signal, and residual error.

4 CONCLUSION

In this paper we report experimental results of active noise canceling using analogue neuro-chips with on-chip learning capability.
Although its performance is limited due to non-ideal characteristics of the analogue chips themselves and also of peripheral devices, it clearly demonstrates the feasibility of analogue chips for real-world applications.

Acknowledgements

This research was supported by the Korean Ministry of Information and Telecommunications.

References

[1] T. Watanabe, K. Kimura, M. Aoki, T. Sakata & K. Ito (1993) A Single 1.5-V Digital Chip for a 10^6 Synapse Neural Network, IEEE Trans. Neural Networks, Vol.4, No.3, pp.387-393.
[2] T. Morie and Y. Amemiya (1994) An All-Analog Expandable Neural Network LSI with On-Chip Backpropagation Learning, IEEE Journal of Solid-State Circuits, Vol.29, No.9, pp.1086-1093.
[3] J.-W. Cho, Y. K. Choi, S.-Y. Lee (1996) Modular Neuro-Chip with On-Chip Learning and Adjustable Learning Parameters, Neural Processing Letters, Vol.4, No.1.
[4] J. Alspector, A. Jayakumar, S. Luna (1992) Experimental evaluation of learning in neural microsystems, Advances in Neural Information Processing Systems 4, pp. 871-878.
[5] B. Widrow, et al. (1975) Adaptive Noise Cancelling: Principles and Applications, Proceedings of the IEEE, Vol.63, No.12, pp.1692-1716.
[6] J. Choi, S.H. Bang, B.J. Sheu (1993) A Programmable Analog VLSI Neural Network Processor for Communication Receivers, IEEE Transactions on Neural Networks, Vol.4, No.3, pp.484-495.
[7] J.-W. Cho and S.-Y. Lee (1998) Analog neuro-chips with on-chip learning capability for adaptive nonlinear equalizers, Proc. IJCNN, pp. 581-586, May 4-9, Anchorage, USA.
[8] J. Van der Spiegel, C. Donham, R. Etienne-Cummings, S. Fernando (1994) Large scale analog neural computer with programmable architecture and programmable time constants for temporal pattern analysis, Proc. ICNN, pp. 1830-1835.
[9] Y.K. Choi, K.H. Ahn, and S.Y. Lee (1996) Effects of multiplier offsets on on-chip learning for analog neuro-chips, Neural Processing Letters, Vol.4, No.1, pp.1-8.
[10] T. Enomoto, T. Ishihara and M.
Yasumoto (1982) Integrated tapped MOS analogue delay line using switched-capacitor technique, Electronics Letters, Vol.18, pp.193-194.
[11] P.E. Allen, D.R. Holberg (1987) CMOS Analog Circuit Design, Holt, Rinehart and Winston.
[12] F. Gao and W.M. Snelgrove (1991) Adaptive linearization of a loudspeaker, Proc. International Conference on Acoustics, Speech and Signal Processing, pp. 3589-3592.
1998
A Precise Characterization of the Class of Languages Recognized by Neural Nets under Gaussian and other Common Noise Distributions

Wolfgang Maass*
Inst. for Theoretical Computer Science
Technische Universität Graz
Klosterwiesgasse 32/2, A-8010 Graz, Austria
email: maass@igi.tu-graz.ac.at

Eduardo D. Sontag
Dep. of Mathematics
Rutgers University
New Brunswick, NJ 08903, USA
email: sontag@hilbert.rutgers.edu

Abstract

We consider recurrent analog neural nets where each gate is subject to Gaussian noise, or any other common noise distribution whose probability density function is nonzero on a large set. We show that many regular languages cannot be recognized by networks of this type, for example the language {w ∈ {0,1}* | w begins with 0}, and we give a precise characterization of those languages which can be recognized. This result implies severe constraints on possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand, we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type.

1 Introduction

A fairly large literature (see [Omlin, Giles, 1996] and the references therein) is devoted to the construction of analog neural nets that recognize regular languages. Any physical realization of the analog computational units of an analog neural net in technological or biological systems is bound to encounter some form of "imprecision" or analog noise at its analog computational units. We show in this article that this effect has serious consequences for the computational power of recurrent analog neural nets. We show that any analog neural net whose computational units are subject to Gaussian or other common noise distributions cannot recognize arbitrary regular languages. For example, such an analog neural net cannot recognize the regular language {w ∈ {0,1}* | w begins with 0}.
* Partially supported by the Fonds zur Förderung der wissenschaftlichen Forschung (FWF), Austria, project P12153.

A precise characterization of those regular languages which can be recognized by such analog neural nets is given in Theorem 1.1. In section 3 we introduce a simple technique for making feedforward neural nets robust with regard to the same types of analog noise. This method is employed to prove the positive part of Theorem 1.1. The main difficulty in proving Theorem 1.1 is its negative part, for which adequate theoretical tools are introduced in section 2.

Before we can give the exact statement of Theorem 1.1 and discuss related preceding work we have to give a precise definition of computations in noisy neural networks. From the conceptual point of view this definition is basically the same as for computations in noisy boolean circuits (see [Pippenger, 1985] and [Pippenger, 1990]). However, it is technically more involved since we have to deal here with an infinite state space. We will first illustrate this definition for a concrete case, a recurrent sigmoidal neural net with Gaussian noise, and then indicate the full generality of our result, which makes it applicable to a very large class of other types of analog computational systems with analog noise.

Consider a recurrent sigmoidal neural net N consisting of n units, that receives at each time step t an input u_t from some finite alphabet U (for example U = {0,1}). The internal state of N at the end of step t is described by a vector x_t ∈ [−1,1]^n, which consists of the outputs of the n sigmoidal units at the end of step t. A computation step of the network N is described by

x_{t+1} = σ(W x_t + h + u_t c + V_t)

where W ∈ ℝ^{n×n} and c, h ∈ ℝ^n represent the weight matrix and vectors, σ is a sigmoidal activation function (e.g., σ(y) = 1/(1 + e^{−y})) applied to each vector component, and V_1, V_2, ... is a sequence of n-vectors drawn independently from some Gaussian distribution.
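This noisy update rule is easy to simulate. The sketch below uses tanh in place of the logistic sigmoid so that states lie in [−1,1]^n, and the one-unit network, accepting set, and all numerical parameters are illustrative assumptions, not from the paper:

```python
import numpy as np

def run_noisy_net(word, W, h, c, noise_std, seed=0):
    """One sample trajectory of x_{t+1} = sigma(W x_t + h + u_t c + V_t)
    with V_t ~ N(0, noise_std^2 I) and sigma = tanh."""
    rng = np.random.default_rng(seed)
    x = np.zeros(W.shape[0])
    for u in word:
        V = rng.normal(0.0, noise_std, size=x.shape)
        x = np.tanh(W @ x + h + u * c + V)
    return x

def acceptance_prob(word, W, h, c, noise_std, trials=500):
    """Monte-Carlo estimate of the probability that the final state lies
    in a (hypothetical) accepting set {x : x_0 > 0}."""
    hits = sum(run_noisy_net(word, W, h, c, noise_std, seed=s)[0] > 0
               for s in range(trials))
    return hits / trials

# a single noisy unit that is pushed towards +1 by input symbol 1
W = np.array([[0.9]]); h = np.array([0.0]); c = np.array([2.0])
p = acceptance_prob([1, 0, 0, 0, 0], W, h, c, noise_std=0.3)
```

Because fresh noise is injected at every step, the influence of early input symbols on the state distribution decays as the word is read, which is the intuition behind the negative result developed below.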
In analogy to the case of noisy boolean circuits [Pippenger, 1990], one says that this network N recognizes a language L ⊆ U* with reliability ε (where ε ∈ (0, 1/2] is some given constant) if immediately after reading an arbitrary word w ∈ U* the network N is with probability ≥ 1/2 + ε in an accepting state in case that w ∈ L, and with probability ≤ 1/2 − ε in an accepting state in case that w ∉ L.¹ We will show in this article that even if the parameters of the Gaussian noise distribution for each sigmoidal unit can be determined by the designer of the neural net, it is impossible to find a size n, weight matrix W, vectors h, c, and a reliability ε ∈ (0, 1/2] so that the resulting recurrent sigmoidal neural net with Gaussian noise accepts the simple regular language {w ∈ {0,1}* | w begins with 0} with reliability ε. This result exhibits a fundamental limitation for making a recurrent analog neural net noise robust, even in a case where the noise distribution is known and of a rather benign type. This quite startling negative result should be contrasted with the large number of known techniques for making a feedforward boolean circuit robust against noise, see [Pippenger, 1990].

Our negative result turns out to be of a very general nature, holding for virtually all related definitions of noisy analog neural nets and also for completely different models of analog computation in the presence of Gaussian or similar noise. Instead of the state set [−1,1]^n one can take any compact set Ω ⊆ ℝ^n, and instead of the map (x, u) ↦ W x + h + u c one can consider an arbitrary map f : Ω × U → Θ for a compact set Θ ⊆ ℝ^n, where f(·, u) is Borel measurable for each fixed u ∈ U. Instead of a sigmoidal activation function σ and a Gaussian distributed noise vector V, it suffices to assume that σ : ℝ^n → Ω is some arbitrary Borel measurable function and V is some ℝ^n-valued random variable with a density φ(·)
that has a wide support.² In order to define a computation in such a system we consider for each u ∈ U the stochastic kernel K_u defined by

K_u(x, A) := Prob[σ(f(x, u) + V) ∈ A]  for x ∈ Ω and A ⊆ Ω.

For each (signed, Borel) measure μ on Ω, and each u ∈ U, we let 𝕂_u μ be the (signed, Borel) measure defined on Ω by (𝕂_u μ)(A) := ∫ K_u(x, A) dμ(x). Note that 𝕂_u μ is a probability measure whenever μ is. For any sequence of inputs w = u_1, ..., u_r, we consider the composition of the evolution operators 𝕂_{u_i}:

𝕂_w := 𝕂_{u_r} ∘ 𝕂_{u_{r−1}} ∘ ··· ∘ 𝕂_{u_1}.   (1)

If the probability distribution of states at any given instant is given by the measure μ, then the distribution of states after a single computation step on input u ∈ U is given by 𝕂_u μ, and after r computation steps on inputs w = u_1, ..., u_r the new distribution is 𝕂_w μ, where we are using the notation (1). In particular, if the system starts at a particular initial state ξ, then the distribution of states after r computation steps on w is 𝕂_w δ_ξ, where δ_ξ is the probability measure concentrated on {ξ}. That is to say, for each measurable subset F ⊆ Ω,

Prob[X_{r+1} ∈ F | X_1 = ξ, input = w] = (𝕂_w δ_ξ)(F).

We fix an initial state ξ ∈ Ω, a set F of "accepting" or "final" states, and a "reliability" level ε > 0, and say that the resulting noisy analog computational system M recognizes the language L ⊆ U* if for all w ∈ U*:

w ∈ L  ⟹  (𝕂_w δ_ξ)(F) ≥ 1/2 + ε,
w ∉ L  ⟹  (𝕂_w δ_ξ)(F) ≤ 1/2 − ε.

¹ According to this definition, a network N that is, after reading some w ∈ U*, in an accepting state with probability strictly between 1/2 − ε and 1/2 + ε does not recognize any language L ⊆ U*.

² More precisely: We assume that there exists a subset Ω₀ of Ω and some constant c₀ > 0 such
In general a neural network that simulates a DFA will carry out not just one, but a fixed number k of computation steps (= state transitions) of the form x' = σ(W x + h + u c + V) for each input symbol u ∈ U that it reads (see the constructions described in [Omlin, Giles, 1996], and in section 3 of this article). This can easily be reflected in our model by formally replacing any input sequence w = u_1, u_2, ..., u_r from U* by a padded sequence w̃ = u_1, b^{k−1}, u_2, b^{k−1}, ..., u_r, b^{k−1} from (U ∪ {b})*, where b is a blank symbol not in U, and b^{k−1} denotes a sequence of k − 1 copies of b (for some arbitrarily fixed k ≥ 1). This completes our definition of language recognition by a noisy analog computational system M with discrete time. This definition essentially agrees with that given in [Maass, Orponen, 1997].

We employ the following common notations from formal language theory: We write w_1 w_2 for the concatenation of two strings w_1 and w_2, U^r for the set of all concatenations of r strings from U, U* for the set of all concatenations of any finite number of strings from U, and UV for the set of all strings w_1 w_2 with w_1 ∈ U and w_2 ∈ V. The main result of this article is the following:

Theorem 1.1 Assume that U is some arbitrary finite alphabet. A language L ⊆ U* can be recognized by a noisy analog computational system of the previously specified type if and only if L = E_1 ∪ U* E_2 for two finite subsets E_1 and E_2 of U*.

A corresponding version of Theorem 1.1 for discrete computational systems was previously shown in [Rabin, 1963]. More precisely, Rabin had shown that probabilistic automata with strictly positive matrices can recognize exactly the same class of languages L that occur in our Theorem 1.1. Rabin referred to these languages as definite languages.
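A definite language in this sense can be recognized by inspecting only a bounded prefix and suffix of the word. A minimal sketch, with toy sets E1 and E2 that are hypothetical illustrations rather than examples from the paper:

```python
def make_definite_recognizer(E1, E2):
    """Membership test for a definite language L = E1 ∪ U*E2 as in
    Theorem 1.1: E1 is a finite set of whole (short) words accepted
    outright, E2 a finite set of accepted suffixes."""
    E1, E2 = set(E1), set(E2)
    def accepts(w):
        return w in E1 or any(w.endswith(s) for s in E2)
    return accepts

# toy definite language over U = {'0','1'}: the word '1', plus every
# word that ends in '01'
accepts = make_definite_recognizer(E1={'1'}, E2={'01'})
```

For words longer than the suffixes in E2, membership depends only on the last few symbols, so prepending an arbitrary prefix never changes the answer, exactly the invariance that Lemma 2.4 below establishes for noisy analog systems.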
Language recognition by analog computational systems with analog noise has previously been investigated in [Casey, 1996] for the special case of bounded noise and perfect reliability (i.e., ∫_{||v||≤η} φ(v) dv = 1 for some small η > 0 and ε = 1/2 in our terminology), and in [Maass, Orponen, 1997] for the general case. It was shown in [Maass, Orponen, 1997] that any such system can only recognize regular languages. Furthermore it was shown there that if ∫_{||v||≤η} φ(v) dv = 1 for some small η > 0 then all regular languages can be recognized by such systems. In the present paper we focus on the complementary case where the condition "∫_{||v||≤η} φ(v) dv = 1 for some small η > 0" is not satisfied, i.e., analog noise may move states over larger distances in the state space. We show that even if the probability of such an event is arbitrarily small, the neural net will no longer be able to recognize arbitrary regular languages.

[Continuation of footnote 2:] that the following two properties hold: φ(v) ≥ c₀ for all v ∈ Q := σ^{−1}(Ω₀) − Θ (that is, Q is the set consisting of all possible differences z − y with σ(z) ∈ Ω₀ and y ∈ Θ), and σ^{−1}(Ω₀) has finite and nonzero Lebesgue measure m₀ = λ(σ^{−1}(Ω₀)).

2 A Constraint on Language Recognition

We prove in this section the following result for arbitrary noisy computational systems M as specified at the end of section 1:

Theorem 2.1 Assume that U is some arbitrary alphabet. If a language L ⊆ U* is recognized by M, then there are subsets E_1 and E_2 of U^{≤r}, for some integer r, such that L = E_1 ∪ U* E_2. In other words: whether a string w ∈ U* belongs to the language L can be decided by just inspecting the first r and the last r symbols of w.

2.1 A General Fact about Stochastic Kernels

Let (S, 𝒮) be a measure space, and let K be a stochastic kernel.³ As in the special case of the K_u's above, for each (signed) measure μ on (S, 𝒮), we let 𝕂μ be the (signed) measure defined on 𝒮 by (𝕂μ)(A) := ∫ K(x, A) dμ(x).
Observe that Kμ is a probability measure whenever μ is. Let c > 0 be arbitrary. We say that K satisfies Doeblin's condition (with constant c) if there is some probability measure ρ on (S, 𝒮) so that

K(x, A) ≥ c ρ(A) for all x ∈ S, A ∈ 𝒮. (2)

(Necessarily c ≤ 1, as is seen by considering the special case A = S.) This condition is due to [Doeblin, 1937]. We denote by ‖μ‖ the total variation of the (signed) measure μ. Recall that ‖μ‖ is defined as follows. One may decompose S into a disjoint union of two sets A and B, in such a manner that μ is nonnegative on A and nonpositive on B. Letting the restrictions of μ to A and B be "μ⁺" and "−μ⁻" respectively (and zero on B and A respectively), we may decompose μ as a difference of nonnegative measures with disjoint supports, μ = μ⁺ − μ⁻. Then, ‖μ‖ = μ⁺(A) + μ⁻(B). The following Lemma is a "folk" fact ([Papinicolaou, 1978]).

Lemma 2.2 Assume that K satisfies Doeblin's condition with constant c. Let μ be any (signed) measure such that μ(S) = 0. Then ‖Kμ‖ ≤ (1 − c) ‖μ‖. •

2.2 Proof of Theorem 2.1

Lemma 2.3 There is a constant c > 0 such that K_u satisfies Doeblin's condition with constant c, for every u ∈ U.

Proof. Let Ω_0, c_0, and 0 < m_0 < 1 be as in the second footnote, and introduce the following (Borel) probability measure on Ω_0: λ_0(A) := (1/m_0) λ(σ⁻¹(A)). (Footnote 3: That is to say, K(x, ·) is a probability distribution for each x, and K(·, A) is a measurable function for each Borel measurable set A.) Pick any measurable A ⊆ Ω_0 and any y ∈ Ω. Then, Z(y, A) = Prob[σ(y + V) ∈ A] = Prob[y + V ∈ σ⁻¹(A)] = ∫_{A_y} φ(v) dv ≥ c_0 λ(A_y) = c_0 λ(σ⁻¹(A)) = c_0 m_0 λ_0(A), where A_y := σ⁻¹(A) − {y} ⊆ Q. We conclude that Z(y, A) ≥ c λ_0(A) for all y, A, where c = c_0 m_0. Finally, we extend the measure λ_0 to all of Ω by assigning zero measure to the complement of Ω_0, that is, ρ(A) := λ_0(A ∩ Ω_0) for all measurable subsets A of Ω.
Pick u ∈ U; we will show that K_u satisfies Doeblin's condition with the above constant c (and using ρ as the "comparison" measure in the definition). Consider any x ∈ Ω and measurable A ⊆ Ω. Then, K_u(x, A) = Z(f(x, u), A) ≥ Z(f(x, u), A ∩ Ω_0) ≥ c λ_0(A ∩ Ω_0) = c ρ(A), as required. •

For every two probability measures μ_1, μ_2 on Ω, applying Lemma 2.2 to μ := μ_1 − μ_2, we know that ‖K_u μ_1 − K_u μ_2‖ ≤ (1 − c) ‖μ_1 − μ_2‖ for each u ∈ U. Recursively, then, we conclude:

‖K_w μ_1 − K_w μ_2‖ ≤ (1 − c)^r ‖μ_1 − μ_2‖ ≤ 2 (1 − c)^r (3)

for all words w of length ≥ r. Now pick any integer r such that (1 − c)^r < 2ε. From Equation (3), we have that, for each measurable set A,

|(K_w μ_1)(A) − (K_w μ_2)(A)| < 2ε (4)

for all such w and any two probability measures μ_1, μ_2. (Because, for any two probability measures ν_1 and ν_2, and any measurable set A, 2 |ν_1(A) − ν_2(A)| ≤ ‖ν_1 − ν_2‖.)

Lemma 2.4 Pick any v ∈ U* and w ∈ U^r. Then w ∈ L ⟺ vw ∈ L.

Proof. Assume that w ∈ L, that is, (K_w δ_e)(F) ≥ ½ + ε. Applying inequality (4) to the measures μ_1 := δ_e and μ_2 := K_v δ_e and A = F, we have that |(K_w δ_e)(F) − (K_vw δ_e)(F)| < 2ε, and this implies that (K_vw δ_e)(F) > ½ − ε, i.e., vw ∈ L. (Since ½ − ε < (K_vw δ_e)(F) < ½ + ε is ruled out.) If w ∉ L, the argument is similar. •

We have proved that L = E_1 ∪ U* E_2, where E_1 := L ∩ U^{≤r} and E_2 := L ∩ U^r are both included in U^{≤r}. This completes the proof of Theorem 2.1. •

3 Construction of Noise Robust Analog Neural Nets

In this section we exhibit a method for making feedforward analog neural nets robust with regard to arbitrary analog noise of the type considered in the preceding sections. This method will be used to prove in Corollary 3.2 the missing positive part of the claim of the main result (Theorem 1.1) of this article.
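The contraction step driving the argument above (Lemma 2.2, applied once per input symbol) can be illustrated numerically with a finite-state analogue, where the kernel is a row-stochastic matrix; the matrix, states, and constant below are our own toy example, not from the article.

```python
import numpy as np

# A row-stochastic kernel K on 3 states.  Every entry satisfies
# K[x, y] >= c * rho(y) with rho uniform, so K satisfies Doeblin's
# condition (2) with constant c.
K = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
rho = np.full(3, 1 / 3)
c = (K / rho).min()                   # here c = 0.3

mu1 = np.array([1.0, 0.0, 0.0])       # two initial probability distributions
mu2 = np.array([0.0, 0.0, 1.0])
for _ in range(5):
    diff = np.abs(mu1 - mu2).sum()    # total variation ||mu1 - mu2||
    mu1, mu2 = mu1 @ K, mu2 @ K
    # Lemma 2.2: applying K shrinks the difference by a factor (1 - c),
    # so repeated application forgets the initial state geometrically.
    assert np.abs(mu1 - mu2).sum() <= (1 - c) * diff + 1e-12
```

After r applications the two distributions differ by at most 2(1 − c)^r in total variation, which is exactly the bound (for words of length ≥ r) that forces membership in L to depend only on a bounded prefix and suffix.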
Theorem 3.1 Let C be any (noiseless) feedforward threshold circuit, and let σ : ℝ → [−1, 1] be some arbitrary function with σ(u) → 1 for u → ∞ and σ(u) → −1 for u → −∞. Furthermore assume that δ, ρ ∈ (0, 1) are some arbitrary given parameters. Then one can transform, for any given analog noise of the type considered in section 1, the noiseless threshold circuit C into an analog neural net N_C with the same number of gates, whose gates employ the given function σ as activation function, so that for any circuit input x ∈ {−1, 1}^m the output of the noisy analog neural net N_C differs with probability ≥ 1 − δ by at most ρ from the output of C.

Idea of the proof. Let k be the maximal fan-in of a gate in C, and let w be the maximal absolute value of a weight in C. We choose R > 0 so large that the density function φ(·) of the noise vector V satisfies, for each gate with n inputs in C, ∫_{|v_i| ≥ R} φ(v) dv ≤ δ/(2n) for i = 1, ..., n. Furthermore we choose u_0 > 0 so large that σ(u) ≥ 1 − ρ/(wk) for u ≥ u_0 and σ(u) ≤ −1 + ρ/(wk) for u ≤ −u_0. Finally we choose a factor γ > 0 so large that γ(1 − ρ) − R ≥ u_0. Let N_C be the analog neural net that results from C through multiplication of all weights and thresholds with γ and through replacement of the Heaviside activation functions of the gates in C by the given activation function σ. •

The following Corollary provides the proof of the positive part of our main result Theorem 1.1. It holds for any σ considered in Theorem 3.1.

Corollary 3.2 Assume that U is some arbitrary finite alphabet, and language L ⊆ U* is of the form L = E_1 ∪ U* E_2 for two arbitrary finite subsets E_1 and E_2 of U*. Then the language L can be recognized by a noisy analog neural net N with any desired reliability ε ∈ (0, ½), in spite of arbitrary analog noise of the type considered in section 1.

Proof.
We first construct a feedforward threshold circuit C for recognizing L, that receives each input symbol from U in the form of a bitstring u ∈ {0, 1}^l (for some fixed l ≥ log_2 |U|), that is encoded as the binary states of l input units of the boolean circuit C. Via a tapped delay line of fixed length d (which can easily be implemented in a feedforward threshold circuit by d layers, each consisting of l gates that compute the identity function on a single binary input from the preceding layer) one can achieve that the feedforward circuit C computes any given boolean function of the last d sequences from {0, 1}^l that were presented to the circuit. On the other hand, for any language of the form L = E_1 ∪ U* E_2 with E_1, E_2 finite there exists some d ∈ ℕ such that for each w ∈ U* one can decide whether w ∈ L by just inspecting the last d characters of w. Therefore a feedforward threshold circuit C with a tapped delay line of the type described above can decide whether w ∈ L. We apply Theorem 3.1 to this circuit C for δ = ρ = min(½ − ε, ¼). We define the set F of accepting states for the resulting analog neural net N_C as the set of those states where the computation is completed and the output gate of N_C assumes a value ≥ 3/4. Then according to Theorem 3.1 the analog neural net N_C recognizes L with reliability ε. To be formally precise, one has to apply Theorem 3.1 to a threshold circuit C that receives its input not in a single batch, but through a sequence of d batches. The proof of Theorem 3.1 readily extends to this case. •

4 Conclusions

We have exhibited a fundamental limitation of analog neural nets with Gaussian or other common noise distributions whose probability density function is nonzero on a large set: they cannot accept the very simple regular language {w ∈ {0, 1}* | w begins with 0}.
This holds even if the designer of the neural net is allowed to choose the parameters of the Gaussian noise distribution and the architecture and parameters of the neural net. The proof of this result introduces new mathematical arguments into the investigation of neural computation, which can also be applied to other stochastic analog computational systems. We also have presented a method for making feedforward analog neural nets robust against the same type of noise. This implies that certain regular languages, such as for example {w ∈ {0, 1}* | w ends with 0}, can be recognized by a recurrent analog neural net with Gaussian noise. In combination with our negative result this yields a precise characterization of all regular languages that can be recognized by recurrent analog neural nets with Gaussian noise, or with any other noise distribution that has a large support.

References

[Casey, 1996] Casey, M., "The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction", Neural Computation 8, 1135-1178, 1996.
[Doeblin, 1937] Doeblin, W., "Sur les propriétés asymptotiques de mouvement régis par certains types de chaînes simples", Bull. Math. Soc. Roumaine Sci. 39(1): 57-115; (2) 3-61, 1937.
[Maass, Orponen, 1997] Maass, W., and Orponen, P., "On the effect of analog noise on discrete-time analog computations", Advances in Neural Information Processing Systems 9, 1997, 218-224; journal version: Neural Computation 10(5), 1071-1095, 1998.
[Omlin, Giles, 1996] Omlin, C. W., Giles, C. L., "Constructing deterministic finite-state automata in recurrent neural networks", J. Assoc. Comput. Mach. 43 (1996), 937-972.
[Papinicolaou, 1978] Papinicolaou, G., "Asymptotic Analysis of Stochastic Equations", in Studies in Probability Theory, MAA Studies in Mathematics, vol. 18, 111-179, edited by M. Rosenblatt, Math. Assoc. of America, 1978.
[Pippenger, 1985] Pippenger, N., "On networks of noisy gates", IEEE Sympos.
on Foundations of Computer Science, vol. 26, IEEE Press, New York, 30-38, 1985.
[Pippenger, 1989] Pippenger, N., "Invariance of complexity measures for networks with unreliable gates", J. of the ACM, vol. 36, 531-539, 1989.
[Pippenger, 1990] Pippenger, N., "Developments in 'The Synthesis of Reliable Organisms from Unreliable Components'", Proc. of Symposia in Pure Mathematics, vol. 50, 311-324, 1990.
[Rabin, 1963] Rabin, M., "Probabilistic automata", Information and Control, vol. 6, 230-245, 1963.
1998
General-Purpose Localization of Textured Image Regions

Ruth Rosenholtz*
Xerox PARC
3333 Coyote Hill Rd.
Palo Alto, CA 94304

Abstract

We suggest a working definition of texture: Texture is stuff that is more compactly represented by its statistics than by specifying the configuration of its parts. This definition suggests that to find texture we look for outliers to the local statistics, and label as texture the regions with no outliers. We present a method, based upon this idea, for labeling points in natural scenes as belonging to texture regions, while simultaneously allowing us to label low-level, bottom-up cues for visual attention. This method is based upon recent psychophysics results on processing of texture and popout.

1 WHAT IS TEXTURE, AND WHY DO WE WANT TO FIND IT?

In a number of problems in computer vision and image processing, one must distinguish between image regions that correspond to objects and those which correspond to texture, and perform different processing depending upon the type of region. Current computer vision algorithms assume one magically knows this region labeling. But what is texture? We have the notion that texture involves a pattern that is somehow homogeneous, or in which signal changes are "too complex" to describe, so that aggregate properties must be used instead (Saund, 1998). There is by no means a firm division between texture and objects; rather, the characterization often depends upon the scale of interest (Saund, 1998). *Email: rruth@parc.xerox.com

Ideally the definition of texture should probably depend upon the application. We investigate a definition that we believe will be of fairly general utility: Texture is stuff that seems to belong to the local statistics. We propose extracting several texture features, at several different scales, and labeling as texture those regions whose feature values are likely to have come from the local distribution.
Outliers to the local statistics tend to draw our attention (Rosenholtz, 1997, 1998). The phenomenon is often referred to as "popout." Thus while labeling (locally) statistically homogeneous regions as texture, we can simultaneously highlight salient outliers to the local statistics. Our revised definition is that texture is the absence of popout. In Section 2, we discuss previous work in both human perception and in finding texture and regions of interest in an image. In Section 3, we describe our method. We present and discuss results on a number of real images in Section 4.

2 PREVIOUS WORK

See (Wolfe, 1998) for a review of the visual search literature. Popout is typically studied using simple displays, in which an experimental subject searches for the unusual, target item, among the other, distractor items. One typically attempts to judge the "saliency," or degree to which the target pops out, by studying the efficiency of search for that item. Typically popout is modeled by a relatively low-level operator, which operates independently on a number of basic features of the image, including orientation, contrast/color, depth, and motion. In this paper, we look only at the features of contrast and orientation. Within the image-processing field, much of the work in finding texture has defined as texture any region with a high luminance variance, e.g. Vaisey & Gersho (1992). Unfortunately, the luminance variance in a region containing an edge can be as high as that in a textured region. Won & Park (1997) use model fitting to detect image blocks containing an edge, and then label blocks with high variance as containing texture. Recently, several computer vision researchers have also tackled this problem. Leung & Malik (1996) found regions of completely deterministic texture. Other researchers have used the definition that if the luminance goes up and then down again (or vice versa) it's texture (Forsyth et al., 1996).
However, this method will treat lines as if they were texture. Also, with no notion of similarity within a texture (also lacking in the image-processing work), one would mark a "fault" in a texture as belonging to that texture. This would be unacceptable for a texture synthesis application, in which a routine that tried to synthesize such a texture would most likely fail to reproduce the (highly visible) fault. More recently, Shi and Malik (1998) presented a method for segmenting images based upon texture features. Their method performs extremely well at the segmentation task, dividing an image into regions with internal similarity that is high compared to the similarity across regions. However, it is difficult to compare with their results, since they do not explicitly label a subset of the resulting regions as texture. Furthermore, this method may also tend to mark a "fault" in a texture as belonging to that texture. This is both because the method is biased against separating out small regions, and because the grouping of a patch with one region depends as much upon the difference between that patch and other regions as it does upon the similarity between the patch and the given region. Very little computer vision work has been done on attentional cues. Milanese et al. (1993) found salient image regions using both top-down information and a bottom-up "conspicuity" operator, which marks a local region as more salient the greater the difference between a local feature value and the mean feature value in the surrounding region. However, for the same difference in means, a local region is less salient when there is a greater variance in the feature values in the surrounding region (Duncan & Humphreys, 1989; Rosenholtz, 1997). We use as our saliency measure a test for outliers to the local distribution. This captures, in many cases,
the dependence of saliency on the difference between a given feature value and the local mean, relative to the local standard deviation. We will discuss our saliency measure in greater detail in the following section.

3 FINDING TEXTURE AND REGIONS OF INTEREST

We compute multiresolution feature maps for orientation and contrast, and then look for outliers in the local orientation and contrast statistics. We do this by first creating a 3-level Gaussian pyramid representation of the image. To extract contrast, we filter the pyramid with a difference of circularly symmetric Gaussians. The response of these filters will oscillate, even in a region with constant-contrast texture (e.g. a sinewave pattern). We approximate a computation of the maximum response of these filters over a small region by first squaring the filter responses, and then filtering the contrast energy with an appropriate Gaussian. Finally, we threshold the contrast to eliminate low-contrast regions ("flat" texture). These thresholds (one for each scale) were set by examining the visibility of sinewave patterns of various spatial frequencies. We compute orientation in a simple and biologically plausible way, using Bergen & Landy's (1991) "back pocket model" for low-level computations:

1. Filter the pyramid with horizontal, vertical, and ±45° oriented Gaussian second derivatives.
2. Compute opponent energy by squaring the filter outputs, pooling them over a region 4 times the scale of the second derivative filters, and subtracting the vertical from the horizontal response and the +45° from the −45° response.
3. Normalize the opponent energy at each scale by dividing by the total energy in the 4 orientation energy bands at that scale.

The result is two images at each scale of the pyramid. To a good approximation, in regions which are strongly oriented, these images represent k·cos(2θ) and k·sin(2θ), where θ is the local orientation at that scale,
and k is a value between 0 and 1 which is related to the local orientation specificity. Orientation estimates from points with low specificity tend to be very noisy. In images of white noise, 80% of the estimates of k fall below 0.5; therefore, with 80% confidence, an orientation specificity of k > 0.5 did not occur due to chance. We use this value to threshold out orientation estimates with low "orientedness." We then estimate D, the local feature distribution, for each feature and scale, using the method of Parzen windows. The blurring of the distribution estimate by the Parzen window mimics uncertainty in estimates of feature values by the visual system. We collect statistics over a local integration region. For texture processing, the size of this region is independent of viewing distance, and is roughly 10S in diameter, where S is the support of the Gaussian 2nd derivative filters used to extract the texture features (Kingdom & Keeble, 1997; Kingdom et al., 1995). We next compute a non-parametric measure of saliency:

saliency = −log( P(v|D) / max_x P(x|D) )  (1)

Note that if D were Gaussian N(μ, σ²), this simplifies to

(x − μ)² / (2σ²)  (2)

which should be compared to the standard parametric test for outliers, which uses the measure (x − μ)/σ. Our saliency measure is essentially a more general, non-parametric form of this measure (i.e. it does not assume a Gaussian distribution). Points with saliency less than 0.5 are labeled as candidate texture points. If D were Gaussian, this would correspond to feature estimates within one standard deviation of the mean. Points with saliency greater than 3.1 are labeled as candidates for bottom-up attentional cues. If D were Gaussian, this would correspond to feature estimates more than 2.5σ from the mean, a standard parametric test for outliers. One could, of course, keep the raw saliency values, as a measure of the likelihood that a region contained texture, rather than setting a hard threshold.
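The saliency measure of Equation (1) can be sketched in one dimension as follows (an illustration of ours; the Parzen bandwidth and the sample data are made up, not the paper's values):

```python
import numpy as np

def saliency(v, samples, h=0.1):
    """Nonparametric saliency: -log(P(v|D) / max_x P(x|D)), where the local
    feature distribution D is estimated from `samples` with Gaussian Parzen
    windows of bandwidth h.  The density is left unnormalized because the
    normalization constant cancels in the ratio."""
    def parzen(x):
        return np.mean(np.exp(-0.5 * ((x - samples) / h) ** 2))
    # Approximate max_x P(x|D) by evaluating the density on a dense grid.
    grid = np.linspace(samples.min() - 3 * h, samples.max() + 3 * h, 1000)
    p_max = max(parzen(x) for x in grid)
    return -np.log(parzen(v) / p_max)

rng = np.random.default_rng(0)
D = rng.normal(0.5, 0.05, size=500)   # locally homogeneous feature values
assert saliency(0.5, D) < 0.5         # near the mode: candidate texture point
assert saliency(0.9, D) > 3.1         # an outlier: candidate attentional cue
```

For a Gaussian D the same thresholds reproduce the parametric rules quoted in the text: saliency 0.5 corresponds to one standard deviation from the mean, and 3.1 to about 2.5σ.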
We use a hard threshold in our examples to better display the results. Both the texture images and the region of interest images are median-filtered to remove extraneous points.

4 EXPERIMENTAL RESULTS

Figure 1 shows several example images. Figures 2, 3, and 4 show texture found at each scale of processing. The striped and checkered patterns represent oriented and homogeneous contrast texture, respectively. The absence of an image in any of these figures means that no texture of the given type was found in that image at the given scale. Note that we perform no segmentation of one texture from another. For the building image, the algorithm labeled bricks and window panes as fine-scale texture, and windows and shutters as coarser-scale texture. The leopard skin and low-frequency stripes in the lower right corner of the leopard image were correctly labeled as texture. In the desk image, the "wood" texture was correctly identified. The regular pattern of windows was marked as texture in the hotel image. In the house image, the wood siding, trees, and part of the grass were labeled as texture (much of the grass was low contrast and labeled as "flat" texture). One of the bushes is correctly identified as having coarser texture than the other has. In the lighthouse image, the house sans window, fence, and tower were marked, as well as a low-frequency oriented pattern in the clouds. Figure 5 shows the regions of interest that were found (the striped and plaid patterns here have no meaning but were chosen for maximum visibility). Most complex natural scenes had few interesting low-level attentional areas. In the lighthouse image, the life preserver is marked. In the hotel, curved or unusual angular windows are identified as attentional cues, as well as the top of the building. Both of these results are in agreement with psychophysical results showing that observers quickly identify curved or bent lines among straight lines (reviewed in Wolfe, 1998).
The simpler desk scene yields more intuitive results, with each of the 3 objects labeled, as well as the phone cord. Bottom-up attentional cues are outliers to the local distribution of features, and we have suggested that texture is the absence of such outliers. This definition captures some of the intuition that texture is homogeneous and statistical in nature. We presented a method for finding contrast and orientation outliers, and results both on localizing texture and on finding popout in natural images. For the simple desk image, the algorithm highlights salient regions that correspond to our notions of the important objects in the scene. On complicated natural scenes, its results are less intuitive, suggesting that search in natural scenes makes use of higher-level processing such as grouping into objects. This result should not be terribly surprising, but serves as a useful check on simple low-level models of visual attention. The algorithm does a good job of identifying textured regions at a number of different scales, with the results perhaps more intuitive at finer scales.

Acknowledgments

This work was partially supported by an NRC postdoctoral award at NASA Ames. Many thanks to David Marimont and Eric Saund for useful discussions.

References

J. R. Bergen and M. S. Landy (1991), "Computational modeling of visual texture segmentation," Computational Models of Visual Processing, Landy and Movshon (eds.), pp. 252-271, MIT Press, Cambridge, MA.
J. Duncan and G. Humphreys (1989), "Visual search and stimulus similarity," Psych. Review 96, pp. 433-458.
D. Forsyth, J. Malik, M. Fleck, H. Greenspan, T. Leung, S. Belongie, C. Carson, and C. Bregler (1996), "Finding pictures of objects in collections of images," ECCV Workshop on Object Representation, Cambridge.
F. A. A. Kingdom, D. Keeble, and B. Moulden (1995), "Sensitivity to orientation modulation in micropattern-based textures," Vis. Res. 35, 1, pp.
79-91.
F. A. A. Kingdom and D. Keeble (1997), "The mechanism for scale invariance in orientation-defined textures," Invest. Ophthal. and Vis. Sci. (Suppl.) 38, 4, p. 636.
T. K. Leung and J. Malik (1996), "Detecting, localizing, and grouping repeated scene elements from an image," Proc. 4th European Conf. on Computer Vision, 1064, 1, pp. 546-555, Springer-Verlag, Cambridge.
R. Milanese, H. Wechsler, S. Gil, J.-M. Bost, and T. Pun (1993), "Integration of bottom-up and top-down cues for visual attention using non-linear relaxation," Proc. IEEE CVPR, pp. 781-785, IEEE Computer Society Press, Seattle.
R. Rosenholtz (1997), "Basic signal detection theory model does not explain search among heterogeneous distractors," Invest. Ophthal. and Vis. Sci. (Suppl.) 38, 4, p. 687.
R. Rosenholtz (1998), "A simple saliency model explains a number of motion popout phenomena," Invest. Ophthal. and Vis. Sci. (Suppl.) 39, 4, p. 629.
E. Saund (1998), "Scale and the Shape/Texture Continuum," Xerox Internal Technical Memorandum.
J. Shi and J. Malik (1998), "Self-Inducing Relational Distance and its Application to Image Segmentation," Proc. 5th European Conf. on Computer Vision, Burkhardt and Neumann (eds.), 1406, 1, pp. 528-543, Springer, Freiburg.
J. Vaisey and A. Gersho (1992), "Image compression with variable block size segmentation," IEEE Trans. Signal Processing 40, 8, pp. 2040-2060.
J. M. Wolfe (1998), "Visual search: a review," Attention, H. Pashler (ed.), pp. 13-74, Psychology Press Ltd., Hove, East Sussex, UK.
C. S. Won and D. K. Park (1997), "Image block classification and variable block size segmentation using a model-fitting criterion," Opt. Eng. 36, 8, pp. 2204-2209.

Figure 1: Original images.
Figure 2: Fine-scale texture. (a) oriented texture, (b) homogeneous contrast texture.
Figure 3: Medium-scale texture. (a) oriented texture, (b) homogeneous contrast texture.
Figure 4: Coarse-scale texture. (a) oriented texture, (b) homogeneous contrast texture.
Figure 5: Regions of interest.
1998
Making Templates Rotationally Invariant: An Application to Rotated Digit Recognition

Shumeet Baluja
baluja@cs.cmu.edu
Justsystem Pittsburgh Research Center & School of Computer Science, Carnegie Mellon University

Abstract

This paper describes a simple and efficient method to make template-based object classification invariant to in-plane rotations. The task is divided into two parts: orientation discrimination and classification. The key idea is to perform the orientation discrimination before the classification. This can be accomplished by hypothesizing, in turn, that the input image belongs to each class of interest. The image can then be rotated to maximize its similarity to the training images in each class (these contain the prototype object in an upright orientation). This process yields a set of images, at least one of which will have the object in an upright position. The resulting images can then be classified by models which have been trained with only upright examples. This approach has been successfully applied to two real-world vision-based tasks: rotated handwritten digit recognition and rotated face detection in cluttered scenes.

1 Introduction

Rotated text is commonly used in a variety of situations, ranging from advertisements, logos, official post-office stamps, and headlines in magazines, to name a few. For examples, see Figure 1. We would like to be able to recognize these digits or characters, regardless of their rotation.

Figure 1: Common examples of images which contain text that is not axis aligned include logos, post-office stamps, magazine headlines and consumer advertisements.

The focus of this paper is on the recognition of rotated digits. The simplest method for creating a system which can recognize digits rotated within the image-plane is to employ existing systems which are designed only for upright digit recognition [Le Cun et al., 1990][Le Cun et al., 1995a][Le Cun et al., 1995b][Lee, 1991][Guyon et al., 1989].
By repeatedly rotating the input image by small increments and applying the recognition system at each rotation, the digit will eventually be recognized. As will be discussed in this paper, besides being extremely computationally expensive, this approach is also error-prone. Because the classification of each digit must occur in many orientations, the likelihood of an incorrect match is high. The procedure presented in this paper to make templates rotationally invariant is significantly faster and more accurate than the one described above. Detailed descriptions of the procedure are given in Section 2. Section 3 demonstrates the applicability of this approach to a real-world vision-based task, rotated handwritten digit recognition. Section 4 closes the paper with conclusions and suggestions for future research. It also briefly describes the second application to which this method has been successfully applied, face detection in cluttered scenes.

2 Making Templates Rotationally Invariant

The process to make templates rotationally invariant is easiest to describe in the context of a binary classification problem; the extension to multiple classes is discussed later in this section. Imagine a simplified version of the digit recognition task: we want a detector for a single digit. Suppose we wish to tell whether the input contains the digit '3' or not. The challenge is that the '3' can be rotated within the image plane by an arbitrary amount. Recognizing rotated objects is a two step process. In the first step, a "De-Rotation" network is applied to the input image. This network analyzes the input before it is given to a "Detection" network. If the input contains a '3', the De-Rotation network returns the digit's angle of rotation. The window can then be rotated by the negative of that angle to make the '3' upright. Note that the De-Rotation network does not require a '3' as input.
If a non-'3' image is encountered, the De-Rotation network will return an unspecified rotation. However, a rotation of a non-'3' will yield another (perhaps different) image of a non-'3'. When the resulting image is given to the Detection network it will not detect a '3'. On the other hand, a rotated '3', which may not have been detected by the Detection network alone, will be rotated to an upright position by the De-Rotation network, and will subsequently be detected as a '3' by the Detection network. The Detection network is trained to output a positive value only if the input contains an upright '3', and a negative value otherwise (even if it contains a rotated '3'). It should be noted that the methods described here do not require neural networks. As shown in [Le Cun et al., 1995a, Le Cun et al., 1995b], a number of other classifiers can be used. The De-Rotation and Detection networks are used sequentially. First, the input image is processed by the De-Rotation network, which returns an angle of rotation, assuming the image contains a '3'. A simple geometric transformation of the image is performed to undo this rotation. If the original image contained a '3', it would now be upright. The resulting image is then passed to the Detection network. If the original image contained a '3', it can now be successfully detected. This idea can easily be extended to multiple-class classification problems: a De-Rotation network is trained for each object class to be recognized. For the digit recognition problem, 10 De-Rotation networks are trained, one for each of the digits 0..9. To classify the digits once they are upright, a single classification network is used with 10 outputs (instead of the detection networks trained on individual digits; alternative approaches will be described later in this paper). The classification network is used in the standard manner; the output with the maximum value is taken as the classification.
To classify a new image, the following procedure is used:

For each digit D (0 ≤ D ≤ 9):
1. Pass the image through De-Rotation-network-D. This returns the rotation angle.
2. Rotate the image by (−1.0 × returned rotation angle).
3. Pass the de-rotated image to the classification network.
4. If the classification network's maximum output is output D, the activation of output D is recorded. Otherwise digit D is eliminated as a candidate.

In most cases, this will eliminate all but one of the candidates. However, in some cases more than one candidate will remain. In these cases, the digit with the maximum recorded activation (from Step 4) is returned. In the unlikely event that no candidates remain, either the system can reject the sample as one it cannot classify, or it can return the maximum value which would have been recorded in Step 4 if none of the examples were rejected.

2.1 Network Specifics

To train the De-Rotation networks, images of rotated digits were input, with the rotation angle as the target output. Examples of rotated digits are shown in Figure 2. Each image is 28x28 pixels. The upright data sets are from the MNIST database [Le Cun et al., 1995a].

Figure 2: 8 examples of each of the 10 digits to be recognized. The first example in each group of eight is shown with no rotation; it is as it appears in the MNIST data set. The second through eighth examples show the same digit rotated in-plane by random amounts.

In the classification network, each output represents a distinct class; therefore, the standard 1-of-N output representation was used with 10 outputs.
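The per-digit procedure above can be sketched as follows (our own sketch in Python; `derotate_angle`, `classify`, and `rotate` stand in for the trained De-Rotation networks, the classification network, and an image-rotation routine, none of which are reproduced here):

```python
def classify_rotated_digit(image, derotate_angle, classify, rotate):
    """derotate_angle[D](image) -> hypothesized rotation assuming digit D;
    classify(image) -> list of 10 output activations;
    rotate(image, angle) -> rotated image."""
    candidates = {}
    for d in range(10):
        angle = derotate_angle[d](image)           # step 1: hypothesize digit d
        upright = rotate(image, -angle)            # step 2: undo the rotation
        outputs = classify(upright)                # step 3: classify
        if max(range(10), key=lambda i: outputs[i]) == d:
            candidates[d] = outputs[d]             # step 4: record activation
    if not candidates:
        return None                                # reject (or fall back)
    return max(candidates, key=candidates.get)     # highest recorded activation

# Toy stand-ins: an "image" is a (digit, angle) pair and the "networks" are
# oracles, purely to exercise the control flow.
derot = [lambda img, d=d: img[1] if img[0] == d else 0.0 for d in range(10)]
clf = lambda img: [0.9 if (img[0] == i and img[1] == 0) else 0.1 for i in range(10)]
rot = lambda img, a: (img[0], img[1] + a)
assert classify_rotated_digit((3, 30.0), derot, clf, rot) == 3
```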
To represent a continuous variable (the angle of rotation) in the outputs of the De-Rotation network, we used a Gaussian output encoding [Pomerleau, 1992] with 90 output units. With the Gaussian encoding, instead of only training the network to activate a single output (as is done in 1-of-N encoding), outputs close to the desired output are also activated, in proportion to their distance from the desired output. This representation avoids the imposed discontinuities of the strict 1-of-N encoding for images which are similar, but have only slight differences in rotations. Further, this representation allows finer granularity with the same number of output units than would be possible if a 1-of-N encoding was used [Pomerleau, 1992]. The network architecture for both the classification and the De-Rotation networks consists of a single hidden layer. However, unlike a standard fully-connected network, each hidden unit was only connected to a small patch of the 28x28 input. The De-Rotation networks used groups of hidden units in which each hidden unit was connected to only 2x2, 3x3, 4x4 & 5x5 patches of the inputs (in each of these groups, the patches were spaced 2x2 pixels apart; therefore, the last three groups had overlapping patches). This is similar to the networks used in [Baluja, 1997][Rowley et al., 1998a, 1998b] for face detection. Unlike the convolution networks used by [Le Cun et al., 1990], the weights into the hidden units were not shared.1 Note that many different local receptive field configurations were tried; almost all had equivalent performance.

850 S. Baluja

3 Rotated Handwritten Digit Recognition

To create a complete rotationally invariant digit recognition system, the first step is to segment each digit from the background. The second is to recognize the digit which has been segmented. Many systems have been proposed for segmenting written digits from background clutter [Jain & Yu, 1997][Sato et al., 1998][Satoh & Kanade, 1997].
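The Gaussian output encoding described above can be sketched as follows. The bump width `sigma_deg` and the use of circular distance are our assumptions; the paper specifies only that 90 output units with a Gaussian encoding are used:

```python
import numpy as np

def gaussian_encode(angle_deg, n_units=90, sigma_deg=8.0):
    """Target vector for the De-Rotation net: a Gaussian bump of activation
    centred on the output unit whose angle is closest to angle_deg."""
    centers = np.linspace(-180.0, 180.0, n_units, endpoint=False)  # 4 degrees apart
    # circular distance, so -179 and +179 degrees count as near neighbours
    dist = np.abs((centers - angle_deg + 180.0) % 360.0 - 180.0)
    return np.exp(-0.5 * (dist / sigma_deg) ** 2)

def gaussian_decode(activations, n_units=90):
    """Read the predicted angle back out as the centre of the strongest unit."""
    centers = np.linspace(-180.0, 180.0, n_units, endpoint=False)
    return float(centers[int(np.argmax(activations))])
```

Unlike a strict 1-of-N target, two nearby rotations produce overlapping target vectors, so the training signal changes smoothly with the rotation angle.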
In this paper, we concentrate on the recognition portion of the task. Given a segmented image of a potentially rotated digit, how do we recognize the digit? The first experiment conducted was to establish the base-line performance. We used only the standard, upright training set to train a classification network (this training set consists of 60,000 digits). This network was then tested on the testing set (the testing set contains 10,000 digits). In addition to measuring the performance on the upright testing set, the entire testing set was also rotated. As expected, performance rapidly degrades with rotation. A graph of the performance with respect to the rotation angle is shown in Figure 3.

Figure 3: Performance of the classification network trained only with upright images when tested on rotated images. As the angle of rotation increases, performance degrades. Note the spike around 180 degrees; this is because some digits look the same even when they are upside-down. The peak performance is approximately 97.5% (when the digits are upright).

It is interesting to note that around 180 degrees of rotation, performance slightly rises. This is because some of the digits are symmetric across the center horizontal axis: for example, the digits '8', '1', '2' & '5' can be recognized upside-down. Therefore, at these orientations, the upright detector works well for these digits. As mentioned earlier, the simplest method to make an upright digit classifier handle rotations is to repeatedly rotate the input image and classify it at each rotation. The first drawback to this approach is the severe computational expense. The second drawback is that because the digit is examined at many rotations, it may appear similar to numerous digits in different orientations. One approach to avoid the latter problem is to classify the digit as the one that is voted for most often when examined over all rotations.
To ensure that this process is not biased by the size of the increments by which the image is rotated, various angle increments are tried. As shown in the first row of Table I, this method yields low classification accuracies. One reason for this is that a vote is counted even when the classification network predicts all outputs to be less than 0 (the network is trained to predict +1 when a digit is recognized, and -1 when it is not). The above experiment was repeated with the following modification: a vote was only counted when the maximum output of the classification network was above 0. The result is shown in the second row of Table I. The classification rate improved by more than 10%. Given these base-line performance measures2, we now have quantitative measurements with which to compare the effectiveness of the approach described in this paper.

Table I: Exhaustive search over all possible rotations (number of angle increments tried).

    Exhaustive Search Method                       360 increments      100 increments      50 increments
                                                   (1 degree each)     (3.6 degrees each)  (7.2 degrees each)
    Most frequent vote (over all rotations)        59.5%               66.0%               65.0%
    Most frequent vote, counted only when
    votes are positive (over all rotations)        75.2%               74.5%               74.0%

1. Note that in the empirical comparisons presented in [Le Cun et al., 1995a], convolution networks performed extremely well in the upright digit recognition task. However, due to limited computation resources, we were unable to train these networks, as each takes 14-20 days to train. The network used here was trained in 3 hours, and had approximately a 2.6% misclassification rate on the upright test set. The best networks reported in [Le Cun et al., 1995a] have less than 1% error. It should be noted that the De-Rotation networks trained in this study can easily be used in conjunction with any classification procedure, including convolutional networks.
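The exhaustive-search baseline of Table I can be sketched as follows; `classification_net` and `rotate_fn` are stand-ins of our choosing for the trained classifier and the image-rotation transform:

```python
import numpy as np
from collections import Counter

def vote_over_rotations(image, classification_net, rotate_fn,
                        step_deg=3.6, positive_only=True):
    """Baseline: classify the image at every rotation, return the most frequent vote.

    With positive_only=True, a vote is counted only when the winning output
    exceeds 0 (the network targets +1 for a recognized digit, -1 otherwise),
    matching the second row of Table I.
    """
    votes = Counter()
    for angle in np.arange(0.0, 360.0, step_deg):
        outputs = classification_net(rotate_fn(image, angle))
        winner = int(np.argmax(outputs))
        if not positive_only or outputs[winner] > 0:
            votes[winner] += 1
    return votes.most_common(1)[0][0] if votes else None
```

The per-image cost grows linearly with the number of angle increments, which is the "severe computational expense" noted above; the two-network approach needs only 10 forward passes per image.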
The performance of the procedure used here, with 10 "De-Rotation" networks and a single classification network, is shown in Figure 4. Note that unlike the graph shown in Figure 3, the rotation angle has very little effect on the classification performance.

Figure 4: Performance of the combined De-Rotation network and classification network system proposed in this paper. Note that the performance is largely unaffected by the rotation. The average performance, over all rotations, is 85.0%.

To provide some intuition of how the De-Rotation networks perform, Figure 5 shows examples of how each De-Rotation network transforms each digit. Each De-Rotation network suggests a rotation which makes the digit look as much as possible like the one with which the network was trained. For example, De-Rotation-Network-5 will suggest a rotation that will make the input digit look as much like the digit '5' as possible; see, for instance, De-Rotation-Network-5's effect on the digit '4'.

Figure 5: Digits which have been rotated by the angles specified by each of the De-Rotation networks (Network-0 through Network-9 applied to each of the original digits 0-9). As expected (if the method is working), the digits on the diagonal (upper left to bottom right) appear upright.

2. Another approach is to train a single network to handle both rotation and classification by using rotated digits as inputs, and the digit's classification as the target output. Experiments with this approach yielded results far below the techniques presented here.

As shown in Figure 4, the average classification accuracy is approximately 85.0%. The performance is not as good as with the upright case alone, which had a peak performance of approximately 97.5% (Figure 3). The high level of performance achieved in the upright case is unlikely for rotated digits: if all rotations are admissible, some characters are ambiguous. The problem is that when working correctly, De-Rotation-Network-D will suggest an angle of rotation that will make any input image look as much like the digit D as possible through rotation. In most cases when the input image is not the digit D, the rotation will not cause the image to look like D. However, in some cases, such as those shown in Figure 6 (right), the digit will be transformed enough to cause a classification error. Some of these errors will most likely never be correctable (for example, '6' and '9' in some instances); however, there is hope for correcting some of the others. Figure 6 presents the complete confusion matrix. As can be seen in the examples in Figure 6 (right), the digit '4' can be rotated to appear similar to a '5'. Nonetheless, there often remain distinctive features that allow real '5's to be differentiated from the rotated '4's. However, the classification network is unable to make these distinctions because it was not trained with the appropriate examples. Remember that since the classification network was only trained with the upright digit training set, rotated '4's are never encountered during training. This reflects a fundamental discrepancy in the training/testing procedure: the distribution of images which was used to train the classification network is different from the distribution on which the network is tested. To address this problem, the classification mechanism is modified. Rather than using the single 1-of-10 neural network classifier used previously, 10 individual Detection networks are used.
Each Detection network has a single binary output that signifies whether the input contains the (upright) digit with which the network was trained. Each De-Rotation network is paired with the respective Detection network. The crucial point is that rather than training Detection-Network-D with the original upright images in the training set, each image (whether it is a positive or negative example) is first passed through De-Rotation-Network-D. Although this makes training Detection-Network-D difficult, since all the digits are rotated to appear as much like upright D's as possible by De-Rotation-Network-D, the distribution of training images matches the testing distribution more closely. In use, when a new image is presented, it is passed through the 10 network pairs. Candidate digits are eliminated if the binary output from the detection network does not signal a detection. Preliminary results with this new approach are extremely promising; the classification accuracy increases dramatically, to 93% when averaged over all rotations. This is more than a 50% reduction in error over the previously described approach.

Figure 6: Example errors. (LEFT) Confusion matrix (only entries accounting for 2% or more are filled in, for ease of reading). (RIGHT) Some of the errors made in classification; 3 examples of each error are shown, with the original image, the image rotated to look like the mistaken digit, and the image rotated to look like the correct digit. Row A: '4' mistaken as '5'. Row B: '5' mistaken as '6'. Row C: '7' mistaken as '2'. Row D: '7' mistaken as '6'.
Row E: '8' mistaken as '4'. Row F: '9' mistaken as '5'. Row G: '9' mistaken as '6'.

4 Conclusions and Future Work

This paper has presented results on the difficult problem of rotated digit recognition. First, we presented base-line results with naive approaches, such as exhaustively checking all rotations. These approaches are both slow and have large error rates. Second, we presented results with a novel two-stage approach which is both faster and more effective than the naive approaches. Finally, we presented preliminary results with a new approach that more closely models the training and testing distributions. We have recently applied the techniques presented in this paper to the detection of faces in cluttered scenes. In previous studies, we presented methods for finding all upright frontal faces [Rowley et al., 1998a]. By using the techniques presented here, we were able to detect all frontal faces, including those which were rotated within the image plane [Baluja, 1997][Rowley et al., 1998b]. The methods presented in this paper should also be directly applicable to full-alphabet rotated character recognition. In this paper, we examined each digit individually. A straightforward method to eliminate some of the ambiguities between rotationally similar digits is to use contextual information. For example, if surrounding digits are all rotated by the same amount, this provides strong hints about the rotation of nearby digits. Further, in most real-world cases, we might expect digits to be close to upright; therefore, one method of incorporating this information is to penalize matches which rely on large rotation angles. This paper presented a general way to make template-based recognition rotation invariant. In this study, both the rotation estimation procedures and the recognition templates were implemented with neural networks.
Nonetheless, for classification, any technique which implements a form of templates, such as correlation templates, support vector machines, probabilistic networks, K-Nearest Neighbor, or principal component-based methods, could easily have been employed.

Acknowledgements

The author would like to thank Kaari Aagstad for her reviews of many successive drafts of this paper.

References

Baluja, S. (1997) "Face Detection with In-Plane Rotation: Early Concepts and Preliminary Results," Justsystem Pittsburgh Research Center Technical Report JPRC-TR-97-001.
Guyon, I., Poujaud, I., Personnaz, L., Dreyfus, G., Denker, J. & LeCun, Y. (1989) "Comparing Different Neural Net Architectures for Classifying Handwritten Digits," in IJCNN II, 127-132.
Jain, A. & Yu, B. (1997) "Automatic Text Location in Images and Video Frames," TR: MSU-CPS: TR 97-33.
Le Cun, Y., Jackel, L. D., Bottou, L., Cortes, C., Denker, J., Drucker, H., Guyon, I., Muller, U., Sackinger, E., Simard, P. & Vapnik, V. (1995a) "Learning Algorithms for Classification: A Comparison on Handwritten Digit Recognition," in Neural Networks: The Statistical Mechanics Perspective, Oh, J., Kwon, C. & Cho, S. (Eds.), pp. 261-276.
LeCun, Y., Jackel, L. D., Bottou, L., Brunot, A., Cortes, C., Denker, J. S., Drucker, H., Guyon, I., Muller, U. A., Sackinger, E., Simard, P. & Vapnik, V. (1995b) "Comparison of learning algorithms for handwritten digit recognition," in ICANN, Fogelman, F. & Gallinari, P. (Eds.), 1995, pp. 53-60.
LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W. & Jackel, L. D. (1990) "Handwritten digit recognition with a back-propagation network," Advances in Neural Information Processing Systems 2 (NIPS '89), Touretzky, D. (Ed.), Morgan Kaufmann.
Lee, Y. (1991) "Handwritten Digit Recognition using K-NN, RBF and Backpropagation Neural Networks," Neural Computation, 3, 3.
Pomerleau, D.A. (1993) Neural Network Perception for Mobile Robot Guidance, Kluwer Academic.
Rowley, H., Baluja, S. & Kanade, T.
(1998a) "Neural Network-Based Face Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 20, No. 1, January 1998.
Rowley, H., Baluja, S. & Kanade, T. (1998b) "Rotation Invariant Neural Network-Based Face Detection," to appear in Proceedings of Computer Vision and Pattern Recognition, 1998.
Sato, T., Kanade, T., Hughes, E. & Smith, M. (1998) "Video OCR for Digital News Archives," to appear in IEEE International Workshop on Content-Based Access of Image and Video Databases.
Satoh, S. & Kanade, T. (1997) "Name-It: Association of Face and Name in Video," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1997.
1998
Outcomes of the Equivalence of Adaptive Ridge with Least Absolute Shrinkage

Yves Grandvalet, Stephane Canu
Heudiasyc, UMR CNRS 6599, Universite de Technologie de Compiegne, BP 20.529, 60205 Compiegne cedex, France
Yves.Grandvalet@hds.utc.fr

Abstract

Adaptive Ridge is a special form of Ridge regression, balancing the quadratic penalization on each parameter of the model. It was shown to be equivalent to Lasso (least absolute shrinkage and selection operator), in the sense that both procedures produce the same estimate. Lasso can thus be viewed as a particular quadratic penalizer. From this observation, we derive a fixed point algorithm to compute the Lasso solution. The analogy also provides a new hyper-parameter for tuning the model complexity effectively. We finally present a series of possible extensions of lasso performing sparse regression in kernel smoothing, additive modeling and neural net training.

1 INTRODUCTION

In supervised learning, we have a set of explicative variables x from which we wish to predict a response variable y. To solve this problem, a learning algorithm is used to produce a predictor f(x) from a learning set S_l = {(x_i, y_i)}_{i=1}^l of examples. The goal of prediction may be: 1) to provide an accurate prediction of future responses, accuracy being measured by a user-defined loss function; 2) to quantify the effect of each explicative variable in the response; 3) to better understand the underlying phenomenon. Penalization is extensively used in learning algorithms. It decreases the predictor variability to improve the prediction accuracy. It is also expected to produce models with few non-zero coefficients if interpretation is planned. Ridge regression and Subset Selection are the two main penalization procedures. The former is stable, but does not shrink parameters to zero; the latter gives simple models, but is unstable [1].
These observations motivated the search for new penalization techniques such as Garrotte, Non-Negative Garrotte [1], and Lasso (least absolute shrinkage and selection operator) [10]. Adaptive Ridge was proposed as a means to automatically balance penalization on different coefficients. It was shown to be equivalent to Lasso [4]. Section 2 presents Adaptive Ridge and recalls the equivalence statement. The following sections give some of the main outcomes of this connection. They concern algorithmic issues in section 3, complexity control in section 4, and some possible generalizations of lasso to non-linear regression in section 5.

2 ADAPTIVE RIDGE REGRESSION

For clarity of exposure, the formulae are given here for linear regression with quadratic loss. The predictor is defined as f(x) = beta^T x, with beta = (beta_1, ..., beta_d)^T. Adaptive Ridge is a modification of the Ridge estimate, which is defined by the quadratic constraint sum_{j=1}^d beta_j^2 <= C applied to the parameters. It is usually computed by minimizing the Lagrangian

$$\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{l} \Big( \sum_{j=1}^{d} \beta_j x_{ij} - y_i \Big)^2 + \lambda \sum_{j=1}^{d} \beta_j^2 , \qquad (1)$$

where lambda is the Lagrange multiplier varying with the bound C on the norm of the parameters. When the ordinary least squares (OLS) estimate maximizes likelihood1, the Ridge estimate may be seen as a maximum a posteriori estimate. The Bayes prior distribution is a centered normal distribution, with variance proportional to 1/lambda. This prior distribution treats all covariates similarly. It is not appropriate when we know that all covariates are not equally relevant. The garrotte estimate [1] is based on the OLS estimate beta^0. The standard quadratic constraint is replaced by sum_{j=1}^d beta_j^2 / (beta_j^0)^2 <= C. The coefficients with smaller OLS estimate are thus more heavily penalized. Other modifications are better explained with the prior distribution viewpoint. Mixtures of Gaussians may be used to cluster different sets of covariates.
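For reference, the Lagrangian (1) is minimized by the familiar closed-form Ridge solution beta = (X^T X + lambda I)^{-1} X^T y, where X is the l x d design matrix stacking the x_{ij}. A minimal NumPy sketch of this standard formula (not code from the paper):

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form minimizer of the Ridge Lagrangian (1):
    beta = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

As lam goes to 0 this recovers the OLS estimate; as lam grows, all coefficients shrink toward zero but none reach it exactly, which is the behavior Adaptive Ridge is designed to change.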
Several models have been proposed, with data-dependent clusters [9], or classes defined a priori [7]. The Automatic Relevance Determination model [8] ranks in the latter type. In [4], we propose to use such a mixture, in the form

$$\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{l} \Big( \sum_{j=1}^{d} \beta_j x_{ij} - y_i \Big)^2 + \sum_{j=1}^{d} \lambda_j \beta_j^2 . \qquad (2)$$

Here, each coefficient has its own prior distribution. The priors are centered normal distributions with variances proportional to 1/lambda_j. To avoid the simultaneous estimation of these d hyper-parameters by trial, the constraint

$$\frac{1}{d} \sum_{j=1}^{d} \frac{1}{\lambda_j} = \frac{1}{\lambda} , \qquad \lambda_j > 0 \qquad (3)$$

is applied on (lambda_1, ..., lambda_d)^T, where lambda is a predefined value. This constraint is a link between the d prior distributions: their mean variance is proportional to 1/lambda. The values of lambda_j are automatically2 induced from the sample, hence the qualifier adaptive. Adaptivity refers here to the penalization balance on {beta_j}, not to the tuning of the hyper-parameter lambda.

1. If the {x_i} are independently and identically drawn from some distribution, and some beta* exists such that y_i = beta*^T x_i + eps, where eps is a centered normal random variable, then the empirical cost based on the quadratic loss is proportional to the log-likelihood of the sample. The OLS estimate beta^0 is thus the maximum likelihood estimate of beta*.

2. Adaptive Ridge, as Ridge or Lasso, is not scale invariant, so that the covariates should be normalized to produce sensible estimates.

It was shown [4] that Adaptive Ridge and least absolute value shrinkage are equivalent, in the sense that they yield the same estimate. We remind that the Lasso estimate is defined by

$$\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{l} \Big( \sum_{j=1}^{d} \beta_j x_{ij} - y_i \Big)^2 \quad \text{subject to} \quad \sum_{j=1}^{d} |\beta_j| \le K . \qquad (4)$$

The only difference between the definitions of the Adaptive Ridge and the Lasso estimates is that the Lagrangian form of Adaptive Ridge uses the constraint (sum_{j=1}^d |beta_j|)^2 / d <= K^2.
3 OPTIMIZATION ALGORITHM

Tibshirani [10] proposed to use quadratic programming to find the Lasso solution, with 2d variables (positive and negative parts of beta_j) and 2d + 1 constraints (signs of positive and negative parts of beta_j, plus constraint (4)). Equations (2) and (3) suggest to use a fixed point (FP) algorithm. At each step s, the FP algorithm estimates the optimal parameters lambda_j^(s) of the Bayes prior based on the estimate beta^(s-1), and then maximizes the posterior to compute the current estimate beta^(s). As the parameterization (beta, lambda) may lead to divergent solutions, we define new variables gamma_j and c_j:

$$\beta_j = c_j \gamma_j , \qquad c_j = \sqrt{\lambda / \lambda_j} \qquad \text{for } j = 1, \ldots, d . \qquad (5)$$

The FP algorithm updates alternatively c and gamma as follows:

$$c_j^{(s)2} = \frac{d \, \gamma_j^{(s-1)2}}{\sum_{k=1}^{d} \gamma_k^{(s-1)2}} , \qquad \gamma^{(s)} = \big( \operatorname{diag}(c^{(s)}) X^T X \operatorname{diag}(c^{(s)}) + \lambda I \big)^{-1} \operatorname{diag}(c^{(s)}) X^T y , \qquad (6)$$

where X_{ij} = x_{ij}, I is the identity matrix, and diag(c) is the square matrix with the vector c on its diagonal. The algorithm can be initialized by the Ridge or the OLS estimate. In the latter case, beta^(1) is the garrotte estimate. Practically, if |gamma_j^(s-1)| is small compared to numerical accuracy, then c_j^(s) is set to zero. In turn, gamma_j^(s) is zero, and the system to be solved in the second step to determine gamma can be reduced to the other variables. If c_j is set to zero at any time during the optimization process, the final estimate beta_j will be zero. The computations are simplified, but it is not clear whether global convergence can be obtained with this algorithm. It is easy to show the convergence towards a local minimum, but we did not find general conditions ensuring global convergence. If these conditions exist, they rely on initial conditions. Finally, we stress that the optimality conditions for c (or, in a less rigorous sense, for lambda) do not depend on the first part of the cost minimized in (2). In consequence, the equivalence between Adaptive Ridge and lasso holds for any model or loss function.
The FP algorithm can be applied to these other problems without modifying the first step.

4 COMPLEXITY TUNING

The Adaptive Ridge estimate depends on the learning set S_l and on the hyper-parameter lambda. When the estimate is defined by (2) and (3), the analogy with Ridge suggests lambda as the "natural" hyper-parameter for tuning the complexity of the regressor. As lambda goes to zero, beta approaches the OLS estimate beta^0, and the number of effective parameters is d. As lambda goes to infinity, beta goes to zero and the number of effective parameters is zero. When the estimate is defined by (4), there is no obvious choice for the hyper-parameter controlling complexity. Tibshirani [10] proposed to use nu = sum_{j=1}^d |beta_j| / sum_{j=1}^d |beta_j^0|. As nu goes to one, beta approaches beta^0; as nu goes to zero, beta goes to zero. The weakness of nu is that it is explicitly defined from the OLS estimate. As a result, it is variable when the design matrix is badly conditioned. The estimation of nu is thus harder, and the overall procedure loses in stability. This is illustrated by an experiment following Breiman's benchmark [1] with 30 highly correlated predictors, E(x_j x_k) = rho^|j-k|, with rho = 1 - 10^-3. We generate 1000 i.i.d. samples of size l = 60. For each sample s_l^k, the modeling error (ME) is computed for several values of nu and lambda. We select nu^k and lambda^k achieving the lowest ME. For one sample, there is a one-to-one mapping from nu to lambda; thus the ME is the same for nu^k and lambda^k. Then, we compute nu* and lambda* achieving the best average ME on the 1000 samples. As nu^k and lambda^k achieve the lowest ME for s_l^k, the ME for s_l^k is higher or equal for nu* and lambda*. Due to the wide spread of {nu^k}, the average loss encountered is twice as large for nu* as for lambda*: 1/1000 sum_{k=1}^{1000} (ME(s_l^k, nu*) - ME(s_l^k, nu^k)) = 4.6 x 10^-2, and 1/1000 sum_{k=1}^{1000} (ME(s_l^k, lambda*) - ME(s_l^k, lambda^k)) = 2.3 x 10^-2. The average modeling errors are ME(nu*) = 1.9 x 10^-1 and ME(lambda*) = 1.7 x 10^-1.
The estimates of prediction error, such as leave-one-out cross-validation, tend to be variable. Hence, complexity tuning is often based on the minimization of some estimate of the mean prediction error (e.g. bootstrap, K-fold cross-validation). Our experiment supports that, regarding mean prediction error, the optimal lambda performs better than the optimal nu. Thus, lambda is the best candidate for complexity tuning. Although lambda and nu are respectively the control parameters of the FP and QP algorithms, the preceding statement does not imply that we should use the FP algorithm. Once the solution beta is known, nu or lambda are easily computed. The choice of one hyper-parameter is not linked to the choice of the optimization algorithm.

5 APPLICATIONS

Adaptive Ridge may be applied to a variety of regression techniques. They include kernel smoothing, additive and neural net modeling.

5.1 KERNEL SMOOTHING

Soft-thresholding was proved to be efficient in wavelet functional estimation [2]. Kernel smoothers [5] can also benefit from the sparse representation given by soft-thresholding methods. For these regressors, f(x) = sum_{i=1}^l beta_i K(x, x_i) + beta_0, there are as many covariates as pairs in the sample. The quadratic procedure of Lasso with 2l + 1 constraints becomes computationally expensive, but the FP algorithm of Adaptive Ridge is reasonably fast to converge. An example of least squares fitting is shown in Fig. 1 for the motorcycle dataset [5]. On this example, the hyper-parameter lambda has been estimated by .632 bootstrap (with 50 bootstrap replicates) for Ridge and Adaptive Ridge regressions. For tuning lambda, it is not necessary to determine the coefficients beta with high accuracy. Hence, compared to Ridge regression, the overall amount of computation required to get the Adaptive Ridge estimate was about six times more important.
For evaluation, Adaptive Ridge is ten times faster than Ridge regression, as the final fit uses only a few kernels (11 out of 133).

Figure 1: Adaptive Ridge (AR) and Ridge (R) in kernel smoothing on the motorcycle data. The + are data points, and the dots are the prototypes corresponding to the kernels with non-zero coefficients in AR. The Gaussian kernel used is represented dotted in the lower right-hand corner.

Girosi [3] showed an equivalence between a version of least absolute shrinkage applied to kernel smoothing and Support Vector Machines (SVM). However, Adaptive Ridge, as applied here, is not equivalent to SVM, as the cost minimized is different. The fit and prototypes are thus different from the fit and support vectors that would be obtained from SVM.

5.2 ADDITIVE MODELS

Additive models [6] are sums of univariate functions, f(x) = sum_{j=1}^d f_j(x_j). In the nonparametric setting, the {f_j} are smooth but unspecified functions. Additive models are easily represented and thus interpretable, but they require the choice of the relevant covariates to be included in the model, and of the smoothness of each f_j. In the form presented in the two previous sections, Adaptive Ridge regression penalizes each individual coefficient differently, but it is easily extended to the pooled penalization of coefficients. Adaptive Ridge may thus be used as an alternative to BRUTO [6] to balance the penalization parameters on each f_j. A classical choice for f_j is cubic spline smoothing. Let B_j denote the l x (l + 2) matrix of the unconstrained B-spline basis, evaluated at x_{ij}. Let Omega_j be the (l + 2) x (l + 2) matrix corresponding to the penalization of the second derivative of f_j. The coefficients of f_j in the unconstrained B-spline basis are noted beta_j. The "natural" extension of Adaptive Ridge is to minimize

$$\Big\| \sum_{j=1}^{d} B_j \beta_j - y \Big\|^2 + \sum_{j=1}^{d} \lambda_j \, \beta_j^T \Omega_j \beta_j , \qquad (7)$$

subject to constraint (3).
This problem is easily shown to have the same solution as the minimization of

$$\Big\| \sum_{j=1}^{d} B_j \beta_j - y \Big\|^2 + \lambda \Big( \sum_{j=1}^{d} \sqrt{\beta_j^T \Omega_j \beta_j} \Big)^2 . \qquad (8)$$

Note that if the cost (8) is optimized with respect to a single covariate, the solution is a usual smoothing spline regression (with quadratic penalization). In the multidimensional case, alpha_j^2 = beta_j^T Omega_j beta_j = integral of {f_j''(t)}^2 dt may be used to summarize the non-linearity of f_j; thus |alpha_j| can be interpreted as a relevance index operating besides linear dependence of feature j. The penalizer in (8) is a least absolute shrinkage operator applied to alpha_j. Hence, formula (8) may be interpreted as "quadratic penalization within, and soft-thresholding between, covariates". The FP algorithm of section 3 is easily modified to minimize (8), and backfitting may be used to solve the second step of this procedure. A simulated example in dimension five is shown in Fig. 2. The fitted univariate functions are plotted for five values of lambda. There is no dependency between the explained variable and the last covariate. The other covariates affect the response, but the dependency on the first features is smoother, hence easier to capture and more relevant for the spline smoother. For a small value of lambda, the univariate functions are unsmooth, and the additive model is interpolating the data. For lambda = 10^-4, the dependencies are well estimated on all covariates. As lambda increases, the covariates with higher coordinate number are more heavily penalized, and the corresponding f_j tend to be linear.

Figure 2: Adaptive Ridge in additive modeling on simulated data. The true model is y = x_1 + cos(pi x_2) + cos(2 pi x_3) + cos(3 pi x_4) + eps. The covariates are independently drawn from a uniform distribution on [-1, 1] and eps is a Gaussian noise of standard deviation sigma = 0.3.
The solid curves are the estimated univariate functions for different values of lambda, and the + are partial residuals. Linear trends are not penalized in cubic spline smoothing. Thus, when after convergence beta_j^T Omega_j beta_j = 0, the jth covariate is not eliminated. This can be corrected by applying Adaptive Ridge a second time. To test whether a significant linear trend can be detected, a linear (penalized) model may be used for f_j, the remaining f_k, k != j, being cubic splines.

5.3 MLP FITTING

The generalization to the pooled penalization of coefficients can also be applied to Multi-Layered Perceptrons to control the complexity of the fit. If weights are penalized individually, Adaptive Ridge is equivalent to the Lasso. If weights are pooled by layer, Adaptive Ridge automatically tunes the amount of penalization on each layer, thus avoiding the multiple hyper-parameter tuning necessary in weight decay [7].

Figure 3: Groups of weights for two examples of Adaptive Ridge in MLP fitting. Left: hidden node soft-thresholding. Right: input penalization and selection, and individual smoothing coefficient for each output unit.

Two other interesting configurations are shown in Fig. 3. If weights are pooled by the incoming and outgoing weights of a unit, node penalization/pruning is performed. The weight groups may also gather the outgoing weights from each input unit, or the incoming weights from each output unit (one set per input plus one per output). The goal here is to penalize/select the input variables according to their relevance, and each output variable according to the smoothness of the corresponding mapping. This configuration proves itself especially useful in time series prediction, where the number of inputs to be fed into the network is not known in advance. There are also more complex choices of pooling, such as the one proposed to encourage additive modeling in Automatic Relevance Determination [8].
References

[1] L. Breiman. Heuristics of instability and stabilization in model selection. The Annals of Statistics, 24(6):2350-2383, 1996.
[2] D.L. Donoho and I.M. Johnstone. Minimax estimation via wavelet shrinkage. Ann. Statist., 26(3):879-921, 1998.
[3] F. Girosi. An equivalence between sparse approximation and support vector machines. Technical Report 1606, M.I.T. AI Laboratory, Cambridge, MA, 1997.
[4] Y. Grandvalet. Least absolute shrinkage is equivalent to quadratic penalization. In L. Niklasson, M. Boden, and T. Ziemke, editors, ICANN'98, volume 1 of Perspectives in Neural Computing, pages 201-206. Springer, 1998.
[5] W. Härdle. Applied Nonparametric Regression, volume 19 of Economic Society Monographs. Cambridge University Press, New York, 1990.
[6] T.J. Hastie and R.J. Tibshirani. Generalized Additive Models, volume 43 of Monographs on Statistics and Applied Probability. Chapman & Hall, New York, 1990.
[7] D.J.C. MacKay. A practical Bayesian framework for backprop networks. Neural Computation, 4(3):448-472, 1992.
[8] R.M. Neal. Bayesian Learning for Neural Networks. Lecture Notes in Statistics. Springer, New York, 1996.
[9] S.J. Nowlan and G.E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4):473-493, 1992.
[10] R.J. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, B, 58(1):267-288, 1995.
1998
Bayesian modeling of human concept learning

Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology, Cambridge, MA 02139
jbt@psyche.mit.edu

Abstract

I consider the problem of learning concepts from small numbers of positive examples, a feat which humans perform routinely but which computers are rarely capable of. Bridging machine learning and cognitive science perspectives, I present both theoretical analysis and an empirical study with human subjects for the simple task of learning concepts corresponding to axis-aligned rectangles in a multidimensional feature space. Existing learning models, when applied to this task, cannot explain how subjects generalize from only a few examples of the concept. I propose a principled Bayesian model based on the assumption that the examples are a random sample from the concept to be learned. The model gives precise fits to human behavior on this simple task and provides qualitative insights into more complex, realistic cases of concept learning.

1 Introduction

The ability to learn concepts from examples is one of the core capacities of human cognition. From a computational point of view, human concept learning is remarkable for the fact that very successful generalizations are often produced after experience with only a small number of positive examples of a concept (Feldman, 1997). While negative examples are no doubt useful to human learners in refining the boundaries of concepts, they are not necessary in order to make reasonable generalizations of word meanings, perceptual categories, and other natural concepts. In contrast, most machine learning algorithms require examples of both positive and negative instances of a concept in order to generalize at all, and many examples of both kinds in order to generalize successfully (Mitchell, 1997).
This paper attempts to close the gap between human and machine concept learning by developing a rigorous theory for concept learning from limited positive evidence and testing it against real behavioral data. I focus on a simple abstract task of interest to both cognitive science and machine learning: learning axis-parallel rectangles in R^m. We assume that each object x in our world can be described by its values (x_1, ..., x_m) on m real-valued observable dimensions, and that each concept C to be learned corresponds to a conjunction of independent intervals (min_i(C) \le x_i \le max_i(C)) along each dimension i.

Figure 1: (a) A rectangle concept C. (b-c) The size principle in Bayesian concept learning: of the many hypotheses consistent with the observed positive examples, the smallest rapidly become more likely (indicated by darker lines) as more examples are observed.

For example, the objects might be people, the dimensions might be "cholesterol level" and "insulin level", and the concept might be "healthy levels". Suppose that "healthy levels" applies to any individual whose cholesterol and insulin levels are each greater than some minimum healthy level and less than some maximum healthy level. Then the concept "healthy levels" corresponds to a rectangle in the two-dimensional cholesterol/insulin space. The problem of generalization in this setting is to infer, given a set of positive (+) and negative (-) examples of a concept C, which other points belong inside the rectangle corresponding to C (Fig. 1a). This paper considers the question most relevant for cognitive modeling: how to generalize from just a few positive examples? In machine learning, the problem of learning rectangles is a common textbook example used to illustrate models of concept learning (Mitchell, 1997).
It is also the focus of state-of-the-art theoretical work and applications (Dietterich et al., 1997). The rectangle learning task is not well known in cognitive psychology, but many studies have investigated human learning in similar tasks using simple concepts defined over two perceptually separable dimensions such as size and color (Shepard, 1987). Such impoverished tasks are worth our attention because they isolate the essential inductive challenge of concept learning in a form that is analytically tractable and amenable to empirical study in human subjects.

This paper consists of two main contributions. I first present a new theoretical analysis of the rectangle learning problem based on Bayesian inference and contrast this model's predictions with standard learning frameworks (Section 2). I then describe an experiment with human subjects on the rectangle task and show that, of the models considered, the Bayesian approach provides by far the best description of how people actually generalize on this task when given only limited positive evidence (Section 3). These results suggest an explanation for some aspects of the ubiquitous human ability to learn concepts from just a few positive examples.

2 Theoretical analysis

Computational approaches to concept learning. Depending on how they model a concept, different approaches to concept learning differ in their ability to generalize meaningfully from only limited positive evidence. Discriminative approaches embody no explicit model of a concept, but only a procedure for discriminating category members from members of mutually exclusive contrast categories. Most backprop-style neural networks and exemplar-based techniques (e.g. K-nearest neighbor classification) fall into this group, along with hybrid models like ALCOVE (Kruschke, 1992). These approaches are ruled out by definition; they cannot learn to discriminate positive and negative instances if they have seen only positive examples.
Distributional approaches model a concept as a probability distribution over some feature space and classify new instances x as members of C if their estimated probability p(x|C) exceeds a threshold \theta. This group includes "novelty detection" techniques based on Bayesian nets (Jaakkola et al., 1996) and, loosely, autoencoder networks (Japkowicz et al., 1995). While p(x|C) can be estimated from only positive examples, novelty detection also requires negative examples for principled generalization, in order to set an appropriate threshold \theta, which may vary over many orders of magnitude for different concepts. For learning from positive evidence only, our best hope is algorithms that treat a new concept C as an unknown subset of the universe of objects and decide how to generalize C by finding "good" subsets in a hypothesis space H of possible concepts.

The Bayesian framework. For this task, the natural hypothesis space H corresponds to all rectangles in the plane. The central challenge in generalizing using the subset approach is that any small set of examples will typically be consistent with many hypotheses (Fig. 1b). This problem is not unique to learning rectangles, but is a universal dilemma when trying to generalize concepts from only limited positive data. The Bayesian solution is to embed the hypothesis space in a probabilistic model of our observations, which allows us to weight different consistent hypotheses as more or less likely to be the true concept based on the particular examples observed. Specifically, we assume that the examples are generated by random sampling from the true concept. This leads to the size principle: smaller hypotheses become more likely than larger hypotheses (Fig. 1b - darker rectangles are more likely), and they become exponentially more likely as the number of consistent examples increases (Fig. 1c).
The size principle is the key to understanding how we can learn concepts from only a few positive examples.

Formal treatment. We observe n positive examples X = {x^(1), ..., x^(n)} of concept C and want to compute the generalization function p(y \in C|X), i.e. the probability that some new object y belongs to C given the observations X. Let each rectangle hypothesis h be denoted by a quadruple (l_1, l_2, s_1, s_2), where l_i \in [-\infty, \infty] is the location of h's lower-left corner and s_i \in [0, \infty] is the size of h along dimension i. Our probabilistic model consists of a prior density p(h) and a likelihood function p(X|h) for each hypothesis h \in H. The likelihood is determined by our assumption of randomly sampled positive examples. In the simplest case, each example in X is assumed to be independently sampled from a uniform density over the concept C. For n examples we then have:

p(X|h) = 1/|h|^n if all examples fall in h, and 0 otherwise,   (1)

where |h| denotes the size of h. For rectangle (l_1, l_2, s_1, s_2), |h| is simply s_1 s_2. Note that because each hypothesis must distribute one unit mass of likelihood over its volume for each example (\int_{x \in h} p(x|h) dx = 1), the probability density for smaller consistent hypotheses is greater than for larger hypotheses, and exponentially greater as a function of n. Figs. 1b,c illustrate this size principle for scoring hypotheses (darker rectangles are more likely).

The appropriate choice of p(h) depends on our background knowledge. If we have no a priori reason to prefer any rectangle hypothesis over any other, we can choose the scale- and location-invariant uninformative prior, p(h) = p(l_1, l_2, s_1, s_2) = 1/(s_1 s_2). In any realistic application, however, we will have some prior information. For example, we may know the expected size \sigma_i of rectangle concepts along dimension i in our domain, and then use the associated maximum entropy prior p(l_1, l_2, s_1, s_2) = exp{-(s_1/\sigma_1 + s_2/\sigma_2)}.
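A minimal sketch of the size principle in Eq. (1) (our own code, with made-up example points; not from the paper): a rectangle hypothesis h consistent with n examples receives likelihood 1/|h|^n, so smaller consistent rectangles become exponentially more likely as n grows.

```python
# Size-principle likelihood for axis-parallel rectangle hypotheses,
# h = (l1, l2, s1, s2): p(X|h) = 1/|h|^n if every example lies in h, else 0.

def likelihood(h, X):
    l1, l2, s1, s2 = h
    if all(l1 <= x <= l1 + s1 and l2 <= y <= l2 + s2 for x, y in X):
        return (1.0 / (s1 * s2)) ** len(X)
    return 0.0

X = [(1.0, 1.0), (2.0, 3.0), (1.5, 2.0)]
tight = (1.0, 1.0, 1.0, 2.0)   # the smallest consistent rectangle, |h| = 2
loose = (0.0, 0.0, 4.0, 4.0)   # a larger consistent rectangle, |h| = 16
# the tight hypothesis is (16/2)^3 = 512 times more likely after n = 3 examples
print(likelihood(tight, X) / likelihood(loose, X))  # 512.0
```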
The generalization function p(y \in C|X) is computed by integrating the predictions of all hypotheses, weighted by their posterior probabilities p(h|X):

p(y \in C|X) = \int_{h \in H} p(y \in C|h) p(h|X) dh,   (2)

where from Bayes' theorem p(h|X) \propto p(X|h) p(h) (normalized such that \int_{h \in H} p(h|X) dh = 1), and p(y \in C|h) = 1 if y \in h and 0 otherwise. Under the uninformative prior, this becomes:

(3)

Here r_i is the maximum distance between the examples in X along dimension i, and d_i equals 0 if y falls inside the range of values spanned by X along dimension i, and otherwise equals the distance from y to the nearest example in X along dimension i. Under the expected-size prior, p(y \in C|X) has no closed form solution valid for all n. However, except for very small values of n (e.g. < 3) and r_i (e.g. < \sigma_i/10), the following approximation holds to within 10% (and usually much less) error:

(4)

Fig. 2 (left column) illustrates the Bayesian learner's contours of equal probability of generalization (at p = 0.1 intervals), for different values of n and r_i. The bold curve corresponds to p(y \in C|X) = 0.5, a natural boundary for generalizing the concept. Integrating over all hypotheses weighted by their size-based probabilities yields a broad gradient of generalization for small n (row 1) that rapidly sharpens up to the smallest consistent hypothesis as n increases (rows 2-3), and that extends further along the dimension with a broader range r_i of observations. This figure reflects an expected-size prior with \sigma_1 = \sigma_2 = axis width/2; using an uninformative prior produces a qualitatively similar plot.

Related work: MIN and Weak Bayes. Two existing subset approaches to concept learning can be seen as variants of this Bayesian framework. The classic MIN algorithm generalizes no further than the smallest hypothesis in H that includes all the positive examples (Bruner et al., 1956; Feldman, 1997).
MIN is a PAC learning algorithm for the rectangles task, and also corresponds to the maximum likelihood estimate in the Bayesian framework (Mitchell, 1997). However, while it converges to the true concept as n becomes large (Fig. 2, row 3), it appears extremely conservative in generalizing from very limited data (Fig. 2, row 1). An earlier approach to Bayesian concept learning, developed independently in cognitive psychology (Shepard, 1987) and machine learning (Haussler et al., 1994; Mitchell, 1997), was an important inspiration for the framework of this paper. I call the earlier approach weak Bayes, because it embodies a different generative model that leads to a much weaker likelihood function than Eq. 1. While Eq. 1 came from assuming examples sampled randomly from the true concept, weak Bayes assumes the examples are generated by an arbitrary process independent of the true concept. As a result, the size principle for scoring hypotheses does not apply; all hypotheses consistent with the examples receive a likelihood of 1, instead of the factor of 1/|h|^n in Eq. 1. The extent of generalization is then determined solely by the prior; for example, under the expected-size prior,

(5)

Weak Bayes, unlike MIN, generalizes reasonably from just a few examples (Fig. 2, row 1). However, because Eq. 5 is independent of n or r_i, weak Bayes does not converge to the true concept as the number of examples increases (Fig. 2, rows 2-3), nor does it generalize further along axes of greater variability. While weak Bayes is a natural model when the examples really are generated independently of the concept (e.g. when the learner himself or a random process chooses objects to be labeled "positive" or "negative" by a teacher), it is clearly limited as a model of learning from deliberately provided positive examples. In sum, previous subset approaches each appear to capture a different aspect of how humans generalize concepts from positive examples.
The broad similarity gradients that emerge from weak Bayes seem most applicable when only a few broadly spaced examples have been observed (Fig. 2, row 1), while the sharp boundaries of the MIN rule appear more reasonable as the number of examples increases or their range narrows (Fig. 2, rows 2-3). In contrast, the Bayesian framework guided by the size principle automatically interpolates between these two regimes of similarity-based and rule-based generalization, offering the best hope for a complete model of human concept learning.

3 Experimental data from human subjects

This section presents empirical evidence that our Bayesian model - but neither MIN nor weak Bayes - can explain human behavior on the simple rectangle learning task. Subjects were given the task of guessing 2-dimensional rectangular concepts from positive examples only, under the cover story of learning about the range of healthy levels of insulin and cholesterol, as described in Section 1. On each trial of the experiment, several dots appeared on a blank computer screen. Subjects were told that these dots were randomly chosen examples from some arbitrary rectangle of "healthy levels," and their job was to guess that rectangle as nearly as possible by clicking on-screen with the mouse. The dots were in fact randomly generated on each trial, subject to the constraints of three independent variables that were systematically varied across trials in a (6 x 6 x 6) factorial design. The three independent variables were the horizontal range spanned by the dots (.25, .5, 1, 2, 4, 8 units in a 24-unit-wide window), vertical range spanned by the dots (same), and number of dots (2, 3, 4, 6, 10, 50). Subjects thus completed 216 trials in random order. To ensure that subjects understood the task, they first completed 24 practice trials in which they were shown, after entering their guess, the "true" rectangle that the dots were drawn from.^1
The data from 6 subjects is shown in Fig. 3a, averaged across subjects and across the two directions (horizontal and vertical). The extent d of subjects' rectangles beyond r, the range spanned by the observed examples, is plotted as a function of r and n, the number of examples. Two patterns of generalization are apparent. First, d increases monotonically with r and decreases with n. Second, the rate of increase of d as a function of r is much slower for larger values of n. Fig. 3b shows that neither MIN nor weak Bayes can explain these patterns. MIN always predicts zero generalization beyond the examples - a horizontal line at d = 0 - for all values of r and n. The predictions of weak Bayes are also independent of r and n: d = \sigma log 2, assuming subjects give the tightest rectangle enclosing all points y with p(y \in C|X) > 0.5. Under the same assumption, Figs. 3c,d show our Bayesian model's predicted bounds on generalization using uninformative and expected-size priors, respectively. Both versions of the model capture the qualitative dependence of d on r and n, confirming the importance of the size principle in guiding generalization independent of the choice of prior. However, the uninformative prior misses the nonlinear dependence on r for small n, because it assumes an ideal scale invariance that clearly does not hold in this experiment (due to the fixed size of the computer window in which the rectangles appeared). In contrast, the expected-size prior naturally embodies prior knowledge about typical scale in its one free parameter \sigma. A reasonable value of \sigma = 5 units (out of the 24-unit-wide window) yields an excellent fit to subjects' average generalization behavior on this task.
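The MIN prediction referenced above (zero generalization beyond the examples, d = 0 for all r and n) can be sketched in a few lines; this is our own illustrative code with made-up points, not the paper's:

```python
# MIN rule: generalize no further than the smallest axis-parallel rectangle
# (l1, l2, s1, s2) that contains all of the positive examples.

def min_rule(X):
    xs = [x for x, _ in X]
    ys = [y for _, y in X]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

dots = [(1.0, 1.0), (2.0, 3.0), (1.5, 2.0)]
print(min_rule(dots))  # (1.0, 1.0, 1.0, 2.0): the tightest enclosing rectangle
```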
4 Conclusions

In developing a model of concept learning that is at once computationally principled and able to fit human behavior precisely, I hope to have shed some light on how people are able to infer the correct extent of a concept from only a few positive examples. The Bayesian model has two key components: (1) a generalization function that results from integrating the predictions of all hypotheses weighted by their posterior probability; (2) the assumption that examples are sampled from the concept to be learned, and not independently of the concept as previous weak Bayes models have assumed. Integrating predictions over the whole hypothesis space explains why either broad gradients of generalization (Fig. 2, row 1) or sharp, rule-based generalization (Fig. 2, row 3) may emerge, depending on how peaked the posterior is. Assuming examples drawn randomly from the concept explains why learners do not weight all consistent hypotheses equally, but instead weight more specific hypotheses higher than more general ones by a factor that increases exponentially with the number of examples observed (the size principle). This work is being extended in a number of directions. Negative instances, when encountered, are easily accommodated by assigning zero likelihood to any hypotheses containing them. The Bayesian formulation applies not only to learning rectangles, but to learning concepts in any measurable hypothesis space - wherever the size principle for scoring hypotheses may be applied. In Tenenbaum (1999), I show that the same principles enable learning number concepts and words for kinds of objects from only a few positive examples.^2

^1 Because dots were drawn randomly, the "true" rectangles that subjects saw during practice were quite variable and were rarely the "correct" response according to any theory considered here. Thus it is unlikely that this short practice was responsible for any consistent trends in subjects' behavior.
I also show how the size principle supports much more powerful inferences than this short paper could demonstrate: automatically detecting incorrectly labeled examples, selecting relevant features, and determining the complexity of the hypothesis space. Such inferences are likely to be necessary for learning in the complex natural settings we are ultimately interested in.

Acknowledgments

Thanks to M. Bernstein, W. Freeman, S. Ghaznavi, W. Richards, R. Shepard, and Y. Weiss for helpful discussions. The author was a Howard Hughes Medical Institute Predoctoral Fellow.

References

Bruner, J. A., Goodnow, J. S., & Austin, G. J. (1956). A study of thinking. New York: Wiley.
Dietterich, T., Lathrop, R., & Lozano-Perez, T. (1997). Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence 89(1-2), 31-71.
Feldman, J. (1997). The structure of perceptual categories. J. Math. Psych. 41, 145-170.
Haussler, D., Kearns, M., & Schapire, R. (1994). Bounds on the sample complexity of Bayesian learning using information theory and the VC-dimension. Machine Learning 14, 83-113.
Jaakkola, T., Saul, L., & Jordan, M. (1996). Fast learning by bounding likelihoods in sigmoid type belief networks. Advances in Neural Information Processing Systems 8.
Japkowicz, N., Myers, C., & Gluck, M. (1995). A novelty detection approach to classification. Proceedings of the 14th International Joint Conference on Artificial Intelligence.
Kruschke, J. (1992). ALCOVE: An exemplar-based connectionist model of category learning. Psych. Rev. 99, 22-44.
Mitchell, T. (1997). Machine Learning. McGraw-Hill.
Muggleton, S. (preprint). Learning from positive data. Submitted to Machine Learning.
Shepard, R. (1987). Towards a universal law of generalization for psychological science. Science 237, 1317-1323.
Tenenbaum, J. B. (1999). A Bayesian Framework for Concept Learning. Ph.D. Thesis, MIT Department of Brain and Cognitive Sciences.
^2 In the framework of inductive logic programming, Muggleton (preprint) has independently proposed that similar principles may allow linguistic grammars to be learned from positive data only.

Figure 2: Performance of three concept learning algorithms on the rectangle task (columns: Bayes, MIN, weak Bayes; rows include n = 6 and n = 12).

Figure 3: Data from human subjects and model predictions for the rectangle task. (a) Average data from 6 subjects (extent of generalization d vs. r, the range spanned by n examples); (b) MIN and weak Bayes models; (c) Bayesian model (uninformative prior); (d) Bayesian model (expected-size prior); curves for n = 2, 3, 4, 6, 10, 50.

PART II

NEUROSCIENCE
1998
Maximum Conditional Likelihood via Bound Maximization and the CEM Algorithm

Tony Jebara and Alex Pentland
Vision and Modeling, MIT Media Laboratory, Cambridge MA
http://www.media.mit.edu/~jebara
{jebara,sandy}@media.mit.edu

Abstract

We present the CEM (Conditional Expectation Maximization) algorithm as an extension of the EM (Expectation Maximization) algorithm to conditional density estimation under missing data. A bounding and maximization process is given to specifically optimize conditional likelihood instead of the usual joint likelihood. We apply the method to conditioned mixture models and use bounding techniques to derive the model's update rules. Monotonic convergence, computational efficiency and regression results superior to EM are demonstrated.

1 Introduction

Conditional densities have played an important role in statistics and their merits over joint density models have been debated. Advantages in feature selection, robustness and limited resource allocation have been studied. Ultimately, tasks such as regression and classification reduce to the evaluation of a conditional density. However, popularity of maximum joint likelihood and EM techniques remains strong in part due to their elegance and convergence properties. Thus, many conditional problems are solved by first estimating joint models then conditioning them. This results in concise solutions such as the Nadaraya-Watson estimator [2], Xu's mixture of experts [7], and Amari's em-neural networks [1]. However, direct conditional density approaches [2, 4] can offer solutions with higher conditional likelihood on test data than their joint counterparts.

Figure 1: Average joint (x, y) vs. conditional (y|x) likelihood visualization. (a) L_a = -4.2, L_a^c = -2.4; (b) L_b = -5.2, L_b^c = -1.8.

Popat [6] describes a simple visualization example where 4 clusters must be fit with 2 Gaussian models as in Figure 1.
Here, the model in (a) has a superior joint likelihood (L_a > L_b) and hence a better p(x, y) solution. However, when the models are conditioned to estimate p(y|x), model (b) is superior (L_b^c > L_a^c). Model (a) yields a poor unimodal conditional density in y and (b) yields a bi-modal conditional density. It is therefore of interest to directly optimize conditional models using conditional likelihood. We introduce the CEM (Conditional Expectation Maximization) algorithm for this purpose and apply it to the case of Gaussian mixture models.

2 EM and Conditional Likelihood

For joint densities, the tried and true EM algorithm [3] maximizes joint likelihood over data. However, EM is not as useful when applied to conditional density estimation and maximum conditional likelihood problems. Here, one typically resorts to other local optimization techniques such as gradient descent or second-order Hessian methods [2]. We therefore introduce CEM, a variant of EM, which targets conditional likelihood while maintaining desirable convergence properties. The CEM algorithm operates by directly bounding and decoupling conditional likelihood and simplifies M-step calculations. In EM, a complex density optimization is broken down into a two-step iteration using the notion of missing data. The unknown data components are estimated via the E-step and a simplified maximization over complete data is done in the M-step. In more practical terms, EM is a bound maximization: the E-step finds a lower bound for the likelihood and the M-step maximizes the bound.

p(x_i, y_i|\Theta) = \sum_{m=1}^M p(m, x_i, y_i|\Theta)   (1)

Consider a complex joint density p(x_i, y_i|\Theta) which is best described by a discrete (or continuous) summation of simpler models (Equation 1). Summation is over the 'missing components' m.

\Delta l = \sum_{i=1}^N log p(x_i, y_i|\Theta^t) - log p(x_i, y_i|\Theta^{t-1})
        \ge \sum_{i=1}^N \sum_{m=1}^M h_{im} log [ p(m, x_i, y_i|\Theta^t) / p(m, x_i, y_i|\Theta^{t-1}) ],
        where h_{im} = p(m, x_i, y_i|\Theta^{t-1}) / \sum_{n=1}^M p(n, x_i, y_i|\Theta^{t-1})   (2)

By appealing to Jensen's inequality, EM obtains a lower bound for the incremental log-likelihood over a data set (Equation 2). Jensen's inequality bounds the logarithm of the sum and the result is that the logarithm is applied to each simple model p(m, x_i, y_i|\Theta) individually. It then becomes straightforward to compute the derivatives with respect to \Theta and set to zero for maximization (M-step).

p(y_i|x_i, \Theta) = \sum_{m=1}^M p(m, y_i|x_i, \Theta) = \sum_{m=1}^M p(m, x_i, y_i|\Theta) / \sum_{m=1}^M p(m, x_i|\Theta)   (3)

However, the elegance of EM is compromised when we consider a conditioned density as in Equation 3. The corresponding incremental conditional log-likelihood, \Delta l^c, is shown in Equation 4.

\Delta l^c = \sum_{i=1}^N log p(y_i|x_i, \Theta^t) - log p(y_i|x_i, \Theta^{t-1})
          = \sum_{i=1}^N log [ \sum_{m=1}^M p(m, x_i, y_i|\Theta^t) / \sum_{m=1}^M p(m, x_i, y_i|\Theta^{t-1}) ] - log [ \sum_{n=1}^M p(n, x_i|\Theta^t) / \sum_{n=1}^M p(n, x_i|\Theta^{t-1}) ]   (4)

The above is a difference between a ratio of joints and a ratio of marginals. If Jensen's inequality is applied to the second term in Equation 4 it yields an upper bound since the term is subtracted (this would compromise convergence). Thus, only the first ratio can be lower bounded with Jensen (Equation 5).

\Delta l^c \ge \sum_{i=1}^N [ \sum_{m=1}^M h_{im} log ( p(m, x_i, y_i|\Theta^t) / p(m, x_i, y_i|\Theta^{t-1}) ) - log ( \sum_{n=1}^M p(n, x_i|\Theta^t) / \sum_{n=1}^M p(n, x_i|\Theta^{t-1}) ) ]   (5)

Note the lingering logarithm of a sum which prevents a simple M-step. At this point, one would resort to a Generalized EM (GEM) approach which requires gradient or second-order ascent techniques for the M-step. For example, Jordan et al. overcome the difficult M-step caused by EM with an Iteratively Re-Weighted Least Squares algorithm in the mixtures of experts architecture [4].

3 Conditional Expectation Maximization

The EM algorithm can be extended by substituting Jensen's inequality for a different bound.
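As a toy numerical check (our own, not from the paper) of the Jensen-type lower bound in Equation 2: for a single datum and M = 2 components, the incremental log-likelihood log \sum_m p_m^t - log \sum_m p_m^{t-1} always dominates \sum_m h_m log(p_m^t / p_m^{t-1}) when the responsibilities h_m are computed from the previous parameters.

```python
# Numerical check of the EM lower bound: for any candidate component joints,
# log(sum(new)) - log(sum(old)) >= sum_m h_m * log(new_m / old_m),
# with responsibilities h_m = old_m / sum(old) (Jensen's inequality).
import math
import random

random.seed(0)
old = [0.2, 0.5]  # p(m, x, y | theta_{t-1}) for M = 2 components, one datum
h = [o / sum(old) for o in old]
for _ in range(1000):
    new = [random.uniform(0.01, 1.0) for _ in range(2)]
    dl = math.log(sum(new)) - math.log(sum(old))
    bound = sum(h[m] * math.log(new[m] / old[m]) for m in range(2))
    assert dl >= bound - 1e-12
print("bound holds on all samples")
```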
Consider the upper variational bound of the logarithm, x - 1 \ge log(x) (which becomes a lower bound on the negative log). The proposed logarithm's bound satisfies a number of desiderata: (1) it makes contact at the current operating point^1, (2) it is tangential to the logarithm, (3) it is a tight bound, (4) it is simple and (5) it is the variational dual of the logarithm. Substituting this linear bound into the incremental conditional log-likelihood maintains a true lower bounding function Q (Equation 6). The Mixture of Experts formalism [4] offers a graceful representation of a conditional density using experts (conditional sub-models) and gates (marginal sub-models). The Q function adopts this form in Equation 7.

^1 The current operating point is 1 since the \Theta^t model in the ratio is held fixed at the previous iteration's value \Theta^{t-1}.
The form ofthe condi tional model we will train is obtained by conditioning a joint mixture of Gaussians. We write the conditional density in a experts-gates form as in Equation 8. We use unnormalized Gaussian gates N(x; p,~) = exp( - ~(x p)T~-1 (x - p» since conditional models do not require true marginal densities over x (i.e. that necessarily integrate to 1). Also, note that the parameters of the gates (0:' , px , :Exx ) are independent of the parameters of the experts (vm,rm,om). Both gates and experts are optimized independently and have no variables in common. An update is performed over the experts and then over the gates. If each of those causes an increase, we converge to a local maximum of conditional loglikelihood (as in Expectation Conditional Maximization [5]). p(Ylx,8) To update the experts, we hold the gates fixed and merely take derivatives of the Q function with respect to the expert parameters (<l>m = {v m , rm, am} ) and set them to O. Each expert is effectively decoupled from other terms (gates, other experts, etc.). The solution reduces to maximizing the log of a single conditioned Gaussian and is analytically straightforward. 8Q(e t ,e(t-l») 8<1>'" (9) Similarly, to update the gate mixing proportions, derivatives of the Q function are taken with respect to O:'m and set to O. By holding the other parameters fixed , the update equation for the mixing proportions is numerically evaluated (Equation 10). N N O:'m := LriN(xi;P~,:E~x) le(l-I) {Lhim}-l (10) i=l i=l (7) (8) 498 T. Jebara and A. Pentland c, O~ ,1 \ 01 I \ ~ r i 01 I \ \ at ,f ',_ 00 , ',,---, //---~ ~, ,>/~ ..... : ' ., i \, _1 -2 _I ..!..: I ':li :~ j--c' '--... 
Figure 2: Bound Width Computation and Example Bounds. (a) f function, (b) bound on μ, (c) g function, (d) bound on Σ_xx.

4.1 Bounding Gate Means

Taking derivatives of Q and setting to 0 is not as straightforward for the case of the gate means (even though they are decoupled). What is desired is a simple update rule (i.e. computing an empirical mean). Therefore, we further bound the Q function for the M-step. The Q function is actually a summation of sub-elements Q_im and we bound it instead by a summation of quadratic functions on the means (Equation 11):

Q(Θᵗ, Θ^(t−1)) = Σ_{i=1}^N Σ_{m=1}^M Q(Θᵗ, Θ^(t−1))_im ≥ Σ_{i=1}^N Σ_{m=1}^M ( k_im − w_im ‖μ_x^m − c_im‖² ).   (11)

Each quadratic bound has a location parameter c_im (a centroid), a scale parameter w_im (narrowness), and a peak value k_im. The sum of quadratic bounds makes contact with the Q function at the old values of the model Θ^(t−1), where the gate mean was originally μ_x^{m*} and the covariance is Σ_xx^{m*}. To facilitate the derivation, one may assume that the previous mean was zero and the covariance was identity if the data is appropriately whitened with respect to a given gate. The parameters of each quadratic bound are solved by ensuring that it contacts the corresponding Q_im function at Θ^(t−1) and that they have equal derivatives at contact (i.e. tangential contact). Solving these constraints yields quadratic parameters for each gate m and data point i in Equation 12 (k_im is omitted for brevity). The tightest quadratic bound occurs when w_im is minimal (without violating the inequality). The expression for w_im reduces to finding the minimal value, w*_im, as in Equation 13 (here p² = x_iᵀ x_i). The f function is computed numerically only once and stored as a lookup table (see Figure 2(a)). We thus immediately compute the optimal w*_im and the rest of the quadratic bound's parameters, obtaining bounds as in Figure 2(b), where a Q_im is lower bounded.
w*_im = r_i α_m max_c { e^(−½p²) (e^(cp) − cp − 1) / c² } + h_im/2 = r_i α_m e^(−½p²) f(p) + h_im/2.   (13)

The gate means μ_x^m are solved by maximizing the sum of the M × N parabolas which bound Q. The update is μ_x^m = (Σ w*_im c_im)(Σ w*_im)^(−1). This mean is subsequently unwhitened to undo earlier data transformations.

Figure 3: Conditional Density Estimation for CEM and EM. (a) Data, (b) CEM p(y|x), (c) CEM conditional log-likelihood, (d) EM fit, (e) EM p(y|x), (f) EM conditional log-likelihood.

4.2 Bounding Gate Covariances

Having derived the update equation for gate means, we now turn our attention to the gate covariances. We bound the Q function with logarithms of Gaussians. Maximizing this bound (a sum of log-Gaussians) reduces to the maximum-likelihood estimation of a covariance matrix. The bound for a Q_im sub-component is shown in Equation 14. Once again, we assume the data has been appropriately whitened with respect to the gate's previous parameters (the gate's previous mean is 0 and previous covariance is identity). Equation 15 solves for the log-Gaussian parameters (again p² = x_iᵀ x_i):

Q(Θᵗ, Θ^(t−1))_im ≥ k_im − w_im ( c_imᵀ (Σ_xx^m)^(−1) c_im + log |Σ_xx^m| ).   (14)

The computation for the minimal w_im simplifies to w*_im = r_i α_m g(p). The g function is derived and plotted in Figure 2(c). An example of a log-Gaussian bound is shown in Figure 2(d) for a sub-component of the Q function. Each sub-component corresponds to a single data point as we vary one gate's covariance. All M × N log-Gaussian bounds are computed (one for each data point and gate combination) and are summed to bound the Q function in its entirety. To obtain a final answer for the update of the gate covariances Σ_xx^m we simply maximize the sum of log-Gaussians (parametrized by w*_im, k_im, c_im). The update is Σ_xx^m = (Σ w*_im c_im c_imᵀ)(Σ w*_im)^(−1).
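The gate mean update quoted above follows because maximizing a sum of concave parabolas has a closed form: setting the derivative of Σ_im (k_im − w_im‖μ − c_im‖²) with respect to μ to zero yields the weighted mean of the centroids. A 1-D sketch (names are illustrative, not the paper's code):

```python
def gate_mean_update(ws, cs):
    """Maximize sum_i (k_i - w_i*(mu - c_i)^2) over mu (1-D sketch):
    the derivative -2*sum_i w_i*(mu - c_i) vanishes at a weighted
    mean of the bound centroids c_i."""
    return sum(w * c for w, c in zip(ws, cs)) / sum(ws)

ws = [1.0, 3.0, 2.0]
cs = [0.0, 2.0, -1.0]
mu = gate_mean_update(ws, cs)

def obj(m):
    # Sum of the parabolas (dropping the constants k_i, which do not
    # affect the maximizer).
    return sum(-w * (m - c) ** 2 for w, c in zip(ws, cs))

# The weighted mean beats nearby candidate means, as expected for the
# maximizer of a strictly concave function.
assert obj(mu) > obj(mu + 0.01)
assert obj(mu) > obj(mu - 0.01)
```

The same weighted-mean structure reappears in the covariance update, where the sum of log-Gaussian bounds is maximized by a weighted scatter matrix of the centroids.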
This covariance is subsequently unwhitened, inverting the whitening transform applied to the data.

5 Results

The CEM algorithm updates the conditioned mixture of Gaussians by computing h_im and r_i in the CE-steps and interlaces these with updates on the experts, mixing proportions, gate means and gate covariances. For the mixture of Gaussians, each CEM update has a computation time that is comparable with that of an EM update (even for high dimensions). However, conditional likelihood (not joint) is monotonically increased. Consider the 4-cluster (x, y) data in Figure 3(a). The data is modeled with a conditional density p(y|x) using only 2 Gaussian models. Estimating the density with CEM yields the p(y|x) shown in Figure 3(b). CEM exhibits monotonic conditional likelihood growth (Figure 3(c)) and obtains a more conditionally likely model. In the EM case, a joint p(x, y) clusters the data as in Figure 3(d). Conditioning it yields the p(y|x) in Figure 3(e). Figure 3(f) depicts EM's non-monotonic evolution of conditional log-likelihood. EM produces a superior joint likelihood but an inferior conditional likelihood. Note how the CEM algorithm utilized limited resources to capture the multimodal nature of the distribution in y and ignored spurious bimodal clustering in the x feature space. These properties are critical for a good conditional density p(y|x).

For comparison, standard databases were used from UCI². Mixture models were trained with EM and CEM, maximizing joint and conditional likelihood respectively. Regression results are shown in Table 1.

Table 1: Test results: class label regression accuracy on the Abalone data (CCN0 = cascade-correlation, 0 hidden units; CCN5 = 5 hidden units; LD = linear discriminant).

CEM exhibited monotonic conditional log-likelihood growth and out-performed other methods including EM with the same 2-Gaussian model (EM2 and CEM2).
6 Discussion

We have demonstrated a variant of EM called CEM which optimizes conditional likelihood efficiently and monotonically. The application of CEM and bound maximization to a mixture of Gaussians exhibited promising results and better regression than EM. In other work, a MAP framework with various priors and a deterministic annealing approach have been formulated. Applications of the CEM algorithm to non-linear regressor experts and hidden Markov models are currently being investigated. Nevertheless, many applications of CEM remain to be explored and hopefully others will be motivated to extend the initial results.

Acknowledgements

Many thanks to Michael Jordan and Kris Popat for insightful discussions.

References

[1] S. Amari. Information geometry of em and em algorithms for neural networks. Neural Networks, 8(9), 1995.
[2] C. Bishop. Neural Networks for Pattern Recognition. Oxford Press, 1996.
[3] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society, B39, 1977.
[4] M. Jordan and R. Jacobs. Hierarchical mixtures of experts and the em algorithm. Neural Computation, 6:181-214, 1994.
[5] X. Meng and D. Rubin. Maximum likelihood estimation via the ecm algorithm: A general framework. Biometrika, 80(2), 1993.
[6] A. Popat. Conjoint probabilistic subband modeling (PhD thesis). Technical Report 461, M.I.T. Media Laboratory, 1997.
[7] L. Xu, M. Jordan, and G. Hinton. An alternative model for mixtures of experts. In Neural Information Processing Systems 7, 1995.

²http://www.ics.uci.edu/~mlearn/MLRepository.html
1998
Source Separation as a By-Product of Regularization

Sepp Hochreiter
Fakultät für Informatik
Technische Universität München
80290 München, Germany
hochreit@informatik.tu-muenchen.de

Jürgen Schmidhuber
IDSIA
Corso Elvezia 36
6900 Lugano, Switzerland
juergen@idsia.ch

Abstract

This paper reveals a previously ignored connection between two important fields: regularization and independent component analysis (ICA). We show that at least one representative of a broad class of algorithms (regularizers that reduce network complexity) extracts independent features as a by-product. This algorithm is Flat Minimum Search (FMS), a recent general method for finding low-complexity networks with high generalization capability. FMS works by minimizing both training error and required weight precision. According to our theoretical analysis the hidden layer of an FMS-trained autoassociator attempts at coding each input by a sparse code with as few simple features as possible. In experiments the method extracts optimal codes for difficult versions of the "noisy bars" benchmark problem by separating the underlying sources, whereas ICA and PCA fail. Real world images are coded with fewer bits per pixel than by ICA or PCA.

1 INTRODUCTION

In the field of unsupervised learning several information-theoretic objective functions (OFs) have been proposed to evaluate the quality of sensory codes. Most OFs focus on properties of the code components; we refer to them as code component-oriented OFs, or COCOFs. Some COCOFs explicitly favor near-factorial, minimally redundant codes of the input data [2, 17, 23, 7, 24] while others favor local codes [22, 3, 15]. Recently there has also been much work on COCOFs encouraging biologically plausible sparse distributed codes [19, 9, 25, 8, 6, 21, 11, 16]. While COCOFs express desirable properties of the code itself they neglect the costs of constructing the code from the data. E.g., coding input data without redundancy may be very expensive in terms of information required to describe the code-generating network, which may need many finely tuned free parameters. We believe that one of sensory coding's objectives should be to reduce the cost of code generation through data transformations, and postulate that an important scarce resource is the bits required to describe the mappings that generate and process the codes. Hence we shift the point of view and focus on the information-theoretic costs of code generation. We use a novel approach to unsupervised learning called "low-complexity coding and decoding" (LOCOCODE [14]). Without assuming particular goals such as data compression, subsequent classification, etc., but in the spirit of research on minimum description length (MDL), LOCOCODE generates so-called lococodes that (1) convey information about the input data, (2) can be computed from the data by a low-complexity mapping (LCM), and (3) can be decoded by an LCM. We will see that by minimizing coding/decoding costs LOCOCODE can yield efficient, robust, noise-tolerant mappings for processing inputs and codes.

Lococodes through regularizers. To implement LOCOCODE we apply regularization to an autoassociator (AA) whose hidden layer activations represent the code. The hidden layer is forced to code information about the input data by minimizing training error; the regularizer reduces coding/decoding costs. Our regularizer of choice will be Flat Minimum Search (FMS) [13].

2 FLAT MINIMUM SEARCH: REVIEW AND ANALYSIS

FMS is a general gradient-based method for finding low-complexity networks with high generalization capability. FMS finds a large region in weight space such that each weight vector from that region has similar small error. Such regions are called "flat minima".
In MDL terminology, few bits of information are required to pick a weight vector in a "flat" minimum (corresponding to a low-complexity network): the weights may be given with low precision. FMS automatically prunes weights and units, and reduces output sensitivity with respect to remaining weights and units. Previous FMS applications focused on supervised learning [12, 13].

Notation. Let O, H, I denote index sets for output, hidden, and input units, respectively. For l ∈ O ∪ H, the activation y^l of unit l is y^l = f(s_l), where s_l = Σ_m w_lm y^m is the net input of unit l (m ∈ H for l ∈ O and m ∈ I for l ∈ H), w_lm denotes the weight on the connection from unit m to unit l, f denotes the activation function, and for m ∈ I, y^m denotes the m-th component of an input vector. W = |(O × H) ∪ (H × I)| is the number of weights.

Algorithm. FMS' objective function E features an unconventional error term:

B = Σ_{i,j: i∈O∪H} log Σ_{k∈O} (∂y^k/∂w_ij)² + W log Σ_{k∈O} ( Σ_{i,j: i∈O∪H} |∂y^k/∂w_ij| / ( Σ_{k'∈O} (∂y^{k'}/∂w_ij)² )^{1/2} )².

E = E_q + λB is minimized by gradient descent, where E_q is the training set mean squared error (MSE), and λ a positive "regularization constant" scaling B's influence. Choosing λ corresponds to choosing a tolerable error level (there is no a priori "optimal" way of doing so). B measures the weight precision (number of bits needed to describe all weights in the net). Given a constant number of output units, FMS can be implemented efficiently, namely, with standard backprop's order of computational complexity [13].

2.1 FMS: A Novel Analysis

Simple basis functions (BFs). A BF is the function determining the activation of a code component in response to a given input. Minimizing B's term

T1 := Σ_{i,j: i∈O∪H} log Σ_{k∈O} (∂y^k/∂w_ij)²

obviously reduces output sensitivity with respect to weights (and therefore units). T1 is responsible for pruning weights (and, therefore, units).
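The sensitivity term T1 can be evaluated directly for a toy network by estimating the derivatives ∂y^k/∂w_ij with finite differences. A sketch, where the two-weight tanh net and all values are invented for illustration, not the paper's architecture or implementation:

```python
import math

def output(ws, x):
    """Toy net: one hidden tanh unit feeding one linear output
    (an invented example, not the paper's autoassociator)."""
    w_in, w_out = ws
    return w_out * math.tanh(w_in * x)

def sensitivity(ws, x, j, eps=1e-6):
    """Central finite-difference estimate of dy/dw_j."""
    up = list(ws); up[j] += eps
    dn = list(ws); dn[j] -= eps
    return (output(up, x) - output(dn, x)) / (2 * eps)

def fms_term1(ws, xs):
    """T1 = sum over weights of log(sum of squared output sensitivities),
    here summed over a set of inputs for a single output unit."""
    return sum(math.log(sum(sensitivity(ws, x, j) ** 2 for x in xs))
               for j in range(len(ws)))

ws = [0.5, 1.5]
xs = [0.2, -0.7, 1.1]
b1 = fms_term1(ws, xs)
# Shrinking the weights toward zero reduces output sensitivity,
# hence lowers this part of the regularizer, consistent with T1's
# pruning pressure.
assert fms_term1([0.05, 0.15], xs) < b1
```

In the actual algorithm these sensitivities are computed analytically with backprop-order complexity rather than by finite differences; the sketch only illustrates what T1 measures.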
T1 is one reason why low-complexity (or simple) BFs are preferred: weight precision (or complexity) is mainly determined by ∂y^k/∂w_ij.

Sparseness. Because T1 tends to make unit activations decrease to zero it favors sparse codes. But T1 also favors a sparse hidden layer in the sense that few hidden units contribute to producing the output. B's second term

T2 := W log Σ_{k∈O} ( Σ_{i,j: i∈O∪H} |∂y^k/∂w_ij| / ( Σ_{k'∈O} (∂y^{k'}/∂w_ij)² )^{1/2} )²

punishes units with similar influence on the output. We reformulate it (see intermediate steps in [14]). We observe: (1) an output unit that is very sensitive with respect to two given hidden units will heavily contribute to T2 (compare the numerator in the last term of T2). (2) This large contribution can be reduced by making both hidden units have large impact on other output units (see denominator in the last term of T2).

Few separated basis functions. Hence FMS tries to figure out a way of using (1) as few BFs as possible for determining the activation of each output unit, while simultaneously (2) using the same BFs for determining the activations of as many output units as possible (common BFs). (1) and T1 separate the BFs: the force towards simplicity (see T1) prevents input information from being channelled through a single BF; the force towards few BFs per output makes them non-redundant. (1) and (2) cause few BFs to determine all outputs.

Summary. Collectively T1 and T2 (which make up B) encourage sparse codes based on few separated simple basis functions producing all outputs. Due to space limitations a more detailed analysis (e.g. linear output activation) had to be left to a TR [14] (on the WWW).

3 EXPERIMENTS

We compare LOCOCODE to "independent component analysis" (ICA, e.g., [5, 1, 4, 18]) and "principal component analysis" (PCA, e.g., [20]). ICA is realized by Cardoso's JADE algorithm, which is based on whitening and subsequent joint diagonalization of 4th-order cumulant matrices.
To measure the information conveyed by resulting codes we train a standard backprop net on the training set used for code generation. Its inputs are the code components; its task is to reconstruct the original input. The test set consists of 500 off-training set exemplars (in the case of real world images we use a separate test image). Coding efficiency is the average number of bits needed to code a test set input pixel. The code components are scaled to the interval [0, 1] and partitioned into discrete intervals. Assuming independence of the code components we estimate the probability of each discrete code value by Monte Carlo sampling on the training set. To obtain the test set codes' bits per pixel (Shannon's optimal value) the average sum of all negative logarithms of code component probabilities is divided by the number of input components. All details necessary for reimplementation are given in [14].

Noisy bars (adapted from [10, 11]). The input is a 5 x 5 pixel grid with horizontal and vertical bars at random positions. The task is to extract the independent features (the bars). Each of the 10 possible bars appears with probability k. In contrast to [10, 11] we allow for bar type mixing; this makes the task harder. Bar intensities vary in [0.1, 0.5]; input units that see a pixel of a bar are activated correspondingly, others adopt activation -0.5. We add Gaussian noise with variance 0.05 and mean 0 to each pixel. For ICA and PCA we have to provide information about the number (ten) of independent sources (tests with n assumed sources will be denoted by ICA-n and PCA-n). LOCOCODE does not require this: using 25 hidden units (HUs) we expect LOCOCODE to prune the 15 superfluous HUs.

Results. See Table 1. While the reconstruction errors of all methods are similar, LOCOCODE has the best coding efficiency. 15 of the 25 HUs are indeed automatically pruned: LOCOCODE finds an optimal factorial code which exactly mirrors the pattern generation process.
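The bits-per-pixel measurement described above can be sketched as follows. The Laplace smoothing for intervals unseen in training is an added assumption (the paper does not say how empty intervals are handled); everything else follows the text:

```python
import math

def bits_per_pixel(train_codes, test_codes, n_intervals, n_input_components):
    """Estimate the Shannon coding cost of test-set codes: scale components
    to [0,1], partition into discrete intervals, estimate interval
    probabilities on the training set, then average the negative
    log-probabilities of the test codes per input component."""
    n_comp = len(train_codes[0])
    lo = [min(c[j] for c in train_codes) for j in range(n_comp)]
    hi = [max(c[j] for c in train_codes) for j in range(n_comp)]

    def bucket(v, j):
        span = (hi[j] - lo[j]) or 1.0
        t = min(max((v - lo[j]) / span, 0.0), 1.0)
        return min(int(t * n_intervals), n_intervals - 1)

    # Monte-Carlo estimate of per-component interval probabilities,
    # with a Laplace prior of one pseudo-count per interval (assumption).
    counts = [[1.0] * n_intervals for _ in range(n_comp)]
    for c in train_codes:
        for j in range(n_comp):
            counts[j][bucket(c[j], j)] += 1.0
    probs = [[k / sum(row) for k in row] for row in counts]

    # Average negative log2-probability of test codes per input component.
    total = 0.0
    for c in test_codes:
        total += sum(-math.log2(probs[j][bucket(c[j], j)])
                     for j in range(n_comp))
    return total / (len(test_codes) * n_input_components)

# A binary code component costs about one bit, and never more than
# log2 of the number of intervals.
codes = [[float(i % 2)] for i in range(100)]
bpp = bits_per_pixel(codes, codes, n_intervals=10, n_input_components=1)
assert 0.0 < bpp < math.log2(10)
```

Dense codes spread mass over many intervals per component, which is why they show up with higher bits-per-pixel figures than sparse codes in Table 1.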
PCA codes and ICA-15 codes, however, are unstructured and dense. While ICA-10 codes are almost sparse and do recognize some sources, the sources are not clearly separated like with LOCOCODE; compare the weight patterns shown in [14].

Real world images. Now we use more realistic input data, namely subsections of: 1) the aerial shot of a village, 2) an image of wood cells, and 3) an image of a striped piece of wood. Each image has 150 x 150 pixels, each taking on one of 256 gray levels. 7 x 7 (5 x 5 for village) pixel subsections are randomly chosen as training inputs. Test sets stem from images similar to 1), 2), and 3).

Results. For the village image LOCOCODE discovers on-center-off-surround hidden units forming a sparse code. For the other two images LOCOCODE also finds appropriate feature detectors; see weight patterns shown in [14]. Using its compact, low-complexity features it always codes more efficiently than ICA and PCA.

exp.     field  meth.  num.  rec. error  code    bits per pixel: 10 / 20 / 50 / 100 intervals
bars     5x5    LOC    10    1.05        sparse  0.584  0.836  1.163  1.367
bars     5x5    ICA    10    1.02        sparse  0.811  1.086  1.446  1.678
bars     5x5    PCA    10    1.03        dense   0.796  1.062  1.418  1.655
bars     5x5    ICA    15    0.71        dense   1.189  1.604  2.142  2.502
bars     5x5    PCA    15    0.72        dense   1.174  1.584  2.108  2.469
village  5x5    LOC    8     1.05        sparse  0.436  0.622  0.895  1.068
village  5x5    ICA    8     1.04        sparse  0.520  0.710  0.978  1.165
village  5x5    PCA    8     1.04        dense   0.474  0.663  0.916  1.098
village  5x5    ICA    10    1.11        sparse  0.679  0.934  1.273  1.495
village  5x5    PCA    10    0.97        dense   0.578  0.807  1.123  1.355
village  7x7    LOC    10    8.29        sparse  0.250  0.368  0.547  0.688
village  7x7    ICA    10    7.90        dense   0.318  0.463  0.652  0.796
village  7x7    PCA    10    9.21        dense   0.315  0.461  0.648  0.795
village  7x7    ICA    15    6.57        dense   0.477  0.694  0.981  1.198
village  7x7    PCA    15    8.03        dense   0.474  0.690  0.972  1.189
cell     7x7    LOC    11    0.840       sparse  0.457  0.611  0.814  0.961
cell     7x7    ICA    11    0.871       sparse  0.468  0.622  0.829  0.983
cell     7x7    PCA    11    0.722       sparse  0.452  0.610  0.811  0.960
cell     7x7    ICA    15    0.360       sparse  0.609  0.818  1.099  1.315
cell     7x7    PCA    15    0.329       dense   0.581  0.798  1.073  1.283
piece    7x7    LOC    4     0.831       sparse  0.207  0.269  0.347  0.392
piece    7x7    ICA    4     0.856       sparse  0.207  0.276  0.352  0.400
piece    7x7    PCA    4     0.830       sparse  0.207  0.269  0.348  0.397
piece    7x7    ICA    10    0.716       sparse  0.535  0.697  0.878  1.004
piece    7x7    PCA    10    0.534       sparse  0.448  0.590  0.775  0.908

Table 1: Overview of experiments: name of experiment, input field size, coding method, number of relevant code components (code size), reconstruction error, nature of code observed on the test set. PCA's and ICA's code sizes need to be pre-wired. LOCOCODE's, however, are found automatically (we always start with 25 HUs). The final 4 columns show the coding efficiency measured in bits per pixel, assuming the real-valued HU activations are partitioned into 10, 20, 50, and 100 discrete intervals. LOCOCODE codes most efficiently.
4 CONCLUSION

According to our analysis LOCOCODE attempts to describe single inputs with as few and as simple features as possible. Given the statistical properties of many visual inputs (with few defining features), this typically results in sparse codes. Unlike objective functions of previous methods, however, LOCOCODE's does not contain an explicit term enforcing, say, sparse codes; sparseness or independence are not viewed as good things a priori. Instead we focus on the information-theoretic complexity of the mappings used for coding and decoding. The resulting codes typically compromise between conflicting goals. They tend to be sparse and exhibit low but not minimal redundancy if the cost of minimal redundancy is too high. Our results suggest that LOCOCODE's objective may embody a general principle of unsupervised learning going beyond previous, more specialized ones. We see that there is at least one representative (FMS) of a broad class of algorithms (regularizers that reduce network complexity) which (1) can do optimal feature extraction as a by-product, (2) outperforms traditional ICA and PCA on visual source separation tasks, and (3) unlike ICA does not even need to know the number of independent sources in advance. This reveals an interesting, previously ignored connection between regularization and ICA, and may represent a first step towards unification of regularization and unsupervised learning.

More. Due to space limitations, much additional theoretical and experimental analysis had to be left to a tech report (29 pages, 20 figures) on the WWW: see [14].

Acknowledgments. This work was supported by DFG grant SCHM 942/3-1 and DFG grant BR 609/10-2 from "Deutsche Forschungsgemeinschaft".

References

[1] S. Amari, A. Cichocki, and H. H. Yang. A new learning algorithm for blind signal separation. In David S. Touretzky, Michael C. Mozer, and Michael E.
Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 757-763. The MIT Press, Cambridge, MA, 1996.
[2] H. B. Barlow, T. P. Kaushal, and G. J. Mitchison. Finding minimum entropy codes. Neural Computation, 1(3):412-423, 1989.
[3] H. G. Barrow. Learning receptive fields. In Proceedings of the IEEE 1st Annual Conference on Neural Networks, volume IV, pages 115-121. IEEE, 1987.
[4] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, 1995.
[5] J.-F. Cardoso and A. Souloumiac. Blind beamforming for non Gaussian signals. IEE Proceedings-F, 140(6):362-370, 1993.
[6] P. Dayan and R. Zemel. Competition and multiple cause models. Neural Computation, 7:565-579, 1995.
[7] G. Deco and L. Parra. Nonlinear features extraction by unsupervised redundancy reduction with a stochastic neural network. Technical report, Siemens AG, ZFE ST SN 41, 1994.
[8] D. J. Field. What is the goal of sensory coding? Neural Computation, 6:559-601, 1994.
[9] P. Földiák and M. P. Young. Sparse coding in the primate cortex. In M. A. Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 895-898. The MIT Press, Cambridge, Massachusetts, 1995.
[10] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The wake-sleep algorithm for unsupervised neural networks. Science, 268:1158-1161, 1995.
[11] G. E. Hinton and Z. Ghahramani. Generative models for discovering sparse distributed representations. Philosophical Transactions of the Royal Society B, 352:1177-1190, 1997.
[12] S. Hochreiter and J. Schmidhuber. Simplifying nets by discovering flat minima. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 529-536. MIT Press, Cambridge MA, 1995.
[13] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1-42, 1997.
[14] S. Hochreiter and J. Schmidhuber. LOCOCODE.
Technical Report FKI-222-97, Revised Version, Fakultät für Informatik, Technische Universität München, 1998.
[15] T. Kohonen. Self-Organization and Associative Memory. Springer, second ed., 1988.
[16] M. S. Lewicki and B. A. Olshausen. Inferring sparse, overcomplete image codes using an efficient coding framework. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10, 1998. To appear.
[17] R. Linsker. Self-organization in a perceptual network. IEEE Computer, 21:105-117, 1988.
[18] L. Molgedey and H. G. Schuster. Separation of independent signals using time-delayed correlations. Phys. Review Letters, 72(23):3634-3637, 1994.
[19] M. C. Mozer. Discovering discrete distributed representations with iterative competitive learning. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 627-634. San Mateo, CA: Morgan Kaufmann, 1991.
[20] E. Oja. Neural networks, principal components, and subspaces. International Journal of Neural Systems, 1(1):61-68, 1989.
[21] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996.
[22] D. E. Rumelhart and D. Zipser. Feature discovery by competitive learning. In Parallel Distributed Processing, pages 151-193. MIT Press, 1986.
[23] J. Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.
[24] S. Watanabe. Pattern Recognition: Human and Mechanical. Wiley, New York, 1985.
[25] R. S. Zemel and G. E. Hinton. Developing population codes by minimizing description length. In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 11-18. San Mateo, CA: Morgan Kaufmann, 1994.
1998
Analog VLSI Cellular Implementation of the Boundary Contour System

Gert Cauwenberghs and James Waskiewicz
Department of Electrical and Computer Engineering
Johns Hopkins University
3400 North Charles Street, Baltimore, MD 21218-2686
E-mail: {gert,davros}@bach.ece.jhu.edu

Abstract

We present an analog VLSI cellular architecture implementing a simplified version of the Boundary Contour System (BCS) for real-time image processing. Inspired by neuromorphic models across several layers of visual cortex, the design integrates in each pixel the functions of simple cells, complex cells, hyper-complex cells, and bipole cells, in three orientations interconnected on a hexagonal grid. Analog current-mode CMOS circuits are used throughout to perform edge detection, local inhibition, directionally selective long-range diffusive kernels, and renormalizing global gain control. Experimental results from a fabricated 12 x 10 pixel prototype in 1.2 μm CMOS technology demonstrate the robustness of the architecture in selecting image contours in a cluttered and noisy background.

1 Introduction

The Boundary Contour System (BCS) and Feature Contour System (FCS) combine models for processes of image segmentation, feature filling, and surface reconstruction in biological vision systems [1], [2]. They provide a powerful technique to recognize patterns and restore image quality under excessive fixed pattern noise, such as in SAR images [3]. A related model with similar functional and structural properties is presented in [4]. The motivation for implementing a relatively complex model such as BCS and FCS on the focal-plane is dual. First, as argued in [5], complex neuromorphic active pixel designs become viable engineering solutions as the feature size of the VLSI technology shrinks significantly below the optical diffraction limit, and more transistors can be stuffed in each pixel. The pixel design that we present contains 88 transistors, likely the most complex
The pixel design that we present contains 88 transistors, likely the most complex 658 Bipole Cells (long-range orientational cooperaHon) Input Image (locally normalized and contrast enhanced; diffused) BCS FCS G. Cauwenberghs and J. Waskiewicz ...... Diffusive Network ...... Local itlhibiticnt Focal-Plane Receptors; ...... Ri1ndom-Access Inputs Figure 1: Diagram of BCSIFCS model for image segmentation, feature filling, and surface reconstruction. Three layers represent simple, complex and bipole cells. active pixel imager ever put on silicon. Second, our motivation is to extend the functionality of previous work on analog VLSI neuromorphic image processors for image boundary segmentation, e.g. [6, 7, 5, 8,9] which are based on simplified physical models that do not include directional selectivity and/or long-range signal aggregation for boundary fonnation in the presence of significant noise and clutter. The analog VLSI implementation of BCS reported here is a first step towards this goal, with the additional objectives of real-time, low-power operation as required for demanding target recognition applications. As an alternative to focal-plane optical input, the image can be loaded electronically through random-access pixel addressing. The BCS model encompasses visual processing at different levels, including several layers of cells interacting through shunting inhibition, long-range cooperative excitation, and renonnalization. The implementation architecture, shown schematically in Figure 1, partitions the BCS model into three levels: simple cells, complex and hypercomplex cells, and bipole cells. Simple cells compute unidirectional gradients of nonnalized intensity obtained from the photoreceptors. Complex (hyper-complex) cells perfonn spatial and directional competition (inhibition) for edge fonnation. Bipole cells perfonn long-range cooperation for boundary contour enhancement, and exert positive feedback (excitation) onto the hypercomplex cells. 
Our present implementation does not include the FCS model, which completes and fills features through diffusive spatial filtering of the image blocked by the edges fonned in BCS. 2 Modified BeS Algorithm and Implementation We adopted the BCS algorithm for analog continuous-time implementation on a hexagonal grid, extending in three directions u, v and w on the focal plane as indicated schematically in Figure 2. For notational convenience, let subscript 0 denote the center pixel and ±u, ±v and ±w its six neighbors. Components of each complex cell "vector" C i at grid location i, along three directions of edge selectivity, are indicated with superscript indices u, v and w. In the implemented circuit model, a pixel unit consists of a photosensor (or random-access analog memory) sourcing a current indicating light intensity, gradient computation and rectification circuits implementing simple cells in three directions, and one complex (hyperAnalog VLSI Cellular Implementation of the Boundary Contour System 659 Figure 2: Hexagonal arrangement of Bes pixels, at the level of simple and complex cells, extending in three directions u, v and w in the focal plane. complex) cell and one bipole cell for each of the three directions. The photosensors generate a current Ii that is proportional to intensity. Through current mirrors, the currents Ii propagate in the three directions u, v, and w as noted in Figure 2. Rectified finite-difference gradient estimates of Ii are obtained for each of the three hexagonal directions. These gradients excite the complex cells cl. Lateral inhibition among spatially (i) and directionally (j) adjacent complex cells implement the function of hypercomplex cells for edge enhancement and noise reduction. The complex output (Cl) is inhibited by local complex cell outputs in the two competing directions of j. Co is additionally inhibited by the complex cells of the four nearest neighbors in competing locations i with parallel orientation. 
A directionally selective interconnected diffusive network of bipole cells B_i^j, interacting with the complex cells C_i^j, provides long range cooperative feedback, and enhances smooth edge contours while reducing spurious edges due to image clutter. C_i^j is excited by bipole interaction received from the bipole cell B_i^j on the line crossing i in the same direction j. The operation of the (hyper-)complex cells in the hexagonal arrangement is summarized in the following equation, for one of the three directions u:

C_0^u = |½ (I_v + I_w) − I_0| − α (C_0^v + C_0^w) − α′ (C_{+v}^u + C_{−v}^u + C_{+w}^u + C_{−w}^u) + β B_0^u,   (1)

where:
1. |½ (I_v + I_w) − I_0| represents the rectified gradient input as approximated on the hexagonal grid;
2. α (C_0^v + C_0^w) is the inhibition from locally opposing directions;
3. α′ (C_{+v}^u + C_{−v}^u + C_{+w}^u + C_{−w}^u) is inhibition from non-aligned neighbors in the same direction; and
4. β B_0^u is the excitation through long-range cooperation from the bipole cell.

Figure 3: Network of bipole cells, implemented on a hexagonal resistive grid using orientationally tuned diffusors extending in three directions. g_lat/g_vert determines the spatial extent of the bipole, whereas g_lat/g_cross sets the directional selectivity.

The bipole cell resistive grid (Figure 3) implements a three-fold cross-coupled, directionally polarized, long-range diffusive kernel, formulated as follows:

B_0^u = (K^u ⊗ C^u)_0,   (2)

where K^u, K^v, and K^w represent spatial convolutional kernels implementing bipole fields symmetrically polarized in the u, v and w directions. Diffusive kernels can be efficiently implemented with a distributed representation using resistive diffusive elements [7, 10]. Three linear networks of diffusor elements are used, complemented with cross-links of adjustable strength, to control the degree of direction selectivity and the spatial spread of the kernel. Finally, the result (2) is locally normalized, before it is fed back onto the complex cells.
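The effect of a resistive diffusor chain of the kind that realizes one orientation of the bipole kernel can be sketched by relaxing a 1-D network to steady state: the response to a unit input decays roughly exponentially with distance, with spatial extent set by the ratio of lateral to vertical conductance. The parameter values below are made up for illustration; this is not the chip's circuit model:

```python
def diffusive_kernel(n, g_lat, g_vert, iters=2000):
    """Steady-state response of a 1-D diffusor chain to a unit input at
    the center node: lateral conductances g_lat couple neighbors,
    vertical conductances g_vert couple each node to its input. Each
    node's voltage is the conductance-weighted average of its
    neighbors and its input (Jacobi relaxation to the fixed point)."""
    b = [0.0] * n
    src = [0.0] * n
    src[n // 2] = 1.0
    for _ in range(iters):
        new = []
        for i in range(n):
            left = b[i - 1] if i > 0 else 0.0
            right = b[i + 1] if i < n - 1 else 0.0
            neighbors = 2 if 0 < i < n - 1 else 1
            new.append((g_lat * (left + right) + g_vert * src[i])
                       / (g_lat * neighbors + g_vert))
        b = new
    return b

k = diffusive_kernel(11, g_lat=1.0, g_vert=0.5)
mid = len(k) // 2
# The kernel peaks at the source and decays monotonically with distance;
# raising g_lat relative to g_vert widens it.
assert k[mid] == max(k)
assert k[mid] > k[mid + 1] > k[mid + 2] > 0.0
assert abs(k[mid - 1] - k[mid + 1]) < 1e-6  # symmetric about the source
```

The chip realizes three such chains (one per orientation) with adjustable cross-links, so the directional selectivity and spatial spread of the kernel are set by bias voltages rather than recomputed digitally.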
3 Analog VLSI Implementation

The simplified circuit diagram of the BCS cell, including simple, complex and bipole cell functions on a hexagonal grid, is shown in Figure 4. The image is acquired either optically from phototransistors on the focal plane, or in direct electronic format through random-access pixel addressing, Figure 4 (a). The simple cell portion in Figure 4 (b) combines the local intensity I_0 with intensities I_v and I_w received from neighboring cells to compute the rectified gradient in (1), using distributed current mirrors and an absolute value circuit. A pMOS load converts the complex cell output into a voltage representation C_0^u for distribution to neighboring nodes and complementary orientations: local inhibition for spatial and directional competition in Figure 4 (c), and long-range cooperation through the bipole layer in Figure 4 (d). The linear diffusive kernel is implemented in current-mode using ladder structures of subthreshold MOS transistors [7], three families extending in each direction with cross-links for directional dispersion as indicated in Figure 3.

Figure 4: Simplified circuit schematic of one BCS cell in the hexagonal array, showing only one of three directions, the other directions being symmetrical in implementation. (a) Photosensor and random-access input selection circuit. (b) Simple cell rectified gradient calculation. (c) Complex cell spatial and orientational inhibition. (d) Bipole cell directional long-range cooperation. (e) Bipole global gain and threshold control.

Voltage biases control the spatial extent and directional selectivity of the interactions, as well as the relative strength of inhibition and excitation, and the level of renormalization, for the complex and bipole cells. The values for g_vert, g_lat and g_cross controlling the bipole kernel are set externally by applying gate bias voltages V_vert, V_lat and V_cross, respectively. Likewise, the constants α, α' and β in (1) are set independently by the applied source voltages V_α, V_α' and V_β. Global normalization and thresholding of the bipole response for improved stability of edge formation is achieved through an additional diffusive network that acts as a localized Gilbert-type current normalizer (only partially shown in Figure 4 (e)).

4 Experimental Results

A prototype 12 x 10 pixel array has been fabricated and tested. The pixel unit, illustrated in Figure 5 (a), has been designed for testability, and has not been optimized for density. The pixel contains 88 transistors including a phototransistor, a large sample-and-hold capacitor, and three networks of interconnections in each of the three directions, requiring a fan-in/fan-out of 18 node voltages across the interface of each pixel unit. A micrograph of the tiny 2.2 x 2.2 sq. mm chip, fabricated through MOSIS in 1.2 µm CMOS technology, is shown in Figure 5 (b). We have tested the BCS chip both under focal-plane optical inputs, and random-access direct electronic inputs. Input currents from optical input under ambient room lighting conditions are around 30 nA. The experimental results reported here are obtained by feeding test inputs electronically. The responses of the BCS chip to two test images of interest are shown in Figures 6 and 7.

Figure 5: BCS processor. (a) Pixel layout. (b) Chip micrograph.
Figure 6: Experimental response of the BCS chip to a curved edge. (a) Reconstructed input image. (b) Complex field. (c) Bipole field. The thickness of the bars on the grid represents the measured components in the three directions.

Figure 6 illustrates the interpolating directional response to a curved edge in the input, varying in direction between two of the principal axes (u and w in the example). Interpolation between quantized directions is important since implementing more axes on the grid incurs a quadratic cost in complexity. The second example image contains a bar with two gaps of different diameter, for the purpose of testing the BCS's capacity to extend contour boundaries across clutter. The response in Figure 7 illustrates a characteristic of bipole operation, in which short-range discontinuities are bridged but large ones are preserved.

5 Conclusions

An analog VLSI cellular architecture implementing the Boundary Contour System (BCS) on the focal plane has been presented. A diffusive kernel with distributed resistive networks has been used to implement long-range interactions of bipole cells without the need of excessive global interconnects across the array of pixels. The cellular model is fairly easy to implement, and succeeds in selecting boundary contours in images with significant clutter.
Figure 7: Experimental response of the BCS chip to a bar with two gaps of different size. (a) Reconstructed input image. (b) Complex field. (c) Bipole field.

Experimental results from a 12 x 10 pixel prototype demonstrate expected BCS operation on simple examples. While this size is small for practical applications, the analog cellular architecture is fully scalable towards higher resolutions. Based on the current design, a 10,000-pixel array in 0.5 µm CMOS technology would fit a 1 cm² die.

Acknowledgments

This research was supported by DARPA and ONR under MURI grant N00014-95-1-0409. Chip fabrication was provided through the MOSIS service.

References

[1] S. Grossberg, "Neural Networks for Visual Perception in Variable Illumination," Optics News, pp. 5-10, August 1988.
[2] S. Grossberg, "A Solution of the Figure-Ground Problem for Biological Vision," Neural Networks, vol. 6, pp. 463-482, 1993.
[3] S. Grossberg, E. Mingolla, and J. Williamson, "Synthetic Aperture Radar Processing by a Multiple Scale Neural System for Boundary and Surface Representation," Neural Networks, vol. 9 (1), January 1996.
[4] Z.P. Li, "A Neural Model of Contour Integration in the Primary Visual Cortex," Neural Computation, vol. 10 (4), pp. 903-940, 1998.
[5] K.A. Boahen, "A Retinomorphic Vision System," IEEE Micro, vol. 16 (5), pp. 30-39, Oct. 1996.
[6] J.G. Harris, C. Koch, and J. Luo, "A Two-Dimensional Analog VLSI Circuit for Detecting Discontinuities in Early Vision," Science, vol. 248, pp. 1209-1211, June 1990.
[7] A.G. Andreou, K.A. Boahen, P.O. Pouliquen, A. Pavasovic, R.E. Jenkins, and K. Strohbehn, "Current-Mode Subthreshold MOS Circuits for Analog VLSI Neural Systems," IEEE Transactions on Neural Networks, vol. 2 (2), pp. 205-213, 1991.
[8] L. Dron McIlrath, "A CCD/CMOS Focal-Plane Array Edge Detection Processor Implementing the Multiscale Veto Algorithm," IEEE J. Solid State Circuits, vol.
31 (9), pp. 1239-1248, 1996.
[9] P. Venier, A. Mortara, X. Arreguit and E.A. Vittoz, "An Integrated Cortical Layer for Orientation Enhancement," IEEE J. Solid State Circuits, vol. 32 (2), pp. 177-186, Feb. 1997.
[10] E. Fragniere, A. van Schaik and E. Vittoz, "Reactive Components for Pseudo-Resistive Networks," Electronics Letters, vol. 33 (23), pp. 1913-1914, Nov. 1997.
1998
Unsupervised Classification with Non-Gaussian Mixture Models using ICA

Te-Won Lee, Michael S. Lewicki and Terrence Sejnowski
Howard Hughes Medical Institute, Computational Neurobiology Laboratory, The Salk Institute, 10010 N. Torrey Pines Road, La Jolla, California 92037, USA
{tewon,lewicki,terry}@salk.edu

Abstract

We present an unsupervised classification algorithm based on an ICA mixture model. The ICA mixture model assumes that the observed data can be categorized into several mutually exclusive data classes in which the components in each class are generated by a linear mixture of independent sources. The algorithm finds the independent sources, the mixing matrix for each class and also computes the class membership probability for each data point. This approach extends the Gaussian mixture model so that the classes can have non-Gaussian structure. We demonstrate that this method can learn efficient codes to represent images of natural scenes and text. The learned classes of basis functions yield a better approximation of the underlying distributions of the data, and thus can provide greater coding efficiency. We believe that this method is well suited to modeling structure in high-dimensional data and has many potential applications.

1 Introduction

Recently, Blind Source Separation (BSS) by Independent Component Analysis (ICA) has shown promise in signal processing applications including speech enhancement systems, telecommunications and medical signal processing. ICA is a technique for finding a linear non-orthogonal coordinate system in multivariate data. The directions of the axes of this coordinate system are determined by the data's second- and higher-order statistics.
The goal of ICA is to linearly transform the data such that the transformed variables are as statistically independent from each other as possible (Bell and Sejnowski, 1995; Cardoso and Laheld, 1996; Lee et al., 1999a). ICA generalizes the technique of Principal Component Analysis (PCA) and, like PCA, has proven a useful tool for finding structure in data. One limitation of ICA is the assumption that the sources are independent. Here, we present an approach for relaxing this assumption using mixture models. In a mixture model (Duda and Hart, 1973), the observed data can be categorized into several mutually exclusive classes. When the class variables are modeled as multivariate Gaussian densities, it is called a Gaussian mixture model. We generalize the Gaussian mixture model by modeling each class with independent variables (ICA mixture model). This allows modeling of classes with non-Gaussian (e.g., platykurtic or leptokurtic) structure. An algorithm for learning the parameters is derived using the expectation maximization (EM) algorithm. In Lee et al. (1999c), we demonstrated that this approach showed improved performance in data classification problems. Here, we apply the algorithm to learning efficient codes for representing different types of images.

2 The ICA Mixture Model

We assume that the data were generated by a mixture density (Duda and Hart, 1973):

  p(x|Θ) = Σ_{k=1}^K p(x|C_k, θ_k) p(C_k),    (1)

where Θ = (θ_1, ..., θ_K) are the unknown parameters for each p(x|C_k, θ_k), called the component densities. We further assume that the number of classes, K, and the a priori probability, p(C_k), for each class are known. In the case of a Gaussian mixture model, p(x|C_k, θ_k) ∝ N(μ_k, Σ_k). Here we assume that the form of the component densities is non-Gaussian and the data within each class are described by an ICA model.
  x_k = A_k s_k + b_k,    (2)

where A_k is an N x M scalar matrix (called the basis or mixing matrix) and b_k is the bias vector for class k. The vector s_k is called the source vector (these are also the coefficients for each basis vector). It is assumed that the individual sources s_i within each class are mutually independent across a data ensemble. For simplicity, we consider the case where A_k is full rank, i.e. the number of sources (M) is equal to the number of mixtures (N). Figure 1 shows a simple example of a dataset that can be described by the ICA mixture model. Each class was generated from eq. 2 using a different A and b. Class (o) was generated by two uniform distributed sources, whereas class (+) was generated by two Laplacian distributed sources (p(s) ∝ exp(-|s|)). The task is to model the unlabeled data points and to determine the parameters for each class, A_k and b_k, and the probability of each class, p(C_k|x, θ_{1:K}), for each data point. A learning algorithm can be derived by an expectation maximization approach (Ghahramani, 1994) and implemented in the following steps:

• Compute the log-likelihood of the data for each class:

  log p(x|C_k, θ_k) = log p(s_k) - log(det |A_k|),    (3)

where θ_k = {A_k, b_k, s_k}.

• Compute the probability for each class given the data vector x:

  p(C_k|x, θ_{1:K}) = p(x|θ_k, C_k) p(C_k) / Σ_k p(x|θ_k, C_k) p(C_k).    (4)

Figure 1: A simple example for classifying an ICA mixture model. There are two classes (+) and (o); each class was generated by two independent variables, two bias terms and two basis vectors. Class (o) was generated by two uniform distributed sources as indicated next to the data class. Class (+) was generated by two Laplacian distributed sources with a sharp peak at the bias and heavy tails. The inset graphs show the distributions of the source variables, s_{i,k}, for each basis vector.
• Adapt the basis functions A and the bias terms b for each class. The basis functions are adapted using gradient ascent:

  ΔA_k ∝ ∂/∂A_k log p(x|θ_{1:K}) = p(C_k|x, θ_{1:K}) ∂/∂A_k log p(x|C_k, θ_k).    (5)

Note that this simply weights any standard ICA algorithm gradient by p(C_k|x, θ_{1:K}). The gradient can also be summed over multiple data points. The bias term is updated according to

  b_k = Σ_t x_t p(C_k|x_t, θ_{1:K}) / Σ_t p(C_k|x_t, θ_{1:K}),    (6)

where t is the data index (t = 1, ..., T).

The three steps in the learning algorithm perform gradient ascent on the total likelihood of the data in eq. 1. The extended infomax ICA learning rule is able to blindly separate mixed sources with sub- and super-Gaussian distributions. This is achieved by using a simple type of learning rule first derived by Girolami (1998). The learning rule in Lee et al. (1999b) uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and super-Gaussian regimes. The learning rule expressed in terms of W = A^{-1}, called the filter matrix, is:

  ΔW ∝ [I - K tanh(u) u^T - u u^T] W,    (7)

where the k_i are elements of the N-dimensional diagonal matrix K and u = Wx. The unmixed sources u are the source estimate s (Bell and Sejnowski, 1995). The k_i's are (Lee et al., 1999b)

  k_i = sign(E[sech² u_i] E[u_i²] - E[u_i tanh u_i]).    (8)

The source distribution is super-Gaussian when k_i = 1 and sub-Gaussian when k_i = -1. For the log-likelihood estimation in eq. 3 the term log p(s) can be approximated as follows:

  log p(s) ∝ - Σ_n (log cosh s_n + s_n²/2)    (super-Gaussian)
  log p(s) ∝ + Σ_n (log cosh s_n - s_n²/2)    (sub-Gaussian)    (9)

Super-Gaussian densities are approximated by a density model with a heavier tail than the Gaussian density; sub-Gaussian densities are approximated by a bimodal density (Girolami, 1998). Although the source density approximation is crude, it has been demonstrated to be sufficient for standard ICA problems (Lee et al., 1999b).
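The class-membership computation of eqs. (3)-(4) can be exercised numerically on a two-class setup like that of Figure 1; the mixing matrices, biases, and the shared Laplacian source prior below are hypothetical stand-ins with equal class priors assumed, not the values behind the figure:

```python
import numpy as np

# Hypothetical per-class parameters: basis (mixing) matrices A_k and biases b_k.
A = [np.array([[2.0, 1.0], [0.0, 1.5]]), np.array([[1.0, -1.0], [1.0, 1.0]])]
b = [np.array([3.0, 3.0]), np.array([-3.0, -3.0])]

def class_posterior(x, log_source=lambda s: -np.sum(np.abs(s))):
    """p(C_k | x, theta_1:K) via eqs. (3)-(4): recover s_k = A_k^{-1}(x - b_k),
    score log p(x|C_k) = log p(s_k) - log|det A_k|, then normalize.
    A Laplacian source prior stands in for the true class densities."""
    logp = np.array([log_source(np.linalg.solve(A[k], x - b[k]))
                     - np.log(abs(np.linalg.det(A[k]))) for k in range(2)])
    w = np.exp(logp - logp.max())      # subtract max for numerical stability
    return w / w.sum()
```

A point near one class's bias vector is assigned to that class with high probability, which is exactly the soft labeling that weights the gradient in eq. (5).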
When learning sparse representations only, a Laplacian prior (p(s) ∝ exp(-|s|)) can be used for the weight update, which simplifies the infomax learning rule to

  ΔW ∝ [I - sign(u) u^T] W,    with log p(s) ∝ - Σ_n |s_n| (Laplacian prior).    (10)

3 Learning efficient codes for images

Recently, several approaches have been proposed to learn image codes that utilize a set of linear basis functions. Olshausen and Field (1996) used a sparseness criterion and found codes that were similar to localized and oriented receptive fields. Similar results were presented by Bell and Sejnowski (1997) using the infomax algorithm and by Lewicki and Olshausen (1998) using a Bayesian approach. By applying the ICA mixture model we present results which show a higher degree of flexibility in encoding the images. We used images of natural scenes obtained from Olshausen and Field (1996) and text images of scanned newspaper articles. The training set consisted of 12 by 12 pixel patches selected randomly from both image types. Figure 2 illustrates examples of those image patches. Two complete basis vectors A_1 and A_2 were randomly initialized. Then, for each gradient in eq. 5 a stepsize was computed as a function of the amplitude of the basis vectors and the number of iterations. The algorithm converged after 100,000 iterations and learned two classes of basis functions as shown in figure 3. Figure 3 (top) shows basis functions corresponding to natural images. The basis functions show Gabor-like (Gaussian-modulated sinusoidal) structure as previously reported in (Olshausen and Field, 1996; Bell and Sejnowski, 1997; Lewicki and Olshausen, 1998). However, figure 3 (bottom) shows basis functions corresponding to text images. These basis functions resemble bars with different lengths and widths that capture the high-frequency structure present in the text images.
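The switching rule of eqs. (7)-(8) can be sketched as a single batch update; the data shapes, step size, and the test distributions below are our own choices, and this is an illustration rather than the authors' code:

```python
import numpy as np

def extended_infomax_step(W, X, lr=0.01):
    """One batch step of the extended infomax rule:
    dW ∝ [I - K tanh(u) u^T - u u^T] W  (eq. 7), with the diagonal of K
    switched per source by the sign criterion of eq. (8)."""
    U = W @ X                                            # source estimates (N, T)
    k = np.sign(np.mean(np.cosh(U) ** -2, axis=1) * np.mean(U ** 2, axis=1)
                - np.mean(U * np.tanh(U), axis=1))       # eq. (8)
    T = X.shape[1]
    G = np.eye(W.shape[0]) - (k[:, None] * np.tanh(U)) @ U.T / T - U @ U.T / T
    return W + lr * G @ W, k
```

On a super-Gaussian (Laplacian) source the criterion selects k_i = +1 and on a sub-Gaussian (uniform) source k_i = -1, matching the two regimes above; replacing K tanh(u) with sign(u) recovers the Laplacian-prior simplification of eq. (10).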
3.1 Comparing coding efficiency

We have compared the coding efficiency between the ICA mixture model and similar models using Shannon's theorem to obtain a lower bound on the number of bits required to encode the pattern:

  #bits ≥ - log₂ p(x|A) - N log₂(σ_x),    (11)

where N is the dimensionality of the input pattern x and σ_x is the coding precision (standard deviation of the noise introduced by errors in encoding).

Figure 2: Example of natural scene and text image. The 12 by 12 pixel image patches were randomly sampled from the images and used as inputs to the ICA mixture model.

Table 1 compares the coding efficiency of five different methods. It shows the number of bits required to encode three different test data sets (5000 image patches from natural scenes, 5000 image patches from text images and 5000 image patches from both image types) using five different encoding methods (ICA mixture model, nature trained ICA, text trained ICA, nature and text trained ICA, and PCA trained on all three test sets). It is clear that ICA basis functions trained on natural scene images exhibit the best encoding when only natural scenes are presented (column: nature). The same applies to text images (column: text). Note that text training yields a reasonable basis for both data sets but nature training gives a good basis only for nature. The ICA mixture model shows the same encoding power for the individual test data sets, and it gives the best encoding when both image types are present. In this case, the encoding difference between the ICA mixture model and PCA is significant (more than 20%). ICA mixtures yielded a small improvement over ICA trained on both image types. We expect the size of the improvement to be greater in situations where there are greater differences among the classes.
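The bound (11) is easy to evaluate directly; the numbers below (a 12 x 12 patch and 7-bit precision) mirror the setup reported with Table 1, while the log-likelihood argument is a placeholder value, not a measured one:

```python
import numpy as np

def coding_cost_bits(loglik, n_dims, sigma_x):
    """Lower bound of eq. (11) on the bits needed to encode one pattern:
    #bits >= -log2 p(x|A) - N log2(sigma_x). `loglik` is ln p(x|A)."""
    return -loglik / np.log(2.0) - n_dims * np.log2(sigma_x)

def bits_per_pixel(loglik, n_dims, sigma_x):
    """Per-pixel cost, the unit reported in Table 1."""
    return coding_cost_bits(loglik, n_dims, sigma_x) / n_dims
```

A better density model raises log p(x|A) and therefore lowers the bound, which is how the table's per-pixel comparisons should be read.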
An advantage of the mixture model is that each image patch is automatically classified.

4 Discussion

The new algorithm for unsupervised classification presented here is based on a maximum likelihood mixture model using ICA to model the structure of the classes. We have demonstrated here that the algorithm can learn efficient codes to represent different image types such as natural scenes and text images. In this case, the learned classes of basis functions show a 20% improvement over PCA encoding. The ICA mixture model should show better image compression rates than traditional compression algorithms such as JPEG. The ICA mixture model is a nonlinear model in which each class is modeled as a linear process and the choice of class is modeled using probabilities. This model can therefore be seen as a nonlinear ICA model.

Figure 3: (Left) Basis function class corresponding to natural images. (Right) Basis function class corresponding to text images.

Table 1: Comparing coding efficiency

  Training set and model       | Nature | Text | Nature and Text
  -----------------------------+--------+------+----------------
  ICA mixtures                 |  4.72  | 5.20 |  4.96
  Nature trained ICA           |  4.72  | 9.57 |  7.15
  Text trained ICA             |  5.00  | 5.19 |  5.10
  Nature and text trained ICA  |  4.83  | 5.29 |  5.07
  PCA                          |  6.22  | 5.97 |  6.09

Coding efficiency (bits per pixel) of five methods is compared for three test sets. Coding precision was set to 7 bits (Nature: σ_x = 0.016 and Text: σ_x = 0.029).

Furthermore, it is one way of relaxing the independence assumption over the whole data set. The ICA mixture model is a conditional independence model, i.e., the independence assumption holds only within each class and there may be dependencies among classes. A different view of the ICA mixture model is to think of the classes as being an overcomplete representation. Compared to the approach of Lewicki and Sejnowski (1998), the main difference is that the basis functions learned here are mutually exclusive, i.e.
each class uses its own set of basis functions. This method is similar to other approaches including the mixture density networks by Bishop (1994) in which a neural network was used to find arbitrary density functions. This algorithm reduces to the Gaussian mixture model when the source priors are Gaussian. Purely Gaussian structure, however, is rare in real data sets. Here we have used priors of the form of super-Gaussian and sub-Gaussian densities, but these could be extended as proposed by Attias (1999). The proposed model was used for learning a complete set of basis functions without additive noise. However, the method can be extended to take into account additive Gaussian noise and an overcomplete set of basis vectors (Lewicki and Sejnowski, 1998).

In (Lee et al., 1999c), we have performed several experiments on benchmark data sets for classification problems. The results were comparable to or improved over those obtained by AutoClass (Stutz and Cheeseman, 1994), which uses a Gaussian mixture model. Furthermore, we showed that the algorithm can be applied to blind source separation in nonstationary environments. The method can switch automatically between learned mixing matrices in different environments (Lee et al., 1999c). This may prove to be useful in the automatic detection of sleep stages by observing EEG signals. The method can identify these stages due to the changing source priors and their mixing. Potential applications of the proposed method include the problem of noise removal and the problem of filling in missing pixels. We believe that this method provides greater flexibility in modeling structure in high-dimensional data and has many potential applications.

References

Attias, H. (1999). Blind separation of noisy mixtures: An EM algorithm for independent factor analysis. Neural Computation, in press.
Bell, A. J. and Sejnowski, T. J. (1995).
An Information-Maximization Approach to Blind Separation and Blind Deconvolution. Neural Computation, 7:1129-1159.
Bell, A. J. and Sejnowski, T. J. (1997). The 'independent components' of natural scenes are edge filters. Vision Research, 37(23):3327-3338.
Bishop, C. (1994). Mixture density networks. Technical Report, NCRG/4288.
Cardoso, J.-F. and Laheld, B. (1996). Equivariant adaptive source separation. IEEE Trans. on S.P., 45(2):434-444.
Duda, R. and Hart, P. (1973). Pattern classification and scene analysis. Wiley, New York.
Ghahramani, Z. (1994). Solving inverse problems using an EM approach to density estimation. Proceedings of the 1993 Connectionist Models Summer School, pages 316-323.
Girolami, M. (1998). An alternative perspective on adaptive independent component analysis algorithms. Neural Computation, 10(8):2103-2114.
Lee, T.-W., Girolami, M., Bell, A. J., and Sejnowski, T. J. (1999a). A unifying framework for independent component analysis. International Journal on Mathematical and Computer Models, in press.
Lee, T.-W., Girolami, M., and Sejnowski, T. J. (1999b). Independent component analysis using an extended infomax algorithm for mixed sub-Gaussian and super-Gaussian sources. Neural Computation, 11(2):409-433.
Lee, T.-W., Lewicki, M. S., and Sejnowski, T. J. (1999c). ICA mixture models for unsupervised classification and automatic context switching. In International Workshop on ICA, Aussois, in press.
Lewicki, M. and Olshausen, B. (1998). Inferring sparse, overcomplete image codes using an efficient coding framework. In Advances in Neural Information Processing Systems 10, pages 556-562.
Lewicki, M. and Sejnowski, T. J. (1998). Learning nonlinear overcomplete representations for efficient coding. In Advances in Neural Information Processing Systems 10, pages 815-821.
Olshausen, B. and Field, D. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609.
Stutz, J. and Cheeseman, P.
(1994). Autoclass - a Bayesian approach to classification. Maximum Entropy and Bayesian Methods, Kluwer Academic Publishers.
1998
Barycentric Interpolators for Continuous Space & Time Reinforcement Learning

Remi Munos & Andrew Moore
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA. E-mail: {munos, awm}@cs.cmu.edu

Abstract

In order to find the optimal control of continuous state-space and time reinforcement learning (RL) problems, we approximate the value function (VF) with a particular class of functions called barycentric interpolators. We establish sufficient conditions under which a RL algorithm converges to the optimal VF, even when we use approximate models of the state dynamics and the reinforcement functions.

1 INTRODUCTION

In order to approximate the value function (VF) of a continuous state-space and time reinforcement learning (RL) problem, we define a particular class of functions, called barycentric interpolators, that use an interpolation process based on finite sets of points. This class of functions, including continuous or discontinuous piecewise linear and multi-linear functions, provides us with a general method for designing RL algorithms that converge to the optimal value function. Indeed these functions permit us to discretize the HJB equation of the continuous control problem by a consistent (and thus convergent) approximation scheme, which is solved by using some model of the state dynamics and the reinforcement functions.

Section 2 defines the barycentric interpolators. Section 3 describes the optimal control problem in the deterministic continuous case. Section 4 states the convergence result for RL algorithms by giving sufficient conditions on the applied model. Section 5 gives some computational issues for this method, and Section 6 describes the approximation scheme used here and proves the convergence result.
2 DEFINITION OF BARYCENTRIC INTERPOLATORS

Let Σ^δ = {ξ_i}_i be a set of points distributed at some resolution δ (see (4) below) on the state space of dimension d. For any state x inside some simplex (ξ_1, ..., ξ_n), we say that x is the barycenter of the {ξ_i}_{i=1..n} inside this simplex with positive coefficients p(x|ξ_i) of sum 1, called the barycentric coordinates, if x = Σ_{i=1..n} p(x|ξ_i)·ξ_i. Let V^δ(ξ_i) be the value of the function at the points ξ_i. V^δ is a barycentric interpolator if for any state x which is the barycenter of the points {ξ_i}_{i=1..n} for some simplex (ξ_1, ..., ξ_n), with the barycentric coordinates p(x|ξ_i), we have:

  V^δ(x) = Σ_{i=1..n} p(x|ξ_i) V^δ(ξ_i).    (1)

Moreover we assume that the simplex (ξ_1, ..., ξ_n) is of diameter O(δ). Let us describe some simple barycentric interpolators:

• Piecewise linear functions defined by some triangulation on the state space (thus defining continuous functions), see figure 1.a, or defined at any x by a linear combination of (d + 1) values at any points (ξ_1, ..., ξ_{d+1}) ∋ x (such functions may be discontinuous at some boundaries), see figure 1.b.

• Piecewise multi-linear functions defined by a multi-linear combination of the 2^d values at the vertices of d-dimensional rectangles, see figure 1.c. In this case as well, we can build continuous interpolations or allow discontinuities at the boundaries of the rectangles.

An important point is that the convergence result stated in Section 4 does not require the continuity of the function. This permits us to build variable resolution triangulations (see figure 1.b) or grids (figure 1.c) easily.

Figure 1: Some examples of barycentric approximators. These are piecewise continuous (a) or discontinuous (b) linear or multi-linear (c) interpolators.

Remark 1 In the general case, for a given x, the choice of a simplex (ξ_1, ...
, ξ_n) ∋ x is not unique (see the two sets of grey and black points in figures 1.b and 1.c), and once the simplex (ξ_1, ..., ξ_n) ∋ x is defined, if n > d + 1 (for example in figure 1.c), then the choice of the barycentric coordinates p(x|ξ_i) is also not unique.

Remark 2 Depending on the interpolation method we use, the time needed for computing the values will vary. Following [Dav96], the continuous multi-linear interpolation must process 2^d values, whereas the linear continuous interpolation inside a simplex processes (d + 1) values in O(d log d) time.

In comparison to [Gor95], the functions used here are averagers that satisfy the barycentric interpolation property (1). This additional geometric constraint permits us to prove the consistency (see (15) below) of the approximation scheme and thus the convergence to the optimal value in the continuous time case.

3 THE OPTIMAL CONTROL PROBLEM

Let us describe the optimal control problem in the deterministic and discounted case for continuous state-space and time variables and define the value function that we intend to approximate. We consider a dynamical system whose state dynamics depends on the current state x(t) ∈ O (the state space, with O an open subset of ℝ^d) and control u(t) ∈ U (compact subset) by a differential equation:

  dx/dt = f(x(t), u(t))    (2)

From equation (2), the choice of an initial state x and a control function u(·) leads to a unique trajectory x(t) (see figure 2). Let τ be the exit time from O (with the convention that if x(t) always stays in O, then τ = ∞). Then, we define the functional J as the discounted cumulative reinforcement:

  J(x; u(·)) = ∫_0^τ γ^t r(x(t), u(t)) dt + γ^τ R(x(τ)),

where r(x, u) is the running reinforcement and R(x) the boundary reinforcement. γ is the discount factor (0 ≤ γ < 1). We assume that f, r and R are bounded and Lipschitzian, and that the boundary ∂O is C².
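The barycentric coordinates and the interpolation property (1) from Section 2 can be sketched generically by solving a small linear system; this construction is our own illustration (the fast approximate methods the paper alludes to come later, in Section 5):

```python
import numpy as np

def barycentric_coords(x, simplex):
    """Barycentric coordinates p(x|xi_i) of x in a d-simplex given as
    (d+1) vertex rows: they sum to 1 and satisfy x = sum_i p_i xi_i."""
    V = np.asarray(simplex, dtype=float)           # shape (d+1, d)
    A = np.vstack([V.T, np.ones(len(V))])          # stack the sum-to-one constraint
    return np.linalg.solve(A, np.append(np.asarray(x, dtype=float), 1.0))

def interpolate(x, simplex, values):
    """Barycentric interpolator of eq. (1): V(x) = sum_i p(x|xi_i) V(xi_i)."""
    return float(barycentric_coords(x, simplex) @ np.asarray(values, dtype=float))
```

Inside the simplex the coordinates are positive, and any affine function is reproduced exactly, which is the geometric fact behind the consistency of the scheme.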
RL uses the method of Dynamic Programming (DP), which introduces the value function (VF): the maximal value of J as a function of the initial state x:

  V(x) = sup_{u(·)} J(x; u(·)).

From the DP principle, we deduce that V satisfies a first-order differential equation, called the Hamilton-Jacobi-Bellman (HJB) equation (see [FS93] for a survey):

Theorem 1 If V is differentiable at x ∈ O, let DV(x) be the gradient of V at x; then the following HJB equation holds at x:

  H(V, DV, x) := V(x) ln γ + sup_{u∈U} [DV(x)·f(x, u) + r(x, u)] = 0    (3)

The challenge of RL is to get a good approximation of the VF, because from V we can deduce the optimal control: for state x, the control u*(x) that realizes the supremum in the HJB equation provides an optimal (feed-back) control law. The following hypothesis is a sufficient condition for V to be continuous within O (see [Bar94]) and is required for proving the convergence result of the next section.

Hyp 1: For x ∈ ∂O, let n(x) be the outward normal of O at x. We assume that:
- If ∃u ∈ U s.t. f(x, u)·n(x) ≤ 0, then ∃v ∈ U s.t. f(x, v)·n(x) < 0.
- If ∃u ∈ U s.t. f(x, u)·n(x) ≥ 0, then ∃v ∈ U s.t. f(x, v)·n(x) > 0.

This means that at the states (if there exist any) where some trajectory is tangent to the boundary, there exists, for some control, a trajectory strictly coming inside and one strictly leaving the state space.

Figure 2: The state space and the set of points Σ^δ (the black dots belong to the interior and the white ones to the boundary). The value at some point ξ is updated, at step n, by the discounted value at point η_n ∈ (ξ_1, ξ_2, ξ_3). The main requirement for convergence is that the points η_n approximate η in the sense p(η_n|ξ_i) = p(η|ξ_i) + o(δ) (i.e. the η_n belong to the grey area).
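Theorem 1 can be checked numerically on a toy problem; the dynamics, reward, and candidate value function below are our own hand-built example, not one from the paper:

```python
import math

# Toy 1-D example: dynamics dx/dt = u with u in [-1, 1], running
# reinforcement r = 0, boundary reward R = 1 at x = 1. Moving right at full
# speed reaches the boundary after time (1 - x), so the candidate value
# function is V(x) = gamma^(1 - x).
GAMMA = 0.5

def V(x):
    return GAMMA ** (1.0 - x)

def hjb_residual(x, h=1e-6):
    """H(V, DV, x) of eq. (3), with DV estimated by a central difference."""
    dV = (V(x + h) - V(x - h)) / (2.0 * h)
    sup_term = max(dV * u + 0.0 for u in (-1.0, 1.0))  # sup over U of DV.f + r
    return V(x) * math.log(GAMMA) + sup_term
```

The residual vanishes (up to finite-difference error) at interior states, confirming that this V satisfies the HJB equation for the toy problem.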
O--------QO~------~~~ 4 THE CONVERGENCE RESULT Let us introduce the set of points I;0 = {~di' composed of the interior (I;0 n 0) and the boundary (8I;° = I; \ 0), such that its convex hull covers the state space 0, and performing a discretization at some resolution 6 : VxEO, inf IIX-~ill::;6 and VxE80 inf Ilx-~jll::;6 (4) €.EE6no €jE&E6 Moreover, we approximate the control space U by some finite control spaces UO C U such that for 6 ::; 6', UO' c UO and liffiO-+o UO = U. We would like to update the value of any: - interior point ~ E I;0 nO with the discounted values at state 77n(~, u) (figure 2) : V~+l (~) +- sup ["YTn(€,u)V~(77n(~' u)) + Tn(~, u) . rn(~, u)] (5) uEU 6 for some state 77n(~, u), some time delay Tn(~, u) and some reinforcement rn(~, u) . - boundary point ~ E 8I;° with some terminal reinforcement Rn(~) : V~+1 (~) +- Rn(~) (6) The following theorem states that the values V~ computed by a RL algorithm using the model (because of some a priori partial uncertainty of the state dynamics and the reinforcement functions) 77n(~, u), Tn(~, u), rn(~, u) and Rn(~) converge to the optimal value function as the number of iterations n -+ 00 and the resolution 6 -+ O. Let us define the state 77(~, u) (see figure 2) : 77(~, u) = ~ + T(~, u).f(~, u) (7) for some time delay T(~, u) (with k16 ::; T(~, u) ::; k26 for some constants kl > 0 and k2 > 0), and let p(77I~i) (resp. P(77nl~d) be the barycentric coordinate of 77 inside a simplex containing it (resp. 77n inside the same simplex). We will write 77, 77n , T, 1', .. . , instead of 77(~, u), 77n(~, u), T(~, u), r(~, u), ... when no confusion is possible. Theorem 2 Assume that the hypotheses of the previous sections hold, and that for any resolution 6, we use barycentric interpolators VO defined on state spaces I;0 (satisfying (4)) such that all points of I;0 nO are regularly updated with rule (5) and all points of 8I;° are updated with rule (6) at least once. 
Suppose that 77n , Tn, rn and Rn approximate 77, T, rand R in the sense: V~i, P(77nl~d p(77I~i) + 0(6) (8) Tn T + 0(62) (9) rn 1'+0(6) (10) Rn R + 0(6) (11) 1028 R. Munos and A. W. Moore then we have limn-+oo V; V uniformly on any compact 0 C 0 (i.e. "Ie > 0, "10 0-+0 ° compact C 0, 3~, 3N, such that "18 ~ ~,Vn 2: N, SUp~6nn IVn - VI ~ e). Remark 3 For a given value of 8, the rule (5) is not a DP updating rule for some Markov Decision Problem (MDP) since the values l7n, Tn, rn depend on n. This point is important in the RL framework since this allows on-line improvement of the model of the state dynamics and the reinforcement functions. Remark 4 This result extends the previous results of convergence obtained by Finite-Element or Finite-Difference methods (see {Mun97}}. This theoretical result can be applied by starting from a rough EO (high 8) and by combining to the iteration process (n ~ 00) some learning process of the model (l7n ~ 17) and a increasing process of the number of points (8 ~ 0). 5 COMPUTATIONAL ISSUES From (8) we deduce that the method will also converge if we use an approximate barycentric interpolator, defined at any state x E (~1"'" ~n) by the value of the barycentric interpolator at some state x' E (~1' ... , ~n) such that p(X'I~i) = p(XI~i) + 0(8) (see figure 3) . The fact that we need not be completely accurate can be Approx-linear Linear ~3 X x' ~4 Figure 3: The linear function and the approximation error around it (the grey area). The value of the approximate linear function plotted here at some state x is equal to the value of the linear one at x'. Any such approximate barycenter interpolator can be used in (5). used to our advantage. First, the computation of barycentric coordinates can use very fast approximate matrix methods. Second, the model we use to integrate the dynamics need not be perfect. 
We can make an 0(&2) error, which is useful if we are learning a model from data: we need simply arrange to not gather more data than is necessary for the current 8. For example, if we use nearest neighbor for our dynamics learning, we need to ensure enough data so that every observation is 0(82) from its nearest neighbor. If we use local regression, then a mere 0(8) density is all that is required [Om087, AMS97]. 6 PROOF OF THE CONVERGENCE RESULT 6.1 Description of the approximation scheme We use a convergent scheme derived from Kushner (see [Kus90]) in order to approximate the continuous control problem by a finite MDP. The HJB equation is discretized, at some resolution 8, into the following DP equation : for ~ E EO nO, VO(~) = FO [vo(.)] (~) ~f sUPUEU6 {"IT L~t p(l7l~i).v°(~d + T.r} (12) and for ~ E BEo, VO (~) = R(~) . This is a fixed-point equation and we can prove that, thanks to the discount factor "I, it satisfies the "strong" contraction property: SUP~6 jv;+l - vo I ~ ,\. sup~61V; - vo I for some ,\ < 1 (13) Barycentric Interpolators for Continuous Reinforcement Learning 1029 from which we deduce that there exists exactly one solution Va to the DP equation, which can be computed by some value iteration process : for any initial Voa, we iterate V~+l f- Fa [V~] . Thus for any resolution 8, the values V~ -+ Va as 71 -+ 00. Moreover, as va is a barycentric interpolator and from the definition (7) of "I , Fa [va (.)] (~) = sUPuEU6 {-yT va (~ + T.f(~ , u)) + T.r} (14) from which we deduce that the scheme Fa is consistent : in a formal sense, limsuPa--+o ilFa[W](x) - W(x)1 '" H(W, DW,x) (15) and obtain, from the general convergence theorem of [BS91] (and a result of strong unicity obtained from hyp.l)' the convergence of the scheme : va -+ V as 8 -+ O. 6.2 Use of the "weak contraction" result of convergence Since in the RL approach used here, we only have an approximation "In , Tn , ... of the true values "I, T, ... 
, the strong contraction property (13) does not hold any more. However, in previous work ([Mun98]), we have proven the convergence for some weakened conditions, recalled here : If the values V~ updated by some algorithm satisfy the "weak" contraction property with respect to a solution va of a convergent approximation scheme (such as the previous one (12)) : sUPE6no 1V~+1 - Va I < (1 - k.8) . SUPE6 IV~ - va 1+ 0(8) (16) SUP&E61V~+1 - va I 0(8) (17) for some positive constant k, (with the notation f(8) :S 0(8) iff 39(8) = 0(8) with f(8) :S 9(8)) then we have limn-+oo V~ = V uniformly on any compact 0 C 0 a--+O (i.e. Vf > 0, VO compact C 0, 3~ and N such that V8 :S ~,Vn ~ N , SUPE6nn IV~ - Vi :S f) . 6.3 Proof of theorem 2 We are going to use the approximations (8), (9), (10) and (11) to deduce that the weak contraction property holds, and then use the result of the previous section to prove theorem 2. The proof of (17) is immediate since, from (6) and (11) we have : V~ E a'L.o, 1V~+1(~) va(~)1 = I Rn(~) R(~)I = 0(8) Now we need to prove (16) . Let us estimate the error En(~) = va(~) V~(~) between the value Va of the DP equation (12) and the values V~ computed by rule (5) after one iteration : En+d~) = SUPuEU6 {LE' [-{ p(TJI~d· Va (~d "(Tn P(TJn I~d.v~ (~d] + T.T' - Tn .rn} En+d~) = SUp { "(T LE, [P( ryl~;) - p( ryn I~d] Va (~;) + b T "(Tn] L€, p( "In I~i)' va (~d uEU6 + "(Tn L€. P(TJn I~i)' [va (~;) V~(~;)] + Tn [1' - rn] + [T - Tn] r} By using (9) (from which we deduce : -{ = "(Tn + 0(82 )) and (10), we deduce : IEn+d~)1 < SUPuEU6 {"(T ·IL€, [P(TJI~;) - P(TJnl~d] Va (~d I (18) +"(Tn L€, P(TJnl~i).lVa(~d V~(~i)l} + 0(82 ) . 1030 R. Munos and A. 
W Moore From the basic properties of the coefficients p( 1J1~d and p( 1Jn I~;) we have: LE, [P(1JI~i) P(1Jnl~d] VO(~i) = L(, [P(1JI~d P(1Jnl~d] [VO(~d VO(~)] (19) Moreover, IVO(~d VO(~)I ~ IVO(~d V(~i)1 + 1V(~i) V(~)I + IV(~) VO(~)I· From the convergence of the scheme V O, we have sUPE6nn Ivo - Vi ~ 0 for any compact nCo and from the continuity of V and the fact that the support of the simplex {O 3 1J is 0(0), we have sUPE6nn 1V(~d - V(~)I ~ 0 and deduce that sUPE 6 nn Jv°(~i) VO(~)I o~ O. Thus, from (19) and (8) , we obtain: ILE' [P(1JI~) - P(1Jnl~)] VO(~dl = 0(0) (20) The "weak" contraction property (16) holds: from the property of the exponential function ,Tn ~ 1 - 2f In ~ for small values of Tn, from (9) and that T 2: klO , we deduce that ,Tn ~ 1 - ¥ In ~ + 0(02 ), and from (18) and (20) we deduce that : IV;+l(~) VO(~)I ~ (1- k.0)SUPE61V;+d~) VO(~)I + 0(0) with k = ¥ In 1 , and the property (16) holds. Thus the "weak contraction" result ~ "I of convergence (described in section 6.2) applies and convergence occurs. FUTURE WORK This work proves the convergence to the optimal value as the resolution tends to the limit, but does not provide us with the rate of convergence. Our future work will focus on defining upper bounds of the approximation error, especially for variable resolution discretizations, and we will also consider the stochastic case. ACKNOWLEDGMENTS This research was sponsored by DASSAULT-AVIATION and CMU. References [AMS97] c. G. Atkeson, A. W. Moore, and S. A. Schaal. Locally Weighted Learning. AI Review, 11:11- 73, April 1997. [Bar94] Guy Barles. Solutions de viscosite des equations de Hamilton-Jacobi, volume 17 of Mathematiques et Applications. Springer-Verlag, 1994. [BS91] Guy Barles and P.E. Souganidis. Convergence of approximation schemes for fully nonlinear second order equations. Asymptotic Analysis, 4:271- 283, 1991. [Dav96] Scott Davies. Multidimensional triangulation and interpolation for reinforcement learning. 
Advances in Neural Information Processing Systems, 8, 1996. [FS93] Wendell H. Fleming and H. Mete Soner. Controlled Markov Processes and Viscosity Solutions. Applications of Mathematics. Springer-Verlag, 1993. [Gor95] G. Gordon. Stable function approximation in dynamic programming. International Conference on Machine Learning, 1995. [Kus90] Harold J. Kushner. Numerical methods for stochastic control problems in continuous time. SIAM J. Control and Optimization, 28:999- 1048, 1990. [Mun97] Remi Munos. A convergent reinforcement learning algorithm in the continuous case based on a finite difference method. International Joint Conference on A rtificial Intelligence, 1997. [Mun98] Remi Munos. A general convergence theorem for reinforcement learning in the continuous case. European Conference on Machine Learning, 1998. [Omo87] S. M. Omohundro. Efficient Algorithms with Neural Network Behaviour. Journal of Complex Systems, 1(2):273-347, 1987.
1998
129
1,485
Coordinate Transformation Learning of Hand Position Feedback Controller by U sing Change of Position Error Norm Eimei Oyama* Mechanical Eng. Lab. Namiki 1-2, Tsukuba Science City Ibaraki 305-8564 Japan Abstract Susumu Tachi The University of Tokyo Hongo 7-3-1, Bunkyo-ku Tokyo 113-0033 Japan In order to grasp an object, we need to solve the inverse kinematics problem, i.e., the coordinate transformation from the visual coordinates to the joint angle vector coordinates of the arm. Although several models of coordinate transformation learning have been proposed, they suffer from a number of drawbacks. In human motion control, the learning of the hand position error feedback controller in the inverse kinematics solver is important. This paper proposes a novel model of the coordinate transformation learning of the human visual feedback controller that uses the change of the joint angle vector and the corresponding change of the square of the hand position error norm. The feasibility of the proposed model is illustrated using numerical simulations. 1 INTRODUCTION The task of calculating every joint angle that would result in a specific hand position is called the inverse kinematics problem. An important topic in neuroscience is the study of the learning mechanisms involved in the human inverse kinematics solver. We questioned five pediatricians about the motor function of infants suffering from serious upper limb disabilities. The doctors stated that the infants still were able to touch and stroke an object without hindrance. In one case, an infant without a thumb had a major kinematically influential surgical operation, transplanting an index finger as a thumb. After the operation, the child was able to learn how to use the index finger like a thumb [1]. 
In order to explain the human motor learning • Phone:+81-298-58-7298, Fax:+81-298-58-7201, e-mail:eimei@mel.go.jp Coordinate Transformation Learning of Feedback Controller 1039 capability, we believe that the coordinate transformation learning of the feedback controller is a necessary component. Although a number of learning models of the inverse kinematics solver have been proposed, a definitive learning model has not yet been obtained. This is from the point of view of the structural complexity of the learning model and the biological plausibility of employed hypothesis. The Direct Inverse Modeling employed by many researchers [2] requires the complex switching of the input signal of the inverse model. When the hand position control is performed, the input of the inverse model is the desired hand position, velocity, or acceleration. When the inverse model learning is performed, the input is the observed hand position, velocity, or acceleration. Although the desired signal and the observed signal could coincide, the characteristics of the two signals are very different. Currently, no research has succeesfully modeled the switching system. Furthermore, that learning model is not "goal-directed"; i.e., there is no direct way to find an action that corresponds to a particular desired result. The Forward and Inverse Modeling proposed by Jordan [3] requires the back-propagation signal, a technique does not have a biological basis. That model also requires the complex switching of the desired output signal for the forward model. When the forward model learning is performed, the desired output is the observed hand position. When the inverse kinematics solver learning is performed, the desired output is the desired hand position. The Feedback Error Learning proposed by Kawato [4] requires a pre-existing accurate feedback controller. 
It is necessary to obtain a learning model that possesses a number of characteristics: (1) it can explain the human learning function; (2) it has a simple structure; and (3) it is biologically plausible. This paper presents a learning model of coordinate transformation function of the hand position feedback controller. This model uses the joint angle vector change and the corresponding change of square of the hand position error norm. 2 BACKGROUND 2.1 Discrete Time First Order Model of Hand Position Controller Let 8 E Rm be the joint angle vector and x ERn be the hand position/orientation vector given by the vision system. The relationship between x and 8 is expressed as x = /(8) where / is a C1 class function. The Jacobian of the hand position vector is expressed as J(8) = 8/(8)/88. Let Xd be the desired hand position and e = Xd X = Xd - /(8) be the hand position error vector. In this paper, an inverse kinematics problem is assumed to be a least squares minimization problem that calculates 8 in order to minimize the square of the hand position error norm S(xd,8) = le1 2/2 = IXd - /(8)1 2/2. First, the feed-forward controller in the human inverse kinematics solver is disregarded and the following first order control system, consisting of a learning feedback controller, is considered: Xti Desired Hand Position + 8(k + 1) = 8(k) + A8(k) Position Error e(k) Feedback • Di~rbance Joint Angle H Arm Hand NOIse d(k) Vector uman Position + ~k~ (f?-'f(8) X(!i... tPp/.8, e) (+-~ 8(k) Figure 1: Configuration of 1-st Order Model of Hand Position Controller (1) 1040 E. Oyama and S. Tachi a8(k) = ~fb(8(k), e(k)) + d(k) (2) e(k) = Xd - f(8(k)) (3) where d(k) is assumed to be a disturbance noise from all components except the hand position control system. Figure 1 shows the configuration of the control system. In this figure, Z-l is the operator that indicates a delay in the discrete time signal by a sampling interval of tl.t. 
Although the human hand position control system includes higher order complex dynamics terms which are ignored in Equation (2), McRuer's experimental model of human compensation control suggests that the term that converts the hand position error to the hand velocity is a major term in the human control system [5]. We consider Equation (2) to be a good approximate model for the analysis of human coordinates transformation learning. The learner ~ fb (8, e) E R m, which provides the hand position error feedback, is modeled using the artificial neural network. In this paper, the hand position error feedback controller learning by observing output x(k) is considered without any prior knowledge of the function f (8). 2.2 Learning Model of the Neural Network Let ~fb(8, e) be the desired output of the learner ~fb(8, e). ~fb(8, e) functions as a teacher for ~fb(8,e). Let ~jb(8 , e) be the updated output of ~fb(8,e) by the learning. Let E[t(8, e)18, e] be the expected value of a scalar, a vector, or a matrix function t(8,e) when the input vector (8,e) is given. We assume that ~fb(8 , e) is an ideal learner which is capable of realizing the mean of the desired output signal, completely. ~+ fb(8, e) can be expressed as follows: ~jb(8, e) ~ E[~fb(8 , e)18, e] = ~fb(8 , e) + E[a~fb(8, e)18, e] (4) a~fb(8 , e) = ~fb(8 , e) ~fb(8, e) (5) When the expected value of a~fb(8, e) is expressed as: E[a~fb(8,e)18,e] ~ Gfbe Rfb~fb(8 , e) , Rfb E Rm xm is a positive definite matrix, and the inequality I 8~jb(8 , e) I = I 8(G fbe - (Rfb I)~fb(8, e» I < 1 8~fb(8, e) 8~ fb(8, e) is satisfied, the final learning result can be expressed as: ~fb(8 , e) ~ Rjb1Gfbe by the iteration of the update of ~fb(8 , e) expressed in Equation (4). 
3 USE OF CHANGE OF POSITION ERROR NORM 3.1 A Novel Learning Model of Feedback Controller (6) (7) (8) The change of the square of the hand position error norm tl.S = S(Xd, 8 + a8) S(Xd, 8) reflects whether or not the change of the joint angle vector A8 is in proper direction. The propose novel learning model can be expressed as follows: ~fb(8, e) = -atl.Sa8 (9) where a is a small positive real number. We now consider a large number of trials of Equation (2) with a large variety of initial status 8(0) with learnings conducted at the point of the input space of the feedback controller (8, e) = (8(k -1), e(k -1» at time k. tl.S and a8 can be calculated as follows. tl.S S(k) - S(k - 1) = ~(le(kW -Ie(k - 1W) (10) a8 = a8(k - 1) (11) Coordinate Transformation Learning of Feedback Controller Hand Position Error e(k) e(k) .---------, Change of Square of Input for Learning e(k-l) --Hand Position Error Norm Error Signal for k'/ '---,..--T----' Feedback Input for Input for Controller Learning Control (J(k-l) 8(k) Change of Joint Angle Vector d8(k) Dist""'-z d(k) NoiJe Figure 2: Configuration of Learning Model of Feedback Controller Figure 2 shows the conceptual diagram of the proposed learning model. 1041 Let p(qI8, e) be the probability density function of a vector q at at the point (8, e) in the input space of ~fb(8, e). In order to simplify the analysis of the proposed learning model, d(k) is assumed to satisfy the following equation: p(dI8, e) = p( -dI8, e) (12) When d8 is small enough, the result of the learning using Equation (9) can be expressed as: ~fb(8, e) ~ a(~R9JT (8)J(8) + 1)-1 R9JT (8)e R9 = E[d8d8TI8, e] (13) (14) where JT (8)e is a vector in the steepest descent direction of S(Xd, 8). When d(k) is a non-zero vector, R9 is a positive definite symmetric matrix and (~R9JT J + 1)-1 is a positive definite matrix. When a is appropriate, ~ fb(8, e) as expressed in Equation (13) can provide appropriate output error feedback control. 
The derivation of the above result will be illustrated in Section 3.2. A partially modified steepest descent direction can be obtained without using the forward model or the back-propagation signal, as Jordan's forward modeling [3]. Let Rd be the covariance matrix of the disturbance noise d(k). When a is infinitesimal, R9 ~ Rd is established and an approximate solution ~fb(8,e) ~ aRdJT(8)e is obtained. 3.2 Derivation of Learning Result The change of the square of the hand position error norm llS(Xd, 8) by d8 can be determined as: llS(xd, 8) = 8S~;, 8) d8 + ~d8T H(Xd, 8)d8 + O(d83 ) (15) = -eT(J(8) + ~ 8~~8) i&l d8)d8 + ~d8T J T (8)J(8)d8 + O(d83 ) where i&l is a 2-operand operator that indicates the Croneker's product. H(Xd,8) E Rmxm is the Hessian of S(Xd, 8). O(d83 ) is the sum of third and higher order terms of d8 in each equation. When d8 is small enough, the following approximate equations are obtained: 1 18J(8) dx ~ J(8)d8 ~ J(8 + 2"d8)d8 ~ (J(8) + 2 88 i&l d8)d8 (16) Therefore, llS can be approximated as follows: 1 llS ~ _eT J(8)d8 + 21dXI2 (17) 1042 E. Oyama and S. Tachi Since eT J AOAO = AOAOT JT e and IAxI2 AO = AOAOT JT J AO are determined, tl.S AO can be approximated as: (18) Considering AOnjb defined as AOnjb = AO - .jb(O,e), the expected value of the product of AO and tl.S at the point (O,e) in the input space of .jb(O,e) can be approximated as follows: E[tl.SAOIO, e] TIT ~ -ReJ e + 2ReJ J.jb(O,e) (19) 1 T T + 2E[AOAO J J AOnjblO, e] When the arm is controlled according to Equation (2), AOnjb is the disturbance noise d(k). Since d(k) satisfies Equation (12), the following equation is established. E[AOAOT JT JAOnjbIO,e] = 0 (20) Therefore, the expected value of A.jb(O, e) can be expressed as; TaT E[A.jb(O, e)IO, e] ~ aReJ e - (2ReJ J + I).jb(O, e) (21) When a is small enough, the condition described in Equation (7) is established. The learning result expressed as Equation (13) is obtained as described in Section 2.2. 
It should be noted that the learning algorithm expressed in Equation (9) is applicable not only to S(Xd,O), but also to general penalty functions of hand position error norm lei. The proposed learning model synthesizes a direction that decreases S(Xd,O) by summing after weighting AO based on the increase or decrease of S(Xd, 0). The feedback controller defined in Equation (13) requires a number of iterations to find a correct inverse kinematics solution, as the coordinates transformation function of the controller is incomplete. However, by using Kawato's feedback error learning [4], the second feedback controller; the feed-forward controller; or the inverse kinematics model that has a complete coordinate transformation function can be obtained as shown in Section 4. 4 TRACKING CONTROL SYSTEM LEARNING In this section, we will consider the case where Xd changes as xd(k)(k 1,2, ... ). The hybrid controller that includes the learning feed-forward controller .ff(O(k), AXd(k)) E Rm that transforms the change of the desired hand position AXd(k) = xd(k + 1) - xd(k) to the joint angle vector space is considered: AO(k) = .ff(O(k), AXd(k)) + .,b(O(k),e(k)) + d(k) (22) e(k) = xd(k) - x(k) (23) The configuration of the hybrid controller is illustrated in Figure 3. By using the modified change of the square of the error norm expressed as: 1 2 2 tl.S = 2(lxd(k - 1) - x(k)1 - le(k - 1)1 ) (24) and AO(k) as defined in Equation (22), the feedback controller learning rule defined in Equation (9) is useful for the tracking control system. A sample holder for memorizing xd(k -1) is necessary for the calculation of tl.S. When the distribution Coordinate Transformation Learning of Feedback Controller Error Signal for ~:,..-:--::----:-"' Fccdforword Fccdforwonl 4},(8,Lix" (k»)- J*(8)Lh" (k) Controller CoDtrolJer i----r---, :I ~ J(8)J*(8)=1 ./i(8'. 
aj(8) iixd(k) ~ I arrPosition I!m>r + e(k) ~(k) Desired Hand Position Feedback • HWIIIJIArm ~ 8(k) 8(k) X=f(O) Figure 3: Configuration of Hybrid Controller Hand Position x(k) 1043 of 4Xd(k) satisfies Equation (20), Equation (13) still holds. When 4Xd(k) has no correlation with d(k) and 4Xd(k) satisfies p(4XdI8, e) = p( -4XdI8, e), Equation (20) is approximately established after the feed-forward controller learning. Using 48(k) defined in Equation (2) and e(k) defined in Equation (23), tl.S defined in Equation (10) can be useful for the calculation of ~fb(8, e). Although the learning calculation becomes simpler, the learning speed becomes much lower. Let~' ff(8(k), 4Xd(k)) be the desired output of ~,,(8(k), 4Xd(k)). According to Kawato's feedback error learning [4], we use ~',,(8(k), 4Xd(k)) expressed as: ~',,(8(k), 4Xd(k)) = (1 >..)~,,(8(k), 4Xd(k)) + ~fb(8(k + 1), e(k + 1)) (25) where >.. is a small, positive, real number for stabilizing the learning process and ensuring that equation ~,,(8,O) ~ 0 holds. If >.. is small enough, the learning feed-forward controller will fulfill the equation: J~,,(8, 4Xd) ~ 4Xd (26) 5 NUMERICAL SIMULATION Numerical simulation experiments were performed in order to evaluate the performance of the proposed model. The inverse kinematics of a 3 DOF arm moving on a 2 DOF plane were considered. The relationship between the joint angle vector 8 = (81'(h, (3 ) T and the hand position vector x = (x, y) T was defined as: x = Xo + Ll cos(8t} + L2 cos(81 + (2 ) + L3 cos(81 + 82 + (3 ) (27) y = Yo + Ll sin(81) + L2 sin(81 + (2 ) + L3 sin(81 + 82 + (3 ) (28) The range for 81 was (-300 ,1200 ); the range for 82 was (0 0 ,1200 ); and the range for 83 was (_750 ,750 ). Ll was 0.30 m, L2 was 0.25 m and L3 was 0.15 m. Random straight lines were generated as desired trajectories for the hand. The tracking control trials expressed as Equation (22) with the learning of the feedback controller and the feed-forward controller were performed. 
The standard deviation of each component of d was 0.01. Learnings based on Equations (9), (22), (24), and (25) were conducted 20 times in one tracking trial. 1,000 tracking trials were conducted to estimate the RMS(Root Mean Square) of e(k). In order to accelerate the learning, a in Equation (9) was modified as a = 0.5/(Itl.xI2 + 0.11tl.(12). >.. in Equation (25) was set to O.OOL Two neural networks with 4 layers were used for the simulation. The first layer had 5 neurons and the forth layer had 3 neurons. The other layers had 15 neurons each. The first layer and the forth layer consisted of linear neurons. The initial values of weights of the neural networks were generated by using uniform random numbers. The back-propagation method without optimized learning coefficients was utilized for the learning. 1044 E. Oyama and S. Tachi El00~------------------~ y ....... 0.5 ... g w 10.2 +-r--___ --......... __ ___ ............. -r.........f ~ 10°101102103104105106107 CE: Number of Trials o 0.5 x Figure 4: Learning Process of Controller Figure 5: One Example of Tracking Control Figure 4 shows the progress of the proposed learning model. It can be seen that the RMS error decreases and the precision of the solver becomes higher as the number of trials increases. The RMS error became 9.31 x 1O-3m after 2 x 107 learning trials. Figure 5 illustrates the hand position control by the inverse kinematics solver after 2 x 107 learning trials. The number near the end point of the arm indicates the value of k. The center of the small circle in Figure 5 indicates the desired hand position. The center of the large circle indicates the final desired hand position. Through learning, a precise inverse kinematics solver can be obtained. However, for RMS error to fall below 0.02, trials must be repeated more than 106 times. In such cases, more efficient learner or a learning rule is necessary. 
6 CONCLUSION A learning model of coordinate transformation of the hand position feedback controller was proposed in this paper. Although the proposed learning model may take a long time to learn, it is capable of learning a correct inverse kinematics solver without using a forward model, a back-propagation signal, or a pre-existing feedback controller. We believe that the slow learning speed can be improved by using neural networks that have a structure suitable for the coordinate transformation. A major limitation of the proposed model is the structure of the learning rule, since the learning rule requires the calculation of the product of the change of the error penalty function and the change of the joint angle vector. However, the existence of such structure in the nervous system is unknown. An advanced learning model which can be directly compared with the physiological and psychological experimental results is necessary. References [1] T. Ogino and S. Ishii, "Long-term Results after Pollicization for Congenital Hand Deformities," Hand Surgery, 2, 2,pp.79-85,1997 [2] F. H. Guenther and D. M. Barreca," Neural models for flexible control of red undant systems," in P. Morasso and V. Sanguineti (Eds.), Self-organization, Computational Maps, and Motor Control. Amsterdam: Elsevier, pp.383-421,1997 [3J M.1. Jordan, "Supervised Learning and Systems with Excess Degrees of Freedom," COINS Technical Report,88-27,pp.1-41,1988 [4J M. Kawato, K. Furukawa and R. Suzuki, "A Hierarchical Neural-network Model for Control and Learning of Voluntary Movement," Biological Cybernetics, 57, pp.169-185, 1987 [5] D.T. McRuer and H. R. Jex, "A Review of Quasi-Linear Pilot Models," IEEE Trans. on Human Factors in Electronics, HFE-8, 3, pp.38-51, 1963
1998
13
1,486
Exploiting generative models discriminative classifiers • In Tommi S. Jaakkola* MIT Artificial Intelligence Laboratorio 545 Technology Square Cambridge, MA 02139 David Haussler Department of Computer Science University of California Santa Cruz, CA 95064 Abstract Generative probability models such as hidden ~larkov models provide a principled way of treating missing information and dealing with variable length sequences. On the other hand, discriminative methods such as support vector machines enable us to construct flexible decision boundaries and often result in classification performance superior to that of the model based approaches. An ideal classifier should combine these two complementary approaches. In this paper, we develop a natural way of achieving this combination by deriving kernel functions for use in discriminative methods such as support vector machines from generative probability models. We provide a theoretical justification for this combination as well as demonstrate a substantial improvement in the classification performance in the context of D~A and protein sequence analysis. 1 Introduction Speech, vision, text and biosequence data can be difficult to deal with in the context of simple statistical classification problems. Because the examples to be classified are often sequences or arrays of variable size that may have been distorted in particular ways, it is common to estimate a generative model for such data, and then use Bayes rule to obtain a classifier from this model. However. many discriminative methods, which directly estimate a posterior probability for a class label (as in Gaussian process classifiers [5]) or a discriminant function for the class label (as in support vector machines [6]) have in other areas proven to be superior to * Corresponding author. 488 T. S. Jaakkola and D. Haussler generative models for classification problems. 
The problem is that there has been no systematic way to extract features or metric relations between examples for use with discriminative methods in the context of difficult data types such as those listed above. Here we propose a general method for extracting these discriminatory features using a generative model. V{hile the features we propose are generally applicable, they are most naturally suited to kernel methods. 2 Kernel methods Here we provide a brief introduction to kernel methods; see, e.g., [6] [5] for more details. Suppose now that we have a training set of examples Xl and corresponding binary labels 51 (±1) . In kernel methods. as we define them. the label for a new example X is obtained from a weighted sum of the training labels. The weighting of each training label 52 consists of two parts: 1) the overall importance of the example Xl as summarized with a coefficient '\1 and 2) a measure of pairwise "similarity" between between XI and X, expressed in terms of a kernel function K(X2' X). The predicted label S for the new example X is derived from the following rule: s ~ sign ( ~ S, '\,K(X,. X) ) (1) We note that this class of kernel methods also includes probabilistic classifiers, in \vhich case the above rule refers to the label with the maximum probability. The free parameters in the classification rule are the coefficients '\1 and to some degree also the kernel function K . To pin down a particular kernel method. two things need to be clarified. First, we must define a classification loss. or equivalently, the optimization problem to solve to determine appropriate values for the coefficients '\1' Slight variations in the optimization problem can take us from support vector machines to generalized linear models. The second and the more important issue is the choice of the kernel function - the main topic of this paper. \Ve begin with a brief illustration of generalized linear models as kernel methods. 
2.1 Generalized linear models. For concreteness we consider here only logistic regression models, while emphasizing that the ideas are applicable to a larger class of models¹. In logistic regression models, the probability of the label S given the example X and a parameter vector θ is given by² P(S|X, θ) = σ(S θᵀX)   (2), where σ(z) = (1 + e^{−z})^{−1} is the logistic function. To control the complexity of the model when the number of training examples is small, we can assign a prior distribution P(θ) over the parameters. We assume here that the prior is a zero-mean Gaussian with a possibly full covariance matrix Σ. The maximum a posteriori (MAP) estimate for the parameters θ given a training set of examples is found by maximizing the following penalized log-likelihood: Σ_i log P(S_i|X_i, θ) + log P(θ) = Σ_i log P(S_i|X_i, θ) − (1/2) θᵀΣ^{−1}θ + c   (3), where the constant c does not depend on θ. [¹ Specifically, it applies to all generalized linear models whose transfer functions are log-concave. ² Here we assume that the constant +1 is appended to every feature vector X so that an adjustable bias term is included in the inner product θᵀX.] It is straightforward to show, simply by taking the gradient with respect to the parameters, that the solution to this (concave) maximization problem can be written as³ θ = Σ_i λ_i S_i Σ X_i   (4). Note that the coefficients λ_i appear as weights on the training examples, as in the definition of the kernel methods. Indeed, inserting the above solution back into the conditional probability model gives P(S|X, θ) = σ( S Σ_i S_i λ_i X_iᵀΣX )   (5). By identifying K(X_i, X) = X_iᵀΣX and noting that the label with the maximum probability is the one that has the same sign as the sum in the argument, this gives the decision rule (1). Through the above derivation, we have written the primal parameters θ in terms of the dual coefficients λ_i⁴. Consequently,
the penalized log-likelihood function can also be written entirely in terms of λ_i; the resulting likelihood function specifies how the coefficients are to be optimized. This optimization problem has a unique solution and can be put into a generic form. Also, the form of the kernel function that establishes the connection between the logistic regression model and a kernel classifier is rather specific, i.e., has the inner product form K(X_i, X) = X_iᵀΣX. However, as long as the examples here can be replaced with feature vectors derived from the examples, this form of the kernel function is the most general. We discuss this further in the next section. 3 The kernel function. For a general kernel function to be valid, roughly speaking it only needs to be positive semi-definite (see e.g. [7]). According to Mercer's theorem, any such valid kernel function admits a representation as a simple inner product between suitably defined feature vectors, i.e., K(X_i, X_j) = φ_{X_i}·φ_{X_j}, where the feature vectors come from some fixed mapping X → φ_X. For example, in the previous section the kernel function had the form X_iᵀΣX_j, which is a simple inner product for the transformed feature vector φ_X = Σ^{1/2} X. Specifying a simple inner product in the feature space defines a Euclidean metric space. Consequently, the Euclidean distances between the feature vectors are obtained directly from the kernel function: with the shorthand notation K_ij = K(X_i, X_j) we get ||φ_{X_i} − φ_{X_j}||² = K_ii − 2K_ij + K_jj. In addition to defining the metric structure in the feature space, the kernel defines a pseudo-metric in the original example space through D(X_i, X_j) = ||φ_{X_i} − φ_{X_j}||. [³ This corresponds to a Legendre transformation of the loss functions log σ(z). ⁴ This is possible for all those θ that could arise as solutions to the maximum penalized likelihood problem; in other words, for all relevant θ.]
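The identity ||φ_{X_i} − φ_{X_j}||² = K_ii − 2K_ij + K_jj can be checked numerically. The snippet below is a small sanity check, assuming a plain linear kernel (so that φ_X = X and the kernel distance must equal the ordinary Euclidean distance); the data is arbitrary.

```python
# Check that the kernel-induced pseudo-metric
#   D(x_i, x_j)^2 = K_ii - 2*K_ij + K_jj
# matches the Euclidean feature-space distance for a linear kernel K(a,b) = a.b.
import numpy as np

def kernel_distance(K, i, j):
    # Distance between examples i and j read off the kernel matrix alone
    return np.sqrt(K[i, i] - 2 * K[i, j] + K[j, j])

X = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 4.0]])
K = X @ X.T  # linear kernel: phi(x) = x

print(kernel_distance(K, 0, 1), np.linalg.norm(X[0] - X[1]))
```

The same computation applies unchanged to any valid kernel, where φ_X may be implicit and high-dimensional.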
Thus the kernel embodies prior assumptions about the metric relations between the original examples. No systematic procedure has been proposed for finding kernel functions, let alone finding ones that naturally handle variable-length examples etc. This is the topic of the next section. 4 Kernels from generative probability models: the Fisher kernel. The key idea here is to derive the kernel function from a generative probability model. We arrive at the same kernel function from two different perspectives, that of enhancing the discriminative power of the model and from an attempt to find a natural comparison between examples induced by the generative model. Both of these ideas are developed in more detail in the longer version of this paper [4]. We have seen in the previous section that defining the kernel function automatically implies assumptions about metric relations between the examples. We argue that these metric relations should be defined directly from a generative probability model P(X|θ). To capture the generative process in a metric between examples we use the gradient space of the generative model. The gradient of the log-likelihood with respect to a parameter describes how that parameter contributes to the process of generating a particular example⁵. This gradient space also naturally preserves all the structural assumptions that the model encodes about the generation process. To develop this idea more generally, consider a parametric class of models P(X|θ), θ ∈ Θ. This class of probability models defines a Riemannian manifold M_Θ with a local metric given by the Fisher information matrix⁶ I, where I = E_X{U_X U_Xᵀ}, U_X = ∇_θ log P(X|θ), and the expectation is over P(X|θ) (see e.g. [1]). The gradient of the log-likelihood, U_X, is called the Fisher score, and plays a fundamental role in our development. The local metric on M_Θ defines a distance between the current model P(X|θ) and a nearby model P(X|θ+δ).
This distance is given by D(θ, θ+δ) = (1/2) δᵀIδ, which also approximates the KL-divergence between the two models for a sufficiently small δ. The Fisher score U_X = ∇_θ log P(X|θ) maps an example X into a feature vector that is a point in the gradient space of the manifold M_Θ. We call this the Fisher score mapping. This gradient U_X can be used to define the direction of steepest ascent in log P(X|θ) for the example X along the manifold, i.e., the gradient in the direction δ that maximizes log P(X|θ) while traversing the minimum distance in the manifold as defined by D(θ, θ+δ). This latter gradient is known as the natural gradient (see e.g. [1]) and is obtained from the ordinary gradient via φ_X = I^{−1} U_X. We will call the mapping X → φ_X the natural mapping of examples into feature vectors⁷. [⁵ For the exponential family of distributions, under the natural parameterization θ, these gradients, less a normalization constant that depends on θ, form sufficient statistics for the example. ⁶ For simplicity we have suppressed the dependence of I and U_X on the parameter setting θ, or equivalently, on the position in the manifold. ⁷ Again, we have suppressed dependence on the parameter setting θ here.] The natural kernel of this mapping is the inner product between these feature vectors relative to the local Riemannian metric: K(X_i, X_j) = φ_{X_i}ᵀ I φ_{X_j} = U_{X_i}ᵀ I^{−1} U_{X_j}   (6). We call this the Fisher kernel owing to the fundamental role played by the Fisher scores in its definition. The role of the information matrix is less significant; indeed, in the context of logistic regression models, the matrix appearing in the middle of the feature vectors relates to the covariance matrix of a Gaussian prior, as shown above. Thus, asymptotically, the information matrix is immaterial, and the simpler kernel K_U(X_i, X_j) ∝ U_{X_i}ᵀ U_{X_j} provides a suitable substitute for the Fisher kernel.
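As a concrete instance of Eq. (6), consider the simplest generative model imaginable, a 1-D Gaussian N(θ, σ²) with unknown mean θ. This model is our illustrative choice, not one used in the paper; for it, the Fisher score and information have closed forms, so the Fisher kernel can be written out directly.

```python
# Fisher kernel K(x_i, x_j) = U_{x_i} * I^{-1} * U_{x_j} for a 1-D Gaussian
# with unknown mean theta (an illustrative model, not the paper's):
#   Fisher score:        U_x = d/dtheta log N(x | theta, sigma2) = (x - theta)/sigma2
#   Fisher information:  I   = 1/sigma2

def fisher_score(x, theta, sigma2):
    return (x - theta) / sigma2

def fisher_kernel(xi, xj, theta, sigma2):
    I = 1.0 / sigma2
    return fisher_score(xi, theta, sigma2) * (1.0 / I) * fisher_score(xj, theta, sigma2)

# Scores are (2-1)/4 = 0.25 and (3-1)/4 = 0.5; I^{-1} = 4; kernel = 0.5
print(fisher_kernel(2.0, 3.0, theta=1.0, sigma2=4.0))
```

For richer models (HMMs, mixtures) U_X is a vector over all parameters and I a matrix, but the construction is identical.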
We emphasize that the Fisher kernel defined above provides only the basic comparison between the examples, defining what is meant by an "inner product" between the examples when the examples are objects of various types (e.g. variable-length sequences). The way such a kernel function is used in a discriminative classifier is not specified here. Using the Fisher kernel directly in a kernel classifier, for example, amounts to finding a linear separating hyperplane in the natural gradient (or Fisher score) feature space. The examples may not be linearly separable in this feature space even though the natural metric structure is given by the Fisher kernel. It may be advantageous to search in the space of quadratic (or higher order) decision boundaries, which is equivalent to transforming the Fisher kernel according to K̃(X_i, X_j) = (1 + K(X_i, X_j))^m and using the resulting kernel K̃ in the classifier. We are now ready to state a few properties of the Fisher kernel function. So long as the probability model P(X|θ) is suitably regular, the Fisher kernel derived from it is a) a valid kernel function and b) invariant to any invertible (and differentiable) transformation of the model parameters. The rather informally stated theorem below motivates the use of this kernel function in a classification setting. Theorem 1. Given any suitably regular probability model P(X|θ) with parameters θ and assuming that the classification label is included as a latent variable, the Fisher kernel K(X_i, X_j) = U_{X_i}ᵀ I^{−1} U_{X_j} derived from this model and employed in a kernel classifier is, asymptotically, never inferior to the MAP decision rule from this model. The proofs and other related theorems are presented in the longer version of this paper [4]. To summarize, we have defined a generic procedure for obtaining kernel functions from generative probability models.
Consequently, the benefits of generative models are immediately available to the discriminative classifier employing this kernel function. We now turn to the experimental demonstration of the effectiveness of such a combined classifier. 5 Experimental results. Here we consider two relevant examples from biosequence analysis and compare the performance of the combined classifier to the best generative models used in these problems. We start with a DNA splice site classification problem, where the objective is to recognize true splice sites, i.e., the boundaries between expressed regions (exons) in a gene and the intermediate regions (introns). The data set used in our experiments consisted of 9350 DNA fragments from C. elegans. Each of the 2029 true examples is a sequence X over the DNA alphabet {A, G, T, C} of length 25; the 7321 false examples are similar sequences that occur near but not at 5' splice sites. All recognition rates we report on this data set are averages from 7-fold cross-validation. To use the combined classifier in this setting requires us to choose a generative model for the purpose of deriving the kernel function. In order to test how much the performance of the combined classifier depends on the quality of the underlying generative model, we chose the poorest model possible. This is the model where the DNA residue in each position in the fragment is chosen independently of others, i.e., P(X|θ) = ∏_{i=1}^{25} P(X_i|θ_i), and, furthermore, the parameters θ_i are set such that P(X_i|θ_i) = 1/4 for all i and all X_i ∈ {A, G, T, C}. This model assigns the same probability to all examples X. We can still derive the Fisher kernel from such a model and use it in a discriminative classifier. In this case we used a logistic regression model as in (5) with a quadratic Fisher kernel K̃(X_i, X_j) = (1 + K(X_i, X_j))².
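It is instructive to see what the Fisher score of this deliberately uninformative model looks like. Under one simple parameterization (our assumption for illustration; other parameterizations rescale the features), the per-position multinomial score ∂ log θ_{i,X_i} / ∂θ_{i,c} = 1[X_i = c]/θ_{i,c} reduces, at θ = 1/4, to a scaled one-hot encoding, so the simple kernel K_U essentially counts matching residues between two sequences:

```python
# Fisher score of the uniform independent-residue model under a plain
# per-position multinomial parameterization (an assumed choice):
#   d/dtheta_{i,c} log prod_i theta_{i, X_i} = 1[X_i = c] / theta_{i,c}
# With theta_{i,c} = 1/4, the score is 4 * one-hot(X), so
# K_U(X, X') = 16 * (number of matching positions).
import numpy as np

ALPHABET = "AGTC"

def fisher_score_uniform(seq):
    U = np.zeros((len(seq), len(ALPHABET)))
    for i, ch in enumerate(seq):
        U[i, ALPHABET.index(ch)] = 4.0  # 1 / theta = 4
    return U.ravel()

def k_u(s1, s2):
    return fisher_score_uniform(s1) @ fisher_score_uniform(s2)

print(k_u("AGTC", "AGTA") / 16)  # 3 matching positions
```

Even this crude similarity measure, pushed through the quadratic kernel and a discriminative learner, is what produces the curves discussed next.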
Figure 1 shows the recognition performance of this kernel method, using the poor generative model, in comparison to the recognition performance of a naive Bayes model or a hierarchical mixture model. The comparison is summarized in ROC-style curves plotting false positive errors (the errors of accepting false examples) as a function of false negative errors (the errors of missing true examples) when we vary the classification bias for the labels. The curves show that even with such a poor underlying generative model, the combined classifier is consistently better than either of the better generative models alone. In the second and more serious application of the combined classifier, we consider the well-known problem of recognizing remote homologies (evolutionary/structural similarities) between protein sequences⁸ that have low residue identity. Considerable recent work has been done in refining hidden Markov models for this purpose, as reviewed in [2], and such models currently achieve the best performance. We use these state-of-the-art HMMs as comparison cases and also as sources for deriving the kernel function. Here we used logistic regression with the simple kernel K_U(X_i, X_j), as the number of parameters in the HMMs was several thousand. The experiment was set up as follows. We picked a particular superfamily (glycosyltransferases) from the TIM-barrel fold in the SCOP protein structure classification [3], and left out one of the four major families in this superfamily for testing while training the HMM as well as the combined classifier on sequences corresponding to the remaining three families. The false training examples for the discriminative method came from those sequences in the same fold but not in the same superfamily. The test sequences consisted of the left-out family (true examples) and proteins outside the TIM-barrel fold (false examples). The number of training examples varied around 100 depending on the left-out family.
As the sequences among the four glycosyltransferase families are extremely different, this is a challenging discrimination problem. Figure 1c shows the recognition performance curves for the HMM and the corresponding kernel method, averaged over the four-way cross-validation. The combined classifier yields a substantial improvement in performance over the HMM alone. ⁸ These are variable-length sequences, thus rendering many discriminative methods inapplicable. Figure 1 (plots; axes: false negative rate vs. false positive rate): a) & b) Comparison of classification performance between a kernel classifier from the uniform model (solid line) and a mixture model (dashed line). In a) the mixture model is a naive Bayes model and in b) it has three components in each class. c) Comparison of homology recognition performance between a hidden Markov model (dashed line) and the corresponding kernel classifier (solid line). 6 Discussion. The model-based kernel function derived in this paper provides a generic mechanism for incorporating generative models into discriminative classifiers. For discrimination, the resulting combined classifier is guaranteed to be superior to the generative model alone with little additional computational cost. We note that the power of the new classifier arises to a large extent from the use of Fisher scores as features in place of original examples. It is possible to use these features with any classifier, e.g. a feed-forward neural net, but kernel methods are most naturally suited for incorporating them. Finally, we note that while we have used classification to guide the development of the kernel function, the results are directly applicable to regression, clustering,
or even interpolation problems, all of which can easily exploit metric relations among the examples defined by the Fisher kernel. References [1] S.-I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10:251-276, 1998. [2] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1998. [3] T. Hubbard, A. Murzin, S. Brenner, and C. Chothia. SCOP: a structural classification of proteins database. NAR, 25(1):236-9, Jan. 1997. [4] T. S. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. 1998. Revised and extended version. Will be available from http://www.ai.mit.edu/~tommi. [5] D. J. C. MacKay. Introduction to Gaussian processes. 1997. Available from http://wol.ra.phy.cam.ac.uk/mackay/. [6] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995. [7] G. Wahba. Spline Models for Observational Data. CBMS-NSF Regional Conference Series in Applied Mathematics, 1990.
1998
130
1,487
Learning a Continuous Hidden Variable Model for Binary Data. Daniel D. Lee, Bell Laboratories, Lucent Technologies, Murray Hill, NJ 07974, ddlee@bell-labs.com. Haim Sompolinsky, Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem, 91904, Israel, haim@fiz.huji.ac.il. Abstract: A directed generative model for binary data using a small number of hidden continuous units is investigated. A clipping nonlinearity distinguishes the model from conventional principal components analysis. The relationships between the correlations of the underlying continuous Gaussian variables and the binary output variables are utilized to learn the appropriate weights of the network. The advantages of this approach are illustrated on a translationally invariant binary distribution and on handwritten digit images. Introduction. Principal Components Analysis (PCA) is a widely used statistical technique for representing data with a large number of variables [1]. It is based upon the assumption that although the data is embedded in a high-dimensional vector space, most of the variability in the data is captured by a much lower dimensional manifold. In particular for PCA, this manifold is described by a linear hyperplane whose characteristic directions are given by the eigenvectors of the correlation matrix with the largest eigenvalues. The success of PCA and closely related techniques such as Factor Analysis (FA) and PCA mixtures clearly indicates that much real-world data exhibit the low-dimensional manifold structure assumed by these models [2, 3]. However, the linear manifold structure of PCA is not appropriate for data with binary-valued variables. Binary values commonly occur in data such as computer bit streams, black-and-white images, on-off outputs of feature detectors, and electrophysiological spike train data [4].
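The PCA procedure described above can be sketched in a few lines: form the correlation matrix of the data and read off the eigenvectors with the largest eigenvalues. The toy data below (a 2-D latent structure embedded in 5 dimensions, with a small amount of noise) is our own illustrative construction.

```python
# Minimal PCA as described above: the characteristic directions are the
# eigenvectors of the data correlation matrix with the largest eigenvalues.
# The toy data is an assumed construction: rank-2 structure in 5 dimensions.
import numpy as np

rng = np.random.default_rng(2)
mixing = rng.standard_normal((2, 5))                 # 2-D latent -> 5-D data
Z = rng.standard_normal((10_000, 2)) @ mixing
Z += 0.01 * rng.standard_normal(Z.shape)             # small isotropic noise

C = (Z.T @ Z) / len(Z)                               # correlation (second-moment) matrix
evals, evecs = np.linalg.eigh(C)                     # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]           # sort descending

print(evals)  # the first two eigenvalues carry essentially all the variance
```

The eigenvectors paired with the dominant eigenvalues span the (here two-dimensional) linear manifold.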
The Boltzmann machine is a neural network model that incorporates hidden binary spin variables, and in principle, it should be able to model binary data with arbitrary spin correlations [5]. Unfortunately, the computational time needed for training a Boltzmann machine renders it impractical for most applications. 516 D. D. Lee and H. Sompolinsky. Figure 1: Generative model for N-dimensional binary data using a small number P of continuous hidden variables. In these proceedings, we present a model that uses a small number of continuous hidden variables rather than hidden binary variables to capture the variability of binary-valued visible data. The generative model differs from conventional PCA because it incorporates a clipping nonlinearity. The resulting spin configurations have an entropy related to the number of hidden variables used, and the resulting states are connected by small numbers of spin flips. The learning algorithm is particularly simple, and is related to PCA by a scalar transformation of the correlation matrix. Generative Model. Figure 1 shows a schematic diagram of the generative process. As in PCA, the model assumes that the data is generated by a small number P of continuous hidden variables y_i. Each of the hidden variables is assumed to be drawn independently from a normal distribution with unit variance: P(y_i) = exp(−y_i²/2)/√(2π).   (1) The continuous hidden variables are combined using the feedforward weights W_ij, and the N binary output units are then calculated using the sign of the feedforward activations: x_i = Σ_{j=1}^{P} W_ij y_j,   (2)  s_i = sgn(x_i).   (3) Since binary data is commonly obtained by thresholding, it seems reasonable that a proper generative model should incorporate such a clipping nonlinearity. The generative process is similar to that of a sigmoidal belief network with continuous hidden units at zero temperature.
The nonlinearity will alter the relationship between the correlations of the binary variables and the weight matrix W as described below. The real-valued Gaussian variables x_i are exactly analogous to the visible variables of conventional PCA. They lie on a linear hyperplane determined by the span of the matrix W, and their correlation matrix is given by: C^XX = ⟨x xᵀ⟩ = W Wᵀ.   (4) Figure 2: Binary spin configurations s_i in the vector space of continuous hidden variables y_j with P = 2 and N = 3. By construction, the correlation matrix C^XX has rank P, which is much smaller than the number of components N. Now consider the binary output variables s_i = sgn(x_i). Their correlations can be calculated from the probability distribution of the Gaussian variables x_i: (C^SS)_ij = ⟨s_i s_j⟩ = ∫ ∏_k dy_k P(y_k) sgn(x_i) sgn(x_j),   (5-6) with x_i given by Equation 2. The integrals in Equation 5 can be done analytically, and yield the surprisingly simple result: (C^SS)_ij = (2/π) sin⁻¹[ C^XX_ij / √(C^XX_ii C^XX_jj) ].   (7) Thus, the correlations of the clipped binary variables C^SS are related to the correlations of the corresponding Gaussian variables C^XX through the nonlinear arcsine function. The normalization in the denominator of the arcsine argument reflects the fact that the sign function is unchanged by a scale change in the Gaussian variables. Although the correlation matrix C^SS and the generating correlation matrix C^XX are easily related through Equation 7, they have qualitatively very different properties. In general, the correlation matrix C^SS will no longer have the low rank structure of C^XX.
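The arcsine relation (7) is easy to verify by Monte Carlo: sample the generative model (1)-(3) and compare the empirical binary correlations to the prediction. The particular weight matrix below (N = 3 outputs, P = 2 hidden units) is an arbitrary choice for illustration.

```python
# Monte-Carlo check of Eq. (7): for s = sgn(W y) with standard-normal y,
#   C^SS_ij = (2/pi) * arcsin( C^XX_ij / sqrt(C^XX_ii * C^XX_jj) ),
# where C^XX = W W^T.  W is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[1.0, 0.2], [0.5, 1.0], [-0.3, 0.8]])  # N=3, P=2

Y = rng.standard_normal((200_000, 2))   # hidden variables, Eq. (1)
S = np.sign(Y @ W.T)                    # Eqs. (2)-(3)

Css_emp = (S.T @ S) / len(S)            # empirical binary correlations
Cxx = W @ W.T                           # Eq. (4)
d = np.sqrt(np.diag(Cxx))
Css_pred = (2 / np.pi) * np.arcsin(Cxx / np.outer(d, d))  # Eq. (7)

print(np.max(np.abs(Css_emp - Css_pred)))  # small sampling error
```

The residual shrinks as 1/√(number of samples), as expected for a Monte Carlo estimate.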
As illustrated by the translationally invariant example in the next section, the spectrum of C^SS may contain a whole continuum of eigenvalues even though C^XX has only a few nonzero eigenvalues. PCA is typically used for dimensionality reduction of real variables; can this model be used for compressing the binary outputs s_i? Although the output correlations C^SS no longer display the low rank structure of the generating C^XX, a more appropriate measure of data compression is the entropy of the binary output states. Consider how many of the 2^N possible binary states will be generated by the clipping process. The equation x_i = Σ_j W_ij y_j = 0 defines a (P−1)-dimensional hyperplane in the P-dimensional state space of hidden variables y_j, which are shown as dashed lines in Figure 2. These hyperplanes partition the half-space where s_i = +1 from the region where s_i = −1. Figure 3: Translationally invariant binary spin distribution with N = 256 units. Representative samples from the distribution are illustrated on the left, while the eigenvalue spectra of C^SS and C^XX are plotted on the right. Each of the N spin variables will have such a dividing hyperplane in this P-dimensional state space, and all of these hyperplanes will generically be unique. Thus, the total number of spin configurations s_i is determined by the number of cells bounded by N dividing hyperplanes in P dimensions. The number of such cells is approximately N^P for N ≫ P, a well-known result from perceptrons [6]. To leading order for large N, the entropy of the binary states generated by this process is then given by S = P log N. Thus, the entropy of the spin configurations generated by this model is directly proportional to the number of hidden variables P.
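The cell-counting argument can be checked exactly in the P = 2 case, where Cover's formula gives 2·Σ_{k<P} C(N−1, k) = 2N cells for N hyperplanes through the origin in general position. The snippet below enumerates the distinct sign patterns sgn(Wy) by sampling one point per arc of the unit circle; the random W is an arbitrary illustrative choice.

```python
# Count the distinct binary states sgn(W y) for P = 2 hidden variables and
# compare with Cover's cell count 2 * sum_{k<P} C(N-1, k) (= 2N for P = 2).
# W is an arbitrary random choice, in general position with probability 1.
import numpy as np
from math import comb, atan2, pi

rng = np.random.default_rng(3)
N, P = 10, 2
W = rng.standard_normal((N, P))

# Each weight row w defines a boundary line w . y = 0, which crosses the
# unit circle at two angles; the arcs between boundaries are the cells.
bounds = []
for a, b in W:
    t = atan2(a, -b) % (2 * pi)      # (cos t, sin t) is orthogonal to (a, b)
    bounds += [t, (t + pi) % (2 * pi)]
bounds = np.sort(bounds)

# One sample point in the middle of each arc -> one point per cell
gaps = np.diff(np.append(bounds, bounds[0] + 2 * pi))
mids = bounds + gaps / 2
Y = np.stack([np.cos(mids), np.sin(mids)], axis=1)
patterns = {tuple(row) for row in np.sign(Y @ W.T).astype(int)}

cover_count = 2 * sum(comb(N - 1, k) for k in range(P))
print(len(patterns), cover_count)
```

For N = 10 both counts are 20, far fewer than the 2^10 = 1024 conceivable states: the entropy of the clipped model grows only logarithmically in N, as stated above.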
How is the topology of the binary spin configurations s_i related to the PCA manifold structure of the continuous variables x_i? Each of the generated spin states is represented by a polytope cell in the P-dimensional vector space of hidden variables. Each polytope has at least P + 1 neighboring polytopes which are related to it by a single or small number of spin flips. Therefore, although the state space of binary spin configurations is discrete, the continuous manifold structure of the underlying Gaussian variables in this model is manifested as binary output configurations with low entropy that are connected with small Hamming distances. Translationally Invariant Example. In principle, the weights W could be learned by applying maximum likelihood to this generative model; however, the resulting learning algorithm involves analytically intractable multi-dimensional integrals. Alternatively, approximations based upon mean field theory or importance sampling could be used to learn the appropriate parameters [7]. However, Equation 7 suggests a simple learning rule that is also approximate, but is much more computationally efficient [8]. First, the binary correlation matrix C^SS is computed from the data. Then the empirical C^SS is mapped into the appropriate Gaussian correlation matrix using the nonlinear transformation: C^XX = sin(πC^SS/2). This results in a Gaussian correlation matrix where the variances of the individual x_i are fixed at unity. The weights W are then calculated using the conventional PCA algorithm. The correlation matrix C^XX is diagonalized, and the eigenvectors with the largest eigenvalues are used to form the columns of W to yield the best low rank approximation C^XX ≈ WWᵀ. Scaling the variables x_i will result in a correlation matrix C^XX with slightly different eigenvalues but with the same rank. The utility of this transformation is illustrated by the following simple example.
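The learning rule just described can be sketched directly: estimate C^SS from the ±1 data, apply the elementwise sine transformation, diagonalize, and keep the top-P eigendirections (scaled by the square roots of their eigenvalues so that WWᵀ approximates C^XX). The synthetic data generation below is our own illustrative setup.

```python
# Sketch of the learning rule: C^SS from data, C^XX = sin(pi * C^SS / 2),
# then conventional PCA on C^XX to recover the weights W.
import numpy as np

def learn_weights(S, P):
    """S: (samples, N) array of +/-1 data. Returns W of shape (N, P)."""
    Css = (S.T @ S) / len(S)                  # empirical binary correlations
    Cxx = np.sin(np.pi * Css / 2)             # scalar map to Gaussian correlations
    evals, evecs = np.linalg.eigh(Cxx)        # ascending order
    top = np.argsort(evals)[::-1][:P]         # indices of the P largest eigenvalues
    return evecs[:, top] * np.sqrt(np.maximum(evals[top], 0))

# Illustrative synthetic data drawn from the clipped generative model itself
rng = np.random.default_rng(1)
W_true = rng.standard_normal((8, 2))
S = np.sign(rng.standard_normal((50_000, 2)) @ W_true.T)

W = learn_weights(S, P=2)
print(W.shape)
```

Note that, per the text, the recovered W reproduces the unit-variance (normalized) version of the generating correlations, since the sign function erases the scale of each x_i.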
Consider the distribution of N = 256 binary spins shown in Figure 3. Half of the spins are chosen to be positive, and the location of the positive bump is arbitrary under the periodic boundary conditions. Since the distribution is translationally invariant, the correlations C^SS_ij depend only on the relative distance between spins |i − j|. The eigenvectors are the Fourier modes, and their eigenvalues correspond to their overlap with a triangle wave. The eigenvalue spectrum of C^SS is plotted in Figure 3 as sorted by rank. In this particular case, the correlation matrix C^SS has N/2 positive eigenvalues with a corresponding range of values. Now consider the matrix C^XX = sin(πC^SS/2). The eigenvalues of C^XX are also shown in Figure 3. In contrast to the many different eigenvalues of C^SS, the spectrum of the Gaussian correlation matrix C^XX has only two positive eigenvalues, with all the rest exactly equal to zero. The corresponding eigenvectors are a cosine and a sine function. The generative process can thus be understood as a linear combination of the two eigenmodes to yield a sine function with arbitrary phase. This function is then clipped to yield the positive bump seen in the original binary distribution. In comparison with the eigenvalues of C^SS, the eigenvalue spectrum of C^XX makes obvious the low rank structure of the generative process. In this case, the original binary distribution can be constructed using only P = 2 hidden variables, whereas it is not clear from the eigenvalues of C^SS what the appropriate number of modes is. This illustrates the utility of determining the principal components from the calculated Gaussian correlation matrix C^XX rather than working directly with the observable binary correlation matrix C^SS. Handwritten Digits Example. This model was also applied to a more complex data set. A large set of 16 x 16 black-and-white images of handwritten twos was taken from the US Post Office digit database [9].
The pixel means and pixel correlations were directly computed from the images. The generative model needs to be slightly modified to account for the non-zero means in the binary outputs. This is accomplished by adding fixed biases ξ_i to the Gaussian variables x_i before clipping: s_i = sgn(ξ_i + x_i).   (8) The biases ξ_i can be related to the means of the binary outputs through the expression: ξ_i = √(2 C^XX_ii) erf⁻¹(⟨s_i⟩).   (9) This allows the biases to be directly computed from the observed means of the binary variables. Unfortunately, with non-zero biases, the relationship between the Gaussian correlations C^XX and binary correlations C^SS is no longer the simple expression found in Equation 7. Instead, the correlations are related by the integral equation: (C^SS)_ij = ⟨sgn(ξ_i + x_i) sgn(ξ_j + x_j)⟩,   (10) where the average is over the joint Gaussian distribution of x_i and x_j. Given the empirical pixel correlations C^SS for the handwritten digits, the integral in Equation 10 is numerically solved for each pair of indices to yield the appropriate Gaussian correlation matrix C^XX. The correlation matrices are diagonalized and the resulting eigenvalue spectra are shown in Figure 4. Figure 4: Eigenvalue spectra of C^SS and C^XX for handwritten images of twos. The inset shows the P = 16 most significant eigenvectors of C^XX arranged by rows. The right side of the figure shows a nonlinear morph between two different instances of a handwritten two using these eigenvectors. The eigenvalues of C^XX again exhibit a characteristic drop that is steeper than the falloff in the spectrum of the binary correlations C^SS. The corresponding eigenvectors of C^XX with the 16 largest positive eigenvalues are depicted in the inset of Figure 4.
These eigenmodes represent common image distortions such as rotations and stretching and appear qualitatively similar to those found by the standard PCA algorithm. A generative model with weights W corresponding to the P = 16 eigenvectors shown in Figure 4 is used to fit the handwritten twos, and the utility of this nonlinear generative model is illustrated in the right side of Figure 4. The top and bottom images in the figure are two different examples of a handwritten two from the data set, and the generative model is used to morph between the two examples. The hidden values y_i for the original images are first determined for the different examples, and the intermediate images in the morph are constructed by linearly interpolating in the vector space of the hidden units. Because of the clipping nonlinearity, this induces a nonlinear mapping in the outputs, with binary units being flipped in a particular order as determined by the generative model. In contrast, morphing using conventional PCA would result in a simple linear interpolation between the two images, and the intermediate images would not look anything like the original binary distribution [10]. The correlation matrix C^XX also happens to contain some small negative eigenvalues. Even though the binary correlation matrix C^SS is positive definite, the transformation in Equation 10 does not guarantee that the resulting matrix C^XX will also be positive definite. The presence of these negative eigenvalues indicates a shortcoming of the generative process for modelling this data. In particular, the clipped Gaussian model is unable to capture correlations induced by global constraints in the data. As a simple illustration of this shortcoming in the generative model, consider the binary distribution defined by the probability density: P({s}) ∝ lim_{β→∞} exp(−β Σ_{ij} s_i s_j).
The states in this distribution are defined by the constraint that the sum of the binary variables is exactly zero: Σ_i s_i = 0. Now, for N ≥ 4, it can be shown that it is impossible to find a Gaussian distribution whose visible binary variables match the negative correlations induced by this sum constraint. These examples illustrate the value of using the clipped generative model to learn the correlation matrix of the underlying Gaussian variables rather than using the correlations of the outputs directly. The clipping nonlinearity is convenient because the relationship between the hidden variables and the output variables is particularly easy to understand. The learning algorithm differs from other nonlinear PCA models and autoencoders because the inverse mapping function need not be explicitly learned [11, 12]. Instead, the correlation matrix is directly transformed from the observable variables to the underlying Gaussian variables. The correlation matrix is then diagonalized to determine the appropriate feedforward weights. This results in an extremely efficient training procedure that is directly analogous to PCA for continuous variables. We acknowledge the support of Bell Laboratories, Lucent Technologies, and the US-Israel Binational Science Foundation. We also thank H. S. Seung for helpful discussions. References [1] Jolliffe, IT (1986). Principal Component Analysis. New York: Springer-Verlag. [2] Bartholomew, DJ (1987). Latent Variable Models and Factor Analysis. London: Charles Griffin & Co. Ltd. [3] Hinton, GE, Dayan, P & Revow, M (1996). Modeling the manifolds of images of handwritten digits. IEEE Transactions on Neural Networks 8, 65-74. [4] Van Vreeswijk, C, Sompolinsky, H, & Abeles, M (1999). Nonlinear statistics of spike trains. In preparation. [5] Ackley, DH, Hinton, GE, & Sejnowski, TJ (1985). A learning algorithm for Boltzmann machines. Cognitive Science 9, 147-169. [6] Cover, TM (1965).
Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Electronic Comput. 14, 326-334. [7] Tipping, M.E. (1999). Probabilistic visualisation of high-dimensional binary data. Advances in Neural Information Processing Systems 11. [8] Christoffersson, A. (1975). Factor analysis of dichotomized variables. Psychometrika 40, 5-32. [9] LeCun, Y., et al. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation 1, 541-551. [10] Bregler, C., & Omohundro, S.M. (1995). Nonlinear image interpolation using manifold learning. Advances in Neural Information Processing Systems 7, 973-980. [11] Hastie, T., & Stuetzle, W. (1989). Principal curves. Journal of the American Statistical Association 84, 502-516. [12] Demers, D., & Cottrell, G. (1993). Nonlinear dimensionality reduction. Advances in Neural Information Processing Systems 5, 580-587.
Perceiving without Learning: from Spirals to Inside/Outside Relations Ke Chen* and DeLiang L. Wang Department of Computer and Information Science and Center for Cognitive Science The Ohio State University, Columbus, OH 43210-1277, USA {kchen,dwang}@cis.ohio-state.edu Abstract As a benchmark task, the spiral problem is well known in neural networks. Unlike previous work that emphasizes learning, we approach the problem from a generic perspective that does not involve learning. We point out that the spiral problem is intrinsically connected to the inside/outside problem. A generic solution to both problems is proposed based on oscillatory correlation using a time delay network. Our simulation results are qualitatively consistent with human performance, and we interpret human limitations in terms of synchrony and time delays, both biologically plausible. As a special case, our network without time delays can always distinguish these figures regardless of shape, position, size, and orientation. 1 INTRODUCTION The spiral problem refers to distinguishing between a connected single spiral and disconnected double spirals, as illustrated in Fig. 1. Since Minsky and Papert (1969) first introduced the problem in their influential book on perceptrons, it has received much attention and has become a benchmark task in neural networks. Many solutions have been attempted using different learning models since Lang and Witbrock (1988) reported that the problem could not be solved with a standard multilayer perceptron. However, the resulting learning systems are only able to produce decision regions highly constrained by the spirals defined in a training set, and thus specific in shape, position, size, and orientation. Moreover, no explanation is provided as to why the problem is difficult for human subjects to solve.
Grossberg and Wyse (1991) proposed a biologically plausible neural network architecture for figure-ground separation and reported that their network can distinguish between connected and disconnected spirals. In their paper, however, no demonstration was given for the spiral problem, and their model does not exhibit the limitations that humans do. * Also with National Laboratory of Machine Perception and Center for Information Science, Peking University, Beijing 100871, China. E-mail: chen@cis.pku.edu.cn There is a related problem in the study of visual perception, i.e., the perception of inside/outside relations. Considering the visual input of a single closed curve, the task of perceiving the inside/outside relation is to determine whether a specific pixel lies inside or outside the closed curve. For the human visual system, the perception of inside/outside relations often appears to be immediate and effortless (see an example in Fig. 2(a)). As illustrated in Fig. 2(b), however, the immediate perception is not available for humans when the bounding contour becomes highly convoluted (Ullman 1984). Ullman (1984) suggested the computation of spatial relations through the use of visual routines, which lead to the conjecture that the perception of inside/outside relations is inherently sequential. As pointed out recently by Ullman (1996), the processes underlying the perception of inside/outside relations are as yet unknown, and applying visual routines is simply one alternative. Fig. 1: The spiral problem. (a) A connected single spiral. (b) Disconnected double spirals (adapted from Minsky and Papert 1969, 1988). Fig. 2: Inside/outside relations. (a) An example (adapted from Julesz 1995). (b) Another example (adapted from Ullman 1984). Theoretical investigations of brain functions indicate that the timing of neuronal activity is a key to the construction of neuronal assemblies (Milner 1974, von der Malsburg 1981).
In particular, the discovery of synchronous oscillations in the visual cortex (Singer & Gray 1995) has triggered much interest in developing computational models for oscillatory correlation. Recently, Terman and Wang (1995) proposed locally excitatory globally inhibitory oscillator networks (LEGION). They theoretically showed that LEGION can rapidly achieve both synchronization in a locally coupled oscillator group representing each object and desynchronization among a number of oscillator groups representing different objects. More recently, Campbell and Wang (1998) have studied time delays in networks of relaxation oscillators and analyzed the behavior of LEGION with time delays. Their studies show that loosely synchronous solutions can be achieved under a broad range of initial conditions and time delays. Therefore, LEGION provides a computational framework to study the process of visual perception from the standpoint of oscillatory correlation. We explore both the spiral problem and inside/outside relations by oscillatory correlation in this paper. We show that computation through LEGION with time delays yields a generic solution to these problems, since time delays inevitably occur in the information transmission of a biological system. This investigation indicates that perceptual performance would be limited if local activation cannot be rapidly propagated due to time delays. As a special case, LEGION without time delays reliably distinguishes between connected and disconnected spirals and discriminates the inside and the outside regardless of shape, position, size, and orientation. Thus, we suggest that this kind of problem may be better solved by a neural oscillator network than by sophisticated learning. 2 METHODOLOGY The architecture of LEGION used in this paper is a two-dimensional network.
Each oscillator is connected to its four nearest neighbors, and the global inhibitor (GI) receives excitation from each oscillator on the network and in turn inhibits each oscillator (Terman & Wang 1995). In LEGION, a single oscillator, i, is defined as

dx_i/dt = 3x_i − x_i³ + 2 − y_i + I_i + S_i + ρ,   (1a)
dy_i/dt = ε(λ + γ tanh(βx_i) − y_i).   (1b)

Here I_i represents external stimulation to the oscillator, and S_i represents overall coupling from other oscillators and the GI in the network. The symbol ρ denotes the amplitude of a Gaussian noise. The other parameters ε, β, λ, and γ are chosen to control a periodic solution of the dynamical system. The periodic solution alternates between the silent and the active phases of near steady-state behavior (Terman & Wang 1995). The coupling term S_i at time t is

S_i = Σ_{k∈N(i)} W_ik S_∞(x_k(t − τ), θ_x) − W_z S_∞(z, θ_z),   (2)

where S_∞(x, θ) = 1/(1 + exp[−κ(x − θ)]) and the parameter κ controls the steepness of the sigmoid function. W_ik is a synaptic weight from oscillator k to oscillator i, and N(i) is the set of its immediate neighbors. τ is a time delay in interactions (Campbell & Wang 1998), and θ_x is a threshold over which an oscillator can affect its neighbors. W_z is the positive weight used for the inhibition from the global inhibitor z, whose activity is defined as

dz/dt = φ(u_∞ − z),   (3)

where u_∞ = 0 if x_i < θ_z for every oscillator i, and u_∞ = 1 if x_i(t) ≥ θ_z for at least one oscillator i. Here θ_z represents a threshold to determine whether the GI z sends inhibition to oscillators, and the parameter φ determines the rate at which the inhibitor reacts to stimulation from oscillators. We use pattern formation to refer to the behavior that all the oscillators representing the same object are synchronous, while the oscillators representing different objects are desynchronous. Terman and Wang (1995) have analytically shown that such a solution can be achieved in LEGION without time delays.
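To make the relaxation dynamics of equations (1a)-(1b) concrete, the following is a minimal numerical sketch of a single, uncoupled oscillator (the coupling S_i and the noise ρ are omitted, and the parameter values are illustrative choices, not those used in the paper's simulations):

```python
import math

def simulate_oscillator(I=1.0, eps=0.02, beta=500.0, gamma=5.0, lam=3.0,
                        dt=0.01, steps=30000):
    """Forward-Euler integration of one relaxation oscillator:
       dx/dt = 3x - x^3 + 2 - y + I
       dy/dt = eps * (lam + gamma * tanh(beta * x) - y)."""
    x, y = -2.0, 4.0  # start on the silent (left) branch
    xs = []
    for _ in range(steps):
        dx = 3 * x - x**3 + 2 - y + I
        dy = eps * (lam + gamma * math.tanh(beta * x) - y)
        x += dt * dx
        y += dt * dy
        xs.append(x)
    return xs

xs = simulate_oscillator()
# The trajectory alternates between an active phase (x near +2) and a
# silent phase (x near -2); count upward jumps through x = 0.
jumps = sum(1 for a, b in zip(xs, xs[1:]) if a < 0 <= b)
print("upward jumps observed:", jumps)
```

With a stimulated oscillator (I > 0) the slow variable y drifts along each cubic branch until a knee is reached, producing the jump between silent and active phases described in the text.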
However, a solution may not be achieved when time delays are introduced. Although the loose synchrony concept has been introduced to describe time delay behavior (Campbell & Wang 1998), it does not indicate pattern formation in an entire network even when loose synchrony is achieved, because loose synchrony is a local concept defined in terms of pairs of neighboring oscillators. Here we introduce a measure called min-max difference in order to examine whether pattern formation is achieved. Suppose that oscillators O_i and O_j represent two pixels in the same object, and oscillator O_k represents a pixel in a different object. Moreover, let t_s denote the time at which oscillator O_s enters the active phase. The min-max difference measure is defined as |t_i − t_j| < T_RB and |t_i − t_k| ≥ T_RB, where T_RB is the time period of an active phase. Intuitively, this measure suggests that pattern formation is achieved if any two oscillators representing two pixels in the same object have some overlap in the active phase, while any two oscillators representing two pixels belonging to different objects never stay in the active phase simultaneously. This definition of pattern formation applies both to exact synchrony in LEGION without time delays and to loose synchrony with time delays. 3 SIMULATIONS For a given image consisting of N x N pixels, a two-dimensional LEGION network with N x N oscillators is used so that each oscillator in the network corresponds to one pixel in the image. In the following simulations, equations 1-3 were numerically solved using the fourth-order Runge-Kutta method. We illustrate stimulated oscillators with black squares. All oscillators were initialized randomly. A large number of simulations have been conducted with a broad range of parameter values and network sizes (Chen & Wang 1997). Here we report typical results using a specific set of parameter values.
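The min-max difference measure can be expressed as a simple predicate over active-phase entry times. The helper below is a hypothetical illustration of the criterion, not the authors' code:

```python
def pattern_formed(entry_times, labels, T_RB):
    """Min-max difference criterion: any two oscillators with the same label
    must overlap in the active phase (|t_i - t_j| < T_RB), while any two with
    different labels must never be active simultaneously (|t_i - t_k| >= T_RB)."""
    n = len(entry_times)
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(entry_times[i] - entry_times[j])
            if labels[i] == labels[j] and gap >= T_RB:
                return False
            if labels[i] != labels[j] and gap < T_RB:
                return False
    return True

# Loose synchrony within each object, clear separation between objects:
print(pattern_formed([0.0, 0.3, 5.0, 5.2], ["A", "A", "B", "B"], T_RB=1.0))  # True
# A traveling wave that spreads entry times beyond T_RB breaks formation:
print(pattern_formed([0.0, 1.5, 5.0], ["A", "A", "B"], T_RB=1.0))  # False
```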
3.1 THE SPIRAL PROBLEM For simulations, the two images in Fig. 1 were sampled as two binary images with 29 x 29 pixels. For these images, two problems can be addressed: (1) When an image is presented, can one determine whether it contains a single spiral or double spirals? (2) Given a point on a two-dimensional plane, can one determine whether it is inside or outside a specific spiral? Fig. 3: Results of LEGION with a time delay τ = 0.002T (T is the period of oscillation) for the spiral problem. The parameter values used in this simulation are ε = 0.003, β = 500, γ = 24.0, λ = 21.5, W_T = 6.0, ρ = 0.03, κ = 500, θ_x = −0.5, θ_z = 0.1, φ = 3.0, W_z = 1.5, I_s = 1.0, and I_u = −1.0, where I_s and I_u are the external inputs to stimulated and unstimulated oscillators, respectively. We first applied LEGION with time delays to the single spiral image in Fig. 1(a). Fig. 3(a) illustrates the visual stimulus, where black pixels correspond to the stimulated oscillators and white ones correspond to the unstimulated oscillators. Fig. 3(b) shows a sequence of snapshots after the network was stabilized, except for the first snapshot, which shows the random initial state of the network. These snapshots are arranged in temporal order, first from left to right and then from top to bottom. We observe from these snapshots that an activated oscillator in the spiral propagates its activation to its two immediate neighbors with some time delay, and the process of propagation forms a traveling wave along the spiral. We emphasize that, at any time, only the oscillators corresponding to a portion of the spiral stay in the active phase together, and the entire spiral can never be in the active phase simultaneously. Thus, based on the oscillatory correlation theory, our system cannot group the whole spiral together, which indicates that our system fails to realize that the pixels in the spiral belong to the same pattern. Note that the convoluted part of the background behaves similarly. Fig.
3(c) shows the temporal trajectories of the combined x activities of the oscillators representing the spiral (S) and the background (B), as well as the temporal activity of the GI. According to the min-max difference measure, Fig. 3(c) shows that pattern formation cannot be achieved. In order to illustrate the effects of time delays, we applied LEGION without time delays to the same image. Simulation results show that pattern formation is achieved, and the single spiral can be segregated from the background by the second period (Chen & Wang 1997). Thus, LEGION without time delays can readily solve the spiral problem in this case. The failure to group the spiral in Fig. 3 is caused by time delays in the coupling of neighboring oscillators. We also applied LEGION with time delays to the double spirals image in Fig. 1(b). Fig. 4(a) shows the visual stimulus. Fig. 4(b) shows a sequence of snapshots arranged in the same order as in Fig. 3(b). We observe from these snapshots that, starting from an end of one spiral, a traveling wave is formed along the spiral and the activated oscillators representing the spiral propagate their activation. Due to time delays, however, only the oscillators corresponding to a portion of the spiral stay in the active phase together, and the entire spiral is never in the active phase simultaneously. The oscillators representing the other spiral have the same behavior. The results show that the pixels in any one of the double spirals cannot be grouped as the same pattern. We mention that the behavior of our system for the convoluted part of the background is similar to that for the double spirals. It is also evident from Fig. 4(c) that pattern formation is not achieved after the network was stabilized. We also applied LEGION without time delays to the double spirals image, for the same purpose as described before.
Simulation results also show that any one of the spirals can be segregated from both the other spiral and the background by the second period (Chen & Wang 1997). Once again, this indicates that the failure to group the double spirals in Fig. 4 results from time delays. Fig. 4: Results of LEGION with a time delay τ = 0.002T for the spiral problem. The parameter values used are the same as listed in the caption of Fig. 3. In (c), S1 and S2 represent the two disconnected spirals; B and GI denote the background and the global inhibitor, respectively. For the spiral problem, pattern formation means that solutions can be provided to the two problems in question: counting the number of objects and identifying whether or not two pixels belong to the same spiral. No such solutions are available when pattern formation is not achieved. Hence, our system cannot solve the spiral problem in general. Only under the special condition of no time delay can our system solve the problem. 3.2 INSIDE/OUTSIDE RELATIONS For simulations, the two pictures in Fig. 2 were sampled as binary images with 43 x 43 pixels. We first applied LEGION with time delays to the two images in Fig. 2. Figures 5(a) and 6(a) show the visual stimuli, where black pixels represent areas A and B that correspond to stimulated oscillators, and white pixels represent the boundary that corresponds to unstimulated oscillators. Figures 5(b) and 6(b) illustrate a sequence of snapshots after the networks were stabilized, except for the first snapshot, which shows the random initial states of the networks. Figures 5(c) and 6(c) show temporal trajectories of the combined x activities of the oscillators representing areas A and B as well as the GI, respectively. Fig. 5: Results of LEGION with a time delay τ = 0.002T for Fig. 2(a). The parameter values used in this simulation are ε = 0.004, γ = 14.0, λ
= 11.5, and the other parameter values are the same as listed in the caption of Fig. 3. In (c), A, B, and GI denote areas A and B and the global inhibitor, respectively. Fig. 6: Results of LEGION with a time delay τ = 0.002T for Fig. 2(b). The parameter values used and other statements are the same as listed in the caption of Fig. 5. We observe from Fig. 5(b) that the activation of an oscillator can rapidly propagate through its neighbors to other oscillators representing the same area, and eventually all the oscillators representing the same area (A or B) stay together in the active phase simultaneously, though they generally enter the active phase at different times due to time delays. Thus, on the basis of oscillatory correlation, our system can group an entire area (A or B) together and recognize all the pixels in area A or B as elements of the same area. According to the min-max difference measure, Fig. 5(c) shows that pattern formation is achieved by the second period. In contrast, we observe from Fig. 6(b) that although an activated oscillator rapidly propagates its activation in open regions, as shown in the last three snapshots, propagation is limited once the traveling wave spreads into spiral-like regions, as shown in the earlier snapshots. As a result, at any time, only the oscillators corresponding to a portion of either area stay in the active phase together, and the oscillators representing the whole area are never in the active phase simultaneously. Thus, on the basis of oscillatory correlation, our system cannot group the whole area, and fails to identify the pixels of one area as belonging to the same pattern. Furthermore, according to the min-max difference measure, Fig. 6(c) shows that pattern formation is not achieved after the network was stabilized.
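The underlying geometry can be illustrated independently of the oscillator dynamics: if each coupling hop costs a fixed delay, the total propagation time grows with path length, so a thin, convoluted region spreads activation over a window that can exceed the length of an active phase. A toy breadth-first sketch of this idea (a hypothetical illustration, not the authors' model):

```python
from collections import deque

def arrival_times(region, seed, delay=1.0):
    """Breadth-first propagation of activation from `seed` through a set of
    grid pixels, with a fixed coupling delay per hop (4-neighborhood)."""
    times = {seed: 0.0}
    q = deque([seed])
    while q:
        x, y = q.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in region and nb not in times:
                times[nb] = times[(x, y)] + delay
                q.append(nb)
    return times

# A compact 3x3 block: activation reaches every pixel within a few hops.
block = {(i, j) for i in range(3) for j in range(3)}
# A thin 1-pixel-wide path of the same area: the only route is sequential,
# so arrival times spread out with path length, as along a spiral arm.
path = {(i, 0) for i in range(9)}

spread_block = max(arrival_times(block, (0, 0)).values())
spread_path = max(arrival_times(path, (0, 0)).values())
print(spread_block, spread_path)  # 4.0 8.0
```

A compact region of nine pixels is covered in four hops, while the elongated region of the same size needs eight, which is the sense in which convoluted boundaries defeat rapid propagation.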
In order to illustrate the effects of time delays and show how to use an oscillator network to perceive inside/outside relations, we applied LEGION without time delays to the two images in Fig. 2. Our simulations show that LEGION without time delays readily segregates the two areas in both cases by the second period (Chen & Wang 1997). Thus, the failure to group each area in Fig. 6 is also attributed to time delays in the coupling of neighboring oscillators. In general, the above simulations suggest that oscillatory correlation provides a way to address inside/outside relations with a neural network; when pattern formation is achieved, a single area segregates from the other areas that appear in the same image. For a specific point on the two-dimensional plane, the inside/outside relation can be identified by examining whether or not the oscillator representing the point synchronizes with the oscillators representing a specific area. 4 DISCUSSION AND CONCLUSION It has been reported that many neural network models can solve the spiral problem through learning. However, their solutions are subject to limitations because the generalization abilities of the resulting learning systems highly depend on the training set. As pointed out by Minsky and Papert (1969), solving the spiral problem is equivalent to detecting connectedness. They showed that connectedness cannot be computed by any diameter-limited or order-limited perceptrons (Minsky & Papert 1969). This limitation holds for multilayer perceptrons regardless of learning scheme (Minsky & Papert 1988, p. 252). Unfortunately, few people have discussed the generality of their solutions. In contrast, our simulations have shown that LEGION without time delays can always distinguish these figures regardless of shape, position, size, and orientation. We emphasize that no learning is involved in LEGION.
In terms of performance, we suggest that the spiral problem may be better solved by a network of oscillators without learning. Our system provides an alternative way to perceive inside/outside relations from a neural computation perspective. Our method is significantly distinguished from visual routines (Ullman 1984, 1996). First, the visual routine method is described as serial algorithms, while our system is an inherently parallel and distributed process, although its emergent behavior reflects a degree of the serial nature of the problems. Second, the visual routine method does not make a qualitative distinction between rapid effortless perception, which corresponds to simple boundaries, and slow effortful perception, which corresponds to convoluted boundaries: the time a visual routine, e.g. the coloring method, takes varies continuously. In contrast, our system makes such a distinction: effortless perception with simple boundaries corresponds to when pattern formation is achieved, and effortful perception with convoluted boundaries corresponds to when pattern formation is not achieved. Third, and perhaps more importantly conceptually, our system does not invoke a high-level serial process to solve problems like inside/outside relations; its solution involves the same mechanism as it does for parallel image segmentation (see Wang & Terman 1997). Acknowledgments: The authors are grateful to S. Campbell for many discussions. This work was supported in part by an NSF grant (IRI-9423312), an ONR grant (N00014-93-1-0335), and an ONR Young Investigator Award (N00014-96-1-0676) to DLW. References Campbell, S. & Wang, D.L. (1998) Relaxation oscillators with time delay coupling. Physica D 111:151-178. Chen, K. & Wang, D.L. (1997) Perceiving without learning: from spirals to inside/outside relations. Technical Report OSU-CISRC-8/97-TR38, The Ohio State University. Grossberg, S. & Wyse, L. (1991) A neural network architecture for figure-ground separation of connected scenic figures.
Neural Networks 4:723-742. Julesz, B. (1995) Dialogues on perception. MIT Press. Lang, K. & Witbrock, M. (1988) Learning to tell two spirals apart. Proceedings of the 1988 Connectionist Models Summer School, pp. 52-59, Morgan Kaufmann. Milner, P. (1974) A model for visual shape recognition. Psychological Review 81:512-535. Minsky, M. & Papert, S. (1969) Perceptrons. MIT Press. Minsky, M. & Papert, S. (1988) Perceptrons (extended version). MIT Press. Singer, W. & Gray, C.M. (1995) Visual feature integration and the temporal correlation hypothesis. Annual Review of Neuroscience 18:555-586. Terman, D. & Wang, D.L. (1995) Global competition and local cooperation in a network of neural oscillators. Physica D 81:148-176. Ullman, S. (1984) Visual routines. Cognition 18:97-159. Ullman, S. (1996) High-level vision. MIT Press. von der Malsburg, C. (1981) The correlation theory of brain function. Internal Report 81-2, Max-Planck-Institute for Biophysical Chemistry. Wang, D.L. & Terman, D. (1997) Image segmentation based on oscillatory correlation. Neural Computation 9:805-836.
Optimizing Classifiers for Imbalanced Training Sets Grigoris Karakoulas Global Analytics Group Canadian Imperial Bank of Commerce 161 Bay St., BCE-11, Toronto ON, Canada M5J 2S8 Email: karakoul@cibc.ca John Shawe-Taylor Department of Computer Science Royal Holloway, University of London Egham, TW20 0EX, England Email: jst@dcs.rhbnc.ac.uk Abstract Following recent results [9, 8] showing the importance of the fat-shattering dimension in explaining the beneficial effect of a large margin on generalization performance, the current paper investigates the implications of these results for the case of imbalanced datasets and develops two approaches to setting the threshold. The approaches are incorporated into ThetaBoost, a boosting algorithm for dealing with unequal loss functions. The performance of ThetaBoost and the two approaches are tested experimentally. Keywords: Computational Learning Theory, Generalization, fat-shattering, large margin, pac estimates, unequal loss, imbalanced datasets 1 Introduction Shawe-Taylor [8] demonstrated that the output margin can also be used as an estimate of the confidence with which a particular classification is made. In other words, if a new example has an output value well clear of the threshold, we can be more confident of the associated classification than when the output value is closer to the threshold. The current paper applies this result to the case where there are different losses associated with a false positive than with a false negative. If a significant number of data points are misclassified we can use the criterion of minimising the empirical loss. If, however, the data is correctly classified, the empirical loss is zero for all correctly separating hyperplanes. It is in this case that the approach can provide insight into how to choose the hyperplane and threshold.
In summary, the paper suggests ways in which a hyperplane should be optimised for imbalanced datasets where the loss associated with misclassifying the less prevalent class is higher. 2 Background to the Analysis Definition 2.1 [3] Let F be a set of real-valued functions. We say that a set of points X is γ-shattered by F if there are real numbers r_x indexed by x ∈ X such that for all binary vectors b indexed by X, there is a function f_b ∈ F realising dichotomy b with margin γ. The fat-shattering dimension Fat_F of the set F is a function from the positive real numbers to the integers which maps a value γ to the size of the largest γ-shattered set, if this is finite, or infinity otherwise. In general we are concerned with classifications obtained by thresholding real-valued functions. The classification values will be {−1, 1} instead of the usual {0, 1} in order to simplify some expressions. Hence, typically we will consider a set F of functions mapping from an input space X to the reals. For the sake of simplifying the presentation of our results we will assume that the threshold used for classification is 0. The results can be extended to other thresholds without difficulty. Hence we implicitly use the classification functions H = T(F) = {T(f) : f ∈ F}, where T(f) is the function f thresholded at 0. We will say that f has margin γ on the training set {(x_i, y_i) : i = 1, ..., m} if min_{1≤i≤m} {y_i f(x_i)} = γ. Note that a positive margin implies that T(f) is consistent. Definition 2.2 Given a real-valued function f : X → [−1, 1] used for classification by thresholding at 0, and a probability distribution P on X × {−1, 1}, we use er_P(f) to denote the following probability: er_P(f) = P{(x, y) : yf(x) ≤ 0}.
Further suppose 0 ≤ η ≤ 1; then we use er_P(f|η) to denote the probability er_P(f|η) = P{(x, y) : yf(x) ≤ 0 | |f(x)| ≥ η}. The probability er_P(f|η) is the probability of misclassification of a randomly chosen example given that it has a margin of η or more. We consider the following restriction on the set of real-valued functions. Definition 2.3 The real-valued function class F is closed under addition of constants if η ∈ ℝ, f ∈ F ⇒ f + η ∈ F. Note that the linear functions (with threshold weights) used in perceptrons [9] satisfy this property, as do neural networks with linear output units. Hence, this property applies to the Support Vector Machine and the neural network examples. We now quote a result from [8]. Theorem 2.4 [8] Let F be a class of real-valued functions closed under addition of constants with fat-shattering dimension bounded by Fat_F(γ), which is continuous from the right. With probability at least 1 − δ over the choice of a random m-sample (x_i, y_i) drawn according to P the following holds. Suppose that for some f ∈ F, η > 0, 1. y_i f(x_i) ≥ −η + 2γ for all (x_i, y_i) in the sample, 2. n = |{i : y_i f(x_i) ≥ η + 2γ}|, 3. n ≥ 3√(2m(2d ln(288m) log₂(12em) + ln(32m²/δ))). Let d = Fat_F(γ/6). Then the probability that a new example with margin η is misclassified is bounded by (2/m)(2d log₂(288m) log₂(12em) + log₂(32m²/δ)). 3 Unequal Loss Functions We consider the situation where the loss associated with an example is different for misclassification of positive and negative examples. Let L_h(x, y) be the loss associated with the classification function h on example (x, y). For the analysis considered above the loss function is taken to be L_h(x, y) = |h(x) − y|, that is, 1 if the point x is misclassified and 0 otherwise. This is also known as the discrete loss. In this paper we consider a different loss function for classification functions.
Definition 3.1 The loss function L_β is defined as L_β(x, y) = βy + (1 − y) if h(x) ≠ y, and 0 otherwise. We first consider the classical approach of minimizing the empirical loss, that is, the loss on the training set. Since the loss function is no longer binary, the standard theoretical results that can be applied are much weaker than for the binary case. The algorithmic implications will, however, be investigated under the assumption that we are using a hyperplane parallel to the maximal margin hyperplane. The empirical risk is given by ER(h) = Σ_{i=1}^m L_β(x_i, y_i) for the training set {(x_i, y_i) : i = 1, ..., m}. Assuming that the training set can be correctly classified by the hypothesis class, this criterion will not be able to distinguish between consistent hypotheses, hence giving no reason not to choose the standard maximal margin choice. However, there is a natural way to introduce the different losses into the maximal margin quadratic programming procedure [1]. Here, the constraints given are specified as y_i((w · x_i) + θ) ≥ 1, i = 1, 2, ..., m. In order to force the hyperplane away from the positive points, which will incur greater loss, a natural heuristic is to set y_i = −1 for negative examples and y_i = 1/β for positive points, hence making them further from the decision boundary. In the case where consistent classification is possible, the effect of this will be to move the hyperplane parallel to itself so that the margin on the positive side is β times that on the negative side. Hence, to solve the problem we simply use the standard maximal margin algorithm [1] and then replace the threshold θ with b = (1/(1 + β))[(w · x⁺) + β(w · x⁻)], (1) where x⁺ (x⁻) is one of the closest positive (negative) points. The alternative approach we wish to employ is to consider other movements of the hyperplane parallel to itself while retaining consistency. Let γ₀ be the margin of the maximal margin hyperplane.
We consider a consistent hyperplane h_η with margin γ₀ + η to the positive examples and γ₀ − η to the negative examples. The basic analytic tool is Theorem 2.4, which will be applied once for the positive examples and once for the negative examples (note that classifications are in the set {−1, 1}). Theorem 3.2 Let h₀ be the maximal margin hyperplane with margin γ₀, while h_η is as above with η < γ₀. Set γ⁺ = (γ₀ + η)/2 and γ⁻ = (γ₀ − η)/2. With probability at least 1 − δ over the choice of a random m-sample (x_i, y_i) drawn according to P the following holds. Suppose that for h₀ 1. n₀ = |{i : y_i h₀(x_i) ≥ 2η + γ₀}|, 2. n₀ ≥ 3√(2m(d ln(288m) log₂(12em) + ln(8/δ))). Let d⁺ = Fat_F(γ⁺/6) and d⁻ = Fat_F(γ⁻/6). Then the expected loss can be bounded as described in the proof. Proof: Using Theorem 2.4 we can bound the probability of error given that the correct classification is positive in terms of the expression with fat-shattering dimension d⁺ and n = n₀, while for a negative example we can bound the probability of error in terms of the expression with fat-shattering dimension d⁻ and n = m. Hence, the expected loss can be bounded by taking the maximum of the second bound with n⁺ in place of m, together with a factor β in front of the second log term, and the first bound multiplied by β. ∎ The bound obtained suggests a way of optimising the choice of η, namely to minimise the expression for the fat-shattering dimension of linear functions [9]. Solving for η in terms of γ₀ and β gives η = γ₀(√β − 1)/(√β + 1). (2) This choice of η does not in general agree with that suggested by the choice of the threshold b in the previous section. In a later section we report on initial experiments for investigating the performance of these different choices.
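Both adjustments are cheap to compute once the maximal margin hyperplane is known. The sketch below uses hypothetical helper names, and takes the margin-shift formula in the form η = γ₀(√β − 1)/(√β + 1):

```python
import math

def shifted_threshold(w, x_pos, x_neg, beta):
    """Threshold b from equation (1): place the hyperplane so that the margin
    on the positive side is beta times the margin on the negative side."""
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    return (dot(w, x_pos) + beta * dot(w, x_neg)) / (1.0 + beta)

def margin_shift(gamma0, beta):
    """Shift eta from equation (2), obtained by minimising the fat-shattering
    bound for linear functions."""
    r = math.sqrt(beta)
    return gamma0 * (r - 1.0) / (r + 1.0)

# A 1-d illustration: closest positive point at w.x = 3, negative at w.x = -1.
w, x_pos, x_neg = [1.0], [3.0], [-1.0]
b = shifted_threshold(w, x_pos, x_neg, beta=3.0)
print(b)  # 0.0: the positive margin (3) is then 3x the negative margin (1)
eta = margin_shift(gamma0=2.0, beta=3.0)
print(round(eta, 3))
```

Note that the two rules move the hyperplane by different amounts for the same β, which is exactly the disagreement the text points out.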
4 The ThetaBoost Algorithm

The above idea of adjusting the margin in the case of an unequal loss function can also be applied to the AdaBoost algorithm [2], which has been shown to maximise the margin on the training examples, so that the generalization can be bounded in terms of the margin and the fat-shattering dimension of the functions that can be produced by the algorithm [6]. We will first develop a boosting algorithm for unequal loss functions and then extend it for adjustable margin. More specifically, assume: (i) a set of training examples (x₁, y₁), ..., (x_m, y_m), where x_i ∈ X and y ∈ Y = {−1, +1}; (ii) a weak learner that outputs hypotheses h : X → {−1, +1}; and (iii) the unequal loss function L_β of Definition 3.1. We assign initial weight D₁(i) = w⁺ to the n⁺ positive examples and D₁(i) = w⁻ to the n⁻ negative examples, where w⁺n⁺ + w⁻n⁻ = 1. The values can be set so that w⁺/w⁻ = β, or they can be adjusted using a validation set. The generalization of AdaBoost to the case of an unequal loss function is given as the AdaUBoost algorithm in Figure 1. We adapt Theorem 1 of [7] for this algorithm.

Theorem 4.1 Assuming the notation and algorithm of Figure 1, the following bound holds on the training error of H:

w⁺ |{i : H(x_i) ≠ y_i = 1}| + w⁻ |{i : H(x_i) ≠ y_i = −1}| ≤ ∏_{t=1}^T Z_t.   (3)

The choice of w⁺ and w⁻ will force uneven probabilities of misclassification on the training set, but to ensure that the weak learners concentrate on misclassified positive examples we define Z (suppressing the subscript t) as

Z = Σ_i D(i) exp(−α β_i y_i h(x_i)),   (4)

where β_i = 1/β if y_i = 1 and β_i = 1 otherwise. Thus, to minimize the training error we should seek to minimize Z with respect to α (the voting coefficient) on each iteration of boosting.
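A runnable sketch of one AdaUBoost reweighting round under these definitions (variable names and toy data are ours; α is found by bisection on Z′, which is valid because, as noted below, Z″(α) > 0):

```python
import numpy as np

def Z(alpha, D, beta_i, y, h):
    # Eqn (4): the unnormalized weight mass after one update.
    return np.sum(D * np.exp(-alpha * beta_i * y * h))

def best_alpha(D, beta_i, y, h, lo=-10.0, hi=10.0, iters=200):
    # Z is strictly convex in alpha, so Z' is increasing and bisection
    # on Z' converges to the unique minimizer.
    def dZ(a):
        return np.sum(-beta_i * y * h * D * np.exp(-a * beta_i * y * h))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dZ(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy round: beta = 2, the weak learner errs on one positive example.
beta = 2.0
y = np.array([1.0, 1.0, -1.0, -1.0])
h = np.array([1.0, -1.0, -1.0, -1.0])
beta_i = np.where(y == 1, 1.0 / beta, 1.0)
D = np.full(4, 0.25)
a = best_alpha(D, beta_i, y, h)
D_next = D * np.exp(-a * beta_i * y * h)
D_next /= D_next.sum()          # dividing by Z_t keeps D a distribution
```

After the step, the misclassified positive example carries the largest weight, which is exactly the behavior the β_i factors are meant to produce.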
Following [7], we introduce the notation W₊₊, W₋₊, W₊₋ and W₋₋, where for s₁, s₂ ∈ {−1, +1}

W_{s₁s₂} = Σ_{i : y_i = s₁, h(x_i) = s₂} D(i).   (5)

By equating to zero the first derivative of (4) with respect to α, Z′(α), and using (5), we have

−exp(−α/β) W₊₊/β + exp(α/β) W₊₋/β + exp(α) W₋₊ − exp(−α) W₋₋ = 0.

Letting Y = exp(α) we get a polynomial equation in Y:

c₁Y^{−1/β} + c₂Y^{1/β} + c₃Y + c₄Y^{−1} = 0,   (6)

where c₁ = −W₊₊/β, c₂ = W₊₋/β, c₃ = W₋₊, and c₄ = −W₋₋. The root of this equation can be found numerically. Since Z″(α) > 0, Z′(α) can have at most one zero, and this gives the unique minimum of Z(α). The solution for α from (6) is used (as α_t) when taking the distance of a training example from the standard threshold on each iteration of the AdaUBoost algorithm in Figure 1, as well as when combining the weak learners in H(x).

The ThetaBoost algorithm searches for a positive and a negative support vector (SV) point such that the hyperplane separating them has the largest margin. Once these SV points are found we can then apply the formulas (1) and (2) of Sections 3.1 and 3.2, respectively, to compute values for adjusting the threshold. See Figure 1 for the complete algorithm.

Algorithm AdaUBoost(X, Y, β)
1. Initialize D₁(i) as described above.
2. For t = 1, ..., T:
   • train the weak learner using distribution D_t;
   • get weak hypothesis h_t;
   • choose α_t ∈ ℝ;
   • update D_{t+1}(i) = D_t(i) exp[−α_t β_i y_i h_t(x_i)]/Z_t, where β_i = 1/β if y_i = 1 and 1 otherwise, and Z_t is a normalization factor such that Σ_i D_{t+1}(i) = 1.
3. Output the final hypothesis: H(x) = sgn(Σ_{t=1}^T α_t h_t(x)).

Algorithm ThetaBoost(X, Y, β, δ_M)
1. H(x) = AdaUBoost(X, Y, β);
2. Remove from the training dataset the false positive and borderline points;
3. Find the smallest H(x⁺) and mark this as the SV₊; and remove any negative points with value greater than H(SV₊);
4.
Find the first negative point that is next in ranking to the SV₊ and mark this as SV₋; and compute the margin as the sum of the distances, d₊ and d₋, of SV₊ and SV₋ from the standard threshold;
5. Check for candidate SV₋'s that are near to the current one and change the margin by at least δ_M;
6. Use SV₊ and SV₋ to compute the theta threshold from Eqns (1) and (2);
7. Output the final hypothesis: H(x) = sgn(Σ_{t=1}^T α_t h_t(x) − θ).

Figure 1: The AdaUBoost and ThetaBoost algorithms.

5 Experiments

The purpose of the experiments reported in this section is two-fold: (i) to compare the generalization performance of AdaUBoost against that of standard AdaBoost on imbalanced datasets; (ii) to examine the two formulas for choosing the threshold in ThetaBoost and evaluate their effect on generalization performance. For the evaluations in (i) and (ii) we use two performance measures: the average L_β and the geometric mean of accuracy (g-mean) [4]. The latter is defined as g = √(precision · recall), where

precision = (# positives correct) / (# positives predicted);
recall = (# positives correct) / (# true positives).

The g-mean has recently been proposed as a performance measure that, in contrast to accuracy, can capture the "specificity" trade-off between false positives and true positives in imbalanced datasets [4]. It is also independent of the distribution of examples between classes. For our initial experiments we used the satimage dataset from the UCI repository [5] and used a uniform D₁. The dataset is about classifying neighborhoods of pixels in a satellite image. It has 36 continuous attributes and 6 classes. We picked class 4 as the goal class since it is the least prevalent one (9.73% of the dataset). The dataset comes in a training set (4435 examples) and a test set (2000 examples). Table 1 shows the performance on the test set of AdaUBoost, AdaBoost and C4.5 for different values of the β parameter.
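The g-mean as defined here is straightforward to compute from confusion-matrix counts; a small sketch (the counts are made up for illustration, and note that this paper's g-mean uses precision and recall, not the sensitivity/specificity variant):

```python
import math

def g_mean(tp, fp, fn):
    """g-mean of [4] as defined in the text: sqrt(precision * recall)
    over the positive (goal) class. tp/fp/fn are test-set counts."""
    precision = tp / (tp + fp)   # positives correct / positives predicted
    recall = tp / (tp + fn)      # positives correct / true positives
    return math.sqrt(precision * recall)

# e.g. 150 of 194 true positives found, with 30 false alarms
g = g_mean(tp=150, fp=30, fn=44)
```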
It should be pointed out that the latter two algorithms minimize the total error assuming an equal loss function (β = 1). In the case of equal loss, AdaUBoost simply reduces to AdaBoost. As observed from the table, the higher the loss parameter, the bigger the improvement of AdaUBoost over the other two algorithms. This is particularly apparent in the values of g-mean.

         AdaUBoost         AdaBoost          C4.5
β        avgLoss  g-mean   avgLoss  g-mean   avgLoss  g-mean
1        0.0545   0.773    0.0545   0.773    0.0885   0.724
2        0.0895   0.865    0.0831   0.773    0.136    0.724
4        0.13     0.889    0.1662   0.773    0.231    0.724
8        0.1785   0.898    0.3324   0.773    0.421    0.724
16       0.267    0.89     0.664    0.773    0.801    0.724

Table 1: Generalization performance on the SatImage dataset.

Figure 2 shows the generalization performance of ThetaBoost in terms of average loss (β = 2) for different values of the threshold θ. The latter ranges from the largest margin of the negative examples, which corresponds to SV₋, to the smallest margin of the positive examples, which corresponds to SV₊. This range includes the values of b and η given by formulas (1) and (2). In this experiment, δ_M was set to 0.2. As depicted in the figure, the margin defined by b achieves better generalization performance than the margin defined by η. In particular, b is closer to the value of θ that gives the minimum loss on this test set. In addition, ThetaBoost with b performs better than AdaUBoost on this test set. We should emphasise, however, that the differences are not significant and that more extensive experiments are required before the two approaches can be ranked reliably.

Figure 2: Average loss L_β (β = 2) on the test set as a function of θ.

6 Discussion

In the above we built a theoretical framework for optimally setting the margin given an unequal loss function.
By applying this framework to boosting, we developed AdaUBoost and ThetaBoost, which generalize AdaBoost, a well-known boosting algorithm, to take into account unequal loss functions and to adjust the margin in imbalanced datasets. Initial experiments have shown that both these factors improve the generalization performance of the boosted classifier.

References
[1] C. Cortes and V. Vapnik, Machine Learning, 20, 273-297, 1995.
[2] Y. Freund and R. Schapire, pages 148-156 in Proceedings of the International Conference on Machine Learning, ICML'96, 1996.
[3] M. J. Kearns and R. E. Schapire, pages 382-391 in Proceedings of the 31st Symposium on the Foundations of Computer Science, FOCS'90, 1990.
[4] M. Kubat, R. Holte and S. Matwin, Machine Learning, 30, 195-215, 1998.
[5] C. J. Merz and P. M. Murphy (1997). UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html.
[6] R. Schapire, Y. Freund, P. Bartlett and W. Sun Lee, pages 322-330 in Proceedings of the International Conference on Machine Learning, ICML'97, 1997.
[7] R. Schapire and Y. Singer, in Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT'98, 1998.
[8] J. Shawe-Taylor, Algorithmica, 22, 157-172, 1998.
[9] J. Shawe-Taylor, P. Bartlett, R. Williamson and M. Anthony, IEEE Trans. Inf. Theory, 44(5), 1926-1940, 1998.
Blind Separation of Filtered Sources Using State-Space Approach

Liqing Zhang* and Andrzej Cichocki†
Laboratory for Open Information Systems, Brain Science Institute, RIKEN
Saitama 351-0198, Wako-shi, JAPAN
Email: {zha,cia}@open.brain.riken.go.jp

Abstract

In this paper we present a novel approach to multichannel blind separation/generalized deconvolution, assuming that both the mixing and demixing models are described by stable linear state-space systems. We decompose the blind separation problem into two processes: separation and state estimation. Based on the minimization of the Kullback-Leibler divergence, we develop a novel learning algorithm to train the matrices in the output equation. To estimate the state of the demixing model, we introduce a new concept, called the hidden innovation, to numerically implement the Kalman filter. Computer simulations are given to show the validity and high effectiveness of the state-space approach.

1 Introduction

The field of blind separation and deconvolution has grown dramatically in recent years due to its similarity to the separation faculty of the human brain, as well as its rapidly growing applications in various fields, such as telecommunication systems, image enhancement and biomedical signal processing. The blind source separation problem is to recover independent sources from sensor outputs without assuming any a priori knowledge of the original signals besides certain statistical features. Refer to the review papers [1] and [5] for the current state of theory and methods in the field.
* On leave from South China University of Technology, China. † On leave from Warsaw University of Technology, Poland.

Although there exist a number of models and methods, such as infomax, the natural gradient approach and equivariant adaptive algorithms, for blindly separating independent sources, there still remain several challenges in generalizing the mixture to dynamic and nonlinear systems, as well as in developing more rigorous and effective algorithms with general convergence [1-9], [11-13].

The state-space description of systems is a new model for blind separation and deconvolution [9,12]. There are several reasons why we use linear state-space systems as blind deconvolution models. Although transfer function models are equivalent to state-space ones, it is difficult to exploit any common features that may be present in real dynamic systems. The main advantage of the state-space description for blind deconvolution is that it not only gives the internal description of a system, but there are also various equivalent types of state-space realizations for a system, such as balanced realizations and observable canonical forms. In particular, it is known how to parameterize some specific classes of models which are of interest in applications. It is also much easier to tackle the stability problem of state-space systems using the Kalman filter. Moreover, the state-space model enables a much more general description than standard finite impulse response (FIR) convolutive filtering. All known filtering (dynamic) models, such as AR, MA, ARMA, ARMAX and Gamma filterings, can be considered as special cases of flexible state-space models.

2 Formulation of the Problem

Assume that the source signals are stationary zero-mean i.i.d. processes and mutually statistically independent. Let s(t) = (s₁(t), ..., s_n(t)) be an unknown vector of independent i.i.d. sources.
Suppose that the mixing model is described by a stable linear discrete-time state-space system

x(k + 1) = Ax(k) + Bs(k) + L_P e_P(k),   (1)
u(k) = Cx(k) + Ds(k) + δ(k),   (2)

where x ∈ ℝ^r is the state vector of the system, s(k) ∈ ℝⁿ is the vector of source signals and u(k) ∈ ℝᵐ is the vector of sensor signals. A, B, C and D are the mixing matrices of the state-space model with consistent dimensions. e_P(k) is the process noise and δ(k) is the sensor noise of the mixing system. If we ignore the noise terms in the mixing model, its transfer function matrix is described by the m × n matrix

H(z) = C(zI − A)⁻¹B + D,   (3)

where z⁻¹ is a delay operator. We formulate the blind separation problem as the task of recovering the original signals from the observations u(t) without prior knowledge of the source signals or of the state-space matrices [A, B, C, D], besides certain statistical features of the source signals. We propose that the demixing model here is another linear state-space system, described as follows (see Fig. 1):

x(k + 1) = Ax(k) + Bu(k) + L e_R(k),   (4)
y(k) = Cx(k) + Du(k),   (5)

where the input u(k) of the demixing model is just the output (the sensor signals) of the mixing model, and e_R(k) is the reference model noise. A, B, C and D are the demixing matrices of consistent dimensions. In general, the matrices W = [A, B, C, D, L] are parameters to be determined in the learning process. For simplicity, we do not consider, at this moment, the noise terms in either the mixing or the demixing model. The transfer function of the demixing model is W(z) = C(zI − A)⁻¹B + D. The output y(k) is designed to recover the source signals in the following sense:

y(k) = W(z)H(z)s(k) = PΛ(z)s(k),   (6)

Figure 1: General state-space model for blind deconvolution.

where P is any permutation matrix and Λ(z) is a diagonal matrix with λᵢz^{−τᵢ} in diagonal entry (i, i); here λᵢ is a nonzero constant and τᵢ is any nonnegative integer.
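Ignoring the noise terms, the mixing model (1)-(2) is a plain linear recursion; a minimal sketch of running it forward (the toy matrices are illustrative, not the paper's example):

```python
import numpy as np

rng = np.random.default_rng(0)

def state_space_mix(A, B, C, D, s):
    """Run the noise-free mixing model (1)-(2):
    x(k+1) = A x(k) + B s(k),   u(k) = C x(k) + D s(k)."""
    r, n = B.shape
    T = s.shape[1]
    x = np.zeros(r)
    u = np.empty((C.shape[0], T))
    for k in range(T):
        u[:, k] = C @ x + D @ s[:, k]
        x = A @ x + B @ s[:, k]
    return u

# Toy instance: n = 2 uniform i.i.d. sources, state dimension r = 3.
n, r = 2, 3
A = 0.5 * np.eye(r)
B = rng.standard_normal((r, n))
C = rng.standard_normal((n, r))
D = np.eye(n) + 0.1 * rng.standard_normal((n, n))
s = rng.uniform(-1, 1, size=(n, 200))
u = state_space_mix(A, B, C, D, s)
```

With x(0) = 0 the first sensor sample is purely the instantaneous term D s(0), which makes the relation to the instantaneous ICA special case easy to see.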
It is easy to see that the linear state-space model mixture is an extension of the instantaneous mixture. When the matrices A, B, C in the mixing model and A, B, C in the demixing model are all null matrices, the problem reduces to the standard ICA problem [1-8]. The question here is whether there exist matrices [A, B, C, D] in the demixing model (4) and (5) such that its transfer function W(z) satisfies (6). It is proven [12] that if the matrix D in the mixing model is of full rank, rank(D) = n, then there exist matrices [A, B, C, D] such that the output signal y of the state-space system (4) and (5) recovers the independent source signals s in the sense of (6).

3 Learning Algorithm

Assume that p(y, W) and p_i(y_i, W) are the joint probability density function of y and the marginal pdf of y_i (i = 1, ..., n), respectively. We employ the mutual information of the output signals, which measures the mutual independence of the output signals y_i(k), as a risk function [1,2]:

l(W) = −H(y, W) + Σ_{i=1}^n H(y_i, W),   (7)

where H(y, W) = −∫ p(y, W) log p(y, W) dy and H(y_i, W) = −∫ p_i(y_i, W) log p_i(y_i, W) dy_i. In this paper we do not directly develop learning algorithms to update all the parameters W = [A, B, C, D] of the demixing model. We separate the blind deconvolution problem into two procedures: separation and state estimation. In the separation procedure we develop a novel learning algorithm, using a new search direction, to update the matrices C and D in the output equation (5). We then define a hidden innovation of the output and use the Kalman filter to estimate the state vector x(k). For simplicity we suppose that the matrix D in the demixing model (5) is a nonsingular n × n matrix. From the risk function (7), we can obtain a cost function for on-line learning:

l(y, W) = −(1/2) log det(DᵀD) − Σ_{i=1}^n log p_i(y_i, W),   (8)

where det(DᵀD) is the determinant of the symmetric positive definite matrix DᵀD.
For the gradient of l with respect to W, we calculate the total differential dl of l(y, W) when we take a differential dW on W:

dl(y, W) = l(y, W + dW) − l(y, W).   (9)

Following Amari's derivation for natural gradient methods [1-3], we have

dl(y, W) = −tr(dD D⁻¹) + φᵀ(y) dy,   (10)

where tr is the trace of a matrix and φ(y) is a vector of nonlinear activation functions

φ_i(y_i) = −d log p_i(y_i)/dy_i = −p′_i(y_i)/p_i(y_i).   (11)

Taking the differential of equation (5), we have the following approximation:

dy = dC x(k) + dD u(k).   (12)

On the other hand, from (5) we have

u(k) = D⁻¹(y(k) − C x(k)).   (13)

Substituting (13) into (12), we obtain

dy = (dC − dD D⁻¹C)x + dD D⁻¹y.   (14)

In order to improve the computational efficiency of the learning algorithm, we introduce new search directions

dX₁ = dC − dD D⁻¹C,   (15)
dX₂ = dD D⁻¹.   (16)

Then the total differential dl can be expressed as

dl = −tr(dX₂) + φᵀ(y)(dX₁ x + dX₂ y).   (17)

It is easy to obtain the derivatives of the cost function l with respect to the matrices X₁ and X₂:

∂l/∂X₁ = φ(y(k)) xᵀ(k),   (18)
∂l/∂X₂ = φ(y(k)) yᵀ(k) − I.   (19)

From (15) and (16), we derive a novel learning algorithm to update the matrices C and D:

ΔC(k) = η(−φ(y(k)) xᵀ(k) + (I − φ(y(k)) yᵀ(k)) C(k)),   (20)
ΔD(k) = η(I − φ(y(k)) yᵀ(k)) D(k).   (21)

The equilibrium points of the learning algorithm satisfy the following equations:

E[φ(y(k)) xᵀ(k)] = 0,   (22)
E[I − φ(y(k)) yᵀ(k)] = 0.   (23)

This means that the separated signals y can become as mutually independent as possible if the nonlinear activation functions φ(y) are suitably chosen and the state vector x(k) is well estimated. From (20) and (21), we see that the natural gradient learning algorithm [2] is covered as a special case of this learning algorithm when the mixture simplifies to the instantaneous case. The learning algorithm derived above solves the blind separation problem under the assumption that the state matrices A and B are known or designed appropriately.
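A minimal sketch of one step of updates (20)-(21), using the cubic activation the paper later adopts in its simulations (function and variable names are ours; the toy state values are illustrative):

```python
import numpy as np

def phi(y):
    # cubic nonlinearity; the paper's simulations use phi(y) = y**3
    return y ** 3

def update_CD(C, D, x, y, eta=0.01):
    """One step of the learning rule (20)-(21) for the output-equation
    matrices of the demixing model."""
    n = len(y)
    G = np.eye(n) - np.outer(phi(y), y)              # I - phi(y) y^T
    C_new = C + eta * (-np.outer(phi(y), x) + G @ C)
    D_new = D + eta * G @ D
    return C_new, D_new

# One step from a toy state:
C, D = np.zeros((2, 3)), np.eye(2)
x, y = np.zeros(3), np.array([0.5, -0.5])
C, D = update_CD(C, D, x, y)
```

Note that at a point satisfying the equilibrium conditions (22)-(23), i.e. φ(y)xᵀ = 0 and φ(y)yᵀ = I, the update leaves C and D unchanged, as it should.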
In the next section, instead of adjusting the state matrices A and B directly, we propose new approaches for estimating the state vector x.

4 State Estimator

From the output equation (5), it is observed that if we can accurately estimate the state vector x(k) of the system, then we can separate the mixed signals using the learning algorithm (20) and (21).

4.1 Kalman Filter

The Kalman filter is a useful technique for estimating the state vector in state-space models. The function of the Kalman filter is to generate on line the state estimate of the state x(k). The Kalman filter dynamics are given as follows:

x(k + 1) = Ax(k) + Bu(k) + Kr(k) + e_R(k),   (24)

where K is the Kalman filter gain matrix and r(k) is the innovation or residual vector, which measures the error between the measured (or expected) output y(k) and the predicted output Cx(k) + Du(k). There is a variety of algorithms to update the Kalman filter gain matrix K as well as the state x(k); refer to [10] for more details. However, in the blind deconvolution problem there exists no explicit residual r(k) with which to estimate the state vector x(k), because the expected output y(t) would be the unavailable source signals. In order to solve this problem, we present a new concept, called the hidden innovation, to implement the Kalman filter in the blind deconvolution case. Since updating the matrices C and D produces an innovation in each learning step, we introduce a hidden innovation as follows:

r(k) = Δy(k) = ΔC x(k) + ΔD u(k),   (25)

where ΔC = C(k + 1) − C(k) and ΔD = D(k + 1) − D(k). The hidden innovation represents the adjusting direction of the output of the demixing system and is used to generate an a posteriori state estimate. Once we define the hidden innovation, we can employ the commonly used Kalman filter to estimate the state vector x(k), as well as to update the Kalman gain matrix K.
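A sketch of one Kalman update driven by the hidden innovation (25), i.e. gain computation, a posteriori state correction, and covariance propagation; matrix names and the toy values are ours, and the noise covariances Q, R are assumed known:

```python
import numpy as np

def hidden_innovation_step(x, P, A, B, C, D, u, dC, dD, Q, R):
    """One Kalman step driven by the hidden innovation
    r(k) = dC x(k) + dD u(k) of Eqn (25)."""
    r = dC @ x + dD @ u                                   # hidden innovation
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)          # Kalman gain
    x = x + K @ r                                         # a posteriori state
    P = (np.eye(len(x)) - K @ C) @ P                      # covariance update
    x_next = A @ x + B @ u                                # state prediction
    P_next = A @ P @ A.T + Q                              # covariance prediction
    return x_next, P_next

# Toy step: 2-dimensional state, scalar input/output channel.
A = 0.9 * np.eye(2); B = np.ones((2, 1))
C = np.array([[1.0, 0.0]]); D = np.eye(1)
dC = 0.01 * np.ones((1, 2)); dD = 0.01 * np.eye(1)
Q = 0.01 * np.eye(2); R = 0.1 * np.eye(1)
x, P = np.zeros(2), np.eye(2)
x, P = hidden_innovation_step(x, P, A, B, C, D, np.array([1.0]), dC, dD, Q, R)
```

The covariance recursion is the standard one; only the innovation source differs from a conventional filter.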
The updating rule in this paper is described as follows:

(1) Compute the Kalman gain matrix: K(k) = P(k)Cᵀ(k)(C(k)P(k)Cᵀ(k) + R(k))⁻¹;
(2) Update the state vector with the hidden innovation: x(k) = x(k) + K(k)r(k);
(3) Update the error covariance matrix: P(k) = (I − K(k)C(k))P(k);
(4) Evaluate the state vector ahead: x(k + 1) = A(k)x(k) + B(k)u(k);
(5) Evaluate the error covariance matrix ahead: P(k + 1) = A(k)P(k)Aᵀ(k) + Q(k);

with the initial condition P(0) = I, where Q(k) and R(k) are the covariance matrices of the noise vector e_R and of the output measurement noise n_k. The theoretical problems, such as convergence and stability, remain to be elaborated. Simulation experiments show that the proposed algorithm, based on the Kalman filter, can separate the convolved signals well.

4.2 Information Back-propagation

Another solution to estimating the state of the system is to propagate the mutual information backward. If we consider the cost function to be also a function of the vector x, then we have the partial derivative of l(y, W) with respect to x:

∂l(y, W)/∂x = Cᵀφ(y).   (26)

We then adjust the state vector x(k) according to the following rule:

x(k) = x(k) − η Cᵀφ(y(k)).   (27)

The estimated state vector is then used as a new state of the system.

5 Numerical Implementation

Several numerical simulations have been done to demonstrate the validity and effectiveness of the proposed algorithm. Here we give a typical example.

Example 1. Consider the following MIMO mixing model:

u(k) + Σ_{i=1}^{10} A_i u(k − i) = s(k) + Σ_{i=1}^{10} B_i s(k − i) + v(k),

where u, s, v ∈ ℝ³, and

A₂ = ( −0.48 −0.16 −0.64 ; −0.16 −0.48 −0.24 ; −0.16 −0.16 −0.08 ),
A₈ = ( −0.50 −0.10 −0.40 ; −0.10 −0.50 −0.20 ; −0.10 −0.10 −0.10 ),
A₁₀ = ( 0.32 0.19 0.38 ; 0.16 0.29 0.20 ; 0.08 0.08 0.10 ),
B₂ = ( 0.42 0.21 0.,4 ; 0.10 0.56 0.14 ; 0.21 0.21 0.35 ),
B₈ = ( −0.40 −0.08 −0.08 ; −0.08 −0.40 −0.16 ; −0.08 −0.08 −0.56 ),
B₁₀ = ( −0.19 −0.15 −0.,0 ; −0.11 −0.27 −0.12 ; −0.16 −0.18 −0.22 ),

and all other matrices are set to the null matrix.
The sources s are chosen to be i.i.d. signals uniformly distributed in the range (−1, 1), and v is Gaussian noise with zero mean and covariance matrix 0.1I. We employ the state-space approach to separate the mixed signals. The nonlinear activation function is chosen as φ(y) = y³. The initial values for the matrices A and B in the state equation are chosen as in the canonical controller form. The initial value for the matrix C is set to the null matrix or given randomly in the range (−1, 1), and D = I₃. A large number of simulations show that the state-space method can easily recover the source signals in the sense that W(z)H(z) = PΛ. Figure 2 illustrates the coefficients of the global transfer function G(z) = W(z)H(z) after 3000 iterations, where the (i, j)th sub-figure plots the coefficients of the transfer function G_ij(z) = Σ_{k=0}^∞ g_ijk z^{−k} up to order 50.

Figure 2: The coefficients of the global transfer function after 3000 iterations.

References
[1] S. Amari and A. Cichocki, "Adaptive blind signal processing: neural network approaches", Proceedings of the IEEE, 86(10):2026-2048, 1998.
[2] S. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal separation", Advances in Neural Information Processing Systems 1995 (Boston, MA: MIT Press, 1996), pp. 752-763.
[3] S. Amari, "Natural gradient works efficiently in learning", Neural Computation, vol. 10, pp. 251-276, 1998.
[4] A. J. Bell and T. J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution", Neural Computation, vol. 7, pp. 1129-1159, 1995.
[5] J.-F. Cardoso, "Blind signal separation: statistical principles", Proceedings of the IEEE, 86(10):2009-2025, 1998.
[6] J.-F. Cardoso and B.
Laheld, "Equivariant adaptive source separation", IEEE Trans. Signal Processing, vol. SP-43, pp. 3017-3029, Dec. 1996.
[7] A. Cichocki and R. Unbehauen, "Robust neural networks with on-line learning for blind identification and blind separation of sources", IEEE Trans. Circuits and Systems I: Fundamental Theory and Applications, vol. 43, no. 11, pp. 894-906, Nov. 1996.
[8] P. Comon, "Independent component analysis: a new concept?", Signal Processing, vol. 36, pp. 287-314, 1994.
[9] A. Gharbi and F. Salam, "Algorithm for blind signal separation and recovery in static and dynamic environments", IEEE Symposium on Circuits and Systems, Hong Kong, June 1997, pp. 713-716.
[10] O. L. R. Jacobs, "Introduction to Control Theory", Second Edition, Oxford University Press, 1993.
[11] T. W. Lee, A. J. Bell, and R. Lambert, "Blind separation of delayed and convolved sources", NIPS 9, MIT Press, Cambridge, MA, 1997, pp. 758-764.
[12] L.-Q. Zhang and A. Cichocki, "Blind deconvolution/equalization using state-space models", Proc. '98 IEEE Signal Processing Society Workshop on NNSP, pp. 123-131, Cambridge, 1998.
[13] S. Choi, A. Cichocki and S. Amari, "Blind equalization of SIMO channels via spatio-temporal anti-Hebbian learning rule", Proc. '98 IEEE Signal Processing Society Workshop on NNSP, pp. 93-102, Cambridge, 1998.

PART V IMPLEMENTATION
Recurrent Cortical Amplification Produces Complex Cell Responses

Frances S. Chance, Sacha B. Nelson, and L. F. Abbott
Volen Center and Department of Biology
Brandeis University
Waltham, MA 02454

Abstract

Cortical amplification has been proposed as a mechanism for enhancing the selectivity of neurons in the primary visual cortex. Less appreciated is the fact that the same form of amplification can also be used to de-tune or broaden selectivity. Using a network model with recurrent cortical circuitry, we propose that the spatial phase invariance of complex cell responses arises through recurrent amplification of feedforward input. Neurons in the network respond like simple cells at low gain and complex cells at high gain. Similar recurrent mechanisms may play a role in generating invariant representations of feedforward input elsewhere in the visual processing pathway.

1 INTRODUCTION

Synaptic input to neurons in the primary visual cortex is primarily recurrent, arising from other cortical cells. The dominance of this type of connection suggests that it may play an important role in cortical information processing. Previous studies proposed that recurrent connections amplify weak feedforward input to the cortex (Douglas et al., 1995) and selectively amplify tuning for specific stimulus characteristics, such as orientation or direction of movement (Douglas et al., 1995; Ben-Yishai et al., 1995; Somers et al., 1995; Sompolinsky and Shapley, 1997). Cortical cooling and shocking experiments provide evidence that there is cortical amplification through recurrent connections, but they do not show increases in orientation or direction selectivity as a result of this amplification (Ferster et al., 1996; Chung and Ferster, 1998). Recurrent connections can also decrease neuronal selectivity through the same form of amplification, generating responses that are insensitive to certain stimulus features.

Although the ability to sharpen tuning may be an important feature of cortical processing, the capacity to broaden tuning for particular stimulus attributes is also desirable. Neurons in the primary visual cortex can be divided into two classes based on their responses to visual stimuli such as counterphase and drifting sinusoidal gratings. Simple cells show tuning for orientation, spatial frequency, and spatial phase of a grating (Movshon et al., 1978a). Complex cells exhibit orientation and spatial frequency tuning, but are insensitive to spatial phase (Movshon et al., 1978b). A counterphase grating, s(x, t) = cos(Kx − Φ)cos(ωt), is one in which the spatial phase, Φ, and spatial frequency, K, are held constant but the contrast, s(x, t), varies sinusoidally in time at some frequency ω. In response to a counterphase grating, the activity of a simple cell oscillates at the same frequency as the stimulus, ω. A complex cell response is modulated at twice the frequency, 2ω. To create a drifting grating of frequency ν, s(x, t) = cos(Kx − νt), the spatial phase and spatial frequency are held constant but the grating is moved at velocity ν/K. A simple cell response to a drifting grating is highly modulated at frequency ν, while a complex cell response to a drifting grating is elevated but relatively unmodulated. The differences between complex and simple cell responses are a direct consequence of the complex cell's spatial phase insensitivity. Previous models of complex cells generate spatial-phase-invariant responses through converging sets of feedforward inputs with a wide range of spatial phase preferences but similar orientation and spatial frequency selectivities (Hubel and Wiesel, 1962; Mel et al., 1998). These models do not incorporate recurrent connections between complex cells, which are known to be particularly strong (Toyama et al., 1981).
We propose that the spatial phase invariance of complex cell responses can arise from a broadening of spatial phase tuning by cortical amplification (Chance et al., 1998). The model neurons exhibit simple cell behavior when weakly coupled and complex cell behavior when strongly coupled, suggesting that the two classes of neurons in the primary visual cortex may arise from the same basic cortical circuit.

2 THE MODEL

The activity of neuron i in the model network is characterized by a firing rate r_i. Each neuron sums feedforward and recurrent input and responds as described by the standard rate-model equation

τ_r dr_i/dt = I_i + Σ_j W_ij r_j − r_i.

I_i represents the feedforward input to cell i, W_ij is the weight of the synapse from neuron j to neuron i, and τ_r is a time constant. Previous studies have suggested that, for a neuron receiving many inputs, τ_r is small, closer to a synaptic time constant than to the membrane time constant (Ben-Yishai et al., 1995; Treves, 1993). Thus we choose τ_r = 1 ms. The feedforward input describes the response of a simple cell with a Gabor receptive field:

I_i = [∫ dx G_i(x) ∫₀^∞ dt′ H(t′) s(x, t − t′)]₊,

where s(x, t) represents the contrast function of the visual stimulus and the notation [ ]₊ indicates rectification. The temporal response function is (Adelson and Bergen, 1985)

H(t′) = exp(−αt′)((αt′)⁵/5! − (αt′)⁷/7!),

where we use α = 1/ms. The spatial filter is a Gabor function,

G_i(x) = exp(−x²/(2σ_i²)) cos(k_i x − φ_i),

where σ_i determines the spatial extent of the receptive field, k_i is the preferred spatial frequency, and φ_i is the preferred spatial phase. The values of φ_i are equally distributed over the interval [−180°, 180°). To give the neurons a realistic bandwidth, σ_i is chosen such that k_i σ_i = 2.5. Initially we consider a simplified case in which k_i = 1 for all cells.
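The rate equation above integrates readily with the forward Euler method; a sketch using the uniform recurrent coupling W_ij = g/(N − 1) (i ≠ j) described below, with a simplified static phase-tuned drive in place of the full Gabor-filtered stimulus (step sizes and drive are our choices):

```python
import numpy as np

def simulate_rates(I, W, tau_r=1.0, dt=0.1, steps=2000):
    """Euler integration of  tau_r dr_i/dt = I_i + sum_j W_ij r_j - r_i."""
    r = np.zeros(len(I))
    for _ in range(steps):
        r += (dt / tau_r) * (I + W @ r - r)
    return r

# Uniform coupling; the drive is a rectified cosine of each cell's
# preferred phase (a static stand-in for the Gabor feedforward input).
N, g = 50, 0.9
W = g / (N - 1) * (np.ones((N, N)) - np.eye(N))
phases = np.linspace(-np.pi, np.pi, N, endpoint=False)
I = np.maximum(np.cos(phases), 0.0)
r = simulate_rates(I, W)
```

At high gain the amplified uniform mode adds a large common component to every cell, so the relative modulation of the rates across preferred phases is much smaller than that of the feedforward drive, which is the de-tuning effect the paper describes.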
Later we consider the spatial frequency selectivity of neurons in the network and allow the value of k_i to range from 0 to 3.5 cycles/deg. In this paper we assume that the model network describes one orientation column of the primary visual cortex, and thus all neurons have the same orientation tuning. All stimuli are of the optimal orientation for the network. Spatial phase tuning is selectively broadened in the model because the strength of a recurrent connection between two neurons is independent of the spatial phase selectivities of their feedforward inputs. In the model with all k_i = 1, the recurrent input is determined by

W_ij = g/(N − 1)

for all i ≠ j. N is the number of cells in the network, and 0 ≤ g < g_max, where g_max is the largest value of g for which the network remains stable. In this case g_max = 1.

3 RESULTS

The steady-state solution of the rate-model equation is given by r_i = I_i + Σ_j W_ij r_j. To solve this equation, we express the rates and feedforward inputs in terms of a complete set of eigenvectors ξ_i^μ of the recurrent weight matrix, Σ_j W_ij ξ_j^μ = λ_μ ξ_i^μ for μ = 1, 2, ..., N, where the λ_μ are the eigenvalues. The solution is then

r_i = Σ_{μ=1}^N ξ_i^μ (Σ_j I_j ξ_j^μ) / (1 − λ_μ).

This equation displays the phenomenon of cortical amplification if one or more of the eigenvalues is near one. If we assume only one eigenvalue, λ₁, is close to one, the factor 1 − λ₁ in the denominator causes the μ = 1 term to dominate and we find

r_i ≈ ξ_i^1 (Σ_j I_j ξ_j^1) (1 − λ₁)⁻¹.

The input combination Σ_j I_j ξ_j^1 dominates the response, determining selectivity, and this mode is amplified by a factor 1/(1 − λ₁). We refer to this amplification factor as the cortical gain. In the case where W_ij = g/(N − 1) for i ≠ j, the largest eigenvalue is λ₁ = g and the corresponding eigenvector has all components equal to each other.
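The uniform-coupling claim is easy to check numerically (parameter values here are illustrative): the largest eigenvalue of W equals g, the other N − 1 eigenvalues are −g/(N − 1), and the cortical gain is 1/(1 − g).

```python
import numpy as np

N, g = 40, 0.95
W = g / (N - 1) * (np.ones((N, N)) - np.eye(N))

eigvals = np.linalg.eigvalsh(W)      # ascending order
lam1 = eigvals[-1]                   # largest eigenvalue, equal to g
gain = 1.0 / (1.0 - lam1)            # cortical gain 1/(1 - lambda_1)
```

With g = 0.95 this gives a gain of 20, the high-gain regime used for the complex cell responses in Figure 1.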
For g near one, the recurrent input to neuron i is then proportional to Σ_j [cos(Φ − φ_j)]_+ which, for large numbers of cells with uniformly placed preferred spatial phases φ_j, is approximately independent of Φ, the spatial phase of the stimulus. When g is near zero, the network is at low gain and the response of neuron i is roughly proportional to its feedforward input, [cos(Φ − φ_i)]_+, and is sensitive to spatial phase. The response properties of simple and complex cells to drifting and counterphase gratings are duplicated by the model neuron, as shown in Figure 1. For low gain (gain = 1, top panels of Figures 1A and 1B), the neuron acts as a simple cell and its activity is modulated at the same frequency as the stimulus (ω for counterphase gratings and ν for drifting gratings). At high gain (gain = 20), the neuron responds like a complex cell, exhibiting frequency doubling in the response to a counterphase grating (bottom panel of Figure 1A) and an elevated DC response to a drifting grating (bottom panel, Figure 1B). Intermediate gain (gain = 5) produces intermediate behavior (middle panels). The basis of this model is that the amplified mode is independent of spatial phase. If the amplified mode depends on spatial frequency or orientation, neurons at high gain can be selective for these attributes. To show that the model can retain selectivity for other
Recurrent Cortical Amplification Produces Complex Cell Responses
[Figure 1 plots: response traces vs. time (ms), 0-1000 ms.] Figure 1: The effects of recurrent input on the responses of a neuron in the model network. The responses of one neuron to a 2 Hz counterphase grating (A) and to a 2 Hz drifting grating (B) are shown for different levels of network gain. From top to bottom in A and B, the gain of the network is one, five, and twenty.
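The claimed phase invariance of the sum Σ_j [cos(Φ − φ_j)]_+ over a uniform ring of preferred phases is easy to check numerically; N below is an arbitrary choice.

```python
import numpy as np

# Check that sum_j [cos(Phi - phi_j)]_+ is nearly flat in the stimulus phase
# Phi when the preferred phases phi_j tile [-180, 180) uniformly.
N = 64
phi = np.linspace(-np.pi, np.pi, N, endpoint=False)

drives = np.array([np.maximum(0.0, np.cos(Phi - phi)).sum()
                   for Phi in np.linspace(-np.pi, np.pi, 33)])
# The sum approximates N/pi regardless of Phi (DC of a rectified cosine).
assert abs(drives.mean() - N / np.pi) < 0.05
assert drives.std() / drives.mean() < 1e-3
```

A single rectified cosine, by contrast, varies between 0 and 1 with Φ, which is the phase-sensitive low-gain behavior.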
stimulus characteristics while maintaining spatial phase insensitivity, we allowed the spatial frequency selectivity which each neuron receives from its feedforward input, k_i, to vary from neuron to neuron and also modified the recurrent weight matrix so that the strength of the connection between two neurons, i and j, depends on k_i − k_j. The dependence is modeled as a difference of Gaussians, so the recurrent weight matrix is now

W_ij = g/(N − 1) [ 2 exp( −(k_i − k_j)²/(2σ_c²) ) − exp( −(k_i − k_j)²/(2σ_s²) ) ].

Thus neurons that receive feedforward input tuned for similar spatial frequencies excite each other, and neurons that receive very differently tuned feedforward input inhibit each other. This produces complex cells that are tuned to a variety of spatial frequencies, but are still insensitive to spatial phase (see Figure 2). The spatial frequency tuning curve width is primarily determined by σ_c = 0.5 cycle/deg and σ_s = 1 cycle/deg. Cells within the same network do not have to exhibit the same level of gain. In previous figures, the gain of the network was determined by a parameter g that described the strength of all the connections between neurons. In Figure 3, the recurrent input to cell i is determined by W_ij = g_i/(N − 1), where the values of g_i are chosen randomly within the allowed range. The gain of each neuron depends on the value of g_i for that neuron. As shown in Figure 3, a range of complex and simple cell behaviors now coexist within the same network.

4 DISCUSSION

In the recurrent model we have presented, as in Hubel and Wiesel's feedforward model, the feedforward input to a complex cell arises from simple cells. Measurements by Alonso and
[Figure 2 plots: A) % max response vs. phase (deg), −180 to 180; B) response vs. spatial frequency (cyc/deg), 0 to 3.] Figure 2: Neurons in a high-gain network can be selective for spatial frequency while remaining insensitive to spatial phase.
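The difference-of-Gaussians coupling can be sketched directly from the formula above. σ_c and σ_s take the values given in the text; N and g are illustrative assumptions.

```python
import numpy as np

# DoG coupling in the preferred spatial frequencies: similar tuning excites,
# very different tuning inhibits.
N, g = 50, 0.95
sigma_c, sigma_s = 0.5, 1.0

def w(dk):
    """Connection strength as a function of the tuning difference dk = k_i - k_j."""
    return g / (N - 1) * (2.0 * np.exp(-dk**2 / (2 * sigma_c**2))
                          - np.exp(-dk**2 / (2 * sigma_s**2)))

assert w(0.0) > 0.0      # identical tuning: net excitation
assert w(2.0) < 0.0      # very different tuning: net inhibition

# Full weight matrix over randomly assigned preferred frequencies in [0, 3.5]
k = np.random.default_rng(0).uniform(0.0, 3.5, N)
W = w(k[:, None] - k[None, :])
assert np.allclose(W, W.T)   # symmetric, since it depends only on (dk)^2
```

Because σ_c < σ_s, the excitatory center is narrower than the inhibitory surround, so the sign of w(dk) flips as |dk| grows.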
Both spatial phase and spatial frequency tuning are included in the feedforward input. A) The spatial phase tuning curves of three representative neurons from a high-gain network. B) The spatial frequency tuning curves of the same three neurons as in A.

Martinez (1998) support this circuitry. However, direct excitatory input to complex cells arising from the LGN has also been reported (Hoffman and Stone, 1971; Singer et al., 1975; Ferster and Lindstrom, 1983). Supporting these measurements is evidence that certain stimuli can excite complex cells without strong excitation of simple cells (Hammond and MacKay, 1975, 1977; Movshon, 1975) and also that complex cells still respond when simple cells are silenced (Malpeli, 1983; Malpeli et al., 1986; Mignard and Malpeli, 1991). In accordance with this, the weak feedforward simple cell input in the recurrent model could probably be replaced by direct LGN input, as in the feedforward model of Mel et al. (1998). The proposed model makes definite predictions about complex cell responses. If the phase invariance of complex cell responses is due to recurrent interactions, manipulations that modify the balance between feedforward and recurrent drive should change the nature of the responses in a predictable manner. The model predicts that blocking local excitatory connections should turn complex cells into simple cells. Conversely, manipulations that increase cortical gain should make simple cells act more like complex cells. One way to increase cortical gain may be to block or partially block inhibition, since this increases the influence of excitatory recurrent connections. Experiments along these lines have been performed, and blockade of inhibition does indeed cause simple cells to take on complex cell properties (Sillito, 1975; Shulz et al., 1993).
In a previous study, Hawken, Shapley, and Grosof (1996) noted that the temporal frequency tuning curves for complex cells are narrower for counterphase stimuli than for drifting stimuli. The recurrent model reproduces this result as long as the integration of synaptic inputs depends on temporal frequency. Such a dependence is provided, for example, by short-term synaptic depression (Chance et al., 1998). Hubel and Wiesel's feedforward model (1962) does not reproduce this effect, even with synaptic depression at the synapses. We have presented a model of primary visual cortex in which complex cell response characteristics arise from recurrent amplification of simple cell responses. The complex cell responses in the high gain regime arise because recurrent connections selectively deamplify selectivity for spatial phase. Thus recurrent connections can act to generate invariant representations of input data. A similar mechanism could be used to produce responses that are independent of other stimulus attributes, such as size or orientation. Given the ubiquity of invariant representations in the visual pathway, this mechanism may have widespread use.

[Figure 3 plots: response traces vs. time (ms), 0-1000 ms, for four neurons.] Figure 3: Responses to a 4 Hz drifting grating of four neurons from a large network consisting of a mixture of simple and complex cells. The two traces on the left represent simple cells and the two traces on the right represent complex cells.

Acknowledgements

Research supported by the Sloan Center for Theoretical Neurobiology at Brandeis University, the National Science Foundation (DMS-95-03261), the W.M. Keck Foundation, the National Eye Institute (EY-11116), and the Alfred P. Sloan Foundation.

References

Adelson, E. H. & Bergen, J. R. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A 2, 284-299 (1985)
Alonso, J-M.
& Martinez, L. M. Functional connectivity between simple cells and complex cells in cat striate cortex. Nature Neuroscience 1, 395-403 (1998)
Ben-Yishai, R., Bar-Or, L. & Sompolinsky, H. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA 92, 3844-3848 (1995)
Chance, F. S., Nelson, S. B. & Abbott, L. F. Complex cells as cortically amplified simple cells. (submitted)
Chung, S. & Ferster, D. Strength and orientation tuning of the thalamic input to simple cells revealed by electrically evoked cortical suppression. Neuron 20, 1177-1189 (1998)
Douglas, R. J., Koch, C., Mahowald, M., Martin, K. A. C. & Suarez, H. H. Recurrent excitation in neocortical circuits. Science 269, 981-985 (1995)
Ferster, D., Chung, S. & Wheat, H. Orientation selectivity of thalamic input to simple cells of cat visual cortex. Nature 380, 249-252 (1996)
Ferster, D. & Lindstrom, S. An intracellular analysis of geniculo-cortical connectivity in area 17 of the cat. J. Physiol. (Lond) 342, 181-215 (1983)
Hammond, P. & MacKay, D. M. Differential responses of cat visual cortical cells to textured stimuli. Exp. Brain Res. 22, 427-430 (1975)
Hammond, P. & MacKay, D. M. Differential responsiveness of simple and complex cells in cat striate cortex to visual texture. Exp. Brain Res. 30, 275-296 (1977)
Hawken, M. J., Shapley, R. M. & Grosof, D. H. Temporal-frequency selectivity in monkey visual cortex. Vis. Neurosci. 13, 477-492 (1996)
Hoffman, K. P. & Stone, J. Conduction velocity of afferents of cat visual cortex: a correlation with cortical receptive field properties. Brain Res. 32, 460-466 (1971)
Hubel, D. H. & Wiesel, T. N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 160, 106-154 (1962)
Malpeli, J. G. Activity of cells in area 17 of the cat in absence of input from layer A of lateral geniculate nucleus. J. Neurophysiol. 49, 595-610 (1983)
Malpeli, J. G., Lee, C., Schwark, H. D. & Weyand, T.
G. Cat area 17. I. Pattern of thalamic control of cortical layers. J. Neurophysiol. 56, 1062-1073 (1986)
Mel, B. W., Ruderman, D. L. & Archie, K. A. Translation-invariant orientation tuning in visual complex cells could derive from intradendritic computations. J. Neurosci. 18, 4325-4334 (1998)
Mignard, M. & Malpeli, J. G. Paths of information flow through visual cortex. Science 251, 1249-1251 (1991)
Movshon, J. A. The velocity tuning of single units in cat striate cortex. J. Physiol. 249, 445-468 (1975)
Movshon, J., Thompson, I. & Tolhurst, D. Spatial summation in the receptive fields of simple cells in the cat's striate cortex. J. Physiol. (Lond) 283, 53-77 (1978)
Movshon, J., Thompson, I. & Tolhurst, D. Receptive field organization of complex cells in cat's striate cortex. J. Physiol. (Lond) 283, 79-99 (1978)
Shulz, D. E., Bringuier, B. & Fregnac, Y. A complex-like structure of simple visual cortical receptive fields is masked by GABA-A intracortical inhibition. Soc. for Neurosci. Abs. 19, 628 (1993)
Sillito, A. M. The contribution of inhibitory mechanisms to the receptive field properties of neurones in the striate cortex of the cat. J. Physiol. (Lond) 250, 305-329 (1975)
Singer, W., Tretter, F. & Cynader, M. Organization of cat striate cortex: a correlation of receptive-field properties with afferent and efferent connections. J. Neurophysiol. 38, 1080-1098 (1975)
Somers, D. C., Nelson, S. B. & Sur, M. An emergent model of orientation selectivity in cat visual cortical simple cells. J. Neurosci. 15, 5448-5465 (1995)
Sompolinsky, H. & Shapley, R. New perspectives on the mechanisms for orientation selectivity. Current Opinion in Neurobiology 7, 514-522 (1997)
Toyama, K., Kimura, M. & Tanaka, K. Organization of cat visual cortex as investigated by cross-correlation technique. J. Neurophysiol. 46, 202-214 (1981)
Treves, A. Mean-field analysis of neuronal spike dynamics. Network 4, 259-284 (1993)
1998
Viewing Classifier Systems as Model Free Learning in POMDPs

Akira Hayashi and Nobuo Suematsu
Faculty of Information Sciences, Hiroshima City University
3-4-1 Ozuka-higashi, Asaminami-ku, Hiroshima, 731-3194 Japan
{akira,suematsu}@im.hiroshima-cu.ac.jp

Abstract

Classifier systems are now viewed as disappointing because of their problems, such as the rule strength vs rule set performance problem and the credit assignment problem. In order to solve the problems, we have developed a hybrid classifier system: GLS (Generalization Learning System). In designing GLS, we view CSs as model free learning in POMDPs and take a hybrid approach to finding the best generalization, given the total number of rules. GLS uses the policy improvement procedure by Jaakkola et al. for a locally optimal stochastic policy when a set of rule conditions is given. GLS uses GA to search for the best set of rule conditions.

1 INTRODUCTION

Classifier systems (CSs) (Holland 1986) have been among the most used in reinforcement learning. Some of the advantages of CSs are (1) they have a built-in feature (the use of don't care symbols "#") for input generalization, and (2) the complexity of policies can be controlled by restricting the number of rules. In spite of these attractive features, CSs are now viewed as somewhat disappointing because of their problems (Wilson and Goldberg 1989; Westerdale 1997). Among them are the rule strength vs rule set performance problem, the definition of the rule strength parameter, and the credit assignment (BBA vs PSP) problem. In order to solve the problems, we have developed a hybrid classifier system: GLS (Generalization Learning System). GLS is based on the recent progress of RL research in partially observable Markov decision processes (POMDPs). In POMDPs, the environments are really Markovian, but the agent cannot identify the state from the current observation. It may be due to noisy sensing or perceptual aliasing.
Perceptual aliasing occurs when the sensor returns the same observation in multiple states. Note that even for a completely observable MDP, the use of don't care symbols for input generalization will make the process as if it were partially observable. In designing GLS, we view CSs as RL in POMDPs and take a hybrid approach to finding the best generalization, given the total number of rules. GLS uses the policy improvement procedure in Jaakkola et al. (1994) for a locally optimal stochastic policy when a set of rule conditions is given. GLS uses GA to search for the best set of rule conditions. The paper is organized as follows. Since CS problems are easier to understand from the GLS perspective, we introduce Jaakkola et al. (1994), propose GLS, and then discuss CS problems.

2 LEARNING IN POMDPS

Jaakkola et al. (1994) consider POMDPs with perceptual aliasing and memoryless stochastic policies. Following the authors, let us call the observations messages. Therefore, a policy is a mapping from messages to probability distributions (PDs) over the actions. Given a policy π, the value of a state s, V^π(s), is defined for POMDPs just as for MDPs. Then, the value of a message m under policy π, V^π(m), can be defined as follows:

V^π(m) = Σ_{s∈S} P^π(s|m) V^π(s)   (1)

where P^π(s|m) is the probability that the state is s when the message is m under the policy π. Then, the following holds:

V^π(s) = lim_{N→∞} Σ_{t=1}^{N} E{ R(s_t, a_t) − R̄ | s_1 = s }   (2)

V^π(m) = E{ V(s) | s → m }   (3)

where s_t and a_t refer to the state and the action taken at the t-th step respectively, R(s_t, a_t) is the immediate reward at the t-th step, and R̄ is the (unknown) gain (i.e., the average reward per step). s → m refers to all the instances where m is observed in s, and E{· | s → m} is a Monte-Carlo expectation. In order to compute E{V(s) | s → m}, Jaakkola et al. showed a Monte-Carlo procedure:

V_t(m) = (1/k) {  R̄_{t_1} + γ_{1,1} R̄_{t_1+1} + γ_{1,2} R̄_{t_1+2} + … + γ_{1,t−t_1} R̄_t
                + R̄_{t_2} + γ_{2,1} R̄_{t_2+1} + γ_{2,2} R̄_{t_2+2} + … + γ_{2,t−t_2} R̄_t
                + …
                + R̄_{t_k} + γ_{k,1} R̄_{t_k+1} + … + γ_{k,t−t_k} R̄_t }   (4)

where t_k denotes the time step corresponding to the k-th occurrence of the message m, R̄_t = R(s_t, a_t) − R̄ for every t, and γ_{k,τ} indicates the discounting at the τ-th step in the k-th sequence. By estimating R̄ and by suitably setting γ_{k,τ}, V_t(m) converges to V^π(m). Q^π(m, a), the Q-value of the message m for the action a under the policy π, is also defined and computed in the same way. Jaakkola et al. have developed a policy improvement method:

Step 1 Evaluate the current policy π by computing V^π(m) and Q^π(m, a) for each m and a.

Step 2 Test for any m whether max_a Q^π(m, a) > V^π(m) holds. If not, then return π.

Step 3 For each m and a, define π¹(a|m) as follows: π¹(a|m) = 1.0 when a = argmax_a Q^π(m, a), π¹(a|m) = 0.0 otherwise. Then, define π^ε as π^ε(a|m) = (1 − ε) π(a|m) + ε π¹(a|m).

Step 4 Set the new policy as π = π^ε, and go to Step 1.

3 GLS

Each rule in GLS consists of a condition part, an action part, and an evaluation part: Rule = (Condition, Action, Evaluation). The condition part is a string c over the alphabet {0, 1, #}, and is compared with a binary sensor message. # is a don't care symbol, and matches 0 and 1. When the condition c matches the message, the action is randomly selected using the PD in the action part: Action = (p(a_1|c), p(a_2|c), …, p(a_|A||c)), Σ_{j=1}^{|A|} p(a_j|c) = 1.0, where |A| is the total number of actions. The evaluation part records the value of the condition, V(c), and the Q-values of the condition-action pairs, Q(c, a): Evaluation = (V(c), Q(c, a_1), Q(c, a_2), …, Q(c, a_|A|)). Each rule set consists of N rules, {Rule_1, Rule_2, …, Rule_N}. N, the total number of rules in a rule set, is a design parameter to control the complexity of policies. All the rules except the last one are called standard rules. The last rule, Rule_N, is a special rule which is called the default rule.
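The Monte-Carlo estimate of Eq. (4) above can be sketched as follows. This is a minimal illustration, not the authors' code: the paper leaves the discount schedule γ_{k,τ} abstract, so a simple geometric β^τ is our illustrative choice.

```python
# Average, over the occurrences of message m, the discounted sums of
# mean-adjusted rewards that follow each occurrence (Eq. (4)).

def value_of_message(rewards, occurrences, avg_reward, beta=0.99):
    """rewards[t] = R(s_t, a_t); occurrences = steps at which m was observed."""
    if not occurrences:
        return 0.0
    total = 0.0
    for t_k in occurrences:
        total += sum(beta**tau * (rewards[t] - avg_reward)
                     for tau, t in enumerate(range(t_k, len(rewards))))
    return total / len(occurrences)

# Tiny example: constant reward 1, estimated gain 0, beta = 0.5.
# Tails are 1 + 1/2 + 1/4 + 1/8 + 1/16 = 1.9375 and 1 + 1/2 + 1/4 = 1.75.
v = value_of_message([1.0] * 5, occurrences=[0, 2], avg_reward=0.0, beta=0.5)
assert abs(v - (1.9375 + 1.75) / 2) < 1e-12
```

In GLS the same computation is applied with a rule condition c in place of the message m, yielding V(c) and Q(c, a).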
The condition part of the default rule is a string of #'s and matches any message. Learning in GLS proceeds as follows: (1) Initialization: randomly generate an initial population of M rule sets; (2) Policy Evaluation and Improvement: for each rule set, repeat a policy evaluation and improvement cycle for a suboptimal policy, then record the gain of the policy for each rule set; (3) Genetic Algorithm: use the gain of each rule set as its fitness measure and produce a new generation of rule sets; (4) Repeat: repeat from the policy evaluation and improvement step with the new generation of rule sets. In (2) Policy Evaluation and Improvement, GLS repeats the following cycle for each rule set.

Step 1 Set ε sufficiently small. Set t_max sufficiently large.

Step 2 Repeat for 1 ≤ t ≤ t_max:
1. Make an observation of the environment and receive a message m_t from the sensor.
2. From all the rules whose condition matches the message m_t, find the rule whose condition is the most specific.¹ Let us call this rule the active rule.
3. Select the next action a_t randomly according to the PD in the action part of the active rule, execute the action, and receive the reward R(s_t, a_t) from the environment. (The state s_t is not observable.)
4. Update the current estimate of the gain R̄ from its previous estimate and R(s_t, a_t). Let R̄_t = R(s_t, a_t) − R̄. For each rule, consider its condition c_i as (a generalization of) a message, and update its evaluation part V(c_i) and Q(c_i, a) (a ∈ A) using Eq. (4).

Step 3 Check whether the following holds. If not, exit: ∃i (1 ≤ i ≤ N), max_a Q(c_i, a) > V(c_i).

Step 4 Improve the current policy according to the method in the previous section, update the action part of the corresponding rules, and go to Step 2.

¹The most specific rule has the least number of #'s. This is intended only for saving the number of rules.

GLS extracts the condition parts of all the rules in a rule set and concatenates them to form a string. The string will be an individual to be manipulated by the genetic algorithm (GA). The genetic algorithm used in GLS is a fairly standard one. GLS combines the SGA (the simple genetic algorithm) (Goldberg 1989) with the elitist keeping strategy. The SGA is composed of three genetic operators: selection, crossover, and mutation. The fitness proportional selection and the single-point crossover are used. The three operators are applied to an entire population at each generation. Since the original SGA does not consider #'s in the rule conditions, we modified SGA as follows. When GLS randomly generates an initial population of rule sets, it generates # at each allele position in rule conditions according to the probability P#.

4 CS PROBLEMS AND GLS

In the history of classifier systems, there were two quite different approaches: the Michigan approach (Holland and Reitman 1978) and the Pittsburgh (Pitt) approach (Dejong 1988). In the Michigan approach, each rule is considered as an individual and the rule set as the population in GA. Each rule has its strength parameter, which is based on its future payoff and is used as the fitness measure in GA. These aspects of the approach cause many problems. One is the rule strength vs rule set performance problem. Can we collect only strong rules and get the best rule set performance? Not necessarily. A strong rule may cooperate with weak rules to increase its payoff. Then, how can we define and compute the strength parameter for the best rule set performance? In spite of its problems, this approach is now so much more popular than the other that when people simply say classifier systems, they refer to Michigan-type classifier systems. In the Pitt approach, the problems of the Michigan approach are avoided by requiring GA to evaluate a whole rule set.
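The matching step above (Step 2.2) can be sketched as follows. The tuple representation and helper names are our own illustration; the example rules are the N = 4 set learned for the GREF1 world (Figure 1).

```python
# GLS rule matching: '#' is a don't-care symbol, and among the matching rules
# the most specific one (fewest '#'s) becomes the active rule.

def matches(condition, message):
    return all(c == '#' or c == b for c, b in zip(condition, message))

def active_rule(rules, message):
    """rules: (condition, action) pairs, with the all-'#' default rule last."""
    candidates = [r for r in rules if matches(r[0], message)]
    return min(candidates, key=lambda r: r[0].count('#'))   # most specific

rules = [("0101", "a"), ("1010", "c"), ("##11", "d"), ("####", "b")]
assert active_rule(rules, "0101") == ("0101", "a")   # exact match wins
assert active_rule(rules, "0111") == ("##11", "d")
assert active_rule(rules, "1000") == ("####", "b")   # only the default matches
```

Because the default rule matches everything, `candidates` is never empty, and preferring specific rules is what lets a small rule set form a default hierarchy.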
In the Pitt approach, a rule set is considered as an individual and multiple rule sets are kept as the population. The problem of the Pitt approach is its computational difficulty. GLS can be considered as a combination of the Michigan and Pitt approaches. GA in GLS works as in the Pitt approach: it evaluates a total rule set, and completely avoids the rule strength vs rule set performance problem of the Michigan approach. Like Michigan-type CSs, GLS evaluates each rule to improve the policy. This alleviates the computational burden of the Pitt approach. Moreover, GLS evaluates each rule in a more formal and sound way than the Michigan approach. The values V(c) and Q(c, a) are defined on the basis of POMDPs, and the policy improvement procedure using the values is guaranteed to find a local maximum. Westerdale (1997) has recently made an excellent analysis of the problematic behaviors of Michigan-type CSs. Two popular methods for credit assignment in CSs are the bucket brigade algorithm (BBA) (Holland 1986) and the profit sharing plan (PSP) (Grefenstette 1988). Westerdale shows that BBA does not work in POMDPs. He insists that PSP with infinite time span is necessary for the right credit assignment, although he does not show how to carry out the computation. GLS does not use BBA or PSP. GLS uses the Monte Carlo procedure, Eq. (4), to compute the value of each condition-action pair. The series in Eq. (4) is slow to converge, but this is the cost we have to pay for the right credit assignment in POMDPs. Westerdale points out another CS problem. He claims that a distinction must be made between the availability and the payoff of rules. We agree with him. As he says, if the expected payoff of Rule 1 is twice as much as that of Rule 2, then we want to always choose Rule 1. GLS makes the distinction. The probability of a stochastic policy π(a|c) in GLS corresponds to the availability, and the value of a condition-action pair Q(c, a) corresponds to the payoff.
The Samuel system (Grefenstette et al. 1990) can also be considered as a combination of the Michigan and Pitt approaches. Samuel is a highly sophisticated system which has lots of features. We conjecture, however, that Samuel is not free from the CS problems which Westerdale has analyzed. This is because Samuel uses PSP for credit assignment, and Samuel uses the payoff of each rule for action selection and does not make a distinction between the availability and the payoff of rules. XCS (Wilson 1995) seems to be an exceptionally reliable Michigan-type CS. In XCS, each rule's fitness is based not on its future payoff but on the prediction accuracy of its future payoff (XCS uses BBA for credit assignment). Wilson reports that XCS's population tends to form a complete and accurate mapping from sensor messages and actions to payoff predictions. We conjecture that XCS tries to build the most general Markovian model of the environment. Therefore, it will be difficult to apply XCS when the environment is not Markovian, or when we cannot afford the number of rules needed to build a Markovian model of the environment, even if the environment itself is Markovian. As we will see in the next section, GLS is intended exactly for these situations. Kaelbling et al. (1996) survey methods for input generalization when reward is delayed. The methods use a function approximator to represent the value function by mapping a state description to a value. Since they use value iteration or Q-learning anyway, it is difficult to apply the methods when the generalization violates the Markov assumption and induces a POMDP.

5 EXPERIMENTS

We have tested GLS with some of the representative problems in the CS literature. Fig. 1 shows the GREF1 world (Grefenstette 1987). In the GREF1 world, we used GLS to find the smallest rule set which is necessary for optimal performance.
Since this is not a POMDP but an MDP, the optimal policy can easily be learned when we have a corresponding rule for each of the 16 states. However, when the total number of rules is less than the number of states, the environment looks like a POMDP to the learning agent, even if the environment itself is an MDP. The graph shows how the gain of the best rule set in the population changes with the generation. We can see from the figure that four rules are enough for optimal performance. Also note that the saving of rules is achieved by selecting the most specific matching rule as the active rule. A rule set with this rule selection is called a default hierarchy in the CS literature.

[Figure 1 plots: gain of the best rule set vs. generations (0-40) for N = 2, 3, 4, with the optimal gain marked.] Figure 1: LEFT: GREF1 World. States {0, 1, 2, 3} are the start states and states {12, 13, 14, 15} are the end states. In each state, the agent gets the state number (4 bits) as a message, and chooses an action a, b, c, or d. When the agent reaches the end states, he receives reward 1000 in state 13, but reward 0 in the other states. Then the agent is put in one of the start states with equal probability. We added 10% action errors to make the process ergodic. When an action error occurs, the agent moves to one of the 16 states with equal probability. RIGHT: Gain of the best rule set. Parameters: t_max = 10000, ε = 0.10, M = 10, N = 2, 3, 4, P# = 0.33. For N = 4, the best rule set at the 40th generation was {if 0101 (State 5) then a 1.0, if 1010 (State 10) then c 1.0, if ##11 (States 3, 7, 11, 15) then d 1.0, if #### (Default Rule) then b 1.0}.

[Maze diagram and Figure 2 plot: gain of the best rule set vs. generations (0-90) for N = 5, 6.] Figure 2: LEFT: McCallum's Maze.
We show the state numbers on the left, and the messages on the right. States 8 and 9 are the start states, and state G is the goal state. In each state, the agent receives a sensor message which is 4 bits long. Each bit in the message tells whether a wall exists in each of the four directions. From each state, the agent moves to one of the adjacent states. When the agent reaches the goal state, he receives reward 1000. The agent is then put in one of the start states with equal probability. RIGHT: Gain of the best rule set. Parameters: t_max = 50000, ε = 0.10, M = 10, N = 5, 6, P# = 0.33.

Fig. 2 is a POMDP known as McCallum's Maze (McCallum 1993). Thanks to the use of stochastic policies, GLS achieves near optimal gain for memoryless policies. Note that no memoryless deterministic policy can take the agent to the goal for this problem. We have seen GLS's generalization capability for an MDP in the GREF1 world, and the advantage of stochastic policies for a POMDP in McCallum's maze. In Woods7 (Wilson 1994), we attempt to test GLS's generalization capability for a POMDP. See Fig. 3. Since each sensor message is 16 bits long, and the conditions of GLS rules can have either 0, 1, or # for each of the 16 bits, there are 3^16 possible conditions in total. When we notice that there are only 92 different actual sensor messages in the environment, it seems quite difficult to discover them only by using GA. In fact, when we ran GLS for the first time, the standard rules very rarely matched the messages and the default rule took over most of the time. In order to avoid the no-matching-rule problem, we made the number of rules in a rule set large (N = 100) and increased P# from 0.33 in the previous problems to 0.70. The problem was independently attacked by other methods. Wilson applied his ZCS, a zeroth level classifier system, to Woods7 (Wilson 1994). The gain was 0.20. ZCS has a special covering procedure to turn around the no-matching-rule problem.
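The Woods7 sensing scheme described above (two bits per adjacent cell, eight cells, 16 bits per message) can be sketched as follows. The particular 2-bit codes are our assumption, not taken from the paper; only the message length and the counts in the assertions come from the text.

```python
# Hedged sketch of Woods7 sensing and the size of the condition space.
CODES = {'.': '00', 'O': '01', 'F': '11'}   # empty, stone, food (hypothetical)

def sense(neighbors):
    """neighbors: contents of the 8 adjacent cells, e.g. '...O..F.'"""
    return ''.join(CODES[c] for c in neighbors)

msg = sense('...O..F.')
assert len(msg) == 16
# Generalized conditions over {0,1,#}: 3**16 possibilities, versus only 92
# distinct actual messages in the environment.
assert 3 ** 16 == 43046721
```

The gap between 3^16 conditions and 92 messages is why randomly initialized standard rules almost never match, motivating the large N and high P# used for this experiment.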
The covering procedure generates a rule which matches a message when none of the current rules matches the message. We expect further improvement in the gain if we equip GLS with some covering procedure.

6 SUMMARY

In order to solve the CS problems, such as the rule strength vs rule set performance problem and the credit assignment problem, we have developed a hybrid classifier system: GLS. We notice that generalization often leads to state aliasing. Therefore, in designing GLS, we view CSs as model free learning in POMDPs and take a hybrid approach to finding the best generalization, given the total number of rules. GLS uses the policy improvement procedure by Jaakkola et al. for a locally optimal stochastic policy when a set of rule conditions is given. GLS uses GA to search for the best set of rule conditions.
[Figure 3, left: the Woods7 map, a grid of empty cells '.', stones 'O', and food 'F' (garbled in this copy). Right: gain of the best rule set vs. generations (0-100); the y-axis spans 0.14-0.24.] Figure 3: LEFT: Woods7. Each cell is either empty ".", contains a stone "O", or contains food "F". The cells which contain a stone are not passable, and the cells which contain food are goals. In each cell, the agent receives a 2 × 8 = 16 bit long sensor message, which tells the contents of the eight adjacent cells. From each cell, the agent can move to one of the eight adjacent cells. When the agent reaches a cell which contains food, he receives reward 1. The agent is then put in one of the empty cells with equal probability. RIGHT: Gain of the best rule set. Parameters: t_max = 10000, ε = 0.10, M = 10, N = 100, P# = 0.70.

References

Dejong, K. A. (1988). Learning with genetic algorithms: An overview. Machine Learning, 3:121-138.
Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley.
Grefenstette, J. J. (1987). Multilevel credit assignment in a genetic learning system. In Proc. Second Int. Conf. on Genetic Algorithms, pp. 202-209.
Grefenstette, J. J. (1988). Credit assignment in rule discovery systems based on genetic algorithms. Machine Learning, 3:225-245.
Grefenstette, J. J., C. L. Ramsey, and A. C. Schultz (1990). Learning sequential decision rules using simulation and competition. Machine Learning, 5:355-381.
Holland, J. H. (1986). Escaping brittleness: the possibilities of general purpose learning algorithms applied to parallel rule-based systems. In Machine Learning II, pp. 593-623. Morgan Kaufmann.
Holland, J. H. and J. S. Reitman (1978). Cognitive systems based on adaptive algorithms. In D. A.
Waterman and F. Hayes-Roth (Eds.), Pattern-Directed Inference Systems. Academic Press.
Jaakkola, T., S. P. Singh, and M. I. Jordan (1994). Reinforcement learning algorithm for partially observable Markov decision problems. In Advances in Neural Information Processing Systems 7, pp. 345-352.
Kaelbling, L. P., M. L. Littman, and A. W. Moore (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285.
McCallum, R. A. (1993). Overcoming incomplete perception with utile distinction memory. In Proc. Tenth Int. Conf. on Machine Learning, pp. 190-196.
Westerdale, T. H. (1997). Classifier systems - no wonder they don't work. In Proc. Second Annual Genetic Programming Conference, pp. 529-537.
Wilson, S. W. (1994). ZCS: A zeroth level classifier system. Evolutionary Computation, 2(1):1-18.
Wilson, S. W. (1995). Classifier fitness based on accuracy. Evolutionary Computation, 3(2):149-175.
Wilson, S. W. and D. E. Goldberg (1989). A critical review of classifier systems. In Proc. Third Int. Conf. on Genetic Algorithms, pp. 244-255.
1998
Reinforcement Learning based on On-line EM Algorithm

Masa-aki Sato†  Shin Ishii‡†
†ATR Human Information Processing Research Laboratories, Seika, Kyoto 619-0288, Japan. masaaki@hip.atr.co.jp
‡Nara Institute of Science and Technology, Ikoma, Nara 630-0101, Japan. ishii@is.aist-nara.ac.jp

Abstract

In this article, we propose a new reinforcement learning (RL) method based on an actor-critic architecture. The actor and the critic are approximated by Normalized Gaussian Networks (NGnet), which are networks of local linear regression units. The NGnet is trained by the on-line EM algorithm proposed in our previous paper. We apply our RL method to the task of swinging up and stabilizing a single pendulum and to the task of balancing a double pendulum near the upright position. The experimental results show that our RL method can be applied to optimal control problems having continuous state/action spaces and that it achieves good control with a small number of trial-and-error episodes.

1 INTRODUCTION

Reinforcement learning (RL) methods (Barto et al., 1990) have been successfully applied to various Markov decision problems having finite state/action spaces, such as the backgammon game (Tesauro, 1992) and a complex task in a dynamic environment (Lin, 1992). On the other hand, applications to continuous state/action problems (Werbos, 1990; Doya, 1996; Sofge & White, 1992) are much more difficult than the finite state/action cases. Good function approximation methods and fast learning algorithms are crucial for successful applications. In this article, we propose a new RL method that has both of these features. The method is based on an actor-critic architecture (Barto et al., 1983), although the detailed implementations of the actor and the critic are quite different from those in the original actor-critic model.
The actor and the critic in our method estimate a policy and a Q-function, respectively, and are approximated by Normalized Gaussian Networks (NGnet) (Moody & Darken, 1989). The NGnet is a network of local linear regression units. The model softly partitions the input space by using normalized Gaussian functions, and each local unit linearly approximates the output within its partition. As pointed out by Sutton (1996), local models such as the NGnet are more suitable than global models such as multi-layered perceptrons for avoiding serious learning interference in on-line RL processes. The NGnet is trained by the on-line EM algorithm proposed in our previous paper (Sato & Ishii, 1998). It was shown that this on-line EM algorithm is faster than a gradient descent algorithm. In the on-line EM algorithm, the positions of the local units can be adjusted according to the input and output data distribution. Moreover, unit creation and unit deletion are performed according to the data distribution. Therefore, the model can be adapted to dynamic environments in which the input and output data distribution changes with time (Sato & Ishii, 1998).

We have applied the new RL method to optimal control problems for deterministic nonlinear dynamical systems. The first experiment is the task of swinging up and stabilizing a single pendulum with a limited torque (Doya, 1996). The second experiment is the task of balancing a double pendulum where a torque is applied only to the first pendulum. Our RL method based on the on-line EM algorithm demonstrated good performance in these experiments.

2 NGNET AND ON-LINE EM ALGORITHM

In this section, we review the on-line EM algorithm for the NGnet proposed in our previous paper (Sato & Ishii, 1998). The NGnet (Moody & Darken, 1989), which transforms an N-dimensional input vector x to a D-dimensional output vector y, is defined by the following equations:

y = \sum_{i=1}^{M} ( G_i(x) / \sum_{j=1}^{M} G_j(x) ) (W_i x + b_i)   (1a)
G_i(x) = (2\pi)^{-N/2} |\Sigma_i|^{-1/2} \exp[ -(1/2)(x - \mu_i)' \Sigma_i^{-1} (x - \mu_i) ]   (1b)

M denotes the number of units, and the prime (') denotes a transpose.
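Concretely, the forward computation (1) can be sketched as follows (a minimal numpy sketch; the Gaussian form of G_i is as in the text, all variable names are ours, and the constant (2*pi)^{-N/2} is omitted because it cancels in the normalization):

```python
import numpy as np

def ngnet_forward(x, mus, Sigmas, Ws, bs):
    """NGnet output (eq. 1): normalized Gaussian activations softly
    partition the input space, blending local linear models W_i x + b_i."""
    log_g = np.array([
        -0.5 * (x - mu) @ np.linalg.solve(Sigma, x - mu)
        - 0.5 * np.log(np.linalg.det(Sigma))
        for mu, Sigma in zip(mus, Sigmas)
    ])
    g = np.exp(log_g - log_g.max())   # numerically stable activations
    n = g / g.sum()                   # normalized Gaussian weights
    local_out = np.array([W @ x + b for W, b in zip(Ws, bs)])
    return n @ local_out

# Near the center of unit 0 (and far from unit 1), the output is
# essentially unit 0's local linear prediction.
mus = [np.zeros(2), 10.0 * np.ones(2)]
Sigmas = [np.eye(2), np.eye(2)]
Ws = [np.eye(2), -np.eye(2)]
bs = [np.zeros(2), np.ones(2)]
y = ngnet_forward(np.zeros(2), mus, Sigmas, Ws, bs)
```

Because the partition is soft, the output interpolates smoothly between the local linear experts as x moves between unit centers.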
G_i(x) is an N-dimensional Gaussian function, which has an N-dimensional center \mu_i and an (N x N)-dimensional covariance matrix \Sigma_i. W_i and b_i are a (D x N)-dimensional linear regression matrix and a D-dimensional bias vector, respectively. Subsequently, we use the notations \tilde{W}_i \equiv (W_i, b_i) and \tilde{x}' \equiv (x', 1). The NGnet can be interpreted as a stochastic model, in which a pair of an input and an output, (x, y), is a stochastic event. For each event, a unit index i \in {1, ..., M} is assumed to be selected, which is regarded as a hidden variable. The stochastic model is defined by the probability distribution for a triplet (x, y, i), which is called a complete event:

P(x, y, i | \theta) = (2\pi)^{-(D+N)/2} \sigma_i^{-D} |\Sigma_i|^{-1/2} M^{-1} \exp[ -(1/2)(x - \mu_i)' \Sigma_i^{-1} (x - \mu_i) - (1/(2\sigma_i^2)) (y - \tilde{W}_i \tilde{x})^2 ].   (2)

Here, \theta \equiv {\mu_i, \Sigma_i, \sigma_i^2, \tilde{W}_i | i = 1, ..., M} is a set of model parameters. We can easily prove that the expectation value of the output y for a given input x, i.e., E[y|x] \equiv \int y P(y|x, \theta) dy, is identical to equation (1). Namely, the probability distribution (2) provides a stochastic model for the NGnet. From a set of T events (observed data) (X, Y) \equiv {(x(t), y(t)) | t = 1, ..., T}, the model parameter \theta of the stochastic model (2) can be determined by the maximum likelihood estimation method, in particular, by the EM algorithm (Dempster et al., 1977). The EM algorithm repeats the following E- and M-steps.

E (Estimation) step: Let \bar{\theta} be the present estimator. By using \bar{\theta}, the posterior probability that the i-th unit is selected for (x(t), y(t)) is given as

P(i | x(t), y(t), \bar{\theta}) = P(x(t), y(t), i | \bar{\theta}) / \sum_{j=1}^{M} P(x(t), y(t), j | \bar{\theta}).   (3)

M (Maximization) step: Using the posterior probability (3), the expected log-likelihood L(\theta | \bar{\theta}, X, Y) for the complete events is defined by
L(\theta | \bar{\theta}, X, Y) = \sum_{t=1}^{T} \sum_{i=1}^{M} P(i | x(t), y(t), \bar{\theta}) \log P(x(t), y(t), i | \theta).   (4)

Since an increase of L(\theta | \bar{\theta}, X, Y) implies an increase of the log-likelihood for the observed data (X, Y) (Dempster et al., 1977), L(\theta | \bar{\theta}, X, Y) is maximized with respect to \theta. A solution of the necessity condition \partial L / \partial \theta = 0 is given by (Xu et al., 1995)

\mu_i = <x>_i(T) / <1>_i(T)   (5a)
\Sigma_i^{-1} = [ <x x'>_i(T) / <1>_i(T) - \mu_i \mu_i' ]^{-1}   (5b)
\tilde{W}_i = <y \tilde{x}'>_i(T) [ <\tilde{x} \tilde{x}'>_i(T) ]^{-1}   (5c)
\sigma_i^2 = (1/D) [ <|y|^2>_i(T) - Tr( \tilde{W}_i <\tilde{x} y'>_i(T) ) ] / <1>_i(T),   (5d)

where <.>_i denotes a weighted mean with respect to the posterior probability (3), defined by

<f(x, y)>_i(T) \equiv (1/T) \sum_{t=1}^{T} f(x(t), y(t)) P(i | x(t), y(t), \bar{\theta}).   (6)

The EM algorithm introduced above is based on batch learning (Xu et al., 1995); namely, the parameters are updated after seeing all of the observed data. We introduce here an on-line version (Sato & Ishii, 1998) of the EM algorithm. Let \theta(t) be the estimator after the t-th observed datum (x(t), y(t)). In this on-line EM algorithm, the weighted mean (6) is replaced by

<<f(x, y)>>_i(T) \equiv \eta(T) \sum_{t=1}^{T} ( \prod_{s=t+1}^{T} \lambda(s) ) f(x(t), y(t)) P(i | x(t), y(t), \theta(t-1)).   (7)

The parameter \lambda(t) \in [0, 1] is a discount factor, which is introduced for forgetting the effect of earlier inaccurate estimators. \eta(T) \equiv ( \sum_{t=1}^{T} \prod_{s=t+1}^{T} \lambda(s) )^{-1} is a normalization coefficient, and it is iteratively calculated by \eta(t) = (1 + \lambda(t)/\eta(t-1))^{-1}. The modified weighted mean <<.>>_i can be obtained by the step-wise equation

<<f(x, y)>>_i(t) = <<f(x, y)>>_i(t-1) + \eta(t) [ f(x(t), y(t)) P_i(t) - <<f(x, y)>>_i(t-1) ],   (8)

where P_i(t) \equiv P(i | x(t), y(t), \theta(t-1)). Using the modified weighted mean, the new parameters are obtained by the following equations:
\tilde{\Lambda}_i(t) = (1/(1 - \eta(t))) [ \tilde{\Lambda}_i(t-1) - ( P_i(t) \tilde{\Lambda}_i(t-1) \tilde{x}(t) \tilde{x}'(t) \tilde{\Lambda}_i(t-1) ) / ( (1/\eta(t) - 1) + P_i(t) \tilde{x}'(t) \tilde{\Lambda}_i(t-1) \tilde{x}(t) ) ]   (9a)
\mu_i(t) = <<x>>_i(t) / <<1>>_i(t)   (9b)
\tilde{W}_i(t) = \tilde{W}_i(t-1) + \eta(t) P_i(t) ( y(t) - \tilde{W}_i(t-1) \tilde{x}(t) ) \tilde{x}'(t) \tilde{\Lambda}_i(t)   (9c)
\sigma_i^2(t) = (1/D) [ <<|y|^2>>_i(t) - Tr( \tilde{W}_i(t) <<\tilde{x} y'>>_i(t) ) ] / <<1>>_i(t),   (9d)

where

\tilde{\Lambda}_i(t) \equiv [ <<\tilde{x} \tilde{x}'>>_i(t) ]^{-1}.   (10)

It can be proved that this on-line EM algorithm is equivalent to the stochastic approximation for finding the maximum likelihood estimator, if the time course of the discount factor \lambda(t) is given by

\lambda(t) -> 1 - (1 - a)/(a t + b)  as t -> \infty,   (11)

where a (1 > a > 0) and b are constants (Sato & Ishii, 1998).

We also employ dynamic unit manipulation mechanisms in order to efficiently allocate the units (Sato & Ishii, 1998). The probability P(x(t), y(t), i | \theta(t-1)) indicates how probable it is that the i-th unit produced the datum (x(t), y(t)) with the present parameter \theta(t-1). If this probability is less than some threshold value for every unit, a new unit is produced to account for the new datum. The weighted mean <<1>>_i(t) indicates how much the i-th unit has been used to account for the data up to time t. If this mean becomes less than some threshold value, the unit is deleted. In order to deal with a singular input distribution, a regularization for \Sigma_i^{-1}(t) is introduced as follows:

\Sigma_i^{-1}(t) = [ ( <<x x'>>_i(t) - \mu_i(t) \mu_i'(t) <<1>>_i(t) + \alpha <<\Delta_i^2>>_i(t) I_N ) / <<1>>_i(t) ]^{-1}   (12a)
<<\Delta_i^2>>_i(t) = ( <<|x|^2>>_i(t) - |\mu_i(t)|^2 <<1>>_i(t) ) / N,   (12b)

where I_N is the (N x N)-dimensional identity matrix and \alpha is a small constant. The corresponding \tilde{\Lambda}_i(t) can be calculated in an on-line manner using an equation similar to (9a) (Sato & Ishii, 1998).

3 REINFORCEMENT LEARNING

In this section, we propose a new RL method based on the on-line EM algorithm described in the previous section. In the following, we consider optimal control problems for deterministic nonlinear dynamical systems having continuous state/action spaces. It is assumed that there is no knowledge of the controlled system.
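Before turning to the control setting, note that the core of the on-line EM algorithm of Section 2, the step-wise update (8) together with the recursion for eta(t), is compact enough to sketch directly; with lambda = 1 (no forgetting) and posterior P_i(t) = 1 it reduces to an ordinary running mean, which gives a quick sanity check (a minimal sketch; the function names are ours):

```python
def eta_next(lmbda, eta_prev):
    """Normalization recursion: eta(t) = (1 + lambda(t)/eta(t-1))^{-1}."""
    return 1.0 / (1.0 + lmbda / eta_prev)

def online_mean_update(stat, f_val, post, eta):
    """Step-wise weighted mean (eq. 8):
    <<f>>_i(t) = <<f>>_i(t-1) + eta(t) * (f(t) * P_i(t) - <<f>>_i(t-1))."""
    return stat + eta * (f_val * post - stat)

# Sanity check: with lambda = 1 and P_i(t) = 1, the recursion reproduces
# the ordinary running mean of the data stream.
stat, eta = 0.0, 1.0
for t, f in enumerate([2.0, 4.0, 6.0], start=1):
    eta = 1.0 if t == 1 else eta_next(1.0, eta)
    stat = online_mean_update(stat, f, 1.0, eta)
# stat is now (to rounding) 4.0, the mean of [2, 4, 6]
```

Choosing lambda(t) < 1 shrinks the effective memory, which is what lets the estimator track a drifting data distribution.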
An actor-critic architecture (Barto et al., 1983) is used for the learning system. In the original actor-critic model, the actor and the critic approximated the probability of each action and the value function, respectively, and were trained by using the TD error. The actor and the critic in our RL method are different from those in the original model, as explained later.

For the current state x_c(t) of the controlled system, the actor outputs a control signal (action) u(t), which is given by the policy function \Omega(.), i.e., u(t) = \Omega(x_c(t)). The controlled system changes its state to x_c(t+1) after receiving the control signal u(t). Subsequently, a reward r(x_c(t), u(t)) is given to the learning system. The objective of the learning system is to find the optimal policy function that maximizes the discounted future return defined by

V(x_c) \equiv \sum_{t=0}^{\infty} \gamma^t r(x_c(t), \Omega(x_c(t))) |_{x_c(0) = x_c},   (13)

where 0 < \gamma < 1 is a discount factor. V(x_c), which is called the value function, is defined for the current policy function \Omega(.) employed by the actor. The Q-function is defined by

Q(x_c, u) \equiv \gamma V(x_c(t+1)) + r(x_c(t), u(t)),   (14)

where x_c(t) = x_c and u(t) = u are assumed. The value function can be obtained from the Q-function:

V(x_c) = Q(x_c, \Omega(x_c)).   (15)

The Q-function should satisfy the consistency condition

Q(x_c(t), u(t)) = \gamma Q(x_c(t+1), \Omega(x_c(t+1))) + r(x_c(t), u(t)).   (16)

In our RL method, the policy function and the Q-function are approximated by NGnets, which are called the actor-network and the critic-network, respectively. In the learning phase, a stochastic actor is necessary in order to explore a better policy. For this purpose, we employ the stochastic model (2) corresponding to the actor-network. A stochastic action is generated in the following way. A unit index i is selected randomly according to the conditional probability P(i | x_c) for a given state x_c.
Subsequently, an action u is generated randomly according to the conditional probability P(u | x_c, i) for the given x_c and the selected i. The value function can be defined for either the stochastic policy or the deterministic policy. Since the controlled system is deterministic, we use the value function defined for the deterministic policy, which is given by the actor-network.

The learning process proceeds as follows. For the current state x_c(t), a stochastic action u(t) is generated by the stochastic model corresponding to the current actor-network. At the next time step, the learning system gets the next state x_c(t+1) and the reward r(x_c(t), u(t)). The critic-network is trained by the on-line EM algorithm. The input to the critic-network is (x_c(t), u(t)). The target output is given by the right-hand side of (16), where the Q-function and the deterministic policy function \Omega(.) are calculated using the current critic-network and the current actor-network, respectively. The actor-network is also trained by the on-line EM algorithm. The input to the actor-network is x_c(t). The target output is given by using the gradient of the critic-network (Sofge & White, 1992):

u(t) = \Omega(x_c(t)) + \epsilon \partial Q(x_c(t), u) / \partial u |_{u = \Omega(x_c(t))},   (17)

where the Q-function and the deterministic policy function \Omega(.) are calculated using the modified critic-network and the current actor-network, respectively. \epsilon is a small constant. This target output gives a better action than the current deterministic action \Omega(x_c(t)), in the sense that it increases the Q-function value for the current state x_c(t).

In the above learning scheme, the critic-network and the actor-network are updated concurrently. One can also consider another learning scheme. In this scheme, the learning system tries to control the controlled system for a given period of time by using the fixed actor-network. In this period, the critic-network is trained to estimate the Q-function for the fixed actor-network.
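In either scheme, the two target computations are those of eqs. (16) and (17). They can be sketched as follows (a hedged sketch: the quadratic toy Q-function and the finite-difference gradient are our own stand-ins for the paper's NGnet machinery):

```python
def critic_target(q, policy, x_next, r, gamma):
    """Right-hand side of the consistency condition (16):
    gamma * Q(x(t+1), Omega(x(t+1))) + r(x(t), u(t))."""
    return gamma * q(x_next, policy(x_next)) + r

def actor_target(q, policy, x, eps=0.1, h=1e-4):
    """Eq. (17): nudge the deterministic action uphill on Q. Here dQ/du is
    a central finite difference (a stand-in for the critic's gradient)."""
    u = policy(x)
    dq_du = (q(x, u + h) - q(x, u - h)) / (2.0 * h)
    return u + eps * dq_du

# Toy scalar example: Q peaks at u = 1, current policy outputs u = 0.
q = lambda x, u: -(u - 1.0) ** 2
policy = lambda x: 0.0
u_new = actor_target(q, policy, x=None)                     # moves toward u = 1
y_tgt = critic_target(q, policy, x_next=None, r=1.0, gamma=0.9)
```

Repeatedly regressing the actor toward such targets pushes the policy up the critic's estimated Q landscape, which is exactly the role eq. (17) plays in the paper.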
The state trajectory in this period is saved. At the next stage, the actor-network is trained along the saved trajectory using the critic-network modified in the first stage.

4 EXPERIMENTS

The first experiment is the task of swinging up and stabilizing a single pendulum with a limited torque (Doya, 1996). The state of the pendulum is represented by x_c = (\phi, \dot{\phi}), where \phi and \dot{\phi} denote the angle from the upright position and the angular velocity of the pendulum, respectively. The reward r(x_c(t), u(t)) is assumed to be given by f(x_c(t+1)), where

f(x_c) = \exp( -\phi^2 / (2 v_1^2) - \dot{\phi}^2 / (2 v_2^2) ).   (18)

v_1 and v_2 are constants. The reward (18) encourages the pendulum to stay high. After releasing the pendulum from a vicinity of the upright position, the control and the learning process of the actor-critic network is conducted for 7 seconds. This is a single episode. The reinforcement learning is done by repeating these episodes. After 40 episodes, the system is able to make the pendulum achieve an upright position from almost every initial state. Even from a low initial position, the system swings the pendulum several times and then stabilizes it at the upright position. Figure 1 shows a control process, i.e., a stroboscopic time-series of the pendulum, using the deterministic policy after training. In our previous experiment, in which both the actor- and critic-networks were NGnets with fixed centers trained by the gradient descent algorithm, a good control was obtained only after about 2000 episodes. Therefore, our new RL method obtains a good control much faster than that based on the gradient descent algorithm.

The second experiment is the task of balancing a double pendulum near the upright position. A torque is applied only to the first pendulum.
The state of the double pendulum is represented by x_c = (\phi_1, \phi_2, \dot{\phi}_1, \dot{\phi}_2), where \phi_1 and \phi_2 are the first pendulum's angle from the upright direction and the second pendulum's angle from the first pendulum's direction, respectively, and \dot{\phi}_1 (\dot{\phi}_2) is the angular velocity of the first (second) pendulum. The reward is given by the height of the second pendulum's end above the lowest position. After 40 episodes, the system is able to stabilize the double pendulum. Figure 2 shows the control process using the deterministic policy after training. The upper two figures show stroboscopic time-series of the pendulum. The dashed, dotted, and solid lines in the bottom figure denote \phi_1/\pi, \phi_2/\pi, and the control signal u produced by the actor-network, respectively. After a transient period, the pendulum is successfully controlled to stay near the upright position.

The numbers of units in the actor- (critic-) networks after training are 50 (109) and 96 (121) for the single and double pendulum cases, respectively. The RL method using center-fixed NGnets trained by the gradient descent algorithm employed 441 (= 21^2) actor units and 18,081 (= 21^2 x 41) critic units for the single pendulum task. For the double pendulum task, this scheme did not work even when 14,641 (= 11^4) actor units and 161,051 (= 11^4 x 11) critic units were prepared. The numbers of units in the NGnets trained by the on-line EM algorithm scale moderately as the input dimension increases.

5 CONCLUSION

In this article, we proposed a new RL method based on the on-line EM algorithm. We showed that our RL method can be applied to the task of swinging up and stabilizing a single pendulum and to the task of balancing a double pendulum near the upright position. The number of trial-and-error episodes needed to achieve good control was found to be very small in both tasks.
In order to apply an RL method to continuous state/action problems, good function approximation methods and fast learning algorithms are crucial. The experimental results showed that our RL method has both features.

References

Barto, A. G., Sutton, R. S., & Anderson, C. W. (1983). IEEE Transactions on Systems, Man, and Cybernetics, 13, 834-846.
Barto, A. G., Sutton, R. S., & Watkins, C. J. C. H. (1990). Learning and Computational Neuroscience: Foundations of Adaptive Networks (pp. 539-602), MIT Press.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Journal of the Royal Statistical Society B, 39, 1-22.
Doya, K. (1996). Advances in Neural Information Processing Systems 8 (pp. 1073-1079), MIT Press.
Lin, L. J. (1992). Machine Learning, 8, 293-321.
Moody, J., & Darken, C. J. (1989). Neural Computation, 1, 281-294.
Sato, M., & Ishii, S. (1998). ATR Technical Report, TR-H-243, ATR.
Sofge, D. A., & White, D. A. (1992). Handbook of Intelligent Control (pp. 259-282), Van Nostrand Reinhold.
Sutton, R. S. (1996). Advances in Neural Information Processing Systems 8 (pp. 1038-1044), MIT Press.
Tesauro, G. J. (1992). Machine Learning, 8, 257-278.
Werbos, P. J. (1990). Neural Networks for Control (pp. 67-95), MIT Press.
Xu, L., Jordan, M. I., & Hinton, G. E. (1995). Advances in Neural Information Processing Systems 7 (pp. 633-640), MIT Press.

[Figures 1 and 2: stroboscopic time sequences ("Time Sequence of Inverted Pendulum") of the controlled single and double pendulums over roughly 0 to 8 seconds; not recoverable from the scan.]
Learning multi-class dynamics

A. Blake, B. North and M. Isard
Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, UK.
Web: http://www.robots.ox.ac.uk/~vdg/

Abstract

Standard techniques (e.g. Yule-Walker) are available for learning Auto-Regressive process models of simple, directly observable, dynamical processes. When sensor noise means that dynamics are observed only approximately, learning can still be achieved via Expectation-Maximisation (EM) together with Kalman filtering. However, this does not handle more complex dynamics involving multiple classes of motion. For that problem, we show here how EM can be combined with the CONDENSATION algorithm, which is based on propagation of random sample-sets. Experiments have been performed with visually observed juggling, and plausible dynamical models are found to emerge from the learning process.

1 Introduction

The paper presents a probabilistic framework for estimation (perception) and classification of complex time-varying signals, represented as temporal streams of states. Automated learning of dynamics is of crucial importance, as practical models may be too complex for parameters to be set by hand. The framework is particularly general, in several respects, as follows.

1. Mixed states: each state comprises a continuous and a discrete component. The continuous component can be thought of as representing the instantaneous position of some object in a continuum. The discrete component represents the current class of the motion and acts as a label, selecting the current member from a set of dynamical models.

2. Multi-dimensionality: the continuous component of a state is, in general, allowed to be multi-dimensional. This could represent motion in a higher-dimensional continuum, for example two-dimensional translation as in figure 1. Other examples include multi-spectral acoustic or image signals, or multi-channel sensors such as an electro-encephalograph.

Figure 1: Learning the dynamics of juggling. Three motion classes, emerging from dynamical learning, turn out to correspond accurately to ballistic motion (mid grey), catch/throw (light grey) and carry (dark grey).

3. Arbitrary order: each dynamical system is modelled as an Auto-Regressive Process (ARP) and allowed to have arbitrary order (the number of time-steps of "memory" that it carries).

4. Stochastic observations: the sequence of mixed states is "hidden", not observable directly, but only via observations, which may be multi-dimensional and are stochastically related to the continuous component of states. This aspect is essential to represent the inherent variability of response of any real signal-sensing system.

Estimation for processes with properties 2, 3, 4 has been widely discussed, both in the control-theory literature as "estimation" and "Kalman filtering" (Gelb, 1974) and in statistics as "forecasting" (Brockwell and Davis, 1996). Learning of models with properties 2, 3 is well understood (Gelb, 1974), and once learned such models can be used to drive pattern classification procedures, as in Linear Predictive Coding (LPC) in speech analysis (Rabiner and Bing-Hwang, 1993) or in classification of EEG signals (Pardey et al., 1995). When property 4 is added, the learning problem becomes harder (Ljung, 1987) because the training sets are no longer observed directly. Mixed states (property 1) allow for combining perception with classification. Allowing properties 2, 4, but restricted to a 0th-order ARP (in breach of property 3), gives Hidden Markov Models (HMMs) (Rabiner and Bing-Hwang, 1993), which have been used effectively for visual classification (Bregler, 1997). Learning HMMs is accomplished by the "Baum-Welch" algorithm, a form of Expectation-Maximisation (EM) (Dempster et al., 1977). Baum-Welch learning has been extended to "graphical models" of quite general topology (Lauritzen, 1996).
In this paper, the graph topology is a simple chain-pair as in standard HMMs, and the complexity of the problem lies elsewhere, in the generality of the dynamical model. Generally then, restoring non-zero order to the ARPs (property 3), there is no exact algorithm for estimation. However, the estimation problem can be solved by random sampling algorithms, known variously as bootstrap filters (Gordon et al., 1993), particle filters (Kitagawa, 1996), and CONDENSATION (Blake and Isard, 1997). Here we show how such algorithms can be used, with EM, in dynamical learning theory and experiments (figure 1).

2 Multi-class dynamics

Continuous dynamical systems can be specified in terms of a continuous state vector x_t \in R^{N_x}. In machine vision, for example, x_t represents the parameters of a time-varying shape at time t. Multi-class dynamics are represented by appending to the continuous state vector x_t a discrete state component y_t, to make a "mixed" state

X_t = (x_t, y_t),

where y_t \in Y = {1, ..., N_y} is the discrete component of the state, drawn from a finite set of integer labels. Each discrete state represents a class of motion, for example "stroke", "rest" and "shade" for a hand engaged in drawing. Corresponding to each state y_t = y there is a dynamical model, taken to be a Markov model of order K^y, that specifies p^y(x_t | x_{t-1}, ..., x_{t-K^y}). A linear-Gaussian Markov model of order K is an Auto-Regressive Process (ARP) defined by

x_t = \sum_{k=1}^{K} A_k x_{t-k} + d + B w_t,

in which each w_t is a vector of N_x independent random N(0, 1) variables, and w_t, w_{t'} are independent for t \ne t'. The dynamical parameters of the model are:

• deterministic parameters A_1, A_2, ..., A_K and d;
• stochastic parameters B, which are multipliers for the stochastic process w_t and determine the "coupling" of noise w_t into the vector-valued process x_t.

For convenience of notation, let A \equiv (A_1, ..., A_K). Each state y \in Y has a set {A^y, B^y, d^y} of dynamical parameters, and the goal is to learn these from example trajectories.
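A single-class ARP of this form is easy to simulate, which is useful both for sanity-checking learned parameters and for the sampling step of CONDENSATION later. A minimal sketch (our own helper, assuming numpy):

```python
import numpy as np

def simulate_arp(A_list, d, B, x_seed, steps, rng):
    """Sample x_t = sum_k A_k x_{t-k} + d + B w_t with w_t ~ N(0, I).
    x_seed supplies the K initial states, oldest first."""
    xs = [np.asarray(x, dtype=float) for x in x_seed]
    for _ in range(steps):
        mean = d + sum(A @ xs[-k] for k, A in enumerate(A_list, start=1))
        xs.append(mean + B @ rng.standard_normal(len(d)))
    return np.array(xs)

# Noise-free 1-D check: x_t = 0.5 x_{t-1} + 1 converges to the fixed
# point x = 2 regardless of the seed state.
rng = np.random.default_rng(0)
traj = simulate_arp([np.array([[0.5]])], np.array([1.0]),
                    np.array([[0.0]]), [np.array([0.0])], steps=60, rng=rng)
```

With B nonzero, repeated runs trace out the distribution of motions that the text emphasises B encodes.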
Note that the stochastic parameter B^y is a first-class part of a dynamical model, representing the degree and the shape of uncertainty in motion, and allowing the representation of an entire distribution of possible motions for each state y. In addition, and independently, state transitions are governed by the transition matrix for a first-order Markov chain:

P(y_t = y' | y_{t-1} = y) = M_{y,y'}.

Observations z_t are assumed to be conditioned purely on the continuous part x_t of the mixed state, independent of y_t, and this maintains a healthy separation between the modelling of dynamics and of observations. Observations are also assumed to be independent, both mutually and with respect to the dynamical process. The observation process is defined by specifying, at each time t, the conditional density p(z_t | x_t), which is taken to be Gaussian in the experiments here.

3 Maximum Likelihood learning

When observations are exact, maximum likelihood estimates (MLE) for dynamical parameters can be obtained from a training sequence X_1^*, ..., X_T^* of mixed states. The well-known Yule-Walker formula approximates the MLE (Gelb, 1974; Ljung, 1987), but generalisations are needed to allow for short training sets (small T), to include the stochastic parameters B, to allow a non-zero offset d (this proves essential in experiments later), and to encompass multiple dynamical classes. The resulting MLE learning rule is as follows: A^y is obtained by solving the Yule-Walker-type linear system A^y \bar{R}^y = \bar{R}_0^y in the autocorrelations, and

d^y = ( R_0^y - A^y R^y ) / (T^y - K^y),   C^y = ( \bar{R}_{0,0}^y - A^y (\bar{R}_0^y)' ) / (T^y - K^y),

where (omitting the y superscripts for clarity) C = B B', and the first-order moments R_i and (offset-invariant) autocorrelations \bar{R}_{i,j}, for each class y, are given by

R_i^y = \sum_{t: y_t^* = y} x_{t-i}^*   and   \bar{R}_{i,j}^y = R_{i,j}^y - (1/(T^y - K^y)) R_i^y (R_j^y)',

where

R_{i,j}^y = \sum_{t: y_t^* = y} x_{t-i}^* (x_{t-j}^*)';   T^y = #{t : y_t^* = y} = \sum_{t: y_t^* = y} 1.

The MLE for the transition matrix M is constructed from relative frequencies as

M_{y,y'} = T_{y,y'} / \sum_{y'' \in Y} T_{y,y''},  where  T_{y,y'} = #{t : y_{t-1}^* = y, y_t^* = y'}.
4 Learning with stochastic observations

To allow for stochastic observations, direct MLE is no longer possible, but an EM learning algorithm can be formulated. Its M-step is simply the MLE estimate of the previous section. It might be thought that the E-step should consist simply of computing expectations, for instance E[x_t | Z_1^T] (where Z_1^t = (z_1, ..., z_t) denotes a sequence of observations), and treating them as training values x_t^*. This would be incorrect, however, because the log-likelihood function L for the problem is not linear but quadratic in the x_t^*. Instead, given that L is linear in the R_i, R_{i,j}, etc., we need expectations of these quantities conditioned on the entire training set Z_1^T of observations (Shumway and Stoffer, 1982). These expected values of autocorrelations and frequencies are to be used in place of the actual autocorrelations and frequencies in the learning formulae of section 3. The question is how to compute them.

In the special case Y = {1} of single-class dynamics, and assuming a Gaussian observation density, exact methods are available for computing the expected moments, using Kalman and smoothing filters (Gelb, 1974) in an "augmented state" filter (North and Blake, 1998). For multi-class dynamics, exact computation is infeasible, but good approximations can be achieved based on propagation of sample sets, using CONDENSATION.

Forward sampling with backward chaining

For the purposes of learning, an extended and generalised form of the CONDENSATION algorithm is required. The generalisations allow for mixed states, arbitrary order for the ARP, and backward chaining of samples. In backward chaining, sample-sets for successive times are built up and stored together with a complete state history back to time t = 0. The extended CONDENSATION algorithm is given in figure 2. Note that the algorithm needs to be initialised. This requires that the initial samples y_0^{(n)} and x_{-k|0}^{(n)} for k = 0,
..., K^{y_0} - 1 be drawn from a suitable (joint) prior for the multi-class process. One way to do this is to ensure that the training set starts in a known state and to fix the initial sample-values accordingly. Normally, the choice of prior is not too important, as it is dominated by data. At time t = T, when the entire training sequence has been processed, the final sample-set

{ (x_{T|T}^{(n)}, ..., x_{0|T}^{(n)}), \pi_T^{(n)} : n = 1, ..., N }

represents fairly (in the limit, weakly, as N -> \infty) the posterior distribution for the entire state sequence X_0, ..., X_T, conditioned on the entire training set Z_1^T of observations. The expectations of the autocorrelation and frequency measures required for learning can then be estimated from this sample-set. An alternative algorithm is a sample-set version of forward-backward propagation (Kitagawa, 1996). Experiments have suggested that probability densities generated by this form of smoothing converge far more quickly with respect to sample-set size N, but at the expense of computational complexity O(N^2), as opposed to O(N log N) for the algorithm above.

5 Practical applications

Experiments are reported briefly here on learning the dynamics of juggling using the EM-CONDENSATION algorithm, as in figure 1. An offset d^y is learned for each class in Y = {1, 2, 3}; the other dynamical parameters are fixed, so that learning d^y amounts to learning a mean acceleration a^y for each class. The transition matrix is also learned. From a more or less neutral starting point, learned structure emerges as in figure 3. Around 60 iterations of EM suffice, with N = 2048, to learn the dynamics in this case. It is clear from the figure that the learned structure is an altogether plausible model for the juggling process.

Iterate for t = 1, ..., T. Construct the sample-set { (x_{t|t}^{(n)}, ..., x_{1|t}^{(n)}), \pi_t^{(n)} : n = 1, ..., N } for time t. For each n:

1. Choose (with replacement) m \in {1, ..., N} with probability
\pi_{t-1}^{(m)}.

2. Predict by sampling from p(x_t | X_1^{t-1} = (x_{1|t-1}^{(m)}, ..., x_{t-1|t-1}^{(m)})) to choose x_{t|t}^{(n)}. For multi-class ARPs this is done in two steps.
Discrete: choose y_t^{(n)} = y' \in Y with probability M_{y,y'}, where y = y_{t-1}^{(m)}.
Continuous: compute

x_{t|t}^{(n)} = \sum_{k=1}^{K} A_k^y x_{t-k|t-1}^{(m)} + d^y + B^y w_t^{(n)},

where y = y_t^{(n)} and w_t^{(n)} is a vector of standard normal random variables.

3. Observation weights \pi_t^{(n)} are computed from the observation density, evaluated for the current observations z_t:

\pi_t^{(n)} = p(z_t | x_t = x_{t|t}^{(n)}),

then normalised multiplicatively so that \sum_n \pi_t^{(n)} = 1.

4. Update the sample history: x_{t'|t}^{(n)} = x_{t'|t-1}^{(m)}, t' = 1, ..., t - 1.

Figure 2: The CONDENSATION algorithm for forward propagation with backward chaining.

Acknowledgements

We are grateful for the support of the EPSRC (AB, BN) and Magdalen College Oxford (MI).

References

Blake, A. and Isard, M. (1997). The Condensation algorithm: conditional density propagation and applications to visual tracking. In Advances in Neural Information Processing Systems 9, pages 361-368. MIT Press.

[Figure 3 diagram: a three-state transition graph over the classes ballistic, catch/throw and carry, annotated with the learned accelerations (e.g. a = (0.0, -9.7) for the ballistic class) and transition probabilities; not fully recoverable from the scan.]

Figure 3: Learned dynamical model for juggling. The three motion classes allowed in this experiment organise themselves into: ballistic motion (acceleration a ~ -g); catch/throw; carry. As expected, lifetime in the ballistic state is longest, the transition probability of 0.95 corresponding to 20 time-steps or about 0.7 seconds. Transitions tend to be directed, as expected; for example, ballistic motion is more likely to be followed by a catch/throw (p = 0.04) than by a carry (p = 0.01). (Acceleration a is shown here in units of m/s^2.)

Bregler, C. (1997). Learning and recognising human dynamics in video sequences. In Proc. Conf. Computer Vision and Pattern Recognition.
Brockwell, P. and Davis, R. (1996). Introduction to Time-Series and Forecasting. Springer-Verlag.
Dempster, A., Laird, N., and Rubin, D. (1977).
Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Stat. Soc. B, 39:1-38.

Gelb, A., editor (1974). Applied Optimal Estimation. MIT Press, Cambridge, MA.

Gordon, N., Salmond, D., and Smith, A. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F, 140(2):107-113.

Kitagawa, G. (1996). Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1-25.

Lauritzen, S. (1996). Graphical Models. Oxford University Press.

Ljung, L. (1987). System Identification: Theory for the User. Prentice-Hall.

North, B. and Blake, A. (1998). Learning dynamical models using expectation-maximisation. In Proc. 6th Int. Conf. on Computer Vision, pages 384-389.

Pardey, J., Roberts, S., and Tarassenko, L. (1995). A review of parametric modelling techniques for EEG analysis. Medical Engineering & Physics, 18(1):2-11.

Rabiner, L. and Bing-Hwang, J. (1993). Fundamentals of Speech Recognition. Prentice-Hall.

Shumway, R. and Stoffer, D. (1982). An approach to time series smoothing and forecasting using the EM algorithm. J. Time Series Analysis, 3:253-264.
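As a concrete illustration, the per-sample operations of the Condensation forward-propagation step (Figure 2) can be sketched in Python for a scalar multi-class ARP. This is a minimal sketch, not the authors' implementation: the function name, array shapes, and the helper `obs_lik` (the observation density p(z_t | x_t)) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(hist, pi, y, M, A, d, B, z_t, obs_lik, K):
    """One forward step of Condensation for a scalar multi-class ARP of order K.
    hist: (N, T_prev) sample histories; pi: (N,) normalised weights;
    y: (N,) discrete class labels; M: class transition matrix M[y, y'];
    A[y]: AR coefficients (A_1..A_K); d[y]: offsets; B: process noise scale."""
    N = len(pi)
    m = rng.choice(N, size=N, p=pi)                         # 1. resample by weight
    y_new = np.array([rng.choice(M.shape[1], p=M[y[j]]) for j in m])   # 2. discrete
    x_new = np.array([A[yj] @ hist[j, -K:][::-1] + d[yj] + B * rng.standard_normal()
                      for j, yj in zip(m, y_new)])          # 2. continuous (ARP)
    w = obs_lik(z_t, x_new)                                 # 3. observation weights
    pi_new = w / w.sum()                                    #    normalise to sum 1
    hist_new = np.hstack([hist[m], x_new[:, None]])         # 4. update sample history
    return hist_new, pi_new, y_new
```

Each call consumes the weighted sample set for time t-1 and produces the set for time t; running it for t = 1, …, T over the training sequence yields the final sample set used for the learning estimates above.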
1998
Markov processes on curves for automatic speech recognition

Lawrence Saul and Mazin Rahim
AT&T Labs Research, Shannon Laboratory
180 Park Ave E-171, Florham Park, NJ 07932
{lsaul,mazin}@research.att.com

Abstract

We investigate a probabilistic framework for automatic speech recognition based on the intrinsic geometric properties of curves. In particular, we analyze the setting in which two variables - one continuous (x), one discrete (s) - evolve jointly in time. We suppose that the vector x traces out a smooth multidimensional curve and that the variable s evolves stochastically as a function of the arc length traversed along this curve. Since arc length does not depend on the rate at which a curve is traversed, this gives rise to a family of Markov processes whose predictions, Pr[s|x], are invariant to nonlinear warpings of time. We describe the use of such models, known as Markov processes on curves (MPCs), for automatic speech recognition, where x are acoustic feature trajectories and s are phonetic transcriptions. On two tasks - recognizing New Jersey town names and connected alpha-digits - we find that MPCs yield lower word error rates than comparably trained hidden Markov models.

1 Introduction

Variations in speaking rate currently present a serious challenge for automatic speech recognition (ASR) (Siegler & Stern, 1995). It is widely observed, for example, that fast speech is more prone to recognition errors than slow speech. A related effect, occurring at the phoneme level, is that consonants are more frequently botched than vowels. Generally speaking, consonants have short-lived, non-stationary acoustic signatures; vowels, just the opposite. Thus, at the phoneme level, we can view the increased confusability of consonants as a consequence of locally fast speech.

Figure 1: Two variables - one continuous (x), one discrete (s) - evolve jointly in time.
The trace of s partitions the curve of x into different segments whose boundaries occur where s changes value.

In this paper, we investigate a probabilistic framework for ASR that models variations in speaking rate as arising from nonlinear warpings of time (Tishby, 1990). Our framework is based on the observation that acoustic feature vectors trace out continuous trajectories (Ostendorf et al., 1996). We view these trajectories as multidimensional curves whose intrinsic geometric properties (such as arc length or radius) do not depend on the rate at which they are traversed (do Carmo, 1976). We describe a probabilistic model whose predictions are based on these intrinsic geometric properties and, as such, are invariant to nonlinear warpings of time. The handling of this invariance distinguishes our methods from traditional hidden Markov models (HMMs) (Rabiner & Juang, 1993).

The probabilistic models studied in this paper are known as Markov processes on curves (MPCs). The theoretical framework for MPCs was introduced in an earlier paper (Saul, 1997), which also discussed the problems of decoding and parameter estimation. In the present work, we report the first experimental results for MPCs on two difficult benchmark problems in ASR. On these problems - recognizing New Jersey town names and connected alpha-digits - our results show that MPCs generally match or exceed the performance of comparably trained HMMs.

The organization of this paper is as follows. In section 2, we review the basic elements of MPCs and discuss important differences between MPCs and HMMs. In section 3, we present our experimental results and evaluate their significance.

2 Markov processes on curves

Speech recognizers take a continuous acoustic signal as input and return a sequence of discrete labels representing phonemes, syllables, or words as output. Typically the short-time properties of the speech signal are summarized by acoustic feature vectors.
Thus the abstract mathematical problem is to describe a multidimensional trajectory {x(t) | t ∈ [0, T]} by a sequence of discrete labels s_1 s_2 … s_n. As shown in figure 1, this is done by specifying consecutive time intervals such that s(t) = s_k for t ∈ [t_{k-1}, t_k] and attaching the labels s_k to contiguous arcs along the trajectory. To formulate a probabilistic model of this process, we consider two variables - one continuous (x), one discrete (s) - that evolve jointly in time. Thus the vector x traces out a smooth multidimensional curve, to each point of which the variable s attaches a discrete label.

Markov processes on curves are based on the concept of arc length. After reviewing how to compute arc lengths along curves, we introduce a family of Markov processes whose predictions are invariant to nonlinear warpings of time. We then consider the ways in which these processes (and various generalizations) differ from HMMs.

2.1 Arc length

Let g(x) define a D × D matrix-valued function over x ∈ R^D. If g(x) is everywhere non-negative definite, then we can use it as a metric to compute distances along curves. In particular, consider two nearby points separated by the infinitesimal vector dx. We define the squared distance between these two points as:

    dℓ² = dxᵀ g(x) dx.    (1)

Arc length along a curve is the non-decreasing function computed by integrating these local distances. Thus, for the trajectory x(t), the arc length between the points x(t₁) and x(t₂) is given by:

    ℓ = ∫_{t₁}^{t₂} dt [ẋᵀ g(x) ẋ]^{1/2},    (2)

where ẋ = d/dt [x(t)] denotes the time derivative of x. Note that the arc length defined by eq. (2) is invariant under reparameterizations of the trajectory, x(t) → x(f(t)), where f(t) is any smooth monotonic function of time that maps the interval [t₁, t₂] into itself.

In the special case where g(x) is the identity matrix, eq. (2) reduces to the standard definition of arc length in Euclidean space. More generally, however, eq.
(1) defines a non-Euclidean metric for computing arc lengths. Thus, for example, if the metric g(x) varies as a function of x, then eq. (2) can assign different arc lengths to the trajectories x(t) and x(t) + x₀, where x₀ is a constant displacement.

2.2 States and lifelengths

We now return to the problem of segmentation, as illustrated in figure 1. We refer to the possible values of s as states. MPCs are conditional random processes that evolve the state variable s stochastically as a function of the arc length traversed along the curve of x. In MPCs, the probability of remaining in a particular state decays exponentially with the cumulative arc length traversed in that state. The signature of a state is the particular way in which it computes arc length.

To formalize this idea, we associate with each state i the following quantities: (i) a feature-dependent matrix g_i(x) that can be used to compute arc lengths, as in eq. (2); (ii) a decay parameter λ_i that measures the probability per unit arc length that s makes a transition from state i to some other state; and (iii) a set of transition probabilities a_{ij}, where a_{ij} represents the probability that - having decayed out of state i - the variable s makes a transition to state j. Thus, a_{ij} defines a stochastic transition matrix with zero elements along the diagonal and rows that sum to one: a_{ii} = 0 and Σ_j a_{ij} = 1. A Markov process is defined by the set of differential equations:

    dp_i/dt = -λ_i p_i [ẋᵀ g_i(x) ẋ]^{1/2} + Σ_{j≠i} λ_j p_j a_{ji} [ẋᵀ g_j(x) ẋ]^{1/2},    (3)

where p_i(t) denotes the (forward) probability that s is in state i at time t, based on its history up to that point in time. The right hand side of eq. (3) consists of two competing terms. The first term computes the probability that s decays out of state i; the second computes the probability that s decays into state i.
Both terms are proportional to measures of arc length, making the evolution of p_i along the curve of x invariant to nonlinear warpings of time. The decay parameter, λ_i, controls the typical amount of arc length traversed in state i; it may be viewed as an inverse lifetime or, to be more precise, an inverse lifelength. The entire process is Markovian because the evolution of p_i depends only on quantities available at time t.

2.3 Decoding

Given a trajectory x(t), the Markov process in eq. (3) gives rise to a conditional probability distribution over possible segmentations, s(t). Consider the segmentation in which s(t) takes the value s_k between times t_{k-1} and t_k, and let

    ℓ_{s_k} = ∫_{t_{k-1}}^{t_k} dt [ẋᵀ g_{s_k}(x) ẋ]^{1/2}    (4)

denote the arc length traversed in state s_k. By integrating eq. (3), one can show that the probability of remaining in state s_k decays exponentially with the arc length ℓ_{s_k}. Thus, the conditional probability of the overall segmentation is given by:

    Pr[s, ℓ | x] = Π_{k=1}^{n} λ_{s_k} e^{-λ_{s_k} ℓ_{s_k}} Π_{k=0}^{n} a_{s_k s_{k+1}},    (5)

where we have used s_0 and s_{n+1} to denote the START and END states of the Markov process. The first product in eq. (5) multiplies the probabilities that each segment traverses exactly its observed arc length. The second product multiplies the probabilities for transitions between states s_k and s_{k+1}. The leading factors of λ_{s_k} are included to normalize each state's distribution over observed arc lengths.

There are many important quantities that can be computed from the distribution, Pr[s|x]. Of particular interest for ASR is the most probable segmentation: s*(x) = argmax_{s,ℓ} {ln Pr[s, ℓ | x]}. As described elsewhere (Saul, 1997), this maximization can be performed by discretizing the time axis and applying a dynamic programming procedure. The resulting algorithm is similar to the Viterbi procedure for maximum likelihood decoding (Rabiner & Juang, 1993).
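The discretized dynamic-programming search for s*(x) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it assumes the per-step arc-length increments [ẋᵀ g_i(x) ẋ]^{1/2} Δt of eq. (4) have been precomputed into an array `dl`, and the function name and interface are invented for the example.

```python
import numpy as np

def mpc_viterbi(dl, lam, log_a, log_a_start, log_a_end):
    """Most probable segmentation under eq. (5), after discretizing time.
    dl[t, i]     -- arc-length increment accrued in state i at time step t
    lam[i]       -- decay parameters lambda_i
    log_a        -- log transition matrix ln a_ij (diagonal = -inf, per a_ii = 0)
    log_a_start, log_a_end -- log transitions from START / to END states."""
    T, S = dl.shape
    log_lam = np.log(lam)
    V = np.zeros((T, S))                 # best log-prob of being in state i at step t
    back = np.zeros((T, S), dtype=int)
    V[0] = log_a_start + log_lam - lam * dl[0]
    for t in range(1, T):
        stay = V[t - 1]                                        # remain in state i
        scores = V[t - 1][:, None] + log_a + log_lam[None, :]  # enter i from j
        switch, prev = scores.max(axis=0), scores.argmax(axis=0)
        V[t] = np.maximum(stay, switch) - lam * dl[t]          # pay -lambda_i * dl_i
        back[t] = np.where(stay >= switch, np.arange(S), prev)
    path = [int((V[-1] + log_a_end).argmax())]                 # trace back
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Each segment pays -λ_i per unit of its own arc length and one ln λ_i plus a log transition on entry, which is exactly the log of eq. (5) once the time axis is discretized.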
2.4 Parameter estimation

The parameters {λ_i, a_{ij}, g_i(x)} in MPCs are estimated from training data to maximize the log-likelihood of target segmentations. In our preliminary experiments with MPCs, we estimated only the metric parameters, g_i(x); the others were assigned the default values λ_i = 1 and a_{ij} = 1/f_i, where f_i is the fanout of state i. The metrics g_i(x) were assumed to have the parameterized form:

    g_i(x) = σ_i^{-1} Φ_i(x),    (6)

where σ_i is a positive definite matrix with unit determinant, and Φ_i(x) is a non-negative scalar-valued function of x. For the experiments in this paper, the form of Φ_i(x) was fixed so that the MPCs reduced to HMMs as a special case, as described in the next section. Thus the only learning problem was to estimate the matrix parameters σ_i. This was done using the reestimation formula:

    σ_i ← C ∫ dt Φ_i(x(t)) ẋẋᵀ / [ẋᵀ σ_i^{-1} ẋ]^{1/2},    (7)

where the integral is over all speech segments belonging to state i, and the constant C is chosen to enforce the determinant constraint |σ_i| = 1. For fixed Φ_i(x), we have shown previously (Saul, 1997) that this iterative update leads to monotonic increases in the log-likelihood.

2.5 Relation to HMMs and previous work

There are several important differences between HMMs and MPCs. HMMs parameterize joint distributions of the form: Pr[s, x] = Π_t Pr[s_{t+1}|s_t] Pr[x_t|s_t]. Thus, in HMMs, parameter estimation is directed at learning a synthesis model, Pr[x|s], while in MPCs, it is directed at learning a segmentation model, Pr[s, ℓ | x]. The direction of conditioning on x is a crucial difference. MPCs do not attempt to learn anything as ambitious as a joint distribution over acoustic feature trajectories.

HMMs and MPCs also differ in how they weight the speech signal. In HMMs, each state contributes an amount to the overall log-likelihood that grows in proportion to its duration in time.
In MPCs, on the other hand, each state contributes an amount that grows in proportion to its arc length. Naturally, the weighting by arc length attaches a more important role to short-lived but non-stationary phonemes, such as consonants. It also guarantees the invariance to nonlinear warpings of time (to which the predictions of HMMs are quite sensitive).

In terms of previous work, our motivation for MPCs resembles that of Tishby (1990), who several years ago proposed a dynamical systems approach to speech processing. Because MPCs exploit the continuity of acoustic feature trajectories, they also bear some resemblance to so-called segmental HMMs (Ostendorf et al., 1996). MPCs nevertheless differ from segmental HMMs in two important respects: the invariance to nonlinear warpings of time, and the emphasis on learning a segmentation model, Pr[s, ℓ | x], as opposed to a synthesis model, Pr[x|s].

Finally, we note that admitting a slight generalization in the concept of arc length, we can essentially realize HMMs as a special case of MPCs. This is done by computing arc lengths along the spacetime trajectories z(t) = {x(t), t} - that is to say, replacing eq. (1) by dℓ² = [żᵀ g(z) ż] dt², where ż = {ẋ, 1} and g(z) is a spacetime metric. This relaxes the invariance to nonlinear warpings of time and incorporates both movement in acoustic feature space and duration in time as measures of phonemic evolution. Moreover, in this setting, one can mimic the predictions of HMMs by setting the σ_i matrices to have only one non-zero element (namely, the diagonal element for delta-time contributions to the arc length) and by defining the functions Φ_i(x) in terms of HMM emission probabilities P(x|i) as:

    Φ_i(x) = -ln [ P(x|i) / Σ_k P(x|k) ].    (8)

This relation is important because it allows us to initialize the parameters of an MPC by those of a continuous-density HMM. This initialization was used in all the experiments reported below.
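Eq. (8) is straightforward to compute stably in the log domain. The function below is a sketch with an assumed interface (per-state log-likelihoods ln P(x|i) as input); the name is illustrative.

```python
import numpy as np

def phi_from_hmm(log_px):
    """Eq. (8): Phi_i(x) = -ln[ P(x|i) / sum_k P(x|k) ], computed stably
    from the vector of per-state log-likelihoods ln P(x|i)."""
    # ln sum_k P(x|k), without exponentiating small log-likelihoods directly
    log_norm = np.logaddexp.reduce(log_px)
    return -(log_px - log_norm)            # non-negative, as eq. (6) requires
```

Since P(x|i) ≤ Σ_k P(x|k), the result is always non-negative, consistent with the requirement that Φ_i(x) be a non-negative scalar; the best-matching state at x is the one with the smallest Φ_i(x).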
3 Automatic speech recognition

Both HMMs and MPCs were used to build connected speech recognizers. Training and test data came from speaker-independent databases of telephone speech. All data was digitized at the caller's local switch and transmitted in this form to the receiver. For feature extraction, input telephone signals (sampled at 8 kHz and band-limited between 100-3800 Hz) were pre-emphasized and blocked into 30ms frames with a frame shift of 10ms. Each frame was Hamming windowed, autocorrelated, and processed by LPC cepstral analysis to produce a vector of 12 liftered cepstral coefficients (Rabiner & Juang, 1993). The feature vector was then augmented by its normalized log energy value, as well as temporal derivatives of first and second order. Overall, each frame of speech was described by 39 features. These features were used differently by HMMs and MPCs, as described below.

    Mixtures   HMM (%)   MPC (%)
    2          22.3      20.9
    4          18.9      17.5
    8          16.5      15.1
    16         14.6      13.3
    32         13.5      12.3
    64         11.7      11.4

Table 1: Word error rates for HMMs (dashed) and MPCs (solid) on the task of recognizing NJ town names. The table shows the error rates versus the number of mixture components; the graph, versus the number of parameters per hidden state.

Recognizers were evaluated on two tasks. The first task was recognizing New Jersey town names (e.g., Newark). The training data for this task (Sachs et al., 1994) consisted of 12100 short phrases, spoken in the seven major dialects of American English. These phrases, ranging from two to four words in length, were selected to provide maximum phonetic coverage. The test data consisted of 2426 isolated utterances of 1219 New Jersey town names and was collected from nearly 100 speakers. Note that the training and test data for this task have non-overlapping vocabularies.
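The blocking and windowing stage of this front end can be sketched as follows. This is a simplified sketch: pre-emphasis, autocorrelation, LPC cepstral analysis, and liftering are omitted, and the function names and defaults are illustrative, not the authors' code.

```python
import numpy as np

def frames_and_energy(signal, fs=8000, frame_ms=30, shift_ms=10):
    """Block a telephone-band signal into 30 ms Hamming-windowed frames
    with a 10 ms shift, and compute each frame's log energy."""
    flen = int(fs * frame_ms / 1000)      # 240 samples at 8 kHz
    shift = int(fs * shift_ms / 1000)     # 80 samples at 8 kHz
    n = 1 + max(0, (len(signal) - flen) // shift)
    win = np.hamming(flen)
    frames = np.stack([signal[i * shift : i * shift + flen] * win
                       for i in range(n)])
    log_e = np.log(np.sum(frames ** 2, axis=1) + 1e-10)   # floor avoids log(0)
    return frames, log_e

def deltas(feats):
    """First-order temporal derivatives by central differences; applying it
    twice gives the second-order derivatives used to reach 39 features."""
    return np.gradient(feats, axis=0)
```

One second of 8 kHz audio yields 98 such frames; stacking 12 cepstra plus log energy with their first- and second-order deltas gives the 39-dimensional vectors described above.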
Baseline recognizers were built using 43 left-to-right continuous-density HMMs, each corresponding to a context-independent English phone. Phones were modeled by three-state HMMs, with the exception of background noise, which was modeled by a single state. State emission probabilities were computed by mixtures of Gaussians with diagonal covariance matrices. Different sized models were trained using M = 2, 4, 8, 16, 32, and 64 mixture components per hidden state; for a particular model, the number of mixture components was the same across all states. Parameter estimation was handled by a Viterbi implementation of the Baum-Welch algorithm.

MPC recognizers were built using the same overall grammar. Each hidden state in the MPCs was assigned a metric g_i(x) = σ_i^{-1} Φ_i(x). The functions Φ_i(x) were initialized (and fixed) by the state emission probabilities of the HMMs, as given by eq. (8). The matrices σ_i were estimated by iterating eq. (7). We computed arc lengths along the 14 dimensional spacetime trajectories through cepstra, log-energy, and time. Thus each σ_i was a 14 × 14 symmetric matrix applied to tangent vectors consisting of delta-cepstra, delta-log-energy, and delta-time.

The table in Table 1 shows the results of these experiments comparing MPCs to HMMs. For various model sizes (as measured by the number of mixture components), we found the MPCs to yield consistently lower error rates than the HMMs. The graph in Table 1 plots these word error rates versus the number of modeling parameters per hidden state. This graph shows that the MPCs are not outperforming the HMMs merely because they have extra modeling parameters (i.e., the σ_i matrices). The beam widths for the decoding procedures in these experiments were chosen so that corresponding recognizers activated roughly equal numbers of arcs.

The second task in our experiments involved the recognition of connected alpha-digits (e.g., N Z 3 V J 4 E 3 U 2).
The training and test data consisted of 14622 and 7255 utterances, respectively. Recognizers were built from 285 sub-word HMMs/MPCs, each corresponding to a context-dependent English phone. The recognizers were trained and evaluated in the same way as the previous task. Results are shown in figure 2.

    Mixtures   HMM (%)   MPC (%)
    2          12.5      10.0
    4          10.7       8.8
    8          10.0       8.2

Figure 2: Word error rates for HMMs and MPCs on the task of recognizing connected alpha-digits. The table shows the error rates versus the number of mixture components; the graph, versus the number of parameters per hidden state.

While these results demonstrate the viability of MPCs for automatic speech recognition, several issues require further attention. The most important issues are feature selection - how to define meaningful acoustic trajectories from the raw speech signal - and learning - how to parameterize and estimate the hidden state metrics g_i(x) from sampled trajectories {x(t)}. These issues and others will be studied in future work.

References

M. P. do Carmo (1976). Differential Geometry of Curves and Surfaces. Prentice Hall.

M. Ostendorf, V. Digalakis, and O. Kimball (1996). From HMMs to segment models: a unified view of stochastic modeling for speech recognition. IEEE Transactions on Speech and Audio Processing, 4:360-378.

L. Rabiner and B. Juang (1993). Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, NJ.

R. Sachs, M. Tikijian, and E. Roskos (1994). United States English subword speech data. AT&T unpublished report.

L. Saul (1998). Automatic segmentation of continuous trajectories with invariance to nonlinear warpings of time. In Proceedings of the Fifteenth International Conference on Machine Learning, 506-514.

M. A. Siegler and R. M. Stern (1995).
On the effects of speech rate in large vocabulary speech recognition systems. In Proceedings of the 1995 IEEE International Conference on Acoustics, Speech, and Signal Processing, 612-615. N. Tishby (1990). A dynamical system approach to speech processing. In Proceedings of the 1990 IEEE International Conference on Acoustics, Speech, and Signal Processing, 365-368. PART VII VISUAL PROCESSING
1998
Multi-electrode spike sorting by clustering transfer functions

Dmitry Rinberg    Hanan Davidowitz    Naftali Tishby*
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
E-mail: {dima,hanan,tishby}@research.nj.nec.com

*Permanent address: Institute of Computer Science and Center for Neural Computation, The Hebrew University, Jerusalem, Israel. Email: tishby@cs.huji.ac.il

Categories: spike sorting, population coding, signal processing.

Abstract

A new paradigm is proposed for sorting spikes in multi-electrode data using ratios of transfer functions between cells and electrodes. It is assumed that for every cell and electrode there is a stable linear relation. These are dictated by the properties of the tissue, the electrodes and their relative geometries. The main advantage of the method is that it is insensitive to variations in the shape and amplitude of a spike. Spike sorting is carried out in two separate steps. First, templates describing the statistics of each spike type are generated by clustering transfer function ratios; then spikes are detected in the data using the spike statistics. These techniques were applied to data generated in the escape response system of the cockroach.

1 Introduction

Simultaneous recording of activity from many neurons can greatly expand our understanding of how information is coded in neural systems [1]. Multiple electrodes are often used to measure the activity in neural tissue and have become a standard tool in neurophysiology [2, 3, 4]. Since every electrode is in a different position it will measure a different contribution from each of the different neurons. Simply stated, the problem is this: how can these complex signals be untangled to determine when each individual cell fired? This problem is difficult because, a) the objects being classified are very similar and often noisy, b) spikes coming from the same cell can vary in both shape and amplitude, depending on the previous activity of the cell and c) spikes can overlap in time, resulting in even more complex temporal patterns.

Current approaches to spike sorting are based primarily on the presumed consistency of the spike shape and amplitude for a given cell [5, 6]. This is clearly the only possible basis for sorting using a single electrode. Multiple electrodes, however, provide additional independent information through the differences in the way the same neuron is detected by the different electrodes. The same spike measured on different electrodes can differ in amplitude, shape and its relative timing. These differences can depend on the specific cell, the electrode and the media between them. They can be characterized by linear transfer functions that are invariant to changes in the overall spike waveform. In this paper the importance of this information is highlighted by using only the differences in how signals are measured on different electrodes. It is then shown that clusters of similar differences correspond to the same neuron. It should be emphasized that in a full treatment this transfer function information will be combined with other cues to sort spikes.

2 Spikes, spectra and noise

The basic assumption behind the spike sorting approach described here is that the medium between each neuron-electrode pair can be characterized by a linear system that remains fixed during the course of an experiment. This assumption is justified by the approximately linear dielectric properties of the electrode and its surrounding nerve tissues. Linear systems are described by their phase and amplitude response to pure frequencies, namely, by their complex transfer function H(ω) = O(ω)/I(ω), where I(ω) and O(ω) are the complex spectra (i.e. Fourier transform, henceforth called spectrum) of the input and output of the system, respectively.
In the experiments described here the input signal is the spectrum of the action potential generated by cell j, denoted by S_j(ω), and the output signal is the spectrum of the voltage measured at electrode μ, denoted by V^μ(ω). The transfer function of the system that links S_j(ω) and V^μ(ω) is then defined as H_j^μ(ω) = V^μ(ω)/S_j(ω). If the transfer functions are fixed in time, the ratio between the complex spectra of any spike from cell j as detected by electrodes μ and ν, V^μ(ω) and V^ν(ω), is given by

    T_j^{μν}(ω) = V^μ(ω)/V^ν(ω) = H_j^μ(ω)/H_j^ν(ω),    (1)

which is independent of the cell action potential spectrum S_j(ω), provided that the spike was detected by both electrodes. Thus, even if a spike varies in shape and amplitude, T_j^{μν}(ω) will remain a fixed complex function of frequency. This ratio is also invariant with respect to time translations of the spikes. In addition, the frequency components are asymptotically uncorrelated for stationary processes, which justifies treating the frequency components as statistically independent [7]. The idea behind the approach described here is shown in Figure 1.

In real experiments, however, noise can corrupt the invariance of T_j^{μν}. There are several possible sources of noise in experiments of this kind: a) fluctuations in the transfer function, b) changes in the spike shape, ς_j, and c) electrical and electrochemical noise, n^μ.

Figure 1: The idea behind spike sorting by clustering of transfer function ratios. Two spikes from the same cell (cell-1) may vary in shape/amplitude during bursting activity, for example. Although the spike shapes may differ, the transfer functions relating them to the electrodes do not change so the transfer function ratios are similar (two left columns).
A different cell (cell-2) has a different transfer function ratio even though the spike shapes themselves are similar to those of cell-1 (right column).

If H_j^μ varies slowly with time, the transfer function noise is small relative to ς_j, n^μ and n^ν. T_j^{μν} can then be expanded to first order in ς_j, n^μ and n^ν as

    T_j^{μν}(ω) ≈ H_j^μ(ω)/H_j^ν(ω) + [n^μ(ω) - (H_j^μ(ω)/H_j^ν(ω)) n^ν(ω)] / [H_j^ν(ω) S_j(ω)],    (2)

which is independent of ς_j. Since the noise, n^μ, is uncorrelated with the spike signal, S_j, the variance at each frequency component can be considered to be Gaussian with equal variances on the real and imaginary axes. Thus the mean of T_j^{μν} will be independent of S_j, ς_j and n^μ while its variance will be inversely proportional to S_j.

3 A model system: the escape response of the cockroach

Figure 2: A schematic representation of the experiment. Typical raw data measured on two electrodes is shown at right. Relative time delays are evident in the inset, but are not a necessary condition for the sorting techniques described here. Abbreviations are: p-puffers, cg-cercal ganglion, c-cerci.

These techniques were tested on a relatively simple neural system - the escape response system of the American cockroach. The escape behaviour, which has been studied extensively [9, 10, 11], is activated when the insect detects air currents
This is shown schematically in Figure 2. This system proved to be well suited as a first test of the sorting technique. The system is simple enough so that it is not overwhelming (since only 7 neurons are known to contribute to the code) but complex enough to really test the approach. In addition, the nerve cords are linear in geometry, easily accessible and very stable. Male cockroaches (Periplaneta americana) were dissected from the dorsal side to expose the nerve cord. The left and right cords were gently separated and two tungsten wire electrodes were hooked onto the connective about 2 mm apart, separated by abdominal ganglia. The stimulus was presented by two loudspeakers driving two miniature wind tunnels pointed at the cerci, at 90 degrees from one another as shown in Figure 2. Recordings typically lasted for several hours. Data were collected with a sampling frequency of 2 . 104 Sis which was sufficient to preserve the high frequency components of the spikes. 150 D. Rinberg, H. Davidowitz and N. Tishby 0.5 1 -1.5 -1 Re(T) 1 Figure 3: Real and imaginary parts of Trv a single w. The circles have centers (radii) equal to the average (variance) of Trv at w = 248.7 rad S-l. Note that while some clusters seem to overlap at this frequency they may be well seperated at others. Cluster-l is dispersed throughout the complex plane and its variance is well beyond the range of this plot. 4 Clustering and the detection of spikes The spike sorting algorithm described here is done is two separate stages. First, a statistical model of the individual spike types is built from "clean" examples found in the data. Only then are occurrences of these spikes detected in the multielectrode data. This two-step arrangement allows a great deal of flexibility by disconnecting the clustering from the detection. For example, here the clustering was done on transfer function ratios while the detection was done on full complex spectra. These stages are described below in more detail. 
4.1 The clustering phase First, the multi-electrode recording is chopped into 3 ms long frames using a sliding window. Frames that have either too low total energy or too high energy at the window edges are discarded. This leaves frames that are energetic in their central 2 ms and are assumed to carry one spike. No attempt is made to find all spikes in the data. Instead, the idea is to generate a set of candidate spike types from clean frames. Once a large collection of candidate spikes is found, TrV(w) is calculated for every Transfer Function Spike Sorting 151 Hook #1 Hook #2 cluster frames yCluster 6 324 ~ 5 720 -'lr 7E ca & -'\I 2 rvv 4 d 729 z R & ..- & V 3 518 • cs '-" E2£@£C9 .... -2 748 ~ ~6 I!!!I! 1 R g 4V 500~V I ~.I", ru@ 1 w§ Figure 4: Results of clustering spikes using transfer function ratios. Note that although cluster-5 and cluster-6 are similarly shaped on hook-l they are time shifted on hook-3. Cluster-l is made up of overlaps which are dealt with in the detection phase. spike. These are then grouped together into clusters containing similar Ttl! (w). Results of the clustering are shown in Figure 3 while the corresponding waveforms are shown in Figure 4. Full complex spectra are then used to build a statistical model of the different spike types, {Vj(w), af(w)}, which represent each cell's action potential as it appears on each of the electrodes. 4.2 The detection phase Once the cluster statistics are determined, an independent detection algorithm is used. The data is again broken into short frames but now the idea is to find which of the spike types (represented by the different clusters found in the previous steps) best represents the data in that frame. Each frame can contain either noise, a spike or an overlap of 2 spikes (overlaps of more than 2 spikes are not dealt with). This part is not done on transfer function ratios because dealing with overlaps is more difficult. 
5 Conclusion

A new method of spike sorting using transfer function ratios has been presented. In effect the sorting is done on the properties of the tissue between the neuron and the electrode, and individual spike shapes become less important. This method may be useful when dealing with bursting cells, where the transfer function ratios should remain constant even though the spike amplitude can change significantly. This technique may prove to be a useful tool for analysing multi-electrode data.

Acknowledgments

We are grateful to Bill Bialek for numerous enlightening discussions and many useful suggestions.

References

[1] M. Barinaga. Listening in on the brain. Science 280, 376-378 (1998).
[2] M. Abeles. Corticonics, (Cambridge University Press, Cambridge, 1991).
[3] B.L. McNaughton, J. O'Keefe and C.A. Barnes. The stereotrode: a new technique for simultaneous isolation of several single units in the central nervous system from multiple unit records. Journal of Neuroscience Methods 8, 391-397 (1983).
[4] M.L. Reece and J. O'Keefe. The tetrode: a new technique for multi-unit extracellular recording. Society of Neuroscience Abstracts 15, 1250 (1989).
[5] M.S. Fee, P.P. Mitra and D. Kleinfeld. Automatic sorting of multiple unit neuronal signals in the presence of anisotropic and non-Gaussian variability. Journal of Neuroscience Methods 69, 175-188 (1996).
[6] M.S. Lewicki. A review of methods for spike sorting: the detection and classification of neural potentials. Network: Computation in Neural Systems 9, R53-R78 (1998).
[7] A. Papoulis. Probability, random variables and stochastic processes, (McGraw-Hill, New York, 1965).
[8] M. Abeles and G.L. Gerstein. Detecting spatio-temporal firing patterns among simultaneously recorded single neurons. Journal of Neurophysiology 60(3), 909-924 (1988).
[9] J.M. Camhi and A. Levy. The code for stimulus direction in a cell assembly in the cockroach.
Journal of Comparative Physiology A 165, 83-97 (1989).
[10] L. Kolton and J.M. Camhi. Cartesian representation of stimulus direction: parallel processing by two sets of giant interneurons in the cockroach. Journal of Comparative Physiology A 176, 691-702 (1995).
[11] J. Westin, J.J. Langberg and J.M. Camhi. Responses of giant interneurons of the cockroach Periplaneta americana to wind puffs of different directions and velocities. Journal of Comparative Physiology A 121, 307-324 (1977).
1998
14
1,497
Discovering hidden features with Gaussian processes regression

Francesco Vivarelli
Centro Ricerche Ambientali Montecatini,
via Ciro Menotti, 48
48023 Marina di Ravenna, Italy
fvivarelli@cramont.it

Christopher K. I. Williams
Division of Informatics,
The University of Edinburgh,
5 Forrest Hill, Edinburgh, EH1 2QL, United Kingdom
ckiw@dai.ed.ac.uk

Abstract

In Gaussian process regression the covariance between the outputs at input locations x and x′ is usually assumed to depend on the distance (x − x′)^T W (x − x′), where W is a positive definite matrix. W is often taken to be diagonal, but if we allow W to be a general positive definite matrix which can be tuned on the basis of training data, then an eigen-analysis of W shows that we are effectively creating hidden features, where the dimensionality of the hidden-feature space is determined by the data. We demonstrate the superiority of predictions using the general matrix over those based on a diagonal matrix on two test problems.

1 Introduction

Over the last few years Bayesian approaches to prediction with neural networks have come to the fore. Following an argument in Neal (1996) concerning the equivalence between infinite neural networks and certain Gaussian processes, Gaussian process (GP) prediction has also become popular, and Rasmussen (1996) has demonstrated good performance of GP predictors on a number of tasks. In Gaussian process prediction as applied by Rasmussen (1996), Williams and Rasmussen (1996) and others, the covariance between the outputs at locations x and x′ is usually assumed to depend on the distance (x − x′)^T W (x − x′), where W is a positive definite, diagonal matrix. This means that different dimensions in the input space can have different relevances to the prediction problem (c.f. MacKay and Neal's idea of Automatic Relevance Determination (Neal, 1996)).
However, some of the reasoning about the success of neural networks and methods such as projection pursuit regression suggests that discovering relevant directions in feature space is important; clearly the ARD model is a special case, where these directions are parallel to the axes in the input feature space. In this paper we allow W to be a general positive semidefinite matrix (defining a Mahalanobis distance in the input space), thereby allowing general directions in the input space to be selected. We then compare the performance of GP predictors using the diagonal and full distance matrices on some regression problems.

The structure of the paper is as follows. GPs for regression are introduced in Section 2, where we also explain the rôle played by the distance matrix W and the criterion used to compare the generalisation performances of the diagonal and the general distance matrices. The two methods have been compared on two regression tasks and the results of our experiments are shown in Section 3. A summary of the work done and some open questions are presented in Section 4.

2 Gaussian processes and prediction

In this paper we use Gaussian process models as predictors. Consider a stochastic process Y(x), with the input observable x belonging to some input space X ⊆ R^d. Gaussian processes are a subset of stochastic processes that can be defined by specifying the mean and covariance functions, µ(x) = E[Y(x)] and C_p(x, x′) = E[Y(x) Y(x′)] respectively. For the work below we shall set µ(x) ≡ 0. Although the GP formulation provides a prior over functions, for our purposes it suffices to note that the y-values Y(x^1), Y(x^2), ..., Y(x^n) corresponding to x-values x^1, x^2, ..., x^n have a multivariate Gaussian distribution N(0, K_p), where (K_p)_ij = C_p(x^i, x^j). The specific form of the covariance function that we shall use is

C_p(x, x′) = σ_p² exp( −(1/2) (x − x′)^T W (x − x′) ).
(1)

When W is a diagonal matrix the entry w_ii is the inverse of the squared correlation length-scale of the process along the direction i. In particular, we note that this model is closely related to the Automatic Relevance Determination method of MacKay and Neal (Neal, 1996), as a small length-scale along a certain direction of the space highlights the relevance of the corresponding input feature (assuming that the inputs are normalised).

For the prediction problem, let us suppose we have n data points D_n = {(x^1, t^1), (x^2, t^2), ..., (x^n, t^n)}, where t^i is the output value corresponding to the input x^i. The t's are assumed to be generated from the true y-values by adding Gaussian noise of variance σ_ν². Given the assumption of a Gaussian process prior over functions, it is a standard result (e.g. Whittle, 1963) that the predictive distribution p(t|x, D_n) corresponding to a new input is N(ŷ(x), σ²(x)), with mean and variance

ŷ(x) = k^T(x) K^{-1} t,    (2)
σ²(x) = C_p(x, x) + σ_ν² − k^T(x) K^{-1} k(x),    (3)

where K = K_p + σ_ν² I, k^T(x) = (C_p(x, x^1), C_p(x, x^2), ..., C_p(x, x^n)) and t^T = (t^1, t^2, ..., t^n).

This method of prediction assumes that the process y(x) we are modelling is really a function of the observable x. However it is often the case that for real world problems the y is actually a function of a set of hidden features z ∈ Z ⊆ R^q which arise from a combination of the manifest variables x. In particular we wish to study the problem in which the hidden features are a linear combination of the observable coordinates through a q × d matrix M, where q < d (i.e. z = Mx). In this case, the covariance of the function y is specified by Equation 1 but turns out to depend upon the estimation of the distance between hidden features (z − z′)^T Ψ (z − z′). Since z = Mx, (z − z′) = M(x − x′) and W = M^T Ψ M. A GP model depends on the parameters which describe the covariance function (i.e. σ_p², σ_ν² and the elements of W).
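Equations 2 and 3 translate directly into code. The sketch below is a generic GP predictor under the covariance of Equation 1, not the authors' implementation, and it omits the numerical safeguards (e.g. Cholesky factorisation of K) a careful implementation would use:

```python
import numpy as np

def gp_predict(X, t, x_star, W, sigma2_p=1.0, sigma2_nu=0.01):
    """Predictive mean (Eq. 2) and variance (Eq. 3) of a zero-mean GP with
    C_p(x, x') = sigma2_p * exp(-0.5 * (x - x')^T W (x - x'))."""
    def cov(A, B):
        d = A[:, None, :] - B[None, :, :]                   # pairwise x - x'
        return sigma2_p * np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', d, W, d))

    K = cov(X, X) + sigma2_nu * np.eye(len(X))              # K = K_p + sigma2_nu I
    k = cov(X, x_star[None, :])[:, 0]                       # k(x*)
    mean = k @ np.linalg.solve(K, t)                        # Eq. (2)
    var = sigma2_p + sigma2_nu - k @ np.linalg.solve(K, k)  # Eq. (3); C_p(x,x) = sigma2_p
    return mean, var
```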
The training of a GP can be carried out either by estimating the parameters of the covariance function (for example, using the maximum likelihood method) or by using a Bayesian approach and sampling from the posterior distribution over the parameters (Williams and Rasmussen, 1996). We follow the first approach, maximising the logarithm of the likelihood

L = log p(D_n|θ) = −(1/2) log det K − (1/2) t^T K^{-1} t − (n/2) log 2π,    (4)

where K^{-1} depends upon θ, the vector of parameters of the covariance function. The number of free parameters depends on the number of non-zero elements of the matrix W. Usually, W is chosen to be diagonal and the number of free parameters is d + 2 (the d diagonal elements, σ_p² and σ_ν²). We notice that this parametrisation of W allows the discovery of relevant directions in the observed space; it does not lead to an estimation of a general mapping of X onto the feature space Z, as the relevant directions are parallel to the axes in the input manifest space.

If q is not known in advance, it is preferable to use a general symmetric positive semidefinite matrix W. A parametrisation of such a matrix follows from the Cholesky decomposition as W = U^T U, where U is an upper triangular matrix with positive entries on the diagonal (Williams, 1996). Hence the factorisation of U turns out to be

U = [ exp[u_{1,1}]   u_{1,2}        ...   u_{1,d}
      0              exp[u_{2,2}]   ...   u_{2,d}
      ...            ...            ...   ...
      0              0              ...   exp[u_{d,d}] ].    (5)

The elements on the diagonal are positive because of the exponential. Being symmetric, W has at most d(d + 1)/2 independent entries and thus the total number of free parameters of the GP model is 2 + d(d + 1)/2. We note that such a full distance matrix W allows an estimation of the matrix M from an eigenvalue decomposition of W = V Λ V^T, where Λ is a diagonal matrix of the eigenvalues of W and V is the matrix of the eigenvectors.
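The parametrisation of Equation 5 guarantees a symmetric positive definite W for any unconstrained parameter vector, which is what makes gradient-based likelihood maximisation convenient. A minimal sketch (the function name is ours):

```python
import numpy as np

def full_W(u, d):
    """Build W = U^T U from d(d+1)/2 free parameters (Equation 5 style):
    U is upper triangular and its diagonal entries are exponentiated, so
    the diagonal is always positive and W is symmetric positive definite
    for any real parameter vector u."""
    assert len(u) == d * (d + 1) // 2
    U = np.zeros((d, d))
    U[np.triu_indices(d)] = u                    # fill the upper triangle
    U[np.diag_indices(d)] = np.exp(np.diag(U))   # exponentiate the diagonal
    return U.T @ U
```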
The dimension of the hidden feature space Z can be inferred from the number of relevant eigenvalues of the matrix Λ (which are the inverses of the squared correlation lengths of the process along the directions of the hidden space). The directions of the hidden feature space are defined by the eigenvectors corresponding to the relevant eigenvalues; in particular the matrix composed of these eigenvectors gives an estimate of the mapping from X to Z. In the following the diagonal and the general full correlation matrices are designated by W_d and W_f.

It is important to observe that the predictor obtained using W_f is not equivalent to an additive model (Hastie and Tibshirani, 1990), as the predictor is a multivariate function of z rather than being an additive function of the components of z. However, it would be possible to produce an additive function in the GP context, using a covariance function which is the sum of one-dimensional covariance functions based on projections of x.

2.1 Generalisation error

Consider predicting the value of a function y(x) with a predictor ŷ(x). A commonly used measure of the generalisation error given a dataset D_n is the average squared error

E^g(D_n) = ∫ (y(x) − ŷ_{D_n}(x))² p(x) dx.    (6)

The average generalisation error E^g(n) for a dataset of size n is obtained by averaging over the choice of training dataset, i.e. E^g(n) = E_D[E^g(D_n)]. E^g(D_n) can sometimes be evaluated analytically or by numerical integration, but it is usually necessary to use samples to perform the average over training datasets D_n. In order to investigate the generalisation capabilities of GPs using diagonal and full distance matrices W_d and W_f, we trained the GP predictors on some regression tasks. The generalisation errors are compared by looking at the relative error

ρ(D_n) = (E^g_d(D_n) − E^g_f(D_n)) / E^g_d(D_n),    (7)

where E^g_d(D_n) and E^g_f(D_n) are the generalisation errors reported using a diagonal and a full distance matrix respectively.
This ratio allows us to perform a fair comparison between the pairwise differences of the generalisation errors for each dataset and the actual value E^g_d(D_n). The expected value ρ(n) is the average over the sampling of the training data D_n: ρ(n) = E_D[ρ(D_n)].

3 Results

We have conducted experiments to compare the generalisation capabilities of a GP predictor with full and diagonal distance matrices. In this section we illustrate the results we obtained by training a GP on two regression tasks, the regression of a trigonometric function (Section 3.1) and the regression of a high-interaction surface (Section 3.2).

3.1 Regression of a trigonometric function

In the first experiments, a GP has been trained on observations drawn from the function y(z) = sin(2πz) corrupted by Gaussian noise of mean zero and variance σ_ν² = 10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 1. The hidden feature z ∈ R has been generated from the observable variables x ∈ R² through the transformation z = m^T x, where m^T = (1/√2, 1/√2) and x ∼ N(0, I). We wish to infer the process y(z) (which is actually a function of the one-dimensional feature z) by using a GP on the manifest space R². We evaluated the expected generalisation errors of Equation 6 by Gaussian quadrature (Press et al., 1992) and estimated the expected relative error ρ(n) by averaging over 10 different samples of the training set. The parameters of the covariance function are optimised on each of the 10 training datasets by maximising the likelihood (see Equation 4) with the conjugate gradient algorithm (Press et al., 1992), with 50 (for W_d) and 70 (for W_f) iterations for the largest training sets with 256 data. Figure 1 reports the value of ρ(n) on the vertical axis as a function of the amount of training data (x axis). The variance of the noise has been set to 0.01 in Figure 1(a) and 0.1 in Figure 1(b).
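The data-generating process of this first experiment is straightforward to reproduce. A sketch (the seed and function name are ours) that draws a training set in which the target depends on x only through the hidden feature z = m^T x:

```python
import numpy as np

def make_trig_data(n, sigma2_nu=0.01, seed=0):
    """Targets t = sin(2*pi*z) + noise with hidden feature z = m^T x,
    where m = (1/sqrt(2), 1/sqrt(2)) and x ~ N(0, I_2), as in Section 3.1."""
    rng = np.random.default_rng(seed)
    m = np.array([1.0, 1.0]) / np.sqrt(2.0)
    X = rng.standard_normal((n, 2))       # observable inputs
    z = X @ m                             # one-dimensional hidden feature
    t = np.sin(2 * np.pi * z) + rng.normal(0.0, np.sqrt(sigma2_nu), n)
    return X, t, z
```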
The plots show that the use of W_f significantly improves the generalisation performance with respect to a diagonal matrix, as the relative error ρ(n) lies well above zero, within its confidence interval. This is particularly highlighted in Figure 1(a) where, for datasets larger than 32 data, ρ(n) is larger than 75%. We notice that for small datasets ρ(n) is close to zero, as the distribution of its values is spread out around zero with wide confidence intervals.

Figure 1: The figures report on the y axis the graphs of ρ(n) (see Equation 7) as a function of the amount of training data (x axis); the noise level is set to 0.01 (Figure 1(a)) and to 0.1 (Figure 1(b)). The error bars are generated by the minimum and the maximum value of ρ(D_n) which occurred over the 10 training datasets.

This is due to the fact that with small amounts of data it is not possible to train the GP properly; in particular, as the number of free parameters of W_f is larger than that of W_d, the former needs larger datasets for training than the latter in order to avoid overfitting. A fully Bayesian treatment of the training of a GP (see Section 2) would not be so seriously affected by this problem, since the prediction of the GP would be marginalised over the posterior distribution of the parameters. For large datasets, the relative error declines after having reached its maximum value; this agrees with the intuition that with large amounts of data both methods will become good predictors. Similar remarks apply also to Figure 1(b) (where σ_ν² = 0.1), although we notice that the relative error ρ(n) assumes lower values due to the higher noise variance.

The better performance of W_f with respect to W_d can be explained by an eigen-analysis of the two distance matrices. Since one eigenvalue of W_f is much larger than the other (O(10) vs.
O(10⁻⁴)), the full rank distance matrix is able to discover the relevant true dimension of the process. The eigenvector corresponding to the larger eigenvalue represents the operator which maps the space of the observables onto the hidden feature space. W_d fails to find the effective dimension of the problem, as it is characterised by two eigenvalues of similar magnitude (O(10)).

3.2 A high-interaction surface

We also tested our method on an example taken from Breiman (1993) which is concerned with a regression problem of a surface in a high dimensional space. The target function is y(x) = σ(z_1) + σ(z_2) + σ(z_3), where σ(z) is the sigmoid function σ(z) = exp[z]/(1 + exp[z]). The hidden features z_1, z_2 and z_3 are derived from the transformation z_i = 2(l_i − 2), i = 1...3, where the l_i are the normalised inner products m_i^T x. The observed variables x ∈ R^10 are uniformly distributed over [0, 1]^10; the three vectors m_i are m_1^T = (10, 9, 3, 7, −6, −5, −9, −3, −2, −1), m_2^T = (−1, −2, −3, −4, −5, −6, 7, 8, 9, 10) and m_3^T = (−1, −2, −3, 4, 5, 4, −3, −2, −1, 0). The values of the true function are also corrupted by Gaussian noise of mean zero; the variance of the noise was such that the ratio between the standard deviations of the signal y(x) and the noise was 4.0, as in Breiman (1993).

We have run experiments, training GPs with diagonal and full distance matrices on 10 data sets of size 64, 128, 256 and 512; in his work, Breiman used training

Figure 2: Figure 2(a) reports on the y axis the graph of ρ(n) (see Equation 7) as a function of the amount of training data (x axis); the error bars are generated by the minimum and the maximum value of ρ(D_n) that occurred over the 10 training datasets. Figure 2(b) shows the graph of the ten eigenvalues of the W_d (∗) and the W_f (◦) distance matrices obtained using one training set of 512 data.
The lower values reached by training sets with 64 and 128 data are −1.06 and −1.97 respectively.

sets with 400 datapoints. The GP's parameters are optimised on each of the 10 training datasets by maximising the likelihood (see Equation 4) with the conjugate gradient algorithm (Press et al., 1992). The generalisation errors of W_d and W_f have been estimated using 1024 test data points; the relative generalisation error ρ(n) (c.f. Equation 7) is shown in Figure 2(a). We observe that for datasets of size 512 the use of W_f significantly reduces the relative error with respect to the diagonal matrix. Models trained with smaller training sets do not have such good generalisation performance because the larger number of parameters in W_f (57) overfits the data.

An eigenvalue decomposition of the distance matrices shows that W_f is able to discover the underlying structure of the process. Figure 2(b) displays the eigenvalues of W_f and W_d optimised for one of the training sets of 512 data. W_f is characterised by three large eigenvalues, whose eigenvectors indicate the three main directions in the feature space; thus the full matrix is able to find three out of ten directions which are responsible for the variation of the function. Conversely, W_d fails to discover the hidden features in the data; since all the eigenvalues have almost the same magnitude, all the input dimensions of the observed variable are equally relevant in training the GP.

The eigenvectors e^f_i, i = 1, 2, 3 of W_f define a basis in the space generating a subspace of features. In order to verify that the subspace spanned by the e^f_i actually overlaps the hidden feature space, we tried to express the former set of vectors as a linear combination of the latter. Thus we computed the singular values (Press et al., 1992) of the matrix composed of the normalised vectors m_i and the basis e^f_i's. As three out of six singular values are negligible with respect to the others (O(10⁻²) vs.
O(1)), the original hidden transformation can be well approximated as a linear combination of the new basis of eigenvectors, showing that the eigenspace of W_f is a good approximation of the hidden feature space.

4 Discussion

In this paper we have shown how to discover hidden features with GP regression. We also note that this technique could be applied to problems where Gaussian process predictors are used in classification problems. An attractive feature of the method is that it allows the appropriate dimensionality of the z space to be discovered. If we wish to restrict the maximum dimensionality of Z to be q then one could use a distance matrix of rank q, i.e. (Ψ^{1/2} M)^T (Ψ^{1/2} M).

The idea of allowing a general transformation of the input space has been mentioned before in the literature, for example in (Girosi et al., 1995). However, Girosi et al. suggest setting the parameters in W_f by cross-validation; we believe that this is not very practical in high-dimensional spaces.

The results obtained show that the use of a full distance matrix can reduce significantly the relative error with respect to the use of a diagonal distance matrix. As the training of the GP has been carried out by maximising the logarithm of the likelihood, this effect was particularly evident when larger amounts of data were used; this problem can be reduced when a full Bayesian approach to GP regression is used. Currently we are investigating how the input dimensionality affects GP regression with a general distance matrix W (for a fixed dimensionality of Z).

Acknowledgements

This research forms part of the "Validation and Verification of Neural Network Systems" project funded jointly by EPSRC (GR/K 51792) and British Aerospace. We thank Dr. Andy Wright of BAe for helpful discussions.

References

Breiman, L. (1993). Hinging hyperplanes for regression, classification and function approximation. IEEE Trans. on Information Theory, 39(3):999–1013.

Girosi, F., Jones, M., and Poggio, T.
(1995). Regularization Theory and Neural Networks Architectures. Neural Computation, 7(2):219–269.

Hastie, T. J. and Tibshirani, R. J. (1990). Generalized Additive Models. Chapman and Hall, London.

Neal, R. M. (1996). Bayesian Learning for Neural Networks. Springer. Lecture Notes in Statistics 118.

Press, W., Teukolsky, S., Vetterling, W., and Flannery, B. (1992). Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, second edition.

Rasmussen, C. E. (1996). Evaluation of Gaussian Processes and Other Methods for Nonlinear Regression. PhD thesis, Dept. of Computer Science, University of Toronto.

Whittle, P. (1963). Prediction and Regulation by Linear Least Square Methods. English Universities Press.

Williams, C. K. I. and Rasmussen, C. E. (1996). Gaussian processes for regression. In Touretzky, D. S., Mozer, M. C., and Hasselmo, M. E., editors, Advances in Neural Information Processing Systems 8, pages 514–520. MIT Press.

Williams, P. M. (1996). Conditional multivariate densities. Neural Computation, 8(4).
1998
140
1,498
Utilizing Time: Asynchronous Binding

Bradley C. Love
Department of Psychology
Northwestern University
Evanston, IL 60208

Abstract

Historically, connectionist systems have not excelled at representing and manipulating complex structures. How can a system composed of simple neuron-like computing elements encode complex relations? Recently, researchers have begun to appreciate that representations can extend in both time and space. Many researchers have proposed that the synchronous firing of units can encode complex representations. I identify the limitations of this approach and present an asynchronous model of binding that effectively represents complex structures. The asynchronous model extends the synchronous approach. I argue that our cognitive architecture utilizes a similar mechanism.

1 Introduction

Simple connectionist models can fall prey to the "binding problem". A binding problem occurs when two different events (or objects) are represented identically. For example, representing "John hit Ted" by activating the units JOHN, HIT, and TED would lead to a binding problem because the same pattern of activation would also be used to represent "Ted hit John". The binding problem is ubiquitous and is a concern whenever internal representations are postulated. In addition to guarding against the binding problem, an effective binding mechanism must construct representations that assist processing. For instance, different states of the world must be represented in a manner that assists in discovering commonalities between disparate states, allowing for category formation and analogical processing. Interestingly, new connectionist binding mechanisms [5, 9, 12] utilize time in their operation. Pollack's Recursive Auto-Associative Memory (RAAM) model combines a standard fixed-width multi-layer network architecture with a stack and a simple controller, enabling RAAM to encode hierarchical representations over multiple processing steps.
RAAM requires more time to encode representations as they become more complex, but its space requirements remain constant. The clearest examples of utilizing time are models that perform dynamic binding through synchronous firings of units [17, 5, 12]. Synchrony models explicitly use time to mark relations between units, distributing complex representations across multiple time steps. Most other models neglect the time aspect of representation. Even synchrony models fail to fully utilize time (I will clarify this point in a later section). In this paper, a model is introduced (the asynchronous binding mechanism) that attempts to rectify this situation. The asynchronous approach is similar to the synchronous approach but is more effective in binding complex representations and exploiting time.

2 Utilizing time and the brain

Representational power can be greatly increased by taking advantage of the time dimension of representation. For instance, a telephone would need thousands of buttons to make a call if sequences of digits were not used. From the standpoint of a neuron, taking advantage of timing information increases processing capacity by more than 100-fold [13]. While this suggests that the neural code might utilize both time and space resources, the neuroscience community has not yet arrived at a consensus. While it is known that the behavior of a postsynaptic neuron is affected by the location and arrival times of dendritic input [10], it is generally believed that only the rate of firing (a neuron's firing rate is akin to the activation level of a unit in a connectionist network) can code information, as opposed to the timing of spikes, since neurons are noisy devices [14]. However, findings that are taken as evidence for rate coding, like elevated firing rates in memory retention tasks [8], can often be reinterpreted as part of complex cortical events that extend through time [1].
In accord with this view, recent empirical findings suggest that the timing of spikes (e.g., firing patterns, intervals) is also part of the neural code [4, 16]. Contrary to the rate-based view (which holds that only the firing rate of a neuron encodes information), these studies suggest that the timing of spikes encodes information (e.g., when two neurons repeatedly spike together it signifies something different than when they fire out of phase, even if their firing rates are identical in both cases). Behavioral findings also appear consistent with the idea that time is used to construct complex representations. Behavioral research in illusory conjunction phenomena [15] and sentence processing performance [11] all suggests that bindings or relations are established through time, with bindings becoming more certain as processing proceeds. In summary, early in processing humans can gauge which representational elements are relevant while remaining uncertain about how these elements are interrelated.

3 Dynamic binding through synchrony

Given the demands placed on a representational system, a system that utilizes dynamic binding through synchrony would seem to be a good candidate mental architecture (though, as we will see, limitations arise when representing complex structures). A synchronous binding account of our mental architecture is consistent (at a general level) with behavioral findings, the intuition that complex representations are distributed across time, and the evidence that neural temporal dynamics code information. Synchrony seems to offer the power to recombine a finite set of elements in a virtually unlimited number of ways (the defining characteristic of a discrete combinatorial system). While synchrony models seem appropriate for modeling certain behaviors, dynamic binding through synchrony does not seem to be an appropriate mechanism for establishing complex recursive bindings [2].
In a synchronous dynamic binding system, the distinction between a slot and a filler is lost, since bindings are not directional (i.e., which unit is a predicate and which unit is an argument is not clear). The slot and the filler simply share the same phase. In this sense, the mechanism is more akin to a grouping mechanism than to a binding mechanism. Grouping units together indicates that the units are part of the same representation, but does not sort out the relations among the units as binding does. Synchrony runs into trouble when a unit has to act simultaneously as a slot and a filler. For instance, to represent embedded propositions with synchronous binding, a controller needs to be added. A structure with embedding, like A→B→C, could be represented with synchronous firings if A and B fired synchronously and then B and C fired synchronously. Still, synchronous binding blurs the distinction between a slot and a filler, necessitating that A, B, and C be marked as slots or fillers to unambiguously represent the simple A→B→C structure. Notice that B must be marked as a slot when it fires synchronously with A, but must be marked as a filler when it synchronously fires with C. When representing embedded structures, the synchronous approach becomes complicated (i.e., simple connections are not sufficient to modulate firing patterns) and rigid (i.e., parallelism and flexibility are lost when a unit has to be either a slot or a filler). Ideally, units would be able to act simultaneously as slots and fillers, instead of alternating between these two structural roles.

4 The asynchronous approach

While synchrony models utilize some timing information, other valuable timing information is discarded as noise, making it difficult to represent multiple levels of structure. If A fired slightly before B, which fired slightly before C, asynchronous timing information (ordering information) would be available.
This ordering information allows for directional binding relations and alleviates the need to label units as slots or fillers. Notice that B can act simultaneously as a slot and a filler. Directional bindings can unambiguously represent complex structures. Phase locking and wave-like patterns of firing need not occur during asynchronous binding. For instance, the firing pattern that encodes a structure like A→B→C does not need to be orderly (i.e., starting with A and ending with C). To encode A→B→C, unit B's firing schedule must observably speed up (on average) after unit A fires, while C's must speed up after B fires. For example, if we only considered the time window immediately after a unit fires, a firing sequence of B, C, no unit fires, A, and then B would provide evidence for the structure A→B→C. Of course, if A, B, and C fire periodically with stochastic schedules that are influenced by other units' firings, spurious binding evidence will accrue (e.g., occasionally, C will fire and A will fire in the next time step). Luckily, these accidents will be less frequent than events that support the intended bindings. As binding evidence is accumulated over time, binding errors will become less likely. Interestingly, the asynchronous mechanism can also represent structures through an inhibitory process that mirrors the excitatory process described above. A→B→C could be represented asynchronously if A was less likely to fire after B fired and B was less likely to fire after C fired. An inhibitory (negative) connection from B to A is in some ways equivalent to an excitatory (positive) connection from A to B.

4.1 The mathematical expression of the model

The previous discussion of the asynchronous approach can be formalized. Below is a description of an asynchronous model that I have implemented.
4.1.1 The anatomy of a unit

Individual units, when unaffected by other units, will fire periodically when active:

if R_t ≥ 1, then O_{t+1} = 1; otherwise O_{t+1} = 0.   (1)

where O_{t+1} is the unit's output (at time t+1) and R_t is the unit's output refractory, which is randomly set (after the unit fires) to a value drawn from the uniform distribution between 0 and 1 and is incremented at each time step by some constant (which was set to .1 in all simulations). Notice that a unit produces an output one time step after its output refractory reaches threshold.

4.1.2 A unit's behavior in the presence of other units

A unit alters its output refractory if it receives a signal (via a connection) from a unit that has just fired (i.e., a unit with a positive output). For example, if unit A fires (its output is 1) and there is a connection to unit B of strength +.3, then B's output refractory will be incremented by +.3, enabling unit B to fire during the next time step or at least decreasing the time until B fires. Alternatively, negative (inhibitory) connections lower refractory. Two unconnected units will tend to fire independently of each other, providing little evidence for a binding relation. Again, over a small time window, two units may fire contiguously by chance, but over many firings the evidence for a binding will approach zero.

4.1.3 Interpreting firing patterns

Every time a unit fires, it creates evidence for binding hypotheses. The critical issue is how to collect and evaluate evidence for bindings. There are many possible evidence functions that interpret firing patterns in a sensible fashion. One simple function is to have the evidence for two units binding decrease linearly as the time between their firings increases. Evidence is updated every time step according to the following equation:

if p ≥ (t_Uj − t_Ui) ≥ 1, then ΔE_ij = −(1/p)(t_Uj − t_Ui) + (1/p) + 1.
(2)

where p is the size of the window for considering binding evidence (i.e., if p is 5, then units firing 5 time steps apart still generate binding evidence), t_Ui is the most recent time step at which unit U_i fired, and ΔE_ij is the change in the amount of evidence for U_i binding to U_j. Of course, some evidence will be spurious. The following decision rule can be used to determine if two units share a binding relation:

if (E_ij − E_ji) > k, then U_i binds to U_j.   (3)

where k is some threshold greater than 0. This decision rule is formally equivalent to the diffusion model, which is a type of random walk model [6]. Equations 2 and 3 are very simple. Other, more sophisticated methods can be used for collecting and evaluating binding evidence.

4.2 Performance of the Asynchronous Mechanism

In this section, the asynchronous binding mechanism's performance characteristics are examined. In particular, the model's ability to represent tree structures of varying complexity was explored. Tree structures can be used to represent complex relational information, like the parse of a sentence.

[Figure 1: Performance curves for the 9 different structures are shown; the left panels organize the curves by branching factor and the right panels by depth, plotting performance against processing time.]
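Equations 1-3 can be sketched as a small simulation. The class structure, parameter choices, and names below are my own illustrative assumptions, not the paper's code:

```python
import random

class AsyncBindingNet:
    """Minimal sketch of the asynchronous binding model (Eqs. 1-3)."""

    def __init__(self, n, connections, p=5, seed=0):
        self.rng = random.Random(seed)
        self.n = n
        self.conn = connections          # conn[(i, j)] = weight from unit i to unit j
        self.p = p                       # evidence window (Eq. 2)
        self.R = [self.rng.random() for _ in range(n)]   # output refractory states
        self.last_fired = [None] * n     # most recent firing time of each unit
        self.E = [[0.0] * n for _ in range(n)]           # accrued binding evidence
        self.t = 0

    def step(self):
        # Eq. 1: a unit fires once its output refractory reaches threshold.
        fired = [i for i in range(self.n) if self.R[i] >= 1.0]
        for i in fired:
            self.last_fired[i] = self.t
            self.R[i] = self.rng.random()        # random uniform reset after firing
        # Refractory drifts up by a constant (.1 in the simulations).
        for i in range(self.n):
            if i not in fired:
                self.R[i] += 0.1
        # Connections from units that just fired nudge their targets' refractory;
        # negative (inhibitory) connections lower it.
        for i in fired:
            for (src, dst), w in self.conn.items():
                if src == i:
                    self.R[dst] += w
        # Eq. 2: evidence that i binds to j accrues when j fires 1..p steps after i.
        for j in fired:
            for i in range(self.n):
                if i == j or self.last_fired[i] is None:
                    continue
                dt = self.t - self.last_fired[i]
                if 1 <= dt <= self.p:
                    self.E[i][j] += -(1.0 / self.p) * dt + (1.0 / self.p) + 1.0
        self.t += 1

    def binds(self, i, j, k=5.0):
        # Eq. 3: diffusion-style decision rule on the evidence difference.
        return self.E[i][j] - self.E[j][i] > k
```

Run long enough, the net evidence difference E_ij − E_ji drifts toward the intended directional bindings while spurious coincidences largely cancel.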
An advantage of using tree structures to measure performance is that the complexity of a tree can be easily described by two factors: trees can vary in their depth and branching. In the simulations reported here, trees had a branching factor and depth of either 1, 2, or 3. These two factors were crossed, yielding 9 different tree structures. This design makes it possible to assess how the model processes structures of varying complexity. One sensible prediction (given our intuitions about how we process structured representations) is that trees with greater depth and branching will take longer to represent. In the simulations reported here, both positive and negative connections were used simultaneously. For instance, in a tree structure, if A was intended to bind to B, A's connection to B was set to +.1 and B's connection to A was set to −.1. The combination of both connection types yields the best performance. In these simulations both excitatory and inhibitory binding connection values were set relatively low (all binding connections were of size .1), providing a strict test of the model's sensitivity. The low connection values prevented bound units from establishing tight couplings (characteristic of bound units in synchrony models). For example, with an excitatory connection from A to B of .1, A's firing does not ensure that B will fire in the next time step (or the next few time steps, for that matter). The lack of a tight coupling requires the model to be more sensitive to how one unit affects another unit's firing schedule. With all connections of size .1, firing patterns representing complex structures will appear chaotic and disordered. In all simulations, the time window for considering binding evidence was 5 time steps (i.e., Equation 2 was used with p set to 5). Performance was measured by calculating the percent bindings correct.
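The tree-structured wiring used in these simulations (reciprocal +.1 / −.1 connections along each intended binding) could be set up as follows; the unit numbering and breadth-first layout are my own illustrative choices:

```python
def tree_connections(branching, depth, w=0.1):
    """Sketch: wire a tree-structured binding problem. Each intended binding
    parent->child gets an excitatory connection of +w and a reciprocal
    inhibitory connection of -w. Unit 0 is the root."""
    conn = {}
    frontier = [0]       # current level of the tree, starting at the root
    next_id = 1
    for _ in range(depth):
        new_frontier = []
        for parent in frontier:
            for _ in range(branching):
                child = next_id
                next_id += 1
                conn[(parent, child)] = +w    # excitatory: parent speeds child
                conn[(child, parent)] = -w    # inhibitory mirror
                new_frontier.append(child)
        frontier = new_frontier
    return conn, next_id                      # connections and number of units
```

For branching 2 and depth 2, this yields 7 units and 6 intended bindings (12 directed connections).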
The bindings the model settled upon were determined by calculating the number of bindings in the intended structure. The model then created a structure with this number of bindings (this is equivalent to treating k as a free parameter), choosing the bindings it believed to be most likely (based on accrued evidence). The model was correct when the bindings it believed to be present corresponded to the intended bindings. For each of the 9 structures (3 levels of depth by 3 levels of branching), hundreds of trials were run (the mechanism is stochastic) until performance curves became smooth. The model's performance was measured every 25th time step up to the 2500th time step. Performance (averaged across trials) for all structures is shown in Figure 1. Any viewable difference between performance curves is statistically significant. As predicted, there was a main effect for both branching and depth. The left panels of Figure 1 organize the data by branching factor, revealing a systematic effect of depth. The right panels are organized by depth and reveal a systematic effect of branching. As structures become more complex, they appear to take longer to represent.

5 Conclusions

The ability to effectively represent and manipulate complex knowledge structures is central to human cognition [3]. Connectionist models generally lack this ability, making it difficult to give a connectionist account of our mental architecture. The asynchronous mechanism provides a connectionist framework for representing structures in a way that is biologically, computationally, and behaviorally feasible. The mechanism establishes bindings over time using simple neuron-like computing elements. The asynchronous approach treats bindings as directional and does not blur the distinction between a slot and a filler as the synchronous approach does.
The asynchronous mechanism builds representations that can be differentiated from each other, capturing important differences between representational states. The representations that the asynchronous mechanism builds can also be easily compared, and commonalities between disparate states can be extracted by analogical processes, allowing for generalization and feature discovery. In fact, an analogical (i.e., graph) matcher has been built using the asynchronous mechanism [7]. Variants of the model need to be explored; this paper only outlines the essentials of the architecture. Synchronous dynamic binding models were partly inspired by work in neuroscience. Hopefully the asynchronous dynamic binding model will now inspire neuroscience researchers. Some evidence for rate-based (spatially based) neural codes has been revisited and viewed as consistent with more complex temporal codes [1]; perhaps evidence for synchrony can be subjected to more sophisticated analyses and be better construed as evidence for the asynchronous mechanism.

Acknowledgments

This work was supported by the Office of Naval Research under the National Defense Science and Engineering Graduate Fellowship Program. I would like to thank John Hummel for his helpful comments.

References

[1] M. Abeles, H. Bergman, E. Margalit, and E. Vaadia. Spatiotemporal firing patterns in the frontal cortex of behaving monkeys. Journal of Neurophysiology, 70:1629-1638, 1993.
[2] E. Bienenstock. Composition. In A. Aertsen and V. Braitenberg, editors, Brain Theory: Biological Basis and Computational Principles. Elsevier, New York, 1996.
[3] D. Gentner and A. B. Markman. Analogy-watershed or waterloo? Structural alignment and the development of connectionist models of analogy. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 855-862. Morgan Kaufmann Publishers, San Mateo, CA, 1993.
[4] C. M. Gray and W. Singer.
Stimulus specific neuronal oscillations in orientation columns of cat visual cortex. Proceedings of the National Academy of Sciences, USA, 86:1698-1702, 1989.
[5] J. E. Hummel and I. Biederman. Dynamic binding in a neural network for shape recognition. Psychological Review, 99:480-517, 1992.
[6] D. R. J. Laming. Information Theory of Choice Reaction Time. Oxford University Press, New York, 1968.
[7] B. C. Love. Asynchronous connectionist binding. (Under review), 1998.
[8] Y. Miyashita and H. S. Chang. Neuronal correlate of pictorial short-term memory in primate temporal cortex. Nature, 331:68-70, 1988.
[9] J. Pollack. Recursive distributed representations. Artificial Intelligence, 46:77-105, 1990.
[10] W. Rall. Dendritic locations of synapses and possible mechanisms for the monosynaptic EPSP in motoneurons. Journal of Neurophysiology, 30:1169-1193, 1967.
[11] R. Ratcliff and G. McKoon. Speed and accuracy in the processing of false statements about semantic information. Journal of Experimental Psychology: Learning, Memory, & Cognition, 8:16-36, 1989.
[12] L. Shastri and V. Ajjanagadde. From simple associations to systematic reasoning: A connectionist representation of rules, variables, and dynamic binding using temporal synchrony. Behavioral and Brain Sciences, 16:417-494, 1993.
[13] W. Softky. Fine analog coding minimizes information transmission. Neural Networks, 9:15-24, 1996.
[14] A. C. Tang and T. J. Sejnowski. An ecological approach to the neural code. In Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society, page 852, Mahwah, NJ, 1996. Erlbaum.
[15] A. Treisman and H. Schmidt. Illusory conjunctions in the perception of objects. Cognitive Psychology, 14:107-141, 1982.
[16] E. Vaadia, I. Haalman, M. Abeles, and H. Bergman. Dynamics of neuronal interactions in monkey cortex in relation to behavioral events. Nature, 373:515-518, 1995.
[17] C. von der Malsburg. The correlation theory of brain function.
Technical Report 81-2, Max-Planck-Institute for Biophysical Chemistry, Göttingen, Germany, 1981.
1998
Very Fast EM-based Mixture Model Clustering using Multiresolution kd-trees

Andrew W. Moore
Robotics Institute, Carnegie Mellon University
Pittsburgh, PA 15213. awm@cs.cmu.edu

Abstract

Clustering is important in many fields including manufacturing, biology, finance, and astronomy. Mixture models are a popular approach due to their statistical foundations, and EM is a very popular method for finding mixture models. EM, however, requires many accesses of the data, and thus has been dismissed as impractical (e.g. [9]) for data mining of enormous datasets. We present a new algorithm, based on the multiresolution kd-trees of [5], which dramatically reduces the cost of EM-based clustering, with savings rising linearly with the number of datapoints. Although presented here for maximum likelihood estimation of Gaussian mixture models, it is also applicable to non-Gaussian models (provided class densities are monotonic in Mahalanobis distance), mixed categorical/numeric clusters, and Bayesian methods such as Autoclass [1].

1 Learning Mixture Models

In a Gaussian mixture model (e.g. [3]), we assume that datapoints {x_1 ... x_R} have been generated independently by the following process. For each x_i in turn, nature begins by randomly picking a class, c_j, from a discrete set of classes {c_1 ... c_N}. Then nature draws x_i from an M-dimensional Gaussian whose mean μ_j and covariance Σ_j depend on the class. Thus we have

P(x_i | θ) = Σ_{j=1}^{N} P(c_j | θ) P(x_i | c_j, μ_j, Σ_j)   (1)

where θ denotes all the parameters of the mixture: the class probabilities p_j (where p_j = P(c_j | θ)), the class centers μ_j, and the class covariances Σ_j. The job of a mixture model learner is to find a good estimate of the model, and Expectation Maximization (EM), also known as "Fuzzy k-means", is a popular algorithm for doing so.
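One conventional EM iteration for such a mixture can be sketched as follows. This is an illustrative version, not the paper's implementation: it assumes diagonal covariances for brevity, and all names are mine:

```python
import math

def em_iteration(data, priors, means, variances):
    """One EM iteration for a diagonal-covariance Gaussian mixture: compute
    each point's class ownership weights, then re-estimate the parameters
    as ownership-weighted averages."""
    R, M, N = len(data), len(data[0]), len(priors)

    def gauss(x, mu, var):
        # Diagonal Gaussian density, computed in log space for stability.
        return math.exp(sum(-0.5 * math.log(2 * math.pi * v)
                            - (xi - m) ** 2 / (2 * v)
                            for xi, m, v in zip(x, mu, var)))

    sw = [0.0] * N                        # per-class sum of ownership weights
    swx = [[0.0] * M for _ in range(N)]   # weighted sum of x, per class
    swxx = [[0.0] * M for _ in range(N)]  # weighted sum of x^2, per dimension
    for x in data:
        a = [gauss(x, means[j], variances[j]) for j in range(N)]
        denom = sum(a[j] * priors[j] for j in range(N))
        for j in range(N):
            w = a[j] * priors[j] / denom  # ownership of x by class j (Bayes)
            sw[j] += w
            for d in range(M):
                swx[j][d] += w * x[d]
                swxx[j][d] += w * x[d] ** 2
    new_priors = [sw[j] / R for j in range(N)]
    new_means = [[swx[j][d] / sw[j] for d in range(M)] for j in range(N)]
    new_vars = [[swxx[j][d] / sw[j] - new_means[j][d] ** 2 for d in range(M)]
                for j in range(N)]
    return new_priors, new_means, new_vars
```

The inner loop visits every datapoint-class pair, which is exactly the per-iteration cost the paper sets out to reduce.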
The t-th iteration of EM begins with an estimate θ_t of the model, and ends with an improved estimate θ_{t+1}. Write

θ_t = (p_1 ... p_N, μ_1 ... μ_N, Σ_1 ... Σ_N).   (2)

EM iterates over each point-class combination, computing for each class c_j and each datapoint x_i the extent to which x_i is "owned" by c_j. The ownership is simply w_ij = P(c_j | x_i, θ_t). Throughout this paper we will use the following notation:

a_ij = P(x_i | c_j, θ_t)
w_ij = P(c_j | x_i, θ_t) = a_ij p_j / Σ_{k=1}^{N} a_ik p_k   (by Bayes' rule)

Then the new value of the centroid, μ_j, of the j-th class in the new model θ_{t+1} is simply the weighted mean of all the datapoints, using the values {w_1j, w_2j, ..., w_Rj} as the weights. A similar weighted procedure gives the new estimates of the class probabilities and the class covariances:

p_j ← sw_j / R,   μ_j ← (1 / sw_j) Σ_{i=1}^{R} w_ij x_i,   where sw_j = Σ_{i=1}^{R} w_ij.

Thus each iteration of EM visits every datapoint-class pair, meaning NR evaluations of an M-dimensional Gaussian, and so needing O(M²NR) arithmetic operations per iteration. This paper aims to reduce that cost.

An mrkd-tree (Multiresolution KD-tree), introduced in [2] and developed further in [5], is a binary tree in which each node is associated with a subset of the datapoints. The root node owns all the datapoints. Each non-leaf node has two children, defined by a splitting dimension ND.SPLITDIM and a splitting value ND.SPLITVAL.
The two children divide their parent's datapoints between them, with the left child owning those datapoints that are strictly less than the splitting value in the splitting dimension, and the right child owning the remainder of the parent's datapoints:

x_i ∈ ND.LEFT  ⟺  x_i[ND.SPLITDIM] < ND.SPLITVAL and x_i ∈ ND   (4)
x_i ∈ ND.RIGHT ⟺  x_i[ND.SPLITDIM] ≥ ND.SPLITVAL and x_i ∈ ND   (5)

The distinguishing feature of mrkd-trees is that their nodes contain the following:

• ND.NUMPOINTS: The number of points owned by ND (equivalently, the average density in ND).
• ND.CENTROID: The centroid of the points owned by ND (equivalently, the first moment of the density below ND).
• ND.COV: The covariance of the points owned by ND (equivalently, the second moment of the density below ND).
• ND.HYPERRECT: The bounding hyper-rectangle of the points below ND.

We construct mrkd-trees top-down, identifying the bounding box of the current node, and splitting in the center of the widest dimension. A node is declared to be a leaf, and is left unsplit, if the widest dimension of its bounding box is ≤ some threshold, MBW. If MBW is zero, then all leaf nodes denote singleton or coincident points, the tree has O(R) nodes and so requires O(M²R) memory, and (with some care) the construction cost is O(M²R + MR log R). In practice, we set MBW to 1% of the range of the datapoint components. The tree size and construction cost are thus considerably less than these bounds because in dense regions, tiny leaf nodes are able to summarize dozens of datapoints. Note too that the cost of tree-building is amortized: the tree must be built
once, yet EM performs many iterations. To perform an iteration of EM with the mrkd-tree, we call the function MAKESTATS (described below) on the root of the tree. MAKESTATS(ND, θ_t) outputs 3N values:

(sw_1, sw_2, ..., sw_N, swx_1, ..., swx_N, swxx_1, ..., swxx_N)

where

sw_j = Σ_{x_i ∈ ND} w_ij,   swx_j = Σ_{x_i ∈ ND} w_ij x_i,   swxx_j = Σ_{x_i ∈ ND} w_ij x_i x_i^T.

The results of MAKESTATS(ROOT) provide sufficient statistics to construct θ_{t+1}:

p_j ← sw_j / R,   μ_j ← swx_j / sw_j,   Σ_j ← swxx_j / sw_j − μ_j μ_j^T.   (7)

If MAKESTATS is called on a leaf node, we simply compute, for each j,

w_j = P(c_j | x, θ_t) = P(x | c_j, θ_t) p(c_j | θ_t) / Σ_{k=1}^{N} P(x | c_k, θ_t) p(c_k | θ_t)   (8)

where x = ND.CENTROID, and where all the items in the right hand equation are easily computed. We then return sw_j = w_j × ND.NUMPOINTS, swx_j = w_j × ND.NUMPOINTS × x, and swxx_j = w_j × ND.NUMPOINTS × ND.COV. The reason we can do this is that, if the leaf node is very small, there will be little variation in w_ij for the points owned by the node, and so, for example, Σ w_ij x_i ≈ w_j Σ x_i. In the experiments below we use very tiny leaf nodes, ensuring accuracy.

If MAKESTATS is called on a non-leaf node, it can easily compute its answer by recursively calling MAKESTATS on its two children and then returning the sum of the two sets of answers. In general, that is exactly how we will proceed. If that was the end of the story, we would have little computational improvement over conventional EM, because one pass would fully traverse the tree, which contains O(R) nodes, doing O(NM²) work per node. We will win if we ever spot that at some intermediate node, we can prune, i.e., evaluate the node as if it were a leaf, without searching its descendents, but without
introducing significant error into the computation. To do this, we will compute, for each j, the minimum and maximum w_ij that any point inside the node could have. This procedure is more complex than in the case of locally weighted regression [5]. We wish to compute w_j^min and w_j^max for each j, where w_j^min is a lower bound on min_{x_i ∈ ND} w_ij and w_j^max is an upper bound on max_{x_i ∈ ND} w_ij. This is hard because w_j^min is determined not only by the mean and covariance of the j-th class but also by the other classes. For example, in Figure 1, w_32 is approximately 0.5, but it would be much larger if c_1 were further to the left, or had a thinner covariance. But remember that the w_ij's are defined in terms of the a_ij's, thus: w_ij = a_ij p_j / Σ_{k=1}^{N} a_ik p_k. We can put bounds on the a_ij's relatively easily. It simply requires that for each j we compute the closest and furthest point from μ_j within ND.HYPERRECT, using the Mahalanobis distance MHD(x, x') = (x − x')^T Σ_j^{-1} (x − x'). (Computing these points requires non-trivial computational geometry because the covariance matrices are not necessarily axis-aligned; there is no space here for details.) Call these shortest and furthest squared distances MHD^min and MHD^max. Then

a_j^min = (2π)^{-M/2} |Σ_j|^{-1/2} exp(−MHD^max / 2)   (9)

is a lower bound for min_{x_i ∈ ND} a_ij, with a similar definition of a_j^max.

[Figure 1: The rectangle denotes a hyperrectangle in the mrkd-tree. The small squares denote datapoints "owned" by the node. Suppose there are just two classes, with the given means, and covariances depicted by the ellipses. Small circles indicate the locations within the node for which a_j (i.e., P(x | c_j)) would be extremized.]
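These density bounds can be sketched for the simplified case of an axis-aligned (diagonal) covariance, where the closest and furthest points of the hyperrectangle are easy to find per dimension; the general case needs the non-trivial computational geometry noted above. The function name and diagonal restriction are my own assumptions:

```python
import math

def a_bounds(lo, hi, mu, var):
    """Sketch: bound the class density P(x | c_j) over a node's hyperrectangle
    [lo, hi], assuming an axis-aligned (diagonal) covariance `var`.
    Returns (a_min, a_max)."""
    mhd_min = mhd_max = 0.0
    for l, h, m, v in zip(lo, hi, mu, var):
        nearest = min(max(m, l), h)            # clamp the mean into [l, h]
        farthest = l if (m - l) > (h - m) else h
        mhd_min += (nearest - m) ** 2 / v
        mhd_max += (farthest - m) ** 2 / v
    M = len(mu)
    norm = (2 * math.pi) ** (M / 2) * math.sqrt(math.prod(var))
    # The Gaussian density decreases monotonically in Mahalanobis distance,
    # so the farthest point gives the lower bound, the nearest the upper bound.
    return math.exp(-0.5 * mhd_max) / norm, math.exp(-0.5 * mhd_min) / norm
```

If the mean lies inside the rectangle, the nearest point is the mean itself and the upper bound equals the density's peak value.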
Then write

min_{x_i ∈ ND} w_ij = min_{x_i ∈ ND} a_ij p_j / Σ_k a_ik p_k = min_{x_i ∈ ND} a_ij p_j / (a_ij p_j + Σ_{k≠j} a_ik p_k) ≥ a_j^min p_j / (a_j^min p_j + Σ_{k≠j} a_k^max p_k) = w_j^min

where w_j^min is our lower bound. There is a similar definition for w_j^max. The inequality is proved by elementary algebra, and requires that all quantities are positive (which they are). We can often tighten the bounds further using a procedure that exploits the fact that Σ_j w_ij = 1, but space does not permit further discussion.

We will prune if w_j^min and w_j^max are close for all j. What should be the criterion for closeness? The first idea that springs to mind is: prune if ∀j (w_j^max − w_j^min < τ). But such a simple criterion is not suitable: some classes may be accumulating very large sums of weights, whilst others may be accumulating very small sums. The large-sum-weight classes can tolerate far looser bounds than the small-sum-weight classes. Here, then, is a more satisfactory pruning criterion: prune if ∀j (w_j^max − w_j^min < τ w_j^total), where w_j^total is the total weight awarded to class j over the entire dataset, and τ is some small constant. Sadly, w_j^total is not known in advance, but happily we can find a lower bound on w_j^total of w_j^sofar + ND.NUMPOINTS × w_j^min, where w_j^sofar is the total weight awarded to class j so far during the search over the kd-tree.

The algorithm as described so far performs divide-and-conquer-with-cutoffs on the set of datapoints. In addition, it is possible to achieve an extra acceleration by means of divide and conquer on the class centers. Suppose there were N = 100 classes. Instead of considering all 100 classes at all nodes, it is frequently possible to determine at some node that the maximum possible weight w_j^max for some class j is less than a minuscule fraction of the minimum possible weight w_k^min for some other class k.
Thus if we ever find that in some node w_j^max < λ w_k^min, where λ = 10^{-4}, then class c_j is removed from consideration from all descendents of the current node. Frequently this means that near the tree's leaves, only a tiny fraction of the classes compete for ownership of the datapoints, and this leads to large time savings.

2 Results

We have subjected this approach to numerous Monte-Carlo empirical tests. Here we report on one set of such tests, created with the following methodology.

• We randomly generate a mixture of Gaussians in M-dimensional space (by default M = 2). The number of Gaussians, N, is, by default, 20. Each Gaussian has a mean lying within the unit hypercube, and a covariance matrix randomly generated with diagonal elements between 0 and 4σ² (by default, σ = 0.05) and random non-diagonal elements that ensure symmetric positive definiteness. Thus the distance from a Gaussian center to its 1-standard-deviation contour is of the order of magnitude of σ.
• We randomly generate a dataset from the mixture model. The number of points, R, is (by default) 160,000. Figure 2 shows a typical generated set of Gaussians and datapoints.
• We then build an mrkd-tree for the dataset, and record the memory requirements and real time to build (on a Pentium 200MHz, in seconds).
• We then run EM on the data. EM begins with an entirely different set of Gaussians, randomly generated using the same procedure.
• We run 5 iterations of the conventional EM algorithm and the new mrkd-tree-based algorithm. The new algorithm uses a default value of 0.1 for τ. We record the real time (in seconds) for each iteration of each algorithm, and we also record the mean log-likelihood score (1/R) Σ_{i=1}^{R} log P(x_i | θ_t) for the t-th model for both algorithms.
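The covariance generation in the methodology above can be sketched like this; the paper does not give its exact construction, so the A·Aᵀ recipe below is an assumption that merely guarantees symmetric positive definiteness with the required diagonal range:

```python
import random

def random_covariance(M, sigma, rng):
    """Sketch: generate a random symmetric positive-definite covariance
    with diagonal elements in (0, 4*sigma^2], roughly as in the test
    methodology (the exact construction used in the paper is not given)."""
    # A @ A^T is symmetric positive semi-definite; a random A makes it
    # positive definite with probability 1.
    A = [[rng.gauss(0.0, 1.0) for _ in range(M)] for _ in range(M)]
    C = [[sum(A[i][k] * A[j][k] for k in range(M)) for j in range(M)]
         for i in range(M)]
    # Rescale (D C D with diagonal D preserves positive definiteness) so
    # each diagonal element lands in (0, 4*sigma^2].
    scale = [(rng.random() * 4 * sigma ** 2 / C[i][i]) ** 0.5 for i in range(M)]
    return [[C[i][j] * scale[i] * scale[j] for j in range(M)] for i in range(M)]
```

The diagonal rescaling keeps the matrix symmetric positive definite while pinning each variance to the stated range, so the 1-standard-deviation contour stays on the order of σ.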
Figure 3 shows the nodes that are visited during Iteration 2 of the Fast EM with N = 6 classes. Table 1 shows the detailed results as the experimental parameters are varied. Speedups vary from 8-fold to 1000-fold. There are 100-fold speedups even with very wide (non-local) Gaussians. In other experiments, similar results were also obtained on real datasets that disobey the Gaussian assumption. There too, we find one- and two-order-of-magnitude computational advantages with indistinguishable statistical behavior (no better and no worse) compared with conventional EM.

Real Data: Preliminary experiments in applying this to large datasets have been encouraging. For three-dimensional galaxy clustering with 800,000 galaxies and 1000 clusters, traditional EM needed 35 minutes per iteration, while the mrkd-trees required only 8 seconds. With 1.6 million galaxies, traditional EM needed 70 minutes and mrkd-trees required 14 seconds.

3 Conclusion

The use of variable resolution structures for clustering has been suggested in many places (e.g. [7, 8, 4, 9]). The BIRCH system, in particular, is popular in the database community. BIRCH is, however, unable to identify second-moment features of clusters (such as non-axis-aligned spread). Our contributions have been the use of a multi-resolution approach, with associated computational benefits, and the introduction of an efficient algorithm that leaves the statistical aspects of mixture model estimation unchanged. The growth of recent data mining algorithms that are not based on statistical foundations has frequently been justified by the following statement: using state-of-the-art statistical techniques is too expensive because such techniques were not designed to handle large datasets and become intractable with millions of datapoints.
In earlier work we provided evidence that this statement may

Effect of Number of Datapoints, R: As R increases, so does the computational advantage, essentially linearly. The tree-build time (11 seconds at worst) is a tiny cost compared with even just one iteration of Regular EM (2385 seconds, on the big dataset). FinalSlowSecs: 2385. FinalFastSecs: 3.

Effect of Number of Dimensions, M: As with many kd-tree algorithms, the benefits decline as dimensionality increases, yet even in 6 dimensions there is an 8-fold advantage. FinalSlowSecs: 2742. FinalFastSecs: 310.25.

Effect of Number of Classes, N: Conventional EM slows down linearly with the number of classes. Fast EM is clearly sublinear, with a 70-fold speedup even with 320 classes. Note how the tree size grows. This is because more classes mean a more uniform data distribution and fewer datapoints "sharing" tree leaves. FinalSlowSecs: 9278. FinalFastSecs: 143.3.

Effect of Tau, τ: The larger τ, the more willing we are to prune during the tree search, and thus the faster we search, but the less accurately we mirror EM's statistical behavior. Indeed, when τ is large, the discrepancy in the log likelihood is relatively large. FinalSlowSecs: 584.5.

Effect of Standard Deviation, σ: Even with very wide Gaussians, with wide support, we still get large savings. The nodes that are pruned in these cases are rarely nodes with one class owning all the probability, but instead are nodes where all classes have non-zero, but little varying, probability. FinalSlowSecs: 58.1. FinalFastSecs: 4.75.

[Graphs: speedup factor plotted against number of points (in thousands), number of inputs (1-6), number of centers (5-320), τ, and σ.]

Table 1: In all the above results all parameters were held at their default values except for one, which varied as shown in the graphs.
Each graph shows the factor by which the new EM is faster than the conventional EM. Below each graph is the time to build the mrkd-tree in seconds and the number of nodes in the tree. Note that although the tree-building cost is not included in the speedup calculation, it is negligible in all cases, especially considering that only one tree build is needed for all EM iterations. Does the approximate nature of this process result in inferior clusters? The answer is no: the quality of clusters is indistinguishable between the slow and fast methods when measured by log-likelihood and when viewed visually.

[Figure 2: A typical set of Gaussians generated by our random procedure. They in turn generate the datasets upon which we compare the performance of the old and new implementations of EM.]

[Figure 3: The ellipses show the model θ_t at the start of an EM iteration. The rectangles depict the mrkd-tree nodes that were pruned. Observe larger rectangles (and larger savings) in areas with less variation in class probabilities. Note that this is not merely able to prune only where the data density is low.]
[s] A . W Moore, J Schneider. alld K DEcng EfficI",nt Locally \\'Eclghted Polynomi a l R Ecgresslon PrediCtions In D Fisher, editor, P"UCOedlTlYS ,'f the J 9Y7 ITlt~nllltlonal !\laehllLe Le(trnmg Cunlf 7'enCf. 1\10rga n Kaufmann, 1 ~19 7 [6) Andrew \V Moore and 1\t. S. Lee Cached SuffiCient Statistics for EffiCient 1\1achm'" Learnmg With Large Datasets Journal oj A,'tljicll1l Intelllyence Research, 8 , March 1888. [7) S M . Omohu ndro . Efficient Algortthms With Neural Network BehaVIOu r. JOU7'T1l11 of Complex Systems, 1(2 ):2 73-347, 1987. [8) S. M Omohundro Burnptrees for Efficient FlmctlOn , Const ralllt , a nd C lassificaflon Learning. In R . P. Lippmann, J E. Moody. and D S. Touretzky, editors, Advances tTl Neural Inform atIOn ProCfssmg S!jstElns '3 Morgan Kaufmann, 18~ll [~7) T. Zhang, R . Ramakrtshna n, and M Llvny, BIRC H ' An Efficient Data Clustering Method for \'",ry Large D a taba~,"s In Proceedwgs of th e FIfteenth AC'.\J .. ;'JGACT-SIGMOD-8IGART ::"!jmpollum 0'1 Pnn np ies of Database Sys tems: PODS 1991>. Assn for Computing l\lachmery, EI~II).