Gaussian Processes in Reinforcement Learning

Carl Edward Rasmussen and Malte Kuss
Max Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
{carl,malte.kuss}@tuebingen.mpg.de

Abstract

We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation of the value function in closed form. The resulting policy iteration algorithm is demonstrated on a simple problem with a two-dimensional state space. Further, we speculate that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the focus of much of reinforcement learning.

1 Introduction

Model-based control of discrete-time non-linear dynamical systems is typically complicated by the existence of multiple relevant time scales: a short time scale (the sampling time) on which the controller makes decisions and where the dynamics are simple enough to be conveniently captured by a model learned from observations, and a longer time scale which captures the long-term consequences of control actions. For most non-trivial (non-minimum-phase) control tasks a policy relying solely on short-time rewards will fail. In reinforcement learning this problem is explicitly recognized by the distinction between short-term (reward) and long-term (value) desiderata. The consistency between short- and long-term goals is expressed by the Bellman equation; for discrete states s and actions a:
$$V^\pi(s) = \sum_a \pi(s,a) \sum_{s'} P^a_{ss'} \left[ R^a_{ss'} + \gamma V^\pi(s') \right] \qquad (1)$$

where $V^\pi(s)$ is the value (the expected long-term reward) of state $s$ while following policy $\pi$, $\pi(s,a)$ is the probability of taking action $a$ in state $s$, $P^a_{ss'}$ is the transition probability of going to state $s'$ when applying action $a$ given that we are in state $s$, $R^a_{ss'}$ denotes the immediate expected reward, and $0 \le \gamma < 1$ is the discount factor (see Sutton and Barto (1998) for a thorough review). The Bellman equations are either solved iteratively by policy evaluation, or alternatively solved directly (the equations are linear) and commonly interleaved with policy improvement steps (policy iteration).

While the concept of a value function is ubiquitous in reinforcement learning, this is not the case in the control community. Some non-linear model-based control is restricted to the easier minimum-phase systems. Alternatively, longer-term predictions can be achieved by concatenating short-term predictions, an approach made difficult by the fact that uncertainty in predictions typically grows as the time horizon lengthens (precluding approaches based on the certainty-equivalence principle). See Quiñonero-Candela et al. (2003) for a fully probabilistic approach based on Gaussian processes; however, implementing a controller based on this approach requires numerically solving multivariate optimisation problems for every control action. In contrast, having access to a value function makes computation of control actions much easier. Much previous work has involved the use of function approximation techniques to represent the value function. In this paper, we exploit a number of useful properties of Gaussian process models for this purpose. This approach can be naturally applied in discrete-time, continuous-state-space systems. This avoids the tedious discretisation of state spaces often required by other methods, e.g. Moore and Atkeson (1995).
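Because the discrete Bellman equations (1) are linear in the values, they can be solved directly rather than by iterating. A minimal sketch on a hypothetical three-state MDP (all transition probabilities and rewards below are invented for illustration):

```python
import numpy as np

# Hypothetical toy MDP: 3 states, with transition matrix P_pi and reward
# vector r_pi already averaged over the fixed policy's action choices.
P_pi = np.array([[0.9, 0.1, 0.0],
                 [0.0, 0.5, 0.5],
                 [0.2, 0.0, 0.8]])
r_pi = np.array([0.0, 0.0, 1.0])   # expected immediate reward per state
gamma = 0.9                        # discount factor, 0 <= gamma < 1

# Direct solve of the linear Bellman system: (I - gamma * P_pi) v = r_pi
v_direct = np.linalg.solve(np.eye(3) - gamma * P_pi, r_pi)

# Equivalent iterative policy evaluation, for comparison
v_iter = np.zeros(3)
for _ in range(2000):
    v_iter = r_pi + gamma * P_pi @ v_iter

assert np.allclose(v_direct, v_iter, atol=1e-6)
```

The same direct-solve idea is what the paper later exploits in continuous state spaces, where the linearity of the GP representation plays the role of the linear system above.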
In Dietterich and Wang (2002) kernel-based methods (support vector regression) were also applied to learning of the value function, but in discrete state spaces.

In the current paper we use Gaussian process (GP) models for two distinct purposes: first to model the dynamics of the system (actually, we use one GP per dimension of the state space), which we will refer to as the dynamics GP, and secondly the value GP for representing the value function. When computing the values, we explicitly take the uncertainties from the dynamics GP into account, and using the linearity of the GP, we are able to solve directly for the value function, avoiding slow policy evaluation iterations. Experiments on a simple problem illustrate the viability of the method. For these experiments we use a greedy policy wrt. the value function. However, since our representation of the value function is stochastic, we could represent uncertainty about values, enabling a principled attack on the exploration vs. exploitation tradeoff, such as in Bayesian Q-learning as proposed by Dearden et al. (1998). This potential is outlined in the discussion section.

2 Gaussian Processes and Value Functions

In a continuous state space we straightforwardly generalise the Bellman equation (1) by substituting sums with integrals; further, we assume for simplicity of exposition that the policy is deterministic (see Section 4 for a further discussion):

$$V^\pi(\mathbf{x}) = \int \big[ R(\mathbf{x}') + \gamma V^\pi(\mathbf{x}') \big]\, p(\mathbf{x}'|\mathbf{x})\, d\mathbf{x}' \qquad (2)$$
$$\phantom{V^\pi(\mathbf{x})} = \int R(\mathbf{x}')\, p(\mathbf{x}'|\mathbf{x})\, d\mathbf{x}' \;+\; \gamma \int V^\pi(\mathbf{x}')\, p(\mathbf{x}'|\mathbf{x})\, d\mathbf{x}' \qquad (3)$$

This involves two integrals over the distribution of consecutive states $\mathbf{x}'$ visited when following the policy $\pi$. The transition probabilities $p(\mathbf{x}'|\mathbf{x})$ may include two sources of stochasticity: uncertainty in the model of the dynamics and stochasticity in the dynamics itself.

2.1 Gaussian Process Regression Models

In GP models we put a prior directly on functions and condition on observations to make predictions (see Williams and Rasmussen (1996) for details). The noisy targets $\mathbf{y}$ are assumed jointly Gaussian with covariance function $k$:

$$\mathbf{y} \sim \mathcal{N}(\mathbf{0}, K), \qquad K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j) \qquad (4)$$

Throughout the remainder of this paper we use a Gaussian covariance function:

$$k(\mathbf{x}_i, \mathbf{x}_j) = v^2 \exp\!\big( -\tfrac{1}{2} (\mathbf{x}_i - \mathbf{x}_j)^\top \Lambda^{-1} (\mathbf{x}_i - \mathbf{x}_j) \big) + \delta_{ij}\, \sigma_n^2 \qquad (5)$$

where the positive elements of the diagonal matrix $\Lambda$, the signal variance $v^2$ and the noise variance $\sigma_n^2$ are hyperparameters collected in $\theta$. The hyperparameters are fit by maximising the marginal likelihood (see again Williams and Rasmussen (1996)) using conjugate gradients. The predictive distribution for a novel test input $\mathbf{x}_*$ is Gaussian:

$$p\big(f(\mathbf{x}_*)\big) = \mathcal{N}\big( \mathbf{k}(\mathbf{x}_*)^\top K^{-1} \mathbf{y},\; k(\mathbf{x}_*, \mathbf{x}_*) - \mathbf{k}(\mathbf{x}_*)^\top K^{-1} \mathbf{k}(\mathbf{x}_*) \big) \qquad (6)$$

where $\mathbf{k}(\mathbf{x}_*)$ is the vector of covariances between $\mathbf{x}_*$ and the training inputs.

2.2 Model Identification of System Dynamics

Given a set of observations of the form $(\mathbf{x}, a, \mathbf{x}')$, we use a separate Gaussian process model for predicting each coordinate of the system dynamics. The inputs to each model are the state and action pair $(\mathbf{x}, a)$; the output is a (Gaussian) distribution over the corresponding coordinate of the consecutive state, using eq. (6). Combining the predictive models we obtain a multivariate Gaussian distribution over the consecutive state: the transition probabilities $p(\mathbf{x}'|\mathbf{x}, a)$.

2.3 Policy Evaluation

We now turn towards the problem of evaluating $V^\pi(\mathbf{x})$ for a given policy $\pi$ over the continuous state space. In policy evaluation the Bellman equations are used as update rules. In order to apply this approach in the continuous case, we have to solve the two integrals in eq. (3). For simple (e.g. polynomial or Gaussian) reward functions $R$ we can directly compute¹ the first Gaussian integral of eq. (3). Thus, the expected immediate reward from state $\mathbf{x}$, following $\pi$, is
$$\mathbb{E}[R(\mathbf{x}')] = \int R(\mathbf{x}')\, \mathcal{N}\big(\mathbf{x}';\, \boldsymbol{\mu}_{\mathbf{x}'}, \Sigma_{\mathbf{x}'}\big)\, d\mathbf{x}' \qquad (7)$$

in which the mean $\boldsymbol{\mu}_{\mathbf{x}'}$ and covariance $\Sigma_{\mathbf{x}'}$ for the consecutive state are coordinate-wise given by eq. (6) evaluated on the dynamics GP. The second integral of eq. (3) involves an expectation over the value function, which is modeled by the value GP as a function of the states. We need access to the value function at every point in the continuous state space, but we only explicitly represent values at a finite number $m$ of support points $\mathbf{x}_1, \ldots, \mathbf{x}_m$, and let the GP generalise to the entire space. Here we use the mean of the GP to represent the value² (see Section 4 for an elaboration). Thus, we need to average the values over the distribution predicted for $\mathbf{x}'$. For a Gaussian covariance function³ this can be done in closed form as shown by Girard et al. (2002). In detail, the Bellman equation for the value at support point $\mathbf{x}_i$ is:

$$V(\mathbf{x}_i) = \mathbb{E}[R(\mathbf{x}')] + \gamma\, \mathbf{w}_i^\top K^{-1} \mathbf{v}, \quad \text{where} \qquad (8)$$
$$w_{ij} = \mathbb{E}\big[k(\mathbf{x}', \mathbf{x}_j)\big] = v^2 \big| \Lambda^{-1} \Sigma_{\mathbf{x}'} + I \big|^{-1/2} \exp\!\big( -\tfrac{1}{2} (\boldsymbol{\mu}_{\mathbf{x}'} - \mathbf{x}_j)^\top (\Sigma_{\mathbf{x}'} + \Lambda)^{-1} (\boldsymbol{\mu}_{\mathbf{x}'} - \mathbf{x}_j) \big) \qquad (9)$$

where $K$ denotes the covariance matrix of the value GP, $\mathbf{w}_i$ is the $i$'th row of the matrix $W$, and boldface $\mathbf{v}$ is the vector of values at the support points: $\mathbf{v} = (V(\mathbf{x}_1), \ldots, V(\mathbf{x}_m))^\top$. Note that this equation implies a consistency between the value at the support points and the values at all other points. Equation (8) could be used for iterative policy evaluation. Notice however, that eq. (8) is a set of $m$ linear simultaneous equations in $\mathbf{v}$, which we can solve⁴ explicitly:

$$\mathbf{v} = \big( I - \gamma\, W K^{-1} \big)^{-1} \mathbf{r} \qquad (10)$$

where $\mathbf{r}$ is the vector of expected immediate rewards at the support points.

¹For more complex reward functions we may approximate it using e.g. a Taylor expansion.
²Thus, here we are using the GP for noise-free interpolation of the value function, and consequently set its noise parameter to a small positive constant (to avoid numerical problems).
³The covariance functions allowing analytical treatment in this way include Gaussian and polynomial, and mixtures of these.
⁴We conjecture that the matrix $I - \gamma W K^{-1}$ is non-singular under mild conditions, but have not yet devised a formal proof.
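The closed-form solve of eq. (10) can be sketched numerically. Everything below is a hypothetical 1-D stand-in (support points, lengthscale, dynamics predictions, reward), not the paper's mountain-car setup; the smoothed-kernel expectation follows the Gaussian form of eq. (9), which in one dimension is the original kernel with a widened lengthscale:

```python
import numpy as np

# Numerical sketch of eq. (10): v = (I - gamma * W K^{-1})^{-1} r, in 1-D.
m = 9
s = np.linspace(-1.0, 1.0, m)              # support points (hypothetical)
ell2, v2, sn2 = 0.3**2, 1.0, 1e-4          # sq. lengthscale, signal var, jitter

# Value-GP covariance matrix, eq. (5), with a small noise term.
K = v2 * np.exp(-0.5 * (s[:, None] - s[None, :])**2 / ell2) + sn2 * np.eye(m)
Kinv = np.linalg.inv(K)

# Hypothetical dynamics-GP output: each support point maps to a Gaussian
# over the next state with mean mu[i] and variance var[i].
mu = np.clip(s + 0.1, -1.0, 1.0)
var = np.full(m, 0.01)

# Eq. (9): W[i, j] = E[k(x', s_j)] under x' ~ N(mu[i], var[i]).
W = v2 * np.sqrt(ell2 / (ell2 + var[:, None])) * \
    np.exp(-0.5 * (mu[:, None] - s[None, :])**2 / (ell2 + var[:, None]))

r = np.exp(-0.5 * (mu - 0.7)**2 / 0.05)    # hypothetical expected rewards, eq. (7)
gamma = 0.8

# Eq. (10): one linear solve replaces iterative policy evaluation.
v_support = np.linalg.solve(np.eye(m) - gamma * W @ Kinv, r)

# The solution satisfies the Bellman consistency of eq. (8) at every support point.
assert np.allclose(v_support, r + gamma * W @ Kinv @ v_support, atol=1e-6)
```

The final assertion is exactly the consistency the paper describes: the values at the support points agree with the expected discounted values of their successors.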
The computational cost of solving this system is $O(m^3)$, which is no more expensive than doing iterative policy evaluation, and equal to the cost of value GP prediction.

2.4 Policy Improvement

Above we demonstrated how to compute the value function for a given policy $\pi$. Now given a value function we can act greedily, thereby defining an implicit policy:

$$\pi(\mathbf{x}) = \arg\max_a \int \big[ R(\mathbf{x}') + \gamma V(\mathbf{x}') \big]\, p(\mathbf{x}'|\mathbf{x}, a)\, d\mathbf{x}' \qquad (11)$$

giving rise to $m$ one-dimensional optimisation problems (when the possible actions are scalar). As above we can solve the relevant integrals and in addition compute derivatives wrt. the action. Note also that application-specific constraints can often be reformulated as constraints in the above optimisation problem.

2.5 The Policy Iteration Algorithm

We now combine policy evaluation and policy improvement into policy iteration, in which both steps alternate until a stable configuration is reached⁵. Thus given observations of system dynamics and a reward function we can compute a continuous value function and thereby an implicitly defined policy.

Algorithm 1 Policy iteration, batch version

1. Given:
observations of system dynamics of the form $(\mathbf{x}, a, \mathbf{x}')$ for a fixed time interval $\Delta t$, discount factor $\gamma$ and reward function $R$.

2. Model Identification: Model the system dynamics by Gaussian processes for each state coordinate and combine them to obtain a model of $p(\mathbf{x}'|\mathbf{x}, a)$.

3. Initialise Value Function: Choose a set $\{\mathbf{x}_1, \ldots, \mathbf{x}_m\}$ of $m$ support points and initialise $\mathbf{v}$. Fit Gaussian process hyperparameters for representing $V$ using conjugate gradient optimisation of the marginal likelihood and set $\sigma_n^2$ to a small positive constant.

4. Policy Iteration:
repeat
  for all support points $\mathbf{x}_i$ do
    Find action $a_i$ by solving equation (11) subject to problem-specific constraints.
    Compute $p(\mathbf{x}'|\mathbf{x}_i, a_i)$ using the dynamics Gaussian processes.
    Solve equation (7) in order to obtain $r_i$.
    Compute the $i$'th row of $W$ as in equation (9).
  end for
  $\mathbf{v} \leftarrow (I - \gamma W K^{-1})^{-1} \mathbf{r}$
  Update Gaussian process hyperparameters for representing $V$ to fit the new $\mathbf{v}$.
until stabilisation of $\mathbf{v}$

The selection of the support points remains to be determined. When using the algorithm in an online setting, support points could naturally be chosen as the states visited, possibly selecting the ones which conveyed most new information about the system. In the experimental section, for simplicity of exposition we consider only the batch case, and use simply a regular grid of support points. We have assumed for simplicity that the reward function is deterministic and known, but it would not be too difficult to also use a (GP) model for the rewards; any model that allows

⁵Assuming convergence, which we have not proven.

Figure 1: Figure (a) illustrates the mountain car problem. The car is initially standing motionless at $x = -0.5$ and the goal is to bring it up and hold it in the region $0.5 \le x \le 0.7$ such that $-0.1 \le \dot{x} \le 0.1$. The hatched area marks the target region and below it the approximation by a Gaussian is shown (both projected onto the $x$ axis).
Figure (b) shows the position of the car when controlled according to (11) using the approximated value function after 6 policy improvements shown in Figure 3. The car reaches the target region in about five time steps but does not end up exactly at $x = 0.6$ due to uncertainty in the dynamics model. The circles mark the individual time steps.

evaluation of eq. (7) could be used. Similarly the greedy policy has been assumed, but generalisation to stochastic policies would not be difficult.

3 Illustrative Example

For reasons of presentability of the value function, we below consider the well-known mountain car problem "park on the hill", as described by Moore and Atkeson (1995), where the state space is only two-dimensional. The setting depicted in Figure 1(a) consists of a frictionless, point-like, unit-mass car on a hilly landscape described by

$$H(x) = \begin{cases} x^2 + x & \text{for } x < 0 \\ x / \sqrt{1 + 5x^2} & \text{for } x \ge 0 \end{cases} \qquad (12)$$

The state of the system is described by the position $x$ of the car and its speed $\dot{x}$, which are constrained to $-1 \le x \le 1$ and $-2 \le \dot{x} \le 2$ respectively. As action, a horizontal force in the range $-4 \le u \le 4$ can be applied in order to bring the car up into the target region, which is a rectangle in state space such that $0.5 \le x \le 0.7$ and $-0.1 \le \dot{x} \le 0.1$. Note that the admissible range of forces is not sufficient to drive the car up greedily from the initial state $(x, \dot{x}) = (-0.5, 0)$, such that a strategy has to be found which utilises the landscape in order to accelerate up the slope; this gives the problem its non-minimum-phase character. For system identification we draw samples $(x, \dot{x}, u)$
uniformly from their respective admissible regions and simulate time steps of $\Delta t = 0.3$ seconds⁶ forward in time using an ODE solver in order to get the consecutive states $(x', \dot{x}')$. We then use two Gaussian processes to build a model to predict the system behaviour from these examples for the two state variables independently, using covariance functions of type eq. (5).

⁶Note that $\Delta t = 0.3$ seconds seems to be an order of magnitude slower than the time scale usually considered in the literature. Our algorithm works equally well for shorter time steps ($\gamma$ should be increased); for even longer time steps, modeling of the dynamics gets more complicated, and eventually for large enough $\Delta t$ control is no longer possible.

Figure 2: Figures (a-c) show the estimated value function $V$ over $(x, \dot{x})$ for the mountain car example after initialisation (a), after the first iteration over $\mathbf{v}$ (b), and a nearly stabilised value function after 3 iterations (c). See also Figure 3 for the final value function and the corresponding state transition diagram.

Based on the random training examples, the relations can already be approximated to within small root mean squared errors for both state variables (estimated on held-out test samples and considering the mean of the predicted distribution). Having a model of the system dynamics, the other necessary element to provide to the proposed algorithm is a reward function. In the formulation by Moore and Atkeson (1995) the reward is equal to 1 if the car is in the target region and 0 elsewhere. For convenience we approximate this box by a Gaussian centred in the target region, with maximum reward 1, as indicated in Figure 1(a). We now can solve the update equation (10) and also evaluate its gradient with respect to the action. This enables us to efficiently solve the optimization problem eq.
(11) subject to the constraints on $x$, $\dot{x}$ and $u$ described above. States outside the feasible region are assigned zero value and reward. As support points for the value function we simply put a regular grid onto the state space and initialise the value function with the immediate rewards for these states, Figure 2(a). The standard deviation of the noise of the value GP representing $V$ is set to a small positive constant. Following the policy iteration algorithm we estimate the value of all support points following the implicit policy (11) wrt. the initial value function, Figure 2(a). We then evaluate this policy and obtain an updated value function shown in Figure 2(b), where all points which can expect to reach the reward region in one time step gain value. If we iterate this procedure two times we obtain a value function as shown in Figure 2(c), in which the state space is already well organised. After five policy iterations the value function and therefore the implicit policy is stable, Figure 3(a). In Figure 3(b) a dynamics-GP based state-transition diagram is shown, in which each support point is connected to its predicted (mean) consecutive state $\mathbf{x}'$ when following the implicit policy. For some of the support points the model correctly predicts that the car will leave the feasible region, no matter what force is applied, which corresponds to areas with zero value in Figure 3(a). If we control the car from $(x, \dot{x}) = (-0.5, 0)$ according to the found policy, the car gathers momentum by first accelerating left before driving up into the target region, where it is balanced, as illustrated in Figure 1(b). This shows that the random examples of the system dynamics are sufficient for this task. The control policy found is probably very close to the optimum achievable.
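The non-minimum-phase character of the task can be checked directly from the landscape. A sketch, assuming Moore and Atkeson's hill of eq. (12), the force range $|u| \le 4$, and a simple horizontal projection of gravity as a rough estimate of the force needed to hold the car on the slope (an approximation for illustration, not the paper's full ODE):

```python
import numpy as np

# Landscape of eq. (12), "park on the hill":
# H(x) = x^2 + x for x < 0, and x / sqrt(1 + 5 x^2) for x >= 0.
def H(x):
    return np.where(x < 0, x**2 + x, x / np.sqrt(1 + 5 * x**2))

def dH(x):
    # Slope of the landscape on each branch.
    return np.where(x < 0, 2 * x + 1, (1 + 5 * x**2) ** -1.5)

# The two branches join smoothly at x = 0: same height and same slope.
assert np.isclose(H(np.array([0.0]))[0], 0.0)
assert np.isclose(dH(np.array([-1e-9]))[0], dH(np.array([1e-9]))[0], atol=1e-6)

# Near x = 0 the slope is 1, so the horizontal gravity component pulling a
# unit-mass car back downhill is roughly g * H' / (1 + H'^2) (bead-on-a-wire
# approximation, our assumption). With g = 9.81 this exceeds the admissible
# force |u| <= 4, so a greedy push from rest cannot climb the hill and the
# car must first swing left to gather momentum.
g = 9.81
slope = dH(np.array([0.0]))[0]
gravity_pull = g * slope / (1 + slope**2)
assert gravity_pull > 4.0
```

This is why a value function over the whole state space, rather than a short-horizon reward, is needed to solve the task.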
Figure 3: Figure (a) shows the estimated value function after 6 policy improvements (subsequent to Figures 2(a-c)), where $\mathbf{v}$ has stabilised. Figure (b) is the corresponding state-transition diagram illustrating the implicit policy on the support points. The black lines connect each support point $\mathbf{x}$ and the respective $\mathbf{x}'$ estimated by the dynamics GP when following the implicit greedy policy with respect to (a). The thick line marks the trajectory of the car for the movement described in Figure 1(b), based on the physics of the system. Note that the temporary violation of the speed constraint remains unnoticed using discrete time intervals of $\Delta t = 0.3$ s; to avoid this, the constraints could be enforced continuously in the training set.

4 Perspectives and Conclusion

Commonly the value function is defined to be the expected (discounted) future reward. Conceptually however, there is more to values than their expectations. The distribution over future reward could have small or large variance and identical means, two fairly different situations that are treated identically when only the value expectation is considered. It is clear however, that a principled approach to the exploitation vs. exploration tradeoff requires a more faithful representation of value, as was recently proposed in Bayesian Q-learning (Dearden et al. 1998); see also Attias (2003). For example, the large-variance case is more attractive for exploration than the small-variance case. The GP representation of value functions proposed here lends itself naturally to this more elaborate concept of value. The GP model inherently maintains a full distribution over values, although in the present paper we have only used its expectation. Implementation of this would require a second set of Bellman-like equations for the second moment of the values at the support points.
These equations would simply express consistency of uncertainty: the uncertainty of a value should be consistent with the uncertainty when following the policy. The values at the support points would be (Gaussian) distributions with individual variances, which is readily handled by using a full diagonal noise term in place of $\delta_{ij}\sigma_n^2$ in eq. (5). The individual second moments can be computed in closed form (see derivations in Quiñonero-Candela et al. (2003)). However, iteration would be necessary to solve the combined system, as there would be no linearity corresponding to eq. (10) for the second moments. In the near future we will be exploring these possibilities.

Whereas only a batch version of the algorithm has been described here, it would obviously be interesting to explore its capabilities in an online setting, starting from scratch. This will require that we abandon the use of a greedy policy, to avoid the risk of getting stuck in a local minimum caused by an incomplete model of the dynamics. Instead, a stochastic policy should be used, which should not cause further computational problems as long as it is represented by a Gaussian (or perhaps more appropriately a mixture of Gaussians). A good policy should actively explore regions where we may gain a lot of information, requiring the notion of the value of information (Dearden et al. 1998). Since the information gain would come from a better dynamics GP model, it may not be an easy task in practice to optimise jointly for information and value.

We have introduced Gaussian process models into continuous-state reinforcement learning tasks, to model the state dynamics and the value function. We believe that the good generalisation properties and the simplicity of manipulation of GP models make them ideal candidates for these tasks. In a simple demonstration, our parameter-free algorithm converges rapidly to a good approximation of the value function. Only the batch version of the algorithm was demonstrated.
We believe that the full probabilistic nature of the transition model should facilitate the early stages of an on-line learning process. Also, online addition of new observations to a GP model can be done very efficiently. Only a simple problem was used here, and it will be interesting to see how the algorithm performs on more realistic tasks. Direct implementation of GP models is suitable for up to a few thousand support points; in recent years a number of fast approximate GP algorithms have been developed, which could be used in more complex settings. We are convinced that recent developments in powerful kernel-based probabilistic models for supervised learning, such as GPs, will integrate well into reinforcement learning and control. Both the modeling and analytic properties make them excellent candidates for reinforcement learning tasks. We speculate that their fully probabilistic nature offers promising prospects for some fundamental problems of reinforcement learning.

Acknowledgements

Both authors were supported by the German Research Council (DFG).

References

Attias, H. (2003). Planning by probabilistic inference. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics.

Dearden, R., N. Friedman, and S. J. Russell (1998). Bayesian Q-learning. In Fifteenth National Conference on Artificial Intelligence (AAAI).

Dietterich, T. G. and X. Wang (2002). Batch value function approximation via support vectors. In Advances in Neural Information Processing Systems 14, Cambridge, MA, pp. 1491–1498. MIT Press.

Girard, A., C. E. Rasmussen, J. Quiñonero-Candela, and R. Murray-Smith (2002). Multiple-step ahead prediction for non linear dynamic systems - a Gaussian process treatment with propagation of the uncertainty. In Advances in Neural Information Processing Systems 15.

Moore, A. W. and C. G. Atkeson (1995). The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces. Machine Learning 21, 199–233.
Quiñonero-Candela, J., A. Girard, J. Larsen, and C. E. Rasmussen (2003). Propagation of uncertainty in Bayesian kernel models - application to multiple-step ahead forecasting. In Proceedings of the 2003 IEEE Conference on Acoustics, Speech, and Signal Processing.

Sutton, R. S. and A. G. Barto (1998). Reinforcement Learning. Cambridge, Massachusetts: MIT Press.

Williams, C. K. I. and C. E. Rasmussen (1996). Gaussian processes for regression. In Advances in Neural Information Processing Systems 8.
The doubly balanced network of spiking neurons: a memory model with high capacity

Yuval Aviel*
Interdisciplinary Center for Neural Computation, Hebrew University, Jerusalem, Israel 91904
aviel@cc.huji.ac.il

David Horn
School of Physics, Tel Aviv University, Tel Aviv, Israel 69978
horn@post.tau.ac.il

Moshe Abeles
Interdisciplinary Center for Neural Computation, Hebrew University, Jerusalem, Israel 91904
abeles@vms.huji.ac.il

Abstract

A balanced network leads to contradictory constraints on memory models, as exemplified in previous work on accommodation of synfire chains. Here we show that these constraints can be overcome by introducing a 'shadow' inhibitory pattern for each excitatory pattern of the model. This is interpreted as a double-balance principle, whereby there exists both global balance between average excitatory and inhibitory currents and local balance between the currents carrying coherent activity at any given time frame. This principle can be applied to networks with Hebbian cell assemblies, leading to a high capacity of the associative memory. The number of possible patterns is limited by a combinatorial constraint that turns out to be P = 0.06N within the specific model that we employ. This limit is reached by the Hebbian cell assembly network. To the best of our knowledge this is the first time that such high memory capacities have been demonstrated in the asynchronous state of models of spiking neurons.

1 Introduction

Numerous studies analyze the different phases of unstructured networks of spiking neurons [1, 2]. These networks with random connectivity possess a phase of asynchronous activity, the asynchronous state (AS), which is the most interesting one from the biological perspective, since it is similar to physiological data. Unstructured networks, however, do not hold information in their connectivity matrix, and therefore do not store memories.
Binary networks with ordered connectivity matrices, or structured networks, and their ability to store and retrieve memories have been extensively studied in the past [3-8]. Applicability of these results to biologically plausible neuronal models is questionable. In particular, models of spiking neurons are known to have modes of synchronous global oscillations. Avoiding such modes, and staying in an AS, is a major constraint on networks of spiking neurons that is absent in most binary neural networks. As we will show below, it is this constraint that imposes a limit on capacity in our model. Existing associative memory models of spiking neurons have not strived for maximal pattern capacity [3, 4, 8]. Here, using an integrate-and-fire model, we embed structured synaptic connections in an otherwise unstructured network and study the capacity limit of the system. The system is therefore macroscopically unstructured, but microscopically structured.

The unstructured network model is based on Brunel's [1] balanced network of integrate-and-fire neurons. In his model, the network possesses different phases, one of which is the AS. We replace his unstructured excitatory connectivity by a semi-structured one, including a superposition of either synfire chains or Hebbian cell assemblies. The existence of a stable AS is a fundamental prerequisite of the system. There are two reasons for that: First, physiological measurements of cortical tissues reveal irregular neuronal activity and asynchronous population activity. These findings match the properties of the AS. Second, in terms of information content, the entropy of the system is highest when the firing probability is uniformly distributed, as in an AS. In general, embedding one or two patterns will not destabilize the AS. Increasing the number of embedded patterns, however, will eventually destabilize the AS, leading to global oscillations.
In previous work [9], we have demonstrated that the cause of AS instability is correlations between neurons that result from the presence of structure in the network. The patterns, be they Hebbian cell assemblies (HCA) or pools occurring in synfire chains (SFC), have an important characteristic: neurons that are members of the same pattern (or pool) share a large portion of their inputs. This common input correlates neuronal activities both when a pattern is activated and when both neurons are influenced by random activity. If too many patterns are embedded in the network, too many neurons become correlated due to common inputs, leading to globally synchronized deviations from mean activity.

A qualitative understanding of this state of affairs is provided by a simple model of a threshold-linear pair of neurons that receive $n$ excitatory common, and correlated, inputs, and $K - n$ excitatory, as well as $K$ inhibitory, non-common uncorrelated inputs. Thinking of these neurons as belonging to a pattern or a pool within a network, we can obtain an interesting self-consistent result by assuming the correlation of the pair of neurons to be also the correlation in their common correlated input (as is likely to be the case in a network loaded with HCA or SFC). We find then [9] that there exists a critical pattern size, $n_c$, below which correlations decay but above which correlations are amplified. Furthermore, the following scaling was found to hold:

$$n_c = r_c \sqrt{K} \qquad (1)$$

Implications of this model for the whole network are that: (i) $r_c$ is independent of $N$, the size of the network, (ii) below $n_c$ the AS is stable, and (iii) above $n_c$ the AS is unstable. Using extensive computer simulations we were able [9] to validate all these predictions. In addition, keeping $n < n_c$, we were able to observe traveling synfire waves on top of global asynchronous activity.
The pattern size $n$ is also limited from below, $n > n_{min}$, by the requirement that $n$ excitatory post-synaptic potentials (PSPs), on average, drive a neuron across its threshold. Since $N > K$ and typically $N \gg K$, together with Eq. (1) it follows that $N \gg (n_{min}/r_c)^2$. Hence $r_c$ and $n_{min}$ set the lower bound of the network's size, above which it is possible to embed a reasonable number of patterns in the network without losing the AS. In this paper we propose a solution that enables small $n_{min}$ and large $r$ values, which in turn enables embedding a large number of patterns in much smaller networks. This is made possible by the doubly-balanced construction to be outlined below.

2 The double-balance principle

Counteracting the excitatory correlations with inhibitory ones is the principle that will allow us to solve the problem. Since we deal with balanced networks, in which the mean excitatory input is balanced by an inhibitory one, we note that this principle imposes a second type of balancing condition; hence we refer to it as the double-balance principle.

In the following, we apply this principle by introducing synaptic connections between any excitatory pattern and its randomly chosen inhibitory pattern. These inhibitory patterns, which we call shadow patterns, are activated after the excitatory patterns fire, but have no special in-pattern connectivity or structured projections onto other patterns. The premise is that correlations evolving in the excitatory patterns will elicit correlated inhibitory activity, thus balancing the network's average correlation level. The size of the shadow pattern has to be small enough that the global network activity will not be quenched, yet large enough that the excitatory correlation will be counteracted. A balanced network that is embedded with patterns and their shadow patterns will be referred to as a doubly balanced network (DBN), to be contrasted with the singly balanced network (SBN) where shadow patterns are absent.
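The lower bound $N \gg (n_{min}/r_c)^2$ can be made concrete with stand-in numbers ($n_{min}$ and $r_c$ below are illustrative choices, not measured values):

```python
# Illustration of the network-size bound N >> (n_min / r_c)^2.
# Hypothetical values: n_min = 50 excitatory PSPs needed to cross threshold,
# critical ratio r_c = 0.2. Both are stand-ins for this sketch.
n_min, r_c = 50, 0.2

# n <= n_c = r_c * sqrt(K) together with n >= n_min requires
# K >= (n_min / r_c)^2 ...
K_min = (n_min / r_c) ** 2
assert K_min == 62500.0

# ... and sparse connectivity (here K = 0.1 * N, i.e. epsilon = 0.1)
# pushes the required network size an order of magnitude higher still:
N_min = 10 * K_min
assert N_min == 625000.0
```

Even with these modest stand-in parameters an SBN would need hundreds of thousands of neurons, which is exactly the problem the doubly balanced construction is meant to remove.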
3 Application of the double-balance principle

3.1 The Network

We model neuronal activity with the integrate-and-fire model [10]. All neurons have the same parameters: $\tau = 10$ ms, $\tau_{ref} = 2.5$ ms, $C = 250$ pF. PSPs are modeled by a delta function with fixed delay. The number of synapses on a neuron is fixed and set to $K_E$ excitatory synapses from the local network, $K_E$ excitatory synapses from external sources and $K_I$ inhibitory synapses from the local network. See Aviel et al. [9] for details. All synapses of each group are given fixed values. One pre-synaptic neuron is allowed to make more than one connection to the same post-synaptic neuron. The network possesses $N_E$ excitatory neurons and $N_I \equiv \gamma N_E$ inhibitory neurons. Connectivity is sparse: $\epsilon = K_E/N_E = K_I/N_I$ (we use $\epsilon = 0.1$). A Poisson process with rate $v_{ext} = 10$ Hz models the external source. If a neuron of population $y$ innervates a neuron of population $x$, its synaptic strength $J_{xy}$ is defined as $J_{xE} \equiv J_0/\sqrt{K_E}$, $J_{xI} \equiv -g J_0/\sqrt{K_I}$, with $J_0 = 10$ and $g = 5$. Note that $J_{xI} = -(g/\sqrt{\gamma}) J_{xE}$, hence $g\sqrt{\gamma}$ controls the balance between the two populations.

Within an HCA pattern the neurons have high connection probability with one another. Here this is achieved by requiring $L$ of the synapses of a neuron in the excitatory pattern to originate from within the pattern. Similarly, a neuron in the inhibitory shadow pattern dedicates $L$ of its synapses to the associated excitatory pattern. In a SFC, each neuron in an excitatory pool is fed by $L$ neurons from the previous pool. This forms a feed-forward connectivity. In addition, when shadow pools are present, each neuron in a shadow pool is fed by $L$ neurons from its associated excitatory pool. In both cases $L = C_L \sqrt{K_E}$, with $C_L = 2.5$. The size of the excitatory patterns (i.e. the number of neurons participating in a pattern) or pools, $n_E$, is also chosen to be proportional to $\sqrt{K_E}$ (see Aviel et al. 2003 [9]): $n_E \equiv C_n \sqrt{K_E}$, where $C_n$ varies.
This is a suitable choice because of the behavior of the critical n_c of Eq. (1): C_n must be large enough for the meaningful memory activity (of the HCA or SFC) to overcome synaptic noise. The size of a shadow pattern is defined as n_I ≡ d̃ n_E. This leads to the factor d, representing the relative strength of the inhibitory and excitatory currents, due to a pattern or pool, affecting a neuron that is connected to both:

d ≡ −(n_I J_xI)/(n_E J_xE) = (g/√γ)(n_I/n_E).   (2)

Thus it fixes n_I = (√γ/g) d n_E. In the simulations reported below d varied between 1 and 3. Wiring the network is done in two stages: first all excitatory patterns are wired, and then random connections are added, complying with the fixed number of synapses. A volley of w spikes, normally distributed over time with a width of 1 ms, is used to ignite a memory pattern. In the case of a SFC, the first pool is ignited, and under the right conditions the volley propagates along the chain without fading away and without destabilizing the AS. 3.2 Results First we show that the AS remains stable when embedding HCAs in a small DBN, whereas global oscillations take place if embedding is done without shadow pools. Figure 1 clearly displays the sustained activity of an HCA in the DBN. The same principle also enables embedding of SFCs in a small network. This is to be contrasted with the conclusions drawn in Aviel et al. [9], where it was shown that otherwise very large networks are necessary to reach this goal. Figure 1: HCAs are embedded in a balanced network without (left) and with (right) shadow patterns. P=300 HCAs of size n_E=194 excitatory neurons were embedded in a network of N_E=15,000 excitatory neurons. The eleventh pattern is externally ignited at time t=100 ms. A raster plot of 200 ms is displayed. Without shadow patterns the network exhibits global oscillations, but with shadow patterns it exhibits only minute oscillations, enabling the activity of the ignited pattern to be sustained.
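The parameter scalings above can be collected into a short script; a Python sketch in the paper's notation (γ is not given numerically in this excerpt, so γ = 0.25 below is a placeholder assumption, and the function name is ours):

```python
import math

def network_params(N_E=15000, gamma=0.25, eps=0.1, J0=10.0, g=5.0,
                   C_L=2.5, C_n=3.5, d=1.0):
    """Collect the scaling relations of Section 3.1 for one parameter set."""
    N_I = gamma * N_E                       # inhibitory population size
    K_E = eps * N_E                         # excitatory synapses per neuron
    K_I = eps * N_I                         # inhibitory synapses per neuron
    J_xE = J0 / math.sqrt(K_E)              # excitatory synaptic strength
    J_xI = -g * J0 / math.sqrt(K_I)         # inhibitory synaptic strength
    L = C_L * math.sqrt(K_E)                # in-pattern synapses per neuron
    n_E = C_n * math.sqrt(K_E)              # excitatory pattern size
    n_I = (math.sqrt(gamma) / g) * d * n_E  # shadow-pattern size, Eq. (2)
    return dict(N_I=N_I, K_E=K_E, K_I=K_I, J_xE=J_xE, J_xI=J_xI,
                L=L, n_E=n_E, n_I=n_I)

p = network_params()
# sanity check of the balance relation J_xI = -(g / sqrt(gamma)) * J_xE
assert abs(p["J_xI"] + (5.0 / math.sqrt(0.25)) * p["J_xE"]) < 1e-9
```

The assertion verifies that the two definitions of the inhibitory strength agree, i.e. that g/√γ indeed sets the excitation-inhibition balance.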
The size of the shadow patterns is set according to Eq. (2) with d=1. Neurons that participate in more than one HCA may appear more than once on the raster plot, whose y-axis is ordered according to HCAs and represents every second neuron in each pattern. Figure 2: SFCs embedded in a balanced network without (left) and with (right) shadow patterns. The first pool is externally ignited at time t=100 ms; d=0.5. The rest of the parameters are as in Figure 1. Here again, without shadow pools the network exhibits global oscillations, but with shadow pools it has only minute oscillations, enabling stable propagation of the synfire wave. 3.3 Maximum capacity In this section we show that, within our DBN, it is the fixed number of synapses (rather than dynamical constraints) that dictates the maximal number of patterns or pools P that may be loaded onto the network. Let us start by noting that a neuron of population x (E or I) can participate in at most m ≡ K_E/L patterns; hence m N_x sets an upper bound on the number of neurons that participate in all patterns: P n_x ≤ m N_x. Next, defining α_x ≡ P n_x/N_x, we find that

α_x ≤ m = K_E/L = √K_E / C_L.   (3)

To leading order in N_E this turns into

P = (C_n C_L D_x)^{-1} N_E − O(√N_E),   (4)

where D_x ≡ α_x/α_E equals d/(g√γ) if x=I, or 1 for x=E. Thus we conclude that synaptic combinatorial considerations lead to a maximal number of patterns P. If D_I < 1, including the case D_I = 0 of the SBN, the excitatory neurons determine the limit to be P = (C_n C_L)^{-1} N_E. If, as is the case in our DBN, D_I > 1, then α_E < α_I and the inhibitory neurons set the maximum value to P = (C_n C_L D_I)^{-1} N_E. For example, setting C_n=3.5, C_L=2.4, g=3 and d=3 in Eq. (4), we get P=0.06 N_E. These parameters are used in Figure 3, where the capacity of a DBN is compared to that of an SBN for different network sizes.
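A numerical sketch of this combinatorial bound, assuming the leading-order form P ≈ N_E/(C_n C_L D_x) with D_I = d/(g√γ) (our reading of Eq. (4); γ is not quoted in this excerpt, so γ = 0.25 is a placeholder that reproduces the stated P = 0.06 N_E):

```python
import math

def capacity_bound(N_E, C_n=3.5, C_L=2.4, g=3.0, d=3.0, gamma=0.25):
    """Leading-order combinatorial capacity (assumed form of Eq. (4))."""
    D_I = d / (g * math.sqrt(gamma))  # relative inhibitory load; D_E = 1
    # the binding population is the one with the larger D_x
    D = max(1.0, D_I)
    return N_E / (C_n * C_L * D)

# with the quoted parameters this comes out near 0.06 * N_E
print(capacity_bound(6000.0))
```

Setting d = 0 recovers the SBN case, for which the bound is set by the excitatory population alone, P = N_E/(C_n C_L).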
The maximal load is defined by the presence of global oscillations strong enough to prohibit sustained activity of patterns. The DBN reaches the combinatorial limit, whereas the SBN's capacity does not increase with N and obviously does not reach its combinatorial limit. Figure 3: A balanced network maximally loaded with HCAs. Left: a raster plot of a maximally loaded DBN, P=408, N_E=6,000. At time t=450 ms, the seventh pattern is ignited for a duration of 10 ms, leading to termination of another pattern's activity (upper stripe) and to sustained activity of the ignited pattern (lower stripe). Right: P(N_E) as inferred from simulations of a SBN ("o") and of a DBN ("*"). The DBN realizes the combinatorial limit (dashed line), whereas the SBN does not realize its limit (solid line). From this comparison it is clear that the DBN is superior to the SBN in terms of network capacity. The simulations displayed in Figure 3 show that in the DBN the combinatorial P is indeed realized, and the capacity of this DBN grows like 0.06 N_E. In the SBN, dynamic interference prevents reaching the combinatorial limit. We have tried, in many ways, to increase the capacity of the SBN. Recently, we have discovered [11] that only if the external rates are appropriately scaled can SBN capacity be linear in N_E, with a pre-factor α almost as high as that of a DBN. Although under these conditions SBNs can have large capacity, we emphasize that DBNs possess a clear advantage: their structure guarantees high capacity under more general conditions. 4 Discussion In this paper we study memory patterns embedded in a balanced network of spiking neurons. In particular, we focus on the maximal capacity of Hebbian cell assemblies.
Requiring stability of the asynchronous state of the network, which serves as the background for memory activity, and further assuming that the neuronal spiking process is noise-driven, we show that naively applying Hebb's architecture leads to global oscillations. We propose the double-balance principle as the solution to this problem. The double balance is obtained by introducing shadow patterns, i.e. inhibitory patterns that are associated with the excitatory ones and fed by them, but have no specific connectivity other than that. The maximal load of our system is determined by the available synaptic resources, and is proportional to the size of the excitatory population, N_E. For the parameters used here it turns out to be P=0.06 N_E. This limit was estimated by a combinatorial argument of synaptic availability, and shown to be realized by simulations. Synfire chains were also studied. DBNs allow for their embedding in relatively small networks, as shown in Figure 2. Previous studies have shown that their embedding in balanced networks without shadow pools requires network sizes larger by an order of magnitude [9]. The capacity P of a SFC is defined, in analogy with the HCA case, as the number of pools embedded in the network. In this case we cannot realize the theoretical limit in simulations. We believe that the feed-forward structure of the SFC, which is absent in the HCA, introduces further dynamical interference. The feed-forward structure can amplify correlations and firing rates more efficiently than the feedback structure within patterns of the HCA. Thus a network embedded with SFCs may be more sensitive to spontaneously evolved correlations than a network embedded with HCAs. It is interesting to note that the addition of shadow patterns has an analogy in the Hopfield model [5], where neurons in a pattern have both excitatory and inhibitory couplings with the rest of the network.
One may claim that the architecture proposed here recovers the same effect via the shadow patterns. Accommodating the Hopfield model in networks of spiking neurons was tried before [3, 4], without specific emphasis on the question of capacity. In Gerstner and van Hemmen [4] the synaptic matrix is constructed in the same way as in the Hopfield model, i.e. neurons can have both excitatory and inhibitory synapses. In [3, 8] the synaptic bonds of the Hopfield model were replaced by strong excitatory connections within a pattern, and weak excitatory connections between neurons in a pattern and those outside it. While the different types of connection are of different magnitude, they are all excitatory. In contrast, here excitation exists within a pattern as well as outside it, but the pattern has a well-defined inhibitory effect on the rest of the network, mediated by the shadow pattern. The resulting inhibitory correlated currents cancel the excitatory correlated input. Since the firing process in a BN is driven by fluctuations, it seems that negating excitatory correlations by inhibitory ones is more akin to Hopfield's construction in a network of two populations. Hertz [12] has argued that a capacity limit obtained in a network of integrate-and-fire neurons should be multiplied by τ/2 to compare it with a network of binary neurons. Hence the α = 0.12 obtained here is equivalent to α = 0.6 in a binary model. It is not surprising that the latter number is higher than 0.14, the limit of the original Hopfield model, since our model is sparse, as in, e.g., the Tsodyks-Feigelman model [7], where larger capacities were achieved. Finally, let us point out again that whereas only DBNs can reach the combinatorial capacity limit under the conditions specified in this paper, we have recently discovered [11] that an SBN can also reach this limit if additional scaling conditions are imposed on the input.
The largest capacities that we obtained under these conditions were of order 0.1. Acknowledgments This work was supported in part by grants from GIF. References
1. Brunel, N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci., 2000. 8(3): 183-208.
2. van Vreeswijk, C. and H. Sompolinsky. Chaotic balanced state in a model of cortical circuits. Neural Comput., 1998. 10(6): 1321-71.
3. Amit, D.J. and N. Brunel. Dynamics of a recurrent network of spiking neurons before and following learning. Network, 1997. 8: 373.
4. Gerstner, W. and L. van Hemmen. Associative memory in a network of 'spiking' neurons. Network, 1992. 3: 139-164.
5. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. PNAS, 1982. 79: 2554-58.
6. Willshaw, D.J., O.P. Buneman, and H.C. Longuet-Higgins. Non-holographic associative memory. Nature (London), 1969. 222: 960-962.
7. Tsodyks, M.V. and M.V. Feigelman. The enhanced storage capacity in neural networks with low activity level. Europhys. Lett., 1988. 6(2): 101.
8. Brunel, N. and X.-J. Wang. Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J. Comput. Neurosci., 2001. 11: 63-85.
9. Aviel, Y., et al. On embedding synfire chains in a balanced network. Neural Computation, 2003. 15(6): 1321-1340.
10. Tuckwell, H.C. Introduction to Theoretical Neurobiology. 1988, Cambridge: Cambridge University Press.
11. Aviel, Y., D. Horn, and M. Abeles. Memory Capacity of Balanced Networks. 2003: Submitted.
12. Hertz, J.A. Modeling synfire networks, in Neuronal Information Processing - From Biological Data to Modelling and Application, G. Burdet, P. Combe, and O. Parodi, Editors. 1999.
Hierarchical Topic Models and the Nested Chinese Restaurant Process David M. Blei (blei@cs.berkeley.edu) and Michael I. Jordan (jordan@cs.berkeley.edu), University of California, Berkeley, Berkeley, CA 94720; Thomas L. Griffiths (gruffydd@mit.edu) and Joshua B. Tenenbaum (jbt@mit.edu), Massachusetts Institute of Technology, Cambridge, MA 02139. Abstract We address the problem of learning topic hierarchies from data. The model selection problem in this domain is daunting—which of the large collection of possible trees to use? We take a Bayesian approach, generating an appropriate prior via a distribution on partitions that we refer to as the nested Chinese restaurant process. This nonparametric prior allows arbitrarily large branching factors and readily accommodates growing data collections. We build a hierarchical topic model by combining this prior with a likelihood that is based on a hierarchical variant of latent Dirichlet allocation. We illustrate our approach on simulated data and with an application to the modeling of NIPS abstracts. 1 Introduction Complex probabilistic models are increasingly prevalent in domains such as bioinformatics, information retrieval, and vision. These domains create fundamental modeling challenges due to their open-ended nature—data sets often grow over time, and as they grow they bring new entities and new structures to the fore. Current statistical modeling tools often seem too rigid in this regard; in particular, classical model selection techniques based on hypothesis testing are poorly matched to problems in which data can continue to accrue and unbounded sets of often incommensurate structures must be considered at each step. An important instance of such modeling challenges is provided by the problem of learning a topic hierarchy from data. Given a collection of "documents," each of which contains a set of "words," we wish to discover common usage patterns or "topics" in the documents, and to organize these topics into a hierarchy.
(Note that while we use the terminology of document modeling throughout this paper, the methods that we describe are general.) In this paper, we develop efficient statistical methods for constructing such a hierarchy which allow it to grow and change as the data accumulate. We approach this model selection problem by specifying a generative probabilistic model for hierarchical structures and taking a Bayesian perspective on the problem of learning these structures from data. Thus our hierarchies are random variables; moreover, these random variables are specified procedurally, according to an algorithm that constructs the hierarchy as data are made available. The probabilistic object that underlies this approach is a distribution on partitions of integers known as the Chinese restaurant process [1]. We show how to extend the Chinese restaurant process to a hierarchy of partitions, and show how to use this new process as a representation of prior and posterior distributions for topic hierarchies. There are several possible approaches to the modeling of topic hierarchies. In our approach, each node in the hierarchy is associated with a topic, where a topic is a distribution across words. A document is generated by choosing a path from the root to a leaf, repeatedly sampling topics along that path, and sampling the words from the selected topics. Thus the organization of topics into a hierarchy aims to capture the breadth of usage of topics across the corpus, reflecting underlying syntactic and semantic notions of generality and specificity. This approach differs from models of topic hierarchies which are built on the premise that the distributions associated with parents and children are similar [2]. We assume no such constraint—for example, the root node may place all of its probability mass on function words, with none of its descendants placing any probability mass on function words. 
Our model more closely resembles the hierarchical topic model considered in [3], though that work does not address the model selection problem which is our primary focus. 2 Chinese restaurant processes We begin with a brief description of the Chinese restaurant process and subsequently show how this process can be extended to hierarchies. 2.1 The Chinese restaurant process The Chinese restaurant process (CRP) is a distribution on partitions of integers obtained by imagining a process by which M customers sit down in a Chinese restaurant with an infinite number of tables.1 The basic process is specified as follows. The first customer sits at the first table. The mth subsequent customer sits at a table drawn from the following distribution:

p(occupied table i | previous customers) = m_i / (γ + m − 1)
p(next unoccupied table | previous customers) = γ / (γ + m − 1)   (1)

where m_i is the number of previous customers at table i, and γ is a parameter. After M customers sit down, the seating plan gives a partition of M items. This distribution gives the same partition structure as draws from a Dirichlet process [4]. However, the CRP also allows several variations on the basic rule in Eq. (1), including a data-dependent choice of γ and a more general functional dependence on the current partition [5]. This flexibility will prove useful in our setting. The CRP has been used to represent uncertainty over the number of components in a mixture model. In a species sampling mixture [6], each table in the Chinese restaurant is associated with a draw from p(β | η), where β is a mixture component parameter. Each data point is generated by choosing a table i from Eq. (1) and then sampling a value from the distribution parameterized by β_i (the parameter associated with that table). Given a data set, the posterior under this model has two components. First, it is a distribution over seating plans; the number of mixture components is determined by the number of tables which the data occupy.
Second, given a seating plan, the particular data which are sitting at each table induce a distribution on the associated parameter β for that table. The posterior can be estimated using Markov chain Monte Carlo [7]. Applications to various kinds of mixture models have begun to appear in recent years; examples include Gaussian mixture models [8], hidden Markov models [9] and mixtures of experts [10]. (Footnote 1: The terminology was inspired by the Chinese restaurants in San Francisco which seem to have an infinite seating capacity. It was coined by Jim Pitman and Lester Dubins in the early eighties [1].) 2.2 Extending the CRP to hierarchies The CRP is amenable to mixture modeling because we can establish a one-to-one relationship between tables and mixture components and a one-to-many relationship between mixture components and data. In the models that we will consider, however, each data point is associated with multiple mixture components which lie along a path in a hierarchy. We develop a hierarchical version of the CRP to use in specifying a prior for such models. A nested Chinese restaurant process can be defined by imagining the following scenario. Suppose that there are an infinite number of infinite-table Chinese restaurants in a city. One restaurant is determined to be the root restaurant and on each of its infinite tables is a card with the name of another restaurant. On each of the tables in those restaurants are cards that refer to other restaurants, and this structure repeats infinitely. Each restaurant is referred to exactly once; thus, the restaurants in the city are organized into an infinitely-branched tree. Note that each restaurant is associated with a level in this tree (e.g., the root restaurant is at level 1 and the restaurants it refers to are at level 2). A tourist arrives in the city for a culinary vacation. On the first evening, he enters the root Chinese restaurant and selects a table using Eq. (1).
On the second evening, he goes to the restaurant identified on the first night’s table and chooses another table, again from Eq. (1). He repeats this process for L days. At the end of the trip, the tourist has sat at L restaurants which constitute a path from the root to a restaurant at the Lth level in the infinite tree described above. After M tourists take L-day vacations, the collection of paths describe a particular L-level subtree of the infinite tree (see Figure 1a for an example of such a tree). This prior can be used to model topic hierarchies. Just as a standard CRP can be used to express uncertainty about a possible number of components, the nested CRP can be used to express uncertainty about possible L-level trees. 3 A hierarchical topic model Let us consider a data set composed of a corpus of documents. Each document is a collection of words, where a word is an item in a vocabulary. Our basic assumption is that the words in a document are generated according to a mixture model where the mixing proportions are random and document-specific. Consider a multinomial variable z, and an associated set of distributions over words p(w | z, β), where β is a parameter. These topics (one distribution for each possible value of z) are the basic mixture components in our model. The document-specific mixing proportions associated with these components are denoted by a vector θ. Temporarily assuming K possible topics in the corpus, an assumption that we will soon relax, z thus ranges over K possible values and θ is a K-dimensional vector. Our document-specific mixture distribution is p(w | θ) = Σ_{i=1}^{K} θ_i p(w | z = i, β_i), which is a random distribution since θ is random. We now specify the following two-level generative probabilistic process for generating a document: (1) choose a K-vector θ of topic proportions from a distribution p(θ | α), where α is a corpus-level parameter; (2) repeatedly sample words from the mixture distribution p(w | θ) for the chosen value of θ.
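The seating rule of Eq. (1) and its nested extension are straightforward to simulate; a minimal sketch (the function names are ours, not the paper's):

```python
import random

def crp_table(counts, gamma):
    """Draw a table index according to Eq. (1), given current occupancies."""
    m = sum(counts)
    r = random.uniform(0.0, m + gamma)
    for i, m_i in enumerate(counts):
        r -= m_i
        if r < 0.0:
            return i        # an already-occupied table
    return len(counts)      # a new, previously unoccupied table

def ncrp_path(tree, L, gamma):
    """One tourist's path through the infinite tree of restaurants.

    `tree` maps a path prefix (a tuple of table choices) to that
    restaurant's table occupancy counts; the root restaurant is level 1.
    """
    path = []
    for _ in range(L - 1):
        counts = tree.setdefault(tuple(path), [])
        i = crp_table(counts, gamma)
        if i == len(counts):
            counts.append(0)
        counts[i] += 1
        path.append(i)
    return tuple(path)

random.seed(0)
tree = {}
paths = [ncrp_path(tree, L=3, gamma=1.0) for _ in range(10)]
assert all(len(p) == 2 for p in paths)  # L-1 table choices per L-level path
```

The collected paths of many tourists populate `tree`, which plays the role of the sampled L-level subtree in Figure 1a.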
When the distribution p(θ | α) is chosen to be a Dirichlet distribution, we obtain the latent Dirichlet allocation model (LDA) [11]. LDA is thus a two-level generative process in which documents are associated with topic proportions, and the corpus is modeled as a Dirichlet distribution on these topic proportions. We now describe an extension of this model in which the topics lie in a hierarchy. For the moment, suppose we are given an L-level tree and each node is associated with a topic. A document is generated as follows: (1) choose a path from the root of the tree to a leaf; (2) draw a vector of topic proportions θ from an L-dimensional Dirichlet; (3) generate the words in the document from a mixture of the topics along the path from root to leaf, with mixing proportions θ. This model can be viewed as a fully generative version of the cluster abstraction model [3]. Finally, we use the nested CRP to relax the assumption of a fixed tree structure. As we have seen, the nested CRP can be used to place a prior on possible trees. We also place a prior on the topics β_i, each of which is associated with a restaurant in the infinite tree (in particular, we assume a symmetric Dirichlet with hyperparameter η). A document is drawn by first choosing an L-level path through the restaurants and then drawing the words from the L topics which are associated with the restaurants along that path. Note that all documents share the topic associated with the root restaurant.

1. Let c_1 be the root restaurant.
2. For each level ℓ ∈ {2, . . . , L}: (a) Draw a table from restaurant c_{ℓ−1} using Eq. (1). Set c_ℓ to be the restaurant referred to by that table.
3. Draw an L-dimensional topic proportion vector θ from Dir(α).
4. For each word n ∈ {1, . . . , N}: (a) Draw z ∈ {1, . . . , L} from Mult(θ). (b) Draw w_n from the topic associated with restaurant c_z.

This model, hierarchical LDA (hLDA), is illustrated in Figure 1b.
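Steps 3 and 4 of the generative process above can be sketched directly in code; the path c_1, . . . , c_L is taken as given, and all sizes and names are illustrative:

```python
import random

def dirichlet(alpha, k):
    """Draw from a k-dimensional symmetric Dirichlet via gamma variates."""
    g = [random.gammavariate(alpha, 1.0) for _ in range(k)]
    s = sum(g)
    return [x / s for x in g]

def generate_document(path_topics, alpha, n_words):
    """hLDA document: mix the L topics along one root-to-leaf path.

    `path_topics` is a list of L word distributions, standing in for the
    betas of the restaurants on the chosen path.
    """
    L = len(path_topics)
    theta = dirichlet(alpha, L)                 # step 3: theta ~ Dir(alpha)
    words = []
    for _ in range(n_words):                    # step 4
        z = random.choices(range(L), weights=theta)[0]        # 4(a): level
        topic = path_topics[z]
        w = random.choices(range(len(topic)), weights=topic)[0]  # 4(b): word
        words.append(w)
    return words

random.seed(1)
eta, W = 0.1, 25
topics = [dirichlet(eta, W) for _ in range(3)]  # L = 3 topics on one path
doc = generate_document(topics, alpha=1.0, n_words=50)
assert len(doc) == 50 and all(0 <= w < W for w in doc)
```

Here the topics themselves are drawn from the symmetric Dirichlet with hyperparameter η, mirroring the prior on the β's.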
The node labeled T refers to a collection of an infinite number of L-level paths drawn from a nested CRP. Given T, the c_{m,ℓ} variables are deterministic—simply look up the ℓth level of the mth path in the infinite collection of paths. However, not having observed T, the distribution of c_{m,ℓ} will be defined by the nested Chinese restaurant process, conditioned on all the c_{q,ℓ} for q < m. Now suppose we are given a corpus of M documents, w_1, . . . , w_M. The posterior on the c’s is essentially transferred (via the deterministic relationship) to a posterior on the first M paths in T. Consider a new document w_{M+1}. Its posterior path will depend, through the unobserved T, on the posterior paths of all the documents in the original corpus. Subsequent new documents will also depend on the original corpus and any new documents which were observed before them. Note that, through Eq. (1), any new document can choose a previously unvisited restaurant at any level of the tree. I.e., even if we have a peaked posterior on T which has essentially selected a particular tree, a new document can change that hierarchy if its words provide justification for such a change. In another variation of this model, we can consider a process that flattens the nested CRP into a standard CRP, but retains the idea that a tourist eats L meals. That is, the tourist eats L times in a single restaurant under the constraint that he does not choose the same table twice. Though the vacation is less interesting, this model provides an interesting prior. In particular, it can be used as a prior for a flat LDA model in which each document can use at most L topics from the potentially infinite total set of topics. We examine such a model in Section 5 to compare CRP methods with selection based on Bayes factors. 4 Approximate inference by Gibbs sampling In this section, we describe a Gibbs sampling algorithm for sampling from the posterior nested CRP and corresponding topics in the hLDA model.
The Gibbs sampler provides a method for simultaneously exploring the parameter space (the particular topics of the corpus) and the model space (L-level trees). The variables needed by the sampling algorithm are: w_{m,n}, the nth word in the mth document (the only observed variables in the model); c_{m,ℓ}, the restaurant corresponding to the ℓth topic in document m; and z_{m,n}, the assignment of the nth word in the mth document to one of the L available topics. All other variables in the model—θ and β—are integrated out. The Gibbs sampler thus assesses the values of z_{m,n} and c_{m,ℓ}. Figure 1: (a) The paths of four tourists through the infinite tree of Chinese restaurants (L = 3). The solid lines connect each restaurant to the restaurants referred to by its tables. The collected paths of the four tourists describe a particular subtree of the underlying infinite tree. This illustrates a sample from the state space of the posterior nested CRP of Figure 1b for four documents. (b) The graphical model representation of hierarchical LDA with a nested CRP prior. We have separated the nested Chinese restaurant process from the topics. Each of the infinite β’s corresponds to one of the restaurants. Conceptually, we divide the Gibbs sampler into two parts. First, given the current state of the CRP, we sample the z_{m,n} variables of the underlying LDA model following the algorithm developed in [12], which we do not reproduce here. Second, given the values of the LDA hidden variables, we sample the c_{m,ℓ} variables which are associated with the CRP prior. The conditional distribution for c_m, the L topics associated with document m, is: p(c_m | w, c_{−m}, z) ∝ p(w_m | c, w_{−m}, z) p(c_m | c_{−m}), where w_{−m} and c_{−m} denote the w and c variables for all documents other than m.
This expression is an instance of Bayes’ rule with p(w_m | c, w_{−m}, z) as the likelihood of the data given a particular choice of c_m and p(c_m | c_{−m}) as the prior on c_m implied by the nested CRP. The likelihood is obtained by integrating over the parameters β, which gives:

p(w_m | c, w_{−m}, z) = ∏_{ℓ=1}^{L} [ Γ(n^{(·)}_{c_{m,ℓ},−m} + Wη) / ∏_w Γ(n^{(w)}_{c_{m,ℓ},−m} + η) ] [ ∏_w Γ(n^{(w)}_{c_{m,ℓ},−m} + n^{(w)}_{c_{m,ℓ},m} + η) / Γ(n^{(·)}_{c_{m,ℓ},−m} + n^{(·)}_{c_{m,ℓ},m} + Wη) ],

where n^{(w)}_{c_{m,ℓ},−m} is the number of instances of word w that have been assigned to the topic indexed by c_{m,ℓ}, not including those in the current document, W is the total vocabulary size, and Γ(·) denotes the standard gamma function. When c contains a previously unvisited restaurant, n^{(w)}_{c_{m,ℓ},−m} is zero. Note that the c_m must be drawn as a block. The set of possible values for c_m corresponds to the union of the set of existing paths through the tree, equal to the number of leaves, with the set of possible novel paths, equal to the number of internal nodes. This set can be enumerated and scored using Eq. (1) and the definition of a nested CRP in Section 2.2. Figure 2: (a) Six sample documents from a 100-document corpus using the three-level bars hierarchy described in Section 5 and α skewed toward higher levels. Each document has 1000 words from a 25-term vocabulary. (b) The correct hierarchy found by the Gibbs sampler on this corpus. Figure 3: Results of estimating hierarchies on simulated data. Structure refers to a three-level hierarchy: the first integer is the number of branches from the root and is followed by the number of children of each branch. Leaf error refers to how many leaves were incorrect in the resulting tree (0 is exact); Other subsumes all other errors.

Structure       Leaf error 0   1     2     Other
3 (7 6 5)       70%            14%   4%    12%
4 (6 6 5 5)     48%            30%   2%    20%
4 (6 6 6 4)     52%            36%   0%    12%
5 (7 6 5 5 4)   30%            40%   16%   14%
5 (6 5 5 5 4)   50%            22%   16%   12%
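Each level-ℓ factor of this likelihood is a ratio of gamma functions and is best computed in log space with `lgamma`; a sketch with illustrative variable names (a single level; the full likelihood is the sum of such terms over ℓ):

```python
from math import lgamma

def log_level_likelihood(n_old, n_new, eta):
    """Log of one level's factor of the Dirichlet-multinomial likelihood.

    n_old[w]: count of word w at this topic, excluding document m;
    n_new[w]: count of word w at this topic, within document m;
    eta:      symmetric Dirichlet hyperparameter on topics.
    """
    W = len(n_old)
    N_old, N_new = sum(n_old), sum(n_new)
    ll = lgamma(N_old + W * eta) - lgamma(N_old + N_new + W * eta)
    for w in range(W):
        ll += lgamma(n_old[w] + n_new[w] + eta) - lgamma(n_old[w] + eta)
    return ll

# sanity check: with no new words the factor is exactly 1 (log is 0)
assert abs(log_level_likelihood([3, 0, 1], [0, 0, 0], eta=0.1)) < 1e-12
```

Summing this quantity over the L levels of a candidate path, and adding the log of the nested-CRP prior term, scores one block draw of c_m.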
5 Examples and empirical results In this section we describe a number of experiments using the models described above. In all experiments, we let the sampler burn in for 10000 iterations and subsequently took samples 100 iterations apart for another 1000 iterations. Local maxima can be a problem in the hLDA model. To avoid them, we randomly restart the sampler 25 times and take the trajectory with the highest average posterior likelihood. We illustrate that the nested CRP is feasible for learning text hierarchies in hLDA by using a contrived corpus on a small vocabulary. We generated a corpus of 100 1000-word documents from a three-level hierarchy with a vocabulary of 25 terms. In this corpus, topics on the vocabulary can be viewed as bars on a 5 × 5 grid. The root topic places its probability mass on the bottom bar. On the second level, one topic is identified with the leftmost bar, while the rightmost bar represents a second topic. The leftmost topic has two subtopics while the rightmost topic has one subtopic. Figure 2a illustrates six documents sampled from this model. Figure 2b illustrates the recovered hierarchy using the Gibbs sampling algorithm described in Section 4. In estimating hierarchy structures, hypothesis testing approaches to model selection are impractical since they do not provide a viable method of searching over the large space of trees. To compare the CRP method on LDA models with a standard approach, we implemented the simpler, flat model described at the end of Section 3. We generated 210 corpora of 100 1000-word documents each from an LDA model with K ∈ {5, . . . , 25}, L = 5, a vocabulary size of 100, and randomly generated mixture components from a symmetric Dirichlet (η = 0.1). For comparison with the CRP prior, we used the approximate Bayes factors method of model selection [13], where one chooses the model that maximizes p(data | K)p(K) for various K and an appropriate prior.
With the LDA model, the Bayes factors method is much slower than the CRP, as it involves multiple runs of a Gibbs sampler, each with speed comparable to a single run of the CRP sampler. Furthermore, with the Bayes factors method one must choose an appropriate range of K. With the CRP prior, the only free parameter is γ (we used γ = 1.0). As shown in Figure 4, the CRP prior was more effective than Bayes factors in this setting. We should note that both the CRP and Bayes factors are somewhat sensitive to the choice of η, the hyperparameter of the prior on the topics. However, in simulated data, this hyperparameter was known and thus we can provide a fair comparison. In a similar experiment, we generated 50 corpora each from five different hierarchies using an hLDA model and the same symmetric Dirichlet prior on topics. Each corpus has 100 documents of 1000 words from a vocabulary of 100 terms. Figure 4: (Left) The average dimension found by a CRP prior plotted against the true dimension on simulated data (the true value is jiggled to see overlapping points). For each dimension, we generated ten corpora with a vocabulary size of 100. Each corpus contains 100 documents of 1000 words. (Right) Results of model selection with Bayes factors. Figure 3 reports the results of sampling from the resulting posterior on trees with the Gibbs sampler from Section 4. In all cases, we recover the correct structure more than any other, and we usually recover a structure within one leaf of the correct structure. In all experiments, no predicted structure deviated by more than three nodes from the correct structure. Lastly, to demonstrate its applicability to real data, we applied the hLDA model to a text data set.
Using 1717 NIPS abstracts from 1987–1999 [14] with 208,896 words and a vocabulary of 1600 terms, we estimated a three-level hierarchy as illustrated in Figure 5. The model has nicely captured the function words without using an auxiliary list, a nuisance that most practical applications of language models require. At the next level, it separated the words pertaining to neuroscience abstracts from those of machine learning abstracts. Finally, it delineated several important subtopics within the two fields. These results suggest that hLDA can be an effective tool in text applications.
6 Summary
We have presented the nested Chinese restaurant process, a distribution on hierarchical partitions. We have shown that this process can be used as a nonparametric prior for a hierarchical extension to the latent Dirichlet allocation model. The result is a flexible, general model for topic hierarchies that naturally accommodates growing data collections. We have presented a Gibbs sampling procedure for this model which provides a simple method for simultaneously exploring the spaces of trees and topics. Our model has two natural extensions. First, we have restricted ourselves to hierarchies of fixed depth L for simplicity, but it is straightforward to consider a model in which L can vary from document to document. Each document is still a mixture of topics along a path in a hierarchy, but different documents can express paths of different lengths as they represent varying levels of specialization. Second, although in our current model a document is associated with a single path, it is also natural to consider models in which documents are allowed to mix over paths. This would be a natural way to take advantage of syntactic structures such as paragraphs and sentences within a document.
Acknowledgements
We wish to acknowledge support from the DARPA CALO program, Microsoft Corporation, and NTT Communication Science Laboratories.
Figure 5: A topic hierarchy estimated from 1717 abstracts from NIPS01 through NIPS12. Each node contains the top eight words from its corresponding topic distribution. [Example nodes: root — "the, of, a, to, and, in, is, for"; second level — "neurons, visual, cells, cortex, synaptic, motion, response, processing" and "algorithm, learning, training, method, we, new, problem, on"; leaves include "chip, analog, vlsi, synapse, weight, digital, cmos, design" and "control, reinforcement, learning, policy, state, actions, value, optimal".]
References
[1] D. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII—1983, pages 1–198. Springer, Berlin, 1985.
[2] E. Segal, D. Koller, and D. Ormoneit. Probabilistic abstraction hierarchies. In Advances in Neural Information Processing Systems 14.
[3] T. Hofmann. The cluster-abstraction model: Unsupervised learning of topic hierarchies from text data. In IJCAI, pages 682–687, 1999.
[4] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1:209–230, 1973.
[5] J. Pitman. Combinatorial Stochastic Processes. Notes for St. Flour Summer School, 2002.
[6] H. Ishwaran and L. James. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13:1211–1235, 2003.
[7] R. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265, June 2000.
[8] M. West, P. Muller, and M. Escobar. Hierarchical priors and mixture models, with application in regression and density estimation. In Aspects of Uncertainty. John Wiley.
[9] M. Beal, Z. Ghahramani, and C. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems 14.
[10] C. Rasmussen and Z. Ghahramani.
Infinite mixtures of Gaussian process experts. In Advances in Neural Information Processing Systems 14.
[11] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, January 2003.
[12] T. Griffiths and M. Steyvers. A probabilistic approach to semantic representation. In Proceedings of the 24th Annual Conference of the Cognitive Science Society, 2002.
[13] R. Kass and A. Raftery. Bayes factors. Journal of the American Statistical Association, 90(430):773–795, 1995.
[14] S. Roweis. NIPS abstracts, 1987–1999. http://www.cs.toronto.edu/~roweis/data.html.
|
2003
|
96
|
2,503
|
A Sampled Texture Prior for Image Super-Resolution
Lyndsey C. Pickup, Stephen J. Roberts and Andrew Zisserman
Robotics Research Group, Department of Engineering Science, University of Oxford, Parks Road, Oxford, OX1 3PJ
{elle,sjrob,az}@robots.ox.ac.uk
Abstract
Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several low-resolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques.
1 Introduction
The aim of super-resolution is to take a set of one or more low-resolution input images of a scene, and estimate a higher-resolution image. If there are several low-resolution images available with sub-pixel displacements, then the high-frequency information of the super-resolution image can be increased. In the limiting case when the input set is just a single image, it is impossible to recover any high-frequency information faithfully, but much success has been achieved by training models to learn patchwise correspondences between low-resolution and possible high-resolution information, and stitching patches together to form the super-resolution image [1]. A second approach uses an unsupervised technique where latent variables are introduced to model the mean intensity of groups of surrounding pixels [2].
In cases where the high-frequency detail is recovered from image displacements, the models tend to assume that each low-resolution image is a subsample from a true high-resolution image or continuous scene. The generation of the low-resolution inputs can then be expressed as a degradation of the super-resolution image, usually by applying an image homography, convolving with blurring functions, and subsampling [3, 4, 5, 6, 7, 8, 9]. Unfortunately, the ML (maximum likelihood) super-resolution images obtained by reversing the generative process above tend to be poorly conditioned and susceptible to high-frequency noise. Most approaches to multiple-image super-resolution use a MAP (maximum a posteriori) approach to regularize the solution using a prior distribution over the high-resolution space. Gaussian process priors [4], Gaussian MRFs (Markov Random Fields) and Huber MRFs [3] have all been proposed as suitable candidates. In this paper, we consider an image prior based upon samples taken from other images, inspired by the use of non-parametric sampling methods in texture synthesis [10]. This texture synthesis method outperformed many other complex parametric models for texture representation, and produces perceptually correct-looking areas of texture given a sample texture seed. It works by finding texture patches similar to the area around a pixel of interest, and estimating the intensity of the central pixel from a histogram built up from similar samples. We turn this approach around to produce an image prior by finding areas in our sample set that are similar to patches in our super-resolution image, and evaluating how well they match, building up a p.d.f. over the high-resolution image. In short, given a set of low-resolution images and example images of textures in the same class at the higher resolution, our objective is to construct a super-resolution image using a prior that is sampled from the example images.
Our method differs from the previous super-resolution methods of [1, 7] in two ways: first, we use our training images to estimate a distribution rather than learn a discrete set of low-resolution to high-resolution matches from which we must build up our output image; second, since we are using more than one image, we naturally fold in the extra high-frequency information available from the low-resolution image displacements. We develop our model in section 2, and expand upon some of the implementation details in section 3, as well as introducing the Huber prior model against which most of the comparisons in this paper are made. In section 4 we display results obtained with our method on some simple images, and in section 5 we discuss these results and future improvements.
2 The model
In this section we develop the mathematical basis for our model. The main contribution of this work is in the construction of the prior over the super-resolution image, but first we will consider the generative model for the low-resolution image generation, which closely follows the approaches of [3] and [4]. We have K low-resolution images y^(k), which we assume are generated from the super-resolution image x by

$$\mathbf{y}^{(k)} = W^{(k)}\mathbf{x} + \boldsymbol{\epsilon}_G^{(k)} \qquad (1)$$

where ε_G is a vector of i.i.d. Gaussians, ε_G ∼ N(0, β_G^{-1}), and β_G is the noise precision. The construction of W involves mapping each low-resolution pixel into the space of the super-resolution image, and performing a convolution with a point spread function. The constructions given in [3] and [4] are very similar, though the former uses bilinear interpolation to achieve a more accurate approximation. We begin by assuming that the image registration parameters may be determined a priori, so each input image has a corresponding set of registration parameters θ^(k).
We may now construct the likelihood function

$$p(\mathbf{y}^{(k)}\mid \mathbf{x},\,\boldsymbol{\theta}^{(k)}) = \left(\frac{\beta_G}{2\pi}\right)^{M/2} \exp\left[-\frac{\beta_G}{2}\,\lVert \mathbf{y}^{(k)} - W^{(k)}\mathbf{x}\rVert^2\right] \qquad (2)$$

where each input image is assumed to have M pixels (and the super-resolution image N pixels). The ML solution for x can be found simply by maximizing equation 2 with respect to x, which is equivalent to minimizing the negative log likelihood

$$-\log p(\{\mathbf{y}^{(k)}\}\mid \mathbf{x},\{\boldsymbol{\theta}^{(k)}\}) \propto \sum_{k=1}^{K} \lVert \mathbf{y}^{(k)} - W^{(k)}\mathbf{x}\rVert^2, \qquad (3)$$

though super-resolved images recovered in this way tend to be dominated by a great deal of high-frequency noise. To address this problem, a prior over the super-resolution image is often used. In [4], the authors restricted themselves to Gaussian process priors, which made their estimation of the registration parameters θ tractable, but encouraged smoothness across x without any special treatment to allow for edges. The Huber prior was used successfully in [3] to penalize image gradients while being less harsh on large image discontinuities than a Gaussian prior. Details of the Huber prior are given in section 3. If we assume a uniform prior over the input images, the posterior distribution over x is of the form

$$p(\mathbf{x}\mid\{\mathbf{y}^{(k)},\boldsymbol{\theta}^{(k)}\}) \propto p(\mathbf{x})\prod_{k=1}^{K} p(\mathbf{y}^{(k)}\mid\mathbf{x},\boldsymbol{\theta}^{(k)}). \qquad (4)$$

To build our expression for p(x), we adopt the philosophy of [10], and sample from other example images rather than developing a parametric model. A similar philosophy was used in [11] for image-based rendering. Given a small image patch around any particular pixel, we can learn a distribution for the central pixel's intensity value by examining the values at the centres of similar patches from other images. Each pixel x_i has a neighbourhood region R(x_i) consisting of the pixels around it, but not including x_i itself. For each R(x_i), we find the closest neighbourhood patch in the set of sampled patches, and find the central pixel associated with this nearest neighbour, L_{R(x_i)}.
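The poor conditioning of the ML solution (Eq. 3) is easy to reproduce on a toy 1-D analogue of the problem; our stand-in W applies a 3-tap blur at a sub-pixel shift and then decimates by 2 (the paper builds W from a homography and point-spread function, so this is only a sketch):

```python
import numpy as np

N, M, K = 40, 20, 2                 # super-res size, low-res size, image count
rng = np.random.default_rng(0)

def make_W(shift):
    """Toy 1-D degradation matrix: 3-tap blur centred at a shifted
    position, followed by decimation by a factor of 2."""
    W = np.zeros((M, N))
    for i in range(M):
        c = 2 * i + shift
        for j, w in zip((c - 1, c, c + 1), (0.25, 0.5, 0.25)):
            if 0 <= j < N:
                W[i, j] = w
    return W

x_true = np.sin(np.linspace(0, 4 * np.pi, N))      # stand-in "scene"
Ws = [make_W(k) for k in range(K)]
ys = [W @ x_true + rng.normal(0, 0.02, M) for W in Ws]

# Stack the K systems and solve the least-squares (ML) problem; the
# blur suppresses high frequencies, so the pseudo-inverse amplifies
# whatever noise lives there.
A = np.vstack(Ws)
b = np.concatenate(ys)
x_ml, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The MAP estimators discussed next differ only in adding a prior term to this least-squares objective.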
The intensity of our original pixel is then assumed to be Gaussian distributed with mean equal to the intensity of this central pixel, and with some precision β_T:

$$x_i \sim \mathcal{N}(L_{R(x_i)},\,\beta_T^{-1}) \qquad (5)$$

leading us to a prior of the form

$$p(\mathbf{x}) = \left(\frac{\beta_T}{2\pi}\right)^{N/2} \exp\left[-\frac{\beta_T}{2}\,\lVert \mathbf{x} - L_{R(\mathbf{x})}\rVert^2\right]. \qquad (6)$$

Inserting this prior into equation 4, the posterior over x, and taking the negative log, we have

$$-\log p(\mathbf{x}\mid\{\mathbf{y}^{(k)},\boldsymbol{\theta}^{(k)}\}) \propto \beta\lVert \mathbf{x} - L_{R(\mathbf{x})}\rVert^2 + \sum_{k=1}^{K} \lVert \mathbf{y}^{(k)} - W^{(k)}\mathbf{x}\rVert^2 + c, \qquad (7)$$

where the right-hand side has been scaled to leave a single unknown ratio β between the data error term and the prior term, and includes an arbitrary constant c. Our super-resolution image is then just arg min_x(L), where

$$\mathcal{L} = \beta\lVert \mathbf{x} - L_{R(\mathbf{x})}\rVert^2 + \sum_{k=1}^{K} \lVert \mathbf{y}^{(k)} - W^{(k)}\mathbf{x}\rVert^2. \qquad (8)$$

3 Implementation details
We optimize the objective function of equation 8 using scaled conjugate gradients (SCG) to obtain an approximation to our super-resolution image. This requires an expression for the gradient of the function with respect to x. For speed, we approximate this by

$$\frac{d\mathcal{L}}{d\mathbf{x}} = 2\beta\left(\mathbf{x} - L_{R(\mathbf{x})}\right) - 2\sum_{k=1}^{K} W^{(k)\top}\left(\mathbf{y}^{(k)} - W^{(k)}\mathbf{x}\right), \qquad (9)$$

which assumes that small perturbations in the neighbours of x will not change the value returned by L_{R(x)}. This is obviously not necessarily the case, but leads to a more efficient algorithm. The same k-nearest-neighbour variation introduced in [10] could be adopted to smooth this response. Our image patch regions R(x_i) are square windows centred on x_i, and pixels near the edge of the image are supported using the average image of [3] extending beyond the edge of the super-resolution image. To compute the nearest region in the example images, patches are normalized to sum to unity, and centre-weighted as in [10] by a 2-dimensional Gaussian. The width of the image patches used, and of the Gaussian weights, depends very much upon the scales of the textures present in the image. Our image intensities were in the range [0, 1], and all the work so far has been with grey-scale images.
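The lookup L_{R(x)} at the heart of the prior can be sketched as a brute-force nearest-neighbour search; for brevity we omit the patch normalisation and Gaussian centre-weighting described above:

```python
import numpy as np

def build_library(sample, w):
    """Extract every (2w+1)x(2w+1) patch from the example image;
    return each patch's neighbourhood ring and its centre pixel."""
    H, W = sample.shape
    rings, centres = [], []
    for i in range(w, H - w):
        for j in range(w, W - w):
            p = sample[i - w:i + w + 1, j - w:j + w + 1].ravel()
            centres.append(p[len(p) // 2])
            rings.append(np.delete(p, len(p) // 2))  # exclude the centre
    return np.array(rings), np.array(centres)

def LR(x, rings, centres, w):
    """For each interior pixel of x, return the centre pixel of the
    closest library patch (Euclidean distance over the neighbourhood)."""
    out = x.copy()
    H, W = x.shape
    for i in range(w, H - w):
        for j in range(w, W - w):
            p = x[i - w:i + w + 1, j - w:j + w + 1].ravel()
            ring = np.delete(p, len(p) // 2)
            d = ((rings - ring) ** 2).sum(axis=1)
            out[i, j] = centres[np.argmin(d)]
    return out
```

The prior term β‖x − L_{R(x)}‖² of Eq. 8 follows directly from the output; in practice the exhaustive search would be replaced by a k-d tree or an approximate nearest-neighbour index.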
Most of our results with this sample-based prior are compared to super-resolution images obtained using the Huber prior used in [3]. Other edge-preserving functions are discussed in [12], though the Huber function performed better than these as a prior in this case. The Huber potential function is given by

$$\rho(x) = \begin{cases} x^2, & \text{if } |x| \le \alpha \\ 2\alpha|x| - \alpha^2, & \text{otherwise.} \end{cases} \qquad (10)$$

If G is a matrix which pre-multiplies x to give a vector of first-order approximations to the magnitude of the image gradient in the horizontal, vertical, and two diagonal directions, then the Huber prior we use is of the form

$$p(\mathbf{x}) = \frac{1}{Z}\exp\left[-\gamma \sum_{i=1}^{4N} \rho((G\mathbf{x})_i)\right] \qquad (11)$$

for some prior strength γ, where Z is the partition function and Gx is the 4N × 1 column vector of approximate derivatives of x in the four directions mentioned above. Plugging this into the posterior distribution of equation 4 leads to a Huber MAP image x_H which minimizes the negative log probability

$$\mathcal{L}_H = \beta \sum_{i=1}^{4N} \rho((G\mathbf{x})_i) + \sum_{k=1}^{K} \lVert \mathbf{y}^{(k)} - W^{(k)}\mathbf{x}\rVert^2, \qquad (12)$$

where again the r.h.s. has been scaled so that β is the single unknown ratio parameter. We also optimize this by SCG, using the full analytic expression for dL_H/dx.
4 Preliminary results
To test the performance of our texture-based prior, and compare it with that of the Huber prior, we produced sets of input images by running the generative model of equation 1 in the forward direction, introducing sub-pixel shifts in the x- and y-directions, and a small rotation about the viewing axis. We added varying amounts of Gaussian noise (2/256, 6/256 and 12/256 grey levels) and took varying numbers of these images (2, 5, 10) to produce nine separate sets of low-resolution inputs from each of our initial "ground-truth" high-resolution images. Figure 1 shows three 100 × 100 pixel ground truth images, each accompanied by corresponding 40 × 40 pixel low-resolution images generated from the ground truth images at half the resolution, with 6/256 levels of noise.
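The Huber potential of Eq. 10 translates directly into code; it is continuous in both value and slope at the crossover |x| = α:

```python
import numpy as np

def huber(x, alpha):
    """Huber potential (Eq. 10): quadratic for |x| <= alpha and
    linear beyond, penalising gradients without over-punishing
    genuine discontinuities."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= alpha,
                    x ** 2,
                    2 * alpha * np.abs(x) - alpha ** 2)
```

Applied elementwise to (Gx) and summed, this gives the prior term of Eq. 12.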
Our aim was to reconstruct the central 50 × 50 pixel section of the original ground truth image. Figure 2 shows the example images from which our texture sample patches were taken¹ – note that these do not overlap with the sections used to generate the low-resolution images.
Figure 1: Left to right: ground truth text, ground truth brick, ground truth beads, low-res text, low-res brick and low-res beads.
Figure 2: Left: Text sample (150 × 200 pixels). Centre: Brick sample (200 × 200 pixels). Right: Beads sample (60 × 60 pixels).
Figure 3 shows the difference in super-resolution image quality that can be obtained using the sample-based prior over the Huber prior using identical input sets as described above. For each Huber super-resolution image, we ran a set of reconstructions, varying the Huber parameter α and the prior strength parameter β. The image shown for each input-number/noise-level pair is the one which gave the minimum RMS error when compared to the ground-truth image; these are very close to the "best" images chosen from the same sets by a human subject. The images shown for the sample-based prior are again the best (in the sense of having minimal RMS error) of several runs per image. We varied the size of the sample patches from 5 to 13 pixels in edge length – computational cost meant that larger patches were not considered. Compared to the Huber images, we tried relatively few different patch size and β-value combinations for our sample-based prior; again, this was due to our method taking longer to execute than the Huber method. Consequently, the Huber parameters are more likely to lie close to their own optimal values than our sample-based prior parameters are. We also present images recovered using a "wrong" texture. We generated ten low-resolution images from a picture of a leaf, and used texture samples from a small black-and-white spiral in our reconstruction (Figure 4).
A selection of results is shown in Figure 5, where we varied the β parameter governing the prior's contribution to the output image. Using a low value gives an image not dissimilar to the ML solution; using a significantly higher value makes the output follow the form of the prior much more closely, and here this means that the grey values get lost as the evidence for them from the data term is swamped by the black-and-white pattern of the prior.
¹Text grabbed from Greg Egan's novella Oceanic, published online at the author's website. Brick image from the Brodatz texture set. Beads image from http://textures.forrest.cz/.
Figure 3: Recovering the super-resolution images at a zoom factor of 2, using the texture-based prior (left column of plots) and the Huber MRF prior (right column of plots). The text and brick datasets contained 2, 6 and 12 grey levels of noise, while the beads dataset used 2, 12 and 32 grey levels. Each image shown is the best of several attempts with varying prior strengths, Huber parameter (for the Huber MRF prior images) and patch neighbourhood sizes (for the texture-based prior images). [Each panel is indexed by noise level (grey levels) against number of input images (2, 5, 10).]
Figure 4: The original 120 × 120 high-resolution image (left), and the 80 × 80 pixel "wrong" texture sample image (right).
Figure 5: Four 120 × 120 super-resolution images are shown on the lower row, reconstructed using different values of the prior strength parameter β: 0.01, 0.04, 0.16, 0.64, from left to right.
5 Discussion and further considerations
The images of Figure 3 show that our prior offers a qualitative improvement over the generic prior, especially when few input images are available. Quantitatively, our method gives an RMS error of approximately 25 grey levels from only 2 input images with 2 grey levels of additive Gaussian noise on the text input images, whereas the best Huber prior super-resolution image for that image set and noise level uses all 10 available input images, and still has an RMS error score of almost 30 grey levels. Figure 6 plots the RMS errors from the Huber and sample-based priors against each other. In all cases, the sample-based method fares better, with the difference most notable in the text example. In general, larger patch sizes (11 × 11 pixels) give smaller errors for the noisy inputs, while small patches (5 × 5) are better for the less noisy images. Computational costs meant we limited the patch size to no more than 13 × 13, and terminated the SCG optimization algorithm after approximately 20 iterations. In addition to improving the computational complexity of our algorithm implementation, we can extend this work in several directions. Since in general the textures for the prior will not be invariant to rotation and scaling, consideration of the registration of the input images will be necessary. The optimal patch size will be a function of the image textures, so learning this as a parameter of an extended model, in a similar way to how [4] learns the point-spread function for a set of input images, is another direction of interest.
Figure 6: Comparison of RMS errors in reconstructing the text, brick and bead images using the Huber and sample-based priors (axes: Huber RMS against texture-based RMS, with an equal-error line).
References
[1] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Example-based super-resolution.
IEEE Computer Graphics and Applications, 22(2):56–65, March/April 2002.
[2] A. J. Storkey. Dynamic structure super-resolution. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 1295–1302. MIT Press, Cambridge, MA, 2003.
[3] D. P. Capel. Image Mosaicing and Super-resolution. PhD thesis, University of Oxford, 2001.
[4] M. E. Tipping and C. M. Bishop. Bayesian image super-resolution. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 1279–1286. MIT Press, Cambridge, MA, 2003.
[5] M. Irani and S. Peleg. Improving resolution by image registration. CVGIP: Graphical Models and Image Processing, 53:231–239, 1991.
[6] M. Irani and S. Peleg. Motion analysis for image enhancement: resolution, occlusion, and transparency. Journal of Visual Communication and Image Representation, 4:324–335, 1993.
[7] S. Baker and T. Kanade. Limits on super-resolution and how to break them. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1167–1183, 2002.
[8] R. R. Schultz and R. L. Stevenson. Extraction of high-resolution frames from video sequences. IEEE Transactions on Image Processing, 5(6):996–1011, June 1996.
[9] P. Cheeseman, B. Kanefsky, R. Kraft, J. Stutz, and B. Hanson. Super-resolved surface reconstruction from multiple images. In Glenn R. Heidbreder, editor, Maximum Entropy and Bayesian Methods, pages 293–308. Kluwer Academic Publishers, Dordrecht, the Netherlands, 1996.
[10] A. A. Efros and T. K. Leung. Texture synthesis by non-parametric sampling. In IEEE International Conference on Computer Vision, pages 1033–1038, Corfu, Greece, September 1999.
[11] A. Fitzgibbon, Y. Wexler, and A. Zisserman. Image-based rendering using image-based priors. In Proceedings of the International Conference on Computer Vision, October 2003.
[12] M. J. Black, G. Sapiro, D. Marimont, and D. Heeger. Robust anisotropic diffusion. IEEE Transactions on Image Processing, 7(3):421–432, 1998.
|
2003
|
97
|
2,504
|
Analytical solution of spike-timing dependent plasticity based on synaptic biophysics
Bernd Porr, Ausra Saudargiene and Florentin Wörgötter
Computational Neuroscience, Psychology, University of Stirling, FK9 4LR Stirling, UK
{Bernd.Porr,ausra,worgott}@cn.stir.ac.uk
Abstract
Spike-timing-dependent plasticity (STDP) is a special form of synaptic plasticity where the relative timing of post- and presynaptic activity determines the change of the synaptic weight. On the postsynaptic side, active backpropagating spikes in dendrites seem to play a crucial role in the induction of spike-timing-dependent plasticity. We argue that postsynaptically the temporal change of the membrane potential determines the weight change. On the presynaptic side, induction of STDP is closely related to the activation of NMDA channels. Therefore, we calculate analytically the change of the synaptic weight by correlating the derivative of the membrane potential with the activity of the NMDA channel. For this calculation we utilise biophysical variables of the physiological cell. The final result shows a weight change curve which conforms with measurements from biology. The positive part of the weight change curve is determined by the NMDA activation. The negative part of the weight change curve is determined by the membrane potential change. Therefore, the weight change curve should change its shape depending on the distance from the soma of the postsynaptic cell. We find temporally asymmetric weight change close to the soma and temporally symmetric weight change in the distal dendrite.
1 Introduction
Donald Hebb [1] postulated half a century ago that the change of synaptic strength depends on the correlation of pre- and postsynaptic activity: cells which fire together wire together. Here we want to concentrate on a special form of correlation-based learning, namely spike-timing-dependent plasticity (STDP, [2, 3]).
STDP is asymmetrical in time: weights grow if the presynaptic event precedes the postsynaptic event. This phenomenon is called long-term potentiation (LTP). Weights shrink when the temporal order is reversed. This is called long-term depression (LTD). Correlations between pre- and postsynaptic activity can take place at different locations of the cell. Here we will focus on the dendrite of the cell (see Fig. 1). The dendrite has attracted interest recently because of its ability to propagate spikes back from the soma of the cell into its distal regions. Such spikes are called backpropagating spikes. The transmission is active, which guarantees that the spikes can reach even the distal regions of the dendrite [4]. Backpropagating spikes have been suggested to be the driving force for STDP in the dendrite [5]. On the presynaptic side the main contribution to STDP comes from Ca2+ flow through the NMDA channels [6]. The goal of this study is to derive an analytical solution for STDP on the basis of the biophysical properties of the NMDA channel and the cell membrane. We will show that mainly the timing of the backpropagating spike determines the shape of the learning curve. With fast decaying backpropagating spikes we obtain STDP, while with slowly decaying backpropagating spikes we approximate temporally symmetric Hebbian learning.
Figure 1: Schematic diagram of the model setup: a plastic NMDA synapse on the dendrite receives the presynaptic event, while the postsynaptic event is a backpropagating (BP) spike arriving with delay T. The inset shows the time course of an NMDA response as modelled by Eq. 2.
2 The Model
The goal is to define a weight change rule which correlates the dynamics of an NMDA channel with a variable which is linked to the dynamics of a backpropagating spike. The precise biophysical mechanisms of STDP are still to a large degree unresolved.
It is, however, known that high levels of Ca2+ concentration resulting from Ca2+ influx, mainly through NMDA channels, will lead to LTP, while lower levels will lead to LTD. Several biophysically more realistic models for STDP were recently designed which rely on this mechanism [7, 8, 9]. Recent physiological results (reviewed in detail in [10]), however, suggest that not only the Ca2+ concentration but, maybe more importantly, the change of the Ca2+ concentration determines whether LTP or LTD is observed. This clearly suggests that a differential term should be included in the learning rule when trying to model STDP. On theoretical grounds such a suggestion has also been made by several authors [11], who discussed that the abstract STDP models [12] are related to the much older model class of differential Hebbian learning rules [13]. In our model we assume that the Ca2+ concentration and the membrane potential are highly correlated. Consequently, our learning rule utilises the derivative of the membrane potential for the postsynaptic activity. After having identified the postsynaptic part of the weight change rule, we have to define the presynaptic part. This shall be the conductance function of the NMDA channel [6]. The conventional membrane equation reads:

$$C\,\frac{dv(t)}{dt} = \rho\, g(t)\,[E - v(t)] + i_{BP}(t) + \frac{V_{rest} - v(t)}{R}, \qquad (1)$$

where v is the membrane potential, ρ the synaptic weight of the NMDA channel, and g, E are its conductance and equilibrium potential, respectively. The current which a BP-spike elicits is given by i_BP, and the last term represents the passive repolarisation property of the membrane towards its resting potential V_rest = −70 mV. We set the membrane capacitance C = 50 pF and the membrane resistance to R = 100 MΩ. E is set to zero. The NMDA channel has the following equation:

$$g(t) = \bar{g}\,\frac{e^{-b_1 t} - e^{-a_1 t}}{[a_1 - b_1]\,[1 + \kappa e^{-\gamma V(t)}]} \qquad (2)$$

For simpler notation, in general we use inverse time constants a_1 = τ_a^{-1}, b_1 = τ_b^{-1}, etc.
In addition, the term a_1 − b_1 in the denominator is required for later easier integration in the Laplace domain. Thus, we adjust for this by defining ḡ = 12 mS/ms, which represents the peak conductance (4 nS) multiplied by b_1 − a_1. The other parameters were: a_1 = 3.0/ms, b_1 = 0.025/ms, γ = 0.06/mV. Since we will not vary the Mg2+ concentration we have already abbreviated: κ = η[Mg2+], η = 0.33/mM, [Mg2+] = 1 mM [14]. The synaptic weight of the NMDA channel is changed by correlating the conductance of this NMDA channel with the change (derivative) of the membrane potential:

$$\frac{d\rho}{dt} = g(t)\,v'(t) \qquad (3)$$

To describe the weight change, we wish to solve:

$$\Delta\rho(T) = \int_0^{\infty} g(T+\tau)\,v'(\tau)\,d\tau, \qquad (4)$$

where T is the temporal shift between the presynaptic activity and the postsynaptic activity. The shift T > 0 means that the backpropagating spike follows after the trigger of the NMDA channel. The shift T < 0 means that the temporal sequence of the pre- and postsynaptic events is reversed. To solve Eq. 4 we have to simplify it, however, without losing biophysical realism. In this paper we are interested in different shapes of backpropagating spikes. The underlying mechanisms which establish backpropagating spikes will not be addressed here. The backpropagating spike shall simply be modelled as a potential change in the dendrite, and its shape is determined by its amplitude, its rise time and its decay time. First we observe that the influence of a single (or even a few) NMDA channels on the membrane potential can be neglected in comparison to a BP-spike¹, which, due to active processes, leads to a depolarisation of often more than 50 mV even at distal dendrites [15]. Thus, we can assume that the dynamics of the membrane potential is established by the backpropagating spike and the resting potential V_rest:

$$C\,\frac{dv(t)}{dt} = i_{BP}(t) + \frac{V_{rest} - v(t)}{R} \qquad (5)$$

This equation can be further simplified.
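Eqs. 1 and 2 with the parameter values above can be integrated directly; a forward-Euler sketch in which the BP current is an illustrative Gaussian pulse (the pulse shape and amplitude, the delay T, and the consistent unit system ms/mV/pF/GΩ/nS/pA are our choices, not the paper's):

```python
import numpy as np

# Channel and membrane parameters from the text; units form a
# consistent set: ms, mV, pF, GOhm, nS, pA.
a1, b1 = 3.0, 0.025            # NMDA inverse time constants (1/ms)
gamma, kappa = 0.06, 0.33      # 1/mV; eta * [Mg2+] at 1 mM
g_peak = 4.0                   # nS
C, R = 50.0, 0.1               # pF, GOhm (= 100 MOhm)
Vrest, E = -70.0, 0.0          # mV
rho = 1.0                      # synaptic weight

# normalise the double exponential so its maximum equals g_peak
t_star = np.log(a1 / b1) / (a1 - b1)
norm = np.exp(-b1 * t_star) - np.exp(-a1 * t_star)

def g_nmda(tt, V):
    """NMDA conductance (Eq. 2): double-exponential time course with
    the Mg2+ block relaxing as V depolarises towards 0 mV."""
    shape = (np.exp(-b1 * tt) - np.exp(-a1 * tt)) / norm
    return g_peak * shape / (1.0 + kappa * np.exp(-gamma * V))

dt = 0.01
t = np.arange(0.0, 60.0, dt)                    # ms; NMDA opens at t = 0
T = 10.0                                        # BP-spike delay (ours)
i_bp = 2000.0 * np.exp(-((t - T) / 0.5) ** 2)   # pA; illustrative pulse

V = np.full_like(t, Vrest)
for n in range(len(t) - 1):                     # forward Euler on Eq. 1
    I = rho * g_nmda(t[n], V[n]) * (E - V[n]) + i_bp[n] \
        + (Vrest - V[n]) / R
    V[n + 1] = V[n] + dt * I / C
```

The membrane depolarises sharply when the BP pulse arrives and then relaxes back towards rest with the time constant RC = 5 ms.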
Next we assume that the second passive repolarisation term can also be absorbed into i_BP, resulting in i_total(t) = i_BP(t) + (V_rest − v(t))/R. To this end we model i_total as the derivative of a band-pass filter function:

$$i_{total}(t) = \bar{i}_{total}\,\frac{a_2 e^{-a_2 t} - b_2 e^{-b_2 t}}{a_2 - b_2} \qquad (6)$$

¹Note that in spines, however, synaptic input can lead to large changes in the postsynaptic potential. In such cases g(t) contributes substantially to v(t).
where ī_total is the current amplitude. This filter function causes first an influx of charges into the dendrite and then again an outflux of charges. The time constants a_2 and b_2 determine the timing of the current flow and therefore the rise and decay time. The total charge flux is zero, so that the resting potential is re-established after a backpropagating spike. In this way the active de- and repolarising properties of a BP-spike can be combined with the passive properties of the membrane, in practice by a curve-fitting procedure which yields a_2, b_2. As a result we find that the membrane equation in our case reduces to:

$$C\,\frac{dv(t)}{dt} = i_{total}(t) \qquad (7)$$

We obtain the resulting membrane potential simply by integrating Eq. 6:

$$v(t) = \frac{\bar{i}_{total}}{C}\,\frac{e^{-b_2 t} - e^{-a_2 t}}{a_2 - b_2} \qquad (8)$$

Note the sign inversion between v (Eq. 8) and i (Eq. 6), the one being the derivative of the other. The NMDA conductance g is more complex, because the membrane potential enters the denominator in Eq. 2. To simplify, we perform a Taylor expansion around v = 0 mV. We expand around 0 mV and not around the resting potential. There are two reasons. First, we are interested in the open NMDA channel. This is the case for voltages towards 0 mV. Second, the NMDA channel has a strong non-linearity around the resting potential. Towards 0 mV, however, the NMDA channel has a linear voltage/current curve. Therefore it makes sense to expand around 0 mV. The NMDA conductance can now be written as:

$$g(t) = \bar{g}\,\frac{e^{-b_1 t} - e^{-a_1 t}}{a_1 - b_1}\cdot\left(\frac{1}{\kappa+1} + \frac{\gamma\kappa\, v(t)}{(\kappa+1)^2} + \dots\right)$$
(9) and finally the potential v(t) (Eq. 8) can be inserted: g(t) = ¯g e−b1t −e−a1t a1 −b1 · (10) 1 κ + 1 + ¯itotalγκe−b2t C(κ + 1)2(a2 −b2) − ¯itotalγκe−a2t C(κ + 1)2(a2 −b2) + . . . (11) terminating the Taylor series after the second term this leads to three contributions to the conductance: g(t) = ¯g κ + 1 e−b1t −e−a1t a1 −b1 | {z } g(0) (12) −¯g¯itotalγκ (κ + 1)2C e−(b1+a2)t −e−(a1+a2)t (a1 −b1)(a2 −b2) | {z } g(1a) (13) + ¯g¯itotalγκ (κ + 1)2C e−(b1+b2)t −e−(a1+b2)t (a1 −b1)(a2 −b2) | {z } g(1b) (14) To perform the correlation in Eq. 4 we transform the required terms into the Laplace domain getting: g(0,1a,1b)(t) = k e−βt −e−αt α −β ↔ G(0,1a,1b)(s) = k 1 (s + α)(s + β) (15) itotal(t) = ¯itotal a2e−a2t −b2e−b2t a2 −b2 ↔ Itotal(s) = ¯itotal s (s + a2)(s + b2) (16) where α and β take the coefficient values from the exponential terms in g(0), g(1a), g(1b), respectively and k are the corresponding multiplicative factors2. A correlation in the Laplace domain is expressed by Plancherel’s theorem [16]: ∆ρ = 1 2π Z +∞ −∞ G(0)(−ıω)e−ıωT It(ıω)dω (17) − Z +∞ −∞ G(1a)(−ıω)e−ıωT It(ıω)dω (18) + Z +∞ −∞ G(1b)(−ıω)e−ıωT It(ıω)dω (19) The solution is calculated with the method of residuals which leads to a split of the result into T ≥0 and T < 0 and we get: For T ≥0: ∆ρ(T) = ¯g¯itotal (κ + 1)C b1e−b1T B(0) + −a1e−a1T A(0) + (20) − γκ¯itotal (κ+1)(a2−b2)C (b1+a2)e−(b1+a2)T B(1) + −(a1+a2)e−(a1+a2)T A(1) + (21) + γκ¯itotal (κ+1)(a2−b2)C (b1+b2)e−(b1+b2)T B(1) + −(a1+b2)e−(a1+b2)T A(1) + (22) with A(0) + = (a1−b1)(a1+a2)(a1+b2), A(1) + = (a1−b1)(a1+2a2)(a1+a2+b2), B(0) + = (a1 −b1)(b1 + b2)(a2 + b1), B(1) + = (a1 −b1)(2a2 + b1)(a2 + b1 + b2). 
For T < 0: ∆ρ(T) = ¯g¯itotal (κ + 1)C a2ea2T A(0) − −b2eb2T B(0) − (23) − γκ¯itotal (κ+1)(a2−b2)C a2ea2T A(1a) − −b2eb2T B(1a) − (24) + γκ¯itotal (κ+1)(a2−b2)C a2ea2T A(1b) − −b2eb2T B(1b) − (25) with A(0) −= (a2 −b2)(a1 +a2)(a2 +b1), A(1a) − = (a2 −b2)(a1 +2a2)(2a2 +b1), A(1b) − = (a2 −b2)(a1 + b2 + a2)(a2 + b1 + b2), B(0) − = (a2 −b2)(a1 + b2)(b1 + b2), B(1a) − = (a2 −b2)(a1 + a2 + b2)(b1 + a2 + b2), B(1b) − = (a2 −b2)(a1 + 2b2)(b1 + 2b2). 2We use lower-case letters for functions in the time domain and upper-case letters for their equivalents in the Laplace domain. The resulting equations contain interesting symmetries which make the interpretation easy. We observe that they split into three terms. For T > 0 the first term captures the NMDA influence only, while for T < 0 it captures the influence of only the BP-spike (apart from scaling factors). Mixed influences arise from the second and third terms, which scale with the peak current amplitude ¯itotal of the BP-spike. 3 Results While the properties of mature NMDA channels are captured by the parameters given for Eq. 2 and remain fairly constant, BP-spikes change their shapes along the dendrite. Thus, we kept the NMDA properties unchanged and varied the time constants of the BP-spikes as well as the current amplitude to simulate this effect. Figure 2: (A-F) STDP-curves obtained from Eqs. 22, 25 and corresponding normalised BP-spikes (G-I, ¯itotal = 1, left y-axis: current, right y-axis: integrated potential). Panels A-C were obtained with different peak currents ¯itotal = 0.5 nA, 0.1 nA and 25 pA. These currents cause peak voltages of 40 mV, 50 mV and 40 mV, respectively. Panels D-F were all simulated with a peak current of ¯itotal = 5.0 nA. This current is unrealistic; however, it is chosen for illustrative purposes to show the different contributions to the learning curve (dashed lines for G(0), dotted lines for G(1a,b), and solid lines for the sum of the two contributions). Time constants for the BP-spikes were: (A,D,G) a−1 2 = τa = 0.0095 ms, b−1 2 = τb = 0.01 ms; (B,E,H) τa = 0.05 ms, τb = 0.1 ms; (C,F,I) τa = 0.1 ms, τb = 1.0 ms. Fig. 2 shows STDP curves (solid lines, A-F) and the corresponding BP-spikes (G-I). The contributions of the different terms to the STDP curves are also shown (first term dashed; second and third terms, scaled with their fore-factor, dotted). All curves have arbitrary units. As expected, we find that the first term dominates for small (realistic) currents (top panels), while the second and third terms dominate for higher currents (middle panels). Furthermore, we find that long BP-spikes will lead to plain Hebbian learning, where only LTP but no LTD is observed (B,C,E,F). 4 Discussion We believe that two of our findings could be of longer-lasting relevance for the understanding of synaptic learning, provided they withstand physiological scrutiny: 1) The shape of the weight change curves relies heavily on the shape of the backpropagating spike. 2) STDP can turn into plain Hebbian learning if the postsynaptic depolarisation (i.e., the BP-spike) has a shallow rise. Physiological studies suggest that weight change curves can indeed have a widely varying shape (reviewed in [17]). In this study we argue that in particular the shape of the backpropagating spike influences the shape of the weight change curve. In fact the dendrites can be seen as active filters which change the shape of backpropagating spikes during their journey to the distal parts of the dendrite [18]. In particular, the decay time of the BP-spike is increased in the distal parts of the dendrite [15]. The different decay times determine whether we get pure symmetric Hebbian learning or STDP (see Fig. 2). Thus, the theoretical result would suggest temporally symmetric Hebbian learning in the distal dendrites and STDP in the proximal dendrites. 
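The qualitative behaviour described above can be sanity-checked by evaluating the correlation integral of Eq. 4 numerically. The sketch below is our own illustration, not the authors' code: it keeps only the zeroth-order (voltage-independent) conductance term g(0) of Eq. 12, which dominates for small BP-spike currents, uses arbitrary units (¯g/(κ+1) and ¯itotal set to 1), and takes the sharp BP-spike time constants of Fig. 2A.

```python
import numpy as np

# Rates from the text (in 1/ms); BP-spike constants as in Fig. 2A
a1, b1 = 3.0, 0.025            # NMDA conductance rise/decay rates
a2, b2 = 1/0.0095, 1/0.01      # BP-spike current rates (tau_a = 0.0095 ms, tau_b = 0.01 ms)
C = 1.0                        # arbitrary units throughout

def g0(t):
    """Zeroth-order NMDA conductance term of Eq. 12 (prefactor set to 1)."""
    return np.where(t >= 0.0, (np.exp(-b1*t) - np.exp(-a1*t))/(a1 - b1), 0.0)

def v_prime(t):
    """dv/dt = i_total/C from Eqs. 6-7 (current amplitude set to 1)."""
    return np.where(t >= 0.0, (a2*np.exp(-a2*t) - b2*np.exp(-b2*t))/((a2 - b2)*C), 0.0)

def delta_rho(T, t_max=5.0, n=500_001):
    """Numerical evaluation of Eq. 4: correlation of g with v' at shift T."""
    tau, dt = np.linspace(0.0, t_max, n, retstep=True)
    f = g0(T + tau)*v_prime(tau)
    return dt*(f.sum() - 0.5*(f[0] + f[-1]))   # composite trapezoid rule
```

Evaluating `delta_rho` on a grid of T values traces out an STDP-like curve: with these settings the weight change is positive for sufficiently positive T (pre before post) and negative for T < 0 (reversed order), consistent with the split of the closed-form solution.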
From a computational perspective this would mean that the distal dendrites perform principal component analysis [19] and the proximal dendrites temporal sequence learning [20]. Our model must now be compared to other models of STDP. It belongs to the class of “state variable models”. Such models either adopt a rather descriptive approach [21], where appropriate functions are fitted to the measured weight change curves, or are closer to the kinetic models, trying to fit phenomenological kinetic equations [7, 22, 23, 9]. Those models establish a more realistic relation between calcium concentration and membrane potential. The calcium concentration seems to be a low-pass filtered version of the membrane potential [24]. Such a low-pass filter hlow could be added to the learning rule Eq. 3, resulting in: dρ/dt = g(t) · (hlow ∗ v′)(t). The approaches of [9] as well as of Karmarkar and co-workers [23] are closely related to our model. Both models investigate the effects of different calcium concentration levels by assuming certain (e.g. exponential) functional characteristics to govern its changes. This allows them to address the question of how different calcium levels will lead to LTD or LTP [25]. Both model types [9, 23, 8] were designed to produce a zero-crossing (transition between LTD and LTP) at T = 0. The differential Hebbian rule employed by us leads to the observed results as a consequence of the fact that the derivative of any generic unimodal signal is a bimodal curve. We utilise the derivative of the unimodal membrane potential to obtain a bimodal weight change curve. The derivative of the membrane potential is proportional to the charge transfer dq(t)/dt = i(t) across the (post-synaptic) membrane (see Eq. 7). There is wide-ranging support that synaptic plasticity is strongly dominated by calcium transfer through NMDA channels [26, 27, 6]. 
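The low-pass-filtered variant of the rule mentioned above, dρ/dt = g · (hlow ∗ v′), can be sketched in discrete time. This is our own illustration: the exponential kernel shape, the traces, and all constants are assumptions for the example, not taken from [24].

```python
import numpy as np

def weight_change(g, v, dt, tau=None):
    """Eq. 3 integrated in discrete time; optionally low-pass filter v' first."""
    dv = np.gradient(v, dt)                   # v'(t)
    if tau is not None:
        t = np.arange(len(v))*dt
        h = np.exp(-t/tau)
        h /= h.sum()                          # normalised exponential kernel (assumed shape)
        dv = np.convolve(dv, h)[:len(v)]      # causal part of (h_low * v')(t)
    return dt*np.sum(g*dv)

# Example traces (arbitrary units and time constants)
t = np.arange(2000)*0.01
g = np.exp(-0.1*t) - np.exp(-3.0*t)           # unimodal conductance-like trace
v = np.exp(-0.5*t) - np.exp(-5.0*t)           # unimodal potential-like trace
wc_plain = weight_change(g, v, 0.01)
wc_smooth = weight_change(g, v, 0.01, tau=0.5)
```

As the filter time constant shrinks, hlow approaches a delta function and the filtered rule reduces to the plain differential Hebbian rule of Eq. 3.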
Thus it seems reasonable to assume that a part of dq/dt represents calcium flow through the NMDA channel. References [1] D. O. Hebb. The organization of behavior: A neuropsychological study. Wiley-Interscience, New York, 1949. [2] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213–215, 1997. [3] J. C. Magee and D. Johnston. A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. Science, 275:209–213, 1997. [4] Daniel Johnston, Brian Christie, Andreas Frick, Richard Gray, Dax A. Hoffmann, Lalania K. Schexnayder, Shigeo Watanabe, and Li-Lian Yuan. Active dendrites, potassium channels and synaptic plasticity. Phil. Trans. R. Soc. Lond. B, 358:667–674, 2003. [5] D. J. Linden. The return of the spike: Postsynaptic action potentials and the induction of LTP and LTD. Neuron, 22:661–666, 1999. [6] R. C. Malenka and R. A. Nicoll. Long-term potentiation — a decade of progress? Science, 285:1870–1874, 1999. [7] W. Senn, H. Markram, and M. Tsodyks. An algorithm for modifying neurotransmitter release probability based on pre- and postsynaptic spike timing. Neural Comp., 13:35–67, 2000. [8] U. R. Karmarkar, M. T. Najarian, and D. V. Buonomano. Mechanisms and significance of spike-timing dependent plasticity. Biol. Cybern., 87:373–382, 2002. [9] H. Z. Shouval, M. F. Bear, and L. N. Cooper. A unified model of NMDA receptor-dependent bidirectional synaptic plasticity. Proc. Natl. Acad. Sci. (USA), 99(16):10831–10836, 2002. [10] G. Q. Bi. Spatiotemporal specificity of synaptic plasticity: cellular rules and mechanisms. Biol. Cybern., 87:319–332, 2002. [11] Patrick D. Roberts. Temporally asymmetric learning rules: I. Differential Hebbian learning. Journal of Computational Neuroscience, 7(3):235–246, 1999. [12] Richard Kempter, Wulfram Gerstner, and J. Leo van Hemmen. Hebbian learning and spiking neurons. Physical Review E, 59:4498–4514, 1999. [13] R. S. Sutton and A. G. Barto. Towards a modern theory of adaptive networks: Expectation and prediction. Psychological Review, 88:135–170, 1981. [14] C. Koch. Biophysics of Computation. Oxford University Press, 1999. [15] Greg Stuart, Nelson Spruston, Bert Sakmann, and Michael Häusser. Action potential initiation and backpropagation in neurons of the mammalian CNS. Trends Neurosci., 20(3):125–131, 1997. [16] John L. Stewart. Fundamentals of Signal Theory. McGraw-Hill, New York, 1960. [17] P. D. Roberts and C. C. Bell. Spike timing dependent synaptic plasticity in biological systems. Biol. Cybern., 87:392–403, 2002. [18] Nace L. Golding, William L. Kath, and Nelson Spruston. Dichotomy of action potential backpropagation in CA1 pyramidal neuron dendrites. J. Neurophysiol., 86:2998–3009, 2001. [19] E. Oja. A simplified neuron model as a principal component analyzer. J. Math. Biol., 15(3):267–273, 1982. [20] Bernd Porr and Florentin Wörgötter. Isotropic Sequence Order learning. Neural Computation, 15:831–864, 2003. [21] H. D. I. Abarbanel, R. Huerta, and M. I. Rabinovich. Dynamical model of long-term synaptic plasticity. Proc. Natl. Acad. Sci. (USA), 99(15):10132–10137, 2002. [22] G. C. Castellani, E. M. Quinlan, L. N. Cooper, and H. Z. Shouval. A biophysical model of bidirectional synaptic plasticity: Dependence on AMPA and NMDA receptors. Proc. Natl. Acad. Sci. (USA), 98(22):12772–12777, 2001. [23] U. R. Karmarkar and D. V. Buonomano. A model of spike-timing dependent plasticity: One or two coincidence detectors? J. Neurophysiol., 88:507–513, 2002. [24] G. Stuart, J. Schiller, and B. Sakmann. Action potential initiation and propagation in rat neocortical pyramidal neurons. J. Physiol., 505:617–632, 1997. [25] M. Nishiyama, K. Hong, K. Mikoshiba, M. Poo, and K. Kato. Calcium stores regulate the polarity and input specificity of synaptic modification. Nature, 408:584–588, 2000. [26] J. Schiller, Y. Schiller, and D. E. Clapham. Amplification of calcium influx into dendritic spines during associative pre- and postsynaptic activation: The role of direct calcium influx through the NMDA receptor. Nat. Neurosci., 1:114–118, 1998. [27] R. Yuste, A. Majewska, S. S. Cash, and W. Denk. Mechanisms of calcium influx into hippocampal spines: heterogeneity among spines, coincidence detection by NMDA receptors, and optical quantal analysis. J. Neurosci., 19:1976–1987, 1999.
Sparse Greedy Minimax Probability Machine Classification Thomas R. Strohmann Department of Computer Science University of Colorado, Boulder strohman@cs.colorado.edu Andrei Belitski Department of Computer Science University of Colorado, Boulder Andrei.Belitski@colorado.edu Gregory Z. Grudic Department of Computer Science University of Colorado, Boulder grudic@cs.colorado.edu Dennis DeCoste Machine Learning Systems Group NASA Jet Propulsion Laboratory decoste@aig.jpl.nasa.gov Abstract The Minimax Probability Machine Classification (MPMC) framework [Lanckriet et al., 2002] builds classifiers by minimizing the maximum probability of misclassification, and gives direct estimates of the probabilistic accuracy bound Ω. The only assumption that MPMC makes is that good estimates of the means and covariance matrices of the classes exist. However, as with Support Vector Machines, MPMC is computationally expensive and requires extensive cross validation experiments to choose kernels and kernel parameters that give good performance. In this paper we address the computational cost of MPMC by proposing an algorithm that constructs nonlinear sparse MPMC (SMPMC) models by incrementally adding basis functions (i.e. kernels) one at a time – greedily selecting the next one that maximizes the accuracy bound Ω. SMPMC automatically chooses both kernel parameters and feature weights without using computationally expensive cross validation. Therefore the SMPMC algorithm simultaneously addresses the problem of kernel selection and feature selection (i.e. feature weighting), based solely on maximizing the accuracy bound Ω. Experimental results indicate that we can obtain reliable bounds Ω, as well as test set accuracies that are comparable to state-of-the-art classification algorithms. 1 Introduction The goal of a binary classifier is to maximize the probability that unseen test data will be classified correctly. 
Assuming that the test data is generated from the same probability distribution as the training data, it is possible to derive specific probability bounds for the case where the decision boundary is a hyperplane. The following result, due to Marshall and Olkin [1] and extended by Bertsimas and Popescu [2], provides the theoretical basis for assigning probability bounds to hyperplane classifiers: sup E[z]=¯z,Cov[z]=Σz Pr{aT z ≥ b} = 1/(1 + ω2), ω2 = inf aT t≥b (t − ¯z)T Σz−1 (t − ¯z) (1) where a ∈ Rd, b are the hyperplane parameters, z is a random vector, and t is an ordinary vector. Lanckriet et al. (see [3] and [4]) used the above result to build the Minimax Probability Machine for binary classification (MPMC). From (1) we note that the only relevant information required about the underlying probability distribution of each class is its mean and covariance matrix. No other estimates and/or assumptions are needed, which implies that the obtained bound (which we refer to as Ω) is essentially distribution free, i.e. it holds for any distribution with a certain mean and covariance matrix. As with other classification algorithms such as Support Vector Machines (SVM) (see [5]), the main disadvantage of current MPMC implementations is that they are computationally expensive (same complexity as SVM), and require extensive cross validation experiments to choose kernels and kernel parameters that give good performance on each data set. The goal of this paper is to propose a kernel-based MPMC algorithm that directly addresses these computational issues. Towards this end, we propose a sparse greedy MPMC (SMPMC) algorithm that efficiently builds classifiers while maintaining the distribution free probability bound of MPM-type algorithms. To achieve this goal, we propose to use an iterative algorithm which adds basis functions (i.e. kernels) one by one to an initially “empty” model. We are considering basis functions that are induced by Mercer kernels, i.e. 
functions of the following form f(z) = Kγ(z, zi) (where zi is an input vector of the training data). Bases are added in a greedy way: we select the particular zi that maximizes the MPMC objective Ω. Furthermore, SMPMC chooses optimal kernel parameters that maximize this metric (hence the subscript γ in Kγ), including automatically weighting input features by γj ≥ 0 for each kernel added, such that zi = (γ1z1, γ2z2, ..., γdzd) for d dimensional data. The proposed SMPMC algorithm automatically selects kernels and re-weights features (i.e. does feature selection) for each new added basis function, by minimizing the error bound (i.e. maximizing Ω). Thus the large computational cost of cross validation (typically used by SVM and MPMC) is avoided. The paper is organized as follows: Section 2.1 reviews the standard MPMC; Section 2.2 describes the proposed sparse greedy MPMC algorithm (SMPMC); and Sections 2.3-2.4 show how we can use sparse MPMC to determine optimal kernel parameters. In section 3 we compare our results to the ones described in the original MPMC paper (see [4]), showing the probability bounds and the test set accuracies for different binary classification problems. The conclusion is presented in section 4. Matlab source code for the SMPMC algorithm is available online: http://nago.cs.colorado.edu/∼strohman/papers.html 2 Classification model In this section we develop a sparse version of the Minimax Probability Machine for binary classification. We show that besides a significant reduction in computational cost, the SMPMC algorithm allows us to do automated kernel and feature selection. 2.1 Minimax Probability Machine for binary classification We will briefly describe the underlying concepts of the MPMC framework as developed by Lanckriet et al. (see [4]). The goal of MPMC is to find a decision boundary H(a, b) = {z|aT z = b} such that the minimum probability ΩH of classifying future data correctly is maximized. 
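The bound of Eq. (1) is easy to exercise numerically. For a halfspace {aᵀz ≥ b} with aᵀz̄ < b, the infimum in Eq. (1) has the closed form ω² = (b − aᵀz̄)²/(aᵀΣza). The sketch below (our own illustration, with invented numbers) checks that 1/(1 + ω²) upper-bounds the halfspace probability for one particular distribution with the given mean and covariance:

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(0)
z_mean = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
a, b = np.array([1.0, 1.0]), 3.0

s = sqrt(a @ Sigma @ a)
omega = max(0.0, b - a @ z_mean)/s     # closed form of the inf in Eq. (1) for a halfspace
bound = 1.0/(1.0 + omega**2)           # worst-case Pr{a^T z >= b} over all matching distributions

# Empirical check for one particular distribution (a Gaussian) with that mean/covariance
z = rng.multivariate_normal(z_mean, Sigma, size=200_000)
emp = np.mean(z @ a >= b)
```

For any specific distribution the empirical frequency sits below the distribution-free bound, which is attained only by the worst-case distribution.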
If we assume that the two classes are generated from random vectors x and y, we can express this probability bound just in terms of the means and covariances of these random vectors: ΩH = inf x∼(¯x,Σx),y∼(¯y,Σy) Pr{aT x ≥b ∧aT y ≤b} (2) Note that we do not make any distributional assumptions other than that ¯x, Σx, ¯y, and Σx are bounded. Exploiting a theorem from Marshall and Olkin [1], it is possible to rewrite (2) as a closed form expression: ΩH = 1 1 + m2 (3) where m = min a p aT Σxa + q aT Σya s.t. aT (¯x −¯y) = 1 (4) The optimal hyperplane parameter a∗is the vector that minimizes (4). The hyperplane parameter b∗can then be computed as: b∗= aT ∗¯x − p aT∗Σxa∗ m (5) A new data point znew is classified according to sign(aT ∗znew −b∗); if this yields +1, znew is classified as belonging to class x, otherwise it is classified as belonging to class y. 2.2 Sparse MPM classification One of the appealing properties of Support Vector Machines is that their models typically rely only on a small fraction of the training examples, the so called support vectors. The models obtained from the kernelized MPMC, however, use all of the training examples (see [4]), i.e. the decision hyperplane will look like: Nx X i=1 a(x) i K(xi, z) + Ny X i=1 a(y) i K(yi, z) = b (6) where in general all a(x) i , a(y) i ̸= 0. This brings up the question whether one can construct sparse models for the MPMC where most of the coefficients a(x) i or a(y) i are zero. In this paper we propose to do this by starting with an initially ”empty” model and then adding basis functions one by one. As we will see shortly, this approach is speeding up both learning and evaluation time while it is still maintaining the distribution free probability bound of the MPMC. 
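Equations (3)-(5) can be exercised on synthetic data. The sketch below is an illustration, not the authors' implementation: for 2-D inputs it solves the constrained minimisation of Eq. (4) by parameterising the feasible line aᵀ(x̄ − ȳ) = 1 and doing a crude grid search along it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal([ 2.0,  2.0], 0.5, size=(200, 2))   # class x samples (invented data)
y = rng.normal([-2.0, -2.0], 0.5, size=(200, 2))   # class y samples
xm, ym = x.mean(axis=0), y.mean(axis=0)
Sx, Sy = np.cov(x.T), np.cov(y.T)

d = xm - ym
a_p = d/(d @ d)                      # particular solution of a^T d = 1
a_n = np.array([-d[1], d[0]])        # spans the null space of the constraint

def m_of(t):
    a = a_p + t*a_n
    return np.sqrt(a @ Sx @ a) + np.sqrt(a @ Sy @ a)

ts = np.linspace(-5.0, 5.0, 20001)   # grid search stands in for a proper convex solver
t_star = ts[np.argmin([m_of(t) for t in ts])]
a = a_p + t_star*a_n
m = m_of(t_star)
omega = 1.0/(1.0 + m*m)              # Eq. (3)
b = a @ xm - np.sqrt(a @ Sx @ a)/m   # Eq. (5)

z = np.vstack([x, y])
labels = np.r_[np.ones(200), -np.ones(200)]
acc = np.mean(np.sign(z @ a - b) == labels)
```

On well-separated classes both the bound Ω and the test accuracy come out high, with accuracy above the bound, as the theory predicts for the worst case.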
Before we outline the algorithm we introduce some notation: N = Nx + Ny the total number of training examples ℓ = (ℓ1, ..., ℓN)T ∈{−1, 1}N the labels of the training data bℓ(k) = (bℓ(k) 1 , ..., bℓ(k) N )T ∈RN output of the model after adding the kth basis function a(k) = the MPMC hyperplane coefficients when adding the kth basis function b(k) = the MPMC hyperplane offset when adding the kth basis function ⃗Kb = (Kv(v, x1), ..., Kv(v, xNx), Kv(v, y1), ..., Kv(v, yNy))T basis function evaluated on all training examples (empirical map) ⃗Kxv = (Kv(v, x1), ..., Kv(v, xNx))T evaluated only on positive examples ⃗Kyv = (Kv(v, y1), ..., Kv(v, yNy))T evaluated only on negative examples Note that bℓ(k) is a vector of real numbers (the distances of the training data to the hyperplane before applying the sign function). v ∈Rd is the training vector generating the basis function ⃗Kv 1. We will simply write ⃗K(k), ⃗K(k) x , ⃗K(k) y for the kth basis function. 1Note that we use the same symbol ⃗K for both the empirical map and the induced function. It will always be clear from the context what ⃗K refers to. For the first basis we are solving the one dimensional MPMC: m = min a q aσ2 ⃗K(1) x a + q aσ2 ⃗K(1) y a s.t. a( ⃗K(1) x −⃗K(1) y ) = 1 (7) where ⃗K(1) x and σ2 ⃗K(1) x are the mean and variance of the vector ⃗K(1) x (which is the first basis function evaluated on all positive training examples). Because of the constraint the feasible region contains just one value for a(1): a(1) = 1/( ⃗K(1) x −⃗K(1) y ) b(1) = a(1) ⃗K(1) x − q aσ2 ⃗ K(1) x a q aσ2 ⃗ K(1) x a+q aσ2 ⃗ K(1) y a = a(1) ⃗K(1) x − σ ⃗ K(1) x σ ⃗ K(1) x +σ ⃗ K(1) y (8) The first model then looks like: bℓ(1) = a(1) ⃗K(1) −b(1) (9) All of the subsequent models use the previous estimation bℓ(k) as one input and the next basis ⃗K(k+1) as the other input. 
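The one-dimensional MPMC of Eqs. (7)-(8) has a closed-form solution, since the constraint fixes a uniquely. A minimal sketch (our own variable names and invented example data):

```python
import numpy as np

def mpmc_1d(kx, ky):
    """Closed-form 1-D MPMC, Eqs. (7)-(8): the constraint leaves a single feasible a."""
    a = 1.0/(kx.mean() - ky.mean())
    sx, sy = kx.std(ddof=1), ky.std(ddof=1)
    m = abs(a)*(sx + sy)                  # sqrt(a^2 s^2) = |a| s for each class
    b = a*kx.mean() - sx/(sx + sy)        # Eq. (8); the |a| factors cancel in the ratio
    omega = 1.0/(1.0 + m*m)
    return a, b, omega

rng = np.random.default_rng(0)
kx = rng.normal(1.0, 0.1, size=100)   # basis function evaluated on positive examples
ky = rng.normal(0.0, 0.1, size=100)   # basis function evaluated on negative examples
a, b, omega = mpmc_1d(kx, ky)
```

The first model is then bℓ(1) = a·K(1) − b, classified by its sign, exactly as in Eq. (9).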
We set up the two dimensional classification problem: x(k+1) = [bℓ(k) x , ⃗K(k+1) x ] ∈RNx×2 y(k+1) = [bℓ(k) y , ⃗K(k+1) y ] ∈RNy×2 (10) And solve the following optimization problem: m = min a p aT Σx(k+1)a + q aT Σy(k+1)a s.t. aT (x(k+1) −y(k+1)) = 1 (11) where x(k+1) is the 2-dimensional mean vector (bℓ(k) x , ⃗K(k+1) x )T and where Σx(k+1) is the 2 × 2 sample covariance matrix of the vectors bℓ(k) x and ⃗K(k+1) x . Let a(k+1) = (a(k+1) 1 , a(k+1) 2 )T be the optimal solution of (11). We set: b(k+1) = a(k+1)T x(k+1) − q a(k+1)T Σx(k+1)a(k+1) q a(k+1)T Σx(k+1)a(k+1) + q a(k+1)T Σy(k+1)a(k+1) (12) and obtain the next model as: bℓ(k+1) = a(k+1) 1 bℓ(k) + a(k+1) 2 ⃗K(k+1) −b(k+1) (13) As stated above, one computational advantage of SMPMC is that we typically use only a small number of training examples to obtain our final model (i.e. k << N). Another benefit is that we have to solve only one and two dimensional MPMC problems. As seen in (8) the one dimensional solution is trivial to compute. An analysis of the two dimensional problem shows that it can be reduced to the problem of finding the roots of a fourth order polynomial. Polynomials of degree 4 still have closed form solutions (see e.g. [6]) which can be computed efficiently. In the standard MPMC algorithm (see [4]), however, the solution a for equation (4) has N dimensions and can therefore only be found by expensive numerical methods. It may seem that the values of Ω= 1/(1 + m2) which we obtain from (11) are not true for the whole model since we are considering only two dimensional problems and not all of the k + 1 dimensions we have added so far through our basis functions. But it turns out that the ”local” bound (from the 2D MPMC) is indeed equal to the ”global” bound (when considering all k + 1 dimensions). We state this fact more formally in the following theorem: Theorem 1: Let bℓ(k) = c0 + c1 ⃗K(1) + ... 
+ ck ⃗K(k) be the sparse MPMC model at the kth iteration (k ≥1) and let a(k+1) 1 , a(k+1) 2 , b(k+1) be the solution of the two dimensional MPMC: bℓ(k+1) = a(k+1) 1 bℓ(k) + a(k+1) 2 ⃗K(k+1) −b(k+1). Then the values of Ωfor the two dimensional MPMC and for the k +1 dimensional MPMC are the same. Proof: see Appendix 2.3 Selection of bases and Gaussian Kernel widths In our experiments we are using the Gaussian kernel which looks like: Kσ(u, v) = exp(−||u −v||2 2 2σ2 ) (14) where σ is the so called kernel width. As mentioned before, one typically has to choose σ manually or determine it by cross validation (see [4]). The SMPMC algorithm greedily selects a basis function – out of a randomly chosen candidate set – to maximize Ωwhich is equivalent to minimizing the value of m in (7) and (11). Before we state the optimization problem for the one and two dimensional MPMC we rewrite (14) so that we can get rid of the denominator: Kγ(u, v) = exp(−γ||u −v||2 2) γ ≥0 (15) The optimization problem we solve for the first iteration is then: min γ m(γ) = min a q aσ2 ⃗K(1) x a + q aσ2 ⃗K(1) y a s.t. a( ⃗K(1) x −⃗K(1) y ) = 1 (16) note that – even though we did not state it explicitly – the statistics σ2 ⃗K(1) x , σ2 ⃗K(1) y , ⃗K(1) x , and ⃗K(1) y (and consequently the coefficient a) all depend on the kernel parameter γ. The two dimensional problem that has to be solved for all subsequent iterations k ≥2 turns into the following optimization problem for γ: min γ m(γ) = min a p aT Σx(k+1)a+ q aT Σy(k+1)a s.t. aT (x(k+1)−y(k+1)) = 1 (17) Again, x(k+1), y(k+1), Σx(k+1), and Σy(k+1) all depend on the kernel parameter γ and from these four statistics we can compute the minimizer a ∈R2 analytically. 
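The width-selection step of Eq. (16) for the first basis can be sketched as follows. This is illustrative only: the data, the basis centre, and the candidate grid are invented, and a discrete candidate set stands in for the authors' optimiser over γ.

```python
import numpy as np

def m_of_gamma(gamma, v, X, Y):
    """1-D objective of Eq. (16) for basis centre v and kernel parameter gamma."""
    kx = np.exp(-gamma*np.sum((X - v)**2, axis=1))
    ky = np.exp(-gamma*np.sum((Y - v)**2, axis=1))
    denom = kx.mean() - ky.mean()
    if abs(denom) < 1e-12:
        return np.inf
    a = 1.0/denom
    return abs(a)*(kx.std(ddof=1) + ky.std(ddof=1))

rng = np.random.default_rng(1)
X = rng.normal( 1.0, 0.3, size=(50, 2))   # positive class (invented data)
Y = rng.normal(-1.0, 0.3, size=(50, 2))   # negative class
v = X[0]                                  # candidate basis centre (a training point)

gammas = [0.01, 0.1, 1.0, 10.0]           # candidate kernel parameters
best = min(gammas, key=lambda g: m_of_gamma(g, v, X, Y))
omega = 1.0/(1.0 + m_of_gamma(best, v, X, Y)**2)
```

Minimising m(γ) is equivalent to maximising Ω = 1/(1 + m²), which is exactly the selection criterion the algorithm applies at every iteration.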
2.4 Feature selection For doing feature selection with Gaussian kernels, one replaces the uniform kernel width γ with a d-dimensional vector ⃗γ of kernel weightings: K⃗γ(u, v) = exp(−Σd l=1 γl(ul − vl)2) (γl ≥ 0, l = 1, ..., d) (18) Note that the optimization problems (16) and (17) for the one- and two-dimensional MPMC, respectively, are now d dimensional instead of just one dimensional. 3 Experiments In this section we describe the results we obtained for SMPMC on various classification benchmarks. We used the same data sets as Lanckriet et al. in [4] for the standard MPMC. The data sets were randomly divided into 90% training data and 10% test data, and the results were averaged over 50 runs for each of the five problems (see Table 1). In all the experiments listed in Table 1 we used the feature selection algorithm (with the exception of Sonar, where width selection was used) and had a candidate set of size 5, i.e. at each iteration the best basis out of 5 randomly chosen candidates was selected. The results we obtained are comparable to the ones reported by Lanckriet et al. [4]. Note that for all of the data sets SMPMC uses significantly fewer basis functions than MPMC does, which directly translates into an accordingly smaller evaluation cost. The differences in training cost are shown in Table 2. The total training time for standard MPMC takes into account the 50-fold cross validation and 10 candidates for the kernel parameter. We observe that for all five data sets the training cost of sparse MPMC is only a fraction of that of standard MPMC. The two plots in Figure 1 show what typical learning curves for sparse MPMC look like. As the number of basis functions increases, both the bound Ω and the test set accuracy start to go up and after a while stabilize.

Table 1: Bound Ω, test set accuracy (TSA), number of bases (B) for sparse and standard MPMC

Dataset        | SMPMC Ω     | SMPMC TSA   | B  | Std. MPMC Ω | Std. MPMC TSA | B
Twonorm        | 86.4 ± 0.1% | 98.3 ± 0.4% | 25 | 91.3 ± 0.1% | 95.7 ± 0.5%   | 270
Breast Cancer  | 90.9 ± 0.1% | 96.8 ± 0.3% | 50 | 89.1 ± 0.1% | 96.9 ± 0.3%   | 614
Ionosphere     | 77.7 ± 0.2% | 91.6 ± 0.5% | 25 | 89.3 ± 0.2% | 91.5 ± 0.7%   | 315
Pima Diabetes  | 38.2 ± 0.1% | 75.4 ± 0.7% | 50 | 32.5 ± 0.2% | 76.2 ± 0.6%   | 691
Sonar          | 78.5 ± 0.2% | 86.4 ± 1.0% | 80 | 99.9 ± 0.1% | 87.5 ± 0.9%   | 187

Table 2: Training time (in seconds) for Matlab implementations of SMPMC and MPMC

Dataset        | # training examples | SMPMC training time | Std. MPMC one optimization | Std. MPMC total training time
Twonorm        | 270 | 125.0 |  23.9 | 1199.2
Breast Cancer  | 614 | 188.5 | 122.4 | 6123.2
Ionosphere     | 315 | 416.3 |  28.1 | 1404.3
Pima Diabetes  | 691 | 165.6 | 186.5 | 9324.2
Sonar          | 187 |  35.3 |   8.7 |  435.1

The stabilization point usually occurs earlier when one does full feature selection (a γ weight for each input dimension) instead of kernel width selection (one uniform γ for all dimensions). We also experimented with different sizes for the candidate set. The plots in Figure 2 show what happens for 1, 5, and 10 candidates. The overall behavior is that the test set accuracy as well as the Ω value converge earlier for larger candidate sets (but note that a larger candidate set also increases the computational cost per iteration). As seen in Figure 1, feature selection usually gives better results in terms of the bound Ω and the test set accuracy. Furthermore, a feature selection algorithm should indicate which features are relevant and which are not. We set up an experiment for the Twonorm data (which has 20 input features) where we added 20 additional noisy features that were not related to the output. The results are shown in Figure 3 and demonstrate that the feature selection algorithm obtained from SMPMC is able to distinguish between relevant and irrelevant features. 
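The feature-weighted kernel of Eq. (18) is a one-liner: setting γl = 0 makes the kernel — and hence the classifier — ignore feature l, while uniform weights recover the single-width kernel of Eq. (15). A sketch with invented values:

```python
import numpy as np

def k_gamma(u, v, gamma):
    """Gaussian kernel with per-feature weightings, Eq. (18)."""
    u, v, gamma = map(np.asarray, (u, v, gamma))
    return float(np.exp(-np.sum(gamma*(u - v)**2)))

u = np.array([1.0, 5.0])
v = np.array([2.0, -3.0])
# gamma_l = 0 removes feature l: the kernel is blind to the second coordinate
k1 = k_gamma(u, v, [0.5, 0.0])
k2 = k_gamma(u, [2.0, 99.0], [0.5, 0.0])
# uniform weights recover the single-width kernel of Eq. (15)
k3 = k_gamma(u, v, [0.25, 0.25])
```

This is why the learned weights ⃗γ double as a feature-relevance profile, as in the Twonorm experiment with added noise features.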
4 Conclusion & future work This paper introduces a new algorithm (Sparse Minimax Probability Machine Classification – SMPMC) for building sparse classification models that provide a lower bound on the probability of classifying future data correctly. We have shown that the method of iteratively adding basis functions has significant computational advantages over the standard MPMC, while it still maintains the distribution free probability bound Ω. Experimental results indicate that automated selection of kernel parameters, as well as automated feature selection (weighting), both key characteristics of the SMPMC algorithm, result in error rates that are competitive with those obtained by models where these parameters must be tuned by computationally expensive cross validation. Future research on sparse greedy MPMC will focus on establishing a theoretical framework for a stopping criterion: when adding more basis functions (kernels) will not significantly reduce error rates, and may lead to overfitting. Also, experiments have so far focused on using Gaussian kernels as basis functions. From the experience with other kernel algorithms, it is known that other types of kernels (polynomial, tanh) can yield better results for certain applications. Furthermore, our framework is not limited to Mercer kernels, and other types of basis functions are also worth investigating. Recent work by Crammer et al. [7] uses boosting to construct a suitable kernel matrix iteratively. An interesting open question is how this approach relates to sparse greedy MPMC. Figure 1: Bound Ω and test set accuracy (TSA) for width selection (WS) and feature selection (FS). Note that the accuracies are all higher than the corresponding bounds. Figure 2: Accuracy and bound for the Diabetes data set using 1, 5 or 10 basis candidates per iteration. Again, the Ω bound is a true lower bound on the test set accuracy. Figure 3: Average feature weighting for the Twonorm data set over 50 test runs. The first 20 features are the original inputs, the last 20 features are additional noisy inputs. References [1] A. W. Marshall and I. Olkin. Multivariate Chebyshev inequalities. Annals of Mathematical Statistics, 31(4):1001–1014, 1960. [2] I. Popescu and D. Bertsimas. Optimal inequalities in probability theory: A convex optimization approach. Technical Report TM62, INSEAD, Dept. Math. O.R., Cambridge, Mass, 2001. [3] G. R. G. Lanckriet, L. E. Ghaoui, C. Bhattacharyya, and M. I. Jordan. Minimax probability machine. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press. [4] G. R. G. Lanckriet, L. E. Ghaoui, C. Bhattacharyya, and M. I. Jordan. A robust minimax approach to classification. Journal of Machine Learning Research, 3:555–582, 2002. [5] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002. [6] William H. Beyer. CRC Standard Mathematical Tables, page 12. CRC Press Inc., Boca Raton, FL, 1987. [7] K. Crammer, J. Keshet, and Y. Singer. Kernel design using boosting. In T. G. Dietterich, S. Becker, and Z. 
Ghahramani, editors, Advances in Neural Information Processing Systems 15, Cambridge, MA, 2003. MIT Press. Appendix: Proof of Theorem 1 We have to show that the values of m are equal for the two dimensional MPMC and the k + 1 dimensional MPMC. We will just show the equivalence for the first term √ aT Σxa, an analogue argumentation will hold for the second term. For the two dimensional MPMC we have the following for the term under the square root: ¡ a(k+1) 1 a(k+1) 2 ¢ Ã σ2 b ℓ(k) x σb ℓ(k) x ⃗ K(k+1) x σ ⃗ K(k+1) x b ℓ(k) x σ2 ⃗ K(k+1) x ! µ a(k+1) 1 a(k+1) 2 ¶ = [a(k+1) 1 ]2σ2 b ℓ(k) x + 2a(k+1) 1 a(k+1) 2 σb ℓ(k) x ⃗ K(k+1) x + [a(k+1) 2 ]2σ2 ⃗ K(k+1) x (19) Note that we can rewrite σ2 b ℓ(k) x = Cov(c0 + c1 ⃗K(1) x + ... + ck ⃗K(k) x , c0 + c1 ⃗K(1) x + ... + ck ⃗K(k) x ) = Pk i=1 Pk j=1 cicjCov( ⃗K(i) x , ⃗K(j) x ) σb ℓ(k) x ⃗ K(k+1) x = Cov(c0 + c1 ⃗K(1) x + ... + ck ⃗K(k) x , ⃗K(k+1) x ) = Pk i=1 ciCov( ⃗K(i) x , ⃗K(k+1) x ) (20) by using properties of the sample covariance (linearity, Cov(const, X) = 0). For the k + 1 dimensional MPMC let us first determine the k + 1 coefficients: bℓ(k+1) = a(k+1) 1 (c0 + c1 ⃗K(1) x + ... + ck ⃗K(k) x ) + a(k+1) 2 ⃗K(k+1) x −b(k+1) = a(k+1) 1 c1 ⃗K(1) x + ... + a(k+1) 1 ck ⃗K(k) x + a(k+1) 2 ⃗K(k+1) x + a(k+1) 1 c0 −b(k+1) The term under the square root then looks like: a(k+1) 1 c1 ... a(k+1) 1 ck a(k+1) 2 T σ2 ⃗ K(1) x ... σ ⃗ K(1) x ⃗ K(k) x σ ⃗ K(1) x ⃗ K(k+1) x ... ... ... ... σ ⃗ K(k) x ⃗ K(1) x ... σ2 ⃗ K(k) x σ ⃗ K(k) x ⃗ K(k+1) x σ ⃗ K(k+1) x ⃗ K(1) x ... σ ⃗ K(k+1) x ⃗ K(k) x σ2 ⃗ K(k+1) x a(k+1) 1 c1 ... a(k+1) 1 ck a(k+1) 2 (21) Multiplying out (21) and substituting according to the equations in (20) yields exactly expression (19) (which is the aT Σxa term of the two dimensional MPM). Since this equivalence will hold likewise for the p aT Σya term in m, we have shown that m (and therefore Ω) is equal for the two dimensional and the k + 1 dimensional MPMC.
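The equivalence argued in the appendix can be checked numerically. The sketch below (our illustration, not code from the paper) draws random kernel outputs and verifies that the two-dimensional quadratic form (19), built on the combined output l_hat = c0 + c1*K1 + ... + ck*Kk, equals the (k+1)-dimensional form (21) built directly on the individual kernel outputs:

```python
import random

def cov(u, v):
    """Sample covariance of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (n - 1)

random.seed(0)
n, k = 200, 4
K = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k + 1)]  # K[j] holds K^(j+1) over n samples
c = [random.gauss(0, 1) for _ in range(k + 1)]                      # c[0] is the constant offset c0
a1, a2 = random.gauss(0, 1), random.gauss(0, 1)

# Combined decision value l_hat on each sample.
l_hat = [c[0] + sum(c[j + 1] * K[j][s] for j in range(k)) for s in range(n)]

# Left-hand side: the 2-dimensional form (19) on (l_hat, K^(k+1)).
lhs = (a1 * a1 * cov(l_hat, l_hat)
       + 2 * a1 * a2 * cov(l_hat, K[k])
       + a2 * a2 * cov(K[k], K[k]))

# Right-hand side: the (k+1)-dimensional form (21) with
# coefficients (a1*c1, ..., a1*ck, a2).
w = [a1 * c[j + 1] for j in range(k)] + [a2]
rhs = sum(w[p] * w[q] * cov(K[p], K[q])
          for p in range(k + 1) for q in range(k + 1))

print(abs(lhs - rhs) < 1e-9)  # the two forms agree, as the proof claims
```

The constant offset c0 drops out of every covariance, which is exactly the Cov(const, X) = 0 property the proof relies on.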
|
2003
|
99
|
2,506
|
PAC-Bayes Learning of Conjunctions and Classification of Gene-Expression Data

Mario Marchand, IFT-GLO, Université Laval, Sainte-Foy (QC) Canada, G1K-7P4, Mario.Marchand@ift.ulaval.ca
Mohak Shah, SITE, University of Ottawa, Ottawa, Ont. Canada, K1N-6N5, mshah@site.uottawa.ca

Abstract

We propose a "soft greedy" learning algorithm for building small conjunctions of simple threshold functions, called rays, defined on single real-valued attributes. We also propose a PAC-Bayes risk bound which is minimized for classifiers achieving a non-trivial tradeoff between sparsity (the number of rays used) and the magnitude of the separating margin of each ray. Finally, we test the soft greedy algorithm on four DNA micro-array data sets.

1 Introduction

An important challenge in the problem of classification of high-dimensional data is to design a learning algorithm that can often construct an accurate classifier that depends on the smallest possible number of attributes. For example, in the problem of classifying gene-expression data from DNA micro-arrays, if one can find a classifier that depends on a small number of genes and that can accurately predict whether a DNA micro-array sample originates from cancer tissue or normal tissue, then there is hope that these genes, used by the classifier, may be playing a crucial role in the development of cancer and may be of relevance for future therapies.

The standard methods used for classifying high-dimensional data are often characterized as either "filters" or "wrappers". A filter is an algorithm used to "filter out" irrelevant attributes before using a base learning algorithm, such as the support vector machine (SVM), which was not designed to perform well in the presence of many irrelevant attributes. A wrapper, on the other hand, is used in conjunction with the base learning algorithm: typically by recursively removing the attributes that have received a small "weight" in the classifier obtained from the base learner.
The recursive feature elimination method is an example of a wrapper that was used by Guyon et al. (2002) in conjunction with the SVM for classification of micro-array data. For the same task, Furey et al. (2000) used a filter which consists of ranking the attributes (gene expressions) as a function of the difference between the positive-example mean and the negative-example mean. Both filters and wrappers have sometimes produced good empirical results, but they are not theoretically justified. What we really need is a learning algorithm that has provably good guarantees in the presence of many irrelevant attributes. One of the first learning algorithms proposed by the COLT community has such a guarantee for the class of conjunctions: if there exists a conjunction that depends on r out of the n input attributes and that correctly classifies a training set of m examples, then the greedy covering algorithm of Haussler (1988) will find a conjunction of at most r ln m attributes that makes no training errors. Note the absence of dependence on the number n of input attributes. In contrast, the mistake bound of the Winnow algorithm (Littlestone, 1988) has a logarithmic dependence on n, and Winnow will build a classifier on all n attributes. Motivated by this theoretical result, and by the fact that simple conjunctions of gene expression levels seem an interesting learning bias for the classification of DNA micro-arrays, we propose a "soft greedy" learning algorithm for building small conjunctions of simple threshold functions, called rays, defined on single real-valued attributes. We also propose a PAC-Bayes risk bound which is minimized for classifiers achieving a non-trivial tradeoff between sparsity (the number of rays used) and the magnitude of the separating margin of each ray. Finally, we test the proposed soft greedy algorithm on four DNA micro-array data sets.

2 Definitions

The input space X consists of all n-dimensional vectors x = (x1, . . .
, xn) where each real-valued component xi ∈ [Ai, Bi] for i = 1, . . . , n. Hence, Ai and Bi are, respectively, the a priori lower and upper bounds on the values of xi. The output space Y is the set of classification labels that can be assigned to any input vector x ∈ X. We focus here on binary classification problems, so Y = {0, 1}. Each example z = (x, y) is an input vector x with its classification label y ∈ Y. In the probably approximately correct (PAC) setting, we assume that each example z is generated independently according to the same (but unknown) distribution D. The (true) risk R(f) of a classifier f : X → Y is defined as the probability that f misclassifies z on a random draw according to D:

$$
R(f) := \Pr_{(x,y)\sim D}\bigl(f(x) \ne y\bigr) = \mathbb{E}_{(x,y)\sim D}\, I\bigl(f(x) \ne y\bigr)
$$

where I(a) = 1 if predicate a is true and 0 otherwise. Given a training set S = (z1, . . . , zm) of m examples, the task of a learning algorithm is to construct a classifier with the smallest possible risk without any information about D. To achieve this goal, the learner can compute the empirical risk RS(f) of any given classifier f according to:

$$
R_S(f) := \frac{1}{m}\sum_{i=1}^{m} I\bigl(f(x_i) \ne y_i\bigr) =: \mathbb{E}_{(x,y)\sim S}\, I\bigl(f(x) \ne y\bigr)
$$

We focus on learning algorithms that construct a conjunction of rays from a training set. Each ray is just a threshold classifier defined on a single attribute (component) xi. More formally, a ray is identified by an attribute index i ∈ {1, . . . , n}, a threshold value t ∈ [Ai, Bi], and a direction d ∈ {−1, +1} (which specifies whether class 1 lies on the largest or the smallest values of xi). Given any input example x, the output r^i_{td}(x) of a ray is defined as:

$$
r^{i}_{td}(x) := \begin{cases} 1 & \text{if } (x_i - t)\,d > 0 \\ 0 & \text{if } (x_i - t)\,d \le 0 \end{cases}
$$

To specify a conjunction of rays, we first need to list all the attributes whose ray is present in the conjunction. For this purpose, we use a vector i := (i1, . . . , i|i|) of attribute indices ij ∈ {1, . . . , n} such that i1 < i2 < . . .
< i|i|, where |i| is the number of indices present in i (and thus the number of rays in the conjunction)¹. To complete the specification of a conjunction of rays, we need a vector t = (t_{i1}, t_{i2}, . . . , t_{i|i|}) of threshold values and a vector d = (d_{i1}, d_{i2}, . . . , d_{i|i|}) of directions, where ij ∈ {1, . . . , n} for j ∈ {1, . . . , |i|}. On any input example x, the output C^i_{td}(x) of a conjunction of rays is given by:

$$
C^{\mathbf{i}}_{td}(x) := \begin{cases} 1 & \text{if } r^{j}_{t_j d_j}(x) = 1 \;\;\forall j \in \mathbf{i} \\ 0 & \text{if } \exists j \in \mathbf{i} : r^{j}_{t_j d_j}(x) = 0 \end{cases}
$$

Finally, any algorithm that builds a conjunction can be used to build a disjunction simply by exchanging the roles of the positive and negative labelled examples. Due to lack of space, we describe here only the case of a conjunction.

3 A PAC-Bayes Risk Bound

The PAC-Bayes approach, initiated by McAllester (1999), aims at providing PAC guarantees to "Bayesian" learning algorithms. These algorithms are specified in terms of a prior distribution P over a space of classifiers, which characterizes our prior belief about good classifiers (before the observation of the data), and a posterior distribution Q (over the same space of classifiers) that takes into account the additional information provided by the training data. A remarkable result that came out of this line of research, known as the "PAC-Bayes theorem", provides a tight upper bound on the risk of a stochastic classifier called the Gibbs classifier. Given an input example x, the label GQ(x) assigned to x by the Gibbs classifier is defined by the following process: we first choose a classifier h according to the posterior distribution Q and then use h to assign the label h(x) to x. The risk of GQ is defined as the expected risk of classifiers drawn according to Q:

$$
R(G_Q) := \mathbb{E}_{h\sim Q}\, R(h) = \mathbb{E}_{h\sim Q}\, \mathbb{E}_{(x,y)\sim D}\, I\bigl(h(x) \ne y\bigr)
$$

The PAC-Bayes theorem was first proposed by McAllester (2003). The version presented here is due to Seeger (2002) and Langford (2003). Theorem 1 Given any space H of classifiers.
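The ray and conjunction-of-rays classifiers defined above can be sketched in a few lines (our illustration, not code from the paper):

```python
# A ray r^i_td(x): a threshold classifier on a single attribute.
def ray(x, i, t, d):
    """1 if (x_i - t) * d > 0, else 0."""
    return 1 if (x[i] - t) * d > 0 else 0

# A conjunction of rays C^i_td(x): 1 iff every ray outputs 1.
def conjunction(x, rays):
    """`rays` is a list of (i, t, d) triples with strictly increasing i."""
    return int(all(ray(x, i, t, d) == 1 for i, t, d in rays))

rays = [(0, 0.0, +1),   # class 1 on x_0 > 0.0
        (1, 2.0, +1)]   # class 1 on x_1 > 2.0
print(conjunction([0.2, 3.5], rays))   # 1: both rays fire
print(conjunction([-0.1, 3.5], rays))  # 0: the first ray outputs 0
```

Exchanging the roles of the labels, as noted above, turns the same code into a disjunction learner.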
For any data-independent prior distribution P over H and for any (possibly data-dependent) posterior distribution Q over H, with probability at least 1 − δ over the random draws of training sets S of m examples:

$$
\mathrm{kl}\bigl(R_S(G_Q)\,\|\,R(G_Q)\bigr) \le \frac{\mathrm{KL}(Q\|P) + \ln\frac{m+1}{\delta}}{m}
$$

where KL(Q‖P) is the Kullback-Leibler divergence between distributions² Q and P:

$$
\mathrm{KL}(Q\|P) := \mathbb{E}_{h\sim Q} \ln\frac{Q(h)}{P(h)}
$$

and where kl(q‖p) is the Kullback-Leibler divergence between the Bernoulli distributions with probabilities of success q and p:

$$
\mathrm{kl}(q\|p) := q \ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p} \quad \text{for } q < p
$$

¹ Although it is possible to use up to two rays on any attribute, we limit ourselves here to the case where each attribute can be used for only one ray.
² Here Q(h) denotes the probability density function associated with Q, evaluated at h.

The bound given by the PAC-Bayes theorem for the risk of Gibbs classifiers can be turned into a bound for the risk of Bayes classifiers in the following way. Given a posterior distribution Q, the Bayes classifier BQ performs a majority vote (under measure Q) of binary classifiers in H. When BQ misclassifies an example x, at least half of the binary classifiers (under measure Q) misclassify x. It follows that the error rate of GQ is at least half of the error rate of BQ. Hence R(BQ) ≤ 2R(GQ).

In our case, we have seen that ray conjunctions are specified in terms of a mixture of discrete parameters i and d and continuous parameters t. If we denote by P_{i,d}(t) the probability density function associated with a prior P over the class of ray conjunctions, we consider here priors of the form:

$$
P_{\mathbf{i},\mathbf{d}}(\mathbf{t}) = \frac{1}{\binom{n}{|\mathbf{i}|}}\; p(|\mathbf{i}|)\; \frac{1}{2^{|\mathbf{i}|}} \prod_{j\in\mathbf{i}} \frac{1}{B_j - A_j}\,; \quad \forall t_j \in [A_j, B_j]
$$

If I denotes the set of all 2^n possible attribute index vectors and D_i denotes the set of all 2^{|i|} binary direction vectors d of dimension |i|, we have that:

$$
\sum_{\mathbf{i}\in I}\; \sum_{\mathbf{d}\in D_{\mathbf{i}}}\; \prod_{j\in\mathbf{i}} \int_{A_j}^{B_j} dt_j\; P_{\mathbf{i},\mathbf{d}}(\mathbf{t}) = 1
$$

whenever $\sum_{e=0}^{n} p(e) = 1$. The reasons motivating this choice for the prior are the following.
The first two factors come from the belief that the final classifier, constructed from the group of attributes specified by i, should depend only on the number |i| of attributes in this group. If we have complete ignorance about the number of rays the final classifier is likely to have, we should choose p(e) = 1/(n + 1) for e ∈ {0, 1, . . . , n}. However, we should choose a p that decreases with e if we have reasons to believe that the number of rays of the final classifier will be much smaller than n. The third factor of P_{i,d}(t) gives equal prior probability to each of the two possible values of direction dj. Finally, for each ray, every possible threshold value t should have the same prior probability of being chosen if we do not have any prior knowledge that would favor some values over others. Since each attribute value xi is constrained, a priori, to lie in [Ai, Bi], we have chosen a uniform probability density on [Ai, Bi] for each ti with i ∈ i. This explains the last factors of P_{i,d}(t).

Given a training set S, the learner will choose an attribute group i and a direction vector d. For each attribute xi ∈ [Ai, Bi] with i ∈ i, a margin interval [ai, bi] ⊆ [Ai, Bi] will also be chosen by the learner. A deterministic ray-conjunction classifier is then specified by choosing the threshold values ti ∈ [ai, bi]. It is tempting at this point to choose ti = (ai + bi)/2 ∀i ∈ i (i.e., the middle of each interval). However, we will see shortly that the PAC-Bayes theorem offers a better guarantee for another type of deterministic classifier. The Gibbs classifier is defined with a posterior distribution Q having all its weight on the same i and d as chosen by the learner, but where each ti is chosen uniformly in [ai, bi].
The KL divergence between this posterior Q and the prior P is then given by:

$$
\mathrm{KL}(Q\|P) = \prod_{j\in\mathbf{i}} \int_{a_j}^{b_j} \frac{dt_j}{b_j - a_j}\; \ln\!\left(\frac{\prod_{i\in\mathbf{i}} (b_i - a_i)^{-1}}{P_{\mathbf{i},\mathbf{d}}(\mathbf{t})}\right)
= \ln\binom{n}{|\mathbf{i}|} + \ln\frac{1}{p(|\mathbf{i}|)} + |\mathbf{i}|\ln 2 + \sum_{i\in\mathbf{i}} \ln\frac{B_i - A_i}{b_i - a_i}
$$

Hence, we see that the KL divergence between the "continuous components" of Q and P (given by the last term) vanishes when [ai, bi] = [Ai, Bi] ∀i ∈ i. Furthermore, the KL divergence between the "discrete components" of Q and P is small for small values of |i| (whenever p(|i|) is not too small). Hence, this KL divergence between our choices for Q and P exhibits a tradeoff between margins (large values of bi − ai) and sparsity (small values of |i|) for Gibbs classifiers. According to Theorem 1, the Gibbs classifier with the smallest guarantee on the risk R(GQ) should minimize a non-trivial combination of KL(Q‖P) (the margins-sparsity tradeoff) and the empirical risk RS(GQ).

Since the posterior Q is identified by an attribute group vector i, a direction vector d, and intervals [ai, bi] ∀i ∈ i, we will refer to the Gibbs classifier GQ as G^{id}_{ab}, where a and b are the vectors formed by the unions of the ai's and the bi's respectively. We can obtain a closed-form expression for RS(G^{id}_{ab}) by first considering the risk R_{(x,y)}(G^{id}_{ab}) on a single example (x, y), since RS(G^{id}_{ab}) = E_{(x,y)∼S} R_{(x,y)}(G^{id}_{ab}). From our definition of Q, we find that:

$$
R_{(x,y)}(G^{\mathbf{id}}_{\mathbf{ab}}) = (1-2y)\left[\prod_{i\in\mathbf{i}} \sigma^{d_i}_{a_i,b_i}(x_i) - y\right] \quad (1)
$$

where we have used the following piece-wise linear functions:

$$
\sigma^{+}_{a,b}(x) := \begin{cases} 0 & \text{if } x < a \\ \frac{x-a}{b-a} & \text{if } a \le x \le b \\ 1 & \text{if } b < x \end{cases}
\qquad
\sigma^{-}_{a,b}(x) := \begin{cases} 1 & \text{if } x < a \\ \frac{b-x}{b-a} & \text{if } a \le x \le b \\ 0 & \text{if } b < x \end{cases} \quad (2)
$$

Hence we notice that R_{(x,1)}(G^{id}_{ab}) = 1 (and R_{(x,0)}(G^{id}_{ab}) = 0) whenever there exists i ∈ i : σ^{di}_{ai,bi}(xi) = 0. This occurs iff there exists a ray which outputs 0 on x. We can also verify that the expression for R_{(x,y)}(C^i_{td}) is identical to the expression for R_{(x,y)}(G^{id}_{ab}), except that the piece-wise linear functions σ^{di}_{ai,bi}(xi) are replaced by the indicator functions I((xi − ti)di > 0).
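The KL(Q‖P) expression, the piece-wise linear functions of (2), and the per-example Gibbs risk (1) are all directly computable. A sketch, with helper names of our own choosing:

```python
import math

def kl_qp(n, intervals, AB, p_of_e):
    """KL(Q||P): intervals maps attribute index i -> (a_i, b_i),
    AB maps i -> (A_i, B_i), and p_of_e is the prior on |i|."""
    k = len(intervals)
    kl = math.log(math.comb(n, k)) + math.log(1.0 / p_of_e(k)) + k * math.log(2)
    for i, (a, b) in intervals.items():
        A, B = AB[i]
        kl += math.log((B - A) / (b - a))   # margin term: small for wide [a_i, b_i]
    return kl

def sigma(x, a, b, d):
    """The piece-wise linear sigma^d_{a,b} of equation (2)."""
    if d == +1:
        return 0.0 if x < a else 1.0 if x > b else (x - a) / (b - a)
    return 1.0 if x < a else 0.0 if x > b else (b - x) / (b - a)

def gibbs_risk_example(x, y, intervals, dirs):
    """Per-example Gibbs risk, equation (1)."""
    prod = 1.0
    for i, (a, b) in intervals.items():
        prod *= sigma(x[i], a, b, dirs[i])
    return (1 - 2 * y) * (prod - y)

intervals, AB = {0: (0.2, 0.8)}, {0: (0.0, 1.0)}
print(round(kl_qp(10, intervals, AB, lambda e: 1.0 / 11), 3))   # 5.904
print(round(gibbs_risk_example([0.5], 1, intervals, {0: +1}), 6))  # 0.5: x sits mid-interval
```

With n = 10 and the ignorance prior p(e) = 1/(n+1), the KL value is ln C(10,1) + ln 11 + ln 2 + ln(1/0.6) ≈ 5.904, illustrating how a narrower margin interval inflates the complexity term.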
The PAC-Bayes theorem provides a risk bound for the Gibbs classifier G^{id}_{ab}. Since the Bayes classifier B^{id}_{ab} just performs a majority vote under the same posterior distribution as the one used by G^{id}_{ab}, we have that B^{id}_{ab}(x) = 1 iff the probability that G^{id}_{ab} classifies x as positive exceeds 1/2. Hence, it follows that

$$
B^{\mathbf{id}}_{\mathbf{ab}}(x) = \begin{cases} 1 & \text{if } \prod_{i\in\mathbf{i}} \sigma^{d_i}_{a_i,b_i}(x_i) > 1/2 \\ 0 & \text{if } \prod_{i\in\mathbf{i}} \sigma^{d_i}_{a_i,b_i}(x_i) \le 1/2 \end{cases} \quad (3)
$$

Note that B^{id}_{ab} has a hyperbolic decision surface. Consequently, B^{id}_{ab} is not representable as a conjunction of rays. There is, however, no computational difficulty in obtaining the output B^{id}_{ab}(x) for any x ∈ X. From the relation between B^{id}_{ab} and G^{id}_{ab}, it also follows that R_{(x,y)}(B^{id}_{ab}) ≤ 2 R_{(x,y)}(G^{id}_{ab}) for any (x, y). Consequently, R(B^{id}_{ab}) ≤ 2 R(G^{id}_{ab}). Hence, we have our main theorem:

Theorem 2 Given all our previous definitions, for any δ ∈ (0, 1], and for any p satisfying $\sum_{e=0}^{n} p(e) = 1$, we have:

$$
\Pr_{S\sim D^m}\!\Biggl( \forall\, \mathbf{i}, \mathbf{d}, \mathbf{a}, \mathbf{b}:\;
R(G^{\mathbf{id}}_{\mathbf{ab}}) \le \sup\biggl\{ \epsilon : \mathrm{kl}\bigl(R_S(G^{\mathbf{id}}_{\mathbf{ab}})\,\|\,\epsilon\bigr) \le \frac{1}{m}\Bigl[ \ln\binom{n}{|\mathbf{i}|} + |\mathbf{i}|\ln 2 + \ln\frac{1}{p(|\mathbf{i}|)} + \sum_{i\in\mathbf{i}} \ln\frac{B_i - A_i}{b_i - a_i} + \ln\frac{m+1}{\delta} \Bigr] \biggr\} \Biggr) \ge 1 - \delta
$$

Furthermore: R(B^{id}_{ab}) ≤ 2 R(G^{id}_{ab}) ∀i, d, a, b.

4 A Soft Greedy Learning Algorithm

Theorem 2 suggests that the learner should try to find the Bayes classifier B^{id}_{ab} that uses a small number of attributes (i.e., a small |i|), each with a large separating margin (bi − ai), while keeping the empirical Gibbs risk RS(G^{id}_{ab}) at a low value. To achieve this goal, we have adapted the greedy algorithm for the set covering machine (SCM) proposed by Marchand and Shawe-Taylor (2002). It consists of choosing the feature (here a ray) i with the largest utility Ui, where

$$
U_i = |Q_i| - p\,|R_i|
$$

Here Qi is the set of negative examples covered (classified as 0) by feature i, Ri is the set of positive examples misclassified by this feature, and p is a learning parameter that gives a penalty p for each misclassified positive example.
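Evaluating the bound of Theorem 2 requires inverting the binary kl divergence: given the empirical Gibbs risk q and the complexity term B on the right-hand side, the bound is sup{ε : kl(q‖ε) ≤ B}. This supremum has no closed form, but since kl(q‖ε) is increasing in ε for ε ≥ q, a binary search suffices. A sketch (our implementation, not code from the paper):

```python
import math

def kl_bern(q, p):
    """kl(q||p) between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(q, B, tol=1e-9):
    """Largest eps in [q, 1] with kl(q||eps) <= B, by binary search."""
    lo, hi = q, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bern(q, mid) <= B:
            lo = mid
        else:
            hi = mid
    return lo

# Example: empirical Gibbs risk 0.1 and complexity term B = 0.05.
bound = kl_inverse(q=0.1, B=0.05)
print(0.1 < bound < 1.0)   # the bound exceeds the empirical risk but stays below 1
```

In practice B would be assembled from the ln-binomial, ln 2, prior, margin, and ln((m+1)/δ) terms of the theorem, divided by m.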
Once the feature with the largest Ui is found, we remove Qi and Ri from the training set S and then repeat (on the remaining examples) until either no more negative examples are present or a maximum number s of features has been reached. In our case, however, we need to keep the Gibbs risk on S low instead of the risk of a deterministic classifier. Since the Gibbs risk is a "soft measure" that uses the piece-wise linear functions σ^d_{a,b} instead of the "hard" indicator functions, we need a "softer" version of the utility function Ui. Indeed, a negative example that falls in the linear region of a σ^d_{a,b} is in fact partly covered. Following this observation, let k be the vector of indices of the attributes that we have used so far in the construction of the classifier. Let us first define the covering value C(G^{kd}_{ab}) of G^{kd}_{ab} as the "amount" of negative examples assigned to class 0 by G^{kd}_{ab}:

$$
C(G^{\mathbf{kd}}_{\mathbf{ab}}) := \sum_{(x,y)\in S} (1-y)\left[1 - \prod_{j\in\mathbf{k}} \sigma^{d_j}_{a_j,b_j}(x_j)\right]
$$

We also define the positive-side error E(G^{kd}_{ab}) of G^{kd}_{ab} as the "amount" of positive examples assigned to class 0:

$$
E(G^{\mathbf{kd}}_{\mathbf{ab}}) := \sum_{(x,y)\in S} y\left[1 - \prod_{j\in\mathbf{k}} \sigma^{d_j}_{a_j,b_j}(x_j)\right]
$$

We now want to add another ray on another attribute, call it i, to obtain a new vector k′ containing this new attribute in addition to those present in k. Hence, we introduce the covering contribution of ray i as:

$$
C^{\mathbf{kd}}_{\mathbf{ab}}(i) := C(G^{\mathbf{k}'\mathbf{d}'}_{\mathbf{a}'\mathbf{b}'}) - C(G^{\mathbf{kd}}_{\mathbf{ab}}) = \sum_{(x,y)\in S} (1-y)\left[1 - \sigma^{d_i}_{a_i,b_i}(x_i)\right] \prod_{j\in\mathbf{k}} \sigma^{d_j}_{a_j,b_j}(x_j)
$$

and the positive-side error contribution of ray i as:

$$
E^{\mathbf{kd}}_{\mathbf{ab}}(i) := E(G^{\mathbf{k}'\mathbf{d}'}_{\mathbf{a}'\mathbf{b}'}) - E(G^{\mathbf{kd}}_{\mathbf{ab}}) = \sum_{(x,y)\in S} y\left[1 - \sigma^{d_i}_{a_i,b_i}(x_i)\right] \prod_{j\in\mathbf{k}} \sigma^{d_j}_{a_j,b_j}(x_j)
$$

Typically, the covering contribution of ray i should increase its "utility" and its positive-side error should decrease it. Moreover, we want to decrease the "utility" of ray i by an amount which becomes large whenever it has a small separating margin. Our expression for KL(Q‖P) suggests that this amount should be proportional to ln((Bi − Ai)/(bi − ai)).
Furthermore, we should compare this margin term with the fraction of the remaining negative examples that ray i has covered (instead of the absolute amount of negative examples covered). Hence the covering contribution C^{kd}_{ab}(i) of ray i should be divided by the amount N^{kd}_{ab} of negative examples that remain to be covered before considering ray i:

$$
N^{\mathbf{kd}}_{\mathbf{ab}} := \sum_{(x,y)\in S} (1-y) \prod_{j\in\mathbf{k}} \sigma^{d_j}_{a_j,b_j}(x_j)
$$

which is simply the amount of negative examples that have been assigned to class 1 by G^{kd}_{ab}. If P denotes the set of positive examples, we define the utility U^{kd}_{ab}(i) of adding ray i to G^{kd}_{ab} as:

$$
U^{\mathbf{kd}}_{\mathbf{ab}}(i) := \frac{C^{\mathbf{kd}}_{\mathbf{ab}}(i)}{N^{\mathbf{kd}}_{\mathbf{ab}}} - p\,\frac{E^{\mathbf{kd}}_{\mathbf{ab}}(i)}{|P|} - \eta \ln\frac{B_i - A_i}{b_i - a_i}
$$

where the parameter p represents the penalty of misclassifying a positive example and η is another parameter that controls the importance of having a large margin. These learning parameters can be chosen by cross-validation. For fixed values of these parameters, the "soft greedy" algorithm simply consists of adding, to the current Gibbs classifier, a ray with maximum added utility until either the maximum number s of rays has been reached or all the negative examples have been (totally) covered. It is understood that, during this soft greedy algorithm, we can remove an example (x, y) from S whenever it is totally covered. This occurs whenever $\prod_{j\in\mathbf{k}} \sigma^{d_j}_{a_j,b_j}(x_j) = 0$.

5 Results for Classification of DNA Micro-Arrays

We have tested the soft greedy learning algorithm on the four DNA micro-array data sets shown in Table 1. The colon tumor data set (Alon et al., 1999) provides the expression levels of 40 tumor and 22 normal colon tissues measured for 6500 human genes. The ALL/AML data set (Golub et al., 1999) provides the expression levels of 7129 human genes for 47 samples of patients with acute lymphoblastic leukemia (ALL) and 25 samples of patients with acute myeloid leukemia (AML).
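One selection step of the soft greedy algorithm of Section 4 can be sketched as follows (our simplified implementation, with example data of our own; `sigma` is the piece-wise linear function of equation (2)):

```python
import math

def sigma(x, a, b, d):
    """Piece-wise linear sigma^d_{a,b} of equation (2)."""
    if d == +1:
        return 0.0 if x < a else 1.0 if x > b else (x - a) / (b - a)
    return 1.0 if x < a else 0.0 if x > b else (b - x) / (b - a)

def utility(S, prod, cand, AB, p=1.0, eta=0.1):
    """Soft utility U(i) of adding candidate ray cand = (i, a, b, d),
    given the per-example products `prod` of the sigmas of chosen rays."""
    i, a, b, d = cand
    A, B = AB[i]
    C = sum((1 - y) * (1 - sigma(x[i], a, b, d)) * pr for (x, y), pr in zip(S, prod))
    E = sum(y * (1 - sigma(x[i], a, b, d)) * pr for (x, y), pr in zip(S, prod))
    N = sum((1 - y) * pr for (x, y), pr in zip(S, prod))   # negatives still uncovered
    P = sum(y for (x, y) in S)                             # number of positive examples
    return C / N - p * E / P - eta * math.log((B - A) / (b - a))

S = [([0.9], 1), ([0.1], 0), ([0.2], 0)]   # (x, y) pairs over one attribute
prod = [1.0, 1.0, 1.0]                     # no ray chosen yet
AB = {0: (0.0, 1.0)}
best = max([(0, 0.3, 0.7, +1), (0, 0.2, 0.4, +1)],
           key=lambda c: utility(S, prod, c, AB))
print(best)   # (0, 0.3, 0.7, 1): both candidates cover the negatives, the wider margin wins
```

Both candidate rays fully cover the two negatives without touching the positive, so the margin term ln((B−A)/(b−a)) breaks the tie in favour of the wider interval, exactly the margins-sparsity tradeoff the bound rewards.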
The B MD and C MD data sets (Pomeroy et al., 2002) are micro-array samples containing the expression levels of 6817 human genes. Data set B contains 25 classic and 9 desmoplastic medulloblastomas, whereas data set C contains 39 medulloblastoma survivors and 21 treatment failures (non-survivors). We have compared the soft greedy learning algorithm with a linear-kernel soft-margin SVM trained both on all the attributes (gene expressions) and on a subset of attributes chosen by the filter method of Golub et al. (1999). The filter consists of ranking the attributes as a function of the difference between the positive-example mean and the negative-example mean, and then using only the first ℓ attributes. The resulting learning algorithm, named SVM+gs in Table 1, is basically the one used by Furey et al. (2000) for the same task. Guyon et al. (2002) claimed to obtain better results with the recursive feature elimination method but, as pointed out by Ambroise and McLachlan (2002), their work contained a methodological flaw and, consequently, the superiority of this wrapper method is questionable.

Each algorithm was tested with the 5-fold cross-validation (CV) method. Each of the five training sets and testing sets was the same for all algorithms. The learning parameters of all algorithms and the gene subsets (for SVM+gs) were chosen from the training sets only. This was done by performing a second (nested) 5-fold CV on each training set. For the gene subset selection procedure of SVM+gs, we considered the first ℓ = 2^i genes (for i = 0, 1, . . . , 12) ranked according to the criterion of Golub et al. (1999) and chose the i value that gave the smallest 5-fold CV error on the training set.

  Data Set       SVM    SVM+gs           Soft Greedy
  Name     #exs  errs   errs  size   ratio  size  G-errs  B-errs  Bound
  Colon      62    12     11   256   0.42      1      12       9     18
  B MD       34    12      6    32   0.10      1       6       6     20
  C MD       60    29     21  1024   0.077     3      24      22     40
  ALL/AML    72    18     10    64   0.002     2      19      17     38

Table 1: DNA micro-array data sets and results.
For each algorithm, the "errs" columns of Table 1 contain the 5-fold CV error expressed as the sum of errors over the five testing sets, and the "size" columns contain the number of attributes used by the classifier, averaged over the five testing sets. The "G-errs" and "B-errs" columns refer to the Gibbs and Bayes error rates. The "ratio" column refers to the average value of (bi − ai)/(Bi − Ai) obtained for the rays used by the classifiers, and the "Bound" column refers to the average risk bound of Theorem 2 multiplied by the total number of examples. We see that the gene selection filter generally improves the error rate of the SVM and that the Bayes error rate is slightly better than the Gibbs error rate. Finally, the error rates of Bayes and SVM+gs are competitive, but the number of genes selected by the soft greedy algorithm is always much smaller.

References

U. Alon, N. Barkai, D. A. Notterman, K. Gish, S. Ybarra, D. Mack, and A. J. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. PNAS USA, 96:6745–6750, 1999.

C. Ambroise and G. J. McLachlan. Selection bias in gene extraction on the basis of microarray gene-expression data. Proc. Natl. Acad. Sci. USA, 99:6562–6566, 2002.

T. S. Furey, N. Cristianini, N. Duffy, D. W. Bednarski, M. Schummer, and D. Haussler. Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics, 16:906–914, 2000.

T. R. Golub, D. K. Slonim, et al. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286:531–537, 1999.

I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, 46:389–422, 2002.

D. Haussler. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence, 36:177–221, 1988.

John Langford. Tutorial on practical prediction theory for classification. http://hunch.net/~jl/projects/prediction_bounds/tutorial/tutorial.ps, 2003.

N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4):285–318, 1988.

Mario Marchand and John Shawe-Taylor. The set covering machine. Journal of Machine Learning Research, 3:723–746, 2002.

David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355–363, 1999.

David McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51:5–21, 2003. A preliminary version appeared in the proceedings of COLT'99.

S. L. Pomeroy, P. Tamayo, et al. Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature, 415:436–442, 2002.

Matthias Seeger. PAC-Bayesian generalization bounds for Gaussian processes. Journal of Machine Learning Research, 3:233–269, 2002.
|
2004
|
1
|
2,507
|
A Probabilistic Model for Online Document Clustering with Application to Novelty Detection

Jian Zhang†, †School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, jian.zhang@cs.cmu.edu
Zoubin Ghahramani†‡, ‡Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, UK, zoubin@gatsby.ucl.ac.uk
Yiming Yang†, †School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, yiming@cs.cmu.edu

Abstract

In this paper we propose a probabilistic model for online document clustering. We use a non-parametric Dirichlet process prior to model the growing number of clusters, and use a general English language model as the base distribution to handle the generation of novel clusters. Furthermore, cluster uncertainty is modeled with a Bayesian Dirichlet-multinomial distribution. We use the empirical Bayes method to estimate hyperparameters based on a historical dataset. Our probabilistic model is applied to the novelty detection task in Topic Detection and Tracking (TDT) and compared with existing approaches in the literature.

1 Introduction

The task of online document clustering is to group documents into clusters as they arrive in a temporal sequence. Generally speaking, this is difficult for several reasons. First, it is unsupervised learning and the learning has to be done in an online fashion, which imposes constraints on both strategy and efficiency. Second, similar to other learning problems with text, we have to deal with a high-dimensional space with tens of thousands of features. And finally, the number of clusters can be as large as thousands in newswire data.

The objective of novelty detection is to identify the novel objects in a sequence of data, where "novel" is usually defined as dissimilar to previously seen instances. Here we are interested in novelty detection in the text domain, where we want to identify the earliest report of every new event in a sequence of news stories.
Applying online document clustering to the novelty detection task is straightforward: the first seed of every cluster is flagged as novel and all its remaining documents as non-novel. The most obvious application of novelty detection is that, by detecting novel events, systems can automatically alert people when new events happen. In this paper we apply a Dirichlet process prior to model the growing number of clusters, and propose to use a general English language model as a basis for newly generated clusters. In particular, new clusters are generated according to the prior and a background general English model, and each document cluster is modeled using a Bayesian Dirichlet-multinomial language model. Bayesian inference can be easily carried out due to conjugacy, and model hyperparameters are estimated from a historical dataset by the empirical Bayes method. We evaluate our online clustering algorithm (as well as its variants) on the novelty detection task in TDT, which has been regarded as the hardest task in that literature [2].

The rest of this paper is organized as follows. We first introduce our probabilistic model in Section 2, and in Section 3 we give detailed information on how to estimate model hyperparameters. We describe the experiments in Section 4 and related work in Section 5. We conclude and discuss future work in Section 6.

2 A Probabilistic Model for Online Document Clustering

In this section we describe the generative probabilistic model for online document clustering. We use x = (n^{(x)}_1, n^{(x)}_2, . . . , n^{(x)}_V) to represent a document vector, where each element n^{(x)}_v denotes the term frequency of the vth word in the document x, and V is the total size of the vocabulary.

2.1 Dirichlet-Multinomial Model

The multinomial distribution has been one of the most frequently used language models for modeling documents in information retrieval. It assumes that, given the set of parameters θ = (θ1, θ2, . . .
, θV), a document x is generated with the following probability:

$$
p(x\mid\theta) = \frac{\bigl(\sum_{v=1}^{V} n^{(x)}_v\bigr)!}{\prod_{v=1}^{V} n^{(x)}_v!}\; \prod_{v=1}^{V} \theta_v^{\,n^{(x)}_v}.
$$

From the formula we can see the so-called naive assumption: words are assumed to be independent of each other. Given a collection of documents generated from the same model, the parameter θ can be estimated by Maximum Likelihood Estimation (MLE). In a Bayesian approach we instead put a Dirichlet prior over the parameter (θ ∼ Dir(α)), such that the probability of generating a document is obtained by integrating over the parameter space: p(x) = ∫ p(θ|α) p(x|θ) dθ. This integral can be written down in closed form due to the conjugacy between the Dirichlet and multinomial distributions. The key difference between the Bayesian approach and MLE is that the former uses a distribution to model the uncertainty of the parameter θ, while the latter gives only a point estimate.

2.2 Online Document Clustering with a Dirichlet Process Mixture Model

In our system documents are grouped into clusters in an online fashion. Each cluster is modeled with a multinomial distribution whose parameter θ follows a Dirichlet prior. First, a cluster is chosen based on a Dirichlet process prior (it can be either a new or an existing cluster), and then a document is drawn from that cluster. We use a Dirichlet process (DP) to model the prior distribution of the θ's, and our hierarchical model is as follows:

$$
x_i \mid c_i \sim \mathrm{Mul}(\,\cdot \mid \theta^{(c_i)}), \qquad \theta_i \overset{iid}{\sim} G, \qquad G \sim \mathrm{DP}(\lambda, G_0) \quad (1)
$$

where ci is the cluster indicator variable, θi is the multinomial parameter¹ for each document, and θ^{(ci)} is the unique θ for the cluster ci. G is a random distribution generated from the Dirichlet process DP(λ, G0) [4], which has a precision parameter λ and a base distribution G0. Here our base distribution G0 is a Dirichlet distribution Dir(γπ1, γπ2, . . . , γπV) with $\sum_{t=1}^{V} \pi_t = 1$, which reflects our expected knowledge about G.
Intuitively, our $G_0$ distribution can be treated as a prior over general English word frequencies, which has been used in the information retrieval literature [6] to model general English documents. The exact cluster-document generation process can be described as follows:

1. Let $x_i$ be the current document under processing (the $i$th document in the input sequence), and let $C_1, C_2, \ldots, C_m$ be the already generated clusters.

2. Draw a cluster $c_i$ based on the following Dirichlet process prior [4]:

$$p(c_i = C_j) = \frac{|C_j|}{\lambda + \sum_{j'=1}^{m} |C_{j'}|} \;\; (j = 1, \ldots, m), \qquad p(c_i = C_{m+1}) = \frac{\lambda}{\lambda + \sum_{j'=1}^{m} |C_{j'}|} \qquad (2)$$

where $|C_j|$ stands for the cardinality of cluster $j$, with $\sum_{j=1}^{m} |C_j| = i - 1$; with some probability a new cluster $C_{m+1}$ is generated.

3. Draw the document $x_i$ from the cluster $c_i$.

2.3 Model Updating

Our models for each cluster need to be updated based on incoming documents. We can write down the probability that the current document $x_i$ is generated by any cluster as

$$p(x_i \mid C_j) = \int p(\theta^{(C_j)} \mid C_j)\, p(x_i \mid \theta^{(C_j)})\, d\theta^{(C_j)} \qquad (j = 1, 2, \ldots, m, m+1)$$

where $p(\theta^{(C_j)} \mid C_j)$ is the posterior distribution of the parameters of the $j$th cluster ($j = 1, 2, \ldots, m$), and for convenience we use $p(\theta^{(C_{m+1})} \mid C_{m+1}) = p(\theta^{(C_{m+1})})$ to denote the prior distribution of the parameters of the new cluster. Although the dimensionality of $\theta$ is high ($V \approx 10^5$ in our case), a closed-form solution can be obtained under our Dirichlet-multinomial assumption. Once the conditional probabilities $p(x_i \mid C_j)$ are computed, the probabilities $p(C_j \mid x_i)$ can be easily calculated using Bayes rule:

$$p(C_j \mid x_i) = \frac{p(C_j)\, p(x_i \mid C_j)}{\sum_{j'=1}^{m+1} p(C_{j'})\, p(x_i \mid C_{j'})}$$

where the prior probability of each cluster is calculated using equation (2). Now there are several choices we can consider on how to update the cluster models.
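As a concrete illustration (not code from the paper), the prior of equation (2) and the Bayes-rule posterior over clusters can be sketched as follows:

```python
from math import exp

def dp_cluster_prior(cluster_sizes, lam):
    """Eq. (2): prior probability of assigning the next document to each
    existing cluster (proportional to its size) or to a new cluster
    (proportional to the precision parameter lam)."""
    total = lam + sum(cluster_sizes)
    return [s / total for s in cluster_sizes] + [lam / total]

def cluster_posterior(cluster_sizes, lam, log_likes):
    """Combine the DP prior with per-cluster log-likelihoods
    log p(x_i | C_j) (last entry is the new cluster) via Bayes rule
    to obtain p(C_j | x_i)."""
    prior = dp_cluster_prior(cluster_sizes, lam)
    unnorm = [p * exp(ll) for p, ll in zip(prior, log_likes)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

With equal likelihoods the posterior reduces to the prior, which makes the role of the cluster sizes and of $\lambda$ easy to inspect.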
The first choice, which is correct but obviously intractable, is to fork $m + 1$ children of the current system, where the $j$th child is updated with document $x_i$ assigned to cluster $j$, and the final system is a probabilistic combination of those children with the corresponding probabilities $p(C_j \mid x_i)$. The second choice is to make a hard decision by assigning the current document $x_i$ to the cluster with maximum posterior probability:

$$c_i = \arg\max_{C_j} p(C_j \mid x_i).$$

The third choice is to use a soft probabilistic update, similar in spirit to Assumed Density Filtering (ADF) [7]: each cluster is updated by exponentiating the likelihood function with the corresponding posterior probability,

$$p(\theta^{(C_j)} \mid x_i, C_j) \propto p(x_i \mid \theta^{(C_j)})^{p(C_j \mid x_i)}\, p(\theta^{(C_j)} \mid C_j).$$

However, we must treat the new cluster specially, since we cannot afford, either time-wise or space-wise, to generate a new cluster for each incoming document. Instead, we update all existing clusters as above, and a new cluster is generated only if $c_i = C_{m+1}$. We will use HD and PD (hard decision and probabilistic decision) to denote the last two candidates in our experiments.

^1 For $\theta$ we use $\theta_v$ to denote the $v$th element of the vector, $\theta_i$ to denote the parameter vector that generates the $i$th document, and $\theta^{(j)}$ to denote the parameter vector of the $j$th cluster.

3 Learning Model Parameters

In the above probabilistic model several hyperparameters remain unspecified, namely the $\pi$ and $\gamma$ in the base distribution $G_0 = \mathrm{Dir}(\gamma\pi_1, \gamma\pi_2, \ldots, \gamma\pi_V)$, and the precision parameter $\lambda$ in $\mathrm{DP}(\lambda, G_0)$. Since we can obtain a partially labeled historical dataset^2, we now discuss how to estimate those parameters. We mainly use the empirical Bayes method [5] instead of a full Bayesian approach, since it is easier to compute and generally reliable when the number of data points is large relative to the number of parameters.
Because the $\theta_i$'s are iid. draws from the random distribution $G$, integrating out $G$ gives

$$\theta_i \mid \theta_1, \theta_2, \ldots, \theta_{i-1} \sim \frac{\lambda}{\lambda + i - 1}\, G_0 + \frac{1}{\lambda + i - 1} \sum_{j < i} \delta_{\theta_j}$$

where the distribution is a mixture of continuous and discrete components, and $\delta_{\theta}$ denotes the probability measure putting point mass on $\theta$. Now suppose we have a historical dataset $H$ containing $K$ labeled clusters $H_k$ ($k = 1, 2, \ldots, K$), with the $k$th cluster $H_k = \{x_{k,1}, x_{k,2}, \ldots, x_{k,m_k}\}$ having $m_k$ documents. The joint probability of the $\theta$'s of all documents is

$$p(\theta_1, \theta_2, \ldots, \theta_{|H|}) = \prod_{i=1}^{|H|} \Big( \frac{\lambda}{\lambda + i - 1}\, G_0 + \frac{1}{\lambda + i - 1} \sum_{j < i} \delta_{\theta_j} \Big)$$

where $|H|$ is the total number of documents. By integrating over the unknown $\theta$'s we get

$$p(H) = \int \prod_{i=1}^{|H|} p(x_i \mid \theta_i)\; p(\theta_1, \ldots, \theta_{|H|})\; d\theta_1 \cdots d\theta_{|H|} = \prod_{i=1}^{|H|} \int p(x_i \mid \theta_i) \Big( \frac{\lambda}{\lambda + i - 1}\, G_0 + \frac{1}{\lambda + i - 1} \sum_{j < i} \delta_{\theta_j} \Big) d\theta_i \qquad (3)$$

The empirical Bayes method can be applied to equation (3) to estimate the model parameters by maximization^3. In the following we discuss how to estimate each parameter in detail.

^2 Although documents are grouped into clusters in the historical dataset, we cannot make direct use of those labels, because the clusters in the test dataset are different from those in the historical dataset.

^3 Since only a subset of documents are labeled in the historical dataset $H$, the maximization is only taken over the union of the labeled clusters.

3.1 Estimating the πt's

Our hyperparameter vector $\pi$ contains $V$ parameters for the base distribution $G_0$, which can be treated as the expected distribution of $G$, the prior of the cluster parameter $\theta$'s. Although $\pi$ contains $V \approx 10^5$ actual parameters in our case, we can still use empirical Bayes to obtain a reliable point estimate, since the amount of data we have to represent general English is large (our historical dataset contains around $10^6$ documents and around $1.8 \times 10^8$ English words in total) and highly informative about $\pi$.
We use the smoothed estimate $\pi \propto (1 + n^{(H)}_1, 1 + n^{(H)}_2, \ldots, 1 + n^{(H)}_V)$, where $n^{(H)}_t = \sum_{x \in H} n^{(x)}_t$ is the total number of occurrences of term $t$ in the collection $H$, and $\pi$ is normalized so that $\sum_{t=1}^{V} \pi_t = 1$. The pseudo-count of one is added to alleviate the out-of-vocabulary problem.

3.2 Estimating γ

Though $\gamma$ is just a scalar parameter, it controls the uncertainty of the prior knowledge about how clusters are related to the general English model with parameter $\pi$; that is, $\gamma$ controls how far each new cluster can deviate from the general English model^4. It can be estimated as follows:

$$\hat\gamma = \arg\max_\gamma \prod_{k=1}^{K} p(H_k \mid \gamma) = \arg\max_\gamma \prod_{k=1}^{K} \int p(H_k \mid \theta^{(k)})\, p(\theta^{(k)} \mid \gamma)\, d\theta^{(k)} \qquad (4)$$

$\hat\gamma$ can be computed numerically by solving the following equation:

$$K\Psi(\gamma) - K\sum_{v=1}^{V} \Psi(\gamma\pi_v)\,\pi_v + \sum_{k=1}^{K}\sum_{v=1}^{V} \Psi\big(\gamma\pi_v + n^{(H_k)}_v\big)\,\pi_v - \sum_{k=1}^{K} \Psi\Big(\gamma + \sum_{v=1}^{V} n^{(H_k)}_v\Big) = 0$$

where the digamma function $\Psi(x)$ is defined as $\Psi(x) \equiv \frac{d}{dx}\ln\Gamma(x)$. Alternatively we can choose $\gamma$ by evaluation over the historical dataset. This is applicable (though computationally expensive) since it is only a scalar parameter and we can precompute its possible range based on equation (4).

3.3 Estimating λ

The precision parameter $\lambda$ of the DP is also very important for the model: it controls how far the random distribution $G$ can deviate from the baseline model $G_0$. In our case, it is also the prior belief about how quickly new clusters are generated in the sequence. Similarly, we can use equation (3) to estimate $\lambda$, since the terms involving $\lambda$ can be factored out as $\prod_{i=1}^{|H|} \frac{\lambda^{y_i}}{\lambda + i - 1}$. Suppose we have a labeled subset $H_L = \{(x_1, y_1), (x_2, y_2), \ldots, (x_M, y_M)\}$ of the training data, where $y_i$ is 1 if $x_i$ is a novel document and 0 otherwise. Here we describe two possible choices:

1. The simplest way is to assume that $\lambda$ is a fixed constant during the process; it can then be computed as $\hat\lambda = \arg\max_\lambda \prod_{i \in H_L} \frac{\lambda^{y_i}}{\lambda + i - 1}$, where $H_L$ here denotes the subset of indices of labeled documents in the whole sequence.
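Since the fixed-$\lambda$ objective is one-dimensional, a simple search suffices. A sketch with hypothetical labeled data (illustrative, not the authors' code):

```python
from math import log

def lam_log_likelihood(lam, labeled):
    """log of prod_{i in H_L} lam^{y_i} / (lam + i - 1), where `labeled`
    is a list of (i, y_i) pairs: the position of a labeled document in
    the stream and its novelty label."""
    return sum(y * log(lam) - log(lam + i - 1) for i, y in labeled)

def estimate_lambda(labeled, grid):
    """Pick the lambda on a candidate grid maximising the likelihood."""
    return max(grid, key=lambda lam: lam_log_likelihood(lam, labeled))
```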
2. The assumption that $\lambda$ is fixed may be restrictive in reality, especially considering that it reflects the generation rate of new clusters. More generally, we can assume that $\lambda$ is a function of the position $i$. In particular, we assume $\lambda = a/i + b + ci$, where $a$, $b$ and $c$ are non-negative numbers. This formulation is a generalization of the case above, where the $a/i$ term allows a much faster decrease at the beginning, and $c$ is the asymptotic rate of events happening as $i \to \infty$. Again the parameters $a$, $b$ and $c$ are estimated by MLE over the training dataset: $\hat a, \hat b, \hat c = \arg\max_{a,b,c \ge 0} \prod_{i \in H_L} \frac{(a/i + b + ci)^{y_i}}{a/i + b + ci + i - 1}$.

^4 The mean and variance of a Dirichlet distribution $(\theta_1, \ldots, \theta_V) \sim \mathrm{Dir}(\gamma\pi_1, \ldots, \gamma\pi_V)$ are $E[\theta_v] = \pi_v$ and $\mathrm{Var}[\theta_v] = \frac{\pi_v(1 - \pi_v)}{\gamma + 1}$.

4 Experiments

We apply the above online clustering model to the novelty detection task in Topic Detection and Tracking (TDT). TDT has been an active research initiative since its 1997 pilot study, aiming at techniques for automatically processing news documents in terms of events. Several tasks are defined in TDT, and among them Novelty Detection (a.k.a. First Story Detection or New Event Detection) has been regarded as the hardest [2]. The objective of the novelty detection task is to detect the earliest report of each event as soon as that report arrives in the temporal sequence of news stories.

4.1 Dataset

We use the TDT2 corpus as our historical dataset for estimating parameters, and the TDT3 corpus to evaluate our model^5. Note that for a subset of documents in the historical dataset (TDT2) event labels are given. The TDT2 corpus used for the novelty detection task consists of 62,962 documents, among which 8,401 documents are labeled in 96 clusters. Stopwords are removed and words are stemmed, after which there are on average 180 words per document. The total number of features (unique words) is around 100,000.
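The varying-rate objective from Section 3.3 generalises the fixed-$\lambda$ likelihood. A sketch of its log-likelihood (illustrative only; the denominator follows the fixed-$\lambda$ case):

```python
from math import log

def var_lambda_log_likelihood(a, b, c, labeled):
    """Log-likelihood of the varying rate lambda(i) = a/i + b + c*i over
    labeled pairs (i, y_i); setting a = c = 0 recovers the fixed-lambda
    objective with lambda = b."""
    ll = 0.0
    for i, y in labeled:
        lam = a / i + b + c * i
        ll += y * log(lam) - log(lam + i - 1)
    return ll
```

Maximising this over non-negative $a, b, c$ (e.g., by grid or coordinate search) gives the MLE described in the text.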
4.2 Evaluation Measure

In our experiments we use the standard TDT evaluation measure [1]. Performance is characterized in terms of the probability of two types of errors: miss and false alarm ($P_{Miss}$ and $P_{FA}$). These two error probabilities are combined into a single detection cost, $C_{det}$, by assigning costs to miss and false alarm errors:

$$C_{det} = C_{Miss} \cdot P_{Miss} \cdot P_{target} + C_{FA} \cdot P_{FA} \cdot P_{non\text{-}target}$$

where

1. $C_{Miss}$ and $C_{FA}$ are the costs of a miss and a false alarm, respectively;

2. $P_{Miss}$ and $P_{FA}$ are the conditional probabilities of a miss and a false alarm, respectively; and

3. $P_{target}$ and $P_{non\text{-}target}$ are the a priori target probabilities ($P_{target} = 1 - P_{non\text{-}target}$).

It is the following normalized cost that is actually used in evaluating TDT systems:

$$(C_{det})_{norm} = \frac{C_{det}}{\min(C_{Miss} \cdot P_{target},\; C_{FA} \cdot P_{non\text{-}target})}$$

where the denominator is the cost of the better of two trivial systems (flag every document as novel, or none). Two types of evaluation are used in TDT, namely macro-averaged (topic-weighted) and micro-averaged (story-weighted) evaluation. In macro-averaged evaluation, the cost is computed for every event and then averaged. In micro-averaged evaluation the cost is averaged over all documents' decisions generated by the system, so large events have a bigger impact on the overall performance. Note that macro-averaged evaluation is the primary evaluation measure in TDT. In addition to the binary decision "novel" or "non-novel", each system is required to generate a confidence score for each test document: the higher the score, the more likely the document is novel. We mainly use the minimum cost, obtained by varying the decision threshold, to evaluate systems, which makes the comparison independent of the threshold setting.

^5 Strictly speaking, we only used the subsets of TDT2 and TDT3 designated for the novelty detection task.
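A sketch of the normalised cost computation (the cost constants and target prior below are illustrative defaults, not necessarily the official TDT settings):

```python
def normalized_cost(p_miss, p_fa, c_miss=1.0, c_fa=0.1, p_target=0.02):
    """(C_det)_norm: detection cost normalised by the better of the two
    trivial systems (flag everything as novel, or nothing)."""
    p_non = 1.0 - p_target
    c_det = c_miss * p_miss * p_target + c_fa * p_fa * p_non
    return c_det / min(c_miss * p_target, c_fa * p_non)
```

Under the defaults above, a system that misses everything (p_miss = 1, p_fa = 0) scores exactly 1.0, matching the normalisation by the trivial system.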
4.3 Methods

One simple but effective method is the "GAC-INCR" clustering method [9] with cosine similarity metric and TFIDF term weighting, which has remained the top-performing system in the TDT 2002 and 2003 official evaluations. For this method the novelty confidence score is one minus the similarity between the cluster of the current document $x_i$ and its nearest-neighbor cluster: $s(x_i) = 1.0 - \max_{j<i} \mathrm{sim}(c_i, c_j)$, where $c_i$ and $c_j$ are the clusters that $x_i$ and $x_j$ are assigned to, respectively, and the similarity is the cosine similarity between the two cluster vectors, with the ltc TFIDF term weighting scheme used to scale each dimension of the vectors. Our second method trains a logistic regression model that combines multiple features generated by the GAC-INCR method. These features include not only the similarity score used by the first method, but also the size of the nearest cluster, the time difference between the current cluster and the nearest cluster, etc. We call this method "Logistic Regression", and use the posterior probability $p(\mathrm{novelty} \mid x_i)$ as the confidence score. Finally, for our online clustering algorithm we use $s(x_i) = \log p(C_0 \mid x_i)$ as the confidence score.

4.4 Experimental Results

Our results for the three methods are listed in Table 1, where both macro-averaged and micro-averaged minimum normalized costs are reported^6. The GAC-INCR method performs very well, as does the logistic regression method. For our DP results, we observed that using the optimized $\hat\gamma$ gives results (not listed in the table) around 10% worse than using the $\gamma$ obtained through validation, which might be due to the flatness of the optimal function value as well as the sample bias of the clusters in the historical dataset^7. Another observation is that the probabilistic decision does not actually improve on the hard decision, especially for the $\lambda_{var}$ option.
Generally speaking, our DP methods are comparable to the other two methods, especially in terms of the topic-weighted measure.

Table 1: Results for novelty detection on the TDT3 corpus

Method               | Topic-weighted Cost (Miss, FA) | Story-weighted Cost (Miss, FA)
GAC-INCR             | 0.6945 (0.5614, 0.0272)        | 0.7090 (0.5614, 0.0301)
Logistic Regression  | 0.7027 (0.5732, 0.0264)        | 0.6911 (0.5732, 0.0241)
DP with λfix, HD     | 0.7054 (0.4737, 0.0473)        | 0.7744 (0.5965, 0.0363)
DP with λvar, HD     | 0.6901 (0.5789, 0.0227)        | 0.7541 (0.5789, 0.0358)
DP with λfix, PD     | 0.7054 (0.4737, 0.0473)        | 0.7744 (0.5965, 0.0363)
DP with λvar, PD     | 0.9025 (0.8772, 0.0052)        | 0.9034 (0.8772, 0.0053)

^6 In the TDT official evaluation there is also the DET curve, which is similar in spirit to the ROC curve and reflects how performance changes as the threshold varies. We will report those results in a longer version of this paper.

^7 It is known that the cluster labeling process of the LDC is biased toward topics covered in multiple languages rather than a single language.

5 Related Work

Zaragoza et al. [11] applied a Bayesian Dirichlet-multinomial model to the ad hoc information retrieval task and showed that it is comparable to other smoothed language models. Blei et al. [3] used Chinese Restaurant Processes to model topic hierarchies for a collection of documents. West et al. [8] discussed sampling techniques for the base distribution parameters in the Dirichlet process mixture model.

6 Conclusions and Future Work

In this paper we used a hierarchical probabilistic model for online document clustering. We modeled the generation of new clusters with a Dirichlet process mixture model, where the base distribution can be treated as the prior of a general English model and the precision parameter is closely related to the generation rate of new clusters. Model parameters are estimated with empirical Bayes and validation over the historical dataset.
Our model is evaluated on the TDT novelty detection task, and the results show that our method is promising. In future work we would like to investigate other ways of estimating parameters and to use sampling methods to revisit previous cluster assignments. We would also like to apply our model to the retrospective detection task in TDT, where systems do not need to make decisions online. Despite its simplicity, the unigram multinomial model has a well-known limitation: the naive assumption of word independence. We therefore also plan to explore richer but still tractable language models in this framework. Meanwhile, we would like to combine this model with the topic-conditioned framework [10] as well as incorporate a hierarchical mixture model, so that novelty detection is conditioned on a topic modeled by either supervised or semi-supervised learning techniques.

References

[1] The 2002 topic detection & tracking task definition and evaluation plan. http://www.nist.gov/speech/tests/tdt/tdt2002/evalplan.htm, 2002.
[2] Allan, J., Lavrenko, V. & Jin, H. First story detection in TDT is hard. In Proc. of CIKM 2000.
[3] Blei, D., Griffiths, T., Jordan, M. & Tenenbaum, J. Hierarchical topic models and the nested Chinese restaurant process. Advances in Neural Information Processing Systems, 15, 2003.
[4] Ferguson, T. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1:209–230, 1973.
[5] Gelman, A., Carlin, J., Stern, H. & Rubin, D. Bayesian Data Analysis (2nd ed.). Chapman & Hall/CRC, 2003.
[6] Miller, D., Leek, T. & Schwartz, R. BBN at TREC 7: Using hidden Markov models for information retrieval. In TREC-7, 1999.
[7] Minka, T. A family of algorithms for approximate Bayesian inference. Ph.D. thesis, MIT, 2001.
[8] West, M., Mueller, P. & Escobar, M.D. Hierarchical priors and mixture models, with application in regression and density estimation. In Aspects of Uncertainty: A tribute to D. V. Lindley, A.F.M. Smith and P.
Freeman (eds.), Wiley, New York.
[9] Yang, Y., Pierce, T. & Carbonell, J. A study on retrospective and on-line event detection. In Proc. of SIGIR 1998.
[10] Yang, Y., Zhang, J., Carbonell, J. & Jin, C. Topic-conditioned novelty detection. In Proc. of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2002.
[11] Zaragoza, H., Hiemstra, D., Tipping, M. & Robertson, S. Bayesian extension to the language model for ad hoc information retrieval. In Proc. of SIGIR 2003.
VDCBPI: an Approximate Scalable Algorithm for Large POMDPs Pascal Poupart Department of Computer Science University of Toronto Toronto, ON M5S 3H5 ppoupart@cs.toronto.edu Craig Boutilier Department of Computer Science University of Toronto Toronto, ON M5S 3H5 cebly@cs.toronto.edu Abstract Existing algorithms for discrete partially observable Markov decision processes can at best solve problems of a few thousand states due to two important sources of intractability: the curse of dimensionality and the policy space complexity. This paper describes a new algorithm (VDCBPI) that mitigates both sources of intractability by combining the Value Directed Compression (VDC) technique [13] with Bounded Policy Iteration (BPI) [14]. The scalability of VDCBPI is demonstrated on synthetic network management problems with up to 33 million states. 1 Introduction Partially observable Markov decision processes (POMDPs) provide a natural and expressive framework for decision making, but their use in practice has been limited by the lack of scalable solution algorithms. Two important sources of intractability plague discrete model-based POMDPs: high dimensionality of belief space, and the complexity of policy or value function (VF) space. Classic solution algorithms [4, 10, 7], for example, compute value functions represented by exponentially many value vectors, each of exponential size. As a result, they can only solve POMDPs with on the order of 100 states. Consequently, much research has been devoted to mitigating these two sources of intractability. The complexity of policy/VF space has been addressed by observing that there are often very good policies whose value functions are representable by a small number of vectors. 
Various algorithms such as approximate vector pruning [9], point-based value iteration (PBVI) [12, 16], bounded policy iteration (BPI) [14], gradient ascent (GA) [11, 1] and stochastic local search (SLS) [3] exploit this fact to produce (often near-optimal) policies of low complexity (i.e., few vectors) allowing larger POMDPs to be solved. Still these scale to problems of only roughly 1000 states, since each value vector may still have exponential dimensionality. Conversely, it has been observed that belief states often carry more information than necessary. Hence, one can often reduce vector dimensionality by using compact representations such as decision trees (DTs) [2], algebraic decision diagrams (ADDs) [8, 9], or linear combinations of small basis functions (LCBFs) [6], or by indirectly compressing the belief space into a small subspace by a value-directed compression (VDC) [14] or exponential PCA [15]. Once compressed, classic solution methods can be used. However, since none of these approaches address the exponential complexity of policy/VF space, they can only solve slightly larger POMDPs (up to 8250 states [15]). Scalable POMDP algorithms can only be realized when both sources of intractability are tackled simultaneously. While Hansen and Feng [9] implemented such an algorithm by combining approximate state abstraction with approximate vector pruning, they didn’t demonstrate the scalability of the approach on large problems. In this paper, we describe how to combine value directed compression (VDC) with bounded policy iteration (BPI) and demonstrate the scalability of the resulting algorithm (VDCBPI) on synthetic network management problems of up to 33 million states. Among the techniques that deal with the curse of dimensionality, VDC offers the advantage that the compressed POMDP can be directly fed into existing POMDP algorithms with no (or only slight) adjustments. 
This is not the case for exponential PCA, nor for compact representations (DTs, ADDs, LCBFs). Among algorithms that mitigate policy space complexity, BPI distinguishes itself by its ability to avoid local optima (cf. GA), its efficiency (cf. SLS) and the fact that belief state monitoring is not required (cf. PBVI, approximate vector pruning). Beyond the combination of VDC with BPI, we offer two other contributions. We propose a simple new heuristic to compute good lossy value-directed compressions. We also augment BPI with the ability to bias its policy search toward reachable belief states. As a result, BPI can often find a much smaller policy of similar quality for a given initial belief state.

2 POMDP Background

A POMDP is defined by: states $S$; actions $A$; observations $Z$; a transition function $T$, where $T(s, a, s')$ denotes $\Pr(s' \mid s, a)$; an observation function $Z$, where $Z(a, s, z)$ denotes $\Pr(z \mid s, a)$, the probability of observation $z$ in state $s$ after executing $a$; and a reward function $R$, where $R(s, a)$ is the immediate reward associated with $s$ when executing $a$. We assume discrete state, action and observation sets and focus on discounted, infinite horizon POMDPs with discount factor $0 \le \gamma < 1$. Policies and value functions for POMDPs are typically defined over belief space $B$, where a belief state $b$ is a distribution over $S$ capturing the agent's knowledge about the current state of the world. Belief state $b$ can be updated in response to a specific action-observation pair $\langle a, z \rangle$ using Bayes rule. We denote the (unnormalized) belief update mapping by $T^{a,z}$, where $T^{a,z}_{ij} = \Pr(s_j \mid s_i, a)\, \Pr(z \mid s_j, a)$. A factored POMDP, with exponentially many states, thus gives rise to a belief space of exponential dimensionality. Policies represented by finite state controllers (FSCs) are defined by a (possibly cyclic) directed graph $\pi = \langle N, E \rangle$, where nodes $n \in N$ correspond to stochastic action choices and edges $e \in E$ to stochastic transitions.
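The Bayes-rule belief update via $T^{a,z}$ described above can be sketched as follows (illustrative code, not from the paper):

```python
def belief_update(b, a, z, trans, obs):
    """Unnormalised update b' = b T^{a,z}, followed by normalisation.
    trans[a][i][j] = Pr(s_j | s_i, a); obs[a][j][z] = Pr(z | s_j, a).
    Returns the new belief and Pr(z | b, a), the normalising constant."""
    n = len(b)
    bp = [sum(b[i] * trans[a][i][j] for i in range(n)) * obs[a][j][z]
          for j in range(n)]
    pz = sum(bp)
    return [x / pz for x in bp], pz
```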
An FSC can be viewed as a policy $\pi = \langle \alpha, \beta \rangle$, where the action strategy $\alpha$ associates each node $n$ with a distribution over actions $\alpha(n) = \Pr(a \mid n)$, and the observation strategy $\beta$ associates each node $n$ and observation $z$ with a distribution over successor nodes $\beta(n, z) = \Pr(n' \mid n, z)$ (corresponding to the edge from $n$ labeled with $z$). The value function $V^\pi$ of FSC $\pi$ is given by:

$$V^\pi(n, s) = \sum_a \Pr(a \mid n)\Big[ R(s, a) + \gamma \sum_{s', z} \Pr(s' \mid s, a)\, \Pr(z \mid s', a) \sum_{n'} \Pr(n' \mid n, z)\, V^\pi(n', s') \Big] \qquad (1)$$

The value $V(n, b)$ of each node $n$ is thus linear w.r.t. the belief state; hence the value function of the controller is piecewise-linear and convex. The optimal value function $V^*$ often has a large (if not infinite) number of vectors, each corresponding to a different node. The optimal value function $V^*$ satisfies Bellman's equation:

$$V^*(b) = \max_a R(b, a) + \gamma \sum_z \Pr(z \mid b, a)\, V^*(b^a_z) \qquad (2)$$

Table 1: LP to uniformly improve the value function of a node.

max $\epsilon$ s.t.
$V(n, s) + \epsilon \le \sum_a \big[\Pr(a \mid n) R(s, a) + \gamma \sum_{s', z} \Pr(s' \mid s, a) \Pr(z \mid s', a) \sum_{n'} \Pr(a, n' \mid n, z)\, V(n', s')\big]$, $\forall s$
$\sum_a \Pr(a \mid n) = 1$; $\sum_{n'} \Pr(a, n' \mid n, z) = \Pr(a \mid n)$, $\forall a, z$
$\Pr(a \mid n) \ge 0$, $\forall a$; $\Pr(a, n' \mid n, z) \ge 0$, $\forall a, n', z$

Table 2: LP to improve the value function of a node in a non-uniform way according to the steady state occupancy $o(s, n)$.

max $\sum_{s,n} o(s, n)\, \epsilon_{s,n}$ s.t.
$V(n, s) + \epsilon_{s,n} \le \sum_a \big[\Pr(a \mid n) R(s, a) + \gamma \sum_{s', z} \Pr(s' \mid s, a) \Pr(z \mid s', a) \sum_{n'} \Pr(a, n' \mid n, z)\, V(n', s')\big]$, $\forall s$
$\sum_a \Pr(a \mid n) = 1$; $\sum_{n'} \Pr(a, n' \mid n, z) = \Pr(a \mid n)$, $\forall a, z$
$\Pr(a \mid n) \ge 0$, $\forall a$; $\Pr(a, n' \mid n, z) \ge 0$, $\forall a, n', z$

3 Bounded Policy Iteration

We briefly review the bounded policy iteration (BPI) algorithm (see [14] for details) and describe a simple extension to bias its search toward reachable belief states. BPI incrementally constructs an FSC by alternating policy improvement and policy evaluation. Unlike policy iteration [7], it does so by slowly increasing the number of nodes (and value vectors).
The policy improvement step greedily improves each node $n$ by optimizing its action and observation strategies via the linear program (LP) in Table 1. This LP uniformly maximizes the improvement $\epsilon$ in the value function by optimizing $n$'s distributions $\Pr(a, n' \mid n, z)$. The policy evaluation step computes the value function of the current controller by solving Eq. 1. The algorithm monotonically improves the policy until convergence to a local optimum, at which point new nodes are introduced to escape the local optimum. BPI is guaranteed to converge to a policy that is optimal at the "tangent" belief states while slowly growing the size of the controller [14]. In practice, we often wish to find a policy suitable for a given initial belief state. Since only a small subset of belief space is typically reachable, it is generally possible to construct much smaller policies tailored to the reachable region. We now describe a simple way to bias BPI's efforts toward the reachable region. Recall that the LP in Table 1 optimizes the parameters of a node to uniformly improve its value at all belief states. We propose a new LP (Table 2) that weighs the improvement by the (unnormalized) discounted occupancy distribution induced by the current policy. This accounts for the belief states reachable from the node by aggregating them together. The (unnormalized) discounted occupancy distribution is given by:

$$o(s', n') = b_0(s', n') + \gamma \sum_{s, a, z, n} o(s, n)\, \Pr(a \mid n)\, \Pr(s' \mid s, a)\, \Pr(z \mid s', a)\, \Pr(n' \mid n, z) \qquad \forall s', n'$$

The LP in Table 2 is obtained by introducing a variable $\epsilon_{s,n}$ for each $s$, replacing the objective $\epsilon$ by $\sum_{s,n} o(s, n)\, \epsilon_{s,n}$, and replacing $\epsilon$ in each constraint by the corresponding $\epsilon_{s,n}$. When using the modified LP, BPI naturally tries to improve the policy at the reachable belief states before the others. Since the modification ensures that the value function doesn't decrease at any belief state, focusing the effort on reachable belief states won't decrease policy value at other belief states.
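The occupancy distribution above is the fixed point of a linear system and can be approximated by simple iteration. A sketch over a flattened (state, node) index (illustrative, not the authors' code):

```python
def discounted_occupancy(b0, P, gamma, iters=500):
    """Iterate o <- b0 + gamma * o P to its fixed point, where P[k][k2]
    is the one-step probability of moving from (state, node) pair k to
    pair k2 under the current controller (indices flattened)."""
    n = len(b0)
    o = list(b0)
    for _ in range(iters):
        o = [b0[j] + gamma * sum(o[i] * P[i][j] for i in range(n))
             for j in range(n)]
    return o
```

For $\gamma < 1$ the iteration is a contraction, so the loop converges geometrically to the unique fixed point.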
Furthermore, though the policy is initially biased toward reachable states, BPI will eventually improve the policy for all belief states.

Figure 1: Functional flow of a POMDP (dotted arrows) and a compressed POMDP (solid arrows).

4 Value-Directed Compression

We briefly review the sufficient conditions for a lossless compression of POMDPs [13] and describe a simple new algorithm to obtain good lossy compressions. Belief states constitute a sufficient statistic summarizing all information available to the decision maker (i.e., past actions and observations). However, as long as enough information is available to evaluate the value of each policy, one can still choose the best policy. Since belief states often contain information irrelevant to the estimation of future rewards, one can often compress belief states into some lower-dimensional representation. Let $f$ be a compression function that maps each belief state $b$ into some lower-dimensional compressed belief state $\tilde b$ (see Figure 1). Here $\tilde b$ can be viewed as a bottleneck that filters the information contained in $b$ before it is used to estimate future rewards. We desire a compression $f$ such that $\tilde b$ is the smallest statistic sufficient for accurately predicting the current reward $r$ as well as the next compressed belief state $\tilde b'$ (since $\tilde b'$ captures all the information in $b'$ necessary to accurately predict subsequent rewards). Such a compression $f$ exists if we can also find compressed transition dynamics $\tilde T^{a,z}$ and a compressed reward function $\tilde R$ such that:

$$R = \tilde R \circ f \quad \text{and} \quad f \circ T^{a,z} = \tilde T^{a,z} \circ f \qquad \forall a \in A,\; z \in Z \qquad (3)$$

Given $f$, $\tilde R$ and $\tilde T^{a,z}$ satisfying Eq. 3, we can evaluate any policy $\pi$ using the compressed POMDP dynamics to obtain $\tilde V^\pi$. Since $V^\pi = \tilde V^\pi \circ f$, the compressed POMDP is equivalent to the original. When restricting $f$ to be linear (represented by a matrix $F$), we can rewrite Eq.
3 as:

$$R = F \tilde R \quad \text{and} \quad T^{a,z} F = F \tilde T^{a,z} \qquad \forall a \in A,\; z \in Z \qquad (4)$$

That is, the column space of $F$ spans $R$ and is invariant w.r.t. each $T^{a,z}$. Hence, the columns of the best linear lossless compression mapping $F$ form a basis for the smallest invariant subspace (w.r.t. each $T^{a,z}$) that spans $R$, i.e., the Krylov subspace. We can find the columns of $F$ by Krylov iteration: multiplying $R$ by each $T^{a,z}$ until the newly generated vectors are linear combinations of previous ones.^1 The dimensionality of the compressed space is equal to the number of columns of $F$, which is necessarily smaller than or equal to the dimensionality of the original belief space. Once $F$ is found, we can compute $\tilde R$ and each $\tilde T^{a,z}$ by solving the system in Eq. 4. Since linear lossless compressions are not always possible, we extend the technique of [13] to find good lossy compressions by stopping the Krylov iteration early: we retain only the vectors that are "far" from being linear combinations of prior vectors. For instance, if $v$ is a linear combination of $v_1, v_2, \ldots, v_n$, then there are coefficients $c_1, c_2, \ldots, c_n$ such that the error $\|v - \sum_i c_i v_i\|_2$ is zero. Given a threshold $\epsilon$, or some upper bound $k$ on the desired number of columns of $F$, we run Krylov iteration retaining only the vectors with an error greater than $\epsilon$, or the $k$ vectors with largest error. When $F$ is computed by approximate Krylov iteration, we cannot compute $\tilde R$ and $\tilde T^{a,z}$ by solving the linear system in Eq. 4: due to the lossy nature of the compression, the system is overconstrained. But we can find suitable $\tilde R$ and $\tilde T^{a,z}$ by computing a least-squares approximation, solving:

$$F^\top R = F^\top F \tilde R \quad \text{and} \quad F^\top T^{a,z} F = F^\top F \tilde T^{a,z} \qquad \forall a \in A,\; z \in Z$$

While compression is required when the dimensionality of belief space is too large, unfortunately the columns of $F$ have that same dimensionality.

^1 For numerical stability, one must orthogonalize each vector before multiplying by $T^{a,z}$.
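The Krylov iteration itself is a Gram-Schmidt-style loop. The sketch below is illustrative only: it orthonormalises the basis and ignores the nonnegativity and column-scaling issues discussed in Section 5. It grows a basis for the smallest subspace containing $R$ that is invariant under every $T^{a,z}$:

```python
def krylov_basis(r, maps, tol=1e-8):
    """Grow an orthonormal basis for the smallest subspace that contains
    the reward vector r and is invariant under every matrix in `maps`
    (the T^{a,z}); raising `tol` stops early, giving a lossy compression."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))

    def orthogonal_part(v, basis):
        for b in basis:
            c = dot(v, b)
            v = [x - c * y for x, y in zip(v, b)]
        return v

    nrm = dot(r, r) ** 0.5
    basis = [[x / nrm for x in r]]
    frontier = list(basis)
    while frontier:
        new = []
        for v in frontier:
            for T in maps:  # multiply by each T^{a,z}
                w = [sum(T[i][j] * v[j] for j in range(len(v)))
                     for i in range(len(T))]
                res = orthogonal_part(w, basis)
                sz = dot(res, res) ** 0.5
                if sz > tol:  # keep vectors far from the current span
                    res = [x / sz for x in res]
                    basis.append(res)
                    new.append(res)
        frontier = new
    return basis  # the columns of F
```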
Factored POMDPs of exponential dimension can, however, admit practical Krylov iteration if it is carried out using a compact representation (e.g., DTs or ADDs) to efficiently compute $F$, $\tilde R$ and each $\tilde T^{a,z}$.

5 Bounded Policy Iteration with Value-Directed Compression

In principle, any POMDP algorithm can be used to solve the compressed POMDPs produced by VDC. If the compression is lossless and the POMDP algorithm exact, the computed policy will be optimal for the original POMDP. In practice, POMDP algorithms are usually approximate and lossless compressions are not always possible, so care must be taken to ensure numerical stability and a policy of high quality for the original POMDP. We now discuss some of the integration issues that arise when combining VDC with BPI. Since $V = F\tilde V$, maximizing the compressed value vector $\tilde V$ of some node $n$ automatically maximizes the value $V$ of $n$ w.r.t. the original POMDP when $F$ is nonnegative; hence it is essential that $F$ be nonnegative. Otherwise, the optimal policy of the compressed POMDP may not be optimal for the original POMDP. Fortunately, when $R$ is nonnegative, $F$ is guaranteed to be nonnegative by the nature of Krylov iteration. If some rewards are negative, we can add a sufficiently large constant to $R$ to make it nonnegative without changing the decision problem. Since most algorithms, including BPI, compute approximately optimal policies, it is also critical to normalize the columns of $F$. Suppose $F$ has two columns $f_1$ and $f_2$ with $L_1$ lengths 1 and 100, respectively. Since $V = F\tilde V = \tilde v_1 f_1 + \tilde v_2 f_2$, changes in $\tilde v_2$ have a much greater impact on $V$ than changes in $\tilde v_1$. Such a difference in sensitivity may bias the search for a good policy toward an undesirable region of the belief space, or may even cause the algorithm to return a policy that is far from optimal for the original POMDP despite being $\epsilon$-optimal for the compressed POMDP.
We note that it is "safer" to evaluate policies iteratively by successive approximation than by solving the system in Eq. 1. By definition, the transition matrices T^{a,z} have eigenvalues with magnitude ≤ 1. In contrast, lossy compressed transition matrices T̃^{a,z} are not guaranteed to have this property. Hence, solving the system in Eq. 1 may not correspond to policy evaluation, and for lossy compressions it is safer to evaluate policies by successive approximation. Finally, several algorithms, including BPI, compute witness belief states to verify the dominance of a value vector. Since the compressed belief space B̃ is different from the original belief space B, this must be approached with care. B is a simplex corresponding to the convex hull of the state points. In contrast, since each row vector of F is the compressed version of some state point, B̃ corresponds to the convex hull of the row vectors of F. When F is nonnegative, it is often possible to ignore this difference. For instance, when verifying the dominance of a value vector, if there is a compressed witness b̃, there is always an uncompressed witness b, but not vice versa. This means that we can properly identify all dominating value vectors, but we may erroneously classify a dominated vector as dominating. In practice, this doesn't impact the correctness of algorithms such as policy iteration, bounded policy iteration, incremental pruning, the witness algorithm, etc., but it will slow them down since they won't be able to prune as many value vectors as possible.
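The successive-approximation alternative to directly solving Eq. 1 can be sketched as follows (an illustrative single-matrix setting; the stopping tolerance and iteration cap are arbitrary choices, not from the paper):

```python
import numpy as np

def evaluate_successive(R, T, gamma=0.95, tol=1e-12, max_iter=100000):
    """Policy evaluation by successive approximation of V = R + gamma*T V.
    With a lossy-compressed T whose spectral radius may exceed 1,
    directly solving (I - gamma*T) V = R can return a vector unrelated
    to iterated policy evaluation; iterating is the safer route."""
    V = np.zeros_like(R, dtype=float)
    for _ in range(max_iter):
        V_new = R + gamma * (T @ V)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V
```

For a well-behaved (stochastic) T both routes agree, which is the sanity check used below.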
[Figure 2 here: nine surface plots of Expected Rewards (and, bottom right, running time in thousands of seconds) as a function of # of nodes (20-120) and # of basis fns (50-250), for the cycle16, cycle19, cycle22, cycle25, 3legs16, 3legs19, 3legs22 and 3legs25 networks.] Figure 2: Experimental results for cycle and 3legs network configurations of 16, 19, 22 and 25 machines. The bottom right graph shows the running time of BPI on compressed versions of a cycle network of 25 machines.

Table 3: Comparison of the best policies achieved by VDCBPI to the doNothing and heuristic policies.

             3legs                              cycle
             16      19      22      25        16      19      22      25
VDCBPI       120.9   137.0   151.0   164.8     103.9   121.3   134.3   151.4
heuristic    100.6   118.3   138.3   152.3     102.5   117.9   130.2   152.3
doNothing     98.4   112.9   133.5   147.1      91.6   105.4   122.0   140.1

The above tips work well when VDC is integrated with BPI. We believe they are sufficient to ensure proper integration of VDC with other POMDP algorithms, though we haven't verified this empirically. 6 Experiments We report on experiments with VDCBPI on some synthetic network management problems similar to those introduced in [5]. A system administrator (SA) maintains a network of machines.
Each machine has a 0.1 probability of failing at any stage, but this increases to 0.333 when a neighboring machine is down. The SA receives a reward of 1 per working machine and 2 per working server. At each stage, she can either reboot a machine, ping a machine or do nothing. She only observes the status of a machine (with 0.95 accuracy) if she reboots or pings it. Costs are 2.5 (rebooting), 0.1 (pinging), and 0 (doing nothing). An n-machine network induces a POMDP with 2^n states, 2n + 1 actions and 2 observations. We experimented with networks of 16, 19, 22 and 25 machines organized in two configurations: cycle (a ring) and 3legs (a tree of 3 branches joined at the root). Figure 2 shows the average expected reward earned by policies computed by BPI after the POMDP has been compressed by VDC. Results are averaged over 500 runs of 60 steps, starting with a belief state where all machines are working.2 As expected, decision quality increases as we increase the number of nodes used in BPI and basis functions used in VDC. Also interesting are some of the jumps in the reward surface of some graphs, suggesting phase transitions w.r.t. the dimensionality of the compression. The bottom right graph in Fig. 2 shows the time taken by BPI on a cycle network of 25 machines (other problems exhibit similar behavior). VDC takes from 4902s to 12408s (depending on size and configuration) to compress POMDPs to 250 dimensions.3 In Table 3 we compare the value of the best policy with fewer than 120 nodes found by VDCBPI to two other simple policies. The doNothing policy lets the network evolve without any rebooting or pinging. The heuristic policy estimates at each stage the probability of failure4 of each machine and reboots the machine most likely to be down if its failure probability is greater than threshold p1, or pings it if greater than threshold p2.
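The threshold heuristic can be sketched as follows (a minimal sketch; the function name, return convention, and the assumption that per-machine failure probabilities come from approximate belief monitoring are illustrative):

```python
import numpy as np

def heuristic_action(fail_prob, p1=0.8, p2=0.15):
    """Threshold heuristic: given estimated per-machine failure
    probabilities, reboot the machine most likely to be down if its
    probability exceeds p1, otherwise ping it if it exceeds p2,
    otherwise do nothing. Returns (action, machine index or None)."""
    m = int(np.argmax(fail_prob))
    if fail_prob[m] > p1:
        return "reboot", m
    if fail_prob[m] > p2:
        return "ping", m
    return "nothing", None
```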
Settings of p1 = 0.8 and p2 = 0.15 were used.5 This heuristic policy performs very well and therefore offers a strong competitor to VDCBPI. But it is possible to do better than the heuristic policy by optimizing the choice of the machine that the SA may reboot or ping. Since a machine is more likely to fail when neighboring machines are down, it is sometimes better to choose (for reboot) a machine surrounded by working machines. However, since the SA doesn't exactly know which machines are up or down due to partial observability, such a tradeoff is difficult to evaluate and sometimes not worthwhile. With a sufficient number of nodes and basis functions, VDCBPI outperforms the heuristic policy on the 3legs networks and matches it on the cycle networks. This is quite remarkable given the fact that belief states were compressed to 250 dimensions or less, compared to the original dimensionality ranging from 65,536 to 33,554,432. 7 Conclusion We have described a new POMDP algorithm that mitigates both high belief space dimensionality and policy/VF complexity. By integrating value-directed compression with bounded policy iteration, we can solve synthetic network management POMDPs of 33 million states (3 orders of magnitude larger than previously solved discrete POMDPs). Note that the scalability of VDCBPI is problem dependent; however, we hope that new, scalable, approximate POMDP algorithms such as VDCBPI will allow POMDPs to be used to model real-world problems, with the expectation that they can be solved effectively. [Footnotes: 2 The ruggedness of the graphs is mainly due to the variance in the reward samples. 3 Reported running times are the cpu time measured on 3GHz Linux machines. 4 Due to the large state space, approximate monitoring was performed by factoring the joint. 5 These values were determined through enumeration of all threshold combinations in increments of 0.05, choosing the best for 25-machine problems.]
We also described several improvements to the existing VDC and BPI algorithms. Although VDC offers the advantage that any existing solution algorithm can be used to solve compressed POMDPs, it would be interesting to combine BPI or PBVI with a factored representation such as DTs or ADDs, allowing one to directly solve large scale POMDPs without recourse to an initial compression. Beyond policy space complexity and high dimensional belief spaces, further research will be necessary to deal with exponentially large action and observation spaces. References [1] D. Aberdeen and J. Baxter. Scaling internal-state policy-gradient methods for POMDPs. Proc. of the Nineteenth Intl. Conf. on Machine Learning, pp.3–10, Sydney, Australia, 2002. [2] C. Boutilier and D. Poole. Computing optimal policies for partially observable decision processes using compact representations. Proc. AAAI-96, pp.1168–1175, Portland, OR, 1996. [3] D. Braziunas and C. Boutilier. Stochastic local search for POMDP controllers. Proc. AAAI-04, to appear, San Jose, CA, 2004. [4] A. R. Cassandra, M. L. Littman, and N. L. Zhang. Incremental pruning: A simple, fast, exact method for POMDPs. Proc. UAI-97, pp.54–61, Providence, RI, 1997. [5] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. Proc. IJCAI-01, pp.673–680, Seattle, WA, 2001. [6] C. Guestrin, D. Koller, and R. Parr. Solving factored POMDPs with linear value functions. IJCAI-01 Wkshp. on Planning under Uncertainty and Incomplete Information, Seattle, 2001. [7] E. A. Hansen. Solving POMDPs by searching in policy space. Proc. UAI-98, pp.211–219, Madison, Wisconsin, 1998. [8] E. A. Hansen and Z. Feng. Dynamic programming for POMDPs using a factored state representation. Proc. AIPS-2000, pp.130–139, Breckenridge, CO, 2000. [9] E. A. Hansen and Z. Feng. Approximate planning for factored POMDPs. Proc. ECP-2001, Toledo, Spain, 2000. [10] L. P. Kaelbling, M. Littman, and A. R. Cassandra. 
Planning and acting in partially observable stochastic domains. Artif. Intel., 101:99–134, 1998. [11] N. Meuleau, L. Peshkin, K. Kim, and L. P. Kaelbling. Learning finite-state controllers for partially observable environments. Proc. UAI-99, pp.427–436, Stockholm, 1999. [12] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: an anytime algorithm for POMDPs. IJCAI-03, Acapulco, Mexico, 2003. [13] P. Poupart and C. Boutilier. Value-directed compressions of POMDPs. Advances in Neural Information Processing Systems, pp.1547–1554, Vancouver, Canada, 2002. [14] P. Poupart and C. Boutilier. Bounded finite state controllers. Advances in Neural Information Processing Systems, Vancouver, Canada, 2003. [15] N. Roy and G. Gordon. Exponential family PCA for belief compression in pomdps. Advances in Neural Information Processing Systems, pp.1635–1642, Vancouver, BC, 2002. [16] M. T. J. Spaan and N. Vlassis. A point-based pomdp algorithm for robot planning. IEEE Intl. Conf. on Robotics and Automation, to appear, New Orleans, 2004.
| 2004 | 100 | 2,509 |
A Generalized Bradley-Terry Model: From Group Competition to Individual Skill Tzu-Kuo Huang Chih-Jen Lin Department of Computer Science National Taiwan University Taipei 106, Taiwan Ruby C. Weng Department of Statistics National Chengchi University Taipei 116, Taiwan Abstract The Bradley-Terry model for paired comparison has been popular in many areas. We propose a generalized version in which paired individual comparisons are extended to paired team comparisons. We introduce a simple algorithm with convergence proofs to solve the model and obtain individual skill. A useful application to multi-class probability estimates using error-correcting codes is demonstrated. 1 Introduction The Bradley-Terry model [2] for paired comparisons has been broadly applied in many areas such as statistics, sports, and machine learning. It considers the model P(individual i beats individual j) = π_i / (π_i + π_j), (1) where π_i is the overall skill of the ith individual. Given k individuals and r_ij as the number of times that i beats j, an approximate skill p_i can be found by minimizing the negative log likelihood of the model (1): min_p l(p) = −Σ_{i<j} [ r_ij log (p_i / (p_i + p_j)) + r_ji log (p_j / (p_i + p_j)) ] subject to 0 ≤ p_i, i = 1, . . . , k, and Σ_{i=1}^k p_i = 1. (2) Thus, from paired comparisons, we can obtain individual performance. This model dates back to [14] and has been extended to more general settings. Some reviews are, for example, [5, 6]. Problem (2) can be solved by a simple iterative procedure: Algorithm 1 1. Start with any initial p_j^0 > 0, j = 1, . . . , k. 2. Repeat (t = 0, 1, . . .) a. Let s = (t mod k) + 1. For j = 1, . . . , k, define p_j^{t,n} ≡ (Σ_{i: i≠s} r_si) / (Σ_{i: i≠s} (r_si + r_is) / (p_s^t + p_i^t)) if j = s, and p_j^{t,n} ≡ p_j^t if j ≠ s. (3) b. Normalize p^{t,n} to be p^{t+1}. until ∂l(p^t)/∂p_j = 0, j = 1, . . . , k are satisfied. This algorithm is so simple that there is no need to use sophisticated optimization techniques. If r_ij > 0, ∀i, j, Algorithm 1 globally converges to the unique minimum of (2).
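A minimal sketch of Algorithm 1 (assuming a dense win-count matrix; the function name and the use of a fixed sweep budget in place of the stated gradient stopping rule are illustrative simplifications):

```python
import numpy as np

def bradley_terry(r, sweeps=500):
    """Algorithm 1: cyclic fixed-point updates (3) for model (1)-(2).
    r[i, j] = number of times individual i beats individual j."""
    k = r.shape[0]
    p = np.full(k, 1.0 / k)
    for _ in range(sweeps):
        for s in range(k):
            num = r[s].sum() - r[s, s]
            den = sum((r[s, i] + r[i, s]) / (p[s] + p[i])
                      for i in range(k) if i != s)
            p[s] = num / den        # update (3)
            p /= p.sum()            # normalization step (b)
    return p
```

With two individuals and r_12 = 2, r_21 = 1, the iterates converge to the closed-form MLE p = (2/3, 1/3).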
A systematic study of the convergence is in [9]. Several machine learning works have used the Bradley-Terry model; one application is to obtain multi-class probability estimates from pairwise coupling [8]. For any data instance x, if n_ij is the number of training data in the ith or jth class, and r_ij ≈ n_ij P(x in class i | x in class i or j) is available, solving (2) obtains the estimate of P(x in class i), i = 1, . . . , k. [13] tried to extend this algorithm to other multi-class settings such as "one-against-the rest" or "error-correcting coding," but did not provide a convergence proof. In Section 5.2 we show that the algorithm proposed in [13] indeed has some convergence problems. In this paper, we propose a generalized Bradley-Terry model where each comparison is between two disjoint subsets of subjects. Then, from the results of team competitions, we can approximate the skill of each individual. This model has many potential applications. For example, from records of tennis or badminton doubles (or singles and doubles combined), we may obtain the rank of all individuals. A useful application in machine learning is multi-class probability estimates using error-correcting codes. We then introduce a simple iterative method to solve the generalized model with a convergence proof. Experiments on multi-class probability estimates demonstrate the viability of the proposed model and algorithm. Due to space limitations, we omit all proofs in this paper. 2 Generalized Bradley-Terry Model We propose a generalized Bradley-Terry model where, using team competition results, we can approximate individual skill levels. Consider a group of k individuals: {1, . . . , k}. Two disjoint subsets I_i^+ and I_i^− form teams for games, and r_i ≥ 0 (r_i' ≥ 0) is the number of times that I_i^+ beats I_i^− (I_i^− beats I_i^+). Thus, we have I_i ⊂ {1, . . . , k}, i = 1, . . . , m, so that I_i = I_i^+ ∪ I_i^−, I_i^+ ≠ ∅, I_i^− ≠ ∅, and I_i^+ ∩ I_i^− = ∅.
Under the model that P(I_i^+ beats I_i^−) = (Σ_{j∈I_i^+} π_j) / (Σ_{j∈I_i^+} π_j + Σ_{j∈I_i^−} π_j) = (Σ_{j∈I_i^+} π_j) / (Σ_{j∈I_i} π_j), we can define q_i ≡ Σ_{j∈I_i} p_j, q_i^+ ≡ Σ_{j∈I_i^+} p_j, q_i^− ≡ Σ_{j∈I_i^−} p_j, and minimize the negative log likelihood min_p l(p) = −Σ_{i=1}^m ( r_i log(q_i^+/q_i) + r_i' log(q_i^−/q_i) ), (4) under the same constraints of (2). If I_i, i = 1, . . . , k(k−1)/2 are as the following: I_i^+ = {1}, I_i^− = {2} with r_i = r_12, r_i' = r_21; . . . ; I_i^+ = {k−1}, I_i^− = {k} with r_i = r_{k−1,k}, r_i' = r_{k,k−1}, then (4) goes back to (2). The difficulty of solving (4) over solving (2) is that now l(p) is expressed in terms of q_i^+, q_i^−, q_i but the real variable is p. The original Bradley-Terry model is a special case of other statistical models such as the log-linear or generalized linear model, so methods other than Algorithm 1 (e.g., iterative scaling and iterative weighted least squares) can also be used. However, (4) is not in a form of such models and hence these methods cannot be applied. We propose the following algorithm to solve (4). Algorithm 2 1. Start with p_j^0 > 0, j = 1, . . . , k and corresponding q_i^{0,+}, q_i^{0,−}, q_i^0, i = 1, . . . , m. 2. Repeat (t = 0, 1, . . .) a. Let s = (t mod k) + 1. For j = 1, . . . , k, define p_j^{t,n} ≡ [ (Σ_{i: s∈I_i^+} r_i/q_i^{t,+} + Σ_{i: s∈I_i^−} r_i'/q_i^{t,−}) / (Σ_{i: s∈I_i} (r_i + r_i')/q_i^t) ] p_j^t if j = s, and p_j^{t,n} ≡ p_j^t if j ≠ s. (5) b. Normalize p^{t,n} to p^{t+1}. c. Update q_i^{t,+}, q_i^{t,−}, q_i^t to q_i^{t+1,+}, q_i^{t+1,−}, q_i^{t+1}, i = 1, . . . , m. until ∂l(p^t)/∂p_j = 0, j = 1, . . . , k are satisfied. For the multiplicative factor in (5) to be well defined (i.e., non-zero denominator), we need Assumption 1, which will be discussed in Section 3. Eq. (5) is a simple fixed-point type update; in each iteration, only one component (i.e., p_s^t) is modified while the others remain the same. It is motivated from using a descent direction to strictly decrease l(p): If ∂l(p^t)/∂p_s ≠ 0, then (∂l(p^t)/∂p_s) · (p_s^{t,n} − p_s^t) = −(∂l(p^t)/∂p_s)² p_s^t / (Σ_{i: s∈I_i} (r_i + r_i')/q_i^t)
< 0, (6) where ∂l(p)/∂p_s = −Σ_{i: s∈I_i^+} r_i/q_i^+ − Σ_{i: s∈I_i^−} r_i'/q_i^− + Σ_{i: s∈I_i} (r_i + r_i')/q_i. Thus, p_s^{t,n} − p_s^t is a descent direction in optimization since a sufficiently small step along this direction guarantees the strict decrease of the function value. Since now we take the whole direction without searching for the step size, more effort is needed to prove the strict decrease in Lemma 1. However, (6) does hint that (5) is a reasonable update. Lemma 1 If p_s^t > 0 is the index to be updated and ∂l(p^t)/∂p_s ≠ 0, then l(p^{t+1}) < l(p^t). If we apply the update rule (5) to the pairwise model, [ (Σ_{i: i≠s} r_si/p_s^t) / (Σ_{i: i≠s} r_si/(p_s^t + p_i^t) + Σ_{i: i≠s} r_is/(p_s^t + p_i^t)) ] p_s^t = (Σ_{i: i≠s} r_si) / (Σ_{i: i≠s} (r_si + r_is)/(p_s^t + p_i^t)) and (5) goes back to (3). 3 Convergence of Algorithm 2 Any point satisfying ∂l(p)/∂p_j = 0, j = 1, . . . , k and the constraints of (4) is a stationary point of (4) (a stationary point means a Karush-Kuhn-Tucker (KKT) point for constrained optimization problems like (2) and (4); note that here ∂l(p)/∂p_j = 0 implies, and is more restricted than, the KKT condition). We will prove that Algorithm 2 converges to such a point. If it stops in a finite number of iterations, then ∂l(p)/∂p_j = 0, j = 1, . . . , k, which means a stationary point of (4) is already obtained. Thus, we only need to handle the case where {p^t} is an infinite sequence. As {p^t}_{t=0}^∞ is in a compact (i.e., closed and bounded) set {p | 0 ≤ p_j ≤ 1, Σ_{j=1}^k p_j = 1}, it has at least one convergent subsequence. Assume p* is one such convergent point. In the following we will prove that ∂l(p*)/∂p_j = 0, j = 1, . . . , k. To prove the convergence of a fixed-point type algorithm, we need that if p*_s > 0 and ∂l(p*)/∂p_s ≠ 0, then from p*_s we can use (5) to update it to p*_s^n ≠ p*_s. We thus make the following assumption to ensure that p*_s > 0 (see also Theorem 1). Assumption 1 For each j ∈ {1, . . . , k}, ∪_{i∈A} I_i = {1, . . . , k}, where A = {i | (I_i^+ = {j}, r_i > 0) or (I_i^− = {j}, r_i' > 0)}.
That is, each individual forms a winning (losing) team in some competitions which together involve all subjects. An issue left from Section 2 is whether the multiplicative factor in (5) is well defined. With Assumption 1 and initial p_j^0 > 0, j = 1, . . . , k, one can show by induction that p_j^t > 0, ∀t, and hence the denominator of (5) is never zero: If p_j^t > 0, Assumption 1 implies that Σ_{i: j∈I_i^+} r_i/q_i^{t,+} or Σ_{i: j∈I_i^−} r_i'/q_i^{t,−} is positive. Thus, both numerator and denominator in the multiplicative factor are positive, and so is p_j^{t+1}. If r_ij > 0, the original Bradley-Terry model satisfies Assumption 1. Whether or not the model satisfies the assumption, an easy way to fulfill it is to add an additional term −µ Σ_{s=1}^k log ( p_s / Σ_{j=1}^k p_j ) (7) to l(p), where µ is a small positive number. That is, for each s, we make an I_i = {1, . . . , k} with I_i^+ = {s}, r_i = µ, and r_i' = 0. As Σ_{j=1}^k p_j = 1 is one of the constraints, (7) reduces to −µ Σ_{s=1}^k log p_s, which is a barrier term in optimization to ensure that p_s does not go to zero. The property p*_s > 0 and the convergence of Algorithm 2 are in Theorem 1: Theorem 1 Under Assumption 1, any convergent point p* of Algorithm 2 satisfies p*_s > 0, s = 1, . . . , k and is a stationary point of (4). 4 Asymptotic Distribution of the Maximum Likelihood Estimator For the standard Bradley-Terry model, the asymptotic distribution of the MLE (i.e., p) has been discussed in [5]. In this section, we discuss the asymptotic distribution for the proposed estimator. To work on the real probability π, we define q̄_i ≡ Σ_{j∈I_i} π_j, q̄_i^+ ≡ Σ_{j∈I_i^+} π_j, q̄_i^− ≡ Σ_{j∈I_i^−} π_j, and consider n_i ≡ r_i + r_i' as a constant. Note that r_i ∼ BIN(n_i, q̄_i^+/q̄_i) is a random variable representing the number of times that I_i^+ beats I_i^− in n_i competitions. By defining, for s, t = 1, . . .
, k, λ_ss ≡ Var[∂l(π)/∂p_s] = Σ_{i: s∈I_i^+} n_i q̄_i^− / (q̄_i^+ q̄_i²) + Σ_{i: s∈I_i^−} n_i q̄_i^+ / (q̄_i^− q̄_i²), and λ_st ≡ Cov[∂l(π)/∂p_s, ∂l(π)/∂p_t] = Σ_{i: s,t∈I_i^+} n_i q̄_i^− / (q̄_i^+ q̄_i²) − Σ_{i: (s,t)∈I_i^+×I_i^−} n_i/q̄_i² − Σ_{i: (s,t)∈I_i^−×I_i^+} n_i/q̄_i² + Σ_{i: s,t∈I_i^−} n_i q̄_i^+ / (q̄_i^− q̄_i²), s ≠ t, we have the following theorem: Theorem 2 Let n be the total number of comparisons. If r_i is independent of r_j, ∀i ≠ j, then √n(p_1 − π_1), . . . , √n(p_{k−1} − π_{k−1}) have for large samples the multivariate normal distribution with zero means and dispersion matrix [λ'_st]^{−1}, where λ'_st = λ_st − λ_sk − λ_tk + λ_kk, s, t = 1, . . . , k − 1. 5 Application to Multi-class Probability Estimates Many classification methods are two-class based approaches and there are different ways to extend them for multi-class cases. Most existing studies focus on predicting class labels but not probability estimates. In this section, we discuss how the generalized Bradley-Terry model can be applied to multi-class probability estimates. Error-correcting coding [7, 1] is a general method to construct binary classifiers and combine them for multi-class prediction. It suggests some ways to construct I_i^+ and I_i^−; both are subsets of {1, . . . , k}. Then one trains a binary model using data from classes in I_i^+ (I_i^−) as positive (negative). Simple and commonly used methods such as "one-against-one" and "one-against-the rest" are its special cases. Given n_i the number of training data with classes in I_i = I_i^+ ∪ I_i^−, we assume here that for any data x, r_i ≈ n_i P(x in classes of I_i^+ | x in classes of I_i^+ or I_i^−) (8) is available, and the task is to approximate P(x in class s), s = 1, . . . , k. In the rest of this section we discuss the special case "one-against-the rest" and the earlier results in [13]. 5.1 Properties of the "One-against-the rest" Approach For this approach, I_i, i = 1, . . . , m are: I_i^+ = {1}, I_i^− = {2, . . . , k} with r_i = r_1, r_i' = 1 − r_1; I_i^+ = {2}, I_i^− = {1, 3, . . . , k} with r_i = r_2, r_i' = 1 − r_2; and so on.
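Given index sets like those just listed, the fixed-point update (5) of Algorithm 2 can be sketched generically as follows (a minimal sketch assuming explicit Python sets and dense numpy arrays; the function name and the fixed sweep budget are illustrative):

```python
import numpy as np

def generalized_bt(teams, r, rp, k, sweeps=500):
    """Algorithm 2: fixed-point update (5) for the generalized model (4).
    teams: list of (I_plus, I_minus) index sets; r[i] (rp[i]) counts
    wins of I_plus[i] over I_minus[i] (and vice versa)."""
    p = np.full(k, 1.0 / k)
    for _ in range(sweeps):
        for s in range(k):
            num = den = 0.0
            for (Ip, Im), ri, rpi in zip(teams, r, rp):
                qp = p[list(Ip)].sum()      # q_i^+
                qm = p[list(Im)].sum()      # q_i^-
                if s in Ip:
                    num += ri / qp
                elif s in Im:
                    num += rpi / qm
                else:
                    continue
                den += (ri + rpi) / (qp + qm)
            if den > 0:
                p[s] *= num / den           # update (5)
                p /= p.sum()                # normalization step (b)
    return p
```

On a single pairwise comparison ({1} vs {2}, two wins against one) the update reduces to (3) and recovers the pairwise solution (2/3, 1/3).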
Now n_1 = · · · = n_m = the total number of training data, so the solution to (4) is not affected by n_i. Thus, we remove it from (8), so r_i + r_i' = 1. As ∂l(p)/∂p_s = 0 becomes r_s/p_s + Σ_{j: j≠s} r_j'/(1 − p_j) = k, we have r_1/p_1 − (1 − r_1)/(1 − p_1) = · · · = r_k/p_k − (1 − r_k)/(1 − p_k) = k − Σ_{j=1}^k r_j'/(1 − p_j) = δ, where δ is a constant. These equalities provide another way to solve p, and p_s = ((1 + δ) − √((1 + δ)² − 4 r_s δ)) / (2δ). Note that ((1 + δ) + √((1 + δ)² − 4 r_s δ)) / (2δ) also satisfies the equalities, but it is negative when δ < 0, and greater than 1 when δ > 0. By solving Σ_{s=1}^k p_s = 1, we obtain δ and the optimal p. From the formula of p_s, if δ > 0, larger p_s implies smaller (1 + δ)² − 4 r_s δ and hence larger r_s. It is similar for δ < 0. Thus, the order of p_1, . . . , p_k is the same as that of r_1, . . . , r_k: Theorem 3 If r_s ≥ r_t, then p_s ≥ p_t. 5.2 The Approach in [13] for Error-Correcting Codes [13] was the first attempt to address the probability estimates using general error-correcting codes. By considering the same optimization problem (4), it proposes a heuristic update rule p_s^{t,n} ≡ [ (Σ_{i: s∈I_i^+} r_i + Σ_{i: s∈I_i^−} r_i') / (Σ_{i: s∈I_i^+} n_i q_i^{t,+}/q_i^t + Σ_{i: s∈I_i^−} n_i q_i^{t,−}/q_i^t) ] p_s^t, (9) but does not provide a convergence proof. For a fixed-point update, we expect that at the optimum, the multiplicative factor in (9) is one. However, unlike (5), when the factor is one, (9) does not relate to ∂l(p)/∂p_s = 0. In fact, a simple example shows that this algorithm may never converge. Taking the "one-against-the rest" approach, if we keep Σ_{i=1}^k p_i^t = 1 and assume n_i = 1, then r_i + r_i' = 1 and the factor in the update rule (9) is (r_s + Σ_{i: i≠s} r_i') / (p_s^t + Σ_{i: i≠s} (1 − p_i^t)) = (k − 1 + 2r_s − Σ_{i=1}^k r_i) / (k − 2 + 2p_s^t). If the algorithm converges and the factor approaches one, then p_s = (1 + 2r_s − Σ_{i=1}^k r_i)/2, but they may not satisfy Σ_{s=1}^k p_s = 1. Therefore, if in the algorithm we keep Σ_{i=1}^k p_i^t = 1 as [13] did, the factor may not approach one and the algorithm does not converge. More generally, if I_i = {1, . . .
, k}, ∀i, the algorithm may not converge. As q_i^t = 1, the condition that the factor equals one can be written as a linear equation of p. Together with Σ_{i=1}^k p_i = 1, there is an over-determined linear system (i.e., k + 1 equations and k variables). 6 Experiments on Multi-class Probability Estimates 6.1 Simulated Examples We consider the same settings in [8, 12] by defining three possible class probabilities: (a) p_1 = 1.5/k, p_j = (1 − p_1)/(k − 1), j = 2, . . . , k. (b) k_1 = k/2 if k is even, and (k + 1)/2 if k is odd; then p_1 = 0.95 × 1.5/k_1, p_i = (0.95 − p_1)/(k_1 − 1) for i = 2, . . . , k_1, and p_i = 0.05/(k − k_1) for i = k_1 + 1, . . . , k. (c) p_1 = 0.95 × 1.5/2, p_2 = 0.95 − p_1, and p_i = 0.05/(k − 2), i = 3, . . . , k. Classes are competitive in case (a), but only two dominate in case (c). We then generate r_i by adding some noise to q_i^+/q_i: r_i = min(max(ε, (q_i^+/q_i)(1 + 0.1 N(0, 1))), 1 − ε). Then r_i' = 1 − r_i. Here ε = 10^{−7} is used so that all r_i, r_i' are positive. We consider the four encodings used in [1] to generate I_i: 1. "1vs1": the pairwise approach (eq. (2)). 2. "1vsrest": the "one-against-the rest" approach in Section 5.1. 3. "dense": I_i = {1, . . . , k} for all i. I_i is randomly split into two equally-sized sets I_i^+ and I_i^−. [10 log_2 k] such splits are generated.2 Following [1], we repeat this procedure 100 times and select the one whose [10 log_2 k] splits have the smallest distance. 4. "sparse": I_i^+, I_i^− are randomly drawn from {1, . . . , k} with E(|I_i^+|) = E(|I_i^−|) = k/4. Then [15 log_2 k] such splits are generated. Similar to "dense," we repeat the procedure 100 times to find a good coding. Figure 1 shows averaged accuracy rates over 500 replicates for each of the four methods when k = 2², 2³, . . . , 2⁶. "1vs1" is good for (a) and (b), but suffers some losses in (c), where the class probabilities are highly unbalanced. [12] has observed this and proposed some remedies. "1vsrest" is quite competitive in all three scenarios.
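The simulated setup just described can be sketched as follows (a minimal sketch for case (a) and the noisy r_i generation; function names and the fixed random seed are illustrative):

```python
import numpy as np

def scenario_a(k):
    """Class probabilities for case (a): one mildly dominant class."""
    p = np.empty(k)
    p[0] = 1.5 / k
    p[1:] = (1.0 - p[0]) / (k - 1)
    return p

def noisy_r(p, I_plus, I_minus, eps=1e-7, rng=None):
    """r_i = clip((q_i^+ / q_i) * (1 + 0.1 * N(0, 1)), eps, 1 - eps);
    the caller then takes r_i' = 1 - r_i."""
    rng = np.random.default_rng(0) if rng is None else rng
    qp = p[list(I_plus)].sum()
    q = qp + p[list(I_minus)].sum()
    r = (qp / q) * (1.0 + 0.1 * rng.standard_normal())
    return float(np.clip(r, eps, 1.0 - eps))
```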
Furthermore, "dense" and "sparse" are less competitive in cases (a) and (b) when k is large. Due to the large |I_i^+| and |I_i^−|, the model is unable to single out a clear winner when probabilities are more balanced. We also analyze the (relative) mean square error (MSE) in Figure 2: MSE = (1/500) Σ_{j=1}^{500} ( Σ_{i=1}^k (p̂_i^j − p_i)² / Σ_{i=1}^k p_i² ), (10) where p̂^j is the probability estimate obtained in the jth of the 500 replicates. Results of Figures 2(b) and 2(c) are consistent with those of the accuracy. Note that in Figure 2(a), as p (and p̂^j) are balanced, Σ_{i=1}^k (p̂_i^j − p_i)² is small. Hence, all approaches have small MSE though some have poor accuracy. 2We use [x] to denote the nearest integer value of x. [Figure 1 here: three panels (a)-(c) plotting Test Accuracy against log_2 k.] Figure 1: Accuracy by the four encodings: "1vs1" (dashed line, square), "1vsrest" (solid line, cross), "dense" (dotted line, circle), "sparse" (dashdot line, asterisk) [Figure 2 here: three panels (a)-(c) plotting MSE against log_2 k.] Figure 2: MSE by the four encodings: legend the same as Figure 1 6.2 Experiments on Real Data In this section we present experimental results on some real-world multi-class problems. They have been used in [12], which provides more information about data preparation. Two problem sizes, 300/500 and 800/1,000 for training/testing, are used. 20 training/testing splits are generated and the testing error rates are averaged. All data used are available at http://www.csie.ntu.edu.tw/˜cjlin/papers/svmprob/data. We use the same four ways in Section 6.1 to generate I_i. All of them have |I_1| ≈ · · · ≈ |I_m|. With the property that these multi-class problems are reasonably balanced, we set n_i = 1 in (8).
Since there are no probability values available for these problems, we compare accuracy by predicting the label with the largest probability estimate. The purpose here is to compare the four probability estimates, not to measure differences from existing multi-class classification techniques. We consider support vector machines (SVM) [4] with the RBF kernel as the binary classifier. An improved version [10] of [11] obtains r_i. Full SVM parameter selection is conducted before testing, although due to space limitations, we omit details here. The code is modified from LIBSVM [3], a library for support vector machines. The resulting accuracy is in Table 1 for smaller and larger training/testing sets. Except for "1vs1," the other three approaches are quite competitive. These results indicate that practical problems are more similar to case (c) in Section 6.1, where few classes dominate. This observation is consistent with the findings in [12]. Moreover, "1vs1" suffers some losses when k is larger (e.g., letter), the same as in Figure 1(c); so for "1vs1," [12] proposed using a quadratic model instead of the Bradley-Terry model. In terms of computational time, because the number of binary problems for "dense" and "sparse" ([10 log_2 k] and [15 log_2 k], respectively) is larger than k, and each binary problem involves many classes of data (all, and one half), their training time is longer than for "1vs1" and "1vsrest." "Dense" is particularly time consuming. Note that though "1vs1" solves k(k − 1)/2 binary problems, it is efficient as each binary problem involves only two classes of data.
Table 1: Average of 20 test errors (in percentage) by four encodings (lowest boldfaced)

                     300 training and 500 testing     800 training and 1,000 testing
Problem    k     1vs1    1vsrest  dense   sparse      1vs1    1vsrest  dense   sparse
dna        3     10.47   10.33    10.45   10.19       6.21    6.45     6.415   6.345
waveform   3     15.01   15.35    15.66   15.12       13.525  13.635   13.76   13.99
satimage   6     14.22   15.08    14.72   14.8        11.54   11.74    11.865  11.575
segment    7      6.24    6.69     6.62    6.19       3.295   3.605    3.52    3.25
USPS       10    11.37   10.89    10.81   11.14       7.78    7.49     7.31    7.575
MNIST      10    13.84   12.56    13.0    12.29       8.11    7.37     7.59    7.535
letter     26    39.73   35.17    33.86   33.88       21.11   19.685   20.14   19.49

In summary, we propose a generalized Bradley-Terry model which gives individual skill from group competition results. A useful application to general multi-class probability estimates is demonstrated. References [1] E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: a unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141, 2001. [2] R. A. Bradley and M. Terry. The rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39:324–345, 1952. [3] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/˜cjlin/libsvm. [4] C. Cortes and V. Vapnik. Support-vector network. Machine Learning, 20:273–297, 1995. [5] H. A. David. The method of paired comparisons. Oxford University Press, New York, second edition, 1988. [6] R. R. Davidson and P. H. Farquhar. A bibliography on the method of paired comparisons. Biometrics, 32:241–252, 1976. [7] T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, 1995. [8] T. Hastie and R. Tibshirani. Classification by pairwise coupling. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10. MIT Press, Cambridge, MA, 1998. [9] D. R.
Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 32:386–408, 2004. [10] H.-T. Lin, C.-J. Lin, and R. C. Weng. A note on Platt’s probabilistic outputs for support vector machines. Technical report, Department of Computer Science, National Taiwan University, 2003. [11] J. Platt. Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. In A. Smola, P. Bartlett, B. Sch¨olkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, Cambridge, MA, 2000. MIT Press. [12] T.-F. Wu, C.-J. Lin, and R. C. Weng. Probability estimates for multi-class classification by pairwise coupling. In S. Thrun, L. Saul, and B. Sch¨olkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004. [13] B. Zadrozny. Reducing multiclass to binary by coupling probability estimates. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 1041–1048. MIT Press, Cambridge, MA, 2002. [14] E. Zermelo. Die berechnung der turnier-ergebnisse als ein maximumproblem der wahrscheinlichkeitsrechnung. Mathematische Zeitschrift, 29:436–460, 1929.
| 2004 | 101 | 2,510 |
Rate- and Phase-coded Autoassociative Memory Máté Lengyel Peter Dayan Gatsby Computational Neuroscience Unit, University College London 17 Queen Square, London WC1N 3AR, United Kingdom {lmate,dayan}@gatsby.ucl.ac.uk Abstract Areas of the brain involved in various forms of memory exhibit patterns of neural activity quite unlike those in canonical computational models. We show how to use well-founded Bayesian probabilistic autoassociative recall to derive biologically reasonable neuronal dynamics in recurrently coupled models, together with appropriate values for parameters such as the membrane time constant and inhibition. We explicitly treat two cases. One arises from a standard Hebbian learning rule, and involves activity patterns that are coded by graded firing rates. The other arises from a spike timing dependent learning rule, and involves patterns coded by the phase of spike times relative to a coherent local field potential oscillation. Our model offers a new and more complete understanding of how neural dynamics may support autoassociation. 1 Introduction Autoassociative memory in recurrently coupled networks seems fondly regarded as having been long since solved, at least from a computational perspective. Its neurobiological importance, as a model of episodic (event) memory storage and retrieval (from noisy and partial inputs) in structures such as area CA3 in the hippocampus, is of course clear [1]. This perhaps suggests that it is only the exact mapping of the models to the neural substrate that holds any remaining theoretical interest. However, the characteristic patterns of activity in areas such as CA3 that are involved in memory are quite unlike those specified in the bulk of models. In particular neurons (for instance hippocampal place cells) show graded activity during recall [2], prominent theta frequency oscillations [3] and an apparent variety of rules governing synaptic plasticity [4, 5]. 
The wealth of studies of the memory capacity of attractor networks of binary units does not give many clues to the specification, analysis or optimization of networks acting in these biologically relevant regimes. In fact, even theoretical approaches to autoassociative memories with graded activities are computationally brittle. Here, we generalize previous analyses [6, 7] to address these issues. Formally, these models interpret recall as Bayesian inference based on information given by the noisy input, the synaptic weight matrix, and prior knowledge about the distribution of possible activity patterns coding for memories. More concretely (see section 2), the assumed activity patterns and synaptic plasticity rules determine the term in the neuronal update dynamics that describes interactions between interconnected cells. Different aspects of biologically reasonable autoassociative memories arise from different assumptions. We show (section 3) that, for neurons characterized by their graded firing rates, the regular rate-based characterization of neurons effectively approximates optimal Bayesian inference. Optimal values for parameters of the update dynamics, such as the level of inhibition or the leakage conductance, are inherently provided by our formalism. We then extend the model (section 4) to a setting involving spiking neurons in the context of a coherent local field potential oscillation (LFPO). Memories are coded by the phase of the LFPO at which each neuron fires, and are stored by spike timing dependent plasticity. (Acknowledgements: We thank Boris Gutkin for helpful discussions on the phase resetting characteristics of different neuron types. This work was supported by the Gatsby Charitable Foundation.)
In this case, the biophysically plausible neuronal interaction function takes the form of a phase reset curve: presynaptic firing accelerates or decelerates the postsynaptic cell, depending on the relative timing of the two spikes, to a degree that is proportional to the synaptic weight between the two cells.

2 MAP autoassociative recall

The first requirement is to specify the task for autoassociative recall in a probabilistically sound manner. This specification leads to a natural account of the dynamics of the neurons during recall, whose form is largely determined by the learning rule. Unfortunately, the full dynamics includes terms that are not purely local to the information a post-synaptic neuron has about pre-synaptic activity, and we therefore consider approximations that restore the essential characteristics necessary to satisfy the most basic biological constraints. We validate the quality of the approximations in later sections. The construction of the objective function: Consider an autoassociative network which has stored information about M memories x1 ... xM in a synaptic weight matrix W between a set of N neurons. We specify these quantities rather generally at first to allow for different ways of construing the memories later. The most complete probabilistic description of its task is to report the conditional distribution P[x|x̃, W] over the activities x given noisy inputs x̃ and the weights. The uncertainty in this posterior distribution has two roots. First, the activity pattern referred to by the input is unclear unless there is no input noise. Second, biological synaptic plasticity rules are data-lossy 'compression algorithms', and so W specifies only imprecise information about the stored memories. In an ideal case, P[x|x̃, W] would have support only on the M stored patterns x1 ... xM. However, biological storage methods lead to weights W that permit a much greater range of possibilities.
We therefore consider methods that work in the full space of activities x. In order to optimize the probability of extracting just the correct memory, decision theory encourages us to maximize the posterior probability [8]:

x̂ := argmax_x P[x|x̃, W],   P[x|x̃, W] ∝ P[x] P[x̃|x] P[W|x]   (1)

The first term in Eq. 1 imports prior knowledge of the statistical characteristics of the memories, and is assumed to factorize: P[x] := ∏_i P_x[x_i]. The second term describes the noise process corrupting the inputs. For unbiased noise it will be a term in x that is effectively centered on x̃. We assume that the noise corrupting each element of the patterns is independent, and independent of the original pattern, so P[x̃|x] := ∏_i P[x̃_i|x] := ∏_i P[x̃_i|x_i]. The third term assesses the likelihood that the weight matrix came from a training set of size M including pattern x. (Uncertainty about M could also be incorporated into the model, but is neglected here.) Biological constraints encourage consideration of learning updates for the synapse from neuron j to neuron i that are local to the pre-synaptic (x^m_j) and post-synaptic (x^m_i) activities of the connected neurons when pattern x^m is stored:

∆w^m_ij := Ω(x^m_i, x^m_j)   (2)

We assume that the contributions of individual training patterns are additive, W_ij := ∑_m ∆w^m_ij, and that there are no autapses in the network, W_ii := 0. Storing a single random pattern drawn from the prior distribution will result in a synaptic weight change with a distribution determined by the prior and the learning rule, with mean µ_∆w = ⟨Ω(x_1, x_2)⟩_{P_x[x_1]·P_x[x_2]} and variance σ²_∆w = ⟨Ω²(x_1, x_2)⟩_{P_x[x_1]·P_x[x_2]} − µ²_∆w. Storing M − 1 random patterns means adding M − 1 i.i.d. random variables and thus, for moderately large M, results in a synaptic weight with an approximately Gaussian distribution P[W_ij] ≃ G(W_ij; µ_W, σ_W), with mean µ_W = (M − 1) µ_∆w and variance σ²_W = (M − 1) σ²_∆w.
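The central-limit claim above is easy to check numerically. A small sketch under illustrative assumptions (local rule Ω(a, b) = a·b and standard-normal patterns, so µ_∆w = 0 and σ²_∆w = 1):

```python
import numpy as np

# Numerical check of the central-limit argument: with Omega(a, b) = a*b and
# standard-normal patterns, mu_dw = <x1*x2> = 0 and var_dw = <x1^2*x2^2> = 1,
# so the weights should be approximately Gaussian with mean ~0, variance ~M.
rng = np.random.default_rng(1)
N, M = 200, 50

patterns = rng.standard_normal((M, N))
W = np.zeros((N, N))
for x in patterns:                # additive storage: W_ij = sum_m Omega(x^m_i, x^m_j)
    W += np.outer(x, x)
np.fill_diagonal(W, 0.0)          # no autapses, W_ii = 0

offdiag = W[~np.eye(N, dtype=bool)]    # the N(N-1) synaptic weights
emp_mean, emp_var = offdiag.mean(), offdiag.var()
```

With M = 50 stored patterns, the empirical off-diagonal mean should be near 0 and the variance near M, as the Gaussian approximation predicts.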
Adding a further particular pattern x is equivalent to adding a random variable with a mean determined by the learning rule and zero variance, thus:

P[W_ij|x_i, x_j] ≃ G(W_ij; µ_W + Ω(x_i, x_j), σ_W)   (3)

We also make the approximation that the elements of the synaptic weight matrix are independent, and thus write P[W|x] := ∏_{i, j≠i} P[W_ij|x_i, x_j]. Having restricted our horizons to maximum a posteriori (MAP) inference, we can consider as an objective function the log of the posterior distribution. In the light of our factorizability assumptions, this is

O(x) = log P[x] + log P[x̃|x] + log P[W|x] = ∑_i log P[x_i] + ∑_i log P[x̃_i|x_i] + ∑_{i, j≠i} log P[W_ij|x_i, x_j]   (4)

Neuronal update dynamics: Finding the global maximum of the objective function, as stated in equation 1, is computationally extravagant, and biologically questionable. We therefore specify neuronal dynamics arising from gradient ascent on the objective function:

τ_x ẋ ∝ ∇_x O(x)   (5)

Combining equations 4 and 5 we get

τ_x dx_i/dt = ∂/∂x_i log P[x] + ∂/∂x_i log P[x̃|x] + ∂/∂x_i log P[W|x],   where   (6)

∂/∂x_i log P[W|x] = ∑_{j≠i} ( ∂/∂x_i log P[W_ij|x_i, x_j] + ∂/∂x_i log P[W_ji|x_j, x_i] )   (7)

The first two terms in equation 6 only depend on the activity of the neuron itself and on its input. For example, for a Gaussian prior P_x[x_i] = G(x_i; µ_x, σ_x) and unbiased Gaussian noise on the input, P[x̃_i|x_i] = G(x̃_i; x_i, σ_x̃), these would be:

d/dx_i log P[x_i] + d/dx_i log P[x̃_i|x_i] = (µ_x − x_i)/σ²_x + (x̃_i − x_i)/σ²_x̃ = µ_x/σ²_x − (1/σ²_x + 1/σ²_x̃) x_i + x̃_i/σ²_x̃   (8)

The first term on the right-hand side of the last equality expresses a constant bias; the second involves self-decay; and the third describes the effect of the input. The terms in equation 7 indicate how a neuron should take into account the activity of other neurons based on the synaptic weights.
From equation 3, the terms are

∂/∂x_i log P[W_ij|x_i, x_j] = (1/σ²_W) [ (W_ij − µ_W) ∂Ω(x_i, x_j)/∂x_i − Ω(x_i, x_j) ∂Ω(x_i, x_j)/∂x_i ]   (9)

∂/∂x_i log P[W_ji|x_j, x_i] = (1/σ²_W) [ (W_ji − µ_W) ∂Ω(x_j, x_i)/∂x_i − Ω(x_j, x_i) ∂Ω(x_j, x_i)/∂x_i ]   (10)

Two aspects of the above formulæ are biologically troubling. The last terms in each express the effects of other cells, but without there being corresponding synaptic weights. We approximate these terms using their mean values over the prior distribution. In this case α⁺_i = ⟨Ω(x_i, x_j) ∂Ω(x_i, x_j)/∂x_i⟩_{P_x[x_j]} and α⁻_i = ⟨Ω(x_j, x_i) ∂Ω(x_j, x_i)/∂x_i⟩_{P_x[x_j]} contribute terms that only depend on the activity of the updated cell, and so can be lumped with the prior- and input-dependent terms of Eq. 8. Further, equation 10 includes synaptic weights, W_ji, that are postsynaptic with respect to the updated neuron. This would require the neuron to change its activity depending on the weights of its postsynaptic synapses. One simple work-around is to approximate a postsynaptic weight by the mean of its conditional distribution given the corresponding presynaptic weight: W_ji ≃ ⟨P[W_ji|W_ij]⟩. In the simplest case of perfectly symmetric or anti-symmetric learning, with Ω(x_i, x_j) = ±Ω(x_j, x_i), we have W_ji = ±W_ij and α⁺_i = α⁻_i = α_i. In the anti-symmetric case, µ_W = 0. Making these assumptions, the neuronal interaction function simplifies to

H(x_i, x_j) = (W_ij − µ_W) ∂Ω(x_i, x_j)/∂x_i   (11)

and (2/σ²_W) [ ∑_{j≠i} H(x_i, x_j) − (N − 1) α_i ] is the resulting approximation to the weight-dependent term (Eq. 7) of the dynamics. Equation 11 shows that there is a simple relationship between the synaptic plasticity rule, Ω(x_i, x_j), and the neuronal interaction function, H(x_i, x_j), that is approximately optimal for reading out the information that is encoded in the synaptic weight matrix by that synaptic plasticity rule. It also shows that the magnitude of this interaction should be proportional to the synaptic weight connecting the two cells, W_ij.
We specialize this analysis to two important cases with (a) graded, rate-based, or (b) spiking, oscillatory phase-based, activities. We derive appropriate dynamics from the learning rules, and show that, despite the approximations, the networks have good recall performance.

3 Rate-based memories

The most natural assumption about pattern encoding is that the activity of each unit is interpreted directly as its firing rate. Note, however, that most approaches to autoassociative memory assume binary patterns [9], sitting ill with the lack of saturation in cortical or hippocampal neurons in the appropriate regime. Experiments [10] suggest that regulating activity levels in such networks is very tricky, requiring exquisitely carefully tuned neuronal dynamics. There has been work on graded activities in the special case of line or surface attractor networks [11, 12], but these also pose dynamical complexities. By contrast, graded activities are straightforward in our framework. Consider Hebbian covariance learning: Ω_cov(x_i, x_j) := A_cov (x_i − µ_x)(x_j − µ_x), where A_cov > 0 is a normalizing constant and µ_x is the mean of the prior distribution of the patterns to be stored. The learning rule is symmetric, and so, based on Eq. 11, the optimal neuronal interaction function is H_cov(x_i, x_j) = A_cov (W_ij − µ_W)(x_j − µ_x). This leads to a term in the dynamics which is the conventional weighted sum of pre-synaptic firing rates. The other key term in the dynamics is −α_i = −A²_cov σ²_x (x_i − µ_x), where σ²_x is the variance of the prior distribution, expressing self-decay to a baseline activity level determined by µ_x. The prior- and input-dependent terms also contribute to self-decay, as shown in Eq. 8. Integration of the weighted sum of inputs plus decay to baseline constitutes the widely used leaky integrator reduction of a single neuron [10].
Thus, canonical models of synaptic plasticity (the Hebbian covariance rule) and single neuron firing rate dynamics are exactly matched for autoassociative recall. Optimal values for all parameters of single neuron dynamics (except the membrane time constant determining the speed of gradient ascent) are directly implied. This is important, since it indicates how to solve the problem, for graded autoassociative memories (as opposed to saturating ones [14, 15]), that neuronal dynamics have to be finely tuned. As examples, the leak conductance is given by the sum of the coefficients of all terms linear in x_i, the optimal bias current is the sum of all terms independent of x_i, and the level of inhibition can be determined from the negative terms in the interaction function, −µ_W and −µ_x. Since our derivation embodies a number of approximations, we performed numerical simulations. To gauge the performance of the Bayes-optimal network we compared it to networks of increasing complexity (Fig. 1A,B).
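The storage and recall dynamics of this section can be condensed into a short simulation. The following sketch (step size and iteration count are illustrative choices, not the authors' code) stores two Gaussian patterns with the covariance rule and runs gradient ascent on the approximate log posterior built from Eqs. 8 and 11:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian prior/noise and covariance rule, in the Fig. 1A,B setting.
N, M = 50, 2
mu_x, var_x = 0.0, 1.0        # prior P_x[x_i] = G(x_i; mu_x, sigma_x)
var_n = 1.0                   # variance of the unbiased input noise
A = 1.0                       # A_cov of the covariance rule

# Store M patterns with the covariance rule; no autapses (W_ii = 0).
patterns = rng.normal(mu_x, np.sqrt(var_x), (M, N))
W = sum(A * np.outer(p - mu_x, p - mu_x) for p in patterns)
np.fill_diagonal(W, 0.0)

var_W = (M - 1) * A**2 * var_x**2   # sigma_W^2; mu_W = 0 for this rule

def objective(x, x_in):
    """Approximate log posterior whose gradient gives the recall dynamics."""
    d = x - mu_x
    return (-np.sum(d**2) / (2 * var_x)
            - np.sum((x_in - x)**2) / (2 * var_n)
            + (A / var_W) * (d @ (W @ d))                       # H_cov pair term
            - ((N - 1) * A**2 * var_x / var_W) * np.sum(d**2))  # alpha_i term

def grad(x, x_in):
    d = x - mu_x
    return ((mu_x - x) / var_x + (x_in - x) / var_n       # prior + input (Eq. 8)
            + (2 * A / var_W) * (W @ d)                   # sum_j H_cov(x_i, x_j)
            - (2 * (N - 1) * A**2 * var_x / var_W) * d)   # -(N-1) alpha_i decay

# Recall by gradient ascent (Eq. 5) from a noisy version of pattern 0.
x_in = patterns[0] + rng.normal(0.0, np.sqrt(var_n), N)
x = x_in.copy()
trace = [objective(x, x_in)]
for _ in range(400):
    x = x + 5e-4 * grad(x, x_in)
    trace.append(objective(x, x_in))
```

For a sufficiently small step size the objective trace is non-decreasing, mirroring the gradient-ascent formulation of Eq. 5; recall quality in this marginal regime depends on the realized patterns and noise.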
Figure 1: Performance of the rate-coded Bayesian inference network (□), compared to a Bayesian network that only takes into account evidence from the prior and the input but not from the synaptic weight matrix (×), a network that randomly generates patterns from the prior distribution (•), a network that transmits its input to its output (+), and the 'ideal observer' having access to the list of stored patterns (♦). A. Firing rates of single units at the end of the recall process (y-axis) against firing rates in the original pattern (x-axis). B. Frequency histograms of errors (difference between recalled and stored firing rates). The ideal observer is not plotted because its error distribution was a Dirac-delta at 0. C, D. Benchmarking the Bayesian network against the network of Treves [13] (∗) on patterns of non-negative firing rates. Average error is the square root of the mean squared error (C); average normalized error measures only the angle difference between true and recalled activities (D). (These measures are not exactly the same as the one used to derive the dynamics (equation 1), but are reasonably appropriate.) The prior distribution was Gaussian with mean µ_x = 0 and variance σ²_x = 1 (A,B), or a Gaussian with mean µ_x = 0.5 and variance σ²_x = 0.25 truncated below 0 (C) (yielding approximately a = 0.5 density), or ternary with a = 0.5 mean and density (D). The input was corrupted by unbiased Gaussian noise of variance σ²_x̃ = 1 (A,B) or σ²_x̃ = 1.5 (C,D), and cut at 0 (C,D). The learning rule was the covariance rule with A_cov = 1 (A,B) or A_cov = 1/(Na²) (C,D). The number of cells in the network was N = 50 (A,B) or N = 100 (C,D), and the number of memories stored was M = 2 (A,B) or varied between M = 2...100 (C,D; note logarithmic scale). For each data point, 10 different networks were simulated with a different set of stored patterns, and for each network, 10 attempts at recall were made, with a noisy version of a randomly chosen pattern as the input and with activities initialized at this input.

A trivial lower bound of performance is given by a network that generates random patterns from the same prior distribution from which the patterns to be stored were drawn (P[x]). Another simple alternative is a network that simply transmits its input (x̃) to its output. (Note that the 'input only' network is not necessarily superior to the 'prior only' network: their relative effectiveness depends on the relative variances of the prior and noise distributions; a narrow prior with a wide noise distribution would make the latter perform better, as in Fig. 1D.) The Bayesian inference network performs considerably better than any of these simple networks. Crucially, this improvement depends on the information encoded in the synaptic weights: the network practically falls back to the level of the 'input only' network (or the 'prior only' network, whichever is the better; data not shown) if this information is ignored in the construction of the recall dynamics (by taking the third term in Eq. 6 to be 0).
An upper bound on the performance of any network using some biological form of synaptic plasticity comes from an 'ideal observer' which knows the complete list of stored patterns (rather than its distant reflection in the synaptic weight matrix) and computes and compares the probability that each was corrupted to form the input x̃ to find the best match (rather than using neural dynamics). Such an ideal observer only makes errors when both the number of patterns stored and the noise in the input are sufficiently large, so that corrupting a stored pattern is likely to make it more similar to another stored pattern. In the case shown in Fig. 1A,B this is not so, since only two patterns were stored, and the ideal observer performs perfectly as expected. Nevertheless, there may be situations in which perfect performance is out of reach even for an ideal observer (Fig. 1C,D), which makes it a meaningful touchstone. In summary, the performance of any network can be assessed by measuring where it lies between the better one of the 'prior only' and 'input only' networks and the ideal observer. As a further challenge, we also benchmarked our model against the model of Treves [13] (Fig. 1C,D), which we chose because it is a rare example of a network that was designed to have near optimal recall performance in the face of non-binary patterns. In this work, Treves considered ternary patterns, drawn from the distribution P[x_i] := (1 − (4/3)a) δ(x_i) + a δ(x_i − 1/2) + (a/3) δ(x_i − 3/2), where δ(x) is the Dirac-delta function. Here, a = µ_x quantifies the density of the patterns (i.e. how non-sparse they are). The patterns are stored using the covariance rule as stated above (with A_cov := 1/(Na²)). Neuronal update in the model is discrete, asynchronous, and involves two steps.
First the 'local field' is calculated as h_i := ∑_{j≠i} W_ij x_j − k (∑_i x_i − N)³ + Input, then the output of the neuron is calculated as a threshold-linear function of the local field: x_i := g (h_i − h_Thr) if h_i > h_Thr and x_i := 0 otherwise, where g := 0.53 a/(1 − a) is the gain parameter, h_Thr := 0 is the threshold, and the value of k is set by iterative search to optimize performance. The comparison between Treves' network as we implemented it and our network is imperfect, since the former is optimized for recalling ternary patterns while, in the absence of neural evidence for ternary patterns, we used the simpler and more reasonable neural dynamics for our network that emerge from an assumption that the distribution over the stored patterns is Gaussian. Further, we corrupted the inputs by unbiased additive Gaussian noise (with variance σ²_x̃ = 1.5), but truncated the activities at 0, though did not adjust the dynamics of our network in the light of the truncation. Of course, these can only render our network less effective. Still, the Bayesian network clearly outperformed the Treves network when the patterns were drawn from a truncated Gaussian (Fig. 1C). The performance of the Bayesian network stayed close to that of an ideal observer assuming non-truncated Gaussian input, showing that most of the errors were caused by this assumption and not by suboptimality of the neural interactions decoding the information in the synaptic weights. Despite extensive efforts to find the optimal parameters for the Treves network, its performance did not even reach that of the 'input only' network. Finally, again for ternary patterns, we also considered penalizing only errors in the direction of the vectors of recalled activities, ignoring errors in their magnitudes (Fig. 1D). The Treves network did better in this case, but still not as well as the Bayesian network.
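The two-step update just described can be sketched as follows; k is left as a free parameter (the text sets it by iterative search), and the weights used here are random placeholders rather than covariance-rule weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sketch of the Treves two-step update described in the text.
N, a = 100, 0.5
g = 0.53 * a / (1 - a)        # gain of the threshold-linear transfer function
h_thr = 0.0                   # threshold

def treves_sweep(x, W, inp, k):
    """One asynchronous sweep: local field, then threshold-linear output."""
    x = x.copy()
    for i in rng.permutation(N):
        h = W[i] @ x - k * (x.sum() - N) ** 3 + inp[i]
        x[i] = g * (h - h_thr) if h > h_thr else 0.0
    return x

# Placeholder symmetric weights without autapses, and a random input.
W = rng.standard_normal((N, N)) / N
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
inp = rng.random(N)
x = treves_sweep(inp, W, inp, k=1e-6)
```

The threshold-linear output guarantees non-negative activities, matching the non-negative firing rates the benchmark was designed for.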
Importantly, in both cases, in the regime where the synaptic weights were saturated in the M → N limit and it was thus no longer possible to extract any useful information from them, the Bayesian network still only fell back to the level of the 'prior only' network, whereas the Treves network did not seem to have any such upper bound on its errors.

4 Phase-based memories

Brain areas known to be involved in memory processing demonstrate prominent oscillations (LFPOs) under a variety of conditions, including both wake and sleep states [16]. Under these conditions, the phases of the spikes of a neuron relative to the LFPO have been shown to be carefully controlled [17], and even to convey meaningful stimulus information, e.g. about the position of an animal in its environment [3] or retrieved odor identity [18]. The discovery of spike timing dependent plasticity (STDP), in which the relative timing of pre- and postsynaptic firings determines the sign and extent of synaptic weight change, offered new insights into how the information represented by spike times may be stored in neural networks [19]. However, bar some interesting suggestions about neuronal resonance [20], it is less clear how one might correctly recall information thereby stored in the synaptic weights. The theory laid out in Section 2 allows us to treat this problem systematically. First, neuronal activities x_i will be interpreted as firing times relative to a reference phase of the ongoing LFPO, such as the peak of the theta oscillation in the hippocampus, and will thus be circular variables drawn from a circular Gaussian. Next, our learning rule is an exponentially decaying Gabor-function of the phase difference between pre- and postsynaptic firing: Ω_STDP(x_i, x_j) := A_STDP exp[κ_STDP cos(∆φ_ij)] sin(∆φ_ij − φ_STDP), with ∆φ_ij = 2π (x_i − x_j)/T_STDP.
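This rule, combined with the generic prescription of Eq. 11 (H = (W_ij − µ_W) ∂Ω/∂x_i, with µ_W = 0 for an antisymmetric rule), can be checked numerically; a sketch with illustrative parameter values:

```python
import numpy as np

# Illustrative check that the phase-reset interaction equals w * dOmega/dx_i
# (Eq. 11 with mu_W = 0 for the antisymmetric rule). Parameter values are
# placeholders, not the fitted hippocampal constants.
A, kappa, T = 0.03, 4.0, 125.0

def omega(xi, xj):
    """Antisymmetric STDP rule (phi_STDP = 0); xi, xj are firing phases in ms."""
    dphi = 2.0 * np.pi * (xi - xj) / T
    return A * np.exp(kappa * np.cos(dphi)) * np.sin(dphi)

def interaction(w, xi, xj):
    """H_STDP(xi, xj) = w * dOmega/dxi for a synapse of weight w."""
    dphi = 2.0 * np.pi * (xi - xj) / T
    return (2.0 * np.pi * A / T) * w * np.exp(kappa * np.cos(dphi)) * (
        np.cos(dphi) - kappa * np.sin(dphi) ** 2)

# Finite-difference check of the derivative relation:
xi, xj, w, eps = 30.0, 10.0, 0.7, 1e-6
numeric = w * (omega(xi + eps, xj) - omega(xi - eps, xj)) / (2.0 * eps)
analytic = interaction(w, xi, xj)
```

The finite difference confirms that the biphasic interaction function is the exact phase derivative of the learning rule, scaled by the synaptic weight.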
STDP characteristics in different brain regions are well captured by this general formula, but the parameters determining their exact shapes differ greatly among regions. We constrain our analysis to the antisymmetric case, so that φ_STDP = 0, and set the other parameters to match experimental data on hippocampal STDP [5]. The neuronal interaction function that satisfies Eq. 11 is

H_STDP(x_i, x_j) = (2π A_STDP / T_STDP) W_ij exp[κ_STDP cos(∆φ_ij)] (cos(∆φ_ij) − κ_STDP sin²(∆φ_ij)).

This interaction function decreases the firing phase, and thus accelerates the postsynaptic cell, if the presynaptic spike precedes postsynaptic firing, and delays the postsynaptic cell if the presynaptic spike arrives just after the postsynaptic cell fired. This characteristic is the essence of the biphasic phase reset curve of type II cells [21], and has been observed in various types of neurons, including neocortical cells [22]. Thus again, our derivation directly couples STDP, a canonical model of synaptic plasticity, and phase reset curves in a canonical model of neural dynamics. Numerical simulations again tested the various approximations. Performance of the network, shown in Fig. 2, is comparable to that of the rate-coded network. Further simulations will be necessary to map out the performance of our network over a wider range of parameters, such as the signal-to-noise ratio.

5 Discussion

We have described a Bayesian approach to recall in autoassociative memories. This permits the derivation of neuronal dynamics appropriate to a synaptic plasticity rule, and we used this to show a coupling between canonical Hebbian and STDP plasticity rules and canonical rate-based and phase-based neuronal dynamics, respectively. This provides an unexpectedly close link between optimal computations and actual implementations. Our method also leads to networks that are highly competent at recall. There are a number of important directions for future work.
First, even in phase-based networks, not all neurons fire in each period of the oscillation. This suggests that neurons may employ a dual code: the more rate-based probability of being active in a cycle, and the phase-based timing of the spike relative to the cycle [24]. The advantages of such a scheme have yet to be fully characterized. Second, in the present framework the choice of the learning rule is arbitrary, as long as the recall dynamics is optimally matched to it. Our formalism also suggests that there may be a way to optimally choose the learning rule itself in the first place, by matching it to the prior distribution of patterns. This approach would thus be fundamentally different from those seeking 'globally' optimal learning rules [25], and may be more similar to those used to find optimal tuning curves appropriately matching stimulus statistics [26].

Figure 2: Performance of the phase-coded network. The error distribution for the ideal observer was a Dirac-delta at 0 and was thus omitted from A. The average error of the 'prior only' network was too large to be plotted in B. The prior was a von Mises distribution with mean µ_x = 0 and concentration κ_x = 0.5 on a T_Θ = 125 ms long cycle, matching data on theta frequency modulation of pyramidal cell population activity in the hippocampus [23]. The input was corrupted by unbiased circular Gaussian (von Mises) noise with concentration κ_x̃ = 10. The learning rule was the circular STDP rule with parameters A_STDP = 0.03, κ_STDP = 4 and T_STDP = T_Θ, matching experimental data on hippocampal STDP [5] and theta periodicity. The network consisted of N = 100 cells, and the number of memories stored was M = 10 (A) or varied between M = 2...100 (B; note logarithmic scale). For further explanation of symbols and axes, see Figure 1.
References
[1] Marr D. Philos Trans R Soc Lond B Biol Sci 262:23, 1971.
[2] O'Keefe J. Exp Neurol 51:78, 1976.
[3] O'Keefe J, Recce ML. Hippocampus 3:317, 1993.
[4] Bliss TVP, Lømo T. J Physiol (Lond) 232:331, 1973.
[5] Bi GQ, Poo MM. J Neurosci 18:10464, 1998.
[6] MacKay DJC. In Maximum entropy and Bayesian methods, 237, 1990.
[7] Sommer FT, Dayan P. IEEE Trans Neural Netw 9:705, 1998.
[8] Jaynes ET. Probability theory: the logic of science. Cambridge University Press, 2003.
[9] Amit DJ. Modeling brain function. Cambridge University Press, 1989.
[10] Dayan P, Abbott LF. Theoretical neuroscience. MIT Press, 2001.
[11] Zhang K. J Neurosci 16:2112, 1996.
[12] Seung HS. Proc Natl Acad Sci USA 93:13339, 1996.
[13] Treves A. Phys Rev A 42:2418, 1990.
[14] Hopfield JJ. Proc Natl Acad Sci USA 79:2554, 1982.
[15] Hopfield JJ. Proc Natl Acad Sci USA 81:3088, 1984.
[16] Buzsáki Gy. Neuron 33:325, 2002.
[17] Harris KD, et al. Nature 424:552, 2003.
[18] Li Z, Hopfield JJ. Biol Cybern 61:379, 1989.
[19] Abbott LF, Nelson SB. Nat Neurosci 3:1178, 2000.
[20] Scarpetta S, et al. Neural Comput 14:2371, 2002.
[21] Ermentrout B, et al. Neural Comput 13:1285, 2001.
[22] Reyes AD, Fetz EE. J Neurophysiol 69:1673, 1993.
[23] Klausberger T, et al. Nature 421:844, 2003.
[24] Mueller R, et al. In BioNet'96, 70, 1996.
[25] Gardner E, Derrida B. J Phys A 21:271, 1988.
[26] Laughlin S. Z Naturforsch 36:901, 1981.
| 2004 | 102 | 2,511 |
Constraining a Bayesian Model of Human Visual Speed Perception Alan A. Stocker and Eero P. Simoncelli Howard Hughes Medical Institute, Center for Neural Science, and Courant Institute of Mathematical Sciences New York University, U.S.A. Abstract It has been demonstrated that basic aspects of human visual motion perception are qualitatively consistent with a Bayesian estimation framework, where the prior probability distribution on velocity favors slow speeds. Here, we present a refined probabilistic model that can account for the typical trial-to-trial variabilities observed in psychophysical speed perception experiments. We also show that data from such experiments can be used to constrain both the likelihood and prior functions of the model. Specifically, we measured matching speeds and thresholds in a two-alternative forced choice speed discrimination task. Parametric fits to the data reveal that the likelihood function is well approximated by a LogNormal distribution with a characteristic contrast-dependent variance, and that the prior distribution on velocity exhibits significantly heavier tails than a Gaussian, and approximately follows a power-law function. Humans do not perceive visual motion veridically. Various psychophysical experiments have shown that the perceived speed of visual stimuli is affected by stimulus contrast, with low contrast stimuli being perceived to move slower than high contrast ones [1, 2]. Computational models have been suggested that can qualitatively explain these perceptual effects. Commonly, they assume the perception of visual motion to be optimal either within a deterministic framework with a regularization constraint that biases the solution toward zero motion [3, 4], or within a probabilistic framework of Bayesian estimation with a prior that favors slow velocities [5, 6]. 
The solutions resulting from these two frameworks are similar (and in some cases identical), but the probabilistic framework provides a more principled formulation of the problem in terms of meaningful probabilistic components. Specifically, Bayesian approaches rely on a likelihood function that expresses the relationship between the noisy measurements and the quantity to be estimated, and a prior distribution that expresses the probability of encountering any particular value of that quantity. A probabilistic model can also provide a richer description, by defining a full probability density over the set of possible "percepts", rather than just a single value. Numerous analyses of psychophysical experiments have made use of such distributions within the framework of signal detection theory in order to model perceptual behavior [7]. Previous work has shown that an ideal Bayesian observer model based on Gaussian forms for both likelihood and prior is sufficient to capture the basic qualitative features of global translational motion perception [5, 6]. But the behavior of the resulting model deviates systematically from human perceptual data, most importantly with regard to trial-to-trial variability and the precise form of the interaction between contrast and perceived speed. A recent article achieved better fits for the model under the assumption that human contrast perception saturates [8].

Figure 1: Bayesian model of visual speed perception. a) For a high contrast stimulus, the likelihood has a narrow width (a high signal-to-noise ratio) and the prior induces only a small shift µ of the mean v̂ of the posterior. b) For a low contrast stimulus, the measurement is noisy, leading to a wider likelihood. The shift µ is much larger and the perceived speed lower than under condition (a).
In order to advance the theory of Bayesian perception and provide significant constraints on models of neural implementation, it seems essential to constrain quantitatively both the likelihood function and the prior probability distribution. In previous work, the proposed likelihood functions were derived from the brightness constancy constraint [5, 6] or other generative principles [9]. Also, previous approaches defined the prior distribution based on general assumptions and computational convenience, typically choosing a Gaussian with zero mean, although a Laplacian prior has also been suggested [4]. In this paper, we develop a more general form of Bayesian model for speed perception that can account for trial-to-trial variability. We use psychophysical speed discrimination data in order to constrain both the likelihood and the prior function.

1 Probabilistic Model of Visual Speed Perception

1.1 Ideal Bayesian Observer

Assume that an observer wants to obtain an estimate for a variable v based on a measurement m that she/he performs. A Bayesian observer “knows” that the measurement device is not ideal and that the measurement m is therefore affected by noise. Hence, this observer combines the information gained by the measurement m with a priori knowledge about v. Doing so (and assuming that the prior knowledge is valid), the observer will – on average – perform better in estimating v than by just trusting the measurements m. According to Bayes' rule

    p(v|m) = (1/α) p(m|v) p(v)    (1)

the probability of perceiving v given m (posterior) is the product of the likelihood of v for a particular measurement m and the a priori knowledge about the estimated variable v (prior). α is a normalization constant independent of v that ensures that the posterior is a proper probability distribution.

Figure 2: 2AFC speed discrimination experiment.
a) Two patches of drifting gratings were displayed simultaneously (motion without movement). The subject was asked to fixate the center cross and decide after the presentation which of the two gratings was moving faster. b) A typical psychometric curve obtained under such a paradigm. The dots represent the empirical probability that the subject perceived stimulus2 moving faster than stimulus1. The speed of stimulus1 was fixed while v2 was varied. The point of subjective equality, vmatch, is the value of v2 for which Pcum = 0.5. The threshold velocity vthresh is the velocity for which Pcum = 0.875.

It is important to note that the measurement m is an internal variable of the observer and is not necessarily represented in the same space as v. The likelihood embodies both the mapping from v to m and the noise in this mapping. For now, we assume that there is a monotonic function f(v) : v → vm that maps v into the same space as m (the m-space). Doing so allows us to treat m and vm analytically in the same space. We will later propose a suitable form for the mapping function f(v). An ideal Bayesian observer selects the estimate that minimizes the expected loss, given the posterior and a loss function. We assume a least-squares loss function; the optimal estimate v̂ is then the mean of the posterior in Equation (1). It is easy to see why this model of a Bayesian observer is consistent with the fact that perceived speed decreases with contrast. The width of the likelihood varies inversely with the accuracy of the measurements performed by the observer, which presumably decreases with decreasing contrast due to a decreasing signal-to-noise ratio. As illustrated in Figure 1, the shift in perceived speed towards slow velocities grows with the width of the likelihood, and thus a Bayesian model can qualitatively explain the psychophysical results [1].
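As a numerical illustration of Figure 1, the posterior of Equation (1) can be evaluated on a grid. The Gaussian likelihood, the particular slow-speed prior, and all parameter values below are assumptions for the sketch, not the forms fitted later in the paper.

```python
import numpy as np

def posterior(v_grid, m, sigma, prior):
    """Posterior of Equation (1) on a grid: likelihood x prior / alpha."""
    likelihood = np.exp(-(v_grid - m) ** 2 / (2.0 * sigma ** 2))
    unnorm = likelihood * prior(v_grid)
    return unnorm / np.trapz(unnorm, v_grid)  # 1/alpha normalization

# Hypothetical slow-speed prior and a measurement at 5 deg/sec.
v = np.linspace(0.0, 20.0, 2001)
slow_prior = lambda s: 1.0 / (1.0 + s) ** 2
post_narrow = posterior(v, m=5.0, sigma=0.5, prior=slow_prior)  # high contrast
post_wide = posterior(v, m=5.0, sigma=3.0, prior=slow_prior)    # low contrast

# The wider (low-contrast) likelihood is pulled further toward slow
# speeds, as sketched in Figure 1.
mean_narrow = np.trapz(v * post_narrow, v)
mean_wide = np.trapz(v * post_wide, v)
```

Both posterior means lie below the measurement, and the low-contrast one is pulled further toward zero, reproducing the qualitative contrast effect.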
1.2 Two-Alternative Forced Choice Experiment

We would like to examine perceived speeds under a wide range of conditions in order to constrain a Bayesian model. Unfortunately, perceived speed is an internal variable, and it is not obvious how to design an experiment that would allow subjects to express it directly¹. Perceived speed can only be accessed indirectly by asking the subject to compare the speeds of two stimuli. For a given trial, an ideal Bayesian observer in such a two-alternative forced choice (2AFC) experimental paradigm simply decides on the basis of the two trial estimates v̂1 (stimulus1) and v̂2 (stimulus2) which stimulus moves faster. Each estimate v̂ is based on a particular measurement m. For a given stimulus with speed v, an ideal Bayesian observer will produce a distribution of estimates p(v̂|v) because m is noisy. Over trials, the observer's behavior can be described by classical signal detection theory based on the distributions of the estimates; hence, e.g., the probability of perceiving stimulus2 moving faster than stimulus1 is given as the cumulative probability

    Pcum(v̂2 > v̂1) = ∫₀^∞ p(v̂2|v2) [ ∫₀^{v̂2} p(v̂1|v1) dv̂1 ] dv̂2    (2)

Pcum describes the full psychometric curve. Figure 2b illustrates the measured psychometric curve and its fit from such an experimental situation.

¹Although see [10] for an example of determining and even changing the prior of a Bayesian model for a sensorimotor task, where the estimates are more directly accessible.

2 Experimental Methods

We measured matching speeds (Pcum = 0.5) and thresholds (Pcum = 0.875) in a 2AFC speed discrimination task. Subjects were presented simultaneously with two circular patches of horizontally drifting sine-wave gratings for the duration of one second (Figure 2a). Patches were 3 deg in diameter, and were displayed at 6 deg eccentricity to either side of a fixation cross. The stimuli had an identical spatial frequency of 1.5 cycles/deg.
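The double integral of Equation (2) is straightforward to evaluate numerically. The sketch below assumes, purely for illustration, Gaussian estimate distributions p(v̂|v) on the whole real line; the means and widths are arbitrary.

```python
import numpy as np
from math import erf, sqrt

def p_cum(mu1, s1, mu2, s2, lo=-40.0, hi=40.0, n=8001):
    """Numerical evaluation of Equation (2), assuming Gaussian
    estimate distributions p(v_hat | v) as an illustration."""
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    p1 = np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / (s1 * sqrt(2 * np.pi))
    p2 = np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / (s2 * sqrt(2 * np.pi))
    # inner integral: running CDF of v_hat_1 (trapezoid rule)
    cdf1 = np.concatenate(([0.0], np.cumsum((p1[1:] + p1[:-1]) * 0.5 * dx)))
    # outer integral over v_hat_2
    return float(np.trapz(p2 * cdf1, x))
```

Matched estimate distributions give Pcum = 0.5, the point of subjective equality; raising the mean of stimulus2 pushes Pcum toward 1, tracing out the psychometric curve of Figure 2b.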
One stimulus was considered to be the reference stimulus, having one of two different contrast values (c1 = [0.075 0.5]) and one of five different speed values (u1 = [1 2 4 8 12] deg/sec), while the second stimulus (test) had one of five different contrast values (c2 = [0.05 0.1 0.2 0.4 0.8]) and a varying speed that was determined by an interleaved staircase procedure. For each condition there were 96 trials. Conditions were randomly interleaved, including a random choice of stimulus identity (test vs. reference) and motion direction (right vs. left). Subjects were asked to fixate during stimulus presentation and select the faster moving stimulus. The threshold experiment differed only in that auditory feedback was given to indicate the correctness of their decisions. This did not change the outcome of the experiment but significantly increased the quality of the data and thus reduced the number of trials needed.

3 Analysis

With the data from the speed discrimination experiments we could in principle apply a parametric fit using Equation (2) to derive the prior and the likelihood, but the optimization is difficult, and the fit might not be well constrained given the amount of data we have obtained. The problem becomes much more tractable given the following weak assumptions:

• We consider the prior to be relatively smooth.
• We assume that the measurement m is corrupted by additive Gaussian noise with a variance whose dependence on stimulus speed and contrast is separable.
• We assume that there is a mapping function f(v) : v → vm that maps v into the space of m (m-space). In that space, the likelihood is convolutional, i.e. the noise in the measurement directly defines the width of the likelihood.

These assumptions allow us to relate the psychophysical data to our probabilistic model in a simple way. The following analysis is in the m-space. The point of subjective equality (Pcum = 0.5) is defined as the point where the expected values of the speed estimates are equal.
We write

    E⟨v̂m,1⟩ = E⟨v̂m,2⟩    (3)
    vm,1 − E⟨µ1⟩ = vm,2 − E⟨µ2⟩

where E⟨µ⟩ is the expected shift of the perceived speed compared to the veridical speed. For the discrimination threshold experiment, the above assumptions imply that the variance var⟨v̂m⟩ of the speed estimates v̂m is equal for both stimuli. Then, (2) predicts that the discrimination threshold is proportional to the standard deviation, thus

    vm,2 − vm,1 = γ √(var⟨v̂m⟩)    (4)

where γ is a constant that depends on the threshold criterion Pcum and the exact shape of p(v̂m|vm).

Figure 3: Piece-wise approximation. We perform a parametric fit by assuming the prior to be piece-wise linear and the likelihood to be LogNormal (Gaussian in the m-space).

3.1 Estimating the prior and likelihood

In order to extract the prior and the likelihood of our model from the data, we have to find a generic local form of the prior and the likelihood and relate them to the mean and the variance of the speed estimates. As illustrated in Figure 3, we assume that the likelihood is Gaussian with a standard deviation σ(c, vm). Furthermore, the prior is assumed to be well-approximated by a first-order Taylor series expansion over the velocity ranges covered by the likelihood. We parameterize this linear expansion of the prior as p(vm) = a vm + b. We can now derive a posterior for this local approximation of likelihood and prior and then define the perceived speed shift µ(m). The posterior can be written as

    p(vm|m) = (1/α) p(m|vm) p(vm) = (1/α) exp(−vm² / (2σ(c, vm)²)) (a vm + b)    (5)

where α is the normalization constant

    α = ∫_{−∞}^{∞} p(m|vm) p(vm) dvm = b √(2π σ(c, vm)²)    (6)

We can compute µ(m) as the first-order moment of the posterior for a given m.
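As a sanity check on the derivation that follows, the first moment of the local posterior (5) can be compared with the closed form (a/b)σ². The parameter values are arbitrary, chosen so that the linear prior stays positive over the integration range.

```python
import numpy as np

# Local model: Gaussian likelihood of width sigma centered on the
# measurement, locally linear prior p(vm) = a*vm + b.  Work in
# measurement-centered coordinates u = vm - m.
a, b, sigma = 0.1, 1.0, 0.5
u = np.linspace(-8 * sigma, 8 * sigma, 40001)
unnorm = np.exp(-u ** 2 / (2 * sigma ** 2)) * (a * u + b)

# First moment of the posterior (the perceived speed shift) vs.
# the closed form (a/b) * sigma^2.
mu_numeric = np.trapz(u * unnorm, u) / np.trapz(unnorm, u)
mu_closed = (a / b) * sigma ** 2
```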
Exploiting the symmetries around the origin, we find

    µ(m) = ∫_{−∞}^{∞} v p(vm|m) dvm ≡ (a(m) / b(m)) σ(c, vm)²    (7)

The expected value of µ(m) is equal to the value of µ at the expected value of the measurement m (which is the stimulus velocity vm), thus

    E⟨µ⟩ = µ(m)|_{m=vm} = (a(vm) / b(vm)) σ(c, vm)²    (8)

Similarly, we derive var⟨v̂m⟩. Because the estimator is deterministic, the variance of the estimate only depends on the variance of the measurement m. For a given stimulus, the variance of the estimate can be well approximated by

    var⟨v̂m⟩ = var⟨m⟩ (∂v̂m(m)/∂m |_{m=vm})²    (9)
            = var⟨m⟩ (1 − ∂µ(m)/∂m |_{m=vm})² ≈ var⟨m⟩

Under the assumption of a locally smooth prior, the perceived velocity shift remains locally constant. The variance of the perceived speed v̂m becomes equal to the variance of the measurement m, which is the variance of the likelihood (in the m-space), thus

    var⟨v̂m⟩ = σ(c, vm)²    (10)

With (3) and (4), the above derivations provide a simple dependency of the psychophysical data on the local parameters of the likelihood and the prior.

3.2 Choosing a logarithmic speed representation

We now want to choose the appropriate mapping function f(v) that maps v to the m-space. We define the m-space as the space in which the likelihood is Gaussian with a speed-independent width. We have shown that the discrimination threshold is proportional to the width of the likelihood ((4), (10)). Also, we know from the psychophysics literature that visual speed discrimination approximately follows a Weber-Fechner law [11, 12], i.e. the discrimination threshold increases roughly proportionally with speed, and so would the likelihood width. A logarithmic speed representation would be compatible with the data and our choice of the likelihood. Hence, we transform the linear speed-domain v into a normalized logarithmic domain according to

    vm = f(v) = ln((v + v0) / v0)    (11)

where v0 is a small normalization constant. The normalization is chosen to account for the expected deviation from equal-variance behavior at the low end of the speed range.
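A small sketch of the mapping in Equation (11) (the value of v0 is an arbitrary assumption): a likelihood of fixed width in m-space corresponds, back in the linear speed domain, to discrimination thresholds that grow roughly in proportion to speed, i.e. Weber-Fechner behavior.

```python
import numpy as np

def f(v, v0=0.3):
    """Logarithmic speed mapping, Equation (11); v0 is assumed here."""
    return np.log((v + v0) / v0)

def f_inv(vm, v0=0.3):
    """Inverse mapping back to the linear speed domain."""
    return v0 * (np.exp(vm) - 1.0)

# A fixed width dm in m-space maps back to a linear-speed threshold of
# (v + v0)(e^dm - 1), i.e. roughly proportional to v.
dm = 0.1
speeds = np.array([1.0, 2.0, 4.0, 8.0, 12.0])   # the reference speeds used
thresholds = f_inv(f(speeds) + dm) - speeds
weber_fractions = thresholds / speeds
```

The Weber fractions approach the constant e^dm − 1 at high speeds and deviate at the low end, which is exactly the role of the normalization constant v0.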
Surprisingly, it has been found that neurons in the Medial Temporal area (Area MT) of macaque monkeys have speed-tuning curves that are very well approximated by Gaussians of constant width in the above normalized logarithmic space [13]. These neurons are known to play a central role in the representation of motion, and it seems natural to assume that they are strongly involved in tasks such as the psychophysical experiments we performed.

4 Results

Figure 4 shows the contrast-dependent shift of speed perception and the speed discrimination threshold data for two subjects. Data points connected with a dashed line represent the relative matching speed (v2/v1) for a particular contrast value c2 of the test stimulus as a function of the speed of the reference stimulus. Error bars are the empirical standard deviation of fits to bootstrapped samples of the data. Clearly, low contrast stimuli are perceived to move slower. The effect, however, varies across the tested speed range and tends to become smaller for higher speeds. The relative discrimination thresholds for two different contrasts as a function of speed show that the Weber-Fechner law holds only approximately. The data are in good agreement with other data from the psychophysics literature [1, 11, 8]. For each subject, data from both experiments were used to compute a parametric least-squares fit according to (3), (4), (7), and (10). In order to test the assumption of a LogNormal likelihood we allowed the standard deviation to be dependent on contrast and speed, thus σ(c, vm) = g(c) h(vm). We split the speed range into six bins (subject2: five) and parameterized h(vm) and the ratio a/b accordingly. Similarly, we parameterized g(c) for the seven contrast values. The resulting fits are superimposed as bold lines in Figure 4. Figure 5 shows the fitted parametric values for g(c) and h(v) (plotted in the linear domain), and the reconstructed prior distribution p(v) transformed back to the linear domain.
The approximately constant values for h(v) provide evidence that a LogNormal distribution is an appropriate functional description of the likelihood. The resulting values for g(c) suggest that the likelihood width has a roughly exponentially decaying dependency on contrast, with strong saturation for higher contrasts.

Figure 4: Speed discrimination data for two subjects. a) The relative matching speed of a test stimulus with different contrast levels (c2 = [0.05 0.1 0.2 0.4 0.8]) to achieve subjective equality with a reference stimulus (two different contrast values c1). b) The relative discrimination threshold for two stimuli with equal contrast (c1,2 = [0.075 0.5]).

Figure 5: Reconstructed prior distribution and parameters of the likelihood function. The reconstructed priors for both subjects show much heavier tails than a Gaussian (dashed fit), approximately following a power-law function with exponent n ≈ −1.4 (bold line).

5 Conclusions

We have proposed a probabilistic framework based on a Bayesian ideal observer and standard signal detection theory. We have derived a likelihood function and prior distribution for the estimator, with a fairly conservative set of assumptions, constrained by psychophysical measurements of speed discrimination and matching.
The width of the resulting likelihood is nearly constant in the logarithmic speed domain, and decreases approximately exponentially with contrast. The prior expresses a preference for slower speeds and approximately follows a power-law distribution, thus having much heavier tails than a Gaussian. It would be interesting to compare the prior distributions derived here with measured true distributions of local image velocities that impinge on the retina. Although a number of authors have measured the spatio-temporal structure of natural images (e.g. [14]), it is clearly difficult to extract from these the true prior distribution because of the feedback loop formed through movements of the body, head and eyes.

Acknowledgments

The authors thank all subjects for their participation in the psychophysical experiments.

References

[1] P. Thompson. Perceived rate of movement depends on contrast. Vision Research, 22:377–380, 1982.
[2] L.S. Stone and P. Thompson. Human speed perception is contrast dependent. Vision Research, 32(8):1535–1549, 1992.
[3] A. Yuille and N. Grzywacz. A computational theory for the perception of coherent visual motion. Nature, 333(5):71–74, May 1988.
[4] A. Stocker. Constraint Optimization Networks for Visual Motion Perception - Analysis and Synthesis. PhD thesis, Dept. of Physics, Swiss Federal Institute of Technology, Zürich, Switzerland, March 2002.
[5] E. Simoncelli. Distributed analysis and representation of visual motion. PhD thesis, MIT, Dept. of Electrical Engineering, Cambridge, MA, 1993.
[6] Y. Weiss, E. Simoncelli, and E. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598–604, June 2002.
[7] D.M. Green and J.A. Swets. Signal Detection Theory and Psychophysics. Wiley, New York, 1966.
[8] F. Hürlimann, D. Kiper, and M. Carandini. Testing the Bayesian model of perceived speed. Vision Research, 2002.
[9] Y. Weiss and D.J. Fleet.
Probabilistic Models of the Brain, chapter Velocity Likelihoods in Biological and Machine Vision, pages 77–96. Bradford, 2002.
[10] K. Koerding and D. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(15):244–247, January 2004.
[11] L. Welch. The perception of moving plaids reveals two motion-processing stages. Nature, 337:734–736, 1989.
[12] S. McKee, G. Silverman, and K. Nakayama. Precise velocity discrimination despite random variations in temporal frequency and contrast. Vision Research, 26(4):609–619, 1986.
[13] C.H. Anderson, H. Nover, and G.C. DeAngelis. Modeling the velocity tuning of macaque MT neurons. Journal of Vision/VSS abstract, 2003.
[14] D.W. Dong and J.J. Atick. Statistics of natural time-varying images. Network: Computation in Neural Systems, 6:345–358, 1995.
| 2004 | 103 | 2,512 |
Efficient Kernel Machines Using the Improved Fast Gauss Transform Changjiang Yang, Ramani Duraiswami and Larry Davis Department of Computer Science, Perceptual Interfaces and Reality Laboratory University of Maryland, College Park, MD 20742 {yangcj,ramani,lsd}@umiacs.umd.edu

Abstract

The computation and memory required for kernel machines with N training samples is at least O(N²). Such a complexity is significant even for moderate size problems and is prohibitive for large datasets. We present an approximation technique based on the improved fast Gauss transform to reduce the computation to O(N). We also give an error bound for the approximation, and provide experimental results on the UCI datasets.

1 Introduction

Kernel based methods, including support vector machines [16], regularization networks [5] and Gaussian processes [18], have attracted much attention in machine learning. The solid theoretical foundations and good practical performance of kernel methods make them very popular. However, one major drawback of kernel methods is their scalability. Kernel methods require O(N²) storage and O(N³) operations for direct methods, or O(N²) operations per iteration for iterative methods, which is impractical for large datasets. To deal with this scalability problem, many approaches have been proposed, including the Nyström method [19], sparse greedy approximation [13, 12], low-rank kernel approximation [3] and reduced support vector machines [9]. All of these try to find a reduced subset of the original dataset using either random selection or greedy approximation. In these methods there is no guarantee on the approximation of the kernel matrix in a deterministic sense. An assumption made in these methods is that most eigenvalues of the kernel matrix are zero. This is not always true, and its violation results in either performance degradation or a negligible reduction in computational time or memory.
We explore a deterministic method to speed up kernel machines using the improved fast Gauss transform (IFGT) [20, 21]. The kernel machine is solved iteratively using the conjugate gradient method, where the dominant computation is the matrix-vector product, which we accelerate using the IFGT. Rather than approximating the kernel matrix by a low-rank representation, we approximate the matrix-vector product by the improved fast Gauss transform to any desired precision. The total computational and storage costs are of linear order in the size of the dataset. We present the application of the IFGT to kernel methods in the context of Regularized Least-Squares Classification (RLSC) [11, 10], though the approach is general and can be extended to other kernel methods.

2 Regularized Least-Squares Classification

The RLSC algorithm [11, 10] solves binary classification problems in a Reproducing Kernel Hilbert Space (RKHS) [17]: given N training samples in d-dimensional space, xi ∈ R^d, and the labels yi ∈ {−1, 1}, find f ∈ H that minimizes the regularized risk functional

    min_{f∈H} (1/N) Σ_{i=1}^N V(yi, f(xi)) + λ‖f‖²_K,    (1)

where H is an RKHS with reproducing kernel K, V is a convex cost function and λ is the regularization parameter controlling the tradeoff between the cost and the smoothness. Based on the Representer Theorem [17], the solution has a representation as

    fλ(x) = Σ_{i=1}^N ci K(x, xi).    (2)

If the loss function V is the hinge function, V(y, f) = (1 − yf)+, where (τ)+ = τ for τ > 0 and 0 otherwise, then the minimization of (1) leads to the popular Support Vector Machines, which can be solved using quadratic programming. If the loss function V is the square-loss function, V(y, f) = (y − f)², the minimization of (1) leads to the so-called Regularized Least-Squares Classification, which requires only the solution of a linear system. The algorithm has been rediscovered several times and has many different names [11, 10, 4, 15].
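With the square loss, substituting the representer form (2) into (1) reduces training to a regularized linear solve over the expansion coefficients, derived as Equation (3) in what follows. A minimal sketch on synthetic data; all parameter values are arbitrary illustrations.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """K(x, x') = exp(-||x - x'||^2 / sigma^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def rlsc_fit(X, y, lam, sigma):
    """Direct O(N^3) solve of (K + lambda*N*I)c = y -- the baseline cost
    that the accelerated iterative solver is meant to avoid."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

def rlsc_predict(X_train, c, X_test, sigma):
    """f(x) = sum_i c_i K(x, x_i), the representer form (2)."""
    return gaussian_kernel(X_test, X_train, sigma) @ c

# Toy two-class problem (synthetic; illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (40, 2)), rng.normal(2.0, 1.0, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)
c = rlsc_fit(X, y, lam=1e-3, sigma=2.0)
acc = float(np.mean(np.sign(rlsc_predict(X, c, X, sigma=2.0)) == y))
```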
In this paper, we stick to the term “RLSC” for consistency. It has been shown in [11, 4] that RLSC achieves accuracy comparable to the popular SVMs for binary classification problems. If we substitute (2) into (1), and denote c = [c1, . . . , cN]^T, K = K(xi, xj), we can find the solution of (1) by solving the linear system

    (K + λ′I) c = y    (3)

where λ′ = λN, I is the identity matrix, and y = [y1, . . . , yN]^T. There are many choices for the kernel function K. The Gaussian is a good kernel for classification and is used in many applications. If a Gaussian kernel is applied, as shown in [10], the classification problem can be solved by the solution of a linear system, i.e., Regularized Least-Squares Classification. A direct solution of the linear system requires O(N³) computation and O(N²) storage, which is impractical even for problems of moderate size.

Algorithm 1 Regularized Least-Squares Classification
Require: Training dataset SN = (xi, yi), i = 1, . . . , N.
1. Choose the Gaussian kernel: K(x, x′) = exp(−‖x − x′‖²/σ²).
2. Find the solution as f(x) = Σ_{i=1}^N ci K(x, xi), where c satisfies the linear system (3).
3. Solve the linear system (3).

An effective way to solve the large-scale linear system (3) is to use iterative methods. Since the matrix K is symmetric, we consider the well-known conjugate gradient method, which solves the linear system (3) by iteratively performing the matrix-vector multiplication Kc. If rank(K) = r, then the conjugate gradient algorithm converges in at most r + 1 steps. Only one matrix-vector multiplication and 10N arithmetic operations are required per iteration, and only four N-vectors are required for storage. So the computational complexity is O(N²) for low-rank K, and the storage requirement is O(N²) for the kernel matrix. While this represents an improvement for most problems, the rank of the matrix may not be small, and moreover the quadratic storage and computational complexity are still too high for large datasets.
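A minimal conjugate gradient solver, written to touch the matrix only through a matvec callback, which is exactly the quantity the IFGT later approximates. The toy kernel matrix and parameters below are assumptions for illustration.

```python
import numpy as np

def conjugate_gradient(matvec, y, tol=1e-8, max_iter=1000):
    """Solve A c = y for symmetric positive definite A using only
    matrix-vector products with A."""
    c = np.zeros_like(y)
    r = y - matvec(c)          # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        c += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return c

# Here the matvec still uses the explicit kernel matrix; Sections 3-4
# replace it with an O(N) approximation.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 1.0)
A = K + 0.1 * np.eye(50)       # K + lambda' I
y = rng.choice([-1.0, 1.0], size=50)
c = conjugate_gradient(lambda v: A @ v, y)
```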
In the following sections, we present an algorithm to reduce the computational and storage complexity to linear order.

3 Fast Gauss Transform

The matrix-vector product Kc can be written in the form of the so-called discrete Gauss transform [8]

    G(yj) = Σ_{i=1}^N ci exp(−‖xi − yj‖²/σ²),    (4)

where ci are the weight coefficients, {xi}, i = 1, . . . , N, are the centers of the Gaussians (called “sources”), and σ is the bandwidth parameter of the Gaussians. The sum of the Gaussians is evaluated at each of the “target” points {yj}, j = 1, . . . , M. Direct evaluation of the Gauss transform at M target points due to N sources requires O(MN) operations. The Fast Gauss Transform (FGT) was invented by Greengard and Strain [8] for efficient evaluation of the Gauss transform in O(M + N) operations. It is an important variant of the more general Fast Multipole Method [7]. The FGT [8] expands the Gaussian function into Hermite functions. The expansion of the univariate Gaussian is

    exp(−(yj − xi)²/σ²) = Σ_{n=0}^{p−1} (1/n!) ((xi − x∗)/σ)^n h_n((yj − x∗)/σ) + ε(p),    (5)

where h_n(x) are the Hermite functions defined by h_n(x) = (−1)^n (d^n/dx^n) e^{−x²}, and x∗ is the expansion center. The d-dimensional Gaussian function is treated as a Kronecker product of d univariate Gaussians. For simplicity, we adopt the multi-index notation of the original FGT papers [8]. A multi-index α = (α1, . . . , αd) is a d-tuple of nonnegative integers. For any multi-index α ∈ N^d and any x ∈ R^d, we have the monomial x^α = x1^{α1} x2^{α2} · · · xd^{αd}. The length and the factorial of α are defined as |α| = α1 + α2 + . . . + αd and α! = α1! α2! · · · αd!. The multidimensional Hermite functions are defined by h_α(x) = h_{α1}(x1) h_{α2}(x2) · · · h_{αd}(xd). The sum (4) is then equal to the Hermite expansion about the center x∗:

    G(yj) = Σ_{α≥0} C_α h_α((yj − x∗)/σ),    C_α = (1/α!) Σ_{i=1}^N ci ((xi − x∗)/σ)^α,    (6)

where C_α are the coefficients of the Hermite expansions.
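A univariate sketch of the Hermite expansion (5), with the sources clustered near the expansion center so that a truncation at p = 8 is accurate; all sizes and values are arbitrary illustrations.

```python
import numpy as np
from math import factorial

def hermite_h(n_max, x):
    """Hermite functions h_n(x) = (-1)^n d^n/dx^n e^{-x^2},
    via the recurrence h_{n+1}(x) = 2x h_n(x) - 2n h_{n-1}(x)."""
    h = [np.exp(-x ** 2), 2 * x * np.exp(-x ** 2)]
    for n in range(1, n_max):
        h.append(2 * x * h[n] - 2 * n * h[n - 1])
    return h[:n_max + 1]

def gauss_direct(sources, weights, targets, sigma):
    """Direct O(MN) evaluation of the discrete Gauss transform (4)."""
    return (weights[None, :] *
            np.exp(-(sources[None, :] - targets[:, None]) ** 2 / sigma ** 2)).sum(1)

def gauss_hermite(sources, weights, targets, sigma, p, center):
    """Truncated univariate Hermite expansion, Equation (5)."""
    h = hermite_h(p - 1, (targets - center) / sigma)
    C = [np.sum(weights * ((sources - center) / sigma) ** n) / factorial(n)
         for n in range(p)]
    return sum(C[n] * h[n] for n in range(p))

rng = np.random.default_rng(3)
src = rng.uniform(-0.3, 0.3, 10)      # sources close to the center x* = 0
w = rng.uniform(0.0, 1.0, 10)
tgt = np.linspace(-1.0, 1.0, 50)
exact = gauss_direct(src, w, tgt, sigma=1.0)
approx = gauss_hermite(src, w, tgt, sigma=1.0, p=8, center=0.0)
```

The truncation error shrinks rapidly with p when the sources sit within a fraction of σ of the center, which is why the FGT clusters sources around expansion centers.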
If we truncate each of the Hermite series (6) after p terms (or equivalently order p − 1), then each of the coefficients C_α is a d-dimensional matrix with p^d terms. The total computational complexity for a single Hermite expansion is O((M + N) p^d). The factor O(p^d) grows exponentially as the dimensionality d increases. Despite this defect in higher dimensions, the FGT is quite effective for two- and three-dimensional problems, and has achieved success in some physics, computer vision and pattern recognition applications. In practice a single expansion about one center is not always valid or accurate over the entire domain. A space subdivision scheme is therefore applied in the FGT and the Gaussian functions are expanded at multiple centers. The original FGT subdivides space into uniform boxes, which is simple, but highly inefficient in higher dimensions. The number of boxes grows exponentially with dimensionality, which makes it inefficient for storage and for searching nonempty neighbor boxes. Most important, since the ratio of the volume of the hypercube to that of the inscribed sphere grows exponentially with dimension, points have a high probability of falling into the region inside the box but outside the sphere, where the truncation error of the Hermite expansion is much larger than inside the sphere.

3.1 Improved Fast Gauss Transform

In brief, the original FGT suffers from two defects:

1. The exponential growth of computational complexity with dimensionality.
2. The use of the box data structure, which is inefficient in higher dimensions.

We introduced the improved FGT [20, 21] to address these deficiencies; it is summarized below.

3.1.1 Multivariate Taylor Expansions

Instead of expanding the Gaussian into Hermite functions, we factorize it as

    exp(−‖yj − xi‖²/σ²) = exp(−‖Δyj‖²/σ²) exp(−‖Δxi‖²/σ²) exp(2 Δyj · Δxi/σ²),    (7)

where x∗ is the center of the sources, Δyj = yj − x∗, and Δxi = xi − x∗.
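Two quick numerical checks with arbitrary points: the factorization (7) is an exact identity, since ‖y − x‖² = ‖Δy‖² + ‖Δx‖² − 2Δy·Δx, and (anticipating the truncation discussed next) the multivariate Taylor expansion needs C(p+d−1, d) terms per center instead of the p^d Hermite coefficients.

```python
import numpy as np
from math import comb

# Check the factorization (7) at random points in 5 dimensions.
rng = np.random.default_rng(2)
s = 1.5
x, y, center = rng.normal(size=(3, 5))    # source, target, expansion center x*
dx, dy = x - center, y - center
lhs = np.exp(-np.sum((y - x) ** 2) / s ** 2)
rhs = (np.exp(-np.sum(dy ** 2) / s ** 2) * np.exp(-np.sum(dx ** 2) / s ** 2)
       * np.exp(2.0 * np.dot(dy, dx) / s ** 2))

def fgt_terms(p, d):
    """Hermite coefficients per center in the original FGT."""
    return p ** d

def ifgt_terms(p, d):
    """Taylor terms up to total order p - 1: r_{p-1,d} = C(p+d-1, d)."""
    return comb(p + d - 1, d)
```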
The first two exponential terms can be evaluated individually at the source points or target points. In the third term, the sources and the targets are entangled. Here we break the entanglement by expanding it into a multivariate Taylor series

    exp(2 Δyj · Δxi/σ²) = Σ_{n=0}^∞ (2^n/n!) ((Δxi/σ) · (Δyj/σ))^n = Σ_{|α|≥0} (2^{|α|}/α!) (Δxi/σ)^α (Δyj/σ)^α.    (8)

If we truncate the series after total order p − 1, then the number of terms is r_{p−1,d} = C(p + d − 1, d), which is much less than p^d in higher dimensions. For d = 12 and p = 10, the original FGT needs 10^12 terms, while the multivariate Taylor expansion needs only 293930. For d → ∞ and moderate p, the number of terms is O(d^p), a substantial reduction. From Eqs. (7) and (8), the weighted sum of Gaussians (4) can be expressed as a multivariate Taylor expansion about the center x∗:

    G(yj) = Σ_{|α|≥0} C_α exp(−‖yj − x∗‖²/σ²) ((yj − x∗)/σ)^α,    (9)

where the coefficients C_α are given by

    C_α = (2^{|α|}/α!) Σ_{i=1}^N ci exp(−‖xi − x∗‖²/σ²) ((xi − x∗)/σ)^α.    (10)

The coefficients C_α can be efficiently evaluated with r_{p−1,d} storage and r_{p−1,d} − 1 multiplications using the multivariate Horner’s rule [20].

3.1.2 Spatial Data Structures

To efficiently subdivide the space, we need a scheme that adaptively subdivides the space according to the distribution of the points. It is also desirable to generate cells that are as compact as possible. Based on these considerations, we model the space subdivision task as a k-center problem [1]: given a set of N points and a predefined number of clusters k, find a partition of the points into clusters S1, . . . , Sk, with cluster centers c1, . . . , ck, that minimizes the maximum radius of any cluster:

    max_i max_{v∈Si} ‖v − ci‖.

The k-center problem is known to be NP-hard. Gonzalez [6] proposed a very simple greedy algorithm, called farthest-point clustering. Initially, pick an arbitrary point v0 as the center of the first cluster and add it to the center set C. Then, for i = 1 to k, do the following: in iteration i, for every point, compute its distance to the set C: di(v, C) = min_{c∈C} ‖v − c‖.
Let vi be a point that is farthest away from C, i.e., a point for which di(vi, C) = max_v di(v, C), and add vi to the center set C. After k iterations, report the points v0, v1, . . . , vk−1 as the cluster centers. Each point is then assigned to its nearest center. Gonzalez [6] proved that farthest-point clustering is a 2-approximation algorithm, i.e., it computes a partition with maximum radius at most twice the optimum. The direct implementation of farthest-point clustering has running time O(Nk). Feder and Greene [2] give a two-phase algorithm with optimal running time O(N log k). In practice, we used circular lists to index the points and achieved the complexity O(N log k) empirically.

3.1.3 The Algorithm and Error Bound

The improved fast Gauss transform consists of the following steps:

Algorithm 2 Improved Fast Gauss Transform
1. Assign the N sources into k clusters using the farthest-point clustering algorithm such that the radius is less than σρx.
2. Choose p sufficiently large such that the error estimate (11) is less than the desired precision ϵ.
3. For each cluster Sk with center ck, compute the coefficients given by (10).
4. For each target yj, find its neighbor clusters whose centers lie within the range σρy; the sum of Gaussians (4) can then be evaluated by the expression (9).

The amount of work required in step 1 is O(N log k) using Feder and Greene’s algorithm [2]. The amount of work required in step 3 is O(N r_{p−1,d}). The work required in step 4 is O(M n r_{p−1,d}), where n ≤ k is the maximum number of neighbor clusters for each target. So the improved fast Gauss transform achieves linear running time. The algorithm needs to store the k coefficient vectors of size r_{p−1,d}, so the storage complexity is reduced to O(k r_{p−1,d}). To verify the linear order of our algorithm, we generate N source points and N target points in 4, 6, 8, and 10 dimensional unit hypercubes using a uniform distribution.
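The farthest-point clustering step above can be sketched as follows. This is the direct O(Nk) implementation on synthetic blob data, not the circular-list variant used in the paper.

```python
import numpy as np

def farthest_point_clustering(X, k):
    """Gonzalez's greedy 2-approximation for the k-center problem:
    repeatedly add the point farthest from the current center set."""
    centers = [0]                                  # arbitrary first center v0
    d = np.linalg.norm(X - X[0], axis=1)           # distance to center set C
    for _ in range(1, k):
        i = int(np.argmax(d))                      # farthest point from C
        centers.append(i)
        d = np.minimum(d, np.linalg.norm(X - X[i], axis=1))
    labels = np.argmin(
        np.linalg.norm(X[:, None, :] - X[centers][None, :, :], axis=2), axis=1)
    return np.array(centers), labels, float(d.max())  # max cluster radius

# Three well-separated synthetic blobs; k = 3 should pick one center per blob.
rng = np.random.default_rng(4)
blobs = [rng.normal(m, 0.5, (30, 2)) for m in [(0, 0), (10, 0), (0, 10)]]
X = np.vstack(blobs)
centers, labels, radius = farthest_point_clustering(X, 3)
```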
The weights on the source points are generated from a uniform distribution in the interval [0, 1], and σ = 1. The results of the IFGT and the direct evaluation are displayed in Figure 1(a), (b), and confirm the linear order of the IFGT. The error of the improved fast Gauss transform is bounded by

    |E(G(yj))| ≤ Σ_{i=1}^N |ci| ( (2^p/p!) ρx^p ρy^p + e^{−(ρy−ρx)²} ).    (11)

The details are in [21]. The comparison between the maximum absolute errors in the simulation and the estimated error bound (11) is displayed in Figure 1(c) and (d). It shows that the error bound is very conservative compared with the real errors. Empirically we can obtain the parameters on a randomly selected subset and use them on the entire dataset.

4 IFGT Accelerated RLSC: Discussion and Experiments

The key idea of all acceleration methods is to reduce the cost of the matrix-vector product. In reduced-subset methods, this is done by evaluating the product at a few points, assuming that the matrix is low rank. The general Fast Multipole Methods (FMM) seek to analytically approximate the possibly full-rank matrix as a sum of low-rank approximations with a tight error bound [14] (the FGT is a variant of the FMM with a Gaussian kernel). It is expected that these methods can be more robust, while at the same time achieving significant acceleration. The problems to which kernel methods are usually applied are in higher dimensions, though the intrinsic dimensionality of the data is expected to be much smaller. The original FGT does not scale well to higher dimensions. Its cost is of linear order in the number of samples, but exponential order in the number of dimensions. The improved FGT uses new data structures and a modified expansion to reduce this to polynomial order. Despite this improvement, at first glance, even with the use of the IFGT, it is not clear if the reduction in complexity will be competitive with the other approaches proposed.
[Figure 1: (a) Running time and (b) maximum absolute error w.r.t. N in d = 4, 6, 8, 10. The comparison between the real maximum absolute errors and the estimated error bound (11) w.r.t. (c) the order of the Taylor series p, and (d) the radius of the farthest-point clustering algorithm r_x = σρ_x. The uniformly distributed source and target points are in 4 dimensions.] Reason for hope is provided by the fact that in high dimensions we expect the IFGT with very low order expansions to converge rapidly (because of the sharply vanishing exponential terms multiplying the expansion in factorization (7)). Thus we expect that, combined with a dimensionality reduction technique, we can achieve very competitive solutions. In this paper we explore the application of the IFGT-accelerated RLSC to certain standard problems that have already been solved by the other techniques. While dimensionality reduction would be desirable, here we do not perform such a reduction, for fair comparison. We use small order expansions (p = 1 and p = 2) in the IFGT and run the iterative solver. In the first experiment, we compared the performance of the IFGT on approximating the sums (4) with the Nyström method [19]. The experiments were carried out on a Pentium 4 1.4 GHz PC with 512 MB memory. We generate N source points and N target points in 100-dimensional unit hypercubes using a uniform distribution.
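For reference, the Nyström baseline in this comparison approximates the kernel sum (4) through a landmark subset of the sources. A minimal NumPy sketch under our own naming (the paper does not give its exact implementation):

```python
import numpy as np

def gauss_kernel(X, Y, sigma):
    """Pairwise Gaussian kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / sigma^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def nystrom_sum(X, Y, q, sigma, landmarks):
    """Nystrom approximation of G(y_j) = sum_i q_i exp(-||x_i - y_j||^2 / sigma^2).

    Uses the low-rank factorization K_yx ~= K_ys pinv(K_ss) K_sx built
    from the landmark subset X[landmarks]."""
    K_sx = gauss_kernel(X[landmarks], X, sigma)   # k x N
    K_ss = K_sx[:, landmarks]                     # k x k
    K_ys = gauss_kernel(Y, X[landmarks], sigma)   # M x k
    return K_ys @ (np.linalg.pinv(K_ss) @ (K_sx @ q))
```

With the landmark set equal to all sources the approximation is exact up to numerical error, a convenient sanity check; accuracy with few landmarks depends on the effective rank of the kernel matrix, which is why the IFGT's insensitivity to k (reported below) is notable.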
The weights on the source points are generated using a uniform distribution in the interval [0, 1]. We directly evaluate the sums (4) as the ground truth, where σ^2 = 0.5d and d is the dimensionality of the data. Then we estimate it using the improved fast Gauss transform and the Nyström method. To compare the results, we use the maximum relative error to measure the precision of the approximations. Given a precision of 0.5%, we use the error bound (11) to find the parameters of the IFGT, and use a trial-and-error method to find the parameter of the Nyström method. Then we vary the number of points, N, from 500 to 5000 and plot the time against N in Figure 2(a). The results show the IFGT is much faster than the Nyström method. We also fix the number of points to N = 1000, vary the number of centers (or the random subset size) k from 10 to 1000, and plot the results in Figure 2(b). The results show that the errors of the IFGT are not sensitive to the number of centers, which means we can use a very small number of centers to achieve a good approximation. The accuracy of the Nyström method catches up only at large k, where the direct evaluation may even be faster. The intuition is that the use of expansions improves the accuracy of the approximation and relaxes the requirement on the centers. [Figure 2: Performance comparison between the approximation methods. (a) Running time against N and (b) maximum relative error against k for fixed N = 1000 in 100 dimensions.] Table 1: Ten-fold training and testing accuracy in percentage and training time in seconds using the four classifiers on the five UCI datasets. The same value σ^2 = 0.5d is used in all the classifiers. A rectangular kernel matrix with a random subset size of 20% of N was used in PSVM on the Galaxy Dim and Mushroom datasets.
Dataset (size × dimension): training % / testing % / time (s), each in the order RLSC+FGT, RLSC, Nyström, PSVM.
Ionosphere (251 × 34): training 94.8400 / 97.7209 / 91.8656 / 95.1250; testing 91.7302 / 90.6032 / 88.8889 / 94.0079; time 0.3711 / 1.1673 / 0.4096 / 0.8862
BUPA Liver (345 × 6): training 79.6789 / 81.7318 / 76.7488 / 75.8134; testing 71.0336 / 67.8403 / 69.2857 / 71.4874; time 0.1279 / 0.4833 / 0.1475 / 0.3468
Tic-Tac-Toe (958 × 9): training 88.7263 / 88.6917 / 88.4945 / 92.9715; testing 86.9507 / 85.4890 / 84.1272 / 87.2680; time 0.3476 / 2.9676 / 1.8326 / 3.9891
Galaxy Dim (4192 × 14): training 93.2967 / 93.3206 / 93.7023 / 93.6705; testing 93.2014 / 93.2258 / 93.7020 / 93.5589; time 2.0972 / 78.3526 / 3.1081 / 44.5143
Mushroom (8124 × 22): training 88.2556 / 87.9001 / failed / 85.5955; testing 87.9615 / 87.6658 / failed / 85.4629; time 14.7422 / 341.7148 / failed / 285.1126
In the second experiment, five datasets from the UCI repository are used to compare the performance of four different methods for classification: RLSC with the IFGT, RLSC with full kernel evaluation, RLSC with the Nyström method, and the Proximal Support Vector Machine (PSVM) [4]. The Gaussian kernel is used for all these methods. We use the same value σ^2 = 0.5d for a fair comparison. The ten-fold cross-validation accuracy on training and testing and the training time are listed in Table 1. The RLSC with the IFGT is the fastest among the four classifiers on all five datasets, while its training and testing accuracy is close to that of the RLSC with full kernel evaluation. The RLSC with the Nyström approximation is nearly as fast, but its accuracy is lower than that of the other methods. Worse, it is not always feasible to solve the linear systems, which results in the failure on the Mushroom dataset. The PSVM is accurate on training and testing, but slow and memory-demanding for large datasets, even with subset reduction. 5 Conclusions and Discussion We presented an improved fast Gauss transform to speed up kernel machines with the Gaussian kernel to linear order. The simulations and the classification experiments show that the algorithm is in general faster and more accurate than other matrix approximation methods.
At present, we do not consider the reduction from the support vector set or dimensionality reduction. The combination of the improved fast Gauss transform with these techniques should bring even more reduction in computation. Another improvement to the algorithm is an automatic procedure to tune the parameters. A possible solution could be running a series of testing problems and tuning the parameters accordingly. If the bandwidth is very small compared with the data range, the nearest neighbor searching algorithms could be a better solution to these problems. Acknowledgments We would like to thank Dr. Nail Gumerov for many discussions. We also gratefully acknowledge support of NSF awards 9987944, 0086075 and 0219681. References [1] M. Bern and D. Eppstein. Approximation algorithms for geometric problems. In D. Hochbaum, editor, Approximation Algorithms for NP-Hard Problems, chapter 8, pages 296–345. PWS Publishing Company, Boston, 1997. [2] T. Feder and D. Greene. Optimal algorithms for approximate clustering. In Proc. 20th ACM Symp. Theory of computing, pages 434–444, Chicago, Illinois, 1988. [3] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2:243–264, Dec. 2001. [4] G. Fung and O. L. Mangasarian. Proximal support vector machine classifiers. In Proceedings KDD-2001: Knowledge Discovery and Data Mining, pages 77–86, San Francisco, CA, 2001. [5] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219–269, 1995. [6] T. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293–306, 1985. [7] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J. Comput. Phys., 73(2):325–348, 1987. [8] L. Greengard and J. Strain. The fast Gauss transform. SIAM J. Sci. Statist. Comput., 12(1):79– 94, 1991. [9] Y.-J. Lee and O. Mangasarian. 
RSVM: Reduced support vector machines. In First SIAM International Conference on Data Mining, Chicago, 2001. [10] T. Poggio and S. Smale. The mathematics of learning: Dealing with data. Notices of the American Mathematical Society (AMS), 50(5):537–544, 2003. [11] R. Rifkin. Everything Old Is New Again: A Fresh Look at Historical Approaches in Machine Learning. PhD thesis, MIT, Cambridge, MA, 2002. [12] A. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural Information Processing Systems, pages 619–625. MIT Press, 2001. [13] A. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In Proc. Int'l Conf. Machine Learning, pages 911–918. Morgan Kaufmann, 2000. [14] X. Sun and N. P. Pitsianis. A matrix version of the fast multipole method. SIAM Review, 43(2):289–300, 2001. [15] J. A. K. Suykens and J. Vandewalle. Least squares support vector machine classifiers. Neural Processing Letters, 9(3):293–300, 1999. [16] V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995. [17] G. Wahba. Spline Models for Observational Data. SIAM, Philadelphia, PA, 1990. [18] C. K. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Trans. Pattern Anal. Mach. Intell., 20(12):1342–1351, Dec. 1998. [19] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, pages 682–688. MIT Press, 2001. [20] C. Yang, R. Duraiswami, N. Gumerov, and L. Davis. Improved fast Gauss transform and efficient kernel density estimation. In Proc. ICCV 2003, pages 464–471, 2003. [21] C. Yang, R. Duraiswami, and N. A. Gumerov. Improved fast Gauss transform. Technical Report CS-TR-4495, UMIACS, Univ. of Maryland, College Park, 2003.
|
2004
|
104
|
2,513
|
Responding to modalities with different latencies Fredrik Bissmarck Computational Neuroscience Labs ATR International Hikari-dai 2-2-2, Seika, Soraku Kyoto 619-0288 JAPAN xfredrik@atr.jp Hiroyuki Nakahara Laboratory for Mathematical Neuroscience RIKEN Brain Science Institute Hirosawa 2-1-1, Wako Saitama 351-0198 JAPAN hiro@brain.riken.jp Kenji Doya Initial Research Project Okinawa Institute of Science and Technology 12-22 Suzaki, Gushikawa Okinawa 904-2234 JAPAN doya@irp.oist.jp Okihide Hikosaka Laboratory of Sensorimotor Research National Eye Institute, NIH Building 49, Room 2A50 Bethesda, MD 20892 oh@lsr.nei.nih.gov Abstract Motor control depends on sensory feedback in multiple modalities with different latencies. In this paper we consider within the framework of reinforcement learning how different sensory modalities can be combined and selected for real-time, optimal movement control. We propose an actor-critic architecture with multiple modules, whose output are combined using a softmax function. We tested our architecture in a simulation of a sequential reaching task. Reaching was initially guided by visual feedback with a long latency. Our learning scheme allowed the agent to utilize the somatosensory feedback with shorter latency when the hand is near the experienced trajectory. In simulations with different latencies for visual and somatosensory feedback, we found that the agent depended more on feedback with shorter latency. 1 Introduction For motor response, the brain relies on several modalities. These may carry different information. For example, vision keeps us updated on external world events, while somatosensation gives us detailed information about the state of the motor system. For most human behaviour, both are crucial for optimal performance. However, modalities may also differ in latency. For example, information may be perceived faster by the somatosensory pathway than the visual. 
For quick responses it would be reasonable that the modality with the shorter latency is more important. The slower modality would be useful if it carries additional information, for example when we have to attend to a visual cue. There has been a lot of research on modular organisation where each module is an expert for a particular part of the state space (e.g. [1]). We address questions concerning modules with different feedback delays, and how they are used for real-time motor control. How does the latency affect the influence of a modality over action? How can modalities be combined? Here, we propose an actor-critic framework, where modules compete for influence over action by reinforcement. First, we present the generic framework and learning algorithm. Then, we apply our model to a visuomotor sequence learning task, and give details of the simulation results. 2 General framework Figure 1: The general framework. This section describes the generic concepts of our model: a set of modules with delayed feedback, a function for combining them, and a learning algorithm. 2.1 Network architecture Consider M modules, where each module has its own feedback signal y^m(x(t - τ^m)) (m = 1, 2, ..., M) computed from the state of the environment x(t). Each module has a corresponding time delay τ^m (see Figure 1). (The same feedback signals are used to compute the critic; see the next subsection.) Each module outputs a population-coded output a^m(t), where each element a^m_j (j = 1, 2, ..., J) corresponds to the motor output vector u_j, which represents, for example, joint torques. The output of an actor is given by a function approximator a^m(t) = f(y^m(t - τ^m); w^m) with parameters w^m. The actual motor command u ∈ R^D is given by a combination of the population vector outputs a^m of the modules. Here we consider the use of softmax combination.
The probability of taking the j-th motor output vector is given by

π_j(t) = exp(β Σ_{m=1}^M a^m_j) / Σ_{j'=1}^J exp(β Σ_{m=1}^M a^m_{j'}),

where β is the inverse temperature, controlling the stochasticity. At each moment, one of the motor command vectors is selected as p(u(t) = ū_j) = π_j(t). We define q(t) to be a binary vector of J elements, where the element corresponding to the chosen action is 1 and the others are 0. There is no mechanism in the architecture that explicitly favours a module with shorter latency. Instead, we test whether a reinforcement learning algorithm can learn to select the modules which are more useful to the agent. 2.2 Learning algorithm Our model is a form of the continuous actor-critic [2]. The function of the critic is to estimate the expected future reward, i.e. to learn the value function V = V(y^1(t - τ^1), y^2(t - τ^2), ..., y^M(t - τ^M); w^c), where w^c is a set of trainable parameters. The temporal difference (TD) error δ_TD is the discrepancy between expected and actual reward r(t). In its continuous form:

δ_TD(t) = r(t) - (1/τ_TD) V(t) + V̇(t),

where τ_TD is the future reward discount time constant. The TD error is used to update the parameters of both the critic and the actor, which in our framework is the set of modules. Learning of each actor module is guided by the action deviation signal

E_j(t) = (q_j(t) - π_j(t))^2 / 2,

which is the difference between its output and the action that was actually selected. Parameters of the critic and actors are updated using eligibility traces

ė^c_k(t) = -(1/κ) e^c_k + ∂V/∂w^c_k,    ė^m_{kj}(t) = -(1/κ) e^m_{kj} + ∂E_j(t)/∂w^m_{kj},

where k is the index of parameters and κ is a time constant. The trace for the m-th actor is given from

∂E_j(t)/∂w^m_{kj} = (q_j(t) - π_j(t)) ∂π_j(t)/∂w^m_{kj}.

The parameters are updated by gradient descent as

ẇ^c_k = α δ_TD(t) e^c_k(t),    ẇ^m_{kj} = α δ_TD(t) e^m_{kj}(t),

where α denotes the learning rate.
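The softmax combination and stochastic action selection can be sketched directly. This is a plain-Python illustration with toy inputs of our own; the max-subtraction is a standard numerical-stability trick, not part of the paper's equations:

```python
import math
import random

def combine_and_select(module_outputs, beta=10.0, rng=random):
    """Softmax combination of M population-coded module outputs.

    module_outputs: M vectors of length J (one activation per candidate
    motor command u_j).  Returns the action probabilities pi and a
    sampled action index j.
    """
    J = len(module_outputs[0])
    # sum the modules' votes for each candidate action
    totals = [sum(a[j] for a in module_outputs) for j in range(J)]
    top = max(totals)  # subtract the max before exponentiating (stability)
    expd = [math.exp(beta * (t - top)) for t in totals]
    Z = sum(expd)
    pi = [e / Z for e in expd]
    j = rng.choices(range(J), weights=pi)[0]
    return pi, j
```

Because the module outputs are summed inside the exponential, a module that learns large activations for an action effectively suppresses the others' preferences, which is the competition-by-reinforcement mechanism the text describes.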
2.3 Neuroanatomical correlates Our network architecture is modeled to resemble the function of the basal ganglia-thalamocortical (BG-TC) system in selecting and learning actions for goal-directed movement. Actor-critic models of the basal ganglia have been proposed by many (e.g. [3], [4]). The modular organisation of the BG-TC loop circuits ([5], [6]), where modules depend on different sensory feedback, implies that the actor-critic depends on several modules. 3 An example application To demonstrate our paradigm, we use a motor sequence learning task inspired by "the n x n task", an experimental paradigm in which monkeys and humans learn a sequence of reaching movements; error performance improved across days, and performance time decreased across months [7]. The results from these experiments suggested that the influence of the motor BG-TC loop on motor execution is relatively stronger for learned sequences than for new ones, compared to the prefrontal BG-TC loop. In our model implementation, we want to investigate how the feedback delay affects the influence of the visual and somatosensory modalities when learning a stereotyped real-time motor sequence. In our implementation (see Figure 2), we use two modules, one "visual" and one "motor", corresponding to visual and somatosensory feedback respectively. The visual module represents a preknown, visually guided reaching policy for arbitrary start and end points within reach. This module does not learn. The motor module represents the motor skill memory to be learned. It gives zero output initially, but learns by associating reinforcement with sequences of actions. The controlled object is a 2-DOF arm, for which the agent gives a joint torque motor command, with action selection sampled at 100 Hz. 3.1 Environment Figure 2: Implementation of the example simulation.
The visual module is fed back the hand position {ξ^hand_1, ξ^hand_2} and the position of the active target {ξ^target_1, ξ^target_2}, while the motor module is fed back a population code representing the joint angles {θ_1, θ_2}. S: Start, G: Goal. See text for further details. The environment consists of a 2-DOF arm (both links 0.3 m long, 0.1 m in diameter and 1.0 kg in weight), starting at position S, directly controlled by the agent, and a variable target (see the environment box in Figure 2). The task is to press three targets in consecutive order; they always appear at the same positions (one at a time), marked 1, 2 and 3 in the figure. If the hand of the arm satisfies a proximity condition (|ξ^target - ξ^hand| < ξ^prox and |ξ̇^hand| < v^prox), a key (target) is considered pressed, and the next target appears immediately. To allow a larger possibility of modifying the movement, we have a very loose velocity constraint v^prox (for all simulations, ξ^prox = 0.02 m and v^prox = 0.5 m/s). Each trial ended after successful completion of the task, or after 5 s. For each successful key press, the agent is rewarded instantaneously, with an increasing amount of reward for later keys in the sequence (50, 100, 150 respectively). A small, constant running time cost (10/s) was subtracted from the reward function r(t). 3.2 The visual module The visual module is designed as a computed-torque feedback controller for simplicity. It was designed to give an output as similar as possible to biological reaching movements, but we did not attempt to design the controller itself in a biologically plausible way. The feedback signal y^v to the visual module consists of the hand kinematics ξ^hand, ξ̇^hand and the target position ξ^target. Using a computed-torque feedback control law, the visual module uses these signals to generate a reaching movement, representing the preknown motor behaviour of the agent.
As such a control law does not have measures to deal with delayed signals, we make the assumption that the control law relies on ˜ξ^hand(t) = ξ^hand(t), i.e. the controller can compensate for the delay in the arm state (the target signal is still delayed by τ^v). This is a limitation of our example, but is a necessity to avoid "motor babbling", for which learning time would be infinitely long. The controller output is

u̇^visual(t) = -(1/τ_CT) u^visual(t) + λ u^visual′(¨˜ξ^hand, ˙˜ξ^hand, e),

where τ_CT and λ are constants, e = ξ^target(t - τ^v) - ˜ξ^hand(t), and

u^visual′(t) = J^T (M(¨˜ξ^hand + K_1 ˙˜ξ^hand - K_2 e) + C ˙˜ξ^hand),

where J is the Jacobian (∂θ/∂˜ξ^hand), M the moment of inertia matrix and C the Coriolis matrix. With proper control gains K_1 and K_2, the filter helps to give bell-shaped velocity profiles for the reaching movement, desirable for their resemblance to biological motion. The output u^visual is then expanded to a population vector

a^v_j(t) = (1/Z) exp( -(1/2) Σ_d ((u^visual_d(t) - ū_jd) / σ″_jd)^2 ),

where Z is the normalisation term, ū_jd is the preferred joint torque for Cartesian dimension d of vector element j, and σ″_jd the corresponding variance. Parameters: τ_CT = 50 ms, λ = 100, K_1 = [10 0; 0 10], K_2 = [50 0; 0 50]. The preferred joint torques ū_j corresponding to action j were distributed symmetrically about the origin in a 5x5 grid, in the range (-100:100, -100:100), with the middle (0,0) unit removed. The corresponding variances σ″_jd were half the distance to the closest node in each direction.
Thus, the feedback signal y^m with k = 1, 2, ..., K is partitioned by K_0: the first part (k ≤ K_0) represents the motor state, and the second part (k > K_0) represents the context. The feedback to the motor module are the joint angles and angular velocities θ, θ̇ of the arm, expanded to a population vector with K_0 elements:

y^m_k(t) = (1/Z) exp( -(1/2) { Σ_d ((θ_d(t) - θ̄_kd) / σ_kd)^2 + Σ_d ((θ̇_d(t) - ω̄_kd) / σ′_kd)^2 } ),

where θ̄_kd, ω̄_kd are preferred joint angles and velocities, σ_kd and σ′_kd are the corresponding variances, and Z is a normalisation term. The context units form n = 1, 2, ..., N tapped delay lines (where N corresponds to the number of keys in the sequence), and each delay line has Q units. For k > K_0, k ≠ K_0 + Q(n - 1) + 1:

ẏ^m_k(t) = -(1/τ_C) y^m_k(t) + y_{k-1}(t).

Each delay line is initiated by the input at k = K_0 + Q(n - 1) + 1:

y^m_k(t) = δ(t - τ^keypress_n),

where δ is the Dirac delta function and τ^keypress_n is the instant the n-th key was pressed. The response signal a^m is the linear combination of y^m with the trainable matrix W^m,

a^m(t) = W^m y^m(t - τ^m).

Though it would be reasonable to use both feedback pathways for the critic, for simplicity we use only the motor one:

V(t) = W^c y^m(t - τ^m).

Parameters: the preferred joint angles θ̄_kd and angular velocities ω̄_kd were distributed uniformly in a 7x7x3x3 grid (K_0 = 441 nodes) for k = 1, 2, ..., K_0, in the ranges (-0.2:1.2, 1.2:1.6) rad and (-1:1, -1:1) rad/s. The corresponding variances σ_kd and σ′_kd were half the distance to the closest node in each direction. The contextual part of the vector has Q = 8, N = 3, which makes 24 elements. The time constant τ_C = 30 ms. 4 Simulation results We trained the model for four different feedback delay pairs (τ^v / τ^m, in ms): 100/0, 100/50, 100/100, 0/100 (β = 10, τ_TD = 200 ms, κ = 200 ms, α = 0.1 s^{-1}). We stopped the simulations after 125,000 trials.
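Before turning to the results, the Gaussian population coding used throughout (e.g. the motor-state code y^m_k above) can be sketched as follows. The argument layout is our own, and taking Z as the sum of the unnormalised activations is one plausible reading of the 1/Z factor:

```python
import math

def population_code(theta, theta_dot, centers, sig, sig_dot):
    """Gaussian population code y_k over the joint state (theta, theta_dot).

    centers: per unit, a pair (theta_bar, omega_bar) of preferred joint
    angles and angular velocities; sig / sig_dot: per-unit, per-dimension
    widths.  Activations are normalised to sum to one.
    """
    acts = []
    for (tb, wb), s, sd in zip(centers, sig, sig_dot):
        # squared Mahalanobis-style distance to the unit's preferred state
        e = sum(((t - c) / w) ** 2 for t, c, w in zip(theta, tb, s))
        e += sum(((v - c) / w) ** 2 for v, c, w in zip(theta_dot, wb, sd))
        acts.append(math.exp(-0.5 * e))
    Z = sum(acts)
    return [a / Z for a in acts]
```

A unit whose preferred state matches the current joint state dominates the code, which is what lets the linear readouts a^m = W^m y^m and V = W^c y^m represent smoothly varying functions of the state.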
Two properties are essential for our argument: the shortest feedback delay τ_min = min(τ^v, τ^m) and the relative latency ∆τ = τ^v - τ^m. Figure 3: (Left) Change in performance time (running averages) across trials for different feedback delays (displayed in ms as visual/motor). (Right) Example hand trajectories for the initial (gray lines) and learned (black lines) behaviour for the run with 100 ms/0 ms delay. 4.1 Final performance time depends on the shortest latency Figure 3 shows that the performance time (PT, the time it takes to complete one trial) improved in all four simulations. The final PT relates to the shortest latency τ_min: the shorter the latency, the better the final performance. However, there are three possible reasons for speedup: 1) a more deterministic (greedy) policy π, 2) a change in trajectory, and 3) faster reaction by utilisation of faster feedback. As we observed more stereotyped trajectories and more deterministic policies after learning, reason 1) is true, but does it account for the entire improvement? For the rather exploratory, visually guided initial movement, the average PT is 1.55 s and 1.25 s for τ^v = 100 ms and τ^v = 0 ms respectively, while the corresponding greedy-policy PTs are 1.41 s and 1.13 s. Since the final PTs were always lower, the speedup must also be due to other changes in behaviour. Figure 3 (right) shows example trajectories of the initial (gray) and learned (black) policy in the 100/0 case. We see that while the initial movement was directed target-by-target, the learned one displays a smoothly curved movement, optimized to perform the entire sequence. This is expected, as the discounted reward (determined by τ_TD) and the time cost favour fast movements over slow ones. This change was to some degree observed in all four simulations, although it was most evident (see the next subsection) in the 100/0 case. We also see that the shorter τ_min, the shorter the final PT.
Reason 3) is also significant: the possibility to speed up the movement is limited by τ_min. Figure 4: Performance after learning, with typical examples of hand trajectories in a normal condition and in a condition with the visual module turned off, for agents with different feedback delays. Average performance times are displayed for each. When the visual module was turned off, the agent often failed to complete the sequence in 5 s. Success rates are shown in parentheses, and the corresponding averages are for the successful trials only. The solid lines highlight the trajectory while execution is stable, while the dashed lines show the parts where the agent is out of control. 4.2 The module with shorter latency is more influential over motor control Figure 4 shows the performance of sufficiently learned behaviour (after 125,000 trials) for two conditions: one normal ("condition 1") and one with the visual module turned off ("condition 2"). Condition 1 is shown mainly for reference. The differences in trajectories in condition 1 are marginal, but execution tends to destabilize with longer τ_min. Condition 2 reveals the dependence on the visual module. In the 100/0 case, the correct spatial trajectory is generated each time, but a sometimes too fast movement leads to overshoots at the 2nd and 3rd keys. For smaller ∆τ (rightwards in Figure 4) the execution becomes unstable, and in the 0/100 case it could never execute the movement. For some reason, when the 100/100 case kept the hand on track, it was less likely to overshoot than the 100/50 case, which is why its average PT and success rate are better. Thus, we conclude that the faster module is more influential over motor control. The adaptiveness of the motor loop also offers the motor module an advantage over the visual one. 5 Conclusion Our framework offers a natural way to combine modules with different feedback latencies. In any particular situation, the learning algorithm will reinforce the better module to use.
When execution is fast, the module with the shorter latency may be favourable; when slow, the one with more information. For example, in the vicinity of the experienced sequence, our agent utilized the somatosensory feedback to execute the movement more quickly, but once it lost control the visual feedback was needed to put the arm back on track. By using the softmax function it is possible to flexibly gate or combine module outputs. Sometimes the asynchrony of modules can cause the visual and motor modules to be directed towards different targets. Then it is desirable to suppress the slower module in favour of the faster one, which also occurred in our example by reinforcing the motor module enough to suppress the visual one. In other situations the reliability of one module may be insufficient for robust execution, making it necessary to combine modules. In our 100/0 example, the slower visual module was used to assist the faster motor module in learning a skill. Once the skill was acquired, the visual module was no longer necessary for skillful execution, unless something went wrong. Thus, the visual module is freer to attend to other tasks. When we learn to ride a bicycle, for example, we first need to attend to what we do, but once we have learned, we can attend to other things, like the surrounding traffic or a conversation. Our result suggests that a longer relative latency helps to make the faster modality independent, so the slower one can be decoupled from execution after learning. In the human brain, forward models are likely to have access to an efference copy of the motor command, which may be more important than the incoming feedback for fast movements [1]. This is something we intend to look at in future work. Also, we will extend this work with a more theoretical analysis, and compare the performance of multiple adaptive modules. Acknowledgements The research is supported by CREST. The authors would like to thank Erhan Oztop and Jun Morimoto for helpful comments.
References [1] M. Haruno, D. M. Wolpert, and M. Kawato. Mosaic model for sensorimotor learning and control. Neural Comput, 13(10):2201–20, 2001. [2] K. Doya. Reinforcement learning in continuous time and space. Neural Comput, 12(1):219–45, 2000. [3] K. Doya. What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural Netw, 12(7-8):961–974, 1999. [4] N. Daw. Reinforcement learning models of the dopamine system and their behavioral implications. PhD thesis, Carnegie Mellon University, 2003. [5] G. E. Alexander and M. D. Crutcher. Functional architecture of basal ganglia circuits: neural substrates of parallel processing. Trends Neurosci, 13(7):266–71, 1990. [6] H. Nakahara, K. Doya, and O. Hikosaka. Parallel cortico-basal ganglia mechanisms for acquisition and execution of visuomotor sequences - a computational approach. J Cogn Neurosci, 13(5):626–47, 2001. [7] O. Hikosaka, H. Nakahara, M. K. Rand, K. Sakai, X. Lu, K. Nakamura, S. Miyachi, and K. Doya. Parallel neural networks for learning sequential procedures. Trends Neurosci, 22(10):464–71, 1999.
|
2004
|
105
|
2,514
|
Two-Dimensional Linear Discriminant Analysis Jieping Ye Department of CSE University of Minnesota jieping@cs.umn.edu Ravi Janardan Department of CSE University of Minnesota janardan@cs.umn.edu Qi Li Department of CIS University of Delaware qili@cis.udel.edu Abstract Linear Discriminant Analysis (LDA) is a well-known scheme for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as face recognition and image retrieval. An intrinsic limitation of classical LDA is the so-called singularity problem, that is, it fails when all scatter matrices are singular. A well-known approach to deal with the singularity problem is to apply an intermediate dimension reduction stage using Principal Component Analysis (PCA) before LDA. The algorithm, called PCA+LDA, is used widely in face recognition. However, PCA+LDA has high costs in time and space, due to the need for an eigen-decomposition involving the scatter matrices. In this paper, we propose a novel LDA algorithm, namely 2DLDA, which stands for 2-Dimensional Linear Discriminant Analysis. 2DLDA overcomes the singularity problem implicitly, while achieving efficiency. The key difference between 2DLDA and classical LDA lies in the model for data representation. Classical LDA works with vectorized representations of data, while the 2DLDA algorithm works with data in matrix representation. To further reduce the dimension by 2DLDA, the combination of 2DLDA and classical LDA, namely 2DLDA+LDA, is studied, where LDA is preceded by 2DLDA. The proposed algorithms are applied on face recognition and compared with PCA+LDA. Experiments show that 2DLDA and 2DLDA+LDA achieve competitive recognition accuracy, while being much more efficient. 1 Introduction Linear Discriminant Analysis [2, 4] is a well-known scheme for feature extraction and dimension reduction. 
It has been used widely in many applications such as face recognition [1], image retrieval [6], microarray data classification [3], etc. Classical LDA projects the data onto a lower-dimensional vector space such that the ratio of the between-class distance to the within-class distance is maximized, thus achieving maximum discrimination. The optimal projection (transformation) can be readily computed by applying the eigendecomposition on the scatter matrices. An intrinsic limitation of classical LDA is that its objective function requires the nonsingularity of one of the scatter matrices. For many applications, such as face recognition, all scatter matrices in question can be singular since the data is from a very high-dimensional space, and in general, the dimension exceeds the number of data points. This is known as the undersampled or singularity problem [5]. In recent years, many approaches have been brought to bear on such high-dimensional, undersampled problems, including pseudo-inverse LDA, PCA+LDA, and regularized LDA. More details can be found in [5]. Among these LDA extensions, PCA+LDA has received a lot of attention, especially for face recognition [1]. In this two-stage algorithm, an intermediate dimension reduction stage using PCA is applied before LDA. The common aspect of previous LDA extensions is the computation of eigen-decomposition of certain large matrices, which not only degrades the efficiency but also makes it hard to scale them to large datasets. In this paper, we present a novel approach to alleviate the expensive computation of the eigen-decomposition in previous LDA extensions. The novelty lies in a different data representation model. Under this model, each datum is represented as a matrix, instead of as a vector, and the collection of data is represented as a collection of matrices, instead of as a single large matrix. This model has been previously used in [8, 9, 7] for the generalization of SVD and PCA. 
Unlike classical LDA, we consider the projection of the data onto a space which is the tensor product of two vector spaces. We formulate our dimension reduction problem as an optimization problem in Section 3. Unlike classical LDA, there is no closed form solution for the optimization problem; instead, we derive a heuristic, namely 2DLDA. To further reduce the dimension, which is desirable for efficient querying, we consider the combination of 2DLDA and LDA, namely 2DLDA+LDA, where the dimension of the space transformed by 2DLDA is further reduced by LDA. We perform experiments on three well-known face datasets to evaluate the effectiveness of 2DLDA and 2DLDA+LDA and compare with PCA+LDA, which is used widely in face recognition. Our experiments show that: (1) 2DLDA is applicable to high-dimensional undersampled data such as face images, i.e., it implicitly avoids the singularity problem encountered in classical LDA; and (2) 2DLDA and 2DLDA+LDA have distinctly lower costs in time and space than PCA+LDA, and achieve classification accuracy that is competitive with PCA+LDA.

2 An overview of LDA
In this section, we give a brief overview of classical LDA. Some of the important notations used in the rest of this paper are listed in Table 1. Given a data matrix A ∈ ℝ^{N×n}, classical LDA aims to find a transformation G ∈ ℝ^{N×ℓ} that maps each column a_i of A, for 1 ≤ i ≤ n, in the N-dimensional space to a vector b_i in the ℓ-dimensional space. That is, G : a_i ∈ ℝ^N → b_i = G^T a_i ∈ ℝ^ℓ (ℓ < N). Equivalently, classical LDA aims to find a vector space G spanned by {g_i}_{i=1}^ℓ, where G = [g_1, · · · , g_ℓ], such that each a_i is projected onto G by (g_1^T · a_i, · · · , g_ℓ^T · a_i)^T ∈ ℝ^ℓ. Assume that the original data in A is partitioned into k classes as A = {Π_1, · · · , Π_k}, where Π_i contains n_i data points from the ith class, and Σ_{i=1}^k n_i = n.
Classical LDA aims to find the optimal transformation G such that the class structure of the original high-dimensional space is preserved in the low-dimensional space. In general, if each class is tightly grouped, but well separated from the other classes, the quality of the cluster is considered to be high. In discriminant analysis, two scatter matrices, called within-class (S_w) and between-class (S_b) matrices, are defined to quantify the quality of the cluster, as follows [4]:

S_w = Σ_{i=1}^k Σ_{x∈Π_i} (x − m_i)(x − m_i)^T,   S_b = Σ_{i=1}^k n_i (m_i − m)(m_i − m)^T,

where m_i = (1/n_i) Σ_{x∈Π_i} x is the mean of the ith class, and m = (1/n) Σ_{i=1}^k Σ_{x∈Π_i} x is the global mean.

Notation   Description
n          number of images in the dataset
k          number of classes in the dataset
A_i        ith image in matrix representation
a_i        ith image in vectorized representation
r          number of rows in A_i
c          number of columns in A_i
N          dimension of a_i (N = r · c)
Π_j        jth class in the dataset
L          transformation matrix (left) by 2DLDA
R          transformation matrix (right) by 2DLDA
I          number of iterations in 2DLDA
B_i        reduced representation of A_i by 2DLDA
ℓ_1        number of rows in B_i
ℓ_2        number of columns in B_i
Table 1: Notation

It is easy to verify that trace(S_w) measures the closeness of the vectors within the classes, while trace(S_b) measures the separation between classes. In the low-dimensional space resulting from the linear transformation G (or the linear projection onto the vector space G), the within-class and between-class matrices become S_b^L = G^T S_b G and S_w^L = G^T S_w G. An optimal transformation G would maximize trace(S_b^L) and minimize trace(S_w^L). Common optimizations in classical discriminant analysis include (see [4]):

max_G trace((S_w^L)^{−1} S_b^L)   and   min_G trace((S_b^L)^{−1} S_w^L).   (1)

The optimization problems in Eq. (1) are equivalent to the following generalized eigenvalue problem: S_b x = λ S_w x, for λ ≠ 0. The solution can be obtained by applying an eigendecomposition to the matrix S_w^{−1} S_b, if S_w is nonsingular, or S_b^{−1} S_w, if S_b is nonsingular.
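The scatter matrices above are straightforward to compute. The following numpy sketch (function name and toy data are ours, not from the paper) builds S_w and S_b from a column-wise data matrix and checks the standard identity that S_w + S_b equals the total scatter:

```python
import numpy as np

def scatter_matrices(A, labels):
    """Within-class (Sw) and between-class (Sb) scatter of the columns of A.

    A      : (N, n) data matrix, one vectorized image per column
    labels : length-n array of class indices
    """
    m = A.mean(axis=1, keepdims=True)            # global mean
    N = A.shape[0]
    Sw = np.zeros((N, N))
    Sb = np.zeros((N, N))
    for c in np.unique(labels):
        Ac = A[:, labels == c]                   # columns of class c
        mc = Ac.mean(axis=1, keepdims=True)      # class mean
        D = Ac - mc
        Sw += D @ D.T                            # within-class deviations
        Sb += Ac.shape[1] * (mc - m) @ (mc - m).T
    return Sw, Sb

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 12))                 # 12 points in 5 dimensions
labels = np.repeat([0, 1, 2], 4)                 # 3 classes of 4 points each
Sw, Sb = scatter_matrices(A, labels)

# sanity check: Sw + Sb equals the total scatter St
C = A - A.mean(axis=1, keepdims=True)
assert np.allclose(Sw + Sb, C @ C.T)
```

The identity checked in the last line is why maximizing trace(S_b^L) and minimizing trace(S_w^L) are complementary goals.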
There are at most k − 1 eigenvectors corresponding to nonzero eigenvalues, since the rank of the matrix S_b is bounded from above by k − 1. Therefore, the reduced dimension by classical LDA is at most k − 1. A stable way to compute the eigen-decomposition is to apply SVD on the scatter matrices. Details can be found in [6]. Note that a limitation of classical LDA in many applications involving undersampled data, such as text documents and images, is that at least one of the scatter matrices is required to be nonsingular. Several extensions, including pseudo-inverse LDA, regularized LDA, and PCA+LDA, were proposed in the past to deal with the singularity problem. Details can be found in [5].

3 2-Dimensional LDA
The key difference between classical LDA and the 2DLDA that we propose in this paper is in the representation of data. While classical LDA uses the vectorized representation, 2DLDA works with data in matrix representation. We will see later in this section that the matrix representation in 2DLDA leads to an eigendecomposition on matrices with much smaller sizes. More specifically, 2DLDA involves the eigen-decomposition of matrices with sizes r × r and c × c, which are much smaller than the matrices in classical LDA. This dramatically reduces the time and space complexities of 2DLDA over LDA. Unlike classical LDA, 2DLDA considers the following (ℓ_1 × ℓ_2)-dimensional space L ⊗ R, which is the tensor product of the following two spaces: L spanned by {u_i}_{i=1}^{ℓ_1} and R spanned by {v_i}_{i=1}^{ℓ_2}. Define two matrices L = [u_1, · · · , u_{ℓ_1}] ∈ ℝ^{r×ℓ_1} and R = [v_1, · · · , v_{ℓ_2}] ∈ ℝ^{c×ℓ_2}. Then the projection of X ∈ ℝ^{r×c} onto the space L ⊗ R is L^T X R ∈ ℝ^{ℓ_1×ℓ_2}. Let A_i ∈ ℝ^{r×c}, for i = 1, · · · , n, be the n images in the dataset, clustered into classes Π_1, · · · , Π_k, where Π_i has n_i images. Let M_i = (1/n_i) Σ_{X∈Π_i} X be the mean of the ith class, 1 ≤ i ≤ k, and M = (1/n) Σ_{i=1}^k Σ_{X∈Π_i} X be the global mean.
In 2DLDA, we consider images as two-dimensional signals and aim to find two transformation matrices L ∈ ℝ^{r×ℓ_1} and R ∈ ℝ^{c×ℓ_2} that map each A_i ∈ ℝ^{r×c}, for 1 ≤ i ≤ n, to a matrix B_i ∈ ℝ^{ℓ_1×ℓ_2} such that B_i = L^T A_i R. Like classical LDA, 2DLDA aims to find the optimal transformations (projections) L and R such that the class structure of the original high-dimensional space is preserved in the low-dimensional space. A natural similarity metric between matrices is the Frobenius norm [8]. Under this metric, the (squared) within-class and between-class distances D_w and D_b can be computed as follows:

D_w = Σ_{i=1}^k Σ_{X∈Π_i} ||X − M_i||²_F,   D_b = Σ_{i=1}^k n_i ||M_i − M||²_F.

Using the property of the trace, that is, trace(M M^T) = ||M||²_F for any matrix M, we can rewrite D_w and D_b as follows:

D_w = trace( Σ_{i=1}^k Σ_{X∈Π_i} (X − M_i)(X − M_i)^T ),   D_b = trace( Σ_{i=1}^k n_i (M_i − M)(M_i − M)^T ).

In the low-dimensional space resulting from the linear transformations L and R, the within-class and between-class distances become

D̃_w = trace( Σ_{i=1}^k Σ_{X∈Π_i} L^T (X − M_i) R R^T (X − M_i)^T L ),
D̃_b = trace( Σ_{i=1}^k n_i L^T (M_i − M) R R^T (M_i − M)^T L ).

The optimal transformations L and R would maximize D̃_b and minimize D̃_w. Due to the difficulty of computing the optimal L and R simultaneously, we derive an iterative algorithm in the following. More specifically, for a fixed R, we can compute the optimal L by solving an optimization problem similar to the one in Eq. (1). With the computed L, we can then update R by solving another optimization problem as the one in Eq. (1). Details are given below. The procedure is repeated a certain number of times, as discussed in Section 4.

Computation of L. For a fixed R, D̃_w and D̃_b can be rewritten as D̃_w = trace(L^T S_w^R L) and D̃_b = trace(L^T S_b^R L), where

S_w^R = Σ_{i=1}^k Σ_{X∈Π_i} (X − M_i) R R^T (X − M_i)^T,   S_b^R = Σ_{i=1}^k n_i (M_i − M) R R^T (M_i − M)^T.

Similar to the optimization problem in Eq. (1), the optimal L can be computed by solving the following optimization problem: max_L trace((L^T S_w^R L)^{−1}(L^T S_b^R L)). The solution can be obtained by solving the following generalized eigenvalue problem: S_w^R x = λ S_b^R x. Since S_w^R is in general nonsingular, the optimal L can be obtained by computing an eigendecomposition on (S_w^R)^{−1} S_b^R. Note that the size of the matrices S_w^R and S_b^R is r × r, which is much smaller than the size of the matrices S_w and S_b in classical LDA.

Computation of R. Next, consider the computation of R, for a fixed L. A key observation is that D̃_w and D̃_b can be rewritten as D̃_w = trace(R^T S_w^L R) and D̃_b = trace(R^T S_b^L R), where

S_w^L = Σ_{i=1}^k Σ_{X∈Π_i} (X − M_i)^T L L^T (X − M_i),   S_b^L = Σ_{i=1}^k n_i (M_i − M)^T L L^T (M_i − M).

This follows from the following property of the trace: trace(AB) = trace(BA), for any two matrices A and B. Similarly, the optimal R can be computed by solving the following optimization problem: max_R trace((R^T S_w^L R)^{−1}(R^T S_b^L R)). The solution can be obtained by solving the following generalized eigenvalue problem: S_w^L x = λ S_b^L x. Since S_w^L is in general nonsingular, the optimal R can be obtained by computing an eigen-decomposition on (S_w^L)^{−1} S_b^L. Note that the size of the matrices S_w^L and S_b^L is c × c, much smaller than S_w and S_b. The pseudo-code for the 2DLDA algorithm is given in Algorithm 2DLDA.

Algorithm 2DLDA(A_1, · · · , A_n, ℓ_1, ℓ_2)
Input: A_1, · · · , A_n, ℓ_1, ℓ_2
Output: L, R, B_1, · · · , B_n
1. Compute the mean M_i of the ith class for each i as M_i = (1/n_i) Σ_{X∈Π_i} X;
2. Compute the global mean M = (1/n) Σ_{i=1}^k Σ_{X∈Π_i} X;
3. R_0 ← (I_{ℓ_2}, 0)^T;
4. For j from 1 to I
5.   S_w^R ← Σ_{i=1}^k Σ_{X∈Π_i} (X − M_i) R_{j−1} R_{j−1}^T (X − M_i)^T,
     S_b^R ← Σ_{i=1}^k n_i (M_i − M) R_{j−1} R_{j−1}^T (M_i − M)^T;
6.   Compute the first ℓ_1 eigenvectors {φ_ℓ^L}_{ℓ=1}^{ℓ_1} of (S_w^R)^{−1} S_b^R;
7.   L_j ← [φ_1^L, · · · , φ_{ℓ_1}^L];
8.   S_w^L ← Σ_{i=1}^k Σ_{X∈Π_i} (X − M_i)^T L_j L_j^T (X − M_i),
     S_b^L ← Σ_{i=1}^k n_i (M_i − M)^T L_j L_j^T (M_i − M);
9.   Compute the first ℓ_2 eigenvectors {φ_ℓ^R}_{ℓ=1}^{ℓ_2} of (S_w^L)^{−1} S_b^L;
10.  R_j ← [φ_1^R, · · · , φ_{ℓ_2}^R];
11. EndFor
12. L ← L_I, R ← R_I;
13. B_ℓ ← L^T A_ℓ R, for ℓ = 1, · · · , n;
14. return(L, R, B_1, · · · , B_n).
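Algorithm 2DLDA above transcribes almost line-for-line into numpy. The sketch below is ours (function names and toy data are not from the paper); it assumes S_w^R and S_w^L are nonsingular, which the paper notes is generally the case:

```python
import numpy as np

def top_eigvecs(S, k):
    """First k eigenvectors of S, ordered by decreasing (real) eigenvalue."""
    w, V = np.linalg.eig(S)
    order = np.argsort(-w.real)
    return np.real(V[:, order[:k]])

def two_dlda(As, labels, l1, l2, n_iter=1):
    """Alternately fix R and solve for L, then fix L and solve for R,
    each via an eigen-decomposition of a small r x r or c x c matrix pair."""
    As = np.asarray(As)                      # (n, r, c) stack of images
    n, r, c = As.shape
    classes = np.unique(labels)
    M = As.mean(axis=0)                      # global mean
    Ms = {k: As[labels == k].mean(axis=0) for k in classes}
    ns = {k: int(np.sum(labels == k)) for k in classes}

    R = np.eye(c)[:, :l2]                    # R0 = (I_l2, 0)^T
    L = None
    for _ in range(n_iter):
        # r x r scatter matrices with R fixed
        SwR = sum((X - Ms[k]) @ R @ R.T @ (X - Ms[k]).T
                  for X, k in zip(As, labels))
        SbR = sum(ns[k] * (Ms[k] - M) @ R @ R.T @ (Ms[k] - M).T
                  for k in classes)
        L = top_eigvecs(np.linalg.solve(SwR, SbR), l1)
        # c x c scatter matrices with L fixed
        SwL = sum((X - Ms[k]).T @ L @ L.T @ (X - Ms[k])
                  for X, k in zip(As, labels))
        SbL = sum(ns[k] * (Ms[k] - M).T @ L @ L.T @ (Ms[k] - M)
                  for k in classes)
        R = top_eigvecs(np.linalg.solve(SwL, SbL), l2)
    Bs = np.stack([L.T @ X @ R for X in As])
    return L, R, Bs

# toy data: 4 classes of 8 x 6 "images" with class-specific offsets
rng = np.random.default_rng(1)
As = rng.standard_normal((20, 8, 6)) \
     + np.repeat(rng.standard_normal((4, 8, 6)), 5, axis=0)
labels = np.repeat(np.arange(4), 5)
L, R, Bs = two_dlda(As, labels, l1=3, l2=3)
print(Bs.shape)   # (20, 3, 3)
```

The only matrices ever decomposed are 8 × 8 and 6 × 6, never 48 × 48, which is the source of the efficiency gain the paper describes.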
It is clear that the most expensive steps in Algorithm 2DLDA are in Lines 5, 8 and 13, and the total time complexity is O(n · max(ℓ_1, ℓ_2) · (r + c)² · I), where I is the number of iterations. The 2DLDA algorithm depends on the initial choice R_0. Our experiments show that choosing R_0 = (I_{ℓ_2}, 0)^T, where I_{ℓ_2} is the identity matrix, produces excellent results. We use this initial R_0 in all the experiments. Since the number of rows (r) and the number of columns (c) of an image A_i are generally comparable, i.e., r ≈ c ≈ √N, we set ℓ_1 and ℓ_2 to a common value d in the rest of this paper, for simplicity. However, the algorithm works in the general case. With this simplification, the time complexity of the 2DLDA algorithm becomes O(ndNI). The space complexity of 2DLDA is O(rc) = O(N). The key to the low space complexity of the algorithm is that the matrices S_w^R, S_b^R, S_w^L, and S_b^L can be formed by reading the matrices A_ℓ incrementally.

3.1 2DLDA+LDA
As mentioned in the Introduction, PCA is commonly applied as an intermediate dimension-reduction stage before LDA to overcome the singularity problem of classical LDA. In this section, we consider the combination of 2DLDA and LDA, namely 2DLDA+LDA, where the dimension by 2DLDA is further reduced by LDA, since small reduced dimension is desirable for efficient querying. More specifically, in the first stage of 2DLDA+LDA, each image A_i ∈ ℝ^{r×c} is reduced to B_i ∈ ℝ^{d×d} by 2DLDA, with d < min(r, c). In the second stage, each B_i is first transformed to a vector b_i ∈ ℝ^{d²} (matrix-to-vector alignment), then b_i is further reduced to b_i^L ∈ ℝ^{k−1} by LDA with k − 1 < d², where k is the number of classes. Here, “matrix-to-vector alignment” means that the matrix is transformed to a vector by concatenating all its rows together consecutively. The time complexity of the first stage by 2DLDA is O(ndNI). The second stage applies classical LDA to data in d²-dimensional space, hence takes O(n(d²)²), assuming n > d².
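The second stage of 2DLDA+LDA (matrix-to-vector alignment followed by classical LDA down to k − 1 dimensions) can be sketched as follows. The d × d matrices below stand in for the stage-1 outputs B_i of 2DLDA, and all names are ours:

```python
import numpy as np

def lda_transform(X, labels, dim):
    """Classical LDA on the rows of X: project onto the top `dim`
    eigenvectors of Sw^{-1} Sb (Sw assumed nonsingular)."""
    m = X.mean(axis=0)
    D = X.shape[1]
    Sw = np.zeros((D, D))
    Sb = np.zeros((D, D))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - m, mc - m)
    w, V = np.linalg.eig(np.linalg.solve(Sw, Sb))
    G = np.real(V[:, np.argsort(-w.real)[:dim]])
    return X @ G

# stand-ins for the stage-1 outputs B_i of 2DLDA (d x d matrices, d = 4)
rng = np.random.default_rng(2)
k, per_class, d = 5, 10, 4
Bs = rng.standard_normal((k * per_class, d, d)) \
     + np.repeat(rng.standard_normal((k, d, d)), per_class, axis=0)
labels = np.repeat(np.arange(k), per_class)

# matrix-to-vector alignment: concatenate the rows of each B_i
b = Bs.reshape(len(Bs), d * d)               # (n, d^2)
bL = lda_transform(b, labels, dim=k - 1)     # reduced to k - 1 = 4 dims
print(bL.shape)   # (50, 4)
```

Note that C-ordered `reshape` concatenates rows consecutively, matching the paper's definition of matrix-to-vector alignment, and that the d²-dimensional second stage avoids the singularity problem because d² is chosen well below n.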
Hence the total time complexity of 2DLDA+LDA is O(nd(NI + d³)).

4 Experiments
In this section, we experimentally evaluate the performance of 2DLDA and 2DLDA+LDA on face images and compare with PCA+LDA, used widely in face recognition. For PCA+LDA, we use 200 principal components in the PCA stage, as it produces good overall results. All of our experiments are performed on a P4 1.80GHz Linux machine with 1GB memory. For all the experiments, the 1-Nearest-Neighbor (1NN) algorithm is applied for classification and ten-fold cross validation is used for computing the classification accuracy.

Datasets: We use three face datasets in our study: PIX, ORL, and PIE, which are publicly available. PIX (available at http://peipa.essex.ac.uk/ipa/pix/faces/manchester/testhard/) contains 300 face images of 30 persons. The image size is 512 × 512. We subsample the images down to a size of 100 × 100 = 10000. ORL (available at http://www.uk.research.att.com/facedatabase.html) contains 400 face images of 40 persons. The image size is 92 × 112. PIE is a subset of the CMU–PIE face image dataset (available at http://www.ri.cmu.edu/projects/project 418.html). It contains 6615 face images of 63 persons. The image size is 640 × 480. We subsample the images down to a size of 220 × 175 = 38500. Note that PIE is much larger than the other two datasets.

Figure 1: Effect of the number of iterations on 2DLDA and 2DLDA+LDA using the three face datasets; PIX, ORL and PIE (from left to right).

The impact of the number, I, of iterations: In this experiment, we study the effect of the number of iterations (I in Algorithm 2DLDA) on 2DLDA and 2DLDA+LDA.
The results are shown in Figure 1, where the x-axis denotes the number of iterations, and the y-axis denotes the classification accuracy. d = 10 is used for both algorithms. It is clear that both accuracy curves are stable with respect to the number of iterations. In general, the accuracy curves of 2DLDA+LDA are slightly more stable than those of 2DLDA. The key consequence is that we need to run the “for” loop (from Line 4 to Line 11) in Algorithm 2DLDA only once, i.e., I = 1, which significantly reduces the total running time of both algorithms.

The impact of the value of the reduced dimension d: In this experiment, we study the effect of the value of d on 2DLDA and 2DLDA+LDA, where the value of d determines the dimensionality of the space transformed by 2DLDA. We did extensive experiments using different values of d on the face image datasets. The results are summarized in Figure 2, where the x-axis denotes the values of d (between 1 and 15) and the y-axis denotes the classification accuracy with 1-Nearest-Neighbor as the classifier. As shown in Figure 2, the accuracy curves on all datasets stabilize around d = 4 to 6.

Comparison on classification accuracy and efficiency: In this experiment, we evaluate the effectiveness of the proposed algorithms in terms of classification accuracy and efficiency and compare with PCA+LDA. The results are summarized in Table 2. We can observe that 2DLDA+LDA has similar performance as PCA+LDA in classification, while it outperforms 2DLDA. Hence the LDA stage in 2DLDA+LDA not only reduces the dimension, but also increases the accuracy. Another key observation from Table 2 is that 2DLDA is almost one order of magnitude faster than PCA+LDA, while the running time of 2DLDA+LDA is close to that of 2DLDA.
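The evaluation protocol used throughout the experiments, 1-Nearest-Neighbor classification with ten-fold cross validation, is simple to reproduce. A minimal numpy sketch on synthetic two-class data follows (all names and the toy data are ours):

```python
import numpy as np

def one_nn_accuracy(train_X, train_y, test_X, test_y):
    """1-Nearest-Neighbor classification accuracy (Euclidean distance)."""
    # pairwise squared distances between test and train points
    d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    pred = train_y[np.argmin(d2, axis=1)]
    return float(np.mean(pred == test_y))

def ten_fold_cv(X, y, seed=0):
    """Average 1NN accuracy over a random ten-fold split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    accs = []
    for fold in np.array_split(idx, 10):
        mask = np.ones(len(X), bool)
        mask[fold] = False                      # hold out this fold
        accs.append(one_nn_accuracy(X[mask], y[mask], X[fold], y[fold]))
    return float(np.mean(accs))

# two well-separated synthetic classes
rng = np.random.default_rng(3)
X = np.vstack([rng.standard_normal((30, 5)),
               rng.standard_normal((30, 5)) + 5.0])
y = np.repeat([0, 1], 30)
print(ten_fold_cv(X, y))   # close to 1.0 for well-separated classes
```

In the paper's pipeline, X would be the reduced representations (b_i^L for 2DLDA+LDA, or flattened B_i for 2DLDA) rather than synthetic points.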
Hence 2DLDA+LDA is a more effective dimension reduction algorithm than PCA+LDA, as it is competitive with PCA+LDA in classification and has the same number of reduced dimensions in the transformed space, while it has much lower time and space costs.

5 Conclusions
An efficient algorithm, namely 2DLDA, is presented for dimension reduction. 2DLDA is an extension of LDA. The key difference between 2DLDA and LDA is that 2DLDA works on the matrix representation of images directly, while LDA uses a vector representation. 2DLDA has asymptotically minimum memory requirements, and lower time complexity than LDA, which is desirable for large face datasets, while it implicitly avoids the singularity problem encountered in classical LDA. We also study the combination of 2DLDA and LDA, namely 2DLDA+LDA, where the dimension by 2DLDA is further reduced by LDA. Experiments show that 2DLDA and 2DLDA+LDA are competitive with PCA+LDA, in terms of classification accuracy, while they have significantly lower time and space costs.

Figure 2: Effect of the value of the reduced dimension d on 2DLDA and 2DLDA+LDA using the three face datasets; PIX, ORL and PIE (from left to right).

Dataset   PCA+LDA               2DLDA                 2DLDA+LDA
          Accuracy  Time(Sec)   Accuracy  Time(Sec)   Accuracy  Time(Sec)
PIX       98.00%    7.73        97.33%    1.69        98.50%    1.73
ORL       97.75%    12.5        97.50%    2.14        98.00%    2.19
PIE       —         —           99.32%    153         100%      157
Table 2: Comparison on classification accuracy and efficiency. “—” means that PCA+LDA is not applicable for PIE, due to its large size. Note that PCA+LDA involves an eigendecomposition of the scatter matrices, which requires the whole data matrix to reside in main memory.

Acknowledgment
Research of J. Ye and R.
Janardan is sponsored, in part, by the Army High Performance Computing Research Center under the auspices of the Department of the Army, Army Research Laboratory cooperative agreement number DAAD19-01-2-0014, the content of which does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred.

References
[1] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.
[2] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern Classification. Wiley, 2000.
[3] S. Dudoit, J. Fridlyand, and T.P. Speed. Comparison of discrimination methods for the classification of tumors using gene expression data. Journal of the American Statistical Association, 97(457):77–87, 2002.
[4] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, San Diego, California, USA, 1990.
[5] W.J. Krzanowski, P. Jonathan, W.V. McCarthy, and M.R. Thomas. Discriminant analysis with singular covariance matrices: methods and applications to spectroscopic data. Applied Statistics, 44:101–115, 1995.
[6] D.L. Swets and J. Weng. Using discriminant eigenfeatures for image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8):831–836, 1996.
[7] J. Yang, D. Zhang, A.F. Frangi, and J.Y. Yang. Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(1):131–137, 2004.
[8] J. Ye. Generalized low rank approximations of matrices. In ICML Conference Proceedings, pages 887–894, 2004.
[9] J. Ye, R. Janardan, and Q. Li. GPCA: An efficient dimension reduction scheme for image compression and retrieval. In ACM SIGKDD Conference Proceedings, pages 354–363, 2004.
Assignment of Multiplicative Mixtures in Natural Images Odelia Schwartz HHMI and Salk Institute La Jolla, CA 92014 odelia@salk.edu Terrence J. Sejnowski HHMI and Salk Institute La Jolla, CA 92014 terry@salk.edu Peter Dayan GCNU, UCL 17 Queen Square, London dayan@gatsby.ucl.ac.uk Abstract In the analysis of natural images, Gaussian scale mixtures (GSM) have been used to account for the statistics of filter responses, and to inspire hierarchical cortical representational learning schemes. GSMs pose a critical assignment problem, working out which filter responses were generated by a common multiplicative factor. We present a new approach to solving this assignment problem through a probabilistic extension to the basic GSM, and show how to perform inference in the model using Gibbs sampling. We demonstrate the efficacy of the approach on both synthetic and image data. Understanding the statistical structure of natural images is an important goal for visual neuroscience. Neural representations in early cortical areas decompose images (and likely other sensory inputs) in a way that is sensitive to sophisticated aspects of their probabilistic structure. This structure also plays a key role in methods for image processing and coding. A striking aspect of natural images that has reflections in both top-down and bottom-up modeling is coordination across nearby locations, scales, and orientations. From a top-down perspective, this structure has been modeled using what is known as a Gaussian Scale Mixture model (GSM).1–3 GSMs involve a multi-dimensional Gaussian (each dimension of which captures local structure as in a linear filter), multiplied by a spatialized collection of common hidden scale variables or mixer variables∗ (which capture the coordination).
GSMs have wide implications in theories of cortical receptive field development, eg the comprehensive bubbles framework of Hyvärinen.4 The mixer variables provide the top-down account of two bottom-up characteristics of natural image statistics, namely the ‘bowtie’ statistical dependency,5,6 and the fact that the marginal distributions of receptive field-like filters have high kurtosis.7,8 In hindsight, these ideas also bear a close relationship with Ruderman and Bialek’s multiplicative bottom-up image analysis framework9 and statistical models for divisive gain control.6 Coordinated structure has also been addressed in other image work,10–14 and in other domains such as speech15 and finance.16 Many approaches to the unsupervised specification of representations in early cortical areas rely on the coordinated structure.17–21 The idea is to learn linear filters (eg modeling simple cells as in22,23), and then, based on the coordination, to find combinations of these (perhaps non-linearly transformed) as a way of finding higher order filters (eg complex cells). One critical facet whose specification from data is not obvious is the neighborhood arrangement, ie which linear filters share which mixer variables.

∗Mixer variables are also called multipliers, but are unrelated to the scales of a wavelet.

Here, we suggest a method for finding the neighborhood based on Bayesian inference of the GSM random variables. In section 1, we consider estimating these components based on information from different-sized neighborhoods and show the modes of failure when inference is too local or too global. Based on these observations, in section 2 we propose an extension to the GSM generative model, in which the mixer variables can overlap probabilistically. We solve the neighborhood assignment problem using Gibbs sampling, and demonstrate the technique on synthetic data. In section 3, we apply the technique to image data.
1 GSM inference of Gaussian and mixer variables
In a simple, n-dimensional, version of a GSM, filter responses l are synthesized† by multiplying an n-dimensional Gaussian with values g = {g_1 . . . g_n} by a common mixer variable v:

l = v g.   (1)

We assume the g are uncorrelated (σ² along the diagonal of the covariance matrix). For the analytical calculations, we assume that v has a Rayleigh distribution:

p[v] ∝ [v exp(−v²/2)]^a,  where 0 < a ≤ 1 parameterizes the strength of the prior.   (2)

For ease, we develop the theory for a = 1. As is well known,2 and repeated in figure 1(B), the marginal distribution of the resulting GSM is sparse and highly kurtotic. The joint conditional distribution of two elements l_1 and l_2 follows a bowtie shape, with the width of the distribution of one dimension increasing for larger values (both positive and negative) of the other dimension. The inverse problem is to estimate the n+1 variables g_1 . . . g_n, v from the n filter responses l_1 . . . l_n. It is formally ill-posed, though regularized through the prior distributions. Four posterior distributions are particularly relevant, and can be derived analytically from the model; up to normalizing constants (which also involve B(n, x)), they are:

p[v | l_1] ∝ exp(−v²/2 − l_1²/(2v²σ²)),  with posterior mean √(|l_1|/σ) · B(1, |l_1|/σ) / B(1/2, |l_1|/σ);

p[v | l] ∝ v^{−(n−1)} exp(−v²/2 − l²/(2v²σ²)),  with posterior mean √(l/σ) · B(3/2 − n/2, l/σ) / B(1 − n/2, l/σ);

p[|g_1| | l_1] ∝ g_1^{−2} exp(−g_1²/(2σ²) − l_1²/(2g_1²)),  with posterior mean √(σ|l_1|) · B(0, |l_1|/σ) / B(−1/2, |l_1|/σ);

p[|g_1| | l] ∝ g_1^{n−3} exp(−(g_1²/(2σ²))(l²/l_1²) − l_1²/(2g_1²)),  with posterior mean √(σ|l_1|) · √(|l_1|/l) · B(n/2 − 1/2, l/σ) / B(n/2 − 1, l/σ),

where B(n, x) is the modified Bessel function of the second kind (see also24), l = √(Σ_i l_i²), and g_i is forced to have the same sign as l_i, since the mixer variables are always positive.
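As a sanity check on the local posterior for v (with a = 1), the Bessel-function expression for the posterior mean can be compared against direct numerical integration of the unnormalized posterior. This check is ours, not part of the paper; B(ν, x) = K_ν(x) is evaluated via its standard integral representation so that only numpy is needed:

```python
import numpy as np

def bessel_k(nu, x, t_max=10.0, n=100_000):
    """K_nu(x) via the integral representation
    K_nu(x) = integral_0^inf exp(-x cosh t) cosh(nu t) dt   (x > 0)."""
    t = np.linspace(0.0, t_max, n)
    f = np.exp(-x * np.cosh(t)) * np.cosh(nu * t)
    return f.sum() * (t[1] - t[0])

sigma, l1 = 1.0, 2.0
c = abs(l1) / sigma

# posterior mean as read off the table (a = 1):
# E[v | l1] = sqrt(|l1|/sigma) * B(1, c) / B(1/2, c)
closed = np.sqrt(c) * bessel_k(1.0, c) / bessel_k(0.5, c)

# the same mean by direct numerical integration of the unnormalized posterior
# p[v | l1] ∝ exp(-v^2/2 - l1^2/(2 v^2 sigma^2))
v = np.linspace(1e-4, 20.0, 200_000)
w = np.exp(-v ** 2 / 2 - l1 ** 2 / (2 * v ** 2 * sigma ** 2))
numeric = (v * w).sum() / w.sum()

print(closed, numeric)
assert abs(closed - numeric) < 1e-3
```

The agreement of the two estimates confirms that the ratio-of-Bessel-functions form follows from the Rayleigh prior and the Gaussian likelihood.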
Note that p[v|l1] and p[g1|l1] (rows 1, 3) are local estimates, while p[v|l] and p[g|l] (rows 2, 4) are estimates according to all filter outputs {l1 . . . ln}. The posterior p[v|l] has also been estimated numerically in noise removal for other mixer priors, by Portilla et al.25 The full GSM specifies a hierarchy of mixer variables. Wainwright2 considered a prespecified tree-based hierarchical arrangement. In practice, for natural sensory data, given a heterogeneous collection of li, it is advantageous to learn the hierarchical arrangement from examples. In an approach related to that of the GSM, Karklin and Lewicki19 suggested generating log mixer values for all the filters and learning the linear combinations of a smaller collection of underlying values. Here, we consider the problem in terms of multiple mixer variables, with the linear filters being clustered into groups that share a single mixer. This poses a critical assignment problem of working out which filter responses share which mixer variables.

†We describe the l as being filter responses even in the synthetic case, to facilitate comparison with images.

Figure 1: A Generative model: each filter response is generated by multiplying its Gaussian variable by either mixer variable vα or mixer variable vβ. B Marginal and joint conditional statistics (bowties) of sample synthetic filter responses. For the joint conditional statistics, intensity is proportional to the bin counts, except that each column is independently re-scaled to fill the range of intensities. C–E Left: actual distributions of mixer and Gaussian variables; other columns: estimates based on different numbers of filter responses. C Distribution of estimate of the mixer variable vα. Note that mixer variable values are by definition positive. D Distribution of estimate of one of the Gaussian variables, g1. E Joint conditional statistics of the estimates of Gaussian variables g1 and g2.

We first study this issue using synthetic data in which two groups of filter responses l1 . . . l20 and l21 . . . l40 are generated by two mixer variables vα and vβ (figure 1). We attempt to infer the components of the GSM model from the synthetic data. Figure 1C, D shows the empirical distributions of estimates of the conditional means of a mixer variable E(vα|{l}) and one of the Gaussian variables E(g1|{l}) based on different assumed assignments. For estimation based on too few filter responses, the estimates do not match the actual distributions well. For example, for a local estimate based on a single filter response, the Gaussian estimate peaks away from zero. For assignments including more filter responses, the estimates become good. However, inference is also compromised if the estimates for vα are too global, including filter responses actually generated from vβ (C and D, last column). In (E), we consider the joint conditional statistics of two components, each estimating their respective g1 and g2. Again, as the number of filter responses increases, the estimates improve, provided that they are taken from the right group of filter responses with the same mixer variable. Specifically, the mean estimates of g1 and g2 become more independent (E, third column). Note that for estimations based on a single filter response, the joint conditional distribution of the Gaussian appears correlated rather than independent (E, second column); for estimation based on too many filter responses (40 in this example), the joint conditional distribution of the Gaussian estimates shows a dependent (rather than independent) bowtie shape (E, last column). Mixer variable joint statistics also deviate from the actual when the estimations are too local or global (not shown). We have observed qualitatively similar statistics for estimation based on coefficients in natural images.

Figure 2: A Generative model in which each filter response is generated by multiplication of its Gaussian variable by a mixer variable. The mixer variable, vα, vβ, or vγ, is chosen probabilistically upon each filter response sample, from a Rayleigh distribution with a = .1. B Top: actual probability of filter associations with vα, vβ, and vγ; Bottom: Gibbs estimates of probability of filter associations corresponding to vα, vβ, and vγ. C Statistics of generated filter responses, and of Gaussian and mixer estimates from Gibbs sampling.
Neighborhood size has also been discussed in the context of the quality of noise removal, assuming a GSM model.26

2 Neighborhood inference: solving the assignment problem
The plots in figure 1 suggest that it should be possible to infer the assignments, ie work out which filter responses share common mixers, by learning from the statistics of the resulting joint dependencies. Hard assignment problems (in which each filter response pays allegiance to just one mixer) are notoriously computationally brittle. Soft assignment problems (in which there is a probabilistic relationship between filter responses and mixers) are computationally better behaved. Further, real world stimuli are likely better captured by the possibility that filter responses are coordinated in somewhat different collections in different images. We consider a richer, mixture GSM as a generative model (Figure 2). To model the generation of filter responses l_i for a single image patch, we multiply each Gaussian variable g_i by a single mixer variable from the set v_1 . . . v_m. We assume that g_i has association probability p_ij (satisfying Σ_j p_ij = 1, ∀i) of being assigned to mixer variable v_j. The assignments are assumed to be made independently for each patch. We use s_i ∈ {1, 2, . . . , m} for the assignments:

l_i = g_i v_{s_i}.   (3)

Inference and learning in this model proceeds in two stages, according to the expectation maximization algorithm. First, given a filter response l_i, we use Gibbs sampling for the E phase to find possible appropriate (posterior) assignments. Williams et al.27 suggested using Gibbs sampling to solve a similar assignment problem in the context of dynamic tree models. Second, for the M phase, given the collection of assignments across multiple filter responses, we update the association probabilities p_ij.
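The E/M loop just described can be sketched as follows. This is our own minimal implementation (all names are ours): assignments are Gibbs-sampled given the mixers, each mixer is sampled from its conditional discretized on a grid (an implementation convenience the paper does not specify), and the M step re-estimates p_ij from assignment frequencies:

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic data: filters 0-9 share one mixer, filters 10-19 share another
n_filt, sigma, n_patch = 20, 1.0, 1000
true_s = np.repeat([0, 1], 10)
v_true = rng.rayleigh(1.0, size=(n_patch, 2))
L = rng.standard_normal((n_patch, n_filt)) * sigma * v_true[:, true_s]

m = 3                                  # assumed number of mixers (more than the true 2)
p = np.full((n_filt, m), 1.0 / m)      # association probabilities p_ij
v_grid = np.linspace(1e-2, 8.0, 400)   # grid for sampling mixer conditionals

for em_iter in range(10):              # EM loop: Gibbs E step, count-based M step
    counts = np.zeros((n_filt, m))
    for l in L[:100]:                  # a subset of patches per iteration, for speed
        v = rng.rayleigh(1.0, size=m)
        s = rng.integers(0, m, size=n_filt)
        for _ in range(3):             # short Gibbs sweep per patch
            # sample assignments: P(s_i = j | l_i, v) ∝ p_ij N(l_i; 0, v_j^2 sigma^2)
            logw = np.log(p + 1e-12) \
                   - 0.5 * (l[:, None] / (v * sigma)) ** 2 - np.log(v * sigma)
            w = np.exp(logw - logw.max(axis=1, keepdims=True))
            w /= w.sum(axis=1, keepdims=True)
            u = rng.random(n_filt)
            s = np.minimum((w.cumsum(axis=1) < u[:, None]).sum(axis=1), m - 1)
            # sample each mixer from its conditional, discretized on v_grid:
            # p(v_j | {l_i : s_i = j}) ∝ v^(1 - n_j) exp(-v^2/2 - sum_i l_i^2/(2 v^2 sigma^2))
            for j in range(m):
                lj = l[s == j]
                logf = (1 - len(lj)) * np.log(v_grid) - v_grid ** 2 / 2 \
                       - (lj ** 2).sum() / (2 * v_grid ** 2 * sigma ** 2)
                f = np.exp(logf - logf.max())
                v[j] = rng.choice(v_grid, p=f / f.sum())
        counts[np.arange(n_filt), s] += 1
    p = counts / counts.sum(axis=1, keepdims=True)   # M step: empirical frequencies

print(np.round(p, 2))   # rows: per-filter association probabilities over the m mixers
```

With coordinated data like this, the learned rows of p tend to concentrate so that filters generated from a common mixer share a common column, up to a permutation of the mixer labels.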
Given sample mixer assignments, we can estimate the Gaussian and mixer components of the GSM using the table of section 1, but restricting the filter response samples just to those associated with each mixer variable. We tested the ability of this inference method to find the associations in the probabilistic mixer variable synthetic example shown in figure 2 (A, B). The true generative model specifies probabilistic overlap of 3 mixer variables. We generated 5000 samples for each filter according to the generative model. We ran the Gibbs sampling procedure, setting the number of possible neighborhoods to 5 (i.e., more than the true 3); after 500 iterations the weights converged to near the proper probabilities. In (B, top), we plot the actual probability distributions for the filter associations with each of the mixer variables. In (B, bottom), we show the estimated associations: the three non-zero estimates closely match the actual distributions; the other two estimates are zero (not shown). The procedure consistently finds correct associations even in larger examples of data generated with up to 10 mixer variables. In (C) we show an example of the actual and estimated distributions of the mixer and Gaussian components of the GSM. Note that the joint conditional statistics of both mixer and Gaussian are independent, since the variables were generated as such in the synthetic example. The Gibbs procedure can be adjusted for data generated with different parameters a of equation 2, and for related mixers [2], allowing for a range of image coefficient behaviors.

3 Image data

Having validated the inference model using synthetic data, we turned to natural images. We derived linear filters from a multi-scale oriented steerable pyramid [28], with 100 filters, at 2 preferred orientations, 25 non-overlapping spatial positions (with spatial subsampling of 8 pixels), two phases (quadrature pairs), and a single spatial frequency peaked at 1/6 cycles/pixel.
The image ensemble is 4 images from a standard image compression database (boats, goldhill, plant leaves, and mountain) and 4000 samples. We ran our method with the same parameters as for synthetic data, with 7 possible neighborhoods and Rayleigh parameter a = .1 (as in figure 2). Figure 3 depicts the association weights p_ij of the coefficients for each of the obtained mixer variables. In (A), we show a schematic (template) of the association representation that will follow in (B, C) for the actual data. Each mixer variable neighborhood is shown for coefficients of two phases and two orientations along a spatial grid (one grid for each phase). The neighborhood is illustrated via the probability of each coefficient to be generated from a given mixer variable. For the first two neighborhoods (B), we also show the image patches that yielded the maximum log likelihood of P(v|patch). The first neighborhood (in B) prefers vertical patterns across most of its “receptive field”, while the second has a more localized region of horizontal preference. This can also be seen by averaging the 200 image patches with the maximum log likelihood. Strikingly, all the mixer variables group together two phases of a quadrature pair (B, C). Quadrature pairs have also been extracted from cortical data, and are the components of ideal complex cell models. Another tendency is to group orientations across space. The phase and iso-orientation grouping bear some interesting similarity to other recent suggestions [17, 18], as do the maximal patches [19]. Wavelet filters have the advantage that they can span a wider spatial extent than is possible with current ICA techniques, and the analysis of parameters such as phase grouping is more controlled. We are comparing the analysis with an ICA first-stage representation, which has other obvious advantages. We are also extending the analysis to correlated wavelet filters [25], and to simulations with a larger number of neighborhoods. From the obtained associations, we estimated the mixer and Gaussian variables according to our model. In (D) we show representative statistics of the coefficients and of the inferred variables. The learned distributions of Gaussian and mixer variables are quite close to our assumptions. The Gaussian estimates exhibit joint conditional statistics that are roughly independent, and the mixer variables are weakly dependent.

Figure 3: A Schematic of the mixer variable neighborhood representation. The probability that each coefficient is associated with the mixer variable ranges from 0 (black) to 1 (white). Left: vertical and horizontal filters, at two orientations and two phases; each phase is plotted separately, on a 38 by 38 pixel spatial grid. Right: summary of representation, with filter shapes replaced by oriented lines. Filters are approximately 6 pixels in diameter, with the spacing between filters 8 pixels. B First two image ensemble neighborhoods obtained from Gibbs sampling. Also shown are four 38×38 pixel patches that had the maximum log likelihood of P(v|patch), and the average of the first 200 maximal patches. C Other image ensemble neighborhoods. D Statistics of representative coefficients of two spatially displaced vertical filters, and of inferred Gaussian and mixer variables.
We have thus far demonstrated neighborhood inference for an image ensemble, but it is also interesting and perhaps more intuitive to consider inference for particular images or image classes. In figure 4 (A, B) we demonstrate example mixer variable neighborhoods derived from learning patches of a zebra image (Corel CD-ROM). As before, the neighborhoods are composed of quadrature pairs; however, the spatial configurations are richer and have not been previously reported with unsupervised hierarchical methods: for example, in (A), the mixture neighborhood captures a horizontal-bottom/vertical-top spatial configuration. This appears particularly relevant in segmenting regions of the front zebra, as shown by marking in the image the patches that yielded the maximum log likelihood of P(v|patch). In (B), the mixture neighborhood captures a horizontal configuration, more focused on the horizontal stripes of the front zebra. This example demonstrates the logic behind a probabilistic mixture: coefficients corresponding to the bottom horizontal stripes might be linked with top vertical stripes (A) or to more horizontal stripes (B).

Figure 4: Example of Gibbs sampling on the zebra image. The image is 151×151 pixels, and each spatial neighborhood spans 38×38 pixels. A, B Example mixer variable neighborhoods. Left: example mixer variable neighborhood, and average of 200 patches that yielded the maximum likelihood of P(v|patch). Right: the image, with example patches that yielded the maximum likelihood of P(v|patch) marked on top of it.

4 Discussion

Work on the study of natural image statistics has recently evolved from issues about scale-space hierarchies, wavelets, and their ready induction through unsupervised learning models (loosely based on cortical development) towards the coordinated statistical structure of the wavelet components.
This includes bottom-up (e.g., bowties, hierarchical representations such as complex cells) and top-down (e.g., GSM) viewpoints. The resulting new insights inform a wealth of models and ideas and form the essential backdrop for the work in this paper. They also link to impressive engineering results in image coding and processing. A most critical aspect of a hierarchical representational model is the way that the structure of the hierarchy is induced. We addressed the hierarchy question using a novel extension to the GSM generative model in which mixer variables (at one level of the hierarchy) enjoy probabilistic assignments to filter responses (at a lower level). We showed how these assignments can be learned (using Gibbs sampling), and illustrated some of their attractive properties using both synthetic and a variety of image data. We grounded our method firmly in Bayesian inference of the posterior distributions over the two classes of random variables in a GSM (mixer and Gaussian), placing particular emphasis on the interplay between the generative model and the statistical properties of its components. An obvious question raised by our work is the neural correlate of the two different posterior variables. The Gaussian variable has characteristics resembling those of the output of divisively normalized simple cells [6]; the mixer variable is more obviously related to the output of quadrature pair neurons (such as orientation energy or motion energy cells, which may also be divisively normalized). How these different information sources may subsequently be used is of great interest.

Acknowledgements

This work was funded by the HHMI (OS, TJS) and the Gatsby Charitable Foundation (PD). We are very grateful to Patrik Hoyer, Mike Lewicki, Zhaoping Li, Simon Osindero, Javier Portilla and Eero Simoncelli for discussion.

References

[1] D Andrews and C Mallows. Scale mixtures of normal distributions. J. Royal Stat. Soc., 36:99–102, 1974.
[2] M J Wainwright and E P Simoncelli.
Scale mixtures of Gaussians and the statistics of natural images. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Adv. Neural Information Processing Systems, volume 12, pages 855–861, Cambridge, MA, May 2000. MIT Press.
[3] M J Wainwright, E P Simoncelli, and A S Willsky. Random cascades on wavelet trees and their use in modeling and analyzing natural imagery. Applied and Computational Harmonic Analysis, 11(1):89–123, July 2001. Special issue on wavelet applications.
[4] A Hyvärinen, J Hurri, and J Väyrynen. Bubbles: a unifying framework for low-level statistical properties of natural image sequences. Journal of the Optical Society of America A, 20:1237–1252, May 2003.
[5] R W Buccigrossi and E P Simoncelli. Image compression via joint statistical characterization in the wavelet domain. IEEE Trans Image Proc, 8(12):1688–1701, December 1999.
[6] O Schwartz and E P Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, August 2001.
[7] D J Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4(12):2379–2394, 1987.
[8] H Attias and C E Schreiner. Temporal low-order statistics of natural sounds. In M Jordan, M Kearns, and S Solla, editors, Adv in Neural Info Processing Systems, volume 9, pages 27–33. MIT Press, 1997.
[9] D L Ruderman and W Bialek. Statistics of natural images: Scaling in the woods. Phys. Rev. Letters, 73(6):814–817, 1994.
[10] C Zetzsche, B Wegmann, and E Barth. Nonlinear aspects of primary vision: Entropy reduction beyond decorrelation. In Int'l Symposium, Society for Information Display, volume XXIV, pages 933–936, 1993.
[11] J Huang and D Mumford. Statistics of natural images and models. In CVPR, page 547, 1999.
[12] J Romberg, H Choi, and R Baraniuk. Bayesian wavelet domain image modeling using hidden Markov trees. In Proc. IEEE Int'l Conf on Image Proc, Kobe, Japan, October 1999.
[13] A Turiel, G Mato, N Parga, and J P Nadal. The self-similarity properties of natural images resemble those of turbulent flows. Phys. Rev. Lett., 80:1098–1101, 1998.
[14] J Portilla and E P Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. Int'l Journal of Computer Vision, 40(1):49–71, 2000.
[15] H Brehm and W Stammler. Description and generation of spherically invariant speech-model signals. Signal Processing, 12:119–141, 1987.
[16] T Bollerslev, R Engle, and D Nelson. ARCH models. In R Engle and D McFadden, editors, Handbook of Econometrics V. 1994.
[17] A Hyvärinen and P Hoyer. Emergence of topography and complex cell properties from natural images using extensions of ICA. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Adv. Neural Information Processing Systems, volume 12, pages 827–833, Cambridge, MA, May 2000. MIT Press.
[18] P Hoyer and A Hyvärinen. A multi-layer sparse coding network learns contour coding from natural images. Vision Research, 42(12):1593–1605, 2002.
[19] Y Karklin and M S Lewicki. Learning higher-order structures in natural images. Network: Computation in Neural Systems, 14:483–499, 2003.
[20] L Wiskott and T J Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
[21] C Kayser, W Einhäuser, O Dümmer, P König, and K P Körding. Extracting slow subspaces from natural videos leads to complex cells. In G Dorffner, H Bischof, and K Hornik, editors, Proc. Int'l Conf. on Artificial Neural Networks (ICANN-01), pages 1075–1080, Vienna, Aug 2001. Springer-Verlag, Heidelberg.
[22] B A Olshausen and D J Field. Emergence of simple-cell receptive field properties by learning a sparse factorial code. Nature, 381:607–609, 1996.
[23] A J Bell and T J Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[24] U Grenander and A Srivastava. Probability models for clutter in natural images. IEEE Trans. on Patt. Anal. and Mach. Intel., 23:423–429, 2002.
[25] J Portilla, V Strela, M Wainwright, and E Simoncelli. Adaptive Wiener denoising using a Gaussian scale mixture model in the wavelet domain. In Proc 8th IEEE Int'l Conf on Image Proc, pages 37–40, Thessaloniki, Greece, Oct 7–10 2001. IEEE Computer Society.
[26] J Portilla, V Strela, M Wainwright, and E P Simoncelli. Image denoising using a scale mixture of Gaussians in the wavelet domain. IEEE Trans Image Processing, 12(11):1338–1351, November 2003.
[27] C K I Williams and N J Adams. Dynamic trees. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Adv. Neural Information Processing Systems, volume 11, pages 634–640, Cambridge, MA, 1999. MIT Press.
[28] E P Simoncelli, W T Freeman, E H Adelson, and D J Heeger. Shiftable multi-scale transforms. IEEE Trans Information Theory, 38(2):587–607, March 1992. Special Issue on Wavelets.
Convergence and No-Regret in Multiagent Learning

Michael Bowling
Department of Computing Science, University of Alberta
Edmonton, Alberta, Canada T6G 2E8
bowling@cs.ualberta.ca

Abstract

Learning in a multiagent system is a challenging problem due to two key factors. First, if other agents are simultaneously learning then the environment is no longer stationary, thus undermining convergence guarantees. Second, learning is often susceptible to deception, where the other agents may be able to exploit a learner's particular dynamics. In the worst case, this could result in poorer performance than if the agent was not learning at all. These challenges are identifiable in the two most common evaluation criteria for multiagent learning algorithms: convergence and regret. Algorithms focusing on convergence or regret in isolation are numerous. In this paper, we seek to address both criteria in a single algorithm by introducing GIGA-WoLF, a learning algorithm for normal-form games. We prove the algorithm guarantees at most zero average regret, while demonstrating the algorithm converges in many situations of self-play. We prove convergence in a limited setting and give empirical results in a wider variety of situations. These results also suggest a third new learning criterion combining convergence and regret, which we call negative non-convergence regret (NNR).

1 Introduction

Learning to select actions to achieve goals in a multiagent setting requires overcoming a number of key challenges. One of these challenges is the loss of the stationarity assumption when multiple agents are learning simultaneously. Another challenge is guaranteeing that the learner cannot be deceptively exploited by another agent. Both of these challenges distinguish the multiagent learning problem from traditional single-agent learning, and have been gaining recent attention as multiagent applications continue to proliferate.
In single-agent learning tasks, it is reasonable to assume that the same action from the same state will result in the same distribution over outcomes, both rewards and next states. In other words, the environment is stationary. In a multiagent task with other learning agents, the outcomes of an agent’s action will vary with the changing policies of the other agents. Since most of the convergence results in reinforcement learning depend upon the environment being stationary, convergence is often difficult to obtain in multiagent settings. The desirability of convergence has been recently contested. We offer some brief insight into this debate in the introduction of the extended version of this paper [1]. Equilibrium learners [2, 3, 4] are one method of handling the loss of stationarity. These algorithms learn joint-action values, which are stationary, and in certain circumstances guarantee these values converge to Nash (or correlated) equilibrium values. Using these values, the player’s strategy corresponds to the player’s component of some Nash (or correlated) equilibrium. This convergence of strategies is guaranteed nearly independently of the actions selected by the other agents, including when other agents play suboptimal responses. Equilibrium learners, therefore, can fail to learn best-response policies even against simple non-learning opponents.1 Best-response learners [5, 6, 7] are an alternative approach that has sought to learn best-responses, but still considering whether the resulting algorithm converges in some form. These approaches usually examine convergence in self-play, and have included both theoretical and experimental results. The second challenge is the avoidance of exploitation. Since learning strategies dynamically change their action selection over time, it is important to know that the change cannot be exploited by a clever opponent. 
A deceptive strategy may “lure” a dynamic strategy away from a safe choice in order to switch to a strategy where the learner receives much lower reward. For example, Chang and Kaelbling [8] demonstrated that the best-response learner PHC [7] could be exploited by a particular dynamic strategy. One method of measuring whether an algorithm can be exploited is the notion of regret. Regret has been explored both in game theory [9] and computer science [10, 11]. Regret measures how much worse an algorithm performs compared to the best static strategy, with the goal to guarantee at least zero average regret, no-regret, in the limit. These two challenges result in two completely different criteria for evaluation: convergence and no-regret. In addition, they have almost exclusively been explored in isolation. For example, equilibrium learners can have arbitrarily large average regret. On the other hand, no-regret learners' strategies rarely converge in self-play [12] in even the simplest of games.2 In this paper, we seek to explore these two criteria in a single algorithm for learning in normal-form games. In Section 2 we present a more formal description of the problem and the two criteria. We also examine key related work in applying gradient ascent algorithms to this learning problem. In Section 3 we introduce GIGA-WoLF, an algorithm with both regret and convergence properties. The algorithm is followed by theoretical and experimental analyses in Sections 4 and 5, respectively, before concluding.

2 Online Learning in Games

A game in normal form is a tuple (n, A_{1...n}, R_{1...n}), where n is the number of players in the game, A_i is a set of actions available to player i (A = A_1 × . . . × A_n), and R_i : A → ℝ is a mapping from joint actions to player i's reward. The problem of learning in a normal-form game is one of repeatedly selecting an action and receiving a reward, with a goal of maximizing average reward against an unknown opponent.
If there are two players then it is convenient to write a player's reward function as a |A_1| × |A_2| matrix. Three example normal-form games are shown in Table 1. Unless stated otherwise we will assume the learning algorithm is player one.

Table 1: Examples of games in normal-form.

(a) Matching Pennies (actions H, T):
R_1 = [ 1 −1 ; −1 1 ],  R_2 = −R_1

(b) Tricky Game (actions A, B):
R_1 = [ 0 3 ; 1 2 ],  R_2 = [ 3 2 ; 0 1 ]

(c) Rock–Paper–Scissors (actions R, P, S):
R_1 = [ 0 −1 1 ; 1 0 −1 ; −1 1 0 ],  R_2 = −R_1

In the context of a particular learning algorithm and a particular opponent, let r_t ∈ ℝ^{|A_1|} be the vector of actual rewards that player one would receive at time t for each of its actions. Let x_t ∈ PD(A_1) be the algorithm's strategy at time t, selected from probability distributions over actions. So, player one's expected payoff at time t is (r_t · x_t). Let 1_a be the probability distribution that assigns probability 1 to action a ∈ A_1. Lastly, we will assume the reward for any action is bounded by r_max, and therefore ||r_t||² ≤ |A_1| r_max².

Footnote 1: This work is not restricted to zero-sum games and our use of the word “opponent” refers simply to other players in the game.
Footnote 2: A notable exception is Hart and Mas-Colell's algorithm that guarantees the empirical distribution of play converges to that of a correlated equilibrium. Neither strategies nor expected values necessarily converge, though.

2.1 Evaluation Criteria

One common evaluation criterion for learning in normal-form games is convergence. There are a number of different forms of convergence that have been examined in the literature. These include, roughly increasing in strength: average reward (i.e., Σ_t (r_t · x_t)/T), empirical distribution of actions (i.e., Σ_t x_t/T), expected reward (i.e., (r_t · x_t)), and strategies (i.e., x_t). We focus in this paper on convergence of strategies as this implies the other three forms of convergence as well. In particular, we will say an algorithm converges against a particular opponent if and only if lim_{t→∞} x_t = x*.
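To make the notation concrete, the games of Table 1 can be represented as reward matrices, with r_t computed from the opponent's mixed strategy. A minimal sketch (NumPy and the function names are my choices, not the paper's):

```python
import numpy as np

# Matching Pennies, Table 1(a): rows/columns are actions (H, T).
R1 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])
R2 = -R1  # zero-sum

def reward_vector(R1, y):
    """r_t for player one: expected reward of each of its actions,
    given the opponent's mixed strategy y over columns."""
    return R1 @ y

def expected_payoff(R1, x, y):
    """Player one's expected payoff (r_t . x_t)."""
    return x @ R1 @ y
```

For example, against an opponent who always plays H (y = [1, 0]), player one's reward vector is [1, −1], and the uniform strategy earns expected payoff 0.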
The second common evaluation criterion is regret. Total regret is the difference between the maximum total reward of any static strategy given the past history of play and the algorithm's total reward:3

R_T ≡ max_{a∈A_1} Σ_{t=1}^{T} ((r_t · 1_a) − (r_t · x_t))

Average regret is just the asymptotic average of total regret, lim_{T→∞} R_T/T. An algorithm has no-regret if and only if the average regret is less than or equal to zero against all opponent strategies. The no-regret property makes a strong claim about the performance of the algorithm: the algorithm's expected average reward is at least as large as the expected average reward any static strategy could have achieved. In other words, the algorithm is performing at least as well as any static strategy.

Footnote 3: Our analysis focuses on expectations of regret (total and average), similar to [10, 11]. Although note that for any self-oblivious behavior, including GIGA-WoLF, average regret of at most zero on expectation implies universal consistency, i.e., regret of at most ϵ with high probability [11].

2.2 Gradient Ascent Learning

Gradient ascent is a simple and common technique for finding parameters that optimize a target function. In the case of learning in games, the parameters represent the player's strategy, and the target function is expected reward. We will examine three recent results evaluating gradient ascent learning algorithms in normal-form games. Singh, Kearns, and Mansour [6] analyzed gradient ascent (IGA) in two-player, two-action games, e.g., Table 1(a) and (b). They examined the resulting strategy trajectories and payoffs in self-play, demonstrating that strategies do not always converge to a Nash equilibrium, depending on the game. They proved, instead, that average payoffs converge (a weaker form of convergence) to the payoffs of the equilibrium. WoLF-IGA [7] extended this work to the stronger form of convergence, namely convergence of strategies, through the use of a variable learning rate.
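The total and average regret defined in Section 2.1 reduce to a few array operations: since expected reward is linear in the strategy, the best static strategy in hindsight is attained at a pure action. A sketch under my own naming, not code from the paper:

```python
import numpy as np

def total_regret(rewards, strategies):
    """R_T = max_a sum_t (r_t . 1_a)  -  sum_t (r_t . x_t).

    rewards:    (T, |A1|) array, row t is r_t.
    strategies: (T, |A1|) array, row t is x_t.
    """
    rewards = np.asarray(rewards, dtype=float)
    strategies = np.asarray(strategies, dtype=float)
    # Best single action in hindsight (maximizer over static strategies
    # is always a pure action, by linearity).
    best_static = rewards.sum(axis=0).max()
    achieved = np.sum(rewards * strategies)  # sum_t r_t . x_t
    return best_static - achieved

def average_regret(rewards, strategies):
    """R_T / T; no-regret means this tends to at most zero."""
    return total_regret(rewards, strategies) / len(rewards)
```

For instance, if action 0 pays 1 on every round while the learner plays uniformly, two rounds give total regret 1 and average regret 0.5.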
Using the WoLF (“Win or Learn Fast”) principle, the algorithm would choose a larger step size when the current strategy had less expected payoff than the equilibrium strategy. This results in strategies converging to the Nash equilibrium in a variety of games including all two-player, two-action games.4 Zinkevich [11] looked at gradient ascent using the evaluation criterion of regret. He first extended IGA beyond two-player, two-action games. His algorithm, GIGA (Generalized Infinitesimal Gradient Ascent), updates strategies using an unconstrained gradient, and then projects the resulting strategy vector back into the simplex of legal probability distributions,

x_{t+1} = P(x_t + η_t r_t)  where  P(x) = argmin_{x'∈PD(A_1)} ||x − x'||,    (1)

η_t is the stepsize at time t, and ||·|| is the standard L2 norm. He proved GIGA's total regret is bounded by

R_T ≤ √T + |A| r_max² (√T − 1/2).    (2)

Since GIGA is identical to IGA in two-player, two-action games, we also have that GIGA achieves the weak form of convergence in this subclass of games. It is also true, though, that GIGA's strategies do not converge in self-play even in simple games like matching pennies. In the next section, we present an algorithm that simultaneously achieves GIGA's no-regret result and part of WoLF-IGA's convergence result. We first present the algorithm and then analyze these criteria both theoretically and experimentally.

3 GIGA-WoLF

GIGA-WoLF is a gradient-based learning algorithm that internally keeps track of two different gradient-updated strategies, x_t and z_t. The algorithm chooses actions according to the distribution x_t, but updates both x_t and z_t after each iteration. The update rules consist of three steps:

(1) x̂_{t+1} = P(x_t + η_t r_t)
(2) z_{t+1} = P(z_t + η_t r_t / 3)
    δ_{t+1} = min( 1, ||z_{t+1} − z_t|| / ||z_{t+1} − x̂_{t+1}|| )
(3) x_{t+1} = x̂_{t+1} + δ_{t+1} (z_{t+1} − x̂_{t+1})

Step (1) updates x_t according to GIGA's standard gradient update and stores the result as x̂_{t+1}. Step (2) updates z_t in the same manner, but with a smaller step-size.
Step (3) makes a final adjustment on x_{t+1} by moving it toward z_{t+1}. The magnitude of this adjustment is limited by the change in z_t that occurred in step (2). A key factor in understanding this algorithm is the observation that a strategy a receives higher reward than a strategy b if and only if the gradient at a is in the direction of b (i.e., r_t · (b − a) > 0). Therefore, the step (3) adjustment is in the direction of the gradient if and only if z_t received higher reward than x_t. Notice also that, as long as x_t is not near the boundary, the change due to step (3) is of lower magnitude than the change due to step (1). Hence, the combination of steps (1) and (3) results in a change with two key properties. First, the change is in the direction of positive gradient. Second, the magnitude of the change is larger when z_t received higher reward than x_t. So, we can interpret the update rule as a variation on the WoLF principle of “win or learn fast”, i.e., the algorithm is learning faster if and only if its strategy x is losing to strategy z. GIGA-WoLF is a major improvement on the original presentation of WoLF-IGA, where winning was determined by comparison with an equilibrium strategy that was assumed to be given. Not only is less knowledge required, but the use of a GIGA-updated strategy to determine winning will allow us to prove guarantees on the algorithm's regret. In the next section we present a theoretical examination of GIGA-WoLF's regret in n-player, n-action games, along with a limited guarantee of convergence in two-player, two-action games.

Footnote 4: WoLF-IGA may, in fact, be a limited variant of the extragradient method [13] for variational inequality problems. The extragradient algorithm is guaranteed to converge to a Nash equilibrium in self-play for all zero-sum games. Like WoLF-IGA, though, it does not have any known regret guarantees, but more importantly requires the other players' payoffs to be known.
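Equation 1 and the three update steps above can be sketched directly. The sort-based simplex projection is a standard construction I am substituting for the unspecified P(·); everything else follows the update rules, with a guard (my addition) for the degenerate case x̂_{t+1} = z_{t+1}.

```python
import numpy as np

def project_simplex(x):
    """P(x): Euclidean projection onto the probability simplex,
    argmin_{x' in PD(A)} ||x - x'||, via the standard sort-based method."""
    u = np.sort(x)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(x) + 1) > css)[0][-1]
    return np.maximum(x - css[rho] / (rho + 1.0), 0.0)

def giga_step(x, r, eta):
    """GIGA (Equation 1): unconstrained gradient step, then project."""
    return project_simplex(x + eta * r)

def giga_wolf_step(x, z, r, eta):
    """One GIGA-WoLF iteration: steps (1)-(3) of the update rules."""
    x_hat = project_simplex(x + eta * r)            # (1)
    z_new = project_simplex(z + eta * r / 3.0)      # (2), smaller step
    gap = np.linalg.norm(z_new - x_hat)
    delta = 1.0 if gap == 0 else min(1.0, np.linalg.norm(z_new - z) / gap)
    x_new = x_hat + delta * (z_new - x_hat)         # (3)
    return x_new, z_new
```

Both returned strategies remain valid probability distributions, and the step-(3) pull toward z_new is capped by how far z itself moved, matching the δ_{t+1} definition.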
In Section 5, we give experimental results of learning using GIGA-WoLF, demonstrating that convergence extends beyond the theoretical analysis presented.

4 Theoretical Analysis

We begin by examining GIGA-WoLF's regret against an unknown opponent strategy. We will prove the following bound on average regret.

Theorem 1 If η_t = 1/√t, the regret of GIGA-WoLF is

R_T ≤ 2√T + |A| r_max² (2√T − 1).

Therefore, lim_{T→∞} R_T/T ≤ 0, hence GIGA-WoLF has no-regret.

Proof. We begin with a brief overview of the proof. We will find a bound on the regret of the strategy x_t with respect to the dynamic strategy z_t. Since z_t is unmodified GIGA, we already have a bound on the regret of z_t with respect to any static strategy. Hence, we can bound the regret of x_t with respect to any static strategy. We start by examining the regret of x_t with respect to z_t using a similar analysis as used by Zinkevich [11]. Let ρ^{x→z}_t refer to the difference in expected payoff between z_t and x_t at time t, and R^{x→z}_T be the sum of these differences, i.e., the total regret of x_t with respect to z_t:

ρ^{x→z}_t = r_t · (z_t − x_t),    R^{x→z}_T ≡ Σ_{t=1}^{T} ρ^{x→z}_t.

We will use the following potential function,

Φ_t ≡ (x_t − z_t)² / 2η_t.

We can examine how this potential changes with each step of the update. ΔΦ¹_t, ΔΦ²_t, and ΔΦ³_t refer to the change in potential caused by steps (1), (2), and (3), respectively. ΔΦ⁴_t refers to the change in potential caused by the learning rate change from η_{t−1} to η_t. This gives us the following equations for the potential change:

ΔΦ¹_{t+1} = (1/2η_t) ((x̂_{t+1} − z_t)² − (x_t − z_t)²)
ΔΦ²_{t+1} = (1/2η_t) ((x̂_{t+1} − z_{t+1})² − (x̂_{t+1} − z_t)²)
ΔΦ³_{t+1} = (1/2η_t) ((x_{t+1} − z_{t+1})² − (x̂_{t+1} − z_{t+1})²)
ΔΦ⁴_{t+1} = (1/2η_{t+1} − 1/2η_t) (x_{t+1} − z_{t+1})²
ΔΦ_{t+1} = ΔΦ¹_{t+1} + ΔΦ²_{t+1} + ΔΦ³_{t+1} + ΔΦ⁴_{t+1}

Notice that if δ_{t+1} = 1 then x_{t+1} = z_{t+1}. Hence Φ_{t+1} = 0, and ΔΦ²_{t+1} + ΔΦ³_{t+1} ≤ 0. If δ_{t+1} < 1, then ||x_{t+1} − x̂_{t+1}|| = ||z_{t+1} − z_t||. Notice also that in this case x_{t+1} is co-linear with and between x̂_{t+1} and z_{t+1}.
So,

||x̂_{t+1} − z_{t+1}|| = ||x̂_{t+1} − x_{t+1}|| + ||x_{t+1} − z_{t+1}|| = ||z_{t+1} − z_t|| + ||x_{t+1} − z_{t+1}||.

We can bound the left side with the triangle inequality,

||x̂_{t+1} − z_{t+1}|| ≤ ||x̂_{t+1} − z_t|| + ||z_t − z_{t+1}||,

so ||x_{t+1} − z_{t+1}|| ≤ ||x̂_{t+1} − z_t||. So regardless of δ_{t+1}, ΔΦ²_{t+1} + ΔΦ³_{t+1} ≤ 0. Hence, ΔΦ_{t+1} ≤ ΔΦ¹_{t+1} + ΔΦ⁴_{t+1}. We will now use this bound on the change in the potential to bound the regret of x_t with respect to z_t. We know from Zinkevich that

(x̂_{t+1} − z_t)² − (x_t − z_t)² ≤ 2η_t r_t · (z_t − x_t) + η_t² r_t².

Therefore,

ρ^{x→z}_t ≤ −(1/2η_t) ((x̂_{t+1} − z_t)² − (x_t − z_t)² − η_t² r_t²)
        ≤ −ΔΦ¹_{t+1} + (η_t/2) r_t²
        ≤ −ΔΦ_{t+1} + ΔΦ⁴_{t+1} + (η_t/2) r_t².

Since we assume rewards are bounded by r_max we can bound r_t² by |A| r_max². Summing up regret and using the fact that η_t = 1/√t, we get the following bound:

R^{x→z}_T ≤ Σ_{t=1}^{T} ( −ΔΦ_t + ΔΦ⁴_t + (η_t/2) |A| r_max² )
        ≤ (Φ_1 − Φ_T) + (1/η_T − 1) + (|A| r_max²/2) Σ_{t=1}^{T} η_t
        ≤ √T + |A| r_max² (√T − 1/2).

We know that GIGA's regret with respect to any strategy is bounded by the same value (see Inequality 2). Hence, R_T ≤ 2√T + |A| r_max² (2√T − 1), as claimed. □

The second criterion we want to consider is convergence. As with IGA, WoLF-IGA, and other algorithms, our theoretical analysis will be limited to two-player, two-action general-sum games. We further limit ourselves to the situation of GIGA-WoLF playing “against” GIGA. These restrictions are a limitation of the proof method, which uses a case-by-case analysis that is combinatorially impractical for the case of self-play. This is not necessarily a limitation on GIGA-WoLF's convergence. This theorem, along with the empirical results we present later in Section 5, gives a strong sense of GIGA-WoLF's convergence properties. The full proof can be found in [1].

Theorem 2 In a two-player, two-action repeated game, if one player follows the GIGA-WoLF algorithm and the other follows the GIGA algorithm, then their strategies will converge to a Nash equilibrium.
5 Experimental Analysis We have presented here two theoretical properties of GIGA-WoLF relating to guarantees on both regret and convergence. There have also been extensive experimental results performed on GIGA-WoLF in a variety of normal-form games [1]. We summarize the results here. The purpose of these experiments was to demonstrate the theoretical results from the previous section as well as explore the extent to which the results (convergence, in particular) can be generalized. In that vein, we examined the same suite of normal-form games used in experiments with WoLF-PHC, the practical variant of WoLF-IGA [7]. One of the requirements of GIGA-WoLF (and GIGA) is knowledge of the entire reward vector (rt), which requires knowledge of the game and observation of the opponent’s action. In practical situations, one or both of these are unlikely to be available. Instead, only the reward of the selected action is likely to be observable. We have relaxed this requirement in these experiments by providing GIGA-WoLF (and GIGA) with only estimates of the gradient from stochastic approximation. In particular, after selecting action a and receiving reward ˆra, we update the current estimate of action a’s component of the reward vector, rt+1 = rt + αt(ˆra −1a · rt)1a, where αt is the learning rate. This is a standard method of estimation commonly used in reinforcement learning (e.g., Q-learning). For almost all of the games explored, including two-player, two-action games as well as n-action zero-sum games, GIGA-WoLF strategies converged in self-play to equilibrium strategies of the game. GIGA’s strategies, on the other hand, failed to converge in self-play over the same suite of games. These results are nearly identical to the PHC and WoLF-PHC experiments over the same games. A prototypical example of these results is provided in Figure 1(a) and (b), showing strategy trajectories while learning in Rock-Paper-Scissors. 
GIGA’s strategies do not converge, while GIGA-WoLF’s strategies do converge. GIGAWoLF also played directly against GIGA in this game resulting in convergence, but with a curious twist. The resulting expected and average payoffs are shown in Figure 1(c). Since both are no-regret learners, average payoffs are guaranteed to go to zero, but the short-term payoff is highly favoring GIGA-WoLF. This result raises an interesting question about the relative short-term performance of no-regret learning algorithms, which needs to be explored further. 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Pr(Paper) Pr(Rock) 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Pr(Paper) Pr(Rock) -0.08 -0.04 0 0.04 0.08 0.12 0 500000 1e+06 Reward Iterations Average Reward Expected Reward (a) GIGA (b) GIGA-WoLF (c) GIGA v. GIGA-WoLF Figure 1: Trajectories of joint strategies in Rock-Paper-Scissors when both players use GIGA (a) or GIGA-WoLF (b). Also shown (c) are the expected and average payoffs of the players when GIGA and GIGA-WoLF play against each other. GIGA-WoLF did not lead to convergence in all of the explored games. The “problematic” Shapley’s game, for which many similarly convergent algorithms fail in, also resulted in non-convergence for GIGA-WoLF. On the other hand, this game has the interesting property that both players’ when using GIGA-WoLF (or GIGA) actually achieve negative regret. In other words, the algorithms are outperforming any static strategy to which they could converge upon. This suggests a new desirable property for future multiagent (or online) learning algorithms, negative non-convergence regret (NNR). An algorithm has NNR, if it satisfies the no-regret property and either (i) achieves negative regret or (ii) its strategy converges. This property combines the criteria of regret and convergence, and GIGA-WoLF is a natural candidate for achieving this compelling result. 
6 Conclusion We introduced GIGA-WoLF, a new gradient-based algorithm, that we believe is the first to simultaneously address two criteria: no-regret and convergence. We proved GIGAWoLF has no-regret. We also proved that in a small class of normal-form games, GIGAWoLF’s strategy when played against GIGA will converge to a Nash equilibrium. We summarized experimental results of GIGA-WoLF playing in a variety of zero-sum and general-sum games. These experiments verified our theoretical results and exposed two interesting phenomenon that deserve further study: short-term performance of no-regret learners and the new desiderata of negative non-convergence regret. We expect GIGAWoLF and these results to be the foundation for further understanding of the connections between the regret and convergence criteria. References [1] Michael Bowling. Convergence and no-regret in multiagent learning. Technical Report TR04-11, Department of Computing Science, University of Alberta, 2004. [2] Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, pages 157–163, 1994. [3] Junling Hu and Michael P. Wellman. Multiagent reinforcement learning: Theoretical framework and an algorithm. In Proceedings of the Fifteenth International Conference on Machine Learning, pages 242–250, 1998. [4] Amy Greenwald and Keith Hall. Correlated Q-learning. In Proceedings of the AAAI Spring Symposium Workshop on Collaborative Learning Agents, 2002. [5] Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 746–752, 1998. [6] Satinder Singh, Michael Kearns, and Yishay Mansour. Nash convergence of gradient dynamics in general-sum games. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 541–548, 2000. 
[7] Michael Bowling and Manuela Veloso. Multiagent learning using a variable learning rate. Artificial Intelligence, 136:215–250, 2002. [8] Yu-Han Chang and Leslie Pack Kaelbling. Playing is believing: the role of beliefs in multi-agent learning. In Advances in Neural Information Processing Systems 14, 2001. [9] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68:1127–1150, 2000. [10] Peter Auer, Nicol`o Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-arm bandit problem. In 36th Annual Symposium on Foundations of Computer Science, pages 322–331, 1995. [11] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, pages 928–925, 2003. [12] Amir Jafari, Amy Greenwald, David Gondek, and Gunes Ercal. On no-regret learning, fictitious play, and nash equilibrium. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 226–223, 2001. [13] G. M. Korpelevich. The extragradient method for finding saddle points and other problems. Matecon, 12:747–756, 1976.
|
2004
|
108
|
2,517
|
Learning efficient auditory codes using spikes predicts cochlear filters Evan Smith1 evan@cnbc.cmu.edu Michael S. Lewicki2 lewicki@cnbc.cmu.edu Departments of Psychology1 & Computer Science2 Center for the Neural Basis of Cognition Carnegie Mellon University Abstract The representation of acoustic signals at the cochlear nerve must serve a wide range of auditory tasks that require exquisite sensitivity in both time and frequency. Lewicki (2002) demonstrated that many of the filtering properties of the cochlea could be explained in terms of efficient coding of natural sounds. This model, however, did not account for properties such as phase-locking or how sound could be encoded in terms of action potentials. Here, we extend this theoretical approach with algorithm for learning efficient auditory codes using a spiking population code. Here, we propose an algorithm for learning efficient auditory codes using a theoretical model for coding sound in terms of spikes. In this model, each spike encodes the precise time position and magnitude of a localized, time varying kernel function. By adapting the kernel functions to the statistics natural sounds, we show that, compared to conventional signal representations, the spike code achieves far greater coding efficiency. Furthermore, the inferred kernels show both striking similarities to measured cochlear filters and a similar bandwidth versus frequency dependence. 1 Introduction Biological auditory systems perform tasks that require exceptional sensitivity to both spectral and temporal acoustic structure. This precision is all the more remarkable considering these computations begin with an auditory code that consists of action potentials whose duration is in milliseconds and whose firing in response to hair cell motion is probabilistic. 
In computational audition, representing the acoustic signal is the first step in any algorithm, and there are numerous approaches to this problem which differ in both their computational complexity and in what aspects of signal structure are extracted. The auditory nerve representation subserves a wide variety of different auditory tasks and is presumably welladapted for these purposes. Here, we investigate the theoretical question of what computational principles might underlie cochlear processing and the representation of the auditory nerve. For sensory representations, a theoretical principle that has attracted considerable interest is efficient coding. This posits that (assuming low noise) one goal of sensory coding is to represent signals in the natural sensory environment efficiently, i.e. with minimal redundancy [1–3]. Recently, it was shown that efficient coding of natural sounds could explain auditory nerve filtering properties and their organization as a population [4] and also account for some non-linear properties of auditory nerve responses [5]. Although those results provided an explanation for auditory nerve encoding of spectral information, they fail to explain the encoding of temporal information. Here, we extend the standard efficient coding model, which has an implicit stationarity assumption, to form efficient representations of non-stationary and time-relative acoustic structures. 2 An abstract model for auditory coding In standard models of efficient coding, sensory signals are represented by vectors of fixed length, and the representation is a linear transformation of the input pattern. A simple method to encode temporal signals is to divide the signal into discrete blocks; however, this approach has several drawbacks. First, the underlying acoustic structures have no relation to the block boundaries, so elemental acoustic features may be split across blocks. 
Second, this representation implicitly assumes that the signal structures are stationary, and provides no way to represent time-relative structures such as transient sounds. Finally, this approach has limited plausibility as a model of cochlear encoding. To address all of these problems, we use a theoretical model in which sounds are represented as spikes [6,7]. In this model, the signal, x(t), is encoded with a set of kernel functions, φ1 . . . φM, that can be positioned arbitrarily and independently in time. The mathematical form of the representation with additive noise is x(t) = M X m=1 nm X i=1 sm i φm(t −τ m i ) + ϵ(t), (1) where τ m i and sm i are the temporal position and coefficient of the ith instance of kernel φm, respectively. The notation nm indicates the number of instances of φm, which need not be the same across kernels. In addition, the kernels are not restricted in form or length. The key theoretical abstraction of the model is that the signal is decomposed in terms of discrete acoustic events, each of which has a precise amplitude and temporal position. We interpret the analog amplitude values as representing a local population of auditory nerve spikes. Thus, this theory posits that the purpose of the (binary) spikes at the auditory nerve is to encode as accurately as possible the temporal position and amplitude of the acoustic events defined by φm(t). The main questions we address are 1) encoding, i.e. what are the optimal values of τ m i and sm i and 2) learning, i.e. what are the optimal kernel functions φm(t). 2.1 Encoding Finding the optimal representation of arbitrary signals in terms of spikes is a hard problem, and currently there are no known biologically plausible algorithms that solve this problem well [7]. 
There are reasons to believe that this problem can be solved (approximately) with biological mechanisms, but for our purposes here, we compute the values of τ m i and sm i for a given signal we using the matching pursuit algorithm [8]. It iteratively approximates the input signal with successive orthogonal projections onto a basis. The signal can be decomposed into x(t) =< x(t)φm > φm + Rx(t), (2) where < x(t)φm > is the inner product between the signal and the kernel and is equivalent to sm i in equation 1. The final term in equation 2, Rx(t), is the residual signal after approximating x(t) in the direction of φm. The projection with the largest magnitude inner product will minimize the power of Rx(t), thereby capturing the most structure possible with a single kernel. 0 5 10 15 20 100 200 500 1000 2000 5000 K Input Residual Reconstruction 0 5 10 15 20 25 ms 100 200 500 1000 2000 5000 Kernel CF (Hz) Input Figure 1: A brief segment of the word canteen (input) is represented as a spike code (top). A reconstruction of the speech based only on the few spikes shown (ovals in spike code) is very accurate with relatively little residual error (reconstruction and residual). The colored arrows and matching curves illustrate the correspondence between a few of the ovals and the underlying acoustic structure represented by the kernel functions. Equation 2 can be rewritten more generally as Rn x(t) =< Rn x(t)φm > φm + Rn+1 x (t), (3) with R0 x(t) = x(t) at the start of the algorithm. On each iteration, the current residual is projected onto the basis. The projection with the largest inner product is subtracted out, and its coefficient and time are recorded. This projection and subtraction leaves < Rn x(t)φm > φm orthogonal to the residual signal, Rn+1 x (t) and to all previous and future projections [8]. As a result, matching pursuit codes are composed of mutually orthogonal signal structures. 
For the results reported here, the encoding was halted when sm i fell below a preset threshold (the spiking threshold). Figure 1 illustrates the spike code model and its efficiency in representing speech. The spoken word ”canteen” was encoded as a set of spikes using a fixed set of kernel functions. The kernels can have arbitrary shape and for illustration we have chosen gammatones (mathematical approximations of cochlear filters) as the kernel functions. A brief segment from input signal (1, Input) consists of three glottal pulses in the /a/ vowel. The resulting spike code is show above it. Each oval represents the temporal position and center frequency of an underlying kernel function, with oval size and gray value indicating kernel amplitude. For four spikes, colored arrows and curves indicate the relationship between the ovals and the acoustics events they represent. As evidenced from the figure, the very small set of spike events is sufficient to produce a very accurate reconstruction of the sound (reconstruction and residual). 2.2 Learning We adapt the method used in [9] to train our kernel function. Equation 1 can be rewritten in probabilistic form as p(x|Φ) = Z p(x|Φ, ˆs)p(ˆs)ds, (4) where ˆs, an approximation of the posterior maximum, comes from the set of coefficient generated by matching pursuit. We assume the noise in the likelihood, p(x|Φ, ˆs), is Gaussian and the prior, p(s), is sparse. The basis is updated by taking the gradient of the log probability, ∂ ∂φm log(p(x|Φ)) = ∂ ∂φm log(p(x|Φ, s)) + log(p(s)) (5) = 1 2σε ∂ ∂φm [x − M X m=1 nm X i=1 ˆsm i φm(t −τ m i )]2 (6) = 1 σε [x −ˆx] X i ˆsm i (7) As noted by Olshausen (2002), equation 7 indicates that the kernels are updated in Hebbian fashion, simply as a product of activity and residual [9] (i.e., the unit shifts its preferred stimuli in the direction of the stimuli that just made it spike minus those elements already encoded by other units). 
But in the case of the spike code, rather than updating for every time-point, we need only update at times when the kernel spiked. As noted earlier, the model can use kernels of any form or length. This capability also extends to the learning algorithm such that it can learn functions of differing temporal extents, growing or shrinking them as needed. Low frequency functions and others requiring longer temporal extent can be grown from shorter initial seeds, while brief functions can be trimmed to speed processing and minimize the effects of over-fitting. Periodically during training, a simple heuristic is used to trim or extend the kernels, φm. The functions are initially zero-padded. If learning causes the power of the padding to surpass a threshold, the padding is extended. If the power of the padding plus an adjacent segment falls below the threshold, the padding is trimmed from the end. Following the gradient step and length adjustment, the kernels are again normalized and the next training signal is encoded. 3 Adapting kernels to natural sounds The spike coding algorithm was used to learn kernel functions for two different classes of sounds: human speech and music. For speech, the algorithm trained on a subset the TIMIT Speech Corpus. Each training sample consisted of a single speaker saying a single sentence. The signals were bandpass filtered to remove DC components of the signal and to prevent aliasing from affecting learning. The signals were all normalized to a maximum amplitude of 1. Each of the 30 kernel functions were initialized to random Gaussian vectors of 100 samples in duration. The threshold below which spikes (values of sm) were ignored during the encoding stage was set at 0.1, which allowed for an initial encoding of ∼12dB signalto-noise ratio (SNR). As indicated by equation 7, the gradient depends on the residual. If the residual drops near zero or is predominately noise then learning is impeded. 
By slowly increasing the spiking threshold as the average residual drops, we retain some signal structure in the residual for further training. At the same time, the power distribution of natural sounds means that high frequency signal components might fall entirely below threshold, preventing their being learned. One possible solution that was not implemented here is using separate thresholds for each kernel. Figure 2: When adapted to speech, kernel functions become asymmetric sinusoids (smooth curves in red, zero padding has been removed for plotting), with sharp attacks and gradual decays. They also adapt in temporal extent, with longer and shorter functions emerging from the same initial length. These learned kernels are strikingly similar to revcor functions obtained from cat auditory nerve fibers (noisy curves in blue). The revcor functions were normalized and aligned in phase with the learned kernels but are otherwise unaltered (no smoothing or fitting). Figure 2 shows the kernel functions trained on speech (red curves). All are temporally localized, bandpass filters. They are similar in form to previous results but with several notable differences. Most notably, the learned kernel functions are temporally asymmetric, with sharp attack and gradual decay which matches physiological filtering properties of the auditory nerves. Each kernel function in figure 2 is overlayed on a so-called reversecorrelation (revcor) function which is an estimate of the physiological impulse response function for an individual auditory nerve fiber [10]. The revcor functions have been normalized, and the most closely matching in terms of center frequency and envelop were phase aligned with learned kernels by hand. No additional fitting was done, yet there is a striking similarity between the inferred kernels functions and physiologically estimated reverse-correlation functions. 
For 25 out of 30 kernel functions, we found a close match to the physiological revcor functions (correlation > 0.8). Of the remaining filters, all possessed the same basic asymmetric filter structure show in figure 2 and showed a more modest match to the data (correlation > 0.5). In the standard efficient coding model, the signal and the basis functions are all the same length. In order for the basis to span the signal space in the time domain and still be temporally localized, some of the learned functions are essentially replications of one another. In the spike coding model, this redundancy does not occur because coding is time-relative. Kernel functions can be placed arbitrarily in time such that one kernel function can code for similar acoustic events at different points in the signal. So, temporally extended functions can be learned without causing an explosion in the number of high-frequency functions 0.1 0.2 0.5 1 2 5 0.1 0.2 0.5 1 2 5 Center Frequency (kHz) Bandwidth (kHz) Speech Prediction Auditory Nerve Filters Figure 3: The center frequency vs. bandwidth distribution of learned kernel functions (red squares) plotted against physiological data (blue pluses). needed to span the signal space. Because cochlear coding also shares this quality, it might also allow more precise predictions about the population characteristics of cochlear filters. Individually, the learned kernel functions closely match the linear component of cochlear filters. We can also compare the learned kernels against physiological data in terms of population distributions. In frequency space, our learned population follows the approximately logarithmic distribution found in the cochlea, a more natural distribution of filters compared to previous findings, where the need to tile high-frequency space biased the distribution [4]. Figure 3 presents a log-log scatter-plot of the center frequency of each kernel versus its bandwidth (red squares). 
Plotted on the same axis are two sets of empirical data. One set (blue pluses) comes from a large corpus of reverse-correlation functions derived from physiological recordings of auditory nerve fibers [10]. Both the slope and distribution of the learned kernel functions match those of the empirical data. The distribution of learned kernels even appears to follow shifts in the slope of the empirical data at the high and low frequencies. 4 Coding Efficiency We can quantify the coding efficiency of the learned kernel functions in bits so as to objectively evaluate the model and compare it quantitatively to other signal representations. Rate-fidelity provides a useful objective measure for comparison. Here we use a method developed in [7] which we now briefly describe. Computing the rate-fidelity curves begins with associated pairs of coefficients and time values, {sm i , τ m i }, which are initially stored as double precision variables. Storing the original time values referenced to the start of the signal is costly because their range can be arbitrarily large and the distribution of time points is essentially uniform. Storing only the time since the last spike, δτ m i , greatly restricts the range and produces a variable that approximately follows a gamma distribution. Rate-fidelity curves are generated by varying the precision of the code, {sm i , δτ m i }, and computing the resulting fidelity through reconstruction. A uniform quantizer is used to vary the precision of the code between 1 and 16 bits. At all levels of precision, the bin widths for quantization are selected so that equal numbers of values fall in each bin. All sm i or δτ m i that fall within a bin are recoded to have the same value. We use the mean of the non-quantized values that fell within the bin. sm i and δτ m i are quantized independently. Treating the quantized values as samples from a random variable, we estimate a code’s entropy (bits/coefficient) from histograms of the values. 
Rate is then the product of the estimated entropy of the quantized variables and the number of coefficients per second for a given signal. At each level of precision the signal is reconstructed based on the quantized values, and an SNR for the code is computed. This process was repeated across a set of signals and the results were averaged to produce rate-fidelity curves. Coding efficiency can be measured in nearly identical fashion for other signal representations. For comparison we generate rate-fidelity curves for Fourier and wavelet representations as well as for a spike code using either learned kernel functions or gammatone functions. Fourier coefficients were obtained for each signal via Fast Fourier Transform. The real and imaginary parts were quantized independently, and the rate was based on the estimated entropy of the quantized coefficients. Reconstruction was simply the inverse Fourier transform of the quantized coefficients. Similarly, coding efficiency using Daubechies wavelets was estimated using Matlab’s discrete wavelet transform and inverse wavelet transform functions. Curves for the gammatone spike code were generated as described above. Figure 4 shows the rate-fidelity curves calculated for speech from the TIMIT speech corpus [11]. At low bit rates (below 40 Kbps), both of the spike codes produce more efficient representations of speech than the other traditional representations. For example, between 10 and 20 Kbps the fidelity of the spike representation of speech using learned kernels is approximately twice that of either Fourier or wavelets. The learned kernels are also sightly but significantly more efficient than spike codes using gammatones, particularly in the case of music. The kernel functions trained on music are more extended in time and appear better able to describe harmonic structure than the gammatones. 
As the number of spikes increases the spike codes become less efficient, with the curve for learned kernels dropping more rapidly than for gammatones. Encoding sounds to very high precision requires setting the spike threshold well below the threshold used in training. It may be that the learned kernel functions are not well adapted to the statistics of very low amplitude sounds. At higher bit rates (above 60 Kbps) the Fourier and wavelet representations produce much higher rate-fidelity curves than either spike codes. 5 Conclusion We have presented a theoretical model of auditory coding in which temporal kernels are the elemental features of natural sounds. The essential property of these features is that they can describe acoustic structure at arbitrary time points, and can thus represent nonstationary, transient sounds in a compact and shift-invariant manner. We have shown that by using this time-relative spike coding model and adapting the kernel shapes to efficiently code natural sounds, it is possible to account for both the detailed filter shapes of auditory nerve fibers and their distribution as a population. Moreover, we have demonstrated quantitatively that, at a broad range of low to medium bit rates, this type of code is substantially more efficient than conventional signal representations such as Fourier or wavelet transforms. References [1] H. B. Barlow. Possible principles underlying the transformation of sensory messages. In W. A. Rosenbluth, editor, Sensory Communication, pages 217–234. MIT Press, 0 10 20 30 40 50 60 70 80 90 0 5 10 15 20 25 30 35 40 Rate (Kbps) SNR (dB) Spike Code: adapted Spike Code: gammatone Block Code: wavelet Block Code: Fourier Figure 4: Rate-Fidelity curves speech were made for spike coding using both learned kernels (red) and gammatones (light blue) as well as using discrete Daubechies wavelet transform (black) and Fourier transform (dark blue). Cambridge, 1961. [2] J. J. Atick. 
Could information-theory provide an ecological theory of sensory processing. Network, 3(2):213–251, 1992. [3] E. Simoncelli and B. Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193–1216, 2001. [4] M. S. Lewicki. Efficient coding of natural sounds. Nature Neuroscience, 5(4):356– 363, 2002. [5] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4:819–825, 2001. [6] M. S. Lewicki. Efficient coding of time-varying patterns using a spiking population code. In R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki, editors, Probabilistic Models of the Brain: Perception and Neural Function, pages 241–255. MIT Press, Cambridge, MA, 2002. [7] E. C. Smith and M. S. Lewicki. Efficient coding of time-relative structure using spikes. Neural Computation, 2004. [8] S. G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993. [9] B. A. Olshausen. Sparse codes and spikes. In R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki, editors, Probabilistic Models of the Brain: Perception and Neural Function, pages 257–272. MIT Press, Cambridge, MA, 2002. [10] L. H. Carney, M. J. McDuffy, and I. Shekhter. Frequency glides in the impulse responses of auditory-nerve fibers. Journal of the Acoustical Society of America, 105:2384–2391, 1999. [11] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, N. L. Dahlgren, and V. Zue. Timit acoustic-phonetic continuous speech corpus, 1990.
|
2004
|
109
|
2,518
|
Non-Local Manifold Tangent Learning Yoshua Bengio and Martin Monperrus Dept. IRO, Universit´e de Montr´eal P.O. Box 6128, Downtown Branch, Montreal, H3C 3J7, Qc, Canada {bengioy,monperrm}@iro.umontreal.ca Abstract We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation suggests to explore non-local manifold learning algorithms which attempt to discover shared structure in the tangent planes at different positions. A criterion for such an algorithm is proposed and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where a local non-parametric method fails. 1 Introduction A central issue of generalization is how information from the training examples can be used to make predictions about new examples, and without strong prior assumptions, i.e. in non-parametric models, this may be fundamentally difficult as illustrated by the curse of dimensionality. There has been in recent years a lot of work on unsupervised learning based on characterizing a possibly non-linear manifold near which the data would lie, such as Locally Linear Embedding (LLE) (Roweis and Saul, 2000), Isomap (Tenenbaum, de Silva and Langford, 2000), kernel Principal Components Analysis (PCA) (Sch¨olkopf, Smola and M¨uller, 1998), Laplacian Eigenmaps (Belkin and Niyogi, 2003), and Manifold Charting (Brand, 2003). 
These are all essentially non-parametric methods that can be shown to be kernel methods with an adaptive kernel (Bengio et al., 2004), and which represent the manifold on the basis of local neighborhood relations, very often constructed using the nearest neighbors graph (the graph with one vertex per observed example, and arcs between near neighbors). The above methods characterize the manifold through an embedding which associates each training example (an input object) with a low-dimensional coordinate vector (the coordinates on the manifold). Other closely related methods characterize the manifold as well as “noise” around it. Most of these methods consider the density as a mixture of flattened Gaussians, e.g. mixtures of factor analyzers (Ghahramani and Hinton, 1996), Manifold Parzen windows (Vincent and Bengio, 2003), and other local PCA models such as mixtures of probabilistic PCA (Tipping and Bishop, 1999). This is not an exhaustive list, and recent work also combines modeling through a mixture density and dimensionality reduction (Teh and Roweis, 2003; Brand, 2003). In this paper we claim that there is a fundamental weakness with such kernel methods, due to the locality of learning: we show that the local tangent plane of the manifold at a point x is defined based mostly on the near neighbors of x according to some possibly datadependent kernel KD. As a consequence, it is difficult with such methods to generalize to new combinations of values x that are “far” from the training examples xi, where being “far” is a notion that should be understood in the context of several factors: the amount of noise around the manifold (the examples do not lie exactly on the manifold), the curvature of the manifold, and the dimensionality of the manifold. For example, if the manifold curves quickly around x, neighbors need to be closer for a locally linear approximation to be meaningful, which means that more data are needed. 
Dimensionality of the manifold compounds that problem because the amount of data thus needed will grow exponentially with it. Saying that y is "far" from x means that y is not well represented by its projection on the tangent plane at x. In this paper we explore one way to address that problem, based on estimating the tangent planes of the manifolds as a function of x, with parameters that can be estimated not only from the data around x but from the whole dataset. Note that there can be more than one manifold (e.g. in vision, one may imagine a different manifold for each "class" of object), but the structure of these manifolds may be related, something that many previous manifold learning methods did not take advantage of. We present experiments on a variety of tasks illustrating the weaknesses of the local manifold learning algorithms enumerated above. The most striking result is that the model is able to generalize a notion of rotation learned on one kind of image (digits) to a very different kind (alphabet characters), i.e. very far from the training data.

2 Local Manifold Learning

By "local manifold learning", we mean a method that derives information about the local structure of the manifold (i.e. implicitly its tangent directions) at x based mostly on the training examples "around" x. There is a large group of manifold learning methods (as well as the spectral clustering methods) that share several characteristics, and can be seen as data-dependent kernel PCA (Bengio et al., 2004). These include LLE (Roweis and Saul, 2000), Isomap (Tenenbaum, de Silva and Langford, 2000), kernel PCA (Schölkopf, Smola and Müller, 1998) and Laplacian Eigenmaps (Belkin and Niyogi, 2003). They first build a data-dependent Gram matrix M with n × n entries $K_D(x_i, x_j)$, where $D = \{x_1, \ldots, x_n\}$ is the training set and $K_D$ is a data-dependent kernel, and compute the eigenvector-eigenvalue pairs $\{(v_k, \lambda_k)\}$ of M.
The embedding of the training set is obtained directly from the principal eigenvectors $v_k$ of M (the i-th element of $v_k$ gives the k-th coordinate of $x_i$'s embedding, possibly scaled by $\sqrt{\lambda_k/n}$, i.e. $e_k(x_i) = v_{ik}$), and the embedding for a new example can be estimated using the Nyström formula (Bengio et al., 2004):
$$e_k(x) = \frac{1}{\lambda_k} \sum_{i=1}^{n} v_{ki}\, K_D(x, x_i)$$
for the k-th coordinate of x, where $\lambda_k$ is the k-th eigenvalue of M (the optional scaling by $\sqrt{\lambda_k/n}$ would also apply). The above equation says that the embedding for a new example x is a local interpolation of the manifold coordinates of its neighbors $x_i$, with interpolating weights given by $K_D(x, x_i)/\lambda_k$. To see more clearly how the tangent plane may depend only on the neighbors of x, consider the relation between the tangent plane and the embedding function: the tangent plane at x is simply the subspace spanned by the vectors $\partial e_k(x)/\partial x$. In the case of very "local" kernels like that of LLE, spectral clustering with Gaussian kernel, Laplacian Eigenmaps or kernel PCA with Gaussian kernel, that derivative only depends significantly on the near neighbors of x. Consider for example kernel PCA with a Gaussian kernel: then $\partial e_k(x)/\partial x$ can be closely approximated by a linear combination of the difference vectors $(x - x_j)$ for $x_j$ near x. The weights of that combination may depend on the whole data set, but if the ambient space has many more dimensions than the number of such "near" neighbors of x, this is a very strong locally determined constraint on the shape of the manifold. The case of Isomap is less obvious, but we show below that it is also local. Let D(a, b) denote the graph geodesic distance going only through a, b and points from the training set. As shown in (Bengio et al., 2004), the corresponding data-dependent kernel can be defined as
$$K_D(x, x_i) = -\tfrac{1}{2}\Big(D(x, x_i)^2 - \tfrac{1}{n}\sum_j D(x, x_j)^2 - \bar{D}_i + \bar{D}\Big)$$
where $\bar{D}_i = \tfrac{1}{n}\sum_j D(x_i, x_j)^2$ and $\bar{D} = \tfrac{1}{n}\sum_j \bar{D}_j$.
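The Nyström out-of-sample formula quoted above amounts to a single matrix product with per-eigenvalue scaling; a minimal numpy sketch (assuming the Gram-matrix eigendecomposition is already available):

```python
import numpy as np

def nystrom_embedding(K_new, eigvecs, eigvals):
    """Out-of-sample embedding via the Nystrom formula
    e_k(x) = (1/lambda_k) * sum_i v_ki K_D(x, x_i).
    K_new:   (m, n) kernel values between m new points and n training points.
    eigvecs: (n, K) principal eigenvectors v_k of the Gram matrix M.
    eigvals: (K,)   corresponding eigenvalues lambda_k."""
    return K_new @ eigvecs / eigvals   # column k is scaled by 1/lambda_k
```

Applying it to the rows of the training Gram matrix recovers the training embedding, which makes a quick sanity check.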
Let $N(x, x_i)$ denote the index j of the training set example $x_j$ that is a neighbor of x that minimizes $\|x - x_j\| + D(x_j, x_i)$. Then
$$\frac{\partial e_k(x)}{\partial x} = \frac{1}{\lambda_k} \sum_i v_{ki} \left[ \frac{1}{n} \sum_j D(x, x_j)\, \frac{x - x_{N(x,x_j)}}{\|x - x_{N(x,x_j)}\|} \; - \; D(x, x_i)\, \frac{x - x_{N(x,x_i)}}{\|x - x_{N(x,x_i)}\|} \right] \quad (1)$$
which is a linear combination of vectors $(x - x_k)$, where $x_k$ is a neighbor of x. This clearly shows that the tangent plane at x associated with Isomap is also included in the subspace spanned by the vectors $(x - x_k)$ where $x_k$ is a neighbor of x. There are also a variety of local manifold learning algorithms which can be classified as "mixtures of pancakes" (Ghahramani and Hinton, 1996; Tipping and Bishop, 1999; Vincent and Bengio, 2003; Teh and Roweis, 2003; Brand, 2003). These are generally mixtures of Gaussians with a particular covariance structure. When the covariance matrix is approximated using its principal eigenvectors, this leads to "local PCA" types of methods. For these methods the local tangent directions directly correspond to the principal eigenvectors of the local covariance matrices. Learning is also local since it is mostly the examples around the Gaussian center that determine its covariance structure. The problem is not so much due to the form of the density as a mixture of Gaussians. The problem is that the local parameters (e.g. local principal directions) are estimated mostly based on local data. There is usually a non-local interaction between the different Gaussians, but its role is mainly of global coordination, e.g. where to set the Gaussian centers to allocate them properly where there is data, and optionally how to orient the principal directions so as to obtain a globally coherent coordinate system for embedding the data.

2.1 Where Local Manifold Learning Would Fail

It is easy to imagine at least four failure causes for local manifold learning methods, and combining them will create even greater problems:
• Noise around the manifold: data are not exactly lying on the manifold.
In the case of non-linear manifolds, the presence of noise means that more data around each pancake region will be needed to properly estimate the tangent directions of the manifold in that region.
• High curvature of the manifold. Local manifold learning methods basically approximate the manifold by the union of many locally linear patches. For this to work, there must be at least d close enough examples in each patch (more with noise). With a high curvature manifold, more – smaller – patches will be needed, and the number of required patches will grow exponentially with the dimensionality of the manifold. Consider for example the manifold of translations of a high-contrast image. The tangent direction corresponds to the change in image due to a small translation, i.e. it is non-zero only at edges in the image. After a one-pixel translation, the edges have moved by one pixel, and may not overlap much with the edges of the original image if it had high contrast. This is indeed a very high curvature manifold.
• High intrinsic dimension of the manifold. We have already seen that high manifold dimensionality d is hurtful because O(d) examples are required in each patch and $O(r^d)$ patches (for some r depending on curvature and noise) are necessary to span the manifold. In the translation example, if the image resolution is increased, then many more training images will be needed to capture the curvature around the translation manifold with locally linear patches. Yet the physical phenomenon responsible for translation is expressed by a simple equation, which does not get more complicated with increasing resolution.
• Presence of many manifolds with little data per manifold. In many real-world contexts there is not just one global manifold but a large number of manifolds which however share something about their structure. A simple example is the manifold of transformations (view-point, position, lighting, ...) of 3D objects in 2D images.
There is one manifold per object instance (corresponding to the successive application of small amounts of all of these transformations). If there are only a few examples for each such class then it is almost impossible to learn the manifold structures using only local manifold learning. However, if the manifold structures are generated by a common underlying phenomenon then a non-local manifold learning method could potentially learn all of these manifolds and even generalize to manifolds for which a single instance is observed, as demonstrated in the experiments below.

3 Non-Local Manifold Tangent Learning

Here we choose to characterize the manifolds in the data distribution through a matrix-valued function F(x) that predicts at $x \in R^n$ a basis for the tangent plane of the manifold near x, hence $F(x) \in R^{d \times n}$ for a d-dimensional manifold. Basically, F(x) specifies "where" (in which directions) one expects to find near neighbors of x. We are going to consider a simple supervised learning setting to train this function. As with Isomap, we consider that the vectors $(x - x_i)$, with $x_i$ a near neighbor of x, span a noisy estimate of the manifold tangent space. We propose to use them to define a "target" for training F(x). In our experiments we simply collected the k nearest neighbors of each example x, but better selection criteria might be devised. Points on the predicted tangent subspace can be written $F'(x)w$ with $w \in R^d$ being local coordinates in the basis specified by F(x). Several criteria are possible to match the neighbor differences with the subspace defined by F(x). One that yields simple analytic calculations is to minimize the distance between the $x - x_j$ vectors and their projection on the subspace defined by F(x). The low-dimensional local coordinate vector $w_{tj} \in R^d$ that matches neighbor $x_j$ of example $x_t$ is thus an extra free parameter that has to be optimized, but is obtained analytically.
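Such a matrix-valued predictor F: R^n -> R^{d x n} can be sketched as a small one-hidden-layer network (the form used in the paper's experiments; the layer sizes and initialization below are illustrative assumptions):

```python
import numpy as np

class TangentPredictor:
    """One-hidden-layer network F: R^n -> R^{d x n} predicting a tangent
    basis at x. Minimal numpy sketch; the hidden size and random
    initialization scale are illustrative, not the paper's settings."""
    def __init__(self, n, d, hidden=10, seed=0):
        rng = np.random.default_rng(seed)
        self.n, self.d = n, d
        self.W1 = rng.normal(scale=0.1, size=(hidden, n))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(d * n, hidden))
        self.b2 = np.zeros(d * n)

    def __call__(self, x):
        h = np.tanh(self.W1 @ x + self.b1)
        return (self.W2 @ h + self.b2).reshape(self.d, self.n)
```

The d rows of the output span the predicted tangent plane at x.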
The overall training criterion involves a double optimization over the function F and the local coordinates $w_{tj}$ of what we call the relative projection error:
$$\min_{F, \{w_{tj}\}} \sum_t \sum_{j \in N(x_t)} \frac{\|F'(x_t) w_{tj} - (x_t - x_j)\|^2}{\|x_t - x_j\|^2} \quad (2)$$
where N(x) denotes the selected set of near neighbors of x. The normalization by $\|x_t - x_j\|^2$ is to avoid giving more weight to the neighbors that are further away. The above ratio amounts to minimizing the square of the sine of the projection angle. To perform the above minimization, we can do coordinate descent (which guarantees convergence to a minimum), i.e. alternate changes in F and changes in the w's which at each step decrease the total criterion. Since the minimization over the w's can be done separately for each example $x_t$ and neighbor $x_j$, it is equivalent to minimize
$$\frac{\|F'(x_t) w_{tj} - (x_t - x_j)\|^2}{\|x_t - x_j\|^2} \quad (3)$$
over the vector $w_{tj}$ for each such pair (done analytically) and compute the gradient of the above over F (or its parameters) to move F slightly (we used stochastic gradient on the parameters of F). The solution for $w_{tj}$ is obtained by solving the linear system
$$F(x_t) F'(x_t)\, w_{tj} = F(x_t)\, \frac{x_t - x_j}{\|x_t - x_j\|^2}. \quad (4)$$
In our implementation this is done robustly through a singular value decomposition $F'(x_t) = U S V'$ and $w_{tj} = B(x_t - x_j)$, where B can be precomputed for all the neighbors of $x_t$: $B = \big(\sum_{k=1}^{d} 1_{S_k > \epsilon}\, V_{\cdot k} V_{\cdot k}' / S_k^2\big) F(x_t)$. The gradient of the criterion with respect to the i-th row of $F(x_t)$, holding $w_{tj}$ fixed, is simply
$$2 \sum_j \frac{w_{tji}}{\|x_t - x_j\|^2}\, \big(F'(x_t) w_{tj} - (x_t - x_j)\big) \quad (5)$$
where $w_{tji}$ is the i-th element of $w_{tj}$. In practice, it is not necessary to store more than one $w_{tj}$ vector at a time. In the experiments, F(·) is parameterized as an ordinary one-hidden-layer neural network with n inputs and d × n outputs. It is trained by stochastic gradient descent, one example $x_t$ at a time. Although the above algorithm provides a characterization of the manifold, it does not directly provide an embedding nor a density function.
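The inner minimization over $w_{tj}$ (eqs. 3-4) can be sketched with an ordinary least-squares solve, equivalent to the pseudo-inverse/SVD solution for a single $(x_t, x_j)$ pair (a sketch, not the authors' code):

```python
import numpy as np

def relative_projection_error(F_t, x_t, x_j):
    """Analytically find the local coordinates w minimizing
    ||F'(x_t) w - (x_t - x_j)||^2 / ||x_t - x_j||^2 for one neighbor pair
    (eq. 3), via least squares, and return the resulting relative error."""
    diff = x_t - x_j
    # rows of F_t span the predicted tangent plane; fit F_t' w = diff
    w, *_ = np.linalg.lstsq(F_t.T, diff, rcond=None)
    resid = F_t.T @ w - diff
    return float(resid @ resid) / float(diff @ diff)
```

The error is 0 when the neighbor difference lies in the predicted tangent plane and 1 when it is orthogonal to it.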
However, once the tangent plane function is trained, there are ways to use it to obtain all of the above. The simplest method is to apply existing algorithms that provide both an embedding and a density function based on a Gaussian mixture with pancake-like covariances. For example one could use (Teh and Roweis, 2003) or (Brand, 2003), the local covariance matrix around x being constructed from $F'(x)\,\mathrm{diag}(\sigma^2(x))\,F(x)$, where $\sigma_i^2(x)$ should estimate $\mathrm{Var}(w_i)$ around x.

3.1 Previous Work on Non-Local Manifold Learning

The non-local manifold learning algorithm presented here (find F(·) which minimizes the criterion in eq. 2) is similar to the one proposed in (Rao and Ruderman, 1999) to estimate the generator matrix of a Lie group. That group defines a one-dimensional manifold generated by following the orbit $x(t) = e^{Gt} x(0)$, where G is an n × n matrix and t is a scalar manifold coordinate. A multi-dimensional manifold can be obtained by replacing Gt above by a linear combination of multiple generating matrices. In (Rao and Ruderman, 1999) the matrix exponential is approximated to first order by (I + Gt), and the authors estimate G for a simple signal undergoing translations, using as a criterion the minimization of $\sum_{x, \tilde{x}} \min_t \|(I + Gt)x - \tilde{x}\|^2$, where $\tilde{x}$ is a neighbor of x in the data. Note that in this model the tangent plane is a linear function of x, i.e. $F_1(x) = Gx$. By minimizing the above across many pairs of examples, a good estimate of G for the artificial data was recovered by (Rao and Ruderman, 1999). Our proposal extends this approach to multiple dimensions and non-linear relations between x and the tangent planes. Note also the earlier work on Tangent Distance (Simard, LeCun and Denker, 1993), in which the tangent planes are not learned but used to build a nearest neighbor classifier that is based on the distance between the tangent subspaces around two examples to be compared.
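The pancake-like local covariance $F'(x)\,\mathrm{diag}(\sigma^2(x))\,F(x)$ mentioned above, used to build a Gaussian-mixture density from the tangent predictor, is a one-line construction; a minimal sketch:

```python
import numpy as np

def local_covariance(F_x, sigma2):
    """Build the local covariance F'(x) diag(sigma^2(x)) F(x) from a
    predicted tangent basis F_x (d x n) and per-direction variance
    estimates sigma2 (length d), as described in the text."""
    return F_x.T @ np.diag(sigma2) @ F_x
```

With an orthonormal F_x this yields a rank-d "pancake" covariance whose principal directions are the predicted tangent directions.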
The main advantage of the approach proposed here over local manifold learning is that the parameters of the tangent plane predictor can be estimated using data from very different regions of space, thus in principle allowing it to be less sensitive to all four of the problems described in 2.1, thanks to sharing of information across these different regions.

4 Experimental Results

The objective of the experiments is to validate the proposed algorithm: does it estimate the true tangent planes well? Does it learn better than a local manifold learning algorithm?

Error Measurement In addition to visualizing the results for the low-dimensional data, we measure performance by considering how well the algorithm learns the local tangent distance, as measured by the normalized projection error of nearest neighbors (eq. 3). We compare the errors of four algorithms, always on test data not used to estimate the tangent plane: (a) true analytic (using the true manifold's tangent plane at x computed analytically), (b) tangent learning (using the neural-network tangent plane predictor F(x), trained using the k ≥ d nearest neighbors in the training set of each training set example), (c) Isomap (using the tangent plane defined in Eq. 1), (d) Local PCA (using the d principal components of the k nearest neighbors of x in the training set).

Figure 1: Task 1 2-D data with 1-D sinusoidal manifolds: the method indeed captures the tangent planes. The small segments are the estimated tangent planes. Red points are training examples.

Figure 2: Task 2 relative projection error for the k-th nearest neighbor, w.r.t. k, for the compared methods (from lowest to highest at k=1: analytic, tangent learning, local PCA, Isomap). Note the U-shape due to the opposing effects of curvature and noise.
Task 1 We first consider a low-dimensional but multi-manifold problem. The data $\{x_i\}$ are in 2 dimensions and come from a set of 40 1-dimensional manifolds. Each manifold is composed of 4 near points obtained from a randomly placed sinusoid, i.e. $\forall i \in \{1, \ldots, 4\}$, $x_i = (a + t_i, \sin(a + t_i) + b)$, where a, b, and $t_i$ are randomly chosen. Four neighbors were used for training both the Tangent Learning algorithm and the benchmark local non-parametric estimator (local PCA of the 4 neighbors). Figure 1 shows the training set and the tangent planes recovered, both on the training examples and generalizing away from the data. The neural network has 10 hidden units (chosen arbitrarily). This problem is particularly difficult for local manifold learning, which does very poorly here: the out-of-sample relative prediction errors are respectively 0.09 for the true analytic plane, 0.25 for non-local tangent learning, and 0.81 for local PCA. Task 2 This is a higher dimensional manifold learning problem, with 41 dimensions. The data are generated by sampling Gaussian curves. Each curve is of the form $x(i) = e^{t_1 - (-2 + i/10)^2 / t_2}$ with $i \in \{0, 1, \ldots, 40\}$. Note that the tangent vectors are not linear in x. The manifold coordinates are $t_1$ and $t_2$, sampled uniformly, respectively from (−1, 1) and (.1, 3.1). Normal noise (standard deviation = 0.001) is added to each point. 100 example curves were generated for training and 200 for testing. The neural network has 100 hidden units. Figure 2 shows the relative projection error for the four methods on this task, for the k-th nearest neighbor, for increasing values of k. First, the error decreases because of the effect of noise (near noisy neighbors may form a high angle with the tangent plane). Then, it increases because of the curvature of the manifold (further away neighbors form a larger angle).
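The Task 1 data generation can be sketched as follows; the sampling ranges for a, b and the $t_i$ are illustrative guesses, since the text does not specify them:

```python
import numpy as np

def make_task1_data(n_manifolds=40, pts=4, seed=0):
    """Generate Task 1 style data: points x_i = (a + t_i, sin(a + t_i) + b)
    on randomly placed sinusoidal 1-D manifolds. The ranges for a, b, t_i
    below are assumptions for illustration only."""
    rng = np.random.default_rng(seed)
    data = []
    for _ in range(n_manifolds):
        a, b = rng.uniform(0, 10), rng.uniform(-2, 4)
        t = rng.uniform(0, 1, size=pts)
        data.append(np.stack([a + t, np.sin(a + t) + b], axis=1))
    return np.concatenate(data)    # (n_manifolds * pts, 2)
```

Each group of 4 rows lies on its own sinusoidal curve, matching the multi-manifold setup of the task.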
Task 3 This is a high-dimensional multi-manifold task, involving digit images to which we have applied slight rotations, in such a way as to have knowledge of the analytic formulation of the manifolds. There is one rotation manifold for each instance of digit from the database, but only two examples for each manifold: one real image from the MNIST dataset and one slightly rotated image. 1000×2 examples are used for training and 1000×2 for testing. In this context we use k = 1 nearest neighbor only, and the manifold dimension is 1. The average relative projection error for the nearest neighbor is 0.27 for the analytic tangent plane, 0.43 with tangent learning (100 hidden units), and 1.5 with Local PCA. Here the neural network would probably overfit if trained too much (only 100 epochs were used).

Figure 3: Left column: original image. Middle: applying a small amount of the predicted rotation. Right: applying a larger amount of the predicted rotation. Top: using the estimated tangent plane predictor. Bottom: using local PCA, which is clearly much worse.

An even more interesting experiment consists in applying the above trained predictor on novel images that come from a very different distribution but one that shares the same manifold structure: it was applied to images of other characters that are not digits. We have used the predicted tangent planes to follow the manifold by small steps (this is very easy to do in the case of a one-dimensional manifold). Figure 3 shows for example, on a letter 'M' image, the effect of a few such steps and of a larger number of steps, both for the neural network predictor and for the local PCA predictor.
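The manifold-following used in this demonstration can be sketched as repeated small steps along the unit-normalized predicted tangent, with a sign check to keep a consistent direction (the step size and the sign heuristic are assumptions, not the paper's exact procedure):

```python
import numpy as np

def follow_manifold(x0, tangent_fn, step=0.1, n_steps=20):
    """Follow a 1-D manifold by stepping along the predicted tangent
    direction at each point, flipping the sign when needed so successive
    steps do not reverse direction. Sketch of the 'small steps' idea."""
    x, prev = x0.astype(float), None
    path = [x.copy()]
    for _ in range(n_steps):
        t = tangent_fn(x).ravel()
        t = t / np.linalg.norm(t)
        if prev is not None and t @ prev < 0:   # keep a consistent orientation
            t = -t
        x = x + step * t
        prev = t
        path.append(x.copy())
    return np.stack(path)
```

With the tangent of a circle as `tangent_fn`, the path stays close to the circle, drifting only by the first-order (Euler) stepping error.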
This example illustrates the crucial point that non-local tangent plane learning is able to generalize to truly novel cases, where local manifold learning fails. In all the experiments we found that all the randomly initialized neural networks converged to similarly good solutions. The number of hidden units was not optimized, although preliminary experimentation showed that over-fitting and under-fitting were possible with too small or too large a number of hidden units.

5 Conclusion

The central claim of this paper is that there are fundamental problems with non-parametric local approaches to manifold learning, essentially due to the curse of dimensionality (at the dimensionality of the manifold), but worsened by manifold curvature, noise, and the presence of several disjoint manifolds. To address these problems, we propose that learning algorithms should be designed in such a way that they can share information, coming from different regions of space, about the structure of the manifold. In this spirit we have proposed a simple learning algorithm based on predicting the tangent plane at x with a function F(x) whose parameters are estimated based on the whole data set. Note that the same fundamental problems are present with non-parametric approaches to semi-supervised learning (e.g. as in (Szummer and Jaakkola, 2002; Chapelle, Weston and Schölkopf, 2003; Belkin and Niyogi, 2003; Zhu, Ghahramani and Lafferty, 2003)), which rely on proper estimation of the manifold in order to propagate label information. Future work should investigate how to better handle the curvature problem, e.g. by following the manifold (using the local tangent estimates), to estimate a manifold-following path between pairs of neighboring examples. The algorithm can also be extended in a straightforward way to obtain a Gaussian mixture or a mixture of factor analyzers (with the factors or the principal eigenvectors of the Gaussian centered at x obtained from F(x)).
This view can also provide an alternative criterion to optimize F(x) (the local log-likelihood of such a Gaussian). This criterion also tells us how to estimate the missing information (the variances along the eigenvector directions). Since we can estimate F(x) everywhere, a more ambitious view would consider the density as a "continuous" mixture of Gaussians (with an infinitesimal component located everywhere in space).

Acknowledgments

The authors would like to thank the following funding organizations for support: NSERC, MITACS, IRIS, and the Canada Research Chairs.

References

Belkin, M. and Niyogi, P. (2003). Using manifold structure for partially labeled classification. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15, Cambridge, MA. MIT Press.
Bengio, Y., Delalleau, O., Le Roux, N., Paiement, J.-F., Vincent, P., and Ouimet, M. (2004). Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197–2219.
Brand, M. (2003). Charting a manifold. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15. MIT Press.
Chapelle, O., Weston, J., and Schölkopf, B. (2003). Cluster kernels for semi-supervised learning. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15, Cambridge, MA. MIT Press.
Ghahramani, Z. and Hinton, G. (1996). The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, Dept. of Computer Science, University of Toronto.
Rao, R. and Ruderman, D. (1999). Learning Lie groups for invariant visual perception. In Kearns, M., Solla, S., and Cohn, D., editors, Advances in Neural Information Processing Systems 11, pages 810–816. MIT Press, Cambridge, MA.
Roweis, S. and Saul, L. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326.
Schölkopf, B., Smola, A., and Müller, K.-R. (1998).
Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319.
Simard, P., LeCun, Y., and Denker, J. (1993). Efficient pattern recognition using a new transformation distance. In Giles, C., Hanson, S., and Cowan, J., editors, Advances in Neural Information Processing Systems 5, pages 50–58, Denver, CO. Morgan Kaufmann, San Mateo.
Szummer, M. and Jaakkola, T. (2002). Partially labeled classification with Markov random walks. In Dietterich, T., Becker, S., and Ghahramani, Z., editors, Advances in Neural Information Processing Systems 14, Cambridge, MA. MIT Press.
Teh, Y. W. and Roweis, S. (2003). Automatic alignment of local representations. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15. MIT Press.
Tenenbaum, J., de Silva, V., and Langford, J. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323.
Tipping, M. and Bishop, C. (1999). Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443–482.
Vincent, P. and Bengio, Y. (2003). Manifold Parzen windows. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15, Cambridge, MA. MIT Press.
Zhu, X., Ghahramani, Z., and Lafferty, J. (2003). Semi-supervised learning using Gaussian fields and harmonic functions. In ICML'2003.
Active Learning for Anomaly and Rare-Category Detection

Dan Pelleg and Andrew Moore
School of Computer Science, Carnegie-Mellon University, Pittsburgh, PA 15213 USA
dpelleg@cs.cmu.edu, awm@cs.cmu.edu

Abstract

We introduce a novel active-learning scenario in which a user wants to work with a learning algorithm to identify useful anomalies. These are distinguished from the traditional statistical definition of anomalies as outliers or merely ill-modeled points. Our distinction is that the usefulness of anomalies is categorized subjectively by the user. We make two additional assumptions. First, there exist extremely few useful anomalies to be hunted down within a massive dataset. Second, both useful and useless anomalies may sometimes exist within tiny classes of similar anomalies. The challenge is thus to identify “rare category” records in an unlabeled noisy set with help (in the form of class labels) from a human expert who has a small budget of datapoints that they are prepared to categorize. We propose a technique to meet this challenge, which assumes a mixture model fit to the data, but otherwise makes no assumptions on the particular form of the mixture components. This property promises wide applicability in real-life scenarios and for various statistical models. We give an overview of several alternative methods, highlighting their strengths and weaknesses, and conclude with a detailed empirical analysis. We show that our method can quickly zoom in on an anomaly set containing a few tens of points in a dataset of hundreds of thousands.

1 Introduction

We begin with an example of a rare-category-detection problem: an astronomer needs to sift through a large set of sky survey images, each of which comes with many numerical parameters. Most of the objects (99.9%) are well explained by current theories and models. The remainder are anomalies, but 99% of these anomalies are uninteresting, and only 1% of them (0.001% of the full dataset) are useful.
The first type of anomalies, called “boring anomalies”, are records which are strange for uninteresting reasons such as sensor faults or problems in the image processing software. The useful anomalies are extraordinary objects which are worthy of further research. For example, an astronomer might want to cross-check them in various databases and allocate telescope time to observe them in greater detail. The goal of our work is finding this set of rare and useful anomalies. Although our example concerns astrophysics, this scenario is a promising general area for exploration wherever there is a very large amount of scientific, medical, business or intelligence data and a domain expert wants to find truly exotic rare events while not becoming swamped with uninteresting anomalies.

Figure 1: Anomalies in Sloan data: Diffraction spikes (left). Satellite trails (center). The active-learning loop is shown on the right (random set of records → ask expert → build model from data and labels → run all data through the model to classify → spot “important” records).

Two rare categories of “boring” anomalies in our test astrophysics data are shown in Figure 1. The first, a well-known optical artifact, is the phenomenon of diffraction spikes. The second consists of satellites that happened to be flying overhead as the photo was taken. As a first step, we might try defining a statistical model for the data, and identifying objects which do not fit it well. At this point, objects flagged as “anomalous” can still be almost entirely of the uninteresting class of anomalies. The computational and statistical question is then how to use feedback from the human user to iteratively reorder the queue of anomalies to be shown to the user in order to increase the chance that the user will soon see an anomaly of a whole new category. We do this in the familiar pool-based active learning framework¹. In our setting, learning proceeds in rounds.
Each round starts with the teacher labeling a small number of examples. Then the learner models the data, taking into account the labeled examples as well as the remainder of the data, which we assume to be much larger in volume. The learner then identifies a small number of input records (“hints”) which are important in the sense that obtaining labels for them would help it improve the model. These are shown to the teacher (in our scenario, a human expert) for labeling, and the cycle repeats. The model, which we call “irrelevance feedback”, is shown in Figure 1. It may seem too demanding to ask the human expert to give class labels instead of a simple “interesting” or “boring” flag. But in practice, this is not an issue—it seems easier to place objects into such “mental bins”. For example, in the astronomical data we have seen a user place most objects into previously-known categories: point sources, low-surface-brightness galaxies, etc. This also holds for the negative examples: it is frustrating to have to label all anomalies as “bad” without being able to explain why. Often, the data is better understood as time goes by, and users wish to revise their old labels in light of new examples. Note that the statistical model does not care about the names of the labels. For all it cares, the label set can be utterly changed by the user from one round to another. Our tools allow that: the labels are unconstrained and the user can add, refine, and delete classes at will. It is trivial to accommodate the simpler “interesting or not” model in this richer framework. Our work differs from traditional applications of active learning in that we assume the distribution of class sizes to be extremely skewed. For example, the smallest class may have just a few members whereas the largest may contain a few million. Generally in active learning, it is believed that, right from the start, examples from each class need to be presented to the oracle [1, 2, 3]. 
If the class frequencies were balanced, this could be achieved by random sampling. But in datasets with the rare-categories property, this no longer holds, and much of our effort is an attempt to remedy the situation. Previous active-learning work tends to tie intimately to a particular model [4, 3]. We would like to be able to “plug in” different types of models or components and therefore propose model-independent criteria. The same reasoning also precludes us from directly using distances between data points, as is done in [5].

¹ More precisely, we allow multiple queries and labels in each learning round — the traditional presentation has just one.

Figure 2: Underlying data distribution for the example (a); behavior of the lowlik method (b–f). The original data distribution is in (a). The unsupervised model fit to it in (b). The anomalous points according to lowlik, given the model in (b), are shown in (c). Given labels for the points in (c), the model in (d) is fitted. Given the new model, anomalous points according to lowlik are flagged (e). Given labels for the points in (c) and (e), this is the new fitted model (f).

Another desired property is resilience to noise. Noise can be inherent in the data (e.g., from measurement errors) or be an artifact of an ill-fitting model. In any case, we need to be able to identify query points in the presence of noise. This is not just a bonus feature: points which the model considers noisy could very well be the key to improvement if presented to the oracle. This is in contrast to the approach taken by some: a pre-assumption that the data is noiseless [6, 7].

2 Overview of Hint Selection Methods

In this section we survey several proposed methods for active learning as they apply to our setting. While the general tone is negative, what follows should not be construed as general dismissal of these methods.
Rather, it is meant to highlight specific problems with them when applied to a particular setting. Specifically, the rare-categories assumption (and in some cases, just having more than 2 classes) breaks the premises for some of them. As an example, consider the data shown in Figure 2(a). It is a mixture of two classes. One is an X-shaped distribution, from which 2000 points are drawn. The other is a circle with 100 points. In this example, the classifier is a Gaussian Bayes classifier trained in a semi-supervised manner from labeled and unlabeled data, with one Gaussian per class. The model is learned with a standard EM procedure, with the following straightforward modification [8, 9] to enable semi-supervised learning. Before each M step we clamp the class-membership values for the hinted records to match the hints (i.e., one for the labeled class for this record, and zero elsewhere). Given fully labeled data, our learner would perfectly predict class membership for this data (although it would be a poor generative model): one Gaussian centered on the circle, and another spherical Gaussian with high variance centered on the X. Now, suppose we plan to perform active learning with the following steps:

1. Start with entirely unlabeled data.
2. Perform semi-supervised learning (which, on the first iteration, degenerates to unsupervised learning).
3. Ask an expert to classify the 35 strangest records.
4. Go to Step 2.

On the first iteration (when unsupervised) the algorithm will naturally use the two Gaussians to model the data as in Figure 2(b), with one Gaussian for each of the arms of the “X”, and the points in the circle represented as members of one of them. What happens next depends entirely on the choice of the data points to show to the human expert. We now survey the methods for hint selection.

Choosing Points with Low Likelihood: A rather intuitive approach is to select as hints the points which the model performs worst on.
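The semi-supervised EM procedure described above, with responsibilities clamped to one-hot values before each M step, might look like the following sketch. This is our own minimal illustration, not the authors' code; the function name, initialization, and regularization choices are assumptions.

```python
import numpy as np

def semi_supervised_em(X, labels, k, n_iter=50, reg=1e-6, seed=0):
    """EM for a Gaussian mixture where some points carry class labels.

    labels[i] = -1 for unlabeled points, otherwise a class index in [0, k).
    Before each M step, the responsibilities of labeled points are clamped
    to one-hot vectors matching their labels (the modification of [8, 9]).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]            # init means from data
    cov = np.stack([np.cov(X.T) + reg * np.eye(d)] * k)
    pi = np.full(k, 1.0 / k)

    for _ in range(n_iter):
        # E step: responsibilities r[i, c] proportional to pi_c N(x_i | mu_c, cov_c)
        log_r = np.empty((n, k))
        for c in range(k):
            diff = X - mu[c]
            prec = np.linalg.inv(cov[c])
            maha = np.einsum('ij,jk,ik->i', diff, prec, diff)
            _, logdet = np.linalg.slogdet(cov[c])
            log_r[:, c] = np.log(pi[c]) - 0.5 * (maha + logdet + d * np.log(2 * np.pi))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # Clamp: labeled (hinted) points get one-hot responsibilities
        lab = labels >= 0
        r[lab] = 0.0
        r[lab, labels[lab]] = 1.0

        # M step: standard weighted updates
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        for c in range(k):
            mu[c] = (r[:, c, None] * X).sum(axis=0) / nk[c]
            diff = X - mu[c]
            cov[c] = (r[:, c, None] * diff).T @ diff / nk[c] + reg * np.eye(d)
    return pi, mu, cov, r
```

The only departure from plain EM is the clamping step, which forces each hinted record to contribute its full weight to the labeled class in the subsequent M step.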
Selecting such points can be viewed as model variance minimization [4] or as selection of the points furthest away from any labeled points [5]. We do this by ranking each point in order of increasing model likelihood, and choosing the most anomalous items.

Figure 3: Behavior of the ambig (a–c) and interleave (d–e) methods. (a) shows the unsupervised model and the points which ambig flags as anomalous, given this model. (b) is the model learned using labels for these points, along with the point it flags. (c) is the last refinement, given both sets of labels.

We show what this approach would flag in the given configuration in Figure 2. It is derived from a screenshot of a running version of our code, redrawn by hand for clarity. Each subsequent drawing shows a model which EM converged to after including the new labels, and the hints it chooses under a particular scheme (here, the one we call lowlik). These hints affect the model shown for the next round. The underlying distribution is shown in gray shading. We use this same convention for the other methods below. In the first round, the Mahalanobis distance of the points in the corners is greater than that of those in the circle, so they are flagged. Another effect we see is that one of the arms is represented more heavily, probably due to its lower variance. In any event, none of the points in the circle is flagged. The outcome is that the next round ends up in a similar local minimum. We can also see that another step will not result in the desired model. Only after obtaining labels for all of the “outlier” points (that is, those on the extremes of the distribution) will this approach go far enough down the list to hit a point in the circle. This means that in scenarios where there are more than a few hundred noisy data points, classification accuracy is likely to be very low.

Choosing Ambiguous Points: Another popular approach is to choose the points which the learner is least certain about.
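Before moving on, the lowlik criterion from the preceding paragraphs can be sketched as follows. This is a hypothetical implementation (names are ours): each point is scored under the full mixture, and the lowest-density indices are returned as hints.

```python
import numpy as np
from scipy.stats import multivariate_normal

def lowlik_hints(X, pi, mu, cov, m, already_labeled=()):
    """Rank points by increasing mixture likelihood and return the m most
    anomalous indices (lowest density first), skipping labeled ones."""
    dens = np.zeros(len(X))
    for w, mean_c, cov_c in zip(pi, mu, cov):
        dens += w * multivariate_normal.pdf(X, mean=mean_c, cov=cov_c)
    order = np.argsort(dens)                 # ascending density
    skip = set(already_labeled)
    return [i for i in order if i not in skip][:m]
```

Because the score is the full mixture density, a point that sits in a dense region of any component (or on the fringe of a wide one) will never rank highly, which is exactly the failure mode discussed above.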
Choosing the least-certain points is the spirit of “query by committee” [10] and “uncertainty sampling” [11]. In our setting it is implemented in the following way. For each data point, the EM algorithm maintains an estimate of the probability of its membership in every mixture component. For each point, we compute the entropy of the set of all such probabilities, and rank the points in decreasing order of entropy. This way, the top of the list holds the objects which are “owned” by multiple components. For our example, this would choose the points shown in Figure 3. As expected, points on the decision boundaries between classes are chosen. Here, the ambiguity sets are useless for the purpose of modeling the entire distribution. One might argue this only holds for this contrived distribution. In general, however, this is a fairly common occurrence, in the sense that the ambiguity criterion works to nudge the decision surfaces so they better fit a relatively small set of labeled examples. It may help model the points very close to the boundaries, but it does not improve generalization accuracy in the general case. Indeed, we see that if we repeatedly apply this criterion we end up asking for labels for a great number of points in close proximity, to very little effect on the overall model. In the results section below, we call this method ambig.

Combining Unlikely and Ambiguous Points: Our next candidate is a hybrid method which tries to combine the hints from the two previous methods. Recall that they both produce a ranked list of all the points. We merge the lists into another ranked list in the following way. Alternate between the lists when picking items. For each list, pick the top item that has not already been placed in the output list. When all elements are taken, the output list is a ranked list as required. We then pick the top items from this list as hints. As expected, we get a good mix of points from both hint sets (not shown).
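The entropy ranking behind ambig and the alternating merge behind the hybrid method can be sketched together. This is a hypothetical implementation under our own naming; the merge skips items already placed in the output, as described above.

```python
import numpy as np

def ambig_ranking(R):
    """Rank points by decreasing entropy of their component-membership
    probabilities R[i, c]; the most ambiguous points come first."""
    P = np.clip(R, 1e-12, 1.0)          # avoid log(0)
    H = -(P * np.log(P)).sum(axis=1)    # entropy per point
    return list(np.argsort(-H))

def alternating_merge(list_a, list_b):
    """Merge two ranked lists by alternating between them, skipping
    items already placed in the output list."""
    out, used = [], set()
    iters = [iter(list_a), iter(list_b)]
    while True:
        progressed = False
        for it in iters:
            for item in it:
                if item not in used:
                    used.add(item)
                    out.append(item)
                    progressed = True
                    break
        if not progressed:
            return out
```

Feeding `alternating_merge` the lowlik and ambig rankings yields the mix-ambig-lowlik hint list.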
But, since neither method identifies the small cluster, their union fails to find it as well. However, in general it is useful to combine different criteria in this way, as our empirical results below show. There, this method is called mix-ambig-lowlik.

Interleaving: We now present what we consider to be the logical conclusion of the observations above. To the best of our knowledge, the approach is novel. The key insight is that our group of anomalies was, in fact, reasonably ordinary when analyzed on a global scale. In other words, the mixture density of the region we chose for the group of anomalies is not sufficiently low for them to rank high on the hint list. Recall that the mixture model sums up the weighted per-component densities. Therefore, a point that is “split” approximately evenly among several components, and scores reasonably high on at least some of them, will not be flagged as anomalous. Another instance of the same problem occurs when a point is only somewhat “owned” by a component with high mixture weight: even if the small component that “owns” most of it predicts it is very unlikely, that term has very little effect on the overall density. Therefore, our goal is to eliminate the mixture weights from the equation. Our idea is that if we restrict the focus to match the “point of view” of just one component, these anomalies will become more apparent. We do this by considering just the points that “belong” to one component, and by ranking them according to the PDF of this component. The hope is that given this restricted view, anomalies that do not fit the component’s own model will stand out. More precisely, let c be a component and i a data point. The EM algorithm maintains, for every c and i, an estimate $z_i^c$ of the degree of “ownership” that c exerts over i. For each component c we create a list of all the points for which $c = \arg\max_{c'} z_i^{c'}$, ranked by $z_i^c$.
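Putting the pieces together, interleave might be sketched as below. This is our own illustrative reconstruction: within each component we rank its members by increasing density under that component alone (per the "point of view" idea above), then merge the per-component lists round-robin, with an optional background component that nominates several points per cycle; the merging and oversampling details are described in the text that follows.

```python
import numpy as np

def interleave_hints(R, densities, m, background=None, oversample=20):
    """interleave: each component nominates its own worst-fitting points.

    R[i, c]: membership estimate z_i^c from EM.
    densities[i, c]: PDF value of point i under component c alone.
    Each point is assigned to the component that "owns" it (argmax of R);
    within a component, its members are ranked by increasing density under
    that component only, so mixture weights drop out of the ranking.  The
    per-component lists are merged round-robin; if `background` names a
    uniform background component, it gets `oversample` picks per cycle.
    """
    n, k = R.shape
    owner = R.argmax(axis=1)
    lists = []
    for c in range(k):
        members = np.where(owner == c)[0]
        ranked = members[np.argsort(densities[members, c])]  # least likely first
        lists.append(list(ranked))

    hints, used = [], set()
    while len(hints) < m and any(lists):
        for c in range(k):
            take = oversample if c == background else 1
            while take > 0 and lists[c]:
                i = lists[c].pop(0)
                if i not in used:
                    used.add(i)
                    hints.append(i)
                    take -= 1
            if len(hints) >= m:
                break
    return hints[:m]
```

Because each component contributes hints in turn, even a tiny component is guaranteed representation in every nomination cycle.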
Having constructed the sorted lists, we merge them with a generalization of the merge method described above. We cycle through the lists in some order; for each list, we pick the top item that has not already been placed in the output list, and place it at the next position in the output list. This strategy is intuitively appealing, although we have no further theoretical justification for it. We show results for this strategy for our example in Figure 3, and in the experimental section below. We see it meets the requirement of representation for all true components. Most of the points are along the major axes of the two elongated Gaussians, but two of the points are inside the small circle. Correct labels for even just these two points result in perfect classification in the next EM run. In our experiments, we found it beneficial to modify this method as follows. One of the components is a uniform-density “background”; the modification lets it nominate hints more often than any other component. In terms of list merging, we take one element from each of the lists of standard components, and then several elements from the list produced for the background component. All of the results shown were obtained using an oversampling ratio of 20. In other words, if there are N components (excluding the uniform one), then the first cycle of hint nomination will result in 20 + N hints, 20 of which come from the uniform component.

3 Experimental Results

To establish the results suggested by the intuition above, we conducted a series of experiments. The first one uses synthetic data. The data distribution is a mixture of components in 5, 10, 15 and 20 dimensions. The class-size distribution is a geometric series, with the largest class owning half of the data and each subsequent class being half the size of the previous one.
The components are multivariate Gaussians whose covariance structure can be modeled with dependency trees. Each Gaussian component has its covariance generated in the following way. Random attribute pairs are chosen and added to an undirected dependency-tree structure unless they close a cycle. Each edge describes a linear dependency between nodes, with the coefficients drawn uniformly at random and random noise added to each value. Each data set contains 10,000 points. There are ten tree classes and a uniform background component. The number of “background” points ranges from 50 to 200. Only the results for 15 dimensions and 100 noisy points are shown, as they are representative of the other experiments. In each round of learning, the learner queries the teacher with a list of 50 points for labeling, and has access to all the queries and replies submitted previously. This data-generation scheme is still very close to the one which our tested model assumes. Note, however, that we do not require different components to be easily identifiable.

Figure 4: Learning curves for simulated data drawn from a mixture of dependency trees (left), and for the SHUTTLE set (right). The Y axis shows the fraction of classes represented in queries sent to the teacher. For SHUTTLE and ABALONE below, mix-ambig-lowlik is omitted because it is so similar to lowlik.

Figure 5: Learning curves for the ABALONE (left) and KDD (right) sets.
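The per-class generation procedure can be sketched as follows. This is a hypothetical reconstruction under our own assumptions: the paper does not specify the coefficient range or the noise scale, so both are placeholders here.

```python
import numpy as np
from collections import deque

def tree_class_sample(n, d, rng, noise=0.1):
    """Sample n points whose covariance follows a random dependency tree.

    Random attribute pairs are linked unless they would close a cycle
    (union-find check); each edge imposes a linear dependency with a
    coefficient drawn uniformly at random, plus additive noise.
    """
    parent = list(range(d))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    adj = {j: [] for j in range(d)}
    n_edges = 0
    while n_edges < d - 1:
        a, b = rng.integers(0, d, size=2)
        ra, rb = find(a), find(b)
        if ra != rb:                        # keeps the structure a tree
            parent[ra] = rb
            coef = rng.uniform(-1, 1)       # assumed coefficient range
            adj[a].append((b, coef))
            adj[b].append((a, coef))
            n_edges += 1

    # Orient the tree away from attribute 0 and sample values in BFS order.
    X = np.zeros((n, d))
    X[:, 0] = rng.normal(size=n)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v, coef in adj[u]:
            if v not in seen:
                seen.add(v)
                X[:, v] = coef * X[:, u] + noise * rng.normal(size=n)
                queue.append(v)
    return X
```

A full synthetic set would draw one such component per class with geometrically shrinking sizes, plus uniformly distributed background points.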
The results of this experiment are shown in Figure 4. Also included are results for random, a baseline method which chooses hints at random. Our scoring function is driven by our application, and estimates the amount of effort the teacher has to expend before being presented with representatives of every single class. The assumption is that the teacher can generalize from a single example (or very few examples) to an entire class, and that the valuable information is concentrated in the first queried member of each class. More precisely, if there are n classes, then the score under this metric is 1/n times the number of classes represented in the query set. In the query set we include all items queried in preceding rounds, as we do for other applicable metrics. The best performer so far is interleave, taking five rounds or fewer to reveal all of the classes, including the very rare ones. Below we show it is superior on many of the real-life data sets. We can also see that ambig performs worse than random. This can be explained by the fact that ambig only chooses points that already have several existing components “competing” for them. Rarely do these points belong to a new, as-yet-undiscovered component.

Figure 6: Learning curves for the EDSGC (left) and SDSS (right) sets.

Table 1: Properties of the data sets used.

NAME     DIMS  RECORDS  CLASSES  SMALLEST CLASS  LARGEST CLASS  SOURCE
SHUTTLE    9     43500       7        0.01%          78.4%       [12]
ABALONE    7      4177      20        0.34%          16%         [13]
KDD       33     50000      19        0.002%         21.6%       [13]
EDSGC     26   1439526       7        0.002%         76%         [14]
SDSS      22    517371       3        0.05%          50.6%       [15]

We were concerned that the poor performance of lowlik was just a consequence of our choice of metric.
After all, it does not measure the number of noise points (i.e., points drawn from the uniform background component) found. These points are genuine anomalies, so it is possible that lowlik is being penalized unfairly for focusing on them. After examining the fraction of noise points found by each algorithm, we discovered that lowlik actually scores worse than interleave on this metric as well. The remaining experiments were run on various real data sets. Table 1 has a summary of their properties. They represent data and computational effort orders of magnitude larger than any active-learning result of which we are aware. Results for the SHUTTLE set appear in Figure 4. We see that it takes the interleave algorithm five rounds to spot all classes, whereas the next best is lowlik, with 11. The ABALONE set (Figure 5) is a very noisy set, in which random seems to be the best long-term strategy. Again, note how ambig performs very poorly. Due to resource limitations, results for KDD were obtained on a 50,000-record random subsample of the original training set (which is roughly ten times bigger). This set has an extremely skewed distribution of class sizes, and a large number of classes. In Figure 5 we see that lowlik performs uncharacteristically poorly. Another surprise is that the combination of lowlik and ambig outperforms them both. It also matches interleave in performance, and this is the only case where we have seen it do so. The EDSGC set, as distributed, is unlabeled; the class labels relate to the shape and size of the sky objects. We see in Figure 6 that for the purpose of class discovery, we can do a good job in a small number of rounds: here, a human would have had to label just 250 objects before being presented with a member of the smallest class, which comprises just 24 records out of a set of 1.4 million.
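The class-discovery score used throughout the experiments can be sketched as follows (our own minimal formulation of the metric described in the text: the query set accumulates across rounds, and the score is the fraction of classes it represents).

```python
def discovery_curve(rounds, n_classes):
    """Cumulative fraction of classes discovered after each round.

    rounds: list of per-round lists of teacher-revealed labels.
    Returns one score per round: len(classes seen so far) / n_classes.
    """
    seen, curve = set(), []
    for labels in rounds:
        seen.update(labels)
        curve.append(len(seen) / n_classes)
    return curve
```

The learning curves in Figures 4–6 plot exactly this quantity against the number of hints issued.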
4 Conclusion

We have shown that some of the popular methods for active learning perform poorly in realistic active-learning scenarios where classes are imbalanced. Working from the definition of a mixture model, we were able to propose methods which let each component “nominate” its favorite queries. These methods work well in the presence of noisy data and extremely rare classes and anomalies. Our simulations show that a human user only needs to label one or two hundred examples before being presented with very rare anomalies in huge data sets. In our experience, this kind of interaction takes just an hour or two of combined human and computer time [16]. We make no assumptions about the particular form a component takes. Consequently, we expect our results to apply to many different kinds of component models, including the case where components are not dependency trees, or are not even all from the same distribution. We are using lessons learned from our empirical comparison in an application for anomaly-hunting in the astrophysics domain. Our application presents multiple indicators to help a user spot anomalous data, as well as controls for labeling points and adding classes. The application will be described in a companion paper.

References

[1] Sugato Basu, Arindam Banerjee, and Raymond J. Mooney. Active semi-supervision for pairwise constrained clustering. Submitted for publication, February 2003.
[2] M. Seeger. Learning with labeled and unlabeled data. Technical report, Institute for Adaptive and Neural Computation, University of Edinburgh, 2000.
[3] Klaus Brinker. Incorporating diversity in active learning with support vector machines. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[4] David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. Active learning with statistical models. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 705–712. The MIT Press, 1995.
[5] Nirmalie Wiratunga, Susan Craw, and Stewart Massie. Index driven selective sampling for CBR. To appear in Proceedings of the Fifth International Conference on Case-Based Reasoning, Springer-Verlag, Trondheim, Norway, 23–26 June 2003.
[6] David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[7] Mark Plutowski and Halbert White. Selecting concise training sets from clean data. IEEE Transactions on Neural Networks, 4(2):305–318, March 1993.
[8] Shahshahani and Landgrebe. The effect of unlabeled examples in reducing the small sample size problem. IEEE Transactions on Geoscience and Remote Sensing, 32(5):1087–1095, 1994.
[9] Miller and Uyar. A mixture of experts classifier with learning based on both labeled and unlabelled data. In NIPS-9, 1997.
[10] H. S. Seung, Manfred Opper, and Haim Sompolinsky. Query by committee. In Computational Learning Theory, pages 287–294, 1992.
[11] David D. Lewis and Jason Catlett. Heterogeneous uncertainty sampling for supervised learning. In William W. Cohen and Haym Hirsh, editors, Proceedings of ICML-94, 11th International Conference on Machine Learning, pages 148–156, New Brunswick, US, 1994. Morgan Kaufmann Publishers, San Francisco, US.
[12] P. Brazdil and J. Gama. StatLog, 1991. http://www.liacc.up.pt/ML/statlog.
[13] C.L. Blake and C.J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/∼mlearn/MLRepository.html.
[14] R. C. Nichol, C. A. Collins, and S. L. Lumsden. The Edinburgh/Durham southern galaxy catalogue — IX. Submitted to the Astrophysical Journal, 2000.
[15] SDSS. The Sloan Digital Sky Survey, 1998. www.sdss.org.
[16] Dan Pelleg. Scalable and Practical Probability Density Estimators for Scientific Anomaly Detection. PhD thesis, Carnegie Mellon University, 2004. Tech Report CMU-CS-04-134.
[17] David MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.
[18] Fabio Gagliardi Cozman, Ira Cohen, and Marcelo Cesar Cirelo. Semi-supervised learning of mixture models and Bayesian networks. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[19] Yoram Baram, Ran El-Yaniv, and Kobi Luz. Online choice of active learning algorithms. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[20] Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems 18, 2004.
A feature selection algorithm based on the global minimization of a generalization error bound

Dori Peleg, Department of Electrical Engineering, Technion, Haifa, Israel. dorip@tx.technion.ac.il
Ron Meir, Department of Electrical Engineering, Technion, Haifa, Israel. rmeir@tx.technion.ac.il

Abstract

A novel linear feature selection algorithm is presented based on the global minimization of a data-dependent generalization error bound. Feature selection and scaling algorithms often lead to non-convex optimization problems, which in many previous approaches were addressed through gradient descent procedures that can only guarantee convergence to a local minimum. We propose an alternative approach, whereby the global solution of the non-convex optimization problem is derived via an equivalent optimization problem. Moreover, the convex optimization task is reduced to a conic quadratic programming problem for which efficient solvers are available. Highly competitive numerical results on both artificial and real-world data sets are reported.

1 Introduction

This paper presents a new approach to feature selection for linear classification where the goal is to learn a decision rule from a training set of pairs $S_n = \{(x^{(i)}, y^{(i)})\}_{i=1}^n$, where $x^{(i)} \in \mathbb{R}^d$ are input patterns and $y^{(i)} \in \{-1, 1\}$ are the corresponding labels. The goal of a classification algorithm is to find a separating function $f(\cdot)$, based on the training set, which will generalize well, i.e. classify new patterns with as few errors as possible. Feature selection schemes often utilize, either explicitly or implicitly, scaling variables $\{\sigma_j\}_{j=1}^d$ which multiply each feature. The aim of such schemes is to optimize an objective function over $\sigma \in \mathbb{R}^d$. Feature selection can be viewed as the case $\sigma_j \in \{0, 1\}$, $j = 1, \dots, d$, where feature $j$ is removed if $\sigma_j = 0$. The more general case of feature scaling, i.e. $\sigma_j \in \mathbb{R}_+$, is considered here. Clearly feature selection is a special case of feature scaling.
The overwhelming majority of feature selection algorithms in the literature separate the feature selection and classification tasks, while solving either a combinatorial or a non-convex optimization problem (e.g. [1], [2], [3], [4]). In either case there is no guarantee of efficiently locating a global optimum. This is particularly problematic in large-scale classification tasks, which may initially contain several thousand features. Moreover, the objective function of many feature selection algorithms is unrelated to the Generalization Error (GE). Even for global solutions of such algorithms there is no theoretical guarantee of proximity to the minimum of the GE. To overcome the above shortcomings we propose a feature selection algorithm based on the Global Minimization of an Error Bound (GMEB). This approach is based on simultaneously finding the optimal classifier and scaling factors of each feature by minimizing a GE bound. As in previous feature selection algorithms, a non-convex optimization problem must be solved. A novelty of this paper is the use of the concept of equivalent optimization problems, whereby a global optimum is guaranteed in polynomial time. The development of the GMEB algorithm begins with the design of a GE bound for feature selection. This is followed by formulating an optimization problem which minimizes this bound. Invariably, the resulting problem is non-convex. To avoid the drawbacks of solving non-convex optimization problems, an equivalent convex optimization problem is formulated, whereby the exact global optimum of the non-convex problem can be computed. Next, the dual problem is derived and formulated as a Conic Quadratic Programming (CQP) problem. This is advantageous because efficient CQP algorithms are available. Comparative numerical results on both artificial and real-world datasets are reported. The notation and definitions were adopted from [5]. All vectors are column vectors unless transposed.
Mathematical operators on scalars, such as the square root, are extended to vectors by operating componentwise. The notation $\mathbb{R}_+$ denotes the nonnegative real numbers. The notation $x \preceq y$ denotes componentwise inequality between the vectors $x$ and $y$. A vector with all components equal to one is denoted by $\mathbf{1}$. The domain of a function $f$ is denoted by $\operatorname{dom} f$. The set of points for which the objective and all the constraint functions are defined is called the domain of the optimization problem, $\mathcal{D}$. For lack of space, only proof sketches are presented; the complete proofs are deferred to the full paper.

2 The Generalization Error Bounds

We establish GE bounds which are used to motivate an effective algorithm for feature scaling. Consider a sample $S_n = \{(x^{(1)}, y^{(1)}), \dots, (x^{(n)}, y^{(n)})\}$, $x^{(i)} \in \mathcal{X} \subseteq \mathbb{R}^d$, $y^{(i)} \in \mathcal{Y}$, where the pairs $(x^{(i)}, y^{(i)})$ are generated independently from some distribution $P$. A set of nonnegative variables $\sigma = (\sigma_1, \dots, \sigma_d)^T$ is introduced to allow the additional freedom of feature scaling. The scaling variables $\sigma$ transform the linear classifiers from $f(x) = w^T x + b$ to $f(x) = w^T \Sigma x + b$, where $\Sigma = \operatorname{diag}(\sigma)$. It may seem at first glance that these classifiers are essentially the same, since $w$ can be redefined as $\Sigma w$. However, the role of $\sigma$ is to offer an extra degree of freedom to scale the features independently of $w$, in a way which can be exploited by an optimization algorithm. For a real-valued classifier $f$, the 0–1 loss is the probability of error, given by $P(yf(x) \le 0) = E\, I(yf(x) \le 0)$, where $I(\cdot)$ is the indicator function.

Definition 1 The margin cost function $\phi_\gamma : \mathbb{R} \to \mathbb{R}_+$ is defined as $\phi_\gamma(z) = 1 - z/\gamma$ if $z \le \gamma$, and zero otherwise (note that $I(yf(x) \le 0) \le \phi_\gamma(yf(x))$).

Consider a classifier $f$ for which the input features have been rescaled, namely $f(\Sigma x)$ is used instead of $f(x)$. Let $\mathcal{F}$ be some class of functions and let $\hat{E}_n$ denote the empirical mean.
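As a concrete check, the margin cost of Definition 1 can be written in a couple of lines (a sketch; the function name is ours):

```python
import numpy as np

def margin_cost(z, gamma=1.0):
    """phi_gamma(z) = 1 - z/gamma for z <= gamma, else 0 (Definition 1).
    Upper-bounds the 0-1 loss: I(z <= 0) <= phi_gamma(z) for all z."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= gamma, 1.0 - z / gamma, 0.0)
```

Note that the cost exceeds 1 for negative margins, which is what makes it a valid upper bound on the indicator of a misclassification.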
Using standard GE bounds, one can establish that for any choice of $\sigma$, with probability at least $1 - \delta$, for any $f \in \mathcal{F}$,
$$P(yf(\Sigma x) \le 0) \le \hat{E}_n \phi_\gamma(yf(\Sigma x)) + \Omega(f, \delta, \sigma), \qquad (1)$$
for some appropriate complexity measure $\Omega$ depending on the bounding technique. Unfortunately, (1) cannot be used directly when attempting to select optimal values of the variables $\sigma$, because the bound is not uniform in $\sigma$. In particular, we need a result which holds with probability $1 - \delta$ for every choice of $\sigma$.

Definition 2 The indices of training patterns with labels $-1$ and $+1$ are denoted by $I_-$ and $I_+$ respectively, and the cardinalities of these sets by $n_-$ and $n_+$. The empirical means of the second-order moment of the $j$th feature over the training patterns with indices in $I_-$ and $I_+$ are
$$v_j^- = \frac{1}{n_-} \sum_{i \in I_-} \big(x_j^{(i)}\big)^2, \qquad v_j^+ = \frac{1}{n_+} \sum_{i \in I_+} \big(x_j^{(i)}\big)^2$$
respectively.

Theorem 3 Fix $B, r, \gamma > 0$, and suppose that $\{(x^{(i)}, y^{(i)})\}_{i=1}^n$ are chosen independently at random according to some probability distribution $P$ on $\mathcal{X} \times \{\pm 1\}$, where $\|x\| \le r$ for $x \in \mathcal{X}$. Define the class of functions
$$\mathcal{F} = \big\{ f : f(x) = w^T \Sigma x + b,\ \|w\| \le B,\ |b| \le r,\ \sigma \succeq 0 \big\}.$$
Let $\sigma_0$ be an arbitrary positive number, and set $\bar\sigma_j = 2\max(\sigma_j, \sigma_0)$. Then with probability at least $1 - \delta$, for every function $f \in \mathcal{F}$,
$$P(yf(x) \le 0) \le \hat{E}_n \phi_\gamma(yf(x)) + \frac{2B}{\gamma} \left[ \frac{\sqrt{n_+}}{n} \sqrt{\sum_{j=1}^d v_j^+ \bar\sigma_j^2} + \frac{\sqrt{n_-}}{n} \sqrt{\sum_{j=1}^d v_j^- \bar\sigma_j^2} \right] + \Lambda, \qquad (2)$$
where $K(\sigma) = (B\|\bar\sigma\| + 1) r$, $\Lambda = \Lambda(\sigma, \gamma, \delta)/\sqrt{n}$, and
$$\Lambda(\sigma, \gamma, \delta) = \left( \frac{2r}{\gamma} + K(\sigma) \right) \sqrt{2 \sum_{j=1}^d \ln \log_2 \frac{\bar\sigma_j}{\sigma_0}} + K(\sigma) \left( \frac{2}{\gamma} + 1 \right) \sqrt{2 \ln \frac{2}{\delta}}.$$

Proof sketch We begin by assuming a fixed upper bound on the values of $\sigma_j$, say $\sigma_j \le s_j$, $j = 1, 2, \dots, d$. This allows us to use the methods developed in [6] in order to establish upper bounds on the Rademacher complexity of the class $\mathcal{F}$, where $\sigma_j \le s_j$ for all $j$. Finally, a simple variant of the union bound (the so-called multiple testing lemma) is used in order to obtain a bound which is uniform with respect to $\sigma$ (see the proof technique of Theorem 10 in [6]).

In principle, we would like to minimize the r.h.s.
of (2) with respect to the variables $w, \sigma, b$. However, in this work the focus is only on the data-dependent terms in (2), which include the empirical error term and the weighted norms of $\sigma$. Note that all other terms of (2) are of the same order of magnitude (as a function of $n$), but do not depend explicitly on the data. It should be remarked that the extra terms appearing in the bound arise because of the assumed unboundedness of $\sigma$. Assuming $\sigma$ to be bounded, e.g. $\sigma \preceq s$, as is the case in most other bounds in the literature, one may replace $\sigma$ by $s$ in all terms except the first two, thus removing the explicit dependence on $\sigma$. The data-dependent terms of the GE bound (2) are the basis of the objective function
$$\frac{1}{n\gamma} \sum_{i=1}^n \phi_\gamma\big(y^{(i)} f(x^{(i)})\big) + \frac{C_+ \sqrt{n_+}}{n\gamma} \sqrt{\sum_{j=1}^d v_j^+ \sigma_j^2} + \frac{C_- \sqrt{n_-}}{n\gamma} \sqrt{\sum_{j=1}^d v_j^- \sigma_j^2}, \qquad (3)$$
where $C_+ = C_- = 4$ and the variables are subject to $w^T w \le 1$, $\sigma \succeq 0$. The transition was performed by setting $B = 1$ and replacing $\bar\sigma$ by $2\sigma$ (assuming that $\sigma > \sigma_0$). Since only the sign of $f$ determines the estimated labels, $f$ can be multiplied by any positive factor and produce identical results. The constraint on the norm of $w$ induces a normalization on the classifier $f(x) = w^T x + b$, without which the classifier is not unique. However, by introducing the scale variables $\sigma$, the classifier was transformed to $f(x) = w^T \Sigma x + b$. Hence, despite the constraint on $w$, the classifier is again not unique. If the variable $\gamma$ in (3) is set to an arbitrary positive constant, then the solution is unique. This is true because $\gamma$ appears in (3) only through the expressions $b/\gamma, \sigma_1/\gamma, \dots, \sigma_d/\gamma$. We chose $\gamma = 1$. The objective function is comprised of two elements: (1) the mean of the penalty on the training errors, and (2) two weighted $\ell_2$ norms of the scale variables $\sigma$. The second term acts as the feature selection element. Note that the values of $C_+, C_-$ following from Theorem 3 depend specifically on the bounding technique used in the proof.
To allow more generality and flexibility in practical applications, we propose to turn the norm terms of (3) into inequality constraints, bounded by hyperparameters $R_+$ and $R_-$ respectively. The interpretation of these hyperparameters is essentially the number of informative features. We propose that $R_+, R_-$ be chosen via a Cross-Validation (CV) scheme. These hyperparameters enable fine-tuning a general classifier to a specific classification task, as is done in many other classification algorithms such as the SVM. Note that the present bound is sensitive to a shift of the features. Therefore, as a preprocessing step, the features of the training patterns should be set to zero mean and the features of the test set shifted accordingly.

3 The primal non-convex optimization problem

The problem of minimizing (3) with $\gamma = 1$ can then be expressed as
$$\begin{array}{ll}
\text{minimize} & \mathbf{1}^T \xi \\
\text{subject to} & w^T w \le 1 \\
& y^{(i)} \big( \textstyle\sum_{j=1}^d x_j^{(i)} w_j \sigma_j + b \big) \ge 1 - \xi_i, \quad i = 1, \dots, n \\
& R_+ \ge \sum_{j=1}^d v_j^+ \sigma_j^2 \\
& R_- \ge \sum_{j=1}^d v_j^- \sigma_j^2 \\
& \xi, \sigma \succeq 0,
\end{array} \qquad (4)$$
with variables $w, \sigma \in \mathbb{R}^d$, $\xi \in \mathbb{R}^n$, $b \in \mathbb{R}$. Note that the constant factor $\frac{1}{n}$ was discarded.

Remark 4 Consider a solution of problem (4) in which $\sigma_j^\star = 0$ for some feature $j$. Only the constraint $w^T w \le 1$ affects the value of $w_j^\star$. A unique solution is established by setting $\sigma_j^\star = 0 \Rightarrow w_j^\star = 0$. If the original solution $w^\star$ satisfies the constraint $w^T w \le 1$, then the amended solution will also satisfy the constraint and will not affect the value of the objective function.

The functions $w_j \sigma_j$ in the second set of inequality constraints are neither convex nor concave (in fact they are quasiconcave [5]). To make matters worse, the functions $w_j \sigma_j$ are multiplied by constants $-y^{(i)} x_j^{(i)}$ which can be either positive or negative. Consequently, problem (4) is not a convex optimization problem. The objective of Section 3.1 is to find the global minimum of (4) in polynomial time despite its non-convexity.
3.1 Convexification

In this paper the informal definition of equivalent optimization problems is adopted from [5, pp. 130–135]: two optimization problems are called equivalent if from a solution of one, a solution of the other is found, and vice versa. Instead of detailing a complicated formal definition of general equivalence, the specific equivalence relationships utilized in this paper are either formally introduced or cited from [5]. The functions $w_j \sigma_j$ in problem (4) are not convex, and the signs of the multiplying constants $-y^{(i)} x_j^{(i)}$ are data dependent. The only functions that remain convex irrespective of the sign of the constants which multiply them are linear functions. Therefore the functions $w_j \sigma_j$ must be transformed into linear functions. However, such a transformation must also maintain the convexity of the objective function and the remaining constraints. For this purpose the change-of-variables equivalence relationship, described in Appendix A, is utilized. The transformation $\phi : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d \times \mathbb{R}^d$ is applied to the variables $w, \sigma$:
$$\sigma_j = +\sqrt{\tilde\sigma_j}, \qquad w_j = \frac{\tilde w_j}{\sqrt{\tilde\sigma_j}}, \qquad j = 1, \dots, d, \qquad (5)$$
where $\operatorname{dom} \phi = \{(\tilde\sigma, \tilde w) \mid \tilde\sigma \succeq 0\}$. If $\tilde\sigma_j = 0$ then $\sigma_j = w_j = 0$ regardless of the value of $\tilde w_j$, in accordance with Remark 4. Transformation (5) is clearly one-to-one and $\phi(\operatorname{dom} \phi) \supseteq \mathcal{D}$.

Lemma 5 The problem
$$\begin{array}{ll}
\text{minimize} & \mathbf{1}^T \xi \\
\text{subject to} & y^{(i)} (\tilde w^T x^{(i)} + b) \ge 1 - \xi_i, \quad i = 1, \dots, n \\
& \sum_{j=1}^d \tilde w_j^2 / \tilde\sigma_j \le 1 \\
& R_+ \ge (v^+)^T \tilde\sigma \\
& R_- \ge (v^-)^T \tilde\sigma \\
& \xi, \tilde\sigma \succeq 0
\end{array} \qquad (6)$$
is convex and equivalent to the primal non-convex problem (4) under transformation (5).

Note that since $\tilde w_j = w_j \sigma_j$, the new classifier is $f(x) = \tilde w^T x + b$. Therefore there is no need to apply transformation (5) to obtain the desired classifier. One can also use Schur's complement [5] to transform the non-linear constraint into a sparse linear matrix inequality constraint
$$\begin{bmatrix} \tilde\Sigma & \tilde w \\ \tilde w^T & 1 \end{bmatrix} \succeq 0, \qquad \tilde\Sigma = \operatorname{diag}(\tilde\sigma).$$
Thus problem (6) can be cast as a Semi-Definite Programming (SDP) problem.
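To see why the transformed problem is convex, it helps to write out what substitution (5) does to the troublesome terms of (4) (a one-line verification, using only the definitions above):

```latex
w_j \sigma_j \;=\; \frac{\tilde w_j}{\sqrt{\tilde\sigma_j}}\,\sqrt{\tilde\sigma_j} \;=\; \tilde w_j,
\qquad
w^T w \;=\; \sum_{j=1}^d \frac{\tilde w_j^2}{\tilde\sigma_j},
\qquad
\sigma_j^2 \;=\; \tilde\sigma_j .
```

Hence the margin constraints become linear in $(\tilde w, b)$, the norm constraint becomes a sum of quadratic-over-linear functions, each jointly convex in $(\tilde w_j, \tilde\sigma_j)$, and the hyperparameter constraints become linear in $\tilde\sigma$, which is exactly the content of Lemma 5.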
The primal problem therefore consists of $n+2d+1$ variables, $2n+d+2$ linear inequality constraints and a linear matrix inequality of dimension $(d+1)\times(d+1)$. Although the primal problem (6) is convex, its size depends heavily on the number of features $d$, which is typically the bottleneck for feature selection datasets. To alleviate this dependency the dual problem is formulated.

Theorem 6 (Dual problem) The dual optimization problem associated with problem (6) is

$$\begin{array}{ll}
\text{maximize} & \mathbf{1}^T\mu - \mu_1 - R_+\mu_+ - R_-\mu_-\\
\text{subject to} & \big(\textstyle\sum_{i=1}^n \mu_i y^{(i)} x^{(i)}_j,\; 2\mu_1,\; \mu_+ v^+_j + \mu_- v^-_j\big) \in K_r, \quad j=1,\dots,d\\
& \mu^T y = 0\\
& 0 \preceq \mu \preceq \mathbf{1}\\
& \mu_+, \mu_- \ge 0,
\end{array}\qquad(7)$$

where $K_r$ is the Rotated Quadratic Cone (RQC) $K_r = \{(x,y,z)\in\mathbb{R}^n\times\mathbb{R}\times\mathbb{R}\,|\,x^Tx \le 2yz,\; y\ge 0,\; z\ge 0\}$, and with the variables $\mu\in\mathbb{R}^n$ and $\mu_1, \mu_+, \mu_- \in \mathbb{R}$.

Theorem 7 (Strong duality) Strong duality holds between problems (6) and (7).

The dual problem (7) is a CQP problem. The number of variables is $n+3$, and there are $2n+2$ linear inequality constraints, a single linear equality constraint and $d$ RQC inequality constraints. Due to the reduced computational complexity we used the dual formulation in all the experiments.

4 Experiments

Several algorithms were comparatively evaluated on a number of artificial and real-world two-class problem datasets. The GMEB algorithm was compared to the linear SVM (standard SVM with linear kernel) and the l1 SVM classifier [7].

4.1 Experimental Methodology

The algorithms are compared by two criteria: the number of selected features and the error rates. The weight assigned by a linear classifier to a feature $j$ determines whether it is 'selected' or 'rejected'. This weight must fulfil at least one of the following two requirements:

1. Absolute measure: $|w_j| \ge \epsilon$.
2. Relative measure: $|w_j| / \max_j\{|w_j|\} \ge \epsilon$.

In this paper $\epsilon = 0.01$ was used. Ideally, $\epsilon$ should be set adaptively. Note that for the GMEB algorithm $\tilde w$ should be used. The definition of the error rate is intrinsically entwined with the protocol for determining the hyperparameters.
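The selection rule above (a feature is kept if it passes the absolute or the relative threshold) can be sketched as follows; the function name is hypothetical, not from the paper's code.

```python
import numpy as np

# Sketch of the feature 'selection' rule of Section 4.1: keep feature j if
# |w_j| >= eps (absolute measure) or |w_j| / max_j |w_j| >= eps (relative
# measure), with eps = 0.01 as in the paper.
def selected_features(w, eps=0.01):
    w = np.abs(np.asarray(w, dtype=float))
    rel = w / w.max() if w.max() > 0 else w
    return np.flatnonzero((w >= eps) | (rel >= eps))

w = [0.5, 0.004, 0.02, 0.0]
print(selected_features(w))   # features 0 and 2 pass; 1 and 3 are rejected
```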
Given an a-priori partitioning of the dataset into training and test sets, the following protocol for determining the value of $R_+, R_-$ and defining the error rate is suggested:

1. Define a set $R$ of values of the hyperparameters $R_+, R_-$ for all datasets. The set $R$ consists of a predetermined number of values. For each algorithm the cardinality $|R| = 49$ was used.
2. Calculate the N-fold CV error for each value of $R_+, R_-$ from the set $R$ on the training set. Five-fold CV was used throughout all the datasets.
3. Use the classifier with the value of $R_+, R_-$ which produced the lowest CV error to classify the test set. This is the reported error rate.

If the dataset is not partitioned a-priori into a training and test set, it is randomly divided into $n_p$ contiguous training and 'test' sets. Each training set contains $n\,(n_p-1)/n_p$ patterns and the corresponding test set consists of $n/n_p$ patterns. Once the dataset is thus partitioned, the above steps 1-3 can be implemented. The error rate and the number of selected features are then defined as the averages over the $n_p$ problems. The value $n_p = 10$ was used for all datasets where an a-priori partitioning was not available. The hyperparameter set $R$ used for the GMEB algorithm consisted of $7\times 7$ linearly spaced values between 1 and 10. For the SVM algorithms the set $R$ consisted of the values $\Lambda/(1-\Lambda)$ where $\Lambda = \{0.02, 0.04, \dots, 0.98\}$, i.e. 49 linearly spaced values between 0.02 and 0.98.

4.2 Data sets

Tests were performed on the 'Linear problem' synthetic datasets as described in [2], and on eight real-world problems. The number of features, the number of patterns and the partitioning into train and test sets of the real-world datasets are detailed in Table 2. The datasets were taken from the UCI repository unless stated otherwise.
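The grid-plus-CV protocol above can be sketched as follows; all names are hypothetical (the paper's actual solver is abstracted away behind a `train_fn` callback), and the toy usage substitutes a one-dimensional threshold classifier for the real one.

```python
import numpy as np

# Sketch of the hyperparameter protocol of Section 4.1: for each candidate
# value R in a grid, compute the N-fold CV error on the training set, then
# keep the R with the lowest CV error.
def cv_error(train_fn, X, y, R, n_folds=5):
    n = len(y)
    folds = np.array_split(np.arange(n), n_folds)
    errs = []
    for fold in folds:
        mask = np.ones(n, dtype=bool)
        mask[fold] = False
        predict = train_fn(X[mask], y[mask], R)   # fit on the other folds
        errs.append(np.mean(predict(X[fold]) != y[fold]))
    return float(np.mean(errs))

def select_hyperparameter(train_fn, X, y, grid, n_folds=5):
    scores = [cv_error(train_fn, X, y, R, n_folds) for R in grid]
    return grid[int(np.argmin(scores))]

# Toy usage: a threshold 'classifier' whose hyperparameter is the threshold.
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
y = (X[:, 0] > 0.2).astype(int)
train = lambda Xtr, ytr, R: (lambda Xt: (Xt[:, 0] > R).astype(int))
best = select_hyperparameter(train, X, y, [-0.5, 0.2, 0.8])
```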
Dataset (1) is the Wisconsin Diagnostic Breast Cancer ('WDBC') dataset, (2) the 'Multiple Features' dataset, which was first introduced by [8], (3) the 'Internet Advertisements' dataset, which was separated into a training and test set randomly, (4) the 'Colon' dataset, taken from [2], (5) the 'BUPA' dataset, (6) the 'Pima Indians Diabetes' dataset, (7) the 'Cleveland heart disease' dataset, and (8) the 'Ionosphere' dataset.

Table 1: Mean and standard deviation of the mean of the test error rate percentage on the synthetic datasets given n training patterns. The number of selected features is in brackets.

n  | SVM                      | l1 SVM                   | GMEB
10 | 46.2 ± 1.9 (197.1 ± 2.1) | 49.6 ± 1.9 (77.7 ± 83.8) | 33.8 ± 14.2 (3.7 ± 2.1)
20 | 44.9 ± 2.1 (196.8 ± 1.9) | 38.5 ± 12.7 (10.7 ± 6.1) | 13.9 ± 7.2 (4.8 ± 2.7)
30 | 43.6 ± 1.7 (196.7 ± 2.8) | 27.4 ± 12.4 (14.5 ± 8.7) | 7.1 ± 5.6 (5.1 ± 2.3)
40 | 41.8 ± 1.9 (197.2 ± 1.8) | 19.2 ± 6.9 (16.2 ± 11.1) | 5.0 ± 3.5 (5.5 ± 2.1)
50 | 41.9 ± 1.8 (196.6 ± 2.6) | 16.0 ± 5.3 (18.4 ± 11.3) | 3.1 ± 2.7 (5.1 ± 1.8)

Table 2: The real-world datasets and the performance of the algorithms. The set R for the linear SVM algorithm, and for datasets 1, 5 and 6, had to be set to Λ to allow convergence.

Feat. | Patt.    | Linear SVM            | l1 SVM              | GMEB
30    | 569      | 5.3±0.8 (27.3±0.3)    | 4.9±1.1 (16.4±1.3)  | 4.2±0.9 (6.0±0.3)
649   | 200/1800 | 0.3 (616)             | 3.5 (15)            | 0.2 (32)
1558  | 200/3080 | 5.3 (322)             | 4.7 (12)            | 5.5 (98)
2000  | 62       | 13.6±5.9 (1941.8±1.9) | 10.7±4.4 (23.3±1.5) | 10.7±4.4 (59.1±25.0)
6     | 345      | 33.1±3.5 (6.0±0.0)    | 33.6±3.6 (5.9±0.1)  | 34.2±4.4 (5.4±0.5)
8     | 768      | 22.8±1.5 (5.8±0.2)    | 22.9±1.4 (5.8±0.2)  | 22.5±1.8 (4.8±0.2)
13    | 297      | 17.5±1.9 (11.6±0.2)   | 16.8±1.6 (10.7±0.3) | 15.5±2.0 (9.1±0.3)
34    | 351      | 11.7±2.6 (32.8±0.2)   | 12.0±2.3 (27.9±1.6) | 10.0±2.3 (12.1±1.7)

4.3 Experimental results

Table 1 provides a comparison of the GMEB algorithm with the SVM algorithms on the synthetic datasets. The Bayes error is 0.4%. For further numerical comparison see [3]. Note that the number of features selected by the l1 SVM and the GMEB algorithms increases with the sample size.
A possible explanation for this observation is that with only a few training patterns a small training error can be achieved by many subsets containing a small number of features, i.e. by a sparse solution. The particular subset selected is then essentially random, leading to a large test error, possibly due to overfitting. For all the synthetic datasets the GMEB algorithm clearly attained the lowest error rates. On the real-world datasets it produced the lowest error rates and the smallest number of features for the majority of the datasets investigated.

4.4 Discussion

The GMEB algorithm performs comparatively well against the linear and l1 SVM algorithms, with regard to both the test error and the number of selected features. A possible explanation is that the l1 SVM algorithm performs both classification and feature selection with the same variable $w$. In contrast, the GMEB algorithm performs feature selection and classification simultaneously, using the variables $\sigma$ and $w$ respectively. The use of two variables also allows the GMEB algorithm to reduce the weight of a feature $j$ with both $w_j$ and $\sigma_j$, while the l1 SVM uses only $w_j$. Perhaps this property of GMEB explains why it produces comparable (and at times better) results than the SVM algorithms, both in classification problems where feature selection is required and in problems where it is not.

5 Summary and future work

This paper presented a feature selection algorithm motivated by minimizing a generalization error (GE) bound. The global optimum of the objective function, originally posed as a non-convex optimization problem, is found by reducing the task to a convex problem via the equivalent optimization problems technique. The dual problem formulation depends more weakly on the number of features $d$, and this enabled an extension of the GMEB algorithm to large-scale classification problems. The GMEB classifier is a linear classifier.
Linear classifiers are the most important type of classifier in a feature selection framework because feature selection is highly susceptible to overfitting. We believe that the GMEB algorithm is just the first of a series of algorithms which may globally minimize increasingly tighter bounds on the generalization error.

Acknowledgment R.M. is partially supported by the fund for promotion of research at the Technion and by the Ollendorff foundation of the Electrical Engineering department at the Technion.

A Change of variables

Consider the optimization problem

$$\text{minimize } f_0(x) \quad \text{subject to } f_i(x) \le 0, \quad i = 1,\dots,m.\qquad(8)$$

Suppose $\phi: \mathbb{R}^n \to \mathbb{R}^n$ is one-to-one, with image covering the problem domain $D$, i.e., $\phi(\operatorname{dom}\phi) \supseteq D$. We define functions $\tilde f_i$ as $\tilde f_i(z) = f_i(\phi(z))$, $i = 0,\dots,m$. Now consider the problem

$$\text{minimize } \tilde f_0(z) \quad \text{subject to } \tilde f_i(z) \le 0, \quad i = 1,\dots,m,\qquad(9)$$

with variable $z$. Problems (8) and (9) are said to be related by the change of variable $x = \phi(z)$ and are equivalent: if $x$ solves problem (8), then $z = \phi^{-1}(x)$ solves problem (9); if $z$ solves problem (9), then $x = \phi(z)$ solves problem (8).

References

[1] Y. Grandvalet and S. Canu. Adaptive scaling for feature selection in SVMs. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 553-560. MIT Press, 2003.
[2] Jason Weston, Sayan Mukherjee, Olivier Chapelle, Massimiliano Pontil, Tomaso Poggio, and Vladimir Vapnik. Feature selection for SVMs. In Advances in Neural Information Processing Systems 13, pages 668-674, 2000.
[3] Alain Rakotomamonjy. Variable selection using SVM based criteria. The Journal of Machine Learning Research, 3:1357-1370, 2003.
[4] Jason Weston, André Elisseeff, Bernhard Schölkopf, and Mike Tipping. Use of the zero norm with linear models and kernel methods. The Journal of Machine Learning Research, 3:1439-1461, March 2003.
[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
http://www.stanford.edu/~boyd/cvxbook.html.
[6] R. Meir and T. Zhang. Generalization bounds for Bayesian mixture algorithms. Journal of Machine Learning Research, 4:839-860, 2003.
[7] Glenn Fung and O. L. Mangasarian. Data selection for support vector machines classifiers. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 64-70, 2000.
[8] Simon Perkins, Kevin Lacker, and James Theiler. Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3:1333-1356, March 2003.
Semi-supervised Learning with Penalized Probabilistic Clustering

Zhengdong Lu and Todd K. Leen
Department of Computer Science and Engineering, OGI School of Science and Engineering, OHSU, Beaverton, OR 97006
{zhengdon,tleen}@cse.ogi.edu

Abstract

While clustering is usually an unsupervised operation, there are circumstances in which we believe (with varying degrees of certainty) that items A and B should be assigned to the same cluster, while items A and C should not. We would like such pairwise relations to influence cluster assignments of out-of-sample data in a manner consistent with the prior knowledge expressed in the training set. Our starting point is probabilistic clustering based on Gaussian mixture models (GMM) of the data distribution. We express clustering preferences in the prior distribution over assignments of data points to clusters. This prior penalizes cluster assignments according to the degree with which they violate the preferences. We fit the model parameters with EM. Experiments on a variety of data sets show that PPC can consistently improve clustering results.

1 Introduction

While clustering is usually executed completely unsupervised, there are circumstances in which we have a prior belief that pairs of samples should (or should not) be assigned to the same cluster. Such pairwise relations may arise from a perceived similarity (or dissimilarity) between samples, or from a desire that the algorithmically generated clusters match the geometric cluster structure perceived by the experimenter in the original data. Continuity, which suggests that neighboring pairs of samples in a time series or in an image are likely to belong to the same class of object, is also a source of clustering preferences. We would like these preferences to be incorporated into the cluster structure so that the assignment of out-of-sample data to clusters captures the concept(s) that give rise to the preferences expressed in the training data.
Some work [1, 2, 3] has been done on adapting traditional clustering methods, such as K-means, to incorporate pairwise relations. These models are based on hard clustering, and the clustering preferences are expressed as hard pairwise constraints that must be satisfied. While this work was in progress, we became aware of the algorithm of Shental et al. [4], who propose a Gaussian mixture model (GMM) for clustering that incorporates hard pairwise constraints. In this paper, we propose a soft clustering algorithm based on GMM that expresses clustering preferences (in the form of pairwise relations) in the prior probability on assignments of data points to clusters. This framework naturally accommodates both hard constraints and soft preferences, with the preferences expressed as a Bayesian probability that pairs of points should (or should not) be assigned to the same cluster. We call the algorithm Penalized Probabilistic Clustering (PPC). Experiments on several datasets demonstrate that PPC can consistently improve the clustering result by incorporating reliable prior knowledge.

2 Prior Knowledge for Cluster Assignments

PPC begins with a standard GMM

$$P(x|\Theta) = \sum_{\alpha=1}^{M} \pi_\alpha\, P(x|\alpha, \theta_\alpha),$$

where $\Theta = (\pi_1, \dots, \pi_M, \theta_1, \dots, \theta_M)$. We augment the dataset $X = \{x_i\},\ i = 1,\dots,N$ with (latent) cluster assignments $Z = \{z(x_i)\},\ i = 1,\dots,N$ to form the familiar complete data $(X, Z)$. The complete data likelihood is

$$P(X, Z|\Theta) = P(X|Z, \Theta)\, P(Z|\Theta).\qquad(1)$$

2.1 Prior distribution in latent space

We incorporate our clustering preferences by manipulating the prior probability $P(Z|\Theta)$. In the standard Gaussian mixture model, the prior distribution is trivial: $P(Z|\Theta) = \prod_i \pi_{z_i}$. We incorporate prior knowledge (our clustering preferences) through a weighting function $g(Z)$ that has large values when the assignment $Z$ of data points to clusters conforms to our preferences, and low values when $Z$ conflicts with our preferences.
Hence we write

$$P(Z|\Theta, G) = \frac{\prod_i \pi_{z_i}\, g(Z)}{\sum_{Z'} \prod_j \pi_{z'_j}\, g(Z')} \equiv \frac{1}{K} \prod_i \pi_{z_i}\, g(Z),\qquad(2)$$

where the sum is over all possible assignments of the data to clusters. The likelihood of the data, given a specific cluster assignment, is independent of the cluster assignment preferences, and so the complete data likelihood is

$$P(X, Z|\Theta, G) = P(X|Z, \Theta)\, \frac{1}{K} \prod_i \pi_{z_i}\, g(Z) = \frac{1}{K}\, P(X, Z|\Theta)\, g(Z),\qquad(3)$$

where $P(X, Z|\Theta)$ is the complete data likelihood for a standard GMM. The data likelihood is the sum of the complete data likelihood over all possible $Z$, that is, $L(X|\Theta) = P(X|\Theta, G) = \sum_Z P(X, Z|\Theta, G)$, which can be maximized with the EM algorithm. Once the model parameters are fit, we do soft clustering of new data according to the posterior probabilities $p(\alpha|x, \Theta)$. (Note that cluster assignment preferences are not expressed for the new data, only for the training data.)

2.2 Pairwise relations

Pairwise relations provide a special case of the framework discussed above. We specify two types of pairwise relations:

• link: two samples should be assigned to the same cluster;
• do-not-link: two samples should be assigned to different clusters.

The weighting factor given to the cluster assignment configuration $Z$ is simple:

$$g(Z) = \prod_{i,j} \exp\big(W^p_{ij}\, \delta(z_i, z_j)\big),$$

where $\delta$ is the Kronecker $\delta$-function and $W^p_{ij}$ is the weight associated with the sample pair $(x_i, x_j)$. It satisfies $W^p_{ij} \in [-\infty, \infty]$ and $W^p_{ij} = W^p_{ji}$. The weight $W^p_{ij}$ reflects our preference and confidence in assigning $x_i$ and $x_j$ to one cluster. We use a positive $W^p_{ij}$ when we prefer to assign $x_i$ and $x_j$ to one cluster (link), and a negative $W^p_{ij}$ when we prefer to assign them to different clusters (do-not-link). The value $|W^p_{ij}|$ reflects how certain we are of the preference. If $W^p_{ij} = 0$, we have no prior knowledge on the assignment relevancy of $x_i$ and $x_j$.
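The weighting factor $g(Z)$ for pairwise relations can be sketched directly from its definition; the example below uses toy numbers and a hypothetical function name, and checks that an assignment respecting a link and a do-not-link receives more prior weight than one violating them.

```python
import numpy as np

# A minimal sketch of the weighting factor
#   g(Z) = prod_{i,j} exp(W_ij * delta(z_i, z_j)):
# assignments that respect the pairwise preferences encoded in W receive a
# larger (unnormalized) prior weight.
def weighting_factor(z, W):
    z = np.asarray(z)
    same = (z[:, None] == z[None, :])    # delta(z_i, z_j) for all pairs
    return float(np.exp(np.sum(W * same)))

W = np.zeros((3, 3))
W[0, 1] = W[1, 0] = 1.0    # prefer z_0 == z_1 (link)
W[0, 2] = W[2, 0] = -1.0   # prefer z_0 != z_2 (do-not-link)

# Respecting both preferences beats violating both.
assert weighting_factor([0, 0, 1], W) > weighting_factor([0, 1, 0], W)
```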
In the extreme cases where $|W^p_{ij}| \to \infty$, the assignments $Z$ violating the pairwise relations about $x_i$ and $x_j$ have zero prior probability, since for those assignments

$$P(Z|\Theta, G) = \frac{\prod_n \pi_{z_n} \prod_{i,j} \exp\big(W^p_{ij}\, \delta(z_i, z_j)\big)}{\sum_{Z'} \prod_n \pi_{z'_n} \prod_{i,j} \exp\big(W^p_{ij}\, \delta(z'_i, z'_j)\big)} \to 0.$$

The relations then become hard constraints, while relations with $|W^p_{ij}| < \infty$ are called soft preferences. In the remainder of this paper, we will use $W^p$ to denote the prior knowledge on pairwise relations, that is,

$$P(X, Z|\Theta, W^p) = \frac{1}{K}\, P(X, Z|\Theta) \prod_{i,j} \exp\big(W^p_{ij}\, \delta(z_i, z_j)\big).\qquad(4)$$

2.3 Model fitting

We use the EM algorithm [5] to fit the model parameters $\Theta$:

$$\Theta^* = \arg\max_\Theta L(X|\Theta, G).$$

The expectation step (E-step) and maximization step (M-step) are

E-step: $Q(\Theta, \Theta^{(t-1)}) = E_{Z|X}\big(\log P(X, Z|\Theta, G)\,\big|\,X, \Theta^{(t-1)}, G\big)$
M-step: $\Theta^{(t)} = \arg\max_\Theta Q(\Theta, \Theta^{(t-1)})$

In the M-step, the optimal mean and covariance matrix of each component are:

$$\mu_k = \frac{\sum_{j=1}^N x_j\, P(k|x_j, \Theta^{(t-1)}, G)}{\sum_{j=1}^N P(k|x_j, \Theta^{(t-1)}, G)}, \qquad \Sigma_k = \frac{\sum_{j=1}^N P(k|x_j, \Theta^{(t-1)}, G)\,(x_j - \mu_k)(x_j - \mu_k)^T}{\sum_{j=1}^N P(k|x_j, \Theta^{(t-1)}, G)}.$$

However, the update of the prior probability of each component is more difficult than for the standard GMM; we need to find

$$\pi \equiv \{\pi_1, \dots, \pi_M\} = \arg\max_\pi \sum_{l=1}^M \sum_{i=1}^N P(l|x_i, \Theta^{(t-1)}, G)\, \log \pi_l - \log K(\pi).$$

In this paper, we use a numerical method to find the solution.

2.4 Posterior inference and Gibbs sampling

The M-step requires the cluster membership posterior. Computing this posterior is simple for the standard GMM, since each data point $x_i$ can be assigned to a cluster independently of the other data points and we have the familiar cluster origin posterior $p(z_i = k|x_i, \Theta)$. For the PPC model, calculating the posteriors is no longer trivial. If two sample points $x_i$ and $x_j$ participate in a pairwise relation, equation (4) tells us

$$P(z_i, z_j|X, \Theta, W^p) \ne P(z_i|X, \Theta, W^p)\, P(z_j|X, \Theta, W^p),$$

and the posterior probability of $x_i$ and $x_j$ cannot be computed separately.
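The responsibility-weighted M-step updates for $\mu_k$ and $\Sigma_k$ above can be sketched as follows (toy numbers; the array `r` stands in for the posteriors $P(k|x_j, \Theta^{(t-1)}, G)$, however they were obtained).

```python
import numpy as np

# Sketch of the PPC M-step: the mean and covariance of each component are
# responsibility-weighted moments of the data.
def m_step(X, r):
    nk = r.sum(axis=0)                       # effective counts per component
    mu = (r.T @ X) / nk[:, None]             # weighted means
    sigma = []
    for k in range(r.shape[1]):
        d = X - mu[k]
        sigma.append((r[:, k, None] * d).T @ d / nk[k])  # weighted covariance
    return mu, np.array(sigma)

# Toy data: two well-separated pairs, hard responsibilities.
X = np.array([[0.0, 0.0], [1.0, 0.0], [4.0, 4.0], [5.0, 4.0]])
r = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
mu, sigma = m_step(X, r)

assert np.allclose(mu[0], [0.5, 0.0])
assert np.allclose(mu[1], [4.5, 4.0])
```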
For pairwise relations, the joint posterior distribution must be calculated over the entire transitive closure of the 'link' or 'do-not-link' relations. See Fig. 1 for an illustration.

Figure 1: (a) Links (solid lines) and do-not-links (dotted lines) among six samples; (b) Relevancy (solid lines) translated from the links in (a).

In the remainder of this paper, we will refer to the smallest sets of samples whose posterior assignment probabilities can be calculated independently as cliques. The posterior probability of a given sample $x_i$ in a clique $T$ is calculated by marginalizing the posterior over the entire clique,

$$P(z_i = k|X, \Theta, W^p) = \sum_{Z_T : z_i = k} P(Z_T|X_T, \Theta, W^p),$$

with the posterior on the clique given by

$$P(Z_T|X_T, \Theta, W^p) = \frac{P(Z_T, X_T|\Theta, W^p)}{P(X_T|\Theta, W^p)} = \frac{P(Z_T, X_T|\Theta, W^p)}{\sum_{Z'_T} P(Z'_T, X_T|\Theta, W^p)}.$$

Computing the posterior probability of a sample in clique $T$ has time complexity $O(M^{|T|})$, where $|T|$ is the size of clique $T$ and $M$ is the number of components in the mixture model. This is very expensive if $|T|$ is big and the model size is $M \ge 2$. Hence small cliques are required to make the marginalization computationally reasonable. In some circumstances it is natural to limit ourselves to the special case of pairwise relations with $|T| \le 2$, called non-overlapping relations. See Fig. 2 for an illustration. More generally, we can avoid the expensive computation in posterior inference by breaking a large clique into many small ones. To do this, we need to ignore some links or do-not-links. In Section 3.2, we give an application of this idea. For some choices of $g(Z)$, the posterior probability can be given in a simple form even when the clique is big. One example is when there are only hard links. This case is useful when we are sure that a group of samples are from one source. For more general cases, where exact inference is computationally prohibitive, we propose to use Gibbs sampling [6] to estimate the posterior probability.
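The exact clique marginalization described above can be sketched by brute-force enumeration of all $M^{|T|}$ assignments; the code below uses toy numbers, and the array `p` stands in for the per-point likelihood-times-prior term of the clique joint (the exact normalization of that term does not matter, since it cancels).

```python
import itertools
import numpy as np

# Sketch of the exact clique posterior: for a small clique T, marginalize
# P(Z_T | X_T, ...) by enumerating every assignment. The joint over the
# clique is taken proportional to
#   prod_i p[i, z_i] * exp(sum_{i<j} W[i, j] * delta(z_i, z_j)).
def clique_posterior(p, W):
    n, M = p.shape
    post = np.zeros((n, M))
    total = 0.0
    for z in itertools.product(range(M), repeat=n):   # M^|T| assignments
        w = np.prod([p[i, z[i]] for i in range(n)])
        w *= np.exp(sum(W[i, j] for i in range(n) for j in range(i + 1, n)
                        if z[i] == z[j]))
        total += w
        for i in range(n):
            post[i, z[i]] += w
    return post / total

p = np.array([[0.6, 0.4], [0.5, 0.5]])
W = np.array([[0.0, 3.0], [3.0, 0.0]])    # strong link between the two points
post = clique_posterior(p, W)

assert np.allclose(post.sum(axis=1), 1.0)
assert post[1, 0] > 0.5   # the link pulls point 1 toward point 0's cluster
```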
Figure 2: (a) Overlapping pairwise relations; (b) Non-overlapping pairwise relations.

In Gibbs sampling, we estimate $P(z_i|X, \Theta, G)$ as a sample mean,

$$P(z_i = k|X, \Theta, G) = E\big(\delta(z_i, k)\,\big|\,X, \Theta, G\big) \approx \frac{1}{S} \sum_{t=1}^S \delta\big(z_i^{(t)}, k\big),$$

where the sum is over a sequence of $S$ samples from $P(Z|X, \Theta, G)$ generated by the Gibbs MCMC. The $t$-th sample in the sequence is generated by the usual Gibbs sampling technique:

• Pick $z_1^{(t)}$ from the distribution $P(z_1|z_2^{(t-1)}, z_3^{(t-1)}, \dots, z_N^{(t-1)}, X, G, \Theta)$
• Pick $z_2^{(t)}$ from the distribution $P(z_2|z_1^{(t)}, z_3^{(t-1)}, \dots, z_N^{(t-1)}, X, G, \Theta)$
• ...
• Pick $z_N^{(t)}$ from the distribution $P(z_N|z_1^{(t)}, z_2^{(t)}, \dots, z_{N-1}^{(t)}, X, G, \Theta)$

For pairwise relations it is helpful to introduce some notation. Let $Z_{-i}$ denote an assignment of data points to clusters that leaves out the assignment of $x_i$. Let $U(i)$ be the set of indices of the samples that participate in a pairwise relation with sample $x_i$, $U(i) = \{j : W^p_{ij} \ne 0\}$. Then we have

$$P(z_i|Z_{-i}, X, \Theta, W^p) \propto P(x_i, z_i|\Theta) \prod_{j \in U(i)} \exp\big(2 W^p_{ij}\, \delta(z_i, z_j)\big).\qquad(5)$$

When $W^p$ is sparse, the size of $U(i)$ is small, so calculating $P(z_i|Z_{-i}, X, \Theta, W^p)$ is very cheap and Gibbs sampling can effectively estimate the posterior probability.

3 Experiments

3.1 Clustering with different numbers of hard pairwise constraints

In this experiment, we demonstrate how the number of pairwise relations affects the performance of clustering. We apply the PPC model to three UCI data sets: Iris, Waveform, and Pendigits. The Iris data set has 150 samples and three classes, with 50 samples in each class; the Waveform data set has 5000 samples and three classes, with 33% of the samples in each class; the Pendigits data set includes four classes (digits 0, 6, 8, 9), each with 750 samples. All data sets have labels for all samples, which are used to generate the relations and to evaluate performance. We try PPC (with the number of components equal to the number of classes) with various numbers of pairwise relations.
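The sparse Gibbs conditional of equation (5) can be sketched as follows; the example uses toy numbers, and the array `p` stands in for the component term $P(x_i, z_i|\Theta)$ up to normalization.

```python
import numpy as np

# Sketch of the Gibbs conditional (5): the conditional for z_i multiplies the
# point's component term by exp(2 * W_ij) for every relation partner j
# currently assigned to the same cluster, then normalizes.
def gibbs_conditional(i, z, p, W):
    logits = np.log(p[i])
    for j in np.flatnonzero(W[i]):         # only relation partners U(i)
        logits[z[j]] += 2.0 * W[i, j]
    q = np.exp(logits - logits.max())      # normalize stably
    return q / q.sum()

p = np.array([[0.5, 0.5], [0.5, 0.5]])     # uninformative component terms
W = np.array([[0.0, 2.0], [2.0, 0.0]])     # soft link between the two points
z = np.array([0, 1])                       # partner of point 0 sits in cluster 1
q = gibbs_conditional(0, z, p, W)

assert q[1] > q[0]                         # the link pulls z_0 toward cluster 1
```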
For each number of relations, we conduct 100 runs and calculate the averaged classification accuracy. In each run, the data set is randomly split into a training set (90%) and a test set (10%). The pairwise relations are generated as follows: we randomly pick two samples from the training set without replacement and check their labels. If the two have the same label, we add a link constraint between them; otherwise, we add a do-not-link constraint. Note that the generated pairwise relations are non-overlapping, as described in Section 2.4. The model fitted on the training set is applied to the test set. Experimental results are shown in Fig. 3. As Fig. 3 indicates, PPC consistently improves its clustering accuracy on the training set when more pairwise constraints are added; moreover, the effect brought by the constraints generalizes to the test set.

Figure 3: The performance of PPC with various numbers of relations: (a) Iris, (b) Waveform, (c) Pendigits. Each panel plots the averaged classification accuracy on the training and test sets against the number of relations.

3.2 Hard pairwise constraints for encoding partial labels

The experiment in this subsection shows the application of pairwise constraints to partially labeled data. For example, consider a problem with six classes A, B, ..., F. The classes are grouped into several class-sets C1 = {A, B, C}, C2 = {D, E}, C3 = {F}. The samples are partially labeled in the sense that we are told which class-set a sample is from, but not which specific class it is from.
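The label-based relation generation of Section 3.1 can be sketched as follows (toy labels; the function name is hypothetical). Because each index is drawn without replacement and used exactly once, the resulting relations are non-overlapping in the sense of Section 2.4.

```python
import numpy as np

# Sketch of the relation-generation scheme: sample pairs without replacement;
# same label -> 'link', different labels -> 'do-not-link'.
def generate_relations(labels, n_relations, rng):
    idx = rng.permutation(len(labels))
    relations = []
    for k in range(n_relations):
        i, j = idx[2 * k], idx[2 * k + 1]
        kind = 'link' if labels[i] == labels[j] else 'do-not-link'
        relations.append((i, j, kind))
    return relations

rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1, 2, 2])
rels = generate_relations(labels, 3, rng)

# Every generated relation is consistent with the labels.
assert all(kind == ('link' if labels[i] == labels[j] else 'do-not-link')
           for i, j, kind in rels)
```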
We can logically derive a do-not-link constraint between any pair of samples known to belong to different class-sets, while no link constraint can be derived if each class-set contains more than one class. Fig. 4 (a) shows a 120x400 region of the Greenland ice sheet from the NASA Langley DAAC. This region is partially labeled into a snow area and a non-snow area, as indicated in Fig. 4 (b). The snow area can be ice, melting snow or dry snow, while the non-snow area can be bare land, water or cloud. Each pixel has attributes from seven spectral bands. To segment the image, we first divide the image into 5x5x7 blocks (175-dimensional vectors). We use the first 50 principal components as feature vectors. For PPC, we use half of the data samples for the training set and the rest for testing. Hard do-not-link constraints (on the training set only) are generated as follows: for each block in the non-snow area, we randomly choose (without replacement) six blocks from the snow area to build do-not-link constraints. By doing this, we obtain cliques of size seven (1 non-snow block + 6 snow blocks). As in Section 3.1, we apply the model fitted with PPC to the test set and combine the clustering results on both sets into a complete picture. Typical clustering results of a 3-component standard GMM and a 3-component PPC are shown in Fig. 4 (c) and (d) respectively. As Fig. 4 shows, the standard GMM gives a clustering that is clearly in disagreement with the human labeling in Fig. 4 (b). The PPC segmentation makes far fewer mis-assignments of snow areas (tagged white and gray) to non-snow (black) than does the GMM. The PPC segmentation properly labels almost all of the non-snow regions as non-snow. Furthermore, the segmentation of the snow areas into the two classes (not labeled) tagged white and gray in Fig. 4 (d) reflects subtle differences in the snow regions captured by the gray-scale image from spectral channel 2, as shown in Fig. 4 (a).

Figure 4: (a) Gray-scale image from spectral channel 2.
(b) Partial label given by an expert: black pixels denote the non-snow area and white pixels denote the snow area. Clustering results of the standard GMM (c) and PPC (d); (c) and (d) are colored according to the image blocks' assignments.

3.3 Soft pairwise preferences for texture image segmentation

In this subsection, we propose an unsupervised texture image segmentation algorithm as an application of the PPC model. As in Section 3.2, the image is divided into blocks and rearranged into feature vectors. We use a GMM to model those feature vectors, hoping that each Gaussian component represents one texture. However, the standard GMM often fails to give a good segmentation because it cannot make use of the spatial continuity of the image, which is essential in many image segmentation models, such as random fields [7]. In our algorithm, the spatial continuity is incorporated as soft link preferences with uniform weight between each block and its neighbors. The complete data likelihood is

$$P(X, Z|\Theta, W^p) = \frac{1}{K}\, P(X, Z|\Theta) \prod_i \prod_{j \in U(i)} \exp\big(w\, \delta(z_i, z_j)\big),\qquad(6)$$

where $U(i)$ denotes the neighbors of the $i$-th block. The EM algorithm can be roughly interpreted as iterating two steps: 1) estimating the texture description (the parameters of the mixture model) based on the segmentation, and 2) segmenting the image based on the texture description given by step 1. Gibbs sampling is used to estimate the posterior probability in each EM iteration. Equation (5) reduces to

$$P(z_i|Z_{-i}, X, \Theta, W^p) \propto P(x_i, z_i|\Theta) \prod_{j \in U(i)} \exp\big(2w\, \delta(z_i, z_j)\big).$$

The image shown in Fig. 5 (a) is combined from four Brodatz textures (downloaded from http://sipi.usc.edu/services/database/Database.html, April 2004). This image is divided into 7x7 blocks which are then rearranged into 49-dimensional vectors. We use those vectors' first five principal components as the associated feature vectors. For the PPC model, soft links with weight $w$ are added between each block and its four neighbors, as shown in Fig. 5 (b).
Typical clustering results of a 4-component standard GMM and a 4-component PPC with $w = 2$ are shown in Fig. 5 (c) and Fig. 5 (d) respectively. PPC clearly achieves a better segmentation after incorporating spatial continuity.

Figure 5: (a) Texture combination. (b) One block and its four neighbors. Clustering results of the standard GMM (c) and PPC (d); (c) and (d) are shaded according to the blocks' assignments to clusters.

4 Conclusion and Discussion

We have proposed a probabilistic clustering model that incorporates prior knowledge in the form of pairwise relations between samples. Unlike previous work in semi-supervised clustering, this work formulates clustering preferences as a Bayesian prior over the assignment of data points to clusters, and so naturally accommodates both hard constraints and soft preferences. To address the computational difficulty brought by large cliques, we proposed a Markov chain estimation method that reduces the computational cost. Experiments on different data sets show that pairwise relations can consistently improve the performance of the clustering process.

Acknowledgments The authors thank Ashok Srivastava for helpful conversations. This work was funded by NASA Collaborative Agreement NCC 2-1264.

References

[1] K. Wagstaff, C. Cardie, S. Rogers, and S. Schroedl. Constrained K-means clustering with background knowledge. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 577-584, 2001.
[2] S. Basu, A. Banerjee, and R. Mooney. Semi-supervised clustering by seeding. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 19-26, 2002.
[3] D. Klein, S. Kamvar, and C. Manning. From instance level to space-level constraints: making the most of prior knowledge in data clustering. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 307-313, 2002.
[4] N. Shental, A. Bar-Hillel, T. Hertz, and D. Weinshall.
Computing Gaussian mixture models with EM using equivalence constraints. In Advances in Neural Information Processing Systems, volume 15, 2003.
[5] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1-38, 1977.
[6] R. Neal. Probabilistic inference using Markov Chain Monte Carlo methods. Technical Report CRG-TR-93-1, Computer Science Department, University of Toronto, 1993.
[7] C. Bouman and M. Shapiro. A multiscale random field model for Bayesian image segmentation. IEEE Trans. Image Processing, 3:162-177, March 1994.
A direct formulation for sparse PCA using semidefinite programming Alexandre d’Aspremont EECS Dept. U.C. Berkeley Berkeley, CA 94720 alexandre.daspremont@m4x.org Laurent El Ghaoui SAC Capital 540 Madison Avenue New York, NY 10029 laurent.elghaoui@sac.com (on leave from EECS, U.C. Berkeley) Michael I. Jordan EECS and Statistics Depts. U.C. Berkeley Berkeley, CA 94720 jordan@cs.berkeley.edu Gert R. G. Lanckriet EECS Dept. U.C. Berkeley Berkeley, CA 94720 gert@eecs.berkeley.edu Abstract We examine the problem of approximating, in the Frobenius-norm sense, a positive, semidefinite symmetric matrix by a rank-one matrix, with an upper bound on the cardinality of its eigenvector. The problem arises in the decomposition of a covariance matrix into sparse factors, and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a semidefinite programming based relaxation for our problem. 1 Introduction Principal component analysis (PCA) is a popular tool for data analysis and dimensionality reduction. It has applications throughout science and engineering. In essence, PCA finds linear combinations of the variables (the so-called principal components) that correspond to directions of maximal variance in the data. It can be performed via a singular value decomposition (SVD) of the data matrix A, or via an eigenvalue decomposition if A is a covariance matrix. The importance of PCA is due to several factors. First, by capturing directions of maximum variance in the data, the principal components offer a way to compress the data with minimum information loss. Second, the principal components are uncorrelated, which can aid with interpretation or subsequent statistical analysis. On the other hand, PCA has a number of well-documented disadvantages as well. 
A particular disadvantage that is our focus here is the fact that the principal components are usually linear combinations of all variables. That is, all weights in the linear combination (known as loadings), are typically non-zero. In many applications, however, the coordinate axes have a physical interpretation; in biology for example, each axis might correspond to a specific gene. In these cases, the interpretation of the principal components would be facilitated if these components involve very few non-zero loadings (coordinates). Moreover, in certain applications, e.g., financial asset trading strategies based on principal component techniques, the sparsity of the loadings has important consequences, since fewer non-zero loadings imply fewer fixed transaction costs. It would thus be of interest to be able to discover “sparse principal components”, i.e., sets of sparse vectors spanning a low-dimensional space that explain most of the variance present in the data. To achieve this, it is necessary to sacrifice some of the explained variance and the orthogonality of the principal components, albeit hopefully not too much. Rotation techniques are often used to improve interpretation of the standard principal components [1]. [2] considered simple principal components by restricting the loadings to take values from a small set of allowable integers, such as 0, 1, and −1. [3] propose an ad hoc way to deal with the problem, where the loadings with small absolute value are thresholded to zero. We will call this approach “simple thresholding.” Later, a method called SCoTLASS was introduced by [4] to find modified principal components with possible zero loadings. In [5] a new approach, called sparse PCA (SPCA), was proposed to find modified components with zero loadings, based on the fact that PCA can be written as a regression-type optimization problem. This allows the application of LASSO [6], a penalization technique based on the L1 norm. 
In this paper, we propose a direct approach (called DSPCA in what follows) that improves the sparsity of the principal components by directly incorporating a sparsity criterion in the PCA problem formulation and then relaxing the resulting optimization problem, yielding a convex optimization problem. In particular, we obtain a convex semidefinite programming (SDP) formulation. SDP problems can be solved in polynomial time via general-purpose interior-point methods [7], and our current implementation of DSPCA makes use of these general-purpose methods. This suffices for an initial empirical study of the properties of DSPCA and for comparison to the algorithms discussed above on problems of small to medium dimensionality. For high-dimensional problems, the general-purpose methods are not viable and it is necessary to attempt to exploit special structure in the problem. It turns out that our problem can be expressed as a special type of saddle-point problem that is well suited to recent specialized algorithms, such as those described in [8, 9]. These algorithms offer a significant reduction in computational time compared to generic SDP solvers. In the current paper, however, we restrict ourselves to an investigation of the basic properties of DSPCA on problems for which the generic methods are adequate. Our paper is structured as follows. In Section 2, we show how to efficiently derive a sparse rank-one approximation of a given matrix using a semidefinite relaxation of the sparse PCA problem. In Section 3, we derive an interesting robustness interpretation of our technique, and in Section 4 we describe how to use this interpretation in order to decompose a matrix into sparse factors. Section 5 outlines different algorithms that can be used to solve the problem, while Section 6 presents numerical experiments comparing our method with existing techniques.

Notation. Here, S_n is the set of symmetric matrices of size n.
We denote by 1 a vector of ones, while Card(x) is the cardinality (number of non-zero elements) of a vector x. For X ∈ S_n, ∥X∥_F is the Frobenius norm of X, i.e., ∥X∥_F = √Tr(X²); λ_max(X) denotes the maximum eigenvalue of X, and |X| is the matrix whose elements are the absolute values of the elements of X.

2 Sparse eigenvectors

In this section, we derive a semidefinite programming (SDP) relaxation for the problem of approximating a symmetric matrix by a rank-one matrix with an upper bound on the cardinality of its eigenvector. We first reformulate this as a variational problem, and then obtain a lower bound on its optimal value via an SDP relaxation (we refer the reader to [10] for an overview of semidefinite programming). Let A ∈ S_n be a given n × n positive semidefinite symmetric matrix and let k be an integer with 1 ≤ k ≤ n. We consider the problem:

Φ_k(A) := min ∥A − xx^T∥_F subject to Card(x) ≤ k,   (1)

in the variable x ∈ R^n. We can solve instead the following equivalent problem:

Φ_k²(A) = min ∥A − λxx^T∥_F² subject to ∥x∥₂ = 1, λ ≥ 0, Card(x) ≤ k,

in the variables x ∈ R^n and λ ∈ R. Minimizing over λ, we obtain Φ_k²(A) = ∥A∥_F² − ν_k(A), where

ν_k(A) := max x^T A x subject to ∥x∥₂ = 1, Card(x) ≤ k.   (2)

To compute a semidefinite relaxation of this program (see [10], for example), we rewrite (2) as:

ν_k(A) := max Tr(AX) subject to Tr(X) = 1, Card(X) ≤ k², X ⪰ 0, Rank(X) = 1,   (3)

in the symmetric matrix variable X ∈ S_n. Indeed, if X is a solution to the above problem, then X ⪰ 0 and Rank(X) = 1 mean that we have X = xx^T, and Tr(X) = 1 implies that ∥x∥₂ = 1. Finally, if X = xx^T, then Card(X) ≤ k² is equivalent to Card(x) ≤ k. Naturally, problem (3) is still non-convex and very difficult to solve, due to the rank and cardinality constraints. Since Card(u) = q implies ∥u∥₁ ≤ √q ∥u∥₂ for every u ∈ R^p, we can replace the non-convex constraint Card(X) ≤ k² by a weaker but convex one: 1^T|X|1 ≤ k, where we have exploited the property that ∥X∥_F = x^T x = 1 when X = xx^T and Tr(X) = 1.
If we also drop the rank constraint, we can form a relaxation of (3) and (2) as:

ν_k(A) := max Tr(AX) subject to Tr(X) = 1, 1^T|X|1 ≤ k, X ⪰ 0,   (4)

which is a semidefinite program (SDP) in the variable X ∈ S_n, where k is an integer parameter controlling the sparsity of the solution. The optimal value of this program is an upper bound on the optimal value ν_k(A) of the variational program in (2), and hence gives a lower bound on the optimal value Φ_k(A) of the original problem (1). Finally, the optimal solution X will not always be of rank one, but we can truncate it and keep only its dominant eigenvector x as an approximate solution to the original problem (1). In Section 6 we show that in practice the solution X to (4) tends to have a rank very close to one, and that its dominant eigenvector is indeed sparse.

3 A robustness interpretation

In this section, we show that problem (4) can be interpreted as a robust formulation of the maximum eigenvalue problem, with additive, component-wise uncertainty in the matrix A. We again assume A to be symmetric and positive semidefinite. In the previous section, we considered in (2) a cardinality-constrained variational formulation of the maximum eigenvalue problem. Here we look at a small variation where we penalize the cardinality and solve:

max x^T A x − ρ Card²(x) subject to ∥x∥₂ = 1,

in the variable x ∈ R^n, where the parameter ρ > 0 controls the size of the penalty. Let us remark that we can easily move from the constrained formulation in (4) to the penalized form in (5) by duality. This problem is again non-convex and very difficult to solve. As in the last section, we can form the equivalent program:

max Tr(AX) − ρ Card(X) subject to Tr(X) = 1, X ⪰ 0, Rank(X) = 1,

in the variable X ∈ S_n. Again, we get a relaxation of this program by forming:

max Tr(AX) − ρ 1^T|X|1 subject to Tr(X) = 1, X ⪰ 0,   (5)

which is a semidefinite program in the variable X ∈ S_n, where ρ > 0 controls the penalty size.
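For small n, the combinatorial quantity ν_k(A) in (2) can be computed exactly by enumerating supports: for a fixed support S with |S| = k, the maximizer is the top eigenvector of the principal submatrix A_SS. This gives a ground truth against which relaxations of (2) can be checked. A minimal numpy sketch (our illustration, not code from the paper):

```python
import numpy as np
from itertools import combinations

def nu_k(A, k):
    """Exact value of (2): max x^T A x s.t. ||x||_2 = 1, Card(x) <= k.
    For a fixed support S, the optimum is lambda_max(A[S, S])."""
    n = A.shape[0]
    best = -np.inf
    for S in combinations(range(n), k):
        sub = A[np.ix_(S, S)]
        best = max(best, np.linalg.eigvalsh(sub)[-1])  # eigvalsh: ascending
    return best

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T                         # a positive semidefinite test matrix
vals = [nu_k(A, k) for k in range(1, 7)]
# nu_k is nondecreasing in k, nu_1 is the largest diagonal entry,
# and nu_n(A) equals lambda_max(A).
```

The enumeration has cost binomial(n, k) eigenvalue computations, which is exactly the combinatorial explosion the SDP relaxation (4) is designed to avoid.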
We can rewrite this last problem as:

max_{X ⪰ 0, Tr(X) = 1}  min_{|U_ij| ≤ ρ}  Tr(X(A + U))   (6)

and we get a dual to (5) as:

min λ_max(A + U) subject to |U_ij| ≤ ρ, i, j = 1, . . . , n,   (7)

which is a maximum eigenvalue problem with variable U ∈ R^{n×n}. This gives a natural robustness interpretation to the relaxation in (5): it corresponds to a worst-case maximum eigenvalue computation, with component-wise bounded noise of intensity ρ on the matrix coefficients.

4 Sparse decomposition

Here, we use the results obtained in the previous two sections to describe a sparse equivalent to the PCA decomposition technique. Suppose that we start with a matrix A_1 ∈ S_n; our objective is to decompose it into factors with target sparsity k. We solve the relaxed problem in (4):

max Tr(A_1 X) subject to Tr(X) = 1, 1^T|X|1 ≤ k, X ⪰ 0,

to get a solution X_1, and truncate it to keep only the dominant (sparse) eigenvector x_1. Finally, we deflate A_1 to obtain

A_2 = A_1 − (x_1^T A_1 x_1) x_1 x_1^T,

and iterate to obtain further components. The question is now: when do we stop the decomposition? In the PCA case, the decomposition stops naturally after Rank(A) factors have been found, since A_{Rank(A)+1} is then equal to zero. In the case of the sparse decomposition, we have no guarantee that this will happen. However, the robustness interpretation gives us a natural stopping criterion: if all the coefficients in |A_i| are smaller than the noise level ρ⋆ (computed in the last section), then we must stop, since the matrix is essentially indistinguishable from zero. So, even though we have no guarantee that the algorithm will terminate with a zero matrix, the decomposition will in practice terminate as soon as the coefficients in A_i become indistinguishable from the noise.

5 Algorithms

For problems of moderate size, our SDP can be solved efficiently using solvers such as SEDUMI [7]. For larger-scale problems, we need to resort to other types of algorithms for convex optimization.
Of special interest are the recently developed algorithms due to [8, 9]. These are first-order methods specialized to problems having a specific saddle-point structure. It turns out that our problem, when expressed in the saddle-point form (6), falls precisely into the class of problems these algorithms handle. Judging from the results presented in [9], in the closely related context of computing the Lovász capacity of a graph, the theoretical complexity, as well as the practical performance, of the method as applied to (6) should exhibit very significant improvements over general-purpose interior-point algorithms for SDP. Of course, nothing comes without a price: for fixed problem size, the first-order methods mentioned above converge in O(1/ϵ), where ϵ is the required accuracy on the optimal value, while interior-point methods converge in O(log(1/ϵ)). We are currently evaluating the impact of this tradeoff both theoretically and in practice.

6 Numerical results

In this section, we illustrate the effectiveness of the proposed approach on both an artificial and a real-life data set. We compare with the other approaches mentioned in the introduction: PCA, PCA with simple thresholding, SCoTLASS and SPCA. The results show that our approach can achieve more sparsity in the principal components than SPCA does, while explaining as much variance. We begin with a simple example illustrating the link between k and the cardinality of the solution.

6.1 Controlling sparsity with k

Here, we illustrate on a simple example how the sparsity of the solution to our relaxation evolves as k varies from 1 to n. We generate a 10×10 matrix U with uniformly distributed coefficients in [0, 1]. We let v be the sparse vector v = (1, 0, 1, 0, 1, 0, 1, 0, 1, 0). We then form a test matrix A = U^T U + σ v v^T, where σ is a signal-to-noise ratio, equal to 15 in our case. We sample 50 different matrices A using this technique. For each k between 1 and 10 and each A, we solve the SDP in (4).
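The test-matrix construction just described is straightforward to reproduce; a numpy sketch (the per-matrix SDP solve is omitted, and the check below only confirms that the planted sparse direction dominates the spectrum, since v^T A v / v^T v ≥ σ∥v∥² = 75 when U^T U is PSD):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 15.0                                  # signal-to-noise ratio
v = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0], dtype=float)

U = rng.uniform(0.0, 1.0, size=(10, 10))      # uniform coefficients in [0, 1]
A = U.T @ U + sigma * np.outer(v, v)          # test matrix A = U^T U + sigma v v^T

# Rayleigh-quotient lower bound: lambda_max(A) >= sigma * ||v||^2 = 75.
top_eig = np.linalg.eigvalsh(A)[-1]
```

In the experiment, 50 such matrices A are drawn and the relaxation (4) is solved for each k; the cardinality of the dominant eigenvector of X is then recorded.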
We then extract the first eigenvector of the solution X and record its cardinality. In Figure 1, we show the mean cardinality (and standard deviation) as a function of k. We observe that k + 1 is actually a good predictor of the cardinality, especially when k + 1 is close to the actual cardinality (5 in this case).

Figure 1: Cardinality versus k.

6.2 Artificial data

We consider the simulation example proposed by [5]. In this example, three hidden factors are created:

V1 ∼ N(0, 290),  V2 ∼ N(0, 300),  V3 = −0.3 V1 + 0.925 V2 + ε,  ε ∼ N(0, 1),   (8)

with V1, V2 and ε independent. Afterwards, 10 observed variables are generated as follows:

X_i = V_j + ε_i^j,  ε_i^j ∼ N(0, 1),

with j = 1 for i = 1, 2, 3, 4, j = 2 for i = 5, 6, 7, 8, and j = 3 for i = 9, 10, and with the {ε_i^j} independent for j = 1, 2, 3, i = 1, . . . , 10. Instead of sampling data from this model and computing an empirical covariance matrix of (X1, . . . , X10), we use the exact covariance matrix to compute principal components using the different approaches. Since the three underlying factors have about the same variance, and the first two are each associated with 4 variables while the last one is associated with only 2 variables, V1 and V2 are almost equally important, and they are both significantly more important than V3. This, together with the fact that the first 2 principal components explain more than 99% of the total variance, suggests that considering two sparse linear combinations of the original variables should be sufficient to explain most of the variance in data sampled from this model. This is also discussed by [5]. The ideal solution would thus be to use only the variables (X1, X2, X3, X4) for the first sparse principal component, to recover the factor V1, and only (X5, X6, X7, X8) for the second sparse principal component, to recover V2. Using the true covariance matrix and the oracle knowledge that the ideal sparsity is 4, [5] performed SPCA (with λ = 0).
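The exact covariance matrix of (X1, . . . , X10) can be assembled directly from the factor loadings. A numpy sketch (ours; we take Var(ε) = 1, which makes V3 nearly a linear combination of V1 and V2 and reproduces the stated >99% figure for the first two principal components):

```python
import numpy as np

# Loadings of X on the independent sources (V1, V2, eps):
# X_1..X_4 = V1 + noise, X_5..X_8 = V2 + noise, X_9, X_10 = V3 + noise,
# with V3 = -0.3 V1 + 0.925 V2 + eps.
B = np.zeros((10, 3))
B[0:4, 0] = 1.0
B[4:8, 1] = 1.0
B[8:10, :] = [-0.3, 0.925, 1.0]

D = np.diag([290.0, 300.0, 1.0])      # Var(V1), Var(V2), Var(eps)
C = B @ D @ B.T + np.eye(10)          # exact covariance, incl. unit noise per X_i

lam = np.linalg.eigvalsh(C)[::-1]     # eigenvalues, descending
explained = lam[:2].sum() / lam.sum() # fraction explained by the first two PCs
```

Running PCA on this exact matrix (rather than on sampled data) removes sampling noise from the comparison between the methods.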
We carry out our algorithm with k = 4. The results are reported in Table 1, together with results for PCA, simple thresholding and SCoTLASS (t = 2). Notice that SPCA, DSPCA and SCoTLASS all find the correct sparse principal components, while simple thresholding yields inferior performance. The latter wrongly includes the variables X9 and X10 to explain most variance (probably misled by the high correlation between V2 and V3); what is more, it assigns higher loadings to X9 and X10 than to the variables (X5, X6, X7, X8) that are clearly more important. Simple thresholding correctly identifies the second sparse principal component, probably because V1 has a lower correlation with V3. Simple thresholding also explains a bit less variance than the other methods.

Table 1: Loadings and explained variance for the first two principal components, for the artificial example. 'ST' is the simple thresholding method; 'other' stands for all the other methods: SPCA, DSPCA and SCoTLASS.

             X1     X2     X3     X4     X5     X6     X7     X8     X9     X10    explained variance
PCA, PC1    .116   .116   .116   .116  -.395  -.395  -.395  -.395  -.401  -.401   60.0%
PCA, PC2   -.478  -.478  -.478  -.478  -.145  -.145  -.145  -.145   .010   .010   39.6%
ST, PC1     0      0      0      0      0      0     -.497  -.497  -.503  -.503   38.8%
ST, PC2    -.5    -.5    -.5    -.5     0      0      0      0      0      0      38.6%
other, PC1  0      0      0      0      .5     .5     .5     .5     0      0      40.9%
other, PC2  .5     .5     .5     .5     0      0      0      0      0      0      39.5%

6.3 Pit props data

The pit props data (consisting of 180 observations and 13 measured variables) was introduced by [11] and has become a standard example of the potential difficulty in interpreting principal components. [4] applied SCoTLASS to this problem and [5] used their SPCA approach, both with the goal of obtaining sparse principal components that can be better interpreted than those of PCA. SPCA performs better than SCoTLASS: it identifies principal components with respectively 7, 4, 4, 1, 1, and 1 non-zero loadings, as shown in Table 2.
As shown in [5], this is much sparser than the modified principal components obtained by SCoTLASS, while explaining nearly the same variance (75.8% versus 78.2% for the first 6 principal components). Also, simple thresholding of PCA, with a number of non-zero loadings that matches the result of SPCA, does worse than SPCA in terms of explained variance. Following this previous work, we also consider the first 6 principal components. We try to identify principal components that are sparser than the best result of this previous work, i.e., SPCA, but explain the same variance. Therefore, we choose values for k of 5, 2, 2, 1, 1, 1 (two less than those of the SPCA results reported above, but no less than 1). Figure 2 shows the cumulative number of non-zero loadings and the cumulative explained variance (measuring the variance in the subspace spanned by the first i eigenvectors). The results for DSPCA are plotted with a red line and those for SPCA with a blue line; the cumulative explained variance for normal PCA is depicted with a black line. It can be seen that our approach is able to explain nearly the same variance as the SPCA method, while clearly reducing the number of non-zero loadings for the first 6 principal components. Adjusting the first value of k from 5 to 6 (relaxing the sparsity), we obtain the results plotted with a red dash-dot line: still better in sparsity, and with a cumulative explained variance that is fully competitive with SPCA. Moreover, as in the SPCA approach, the important variables associated with the 6 principal components do not overlap, which leads to a clearer interpretation. Table 2 shows the first three corresponding principal components for the different approaches (DSPCAw5 for k1 = 5 and DSPCAw6 for k1 = 6).

Table 2: Loadings for the first three principal components, for the real-life example.
              topdiam length  moist  testsg ovensg ringtop ringbud bowmax bowdist whorls clear  knots  diaknot
SPCA, PC1      -.477  -.476    0      0      .177    0      -.250  -.344  -.416   -.400   0      0      0
SPCA, PC2       0      0      .785   .620    0       0       0     -.021   0       0      0     .013    0
SPCA, PC3       0      0       0      0      .640   .589    .492    0      0       0      0      0     -.015
DSPCAw5, PC1   -.560  -.583    0      0      0       0      -.263  -.099  -.371   -.362   0      0      0
DSPCAw5, PC2    0      0      .707   .707    0       0       0      0      0       0      0      0      0
DSPCAw5, PC3    0      0       0      0      0      -.793   -.610   0      0       0      0      0      .012
DSPCAw6, PC1   -.491  -.507    0      0      0      -.067   -.357  -.234  -.387   -.409   0      0      0
DSPCAw6, PC2    0      0      .707   .707    0       0       0      0      0       0      0      0      0
DSPCAw6, PC3    0      0       0      0      0      -.873   -.484   0      0       0      0      0      .057

Figure 2: Cumulative cardinality and cumulative explained variance for SPCA and DSPCA as a function of the number of principal components: black line for normal PCA, blue for SPCA and red for DSPCA (full for k1 = 5 and dash-dot for k1 = 6).

7 Conclusion

The semidefinite relaxation of the sparse principal component analysis problem proposed here appears to significantly improve the solution's sparsity, while explaining the same variance as previously proposed methods in the examples detailed above. The algorithms we used here handle moderate-size problems efficiently. We are currently working on large-scale extensions using first-order techniques.

Acknowledgements

Thanks to Andrew Mullhaupt and Francis Bach for useful suggestions. We would like to acknowledge support from ONR MURI N00014-00-1-0637, Eurocontrol-C20052E/BM/03, and NASA-NCC2-1428.

References

[1] I. T. Jolliffe. Rotation of principal components: choice of normalization constraints. Journal of Applied Statistics, 22:29–35, 1995.
[2] S. Vines. Simple principal components. Applied Statistics, 49:441–451, 2000.
[3] J. Cadima and I. T. Jolliffe. Loadings and correlations in the interpretation of principal components.
Journal of Applied Statistics, 22:203–214, 1995.
[4] I. T. Jolliffe and M. Uddin. A modified principal component technique based on the lasso. Journal of Computational and Graphical Statistics, 12:531–547, 2003.
[5] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Technical report, Statistics Department, Stanford University, 2004.
[6] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1996.
[7] J. F. Sturm. Using SeDuMi 1.0x, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11:625–653, 1999.
[8] Y. Nesterov. Smooth minimization of non-smooth functions. CORE working paper, 2003.
[9] A. Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle-point problems. MINERVA working paper, 2004.
[10] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[11] J. Jeffers. Two case studies in the application of principal components. Applied Statistics, 16:225–236, 1967.
Stable adaptive control with online learning Andrew Y. Ng Stanford University Stanford, CA 94305, USA H. Jin Kim Seoul National University Seoul, Korea

Abstract

Learning algorithms have enjoyed numerous successes in robotic control tasks. In problems with time-varying dynamics, online learning methods have also proved to be a powerful tool for automatically tracking and/or adapting to the changing circumstances. However, for safety-critical applications such as airplane flight, the adoption of these algorithms has been significantly hampered by their lack of safety guarantees, such as "stability" guarantees. Rather than trying to show difficult a priori stability guarantees for specific learning methods, in this paper we propose a method for "monitoring" the controllers suggested by the learning algorithm online, and rejecting controllers leading to instability. We prove that even if an arbitrary online learning method is used with our algorithm to control a linear dynamical system, the resulting system is stable.

1 Introduction

Online learning algorithms provide a powerful set of tools for automatically fine-tuning a controller to optimize performance while in operation, or for automatically adapting to the changing dynamics of a control problem. [2] Although one can easily imagine many complex learning algorithms (SVMs, Gaussian processes, ICA, ...) being powerfully applied to online learning for control, for these methods to be widely adopted for applications such as airplane flight, it is critical that they come with safety guarantees, specifically stability guarantees. In our interactions with industry, we also found stability to be a frequently raised concern for online learning. We believe that the lack of safety guarantees represents a significant barrier to the wider adoption of many powerful learning algorithms for online adaptation and control.
It is also typically infeasible to replace formal stability guarantees with only empirical testing: for example, to convincingly demonstrate that we can safely fly a fleet of 100 aircraft for 10,000 hours would require 10^6 hours of flight tests. The control literature contains many examples of ingenious stability proofs for various online learning schemes. It is impossible to do this literature justice here, but some examples include [10, 7, 12, 8, 11, 5, 4, 9]. However, most of this work addresses only very specific online learning methods, and usually quite simple ones (such as ones that switch between only a finite number of parameter values using a specific, simple decision rule, e.g., [4]). In this paper, rather than trying to show difficult a priori stability guarantees for specific algorithms, we propose a method for "monitoring" an arbitrary learning algorithm being used to control a linear dynamical system. By rejecting control values online that appear to be leading to instability, our algorithm ensures that the resulting controlled system is stable.

2 Preliminaries

Following most work in control [6], we will consider control of a linear dynamical system. Let x_t ∈ R^{n_x} be the n_x-dimensional state at time t. The system is initialized to x_0 = 0. At each time t, we select a control action u_t ∈ R^{n_u}, as a result of which the state transitions to

x_{t+1} = A x_t + B u_t + w_t.   (1)

Here, A ∈ R^{n_x×n_x} and B ∈ R^{n_x×n_u} govern the dynamics of the system, and w_t is a disturbance term. We will not make any distributional assumptions about the source of the disturbances w_t for now (indeed, we will consider a setting where an adversary chooses them from some bounded set). For many applications, the controls are chosen as a linear function of the state:

u_t = K_t x_t.   (2)

Here, the K_t ∈ R^{n_u×n_x} are the control gains.
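In code, one step of the plant (1) under the linear feedback (2) is a single matrix-vector update. The sketch below (with an illustrative system and an arbitrarily chosen stabilizing gain, not taken from the paper) simulates a closed loop under bounded disturbances:

```python
import numpy as np

A = np.array([[1.1, 0.0],
              [0.0, 0.5]])            # open-loop unstable (eigenvalue 1.1)
B = np.array([[1.0],
              [0.0]])
K = np.array([[-0.4, 0.0]])           # chosen so that A + B K has eigenvalues 0.7, 0.5

x = np.zeros(2)
peak = 0.0
for t in range(200):
    w = np.array([np.sin(t), np.cos(t)])   # a bounded disturbance sequence, ||w||_2 = 1
    u = K @ x                              # Equation (2): u_t = K_t x_t
    x = A @ x + B @ u + w                  # Equation (1): x_{t+1} = A x_t + B u_t + w_t
    peak = max(peak, np.linalg.norm(x))
```

Since the closed-loop matrix A + BK is stable here, the state stays bounded for any bounded disturbance sequence, which is exactly the BIBO property defined next.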
If the goal is to minimize the expected value of a quadratic cost function over the states and actions,

J = (1/T) Σ_{t=1}^T (x_t^T Q x_t + u_t^T R u_t),

and the w_t are Gaussian, then we are in the LQR (linear quadratic regulation) control setting. Here, Q ∈ R^{n_x×n_x} and R ∈ R^{n_u×n_u} are positive semi-definite matrices. In the infinite horizon setting, under mild conditions there exists an optimal steady-state (or stationary) gain matrix K, so that setting K_t = K for all t minimizes the expected value of J. [1]

We consider a setting in which an online learning algorithm (also called an adaptive control algorithm) is used to design a controller. Thus, on each time step t, an online algorithm may (based on the observed states and action sequence so far) propose some new gain matrix K_t. If we follow the learning algorithm's recommendation, then we will start choosing controls according to u = K_t x. More formally, an online learning algorithm is a function f : ∪_{t=1}^∞ (R^{n_x} × R^{n_u})^t → R^{n_u×n_x} mapping from finite sequences of states and actions (x_0, u_0, . . . , x_{t−1}, u_{t−1}) to controller gains K_t. We assume that f's outputs are bounded (∥K_t∥_F ≤ ψ for some ψ > 0, where ∥·∥_F is the Frobenius norm).

2.1 Stability

In classical control theory [6], probably the most important desideratum of a controlled system is that it must be stable. Given a fixed adaptive control algorithm f and a fixed sequence of disturbance terms w_0, w_1, . . ., the sequence of states x_t visited is exactly determined by the equations

K_t = f(x_0, u_0, . . . , x_{t−1}, u_{t−1});  x_{t+1} = A x_t + B K_t x_t + w_t,  t = 0, 1, 2, . . .   (3)

Thus, for fixed f, we can think of the (controlled) dynamical system as a mapping from the sequence of disturbance terms w_t to the sequence of states x_t. We now give the most commonly used definition of stability, called BIBO stability (see, e.g., [6]).

Definition.
A system controlled by f is bounded-input bounded-output (BIBO) stable if, given any constant c_1 > 0, there exists some constant c_2 > 0 so that for all sequences of disturbance terms satisfying ∥w_t∥₂ ≤ c_1 (for all t = 1, 2, . . .), the resulting state sequence satisfies ∥x_t∥₂ ≤ c_2 (for all t = 1, 2, . . .).

Thus, a system is BIBO stable if, under bounded disturbances (possibly chosen by an adversary), the state remains bounded and does not diverge. We also define the t-th step dynamics matrix D_t to be D_t = A + B K_t. Note therefore that the state transition dynamics of the system (right half of Equation 3) may now be written x_{t+1} = D_t x_t + w_t. Further, the dependence of x_t on the w_t's can be expressed as follows:

x_t = w_{t−1} + D_{t−1} x_{t−1} = w_{t−1} + D_{t−1}(w_{t−2} + D_{t−2} x_{t−2}) = · · ·   (4)
    = w_{t−1} + D_{t−1} w_{t−2} + D_{t−1} D_{t−2} w_{t−3} + · · · + D_{t−1} · · · D_1 w_0.   (5)

Since the number of terms in the sum above grows linearly with t, to ensure BIBO stability of a system—i.e., that x_t remains bounded for all t—it is usually necessary for the terms in the sum to decay rapidly, so that the sum remains bounded. For example, if it were true that ∥D_{t−1} · · · D_{t−k+1} w_{t−k}∥₂ ≤ (1 − ϵ)^k for some 0 < ϵ < 1, then the terms in the sequence above would be norm-bounded by a geometric series, and thus the sum would be bounded. More generally, the disturbance w_t contributes a term D_{t+k−1} · · · D_{t+1} w_t to the state x_{t+k}, and we would like D_{t+k−1} · · · D_{t+1} w_t to become small rapidly as k becomes large (or, in the control parlance, for the effects of the disturbance w_t on x_{t+k} to be attenuated quickly). If K_t = K for all t, then we say that we are using a (nonadaptive) stationary controller K. In this setting, it is straightforward to check whether our system is stable. Specifically, it is BIBO stable if and only if the magnitudes of all the eigenvalues of D = A + BK are strictly less than 1. [6] To informally see why, note that the effect of w_t on x_{t+k} can be written D^{k−1} w_t (as in Equation 5).
Moreover, |λ_max(D)| < 1 implies D^{k−1} w_t → 0 as k → ∞. Thus, the disturbance w_t has a negligible influence on x_{t+k} for large k. More precisely, it is possible to show that, under the assumption that ∥w_t∥ ≤ c_1, the sequence on the right hand side of (5) is upper-bounded by a geometrically decreasing sequence, and thus its sum must also be bounded. [6]

It was easy to check for stability when K_t was stationary, because the mapping from the w_t's to the x_t's was linear. In more general settings, if K_t depends in some complex way on x_1, . . . , x_{t−1} (which in turn depend on w_0, . . . , w_{t−2}), then x_{t+1} = A x_t + B K_t x_t + w_t will be a nonlinear function of the sequence of disturbances.¹ This makes it significantly more difficult to check for BIBO stability of the system. Further, unlike the stationary case, it is well known that λ_max(D_t) < 1 (for all t) is insufficient to ensure stability. For example, consider a system where D_t = D_odd if t is odd, and D_t = D_even otherwise, where²

D_odd = [ 0.9  0 ; 10  0.9 ],   D_even = [ 0.9  10 ; 0  0.9 ].   (6)

Note that λ_max(D_t) = 0.9 < 1 for all t. However, if we pick w_0 = [1 0]^T and w_1 = w_2 = . . . = 0, then (following Equation 5) we have

x_{2t+1} = D_{2t} D_{2t−1} D_{2t−2} . . . D_2 D_1 w_0   (7)
         = (D_even D_odd)^t w_0   (8)
         = [ 100.81  9 ; 9  0.81 ]^t w_0.   (9)

Thus, even though the w_t's are bounded, we have ∥x_{2t+1}∥₂ ≥ (100.81)^t, showing that the state sequence is not bounded. Hence, this system is not BIBO stable.

3 Checking for stability

If f is a complex learning algorithm, it is typically very difficult to guarantee that the resulting system is BIBO stable. Indeed, even if f switches between only two specific sets of gains K, and if w_0 is the only non-zero disturbance term, it can still be undecidable to determine whether the state sequence remains bounded.
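The switching counterexample from Section 2.1 is easy to verify numerically: each D_t alone is stable, yet the product governing two steps of the switched system has a huge eigenvalue. A quick numpy check:

```python
import numpy as np

D_odd  = np.array([[0.9,  0.0],
                   [10.0, 0.9]])
D_even = np.array([[0.9, 10.0],
                   [0.0,  0.9]])

# Each matrix alone looks stable: all eigenvalues have magnitude 0.9 < 1.
for D in (D_odd, D_even):
    assert np.allclose(np.abs(np.linalg.eigvals(D)), 0.9)

# But the two-step product blows up, so ||x_{2t+1}|| grows geometrically.
P = D_even @ D_odd          # equals [[100.81, 9], [9, 0.81]], as in (9)
```

This is the spectral-radius-vs-product phenomenon that the singular-value criterion of the next section is designed to rule out.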
[3] Rather than try to give a priori guarantees on f, we instead propose a method for ensuring BIBO stability of a system by "monitoring" the control gains proposed by f, and rejecting gains that appear to be leading to instability. We start computing controls according to a set of gains K̂_t only if it is accepted by the algorithm. From the discussion in Section 2.1, the criterion for accepting or rejecting a set of gains K̂_t cannot simply be to check whether λ_max(A + B K̂_t) = λ_max(D̂_t) < 1. Specifically, λ_max(D_2 D_1) is not bounded by λ_max(D_2) λ_max(D_1), and so even if λ_max(D_t) is small for all t—which would be the case if the gains K_t for any fixed t could be used to obtain a stable stationary controller—the quantity λ_max(∏_{τ=1}^t D_τ) can still be large, and thus (∏_{τ=1}^t D_τ) w_0 can be large. However, the following holds for the largest singular value σ_max of matrices.³ Though the result is quite standard, for the sake of completeness we include a proof.

Proposition 3.1: Let any matrices P ∈ R^{l×m} and Q ∈ R^{m×n} be given. Then σ_max(PQ) ≤ σ_max(P) σ_max(Q).

Proof. σ_max(PQ) = max_{u,v: ∥u∥₂=∥v∥₂=1} u^T PQ v. Let u* and v* be a pair of vectors attaining the maximum in the previous equation. Then σ_max(PQ) = u*^T PQ v* ≤ ∥u*^T P∥₂ · ∥Q v*∥₂ ≤ max_{u,v: ∥u∥₂=∥v∥₂=1} ∥u^T P∥₂ · ∥Q v∥₂ = σ_max(P) σ_max(Q). □

¹ Even if f is linear in its inputs so that K_t is linear in x_1, . . . , x_{t−1}, the state sequence's dependence on (w_0, w_1, . . .) is still nonlinear because of the multiplicative term K_t x_t in the dynamics (Equation 3).
² Clearly, such a system can be constructed with appropriate choices of A, B and K_t.
³ The largest singular value of M is σ_max(M) = σ_max(M^T) = max_{u,v: ∥u∥₂=∥v∥₂=1} u^T M v = max_{u: ∥u∥₂=1} ∥M u∥₂. If x is a vector, then σ_max(x) is just the L2-norm of x.

Thus, if we could ensure that σ_max(D_t) ≤ 1 − ϵ for all t, we would find that the influence of w_0 on x_t has norm bounded by ∥D_{t−1} D_{t−2} . . . D_1 w_0∥₂ = σ_max(D_{t−1} . . . D_1 w_0) ≤ σ_max(D_{t−1}) . . .
σ_max(D_1) ∥w_0∥₂ ≤ (1 − ϵ)^{t−1} ∥w_0∥₂ (since ∥v∥₂ = σ_max(v) if v is a vector). Thus, the influence of w_t on x_{t+k} goes to 0 as k → ∞. However, it would be an overly strong condition to demand that σ_max(D_t) < 1 − ϵ for every t. Specifically, there are many stable, stationary controllers that do not satisfy this. For example, either one of the matrices D_t in (6), if used as the stationary dynamics, is stable (since λ_max = 0.9 < 1). Thus, it should be acceptable for us to use a controller with either of these D_t (so long as we do not switch between them on every step). But these D_t have σ_max ≈ 10.1 > 1, and thus would be rejected if we were to demand that σ_max(D_t) < 1 − ϵ for every t. Thus, we will instead ask only for a weaker condition: that for all t,

σ_max(D_t · D_{t−1} · · · · · D_{t−N+1}) < 1 − ϵ.   (10)

This is motivated by the following, which shows that any stable, stationary controller meets this condition (for sufficiently large N):

Proposition 3.2: Let any 0 < ϵ < 1 and any D with λ_max(D) < 1 be given. Then there exists N_0 > 0 so that for all N ≥ N_0, we have σ_max(D^N) ≤ 1 − ϵ.

The proof follows from the fact that λ_max(D) < 1 implies D^N → 0 as N → ∞. Thus, given any fixed, stable controller, if N is sufficiently large, it will satisfy (10). Further, if (10) holds, then w_0's influence on x_{kN+1} is bounded by

∥D_{kN} · D_{kN−1} · · · D_1 w_0∥₂ ≤ σ_max(D_{kN} · D_{kN−1} · · · D_1) ∥w_0∥₂ ≤ ∏_{i=0}^{k−1} σ_max(D_{iN+N} D_{iN+N−1} · · · D_{iN+1}) ∥w_0∥₂ ≤ (1 − ϵ)^k ∥w_0∥₂,   (11)

which goes to 0 geometrically quickly as k → ∞. (The first and second inequalities above follow from Proposition 3.1.) Hence, the disturbances' effects are attenuated quickly. To ensure that (10) holds, we propose the following algorithm. Below, N > 0 and 0 < ϵ < 1 are parameters of the algorithm.

1. Initialization: Assume we have some initial stable controller K_0, so that λ_max(D_0) < 1, where D_0 = A + B K_0. Also assume that σ_max(D_0^N) ≤ 1 − ϵ.⁴ Finally, for all values of τ < 0, define K_τ = K_0 and D_τ = D_0.

2. For t = 1, 2, . . .
(a) Run the online learning algorithm f to compute the next set of proposed gains K̂_t = f(x_0, u_0, . . . , x_{t−1}, u_{t−1}).

(b) Let D̂_t = A + B K̂_t, and check whether

σ_max(D̂_t D_{t−1} D_{t−2} D_{t−3} . . . D_{t−N+1}) ≤ 1 − ϵ   (12)
σ_max(D̂_t² D_{t−1} D_{t−2} . . . D_{t−N+2}) ≤ 1 − ϵ   (13)
σ_max(D̂_t³ D_{t−1} . . . D_{t−N+3}) ≤ 1 − ϵ   (14)
. . .
σ_max(D̂_t^N) ≤ 1 − ϵ   (15)

(c) If all of the σ_max's above are at most 1 − ϵ, we ACCEPT K̂_t, and set K_t = K̂_t. Otherwise, we REJECT K̂_t, and set K_t = K_{t−1}.

(d) Let D_t = A + B K_t, and pick our action at time t to be u_t = K_t x_t.

We begin by showing that, if we use this algorithm to "filter" the gains output by the online learning algorithm, Equation (10) holds.

Lemma 3.3: Let f and w_0, w_1, . . . be arbitrary, and let K_0, K_1, K_2, . . . be the sequence of gains selected using the algorithm above. Let D_t = A + B K_t be the corresponding dynamics matrices. Then for every −∞ < t < ∞, we have⁵

σ_max(D_t · D_{t−1} · · · · · D_{t−N+1}) ≤ 1 − ϵ.   (16)

⁴ From Proposition 3.2, it must be possible to choose N satisfying this.
⁵ As in the algorithm description, D_t = D_0 for t < 0.

Proof. Let any t be fixed, and let τ = max({0} ∪ {t′ : 1 ≤ t′ ≤ t, K̂_{t′} was accepted}). Thus, τ is the index of the time step at which we most recently accepted a set of gains from f (or 0 if no such gains exist). So, K_τ = K_{τ+1} = . . . = K_t, since the gains stay the same on every time step on which we do not accept a new one. This also implies

D_τ = D_{τ+1} = . . . = D_t.   (17)

We will treat the cases (i) τ = 0, (ii) 1 ≤ τ ≤ t − N + 1 and (iii) τ > t − N + 1, τ ≥ 1 separately. In case (i), τ = 0, and we did not accept any gains after time 0. Thus K_t = · · · = K_{t−N+1} = K_0, which implies D_t = · · · = D_{t−N+1} = D_0. But from Step 1 of the algorithm, we had chosen N sufficiently large that σ_max(D_0^N) ≤ 1 − ϵ. This shows (16). In case (ii), τ ≤ t − N + 1 (and τ > 0). Together with (17), this implies

D_t · D_{t−1} · · · · · D_{t−N+1} = D_τ^N.   (18)

But σ_max(D_τ^N) ≤ 1 − ϵ, because at time τ, when we accepted K_τ, we would have checked that Equation (15) holds.
In case (iii), τ > t −N + 1 (and τ > 0). From (17) we have Dt · Dt−1 · · · · · Dt−N+1 = Dt−τ+1 τ · Dτ−1 · Dτ−2 · · · · · Dt−N+1. (19) But when we accepted Kτ, we would have checked that (12-15) hold, and the (t −τ + 1)-st equation in (12-15) is exactly that the largest singular value of (19) is at most 1 −ϵ. □ Theorem 3.4: Let an arbitrary learning algorithm f be given, and suppose we use f to control a system, but using our algorithm to accept/reject gains selected by f. Then, the resulting system is BIBO stable. Proof. Suppose ||wt||2 ≤c1 for all t. For convenience also define w−1 = w−2 = · · · = 0, and let ψ′ = ||A||F + ψ||B||F . From (5), ||xt||2 = || P∞ k=0 Dt−1Dt−2 · · · Dt−kwt−k−1||2 ≤ c1 P∞ k=0 ||Dt−1Dt−2 · · · Dt−k||2 = c1 P∞ j=0 PN−1 k=0 σmax(Dt−1Dt−2 · · · Dt−jN−k) ≤ c1 P∞ j=0 PN−1 k=0 σmax((Qj−1 l=0 Dt−lN−1Dt−lN−2 · · · Dt−lN−N) · Dt−jN−1 · · · Dt−jN−k) ≤ c1 P∞ j=0 PN−1 k=0 (1 −ϵ)j · σmax(Dt−jN−1 · · · Dt−jN−k) ≤ c1 P∞ j=0 PN−1 k=0 (1 −ϵ)j · (ψ′)k ≤ c1 1 ϵ N(1 + ψ′)N The third inequality follows from Lemma 3.3, and the fourth inequality follows from our assumption that ||Kt||F ≤ψ, so that σmax(Dt) ≤||Dt||F ≤||A||F + ||B||F ||Kt||F ≤ ||A||F + ψ||B||F = ψ′. Hence, ||xt||2 remains uniformly bounded for all t. □ Theorem 3.4 guarantees that, using our algorithm, we can safely apply any adaptive control algorithm f to our system. As discussed previously, it is difficult to exactly characterize the class of BIBO-stable controllers, and thus the set of controllers that we can safely accept. However, it is possible to show a partial converse to Theorem 3.4 that certain large, “reasonable” classes of adaptive control methods will always have their proposed controllers accepted by our method. For example, it is a folk theorem in control that if we use only stable sets of gains (K : λmax(A + BK) < 1), and if we switch “sufficiently slowly” between them, then the system will be stable. 
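The accept/reject procedure of Section 3 can be sketched concretely. The class and helper names below are ours, not the paper's: a minimal, numpy-free illustration of conditions (12)-(15) for 2x2 closed-loop matrices Dt = A + BKt, not the authors' implementation.

```python
import math
from collections import deque

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sigma_max(D):
    """Largest singular value of a 2x2 matrix: sqrt of the top eigenvalue of D^T D."""
    M = matmul([[D[0][0], D[1][0]], [D[0][1], D[1][1]]], D)  # D^T D (symmetric, PSD)
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    lam_max = (tr + math.sqrt(max(tr * tr - 4.0 * det, 0.0))) / 2.0
    return math.sqrt(max(lam_max, 0.0))

class GainFilter:
    def __init__(self, D0, N, eps):
        # Assumes sigma_max(D0^N) <= 1 - eps, as in step 1 of the algorithm.
        self.N, self.eps = N, eps
        self.D = D0
        self.history = deque([D0] * (N - 1), maxlen=N - 1)  # D_{t-N+1}, ..., D_{t-1}

    def propose(self, D_hat):
        """Check conditions (12)-(15): for m = 1..N, the product
        D_hat^m * D_{t-1} * ... * D_{t-N+m} must have sigma_max <= 1 - eps."""
        past = list(self.history)               # oldest first
        power = [[1.0, 0.0], [0.0, 1.0]]
        accept = True
        for m in range(1, self.N + 1):
            power = matmul(power, D_hat)        # D_hat^m
            prod = power
            tail = past[-(self.N - m):] if m < self.N else []
            for Dp in reversed(tail):           # most recent past matrix first
                prod = matmul(prod, Dp)
            if sigma_max(prod) > 1.0 - self.eps:
                accept = False                  # REJECT: keep K_{t-1}
                break
        if accept:
            self.D = D_hat                      # ACCEPT: use the proposed gains
        self.history.append(self.D)             # record the dynamics actually used
        return accept
```

For instance, starting from a contraction D0 = diag(0.5, 0.5) with N = 2 and ϵ = 0.1, a proposed matrix with λmax = 0.9 but σmax ≈ 10.1 (e.g. [[0.9, 10], [0, 0.9]]) is rejected, while a strong contraction like diag(0.3, 0.3) is accepted.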
For our specific algorithm, we can show the following: Theorem 3.5: Let any 0 < ϵ < 1 be fixed, and let K ⊆Rnu×nx be a finite set of controller gains, so that for all K ∈K, we have λmax(A + BK) < 1. Then there exist constants N0 and k so that for all N ≥N0, if (i) Our algorithm is run with parameters N, ϵ, and (ii) The adaptive control algorithm f picks only gains in K, and moreover switches gains no more than once every k steps (i.e., ˆKt ̸= ˆKt+1 ⇒ˆKt+1 = ˆKt+2 = · · · = ˆKt+k), then all controllers proposed by f will be accepted. Figure 1: (a) Typical state sequence (first component xt,1 of state vector) using switching controllers from Equation (6). (Note log-scale on vertical axis.) (b) Typical state sequence using our algorithm and the same controller f. (N = 150, ϵ = 0.1) (c) Index of the controller used over time, when using our algorithm. The proof is omitted due to space constraints. A similar result also holds if K is infinite (but ∃c > 0, ∀K ∈K, λmax(A + BK) ≤1 −c), and if the proposed gains change on every step but the differences || ˆKt −ˆKt+1||F between successive values are small. 4 Experiments We now present experimental results illustrating the behavior of our algorithm. In the first experiment, we apply the switching controller given in (6). Figure 1a shows a typical state sequence resulting from using this controller without using our algorithm to monitor it (and wt’s from an IID standard Normal distribution). Even though λmax(Dt) < 1 for all t, the controlled system is unstable, and the state rapidly diverges. In contrast, Figure 1b shows the result of rerunning the same experiment, but using our algorithm to accept or reject controllers. 
The resulting system is stable, and the states remain small. Figure 1c also shows which of the two controllers in (6) is being used at each time, when our algorithm is used. (If we do not use our algorithm, so that the controller switches on every time step, this figure would switch between 0 and 1 on every time step.) We see that our algorithm is rejecting most of the proposed switches to the controller; specifically, it is permitting f to switch between the two controllers only every 140 steps or so. Slowing down the rate at which we switch controllers causes the system to become stable (compare Theorem 3.5). In our second example, we will consider a significantly more complex setting representative of a real-world application. We consider controlling a Boeing 747 aircraft in a setting where the states are only partially observable. We have a four-dimensional state vector xt consisting of the sideslip angle β, bank angle φ, yaw rate, and roll rate of the aircraft in cruise flight. The two-dimensional controls ut are the rudder and aileron deflections. The state transition dynamics are given as in Equation (1)6 with IID Gaussian disturbance terms wt. But instead of observing the states directly, on each time step t we observe only yt = Cxt + vt, (20) where yt ∈Rny, and the disturbances vt ∈Rny are distributed Normal(⃗0, Σv). If the system is stationary (i.e., if A, B, C, Σv, Σw were fixed), then this is a standard LQG problem, and optimal estimates ˆxt of the hidden states xt are obtained using a Kalman filter: ˆxt+1 = Lt(yt+1 −C(Aˆxt + But)) + Aˆxt + But, (21) where Lt ∈Rnx×ny is the Kalman filter gain matrix. Further, it is known that, in LQG, the optimal steady state controller is obtained by picking actions according to ut = Ktˆxt, where Kt are appropriate control gains. Standard algorithms exist for solving for the optimal steady-state gain matrices L and K. 
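The filter recursion in Equation (21) can be written as a short function: predict the next state from the current estimate, then correct by the gain-weighted innovation. The helper names and the toy 2-state example below are ours, purely for illustration (with a fixed steady-state gain L).

```python
def mat_vec(M, v):
    """Matrix-vector product for nested-list matrices."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def kalman_step(x_hat, u, y_next, A, B, C, L):
    """One step of the recursion: x_hat' = L (y' - C (A x_hat + B u)) + A x_hat + B u."""
    x_pred = [a + b for a, b in zip(mat_vec(A, x_hat), mat_vec(B, u))]  # A x_hat + B u
    innovation = [yi - pi for yi, pi in zip(y_next, mat_vec(C, x_pred))]
    return [p + c for p, c in zip(x_pred, mat_vec(L, innovation))]

# Example: a 2-state system in which only the first state component is observed.
A = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]
C = [[1.0, 0.0]]
L = [[0.5], [0.5]]
x_hat = kalman_step([0.0, 0.0], [1.0], [2.0], A, B, C, L)  # -> [1.5, 0.5]
```

Here the prediction is [1, 0], the innovation is 2 − 1 = 1, and the gain spreads the correction across both state estimates.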
[1] In our aircraft control problem, C = [ 0 1 0 0 ; 0 0 0 1 ], so that only two of the four state variables are observed directly. Further, the noise in the observations varies over time. Specifically, sometimes the variance of the first observation is Σv,11 = Var(vt,1) = 2 while the variance of the second observation is Σv,22 = Var(vt,2) = 0.5; and sometimes the values of the variances are reversed Σv,11 = 0.5, Σv,22 = 2. (Σv ∈R2×2 is diagonal in all cases.) This models a setting in which, at various times, either of the two sensors may be the more reliable/accurate one. Since the reliability of the sensors changes over time, one might want to apply an online learning algorithm (such as online stochastic gradient ascent) to dynamically estimate the values of Σv,11 and Σv,22. Figure 2 shows a typical evolution of Σv,11 over time, and the result of using a stochastic gradient ascent learning algorithm to estimate Σv,11. Empirically, a stochastic gradient algorithm seems to do fairly well at tracking the true Σv,11. Thus, one simple adaptive control scheme would be to take the current estimate of Σv at each time step t, feed this estimate (and A, B, C, Σw) to a standard LQG solver to obtain the optimal steady-state Kalman filter and control gains, and use the values obtained as our proposed gains Lt and Kt for time t. This gives a simple method for adapting our controller and Kalman filter parameters to the varying noise parameters. 6The parameters A ∈R4×4 and B ∈R4×2 are obtained from a standard 747 (“yaw damper”) model, which may be found in, e.g., the Matlab control toolbox, and various texts such as [6]. Figure 2: (a) Typical evolution of true Σv,11 over time (straight lines) and online approximation to it. (b) Same as (a), but showing an example in which the learned variance estimate became negative. 
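The paper does not spell out the exact online update used to track Σv,11. The sketch below is a generic stochastic-approximation estimator (step toward the squared observation), chosen by us only to illustrate the failure mode discussed next: with too large a step size the variance estimate can overshoot below zero, and naive clipping is not a complete fix.

```python
def variance_step(sigma2, v, lr):
    """Move a variance estimate toward v^2, whose expectation is Var(v) for
    zero-mean v. This is an assumed, illustrative update, not the paper's."""
    return sigma2 + lr * (v * v - sigma2)

s = variance_step(1.0, 0.0, 0.5)     # reasonable step size: s = 0.5, still valid
bad = variance_step(1.0, 0.0, 1.5)   # oversized step overshoots zero: bad = -0.5
clipped = max(bad, 1e-6)             # naive clip keeps it positive, but the text
                                     # later notes clipping alone still breaks
                                     # the LQG solver (it expects nonsingular Σv)
```

With lr below 1 the update is a convex combination of the old estimate and v², so it can never leave the nonnegative half-line; the bug arises only when the effective step exceeds that range.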
The adaptive control algorithm that we have described is sufficiently complex that it is extremely difficult to prove that it gives a stable controller. Thus, to guarantee BIBO stability of the system, one might choose to run it with our algorithm. To do so, note that the “state” of the controlled system at each time step is fully characterized by the true world state xt and the internal state estimate of the Kalman filter ˆxt. So, we can define an augmented state vector ˜xt = [xt; ˆxt] ∈R8. Because xt+1 is linear in ut (which is in turn linear in ˆxt) and similarly ˆxt+1 is linear in xt and ut (substitute (20) into (21)), for a fixed set of gains Kt and Lt, we can express ˜xt+1 as a linear function of ˜xt plus a disturbance: ˜xt+1 = ˜Dt˜xt + ˜wt. (22) Here, ˜Dt depends implicitly on A, B, C, Lt and Kt. (The details are not complex, but are omitted due to space). Thus, if a learning algorithm is proposing new ˆKt and ˆLt matrices on each time step, we can ensure that the resulting system is BIBO stable by computing the corresponding ˜Dt as a function of ˆKt and ˆLt, and running our algorithm (with ˜Dt’s replacing the Dt’s) to decide if the proposed gains should be accepted. In the event that they are rejected, we set Kt = Kt−1, Lt = Lt−1. It turns out that there is a very subtle bug in the online learning algorithm. Specifically, we were using standard stochastic gradient ascent to estimate Σv,11 (and Σv,22), and on every step there is a small chance that the gradient update overshoots zero, causing Σv,11 to become negative. While the probability of this occurring on any particular time step is small, a Boeing 747 flown for sufficiently many hours using this algorithm will eventually encounter this bug and obtain an invalid, negative, variance estimate. 
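The text notes that ˜Dt for the augmented state [xt; ˆxt] can be written down but omits the details. Under Equations (20)-(21) with fixed gains K and L (and ut = Kˆxt), one consistent noise-free choice, derived by us by substituting u = K x_hat and y' = C x' into the filter update, is the block matrix [ A, BK ; LCA, A − LCA + BK ]. The code below builds it and self-checks it against a direct one-step simulation on a made-up 2-state example.

```python
def mm(X, Y):
    """Rectangular matrix product for nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def augmented_dynamics(A, B, C, K, L):
    """D_tilde mapping [x; x_hat] -> [x'; x_hat'] with noise terms dropped:
       x'     = A x + B K x_hat
       x_hat' = L C A x + (A - L C A + B K) x_hat."""
    n = len(A)
    BK = mm(B, K)
    LCA = mm(mm(L, C), A)
    top = [A[i] + BK[i] for i in range(n)]                        # [A | BK]
    bottom = [LCA[i] + [A[i][j] - LCA[i][j] + BK[i][j] for j in range(n)]
              for i in range(n)]                                   # [LCA | A-LCA+BK]
    return top + bottom

# Self-check on an illustrative 2-state, 1-input, 1-observation system.
A = [[1.0, 0.2], [0.0, 0.9]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
K = [[0.5, -0.3]]
L = [[0.4], [0.1]]
Dt = augmented_dynamics(A, B, C, K, L)
z = [[1.0], [2.0], [0.5], [-1.0]]       # [x; x_hat]
z_next = mm(Dt, z)                       # approx [[1.4], [2.35], [0.74], [-0.24]]
```

The same ˜Dt (as a function of proposed ˆKt and ˆLt) is what would be handed to the accept/reject filter in place of Dt.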
When this occurs, the Matlab LQG solver for the steady-state gains outputs L = 0 on this and all successive time steps.7 If this were implemented on a real 747, this would cause it to ignore all observations (Equation 21), enter divergent oscillations (see Figure 3a), and crash. However, using our algorithm, the behavior of the system is shown in Figure 3b. When the learning algorithm encounters the bug, our algorithm successfully rejects the changes to the gains that lead to instability, thereby keeping the system stable. 7Even if we had anticipated this specific bug and clipped Σv,11 to be non-negative, the LQG solver (from the Matlab controls toolbox) still outputs invalid gains, since it expects nonsingular Σv. Figure 3: (a) Typical plot of state (xt,1) using the (buggy) online learning algorithm in a sequence in which L was set to zero part-way through the sequence. (Note scale on vertical axis; this plot is typical of a linear system entering divergent/unstable oscillations.) (b) Results on same sequence of disturbances as in (a), but using our algorithm. 5 Discussion Space constraints preclude a full discussion, but these ideas can also be applied to verifying the stability of certain nonlinear dynamical systems. For example, if the A (and/or B) matrix depends on the current state but is always expressible as a convex combination of some fixed A1, . . . , Ak, then we can guarantee BIBO stability by ensuring that (10) holds for all combinations of Dt = Ai + BKt defined using any Ai (i = 1, . . . k).8 The same idea also applies to settings where A may be changing (perhaps adversarially) within some bounded set, or if the dynamics are unknown so that we need to verify stability with respect to a set of possible dynamics. 
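The convex-combination extension (and footnote 8's cost remark) can be sketched by brute force: check condition (10) on every length-N product of the vertex closed-loop matrices Di = Ai + BK. All names and the 2x2 toy matrices below are ours, for illustration only.

```python
import itertools
import math

def mm(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sigma_max(D):
    """Largest singular value of a 2x2 matrix via eigenvalues of D^T D."""
    M = mm([[D[0][0], D[1][0]], [D[0][1], D[1][1]]], D)
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return math.sqrt(max((tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2, 0.0))

def robust_condition_10(A_list, B, K, N, eps):
    """True iff every product of N vertex matrices D_i = A_i + B K has
    sigma_max <= 1 - eps. Cost grows as k**N, hence the remark about
    keeping N small."""
    Ds = [[[Ai[i][j] + sum(B[i][k] * K[k][j] for k in range(len(K)))
            for j in range(2)] for i in range(2)] for Ai in A_list]
    for combo in itertools.product(Ds, repeat=N):
        P = combo[0]
        for D in combo[1:]:
            P = mm(P, D)
        if sigma_max(P) > 1 - eps:
            return False
    return True

A1 = [[0.3, 0.0], [0.0, 0.3]]
A2 = [[0.2, 0.1], [0.0, 0.2]]
B = [[0.0], [0.0]]            # zero input matrix, so D_i = A_i (toy choice)
K = [[0.0, 0.0]]
ok = robust_condition_10([A1, A2], B, K, N=2, eps=0.1)   # contractions: passes
bad = robust_condition_10([A1, [[0.9, 10.0], [0.0, 0.9]]], B, K, N=2, eps=0.1)
```

The second call fails because one vertex has a large σmax even though its spectral radius is below 1, exactly the distinction the paper builds on.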
In simulation experiments with the Stanford autonomous helicopter, using a linearization of the non-linear dynamics, our algorithm was also empirically successful at stabilizing an adaptive control algorithm that normally drives the helicopter into unstable oscillations. References [1] B. Anderson and J. Moore. Optimal Control: Linear Quadratic Methods. Prentice-Hall, 1989. [2] Karl Astrom and Bjorn Wittenmark. Adaptive Control (2nd Edition). Addison-Wesley, 1994. [3] V. D. Blondel and J. N. Tsitsiklis. The boundedness of all products of a pair of matrices is undecidable. Systems and Control Letters, 41(2):135–140, 2000. [4] Michael S. Branicky. Analyzing continuous switching systems: Theory and examples. In Proc. American Control Conference, 1994. [5] Michael S. Branicky. Stability of switched and hybrid systems. In Proc. 33rd IEEE Conf. Decision Control, 1994. [6] G. Franklin, J. Powell, and A. Emami-Naeini. Feedback Control of Dynamic Systems. Addison-Wesley, 1995. [7] M. Johansson and A. Rantzer. On the computation of piecewise quadratic Lyapunov functions. In Proceedings of the 36th IEEE Conference on Decision and Control, 1997. [8] H. Khalil. Nonlinear Systems (3rd ed). Prentice Hall, 2001. [9] Daniel Liberzon, João Hespanha, and A. S. Morse. Stability of switched linear systems: A Lie-algebraic condition. Syst. & Contr. Lett., 37(3):117–122, 1999. [10] J. Nakanishi, J. A. Farrell, and S. Schaal. A locally weighted learning composite adaptive controller with structure adaptation. In International Conference on Intelligent Robots, 2002. [11] T. J. Perkins and A. G. Barto. Lyapunov design for safe reinforcement learning control. In Safe Learning Agents: Papers from the 2002 AAAI Symposium, pages 23–30, 2002. [12] Jean-Jacques Slotine and Weiping Li. Applied Nonlinear Control. Prentice Hall, 1990. 
8Checking all kN such combinations takes time exponential in N, but it is often possible to use very small values of N, sometimes including N = 1, if the states xt are linearly reparameterized (x′ t = Mxt) to minimize σmax(D0).
|
2004
|
114
|
2,524
|
The Rescorla-Wagner algorithm and Maximum Likelihood estimation of causal parameters. Alan Yuille Department of Statistics University of California at Los Angeles Los Angeles, CA 90095 yuille@stat.ucla.edu Abstract This paper analyzes generalizations of the classic Rescorla-Wagner (RW) learning algorithm and studies their relationship to Maximum Likelihood estimation of causal parameters. We prove that the parameters of two popular causal models, ∆P and PC, can be learnt by the same generalized linear Rescorla-Wagner (GLRW) algorithm provided genericity conditions apply. We characterize the fixed points of these GLRW algorithms and calculate the fluctuations about them, assuming that the input is a set of i.i.d. samples from a fixed (unknown) distribution. We describe how to determine convergence conditions and calculate convergence rates for the GLRW algorithms under these conditions. 1 Introduction There has recently been growing interest in models of causal learning formulated as probabilistic inference [1,2,3,4,5]. There has also been considerable interest in relating this work to the Rescorla-Wagner learning model [3,5,6] (also known as the delta rule). In addition, there are studies of the equilibria of the Rescorla-Wagner model [6]. This paper proves mathematical results about these related topics. In Section (2), we describe two influential models, ∆P and PC, for causal inference and how their parameters can be learnt by maximum likelihood estimation from training data. Section (3) introduces the generalized linear Rescorla-Wagner (GLRW) algorithm, characterizes its fixed points and quantifies its fluctuations. We demonstrate that a simple GLRW can estimate the ML parameters for both the ∆P and PC models provided certain genericity conditions are satisfied. But the experimental conditions studied by Cheng [2] require a non-linear generalization of Rescorla-Wagner (Yuille, in preparation). 
Section (4) gives a way to determine convergence conditions and calculate the convergence rates of GLRW algorithms. Finally Section (5) sketches how the results in this paper can be extended to allow for an arbitrary number of causes. 2 Causal Learning and Probabilistic Inference The task is to estimate the causal effect of variables. There is an observed event E and two causes C1, C2. Observers are asked to determine the causal power of the two causes. The variables are binary-valued. E = 1 means the event occurs, E = 0 means it does not. Similarly for causes C1 and C2. Much of the work in this section can be generalized to cases where there are an arbitrary number of causes C1, C2, ..., CN, see section (5). The training data {(Eµ, Cµ 1 , Cµ 2 )} is assumed to be samples from an unknown distribution Pemp(E, C1, C2). Two simple models, ∆P [1] and PC [2,3], have been proposed to account for how people estimate causal power. There is also a more recent theory based on model selection [4]. The ∆P and PC theories are equivalent to assuming probability distributions for how the training data is generated. Then the power of the causes is given by the maximum likelihood estimation of the distribution parameters ω1, ω2. The two theories correspond to probability distributions P∆P (E|C1, C2, ω1, ω2) and PP C(E|C1, C2, ω1, ω2) given by: P∆P (E = 1|C1, C2, ω1, ω2) = ω1C1 + ω2C2. ∆P model. (1) PP C(E = 1|C1, C2, ω1, ω2) = ω1C1 + ω2C2 −ω1ω2C1C2. PC model. (2) The latter is a noisy-or model. The event E = 1 can be caused by C1 = 1 with probability ω1, by C2 = 1 with probability ω2, or caused by both. The model can be derived by setting PP C(E = 0|C1, C2, ω1, ω2) = (1 −ω1C1)(1 −ω2C2). We assume that there is also a distribution on the causes P(C1, C2|⃗γ) which the observers also learn from the training data. This is equivalent to maximizing (with respect to ω1, ω2,⃗γ): P({(Eµ, ⃗Cµ)} : ⃗ω,⃗γ) = Y µ P(Eµ, ⃗Cµ : ⃗ω,⃗γ) = Y µ P(Eµ|⃗Cµ : ⃗ω)P( ⃗Cµ : ⃗γ). 
(3) By taking logarithms, we see that estimating ω1, ω2 and ⃗γ are independent. So we will concentrate on estimating the ω1, ω2. If the training data {Eµ, ⃗Cµ} is consistent with the model – i.e. there exist parameters ω1, ω2 such that Pemp(E|C1, C2) = P(E|C1, C2, ω1, ω2) – then we can calculate the solution directly. For the ∆P model, we have: ω1 = Pemp(E = 1|C1 = 1, C2 = 0) = Pemp(E = 1|C1 = 1), ω2 = Pemp(E = 1|C1 = 0, C2 = 1) = Pemp(E = 1|C2 = 1). (4) For the PP C model, we obtain Cheng’s measures of causality [2,3]: ω1 = [Pemp(E = 1|C1 = 1, C2) −Pemp(E = 1|C1 = 0, C2)] / [1 −Pemp(E = 1|C1 = 0, C2)], ω2 = [Pemp(E = 1|C1, C2 = 1) −Pemp(E = 1|C1, C2 = 0)] / [1 −Pemp(E = 1|C1, C2 = 0)]. (5) 3 Generalized Linear Rescorla-Wagner The Rescorla-Wagner model [7] is an alternative way to account for human learning. This iterative algorithm specifies an update rule for weights. These weights could measure the strength of a cause, such as the parameters of the Maximum Likelihood estimation. Following recent work [3,6], we seek to find relationships between generalized linear Rescorla-Wagner (GLRW) and ML estimation. 3.1 GLRW and two special cases The Rescorla-Wagner algorithm updates weights {⃗V } using training data {Eµ, ⃗Cµ}. It is of the form: ⃗V t+1 = ⃗V t + ∆⃗V t. (6) In this paper, we are particularly concerned with two special cases for the choice of the update ∆V . ∆V1 = α1C1(E −C1V1 −C2V2), ∆V2 = α2C2(E −C1V1 −C2V2), basic (7) ∆V1 = α1C1(1 −C2)(E −V1), ∆V2 = α2C2(1 −C1)(E −V2), variant. (8) The first (7) is the basic RW algorithm. The second (8) is a variant of RW with a natural interpretation – a weight V1 is updated only if one cause is present, C1 = 1, and the other cause is absent, C2 = 0. The most general GLRW is of the form: ∆V t i = N X j=1 V t j fij(Et, ⃗Ct) + gi(Et, ⃗Ct), ∀i, (9) where {fij(., .) : i, j = 1, ..., N} and {gi(.) : i = 1, ..., N} are functions of the data samples Eµ, ⃗Cµ. 
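The two special cases (7) and (8) can be written as short update functions. Variable names follow the text (weights V1, V2, learning rates α1, α2, binary E, C1, C2); the concrete numbers in the examples are ours.

```python
def basic_rw(V1, V2, E, C1, C2, a1, a2):
    """Equation (7): both weights move on the shared prediction error."""
    err = E - C1 * V1 - C2 * V2
    return V1 + a1 * C1 * err, V2 + a2 * C2 * err

def variant_rw(V1, V2, E, C1, C2, a1, a2):
    """Equation (8): V1 changes only on trials where C1 = 1 and C2 = 0
    (its cause acts alone), and symmetrically for V2."""
    return (V1 + a1 * C1 * (1 - C2) * (E - V1),
            V2 + a2 * C2 * (1 - C1) * (E - V2))

# On a C1-alone trial with outcome E = 1, both rules move V1 toward 1:
print(basic_rw(0.0, 0.0, 1, 1, 0, 0.5, 0.5))    # (0.5, 0.0)
print(variant_rw(0.0, 0.0, 1, 1, 0, 0.5, 0.5))  # (0.5, 0.0)
# On a compound trial (C1 = C2 = 1) the variant rule updates neither weight:
print(variant_rw(0.2, 0.7, 1, 1, 1, 0.5, 0.5))  # (0.2, 0.7)
```

The compound-trial behaviour is what makes the variant rule converge to the noisy-or parameters: each weight only ever sees trials on which its cause acts alone.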
3.2 GLRW and Stochastic Samples Our analysis assumes that the data samples {(Eµ, ⃗Cµ)} are independent, identically distributed (i.i.d.) samples from an unknown distribution Pemp(E|⃗C)P( ⃗C). In this case, the GLRW becomes stochastic. It defines a distribution on weights which is updated as follows: P(⃗V t+1|⃗V t) = Z dEt d⃗Ct N Y i=1 δ(V t+1 i −V t i −∆V t i )P(Et, ⃗Ct). (10) This defines a Markov Chain. If certain conditions are satisfied (see section (4)), the chain will converge to a fixed distribution P ∗(V ). This distribution can be characterized by its expected mean < V >∗= P V V P ∗(V ) and its expected covariance Σ∗= P V (V −< V >∗)(V −< V >∗)T P ∗(V ). In other words, even after convergence the weights will fluctuate about the expected mean < V >∗and the magnitude of the fluctuations will be given by the expected covariance. 3.3 What Does GLRW Converge to? We now compute the means and covariance of the fixed point distribution P ∗(⃗V ). We first do this for the GLRW, equation (9), and then we restrict ourselves to the two special cases, equations (7,8). Theorem 1. The means ⃗V ∗and the covariance Σ∗of the fixed point distribution P ∗(⃗V ), using the GLRW equation (9) and any empirical distribution Pemp(E, ⃗C) are given by the solutions to the linear equations, N X j=1 V ∗ j X E, ⃗C fij(E, ⃗C)Pemp(E, ⃗C) + X E, ⃗C gi(E, ⃗C)Pemp(E, ⃗C) = 0, ∀i, (11) and ∀i, k: Σ∗ ik = X jl Σ∗ jl X E, ⃗C Aij(E, ⃗C)Akl(E, ⃗C)Pemp(E, ⃗C) + X E, ⃗C Bi(E, ⃗C)Bk(E, ⃗C)Pemp(E, ⃗C), (12) where Aij(E, ⃗C) = δij + fij(E, ⃗C) and Bi(E, ⃗C) = P j V ∗ j fij(E, ⃗C) + gi(E, ⃗C) (here δij is the Kronecker delta function defined by δij = 1, if i = j and = 0 otherwise). The means have a unique solution provided P E, ⃗C Pemp(E, ⃗C)fij(E, ⃗C) is an invertible matrix. Proof. We derive the formula for the means ⃗V ∗by taking the expectation of the update rule, see equations (9), with respect to P ∗(⃗V ) and Pemp(E, ⃗C). 
To calculate the covariances, we express the update rule as: V t+1 i −V ∗ i = X j (V t j −V ∗ j )Aij(E, ⃗C) + Bi(E, ⃗C), ∀i (13) with Aij(E, ⃗C) and Bi(E, ⃗C) defined as above. Then we multiply both sides of equation (13) by their transpose (e.g. the left hand side by (V t+1 k −V ∗ k )) and take the expectation with respect to P ∗(⃗V ) and Pemp(E, ⃗C) (making use of the result that the expected value of V t j −V ∗ j is zero as t →∞). We can apply these results to study the behaviour of the two special cases, equations (7,8), when the data is generated by either the ∆P or PC model. First consider the basic RW algorithm (7) when the data is generated by the P∆P model. We can use Theorem 1 to rederive the result that < ⃗V >∗= ⃗ω [3,6], and so basic RW performs ML estimation for the P∆P model. It also follows directly that if the data is generated by the PP C model, then < ⃗V >∗̸= ⃗ω (although they are related by a nonlinear equation). Now consider the variant RW, equation (8). Theorem 2. The expected means of the fixed points of the variant RW equation (8) when the data is generated by probability model PP C(E|⃗C, ⃗ω) or P∆P (E|⃗C; ⃗ω) are given by: V ∗ 1 = ω1, V ∗ 2 = ω2, (14) provided Pemp(⃗C) satisfies genericity conditions so that < C1(1 −C2) >< C2(1 −C1) > ̸= 0. The expected covariances are given by: Σ11 = ω1(1 −ω1) α1/(2 −α1), Σ22 = ω2(1 −ω2) α2/(2 −α2), Σ12 = Σ21 = 0. (15) Proof. This is a direct calculation of quantities specified in Theorem 1. For example, we calculate the expected value of ∆V1 and ∆V2 first with respect to P(E|⃗C) and then with respect to P ∗(V ). This gives: < ∆V1 >P (E| ⃗C)P ∗(V )= α1C1(1 −C2)(ω1 −V ∗ 1 ), < ∆V2 >P (E| ⃗C)P ∗(V )= α2C2(1 −C1)(ω2 −V ∗ 2 ), (16) where we have used P V P ∗(V )V = V ∗, P E EPP C(E|⃗C) = ω1C1+ω2C2−ω1ω2C1C2, and logical relations to simplify the terms (e.g. C2 1 = C1, C1(1 −C1) = 0). 
Taking the expectation of < ∆V1 >P (E| ⃗C)P ∗(V ) with respect to P(C) gives: α1ω1 < C1(1 −C2) >P (C) −α1V ∗ 1 < C1(1 −C2) >= 0, α2ω2 < C2(1 −C1) >P (C) −α2V ∗ 2 < C2(1 −C1) >= 0, (17) and the result follows directly, except for non-generic cases where < C1(1 −C2) >= 0 or < C2(1 −C1) >= 0. These degenerate cases are analyzed separately. It is perhaps surprising that the same GLRW algorithm can perform ML estimation when the data is generated by either model P∆P or PP C (and this can be generalized, see section (5)). Moreover, the expected covariance is the same for both models. Observe that the covariance decreases if we make the update coefficients α1, α2 of the algorithm small. The convergence rates are given in the next section. The non-generic cases include the situation studied in [2] where C1 is a background cause that is assumed to be always present, so < C1 >= 1. In this case V ∗ 1 = ω1, but V ∗ 2 is unspecified. It can be shown (Yuille, in preparation) that a nonlinear generalization of RW can perform ML on this problem (but it is easy to check that no GLRW can). But an even more ambiguous case occurs when ω1 = 1 (i.e. cause C1 always causes event E); then there is no way to estimate ω2 and Cheng’s measure of causality, equation (5), becomes undefined. 4 Convergence of Rescorla-Wagner We now analyze the convergence of the GLRW algorithm. We obtain conditions for the algorithm to converge and give the convergence rates. For simplicity, the results will be illustrated only on the simple models. Our results are based on the following theorem for the convergence of the state vector of a stochastic iterative equation. The theorem gives necessary and sufficient conditions for convergence, shows what the expected state vector converges to, and gives the rate of convergence. Theorem 3. Let ⃗zt+1 = At⃗zt be an iterative update equation, where ⃗z is a state vector and the update matrices At are i.i.d. samples from P(A). 
The convergence properties as t →∞depend on < A >= P A AP(A). If < A > has a unit eigenvalue with eigenvector ⃗z∗ and the next largest eigenvalue has modulus λ < 1, then limt→∞< ⃗zt >∝⃗z∗and the rate of convergence is et log λ. If the moduli of the eigenvalues of < A > are all less than 1, then limt→∞< ⃗zt >= 0. If < A > has an eigenvalue with modulus greater than 1, then < ⃗zt > diverges as t →∞. Proof. This is a standard result. To obtain it, write ⃗zt+1 = AtAt−1....A1⃗z1, where ⃗z1 is the initial condition. Now take the expectation of ⃗zt+1 with respect to the samples {At}. By the i.i.d. assumption, this gives < ⃗zt+1 >=< A >t ⃗z1. The result follows by linear algebra. Let the eigenvectors and eigenvalues of < A > be {(λi,⃗ei)}. Express the initial conditions as ⃗z1 = P γi⃗ei where the {γi} are coefficients. Then < ⃗zt >= P i γiλt i⃗ei, and the result follows. We use Theorem 3 to obtain convergence results for the GLRW algorithm. To ensure convergence, we need both the expected covariance and the expected means to converge. Then Markov’s lemma can be used to bound the fluctuations. (If we just require the expected means to converge, then the fluctuations of the weights may be infinitely large). This can be done by a suitable choice of the state vector ⃗z. For simplicity of algebra, we demonstrate this for a GLRW algorithm with a single weight. The update rule is Vt+1 = atVt + bt where at, bt are random samples. We define the state vector to be ⃗z = (V 2 t , Vt, 1). Theorem 4. Consider the stochastic update rule Vt+1 = atVt + bt where at and bt are samples from distributions Pa(a) and Pb(b). Define α1 = P a a2P(a), α2 = P a aP(a), β1 = P b b2P(b), β2 = P b bP(b), and γ = 2 P a,b abP(a, b). The algorithm converges if, and only if, α1 < 1, α2 < 1. If so, then limt→∞< Vt >=< V >= β2/(1 −α2), limt→∞< (Vt −< V >)2 >= [β1(1 −α2) + γβ2]/[(1 −α1)(1 −α2)] −β2 2 /(1 −α2)2. The convergence rate is (max{α1, |α2|})t. Proof. 
Define ⃗zt = (V 2 t , Vt, 1) and express the update rule in matrix form: (V 2 t+1, Vt+1, 1)T = [ a2 t 2atbt b2 t ; 0 at bt ; 0 0 1 ] (V 2 t , Vt, 1)T . This is of the form analyzed in Theorem 3 provided we set: A = [ a2 t 2atbt b2 t ; 0 at bt ; 0 0 1 ] and < A >= [ α1 γ β1 ; 0 α2 β2 ; 0 0 1 ], where α1 = P a a2P(a), α2 = P a aP(a), β1 = P b b2P(b), β2 = P b bP(b), and γ = 2 P a,b abP(a, b). The eigenvalues {λ} and eigenvectors {⃗e} of < A > are: λ1 = 1, ⃗e1 ∝([β1(1 −α2) + γβ2]/[(1 −α1)(1 −α2)], β2/(1 −α2), 1), λ2 = α1, ⃗e2 = (1, 0, 0), λ3 = α2, ⃗e3 ∝(γ/(α2 −α1), 1, 0). (18) The result follows from Theorem 3. Observe that if |α2| < 1 but α1 > 1, then < Vt > will converge but the expected variance does not. The fluctuations in the GLRW algorithm will be infinitely large. We can extend Theorem 4 to the variant of RW equation (8). Let P = Pemp, then β12 = X E, ⃗C P(E|⃗C)P( ⃗C)C1(1 −C2), β21 = X E, ⃗C P(E|⃗C)P(⃗C)C2(1 −C1), γ12 = X E, ⃗C P(E|⃗C)P( ⃗C)EC1(1 −C2), γ21 = X E, ⃗C P(E|⃗C)P(⃗C)EC2(1 −C1). (19) If the data is generated by P∆P or PP C, then β12, β21, γ12, γ21 take the same values: β12 =< C1(1 −C2) >, β21 =< (1 −C1)C2 >, γ12 = ω1 < C1(1 −C2) >, γ21 = ω2 < (1 −C1)C2 > . (20) Theorem 5. The algorithm specified by equation (8) converges provided λ∗ = max{|λ2|, |λ3|, |λ4|, |λ5|} < 1, where λ2 = 1 −(2α1 −α2 1)β12, λ3 = 1 −(2α2 −α2 2)β21, λ4 = 1 −α1β12, λ5 = 1 −α2β21. The convergence rate is et log λ∗. The expected means and covariances can be calculated from the first eigenvector. Proof. We define the state vector ⃗z = (V 2 1 , V 2 2 , V1, V2, 1) and derive the update matrix A from equation (8). The eigenvectors and eigenvalues can be calculated (calculations omitted due to space constraints). The eigenvalues are 1, λ2, λ3, λ4, λ5. The convergence conditions and rates follow from Theorem 3. 
The expected means and covariances can be calculated from the first eigenvector, which is: ⃗e1 = ( 2(α1 −α2 1)γ2 12/[(2α1 −α2 1)β2 12] + α2 1γ12/[(2α1 −α2 1)β12], 2(α2 −α2 2)γ2 21/[(2α2 −α2 2)β2 21] + α2 2γ21/[(2α2 −α2 2)β21], γ12/β12, γ21/β21, 1 ), (21) and they agree with the calculations given in Theorem 2. 5 Generalization The results of the previous sections can be generalized to cases where there are more than two causes. For example, we can use the generalization of the PC model to include multiple generative causes ⃗C and preventative causes ⃗L, [5] extending [2]. The probability distribution for this generalized PC model is: PP C(E = 1|⃗C, ⃗L; ⃗ω, ⃗Ω) = {1 − n Y i=0 (1 −ωiCi)} m Y j=1 (1 −ΩjLj), (22) where there are n + 1 generative causes {Ci} and m preventative causes {Lj} specified in terms of parameters {ωi} and {Ωj} (constrained to lie between 0 and 1). We assume that there is a single background cause C0 which is always on (i.e. C0 = 1) and whose strength ω0 is known (for relaxing this constraint, see Yuille in preparation). Then it can be shown that the following GLRW algorithm will converge to the ML estimates of the remaining parameters {ωi : i = 1, ..., n} and {Ωj : j = 1, ..., m} of the generalized PC model: ∆V t k = Ck{ m Y i=1 (1 −Li) n Y j=1:j̸=k (1 −Cj)}(E −ω0 −(1 −ω0)V t k), ∆U t l = Ll{ m Y k=1:k̸=l (1 −Lk) n Y j=1 (1 −Cj)}(E −ω0 −ω0U t l ), (23) where {Vk : k = 1, ..., n} and {Ul : l = 1, ..., m} are weights. The proof is straightforward algebra and is based on the following identity for binary variables: Q j(1 −ΩjLj) Q j(1 −Lj) = Q j(1 −Lj). The GLRW algorithm (23) will also perform ML estimation for data generated by other probability distributions which share the same linear terms as the generalized PC model (i.e. the terms linear in the {ωi} and {Ωj}.) The convergence conditions and the convergence rates can be calculated using the techniques in section (4). 
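The fixed-point algebra of section 4 can be checked numerically. The sketch below verifies Theorem 4's closed forms for the mean and variance on an artificial choice of P(a) and P(b) (a and b independent and two-valued here; the pairing is ours, purely to exercise the algebra) by iterating the expected recursion for the components of ⃗z = (V 2, V, 1).

```python
# Artificial joint distribution: four equally likely (a, b) pairs.
pairs = [(a, b, 0.25) for a in (0.9, 0.5) for b in (0.2, 0.0)]
alpha1 = sum(p * a * a for a, b, p in pairs)     # E[a^2] = 0.53 < 1
alpha2 = sum(p * a for a, b, p in pairs)         # E[a]   = 0.7  < 1
beta1 = sum(p * b * b for a, b, p in pairs)      # E[b^2] = 0.02
beta2 = sum(p * b for a, b, p in pairs)          # E[b]   = 0.1
gamma = 2 * sum(p * a * b for a, b, p in pairs)  # 2 E[ab] = 0.14

# Theorem 4's closed forms:
mean = beta2 / (1 - alpha2)
var = (beta1 * (1 - alpha2) + gamma * beta2) / ((1 - alpha1) * (1 - alpha2)) \
      - mean ** 2

# Iterate the expected recursion <z_{t+1}> = <A><z_t> component-wise:
EV2, EV = 0.0, 0.0
for _ in range(2000):
    EV2, EV = alpha1 * EV2 + gamma * EV + beta1, alpha2 * EV + beta2
assert abs(EV - mean) < 1e-9                 # expected weight matches beta2/(1-alpha2)
assert abs((EV2 - EV * EV) - var) < 1e-9     # second moment matches the variance formula
```

Since α1 = 0.53 and α2 = 0.7 here, Theorem 4's convergence condition holds and both moments settle geometrically fast, in agreement with the stated rate.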
These results all assume genericity conditions so that none of the generative or preventative causes is either always on or always off (i.e. ruling out cases like [2]). 6 Conclusion This paper introduced and studied generalized linear Rescorla-Wagner (GLRW) algorithms. We showed that two influential theories, ∆P and PC, for estimating causal effects can be implemented by the same GLRW, see (8). We obtained convergence results for GLRW including classifying the fixed points, calculating the asymptotic fluctuations, and the convergence rates. Our results assume that the inputs to the GLRW are i.i.d. samples from an unknown empirical distribution Pemp(E, ⃗C). Observe that the fluctuations of GLRW can be removed by introducing damping coefficients which decrease over time. Stochastic approximation theory [8] can then be used to give conditions for convergence. More recent work (Yuille in preparation) clarifies the class of maximum likelihood inference problems that can be “solved” by GLRW and by non-linear GLRW. In particular, we show that a non-linear RW can perform ML estimation for the non-generic case studied by Cheng. We also investigate similarities to Kalman filter models [9]. Acknowledgements I thank Patricia Cheng, Peter Dayan and Yingnian Wu for helpful discussions. Anonymous referees gave useful feedback that has motivated a follow-up paper. This work was partially supported by an NSF SLC catalyst grant “Perceptual Learning and Brain Plasticity” NSF SBE-0350356. References [1]. B. A. Spellman. “Conditioning Causality”. In D.R. Shanks, K.J. Holyoak, and D.L. Medin, (eds). Causal Learning: The Psychology of Learning and Motivation, Vol. 34. San Diego, California. Academic Press. pp 167-206. 1996. [2]. P. Cheng. “From Covariance to Causation: A Causal Power Theory”. Psychological Review, 104, pp 367-405. 1997. [3]. M. Buehner and P. Cheng. “Causal Induction: The power PC theory versus the Rescorla-Wagner theory”. 
In Proceedings of the 19th Annual Conference of the Cognitive Science Society. 1997. [4]. J.B. Tenenbaum and T.L. Griffiths. “Structure Learning in Human Causal Induction”. Advances in Neural Information Processing Systems 12. MIT Press. 2001. [5]. D. Danks, T.L. Griffiths, J.B. Tenenbaum. “Dynamical Causal Learning”. Advances in Neural Information Processing Systems 14. 2003. [6]. D. Danks. “Equilibria of the Rescorla-Wagner Model”. Journal of Mathematical Psychology. Vol. 47, pp 109-121. 2003. [7]. R.A. Rescorla and A.R. Wagner. “A Theory of Pavlovian Conditioning: Variations in the Effectiveness of Reinforcement and Nonreinforcement”. In A.H. Black and W.F. Prokasy, eds. Classical Conditioning II: Current Research and Theory. New York. Appleton-Century-Crofts, pp 64-99. 1972. [8]. H.J. Kushner and D.S. Clark. Stochastic Approximation for Constrained and Unconstrained Systems. New York. Springer-Verlag. 1978. [9]. P. Dayan and S. Kakade. “Explaining away in weight space”. In Advances in Neural Information Processing Systems 13. 2001.
|
2004
|
115
|
2,525
|
Contextual models for object detection using boosted random fields Antonio Torralba MIT, CSAIL Cambridge, MA 02139 torralba@mit.edu Kevin P. Murphy UBC, CS Vancouver, BC V6T 1Z4 murphyk@cs.ubc.edu William T. Freeman MIT, CSAIL Cambridge, MA 02139 billf@mit.edu Abstract We seek to both detect and segment objects in images. To exploit both local image data as well as contextual information, we introduce Boosted Random Fields (BRFs), which uses Boosting to learn the graph structure and local evidence of a conditional random field (CRF). The graph structure is learned by assembling graph fragments in an additive model. The connections between individual pixels are not very informative, but by using dense graphs, we can pool information from large regions of the image; dense models also support efficient inference. We show how contextual information from other objects can improve detection performance, both in terms of accuracy and speed, by using a computational cascade. We apply our system to detect stuff and things in office and street scenes. 1 Introduction Our long-term goal is to build a vision system that can examine an image and describe what objects are in it, and where. In many images, such as Fig. 5(a), objects of interest, such as the keyboard or mouse, are so small that they are impossible to detect just by using local features. Seeing a blob next to a keyboard, humans can infer it is likely to be a mouse; we want to give a computer the same abilities. There are several pieces of related work. Murphy et al [9] used global scene context to help object recognition, but did not model relationships between objects. Fink and Perona [4] exploited local dependencies in a boosting framework, but did not allow for multiple rounds of communication between correlated objects. 
He et al. [6] do not model connections between objects directly, but rather they induce such correlations indirectly, via a bank of hidden variables, using a “restricted Boltzmann machine” architecture. In this paper, we exploit contextual correlations between the object classes by introducing Boosted Random Fields (BRFs). Boosted random fields build on both boosting [5, 10] and conditional random fields (CRFs) [8, 7, 6]. Boosting is a simple way of sequentially constructing “strong” classifiers from “weak” components, and has been used for single-class object detection with great success [12]. Dietterich et al. [3] combine boosting and 1D CRFs, but they only consider the problem of learning the local evidence potentials; we consider the much harder problem of learning the structure of a 2D CRF. Standard applications of MRFs/CRFs to images [7] assume a 4-nearest-neighbor grid structure. While successful in low-level vision, this structure fails to capture important long-distance dependencies between whole regions and across classes. We propose a method for learning densely connected random fields with long-range connections. The topology of these connections is chosen by a weak learner which has access to a library of graph fragments, derived from patches of labeled training images, which reflect typical spatial arrangements of objects (similar to the segmentation fragments in [2]). At each round of the learning algorithm, we add more connections from other locations in the image and from other classes (detectors). The connections are assumed to be spatially invariant, which means this update can be performed using convolution followed by a sigmoid nonlinearity. The resulting architecture is similar to a convolutional neural network, although we used a stagewise training procedure, which is much faster than backpropagation. 
In addition to recognizing things, such as cars and people, we are also interested in recognizing spatially extended “stuff” [1], such as roads and buildings. The traditional sliding window approach to object detection does not work well for detecting “stuff”. Instead, we combine object detection and image segmentation (cf. [2]) by labeling every pixel in the image. We do not rely on a bottom-up image segmentation algorithm, which can be fragile without top-down guidance. 2 Learning potentials and graph structure A conditional random field (CRF) is a distribution of the form

$P(S|x) = \frac{1}{Z} \prod_i \phi_i(S_i) \prod_{j \in N_i} \psi_{i,j}(S_i, S_j)$

where x is the input (e.g., image), $N_i$ are the neighbors of node i, and $S_i$ are labels. We have assumed pairwise potentials for notational simplicity. Our goal is to learn the local evidence potentials, $\phi_i$, the compatibility potentials $\psi$, and the set of neighbors $N_i$. We propose the following simple approximation: use belief propagation (BP) to estimate the marginals, $P(S_i|x)$, and then use boosting to maximize the likelihood of each node’s training data with respect to $\phi_i$ and $\psi$. In more detail, the algorithm is as follows. At iteration t, the goal is to minimize the negative log-likelihood of the training data. As in [11], we consider the per-label loss (i.e., we use marginal probabilities), as opposed to requiring that the joint labeling be correct (as in Viterbi decoding). Hence the cost function to be minimized is

$J^t = \sum_i J_i^t = -\sum_m \sum_i \log b^t_{i,m}(S_{i,m}) = -\sum_m \sum_i \log\!\left[ b^t_{i,m}(+1)^{S^*_{i,m}}\, b^t_{i,m}(-1)^{1-S^*_{i,m}} \right] \quad (1)$

where $S_{i,m} \in \{-1,+1\}$ is the true label for pixel i in training case m, $S^*_{i,m} = (S_{i,m}+1)/2 \in \{0,1\}$ is just a relabeling, and $b^t_{i,m} = [P(S_i=-1|x_m,t),\, P(S_i=+1|x_m,t)]$ is the belief state at node i given input image $x_m$ after t iterations of the algorithm. 
The belief at node i is given by the following (dropping the dependence on case m):

$b^t_i(\pm 1) \propto \phi^t_i(\pm 1)\, M^t_i(\pm 1)$

where $M^t_i$ is the product of all the messages coming into i from its neighbors at time t, and where the message that k sends to i is given by

$M^{t+1}_i(\pm 1) = \prod_{k \in N_i} \mu^{t+1}_{k \to i}(\pm 1), \qquad \mu^{t+1}_{k \to i}(\pm 1) = \sum_{s_k \in \{-1,+1\}} \psi_{k,i}(s_k, \pm 1)\, \frac{b^t_k(s_k)}{\mu^t_{i \to k}(s_k)} \quad (2)$

where $\psi_{k,i}$ is the compatibility between nodes k and i. If we assume that the local potentials have the form $\phi^t_i(s_i) = [e^{F^t_i/2};\, e^{-F^t_i/2}]$, where $F^t_i$ is some function of the input data, then

$b^t_i(+1) = \sigma(F^t_i + G^t_i), \qquad G^t_i = \log M^t_i(+1) - \log M^t_i(-1) \quad (3)$

where $\sigma(u) = 1/(1+e^{-u})$ is the sigmoid function. Hence each term in Eq. 1 simplifies to a cost function similar to that used in boosting:

$J^t_i = \sum_m \log\left(1 + e^{-S_{i,m}(F^t_{i,m} + G^t_{i,m})}\right). \quad (4)$

Figure 1: BRF training algorithm.
1. Input: a set of labeled pairs $\{x_{i,m}; S_{i,m}\}$, bound T. Output: local evidence functions $f^t_i(x)$ and message update functions $g^t_i(b_{N_i})$.
2. Initialize: $b^{t=0}_{i,m} = 0$; $F^{t=0}_{i,m} = 0$; $G^{t=0}_{i,m} = 0$.
3. For t = 1..T:
 (a) Fit the local potential $f_i(x_{i,m})$ by weighted LS to $Y^t_{i,m} = S_{i,m}(1 + e^{-S_{i,m}(F^t_i + G^t_{i,m})})$.
 (b) Fit the compatibilities $g^t_i(b^{t-1}_{N_i,m})$ to $Y^t_{i,m}$ by weighted LS.
 (c) Compute the local potential $F^t_{i,m} = F^{t-1}_{i,m} + f^t_i(x_{i,m})$.
 (d) Compute the compatibilities $G^t_{i,m} = \sum_{n=1}^{t} g^n_i(b^{t-1}_{N_i,m})$.
 (e) Update the beliefs $b^t_{i,m} = \sigma(F^t_{i,m} + G^t_{i,m})$.
 (f) Update the weights $w^{t+1}_{i,m} = b^t_{i,m}(-1)\, b^t_{i,m}(+1)$.

We assume that the graph is very densely connected, so that the information that one single node sends to another is so small that we can make the approximation $\mu^{t+1}_{k\to i}(+1)/\mu^{t+1}_{k\to i}(-1) \simeq 1$. (This is a reasonable approximation in the case of images, where each node represents a single pixel; only when the influence of many pixels is taken into account will the messages become informative.) 
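Step 3(a) of Fig. 1, fitting a regression stump by weighted least squares to the working response Y with weights w = b(−1) b(+1), can be sketched as follows. The function and variable names are our own, and a real implementation would also search over a whole dictionary of basis functions rather than only the thresholds of one feature:

```python
import numpy as np

def logitboost_stump(x, S, F, G):
    """Fit one regression-stump weak learner f(x) = a*h(x) + b, with
    h(x) = [x > theta], by the logitBoost-style weighted LS step.

    x: 1-D feature per sample; S in {-1,+1}: true labels;
    F, G: current local-evidence and message scores per sample.
    Returns the (a, b, theta) with lowest weighted squared error."""
    margin = S * (F + G)
    Y = S * (1.0 + np.exp(-margin))          # working response
    b_neg = 1.0 / (1.0 + np.exp(F + G))      # belief b(-1)
    w = b_neg * (1.0 - b_neg)                # weights w = b(-1) b(+1)
    best = None
    for theta in np.unique(x):
        h = (x > theta).astype(float)
        # weighted least squares for f = a*h + b: 2x2 normal equations
        A = np.array([[np.sum(w * h * h), np.sum(w * h)],
                      [np.sum(w * h),     np.sum(w)]])
        rhs = np.array([np.sum(w * h * Y), np.sum(w * Y)])
        try:
            a, b0 = np.linalg.solve(A, rhs)
        except np.linalg.LinAlgError:        # degenerate split (h constant)
            continue
        err = np.sum(w * (Y - (a * h + b0)) ** 2)
        if best is None or err < best[0]:
            best = (err, a, b0, theta)
    return best[1], best[2], best[3]
```

With F = G = 0 the weights are uniform and the fitted stump simply separates the two label groups along x.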
Hence

$G^{t+1}_i = \log \frac{M^{t+1}_i(+1)}{M^{t+1}_i(-1)} = \sum_k \log \frac{\sum_{s_k \in \{-1,+1\}} \psi_{k,i}(s_k, +1)\, b^t_{k,m}(s_k)/\mu^t_{i\to k}(s_k)}{\sum_{s_k \in \{-1,+1\}} \psi_{k,i}(s_k, -1)\, b^t_{k,m}(s_k)/\mu^t_{i\to k}(s_k)} \quad (5)$

$\simeq \sum_k \log \frac{\sum_{s_k \in \{-1,+1\}} \psi_{k,i}(s_k, +1)\, b^t_{k,m}(s_k)}{\sum_{s_k \in \{-1,+1\}} \psi_{k,i}(s_k, -1)\, b^t_{k,m}(s_k)} \quad (6)$

With this simplification, $G^{t+1}_i$ is now a non-linear function $G^{t+1}_i(\vec b^t_m)$ of the beliefs at iteration t. Therefore, we can write the beliefs at iteration t as a function of the local evidences and the beliefs at time t−1: $b^t_i(+1) = \sigma(F^t_i(x_{i,m}) + G^t_i(\vec b^{t-1}_m))$. The key idea behind BRFs is to use boosting to learn the G functions, which approximately implement message passing in densely connected graphs. We explain this in more detail below. 2.1 Learning local evidence potentials Defining $F^t_i(x_{i,m}) = F^{t-1}_i(x_{i,m}) + f^t_i(x_{i,m})$ as an additive model, where $x_{i,m}$ are the features of training sample m at node i, we can learn this function in a stagewise fashion by optimizing the second-order Taylor expansion of Eq. 4 with respect to $f^t_i$, as in logitBoost [5]:

$\arg\min_{f^t_i} J^t_i \simeq \arg\min_{f^t_i} \sum_m w^t_{i,m}\, (Y^t_{i,m} - f^t_i(x_{i,m}))^2 \quad (7)$

where $Y^t_{i,m} = S_{i,m}(1 + e^{-S_{i,m}(F^t_i + G^t_{i,m})})$. In the case that the weak learner is a “regression stump”, $f_i(x) = a h(x) + b$, we can find the optimal a, b by solving a weighted least squares problem, with weights $w^t_{i,m} = b^t_i(-1)\, b^t_i(+1)$; we can find the best basis function h(x) by searching over all elements of a dictionary. 2.2 Learning compatibility potentials and graph structure In this section, we discuss how to learn the compatibility functions $\psi_{ij}$, and hence the structure of the graph.

Figure 2: BRF run-time inference algorithm.
1. Input: a set of inputs $\{x_{i,m}\}$ and functions $f^t_i$, $g^t_i$. Output: set of beliefs $b_{i,m}$ and MAP estimates $S_{i,m}$.
2. Initialize: $b^{t=0}_{i,m} = 0$; $F^{t=0}_{i,m} = 0$; $G^{t=0}_{i,m} = 0$.
3. From t = 1 to T, repeat:
 (a) Update the local evidences $F^t_{i,m} = F^{t-1}_{i,m} + f^t_i(x_{i,m})$.
 (b) Update the compatibilities $G^t_{i,m} = \sum_{n=1}^{t} g^n_i(b^{t-1}_{N_i,m})$.
 (c) Compute the current beliefs $b^t_{i,m} = \sigma(F^t_{i,m} + G^t_{i,m})$.
4. Output classification is $S_{i,m} = \delta(b^t_{i,m} > 0.5)$.

Instead of learning the compatibility functions $\psi_{ij}$, we propose to learn directly the function $G^{t+1}_i$. We propose to use an additive model for $G^{t+1}_i$ as we did for learning F: $G^{t+1}_{i,m} = \sum_{n=1}^{t} g^n_i(\vec b^t_m)$, where $\vec b^t_m$ is a vector with the beliefs of all nodes in the graph at iteration t for the training sample m. The weak learners $g^n_i(\vec b^t_m)$ can be regression stumps of the form $g^n_i(\vec b^t_m) = a\,\delta(\vec w \cdot \vec b^t_m > \theta) + b$, where a, b, θ are the parameters of the regression stump, and $\vec w_i$ is a set of weights selected from a dictionary. In the case of a graph with weak and almost symmetrical connections (which holds if $\psi(s_1,s_2) \approx 1$ for all $(s_1,s_2)$, which implies the messages are not very informative), we can further simplify the function $G^{t+1}_i$ by approximating it as a linear function of the beliefs:

$G^{t+1}_{i,m} = \sum_{k \in N_i} \alpha_{k,i}\, b^t_{k,m}(+1) + \beta_{k,i} \quad (8)$

This step reduces the computational cost. The weak learners $g^n_i(\vec b^t_m)$ will also be linear functions. Hence the belief update simplifies to $b^{t+1}_{i,m}(+1) = \sigma(\vec\alpha_i \cdot \vec b^t_m + \beta_i + F^t_{i,m})$, which is similar to the mean-field update equations. The neighborhood $N_i$ over which we sum incoming messages is determined by the graph structure, which is encoded in the non-zero values of $\alpha_i$. Each weak learner $g^n_i$ will compute a weighted combination of the beliefs of some subset of the nodes; this subset may change from iteration to iteration, and can be quite large. At iteration t, we choose the weak learner $g^t_i$ so as to minimize

$J^t_i(b^{t-1}) = \sum_m \log\left(1 + e^{-S_{i,m}\left(F^t_{i,m} + g^t_i(b^{t-1}_m) + \sum_{n=1}^{t-1} g^n_i(b^{t-1}_m)\right)}\right)$

which reduces to a weighted least squares problem similar to Eq. 7. See Fig. 1 for the pseudo-code for the complete learning algorithm, and Fig. 2 for the pseudo-code for run-time inference. 
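Under the linear-message approximation of Eq. 8, the run-time inference of Fig. 2 reduces to a few lines. This sketch uses assumed names (`brf_inference`, dense `alpha`/`beta` arrays) and makes one simplification of step 3(b): it recomputes G from the current beliefs at every round instead of accumulating the per-round weak learners; initial beliefs are taken as 0.5 = σ(0):

```python
import numpy as np

def brf_inference(f_scores, alpha, beta, T=50):
    """Simplified BRF run-time inference (Fig. 2) with linear messages:
    G_i = sum_k alpha[k, i] * b_k(+1) + beta[i], as in Eq. (8).

    f_scores: (rounds, N) per-round local-evidence increments f^t_i;
    alpha: (N, N) connection weights; beta: (N,) offsets.
    Returns the final beliefs b_i(+1) and hard labels S_i."""
    N = alpha.shape[0]
    F = np.zeros(N)
    b = np.full(N, 0.5)                      # uninformative initial beliefs
    for t in range(min(T, f_scores.shape[0])):
        F += f_scores[t]                     # (a) update local evidence
        G = b @ alpha + beta                 # (b) contextual messages
        b = 1.0 / (1.0 + np.exp(-(F + G)))   # (c) current beliefs
    S = (b > 0.5).astype(int)                # 4. output classification
    return b, S
```

A two-node example shows the contextual effect: a node with no local evidence is pulled above threshold by a confidently detected neighbor, and stays below threshold when the connection is removed.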
3 BRFs for multiclass object detection and segmentation With the BRF training algorithm in hand, we describe our approach for multiclass object detection and region-labeling using densely connected BRFs. 3.1 Weak learners for detecting stuff and things The square sliding-window approach does not provide a natural way of working with irregular objects. Using region labeling as an image representation allows dealing with irregular and extended objects (buildings, bookshelf, road, ...). Extended stuff [1] may be a very important source of contextual information for other objects.

[Figure 3: (a) Examples from the dictionary of about 2000 patches and masks, $U_{x,y}$, $V_{x,y}$. (b) Examples from the dictionary of 30 graphs, $W_{x,y,c}$. (c) Example feedforward segmentation for screens, obtained using boosting trained with patches from (a).]

The weak learners we use for the local evidence potentials are based on the segmentation fragments proposed in [2]. Specifically, we create a dictionary of about 2000 image patches U, chosen at random (but overlapping each object), plus a corresponding set of binary (in-class/out-of-class) image masks V: see Fig. 3(a). At each round t, for each class c, and for each dictionary entry, we construct the following weak learner, whose output is a binary matrix of the same size as the image I:

$v(I) = \big( ((I \otimes U) > \theta) * V \big) > 0 \quad (9)$

where ⊗ represents normalized cross-correlation and ∗ represents convolution. The intuition behind this is that $I \otimes U$ will produce peaks at image locations that contain this patch/template, and then convolving with V will superimpose the segmentation mask on top of the peaks. As a function of the threshold θ, the feature will behave more as a template detector (θ ≃ 1) or as a texture descriptor (θ ≪ 1). 
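Eq. 9 can be prototyped with plain NumPy. In the sketch below, ordinary cross-correlation stands in for the paper's normalized cross-correlation, and all function names are illustrative:

```python
import numpy as np

def xcorr2_same(I, K):
    """2-D cross-correlation with zero padding; output has the size of I."""
    kh, kw = K.shape
    ph, pw = kh // 2, kw // 2
    P = np.pad(I, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)))
    out = np.zeros(I.shape)
    for r in range(I.shape[0]):
        for c in range(I.shape[1]):
            out[r, c] = np.sum(P[r:r + kh, c:c + kw] * K)
    return out

def patch_weak_learner(I, U, V, theta):
    """Weak learner of Eq. (9): v(I) = (((I (x) U) > theta) * V) > 0.
    Plain correlation approximates the normalized cross-correlation;
    convolution with the mask V = correlation with the flipped kernel."""
    peaks = (xcorr2_same(I, U) > theta).astype(float)  # template hits
    return xcorr2_same(peaks, V[::-1, ::-1]) > 0       # stamp mask on peaks
```

Embedding a 3×3 template into a blank image and running the weak learner with V = all-ones produces a 3×3 segmentation mask centered on the embedded location.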
To be able to detect objects at multiple scales, we first downsample the image to scale σ, compute $v(I\downarrow\sigma)$, and then upsample the result. The final weak learner does this for multiple scales, ORs all the results together, and then takes a linear transformation:

$f(I) = \alpha \left( \vee_\sigma [v(I\downarrow\sigma) \uparrow\sigma] \right) + \beta \quad (10)$

Fig. 3(c) shows an example of a segmentation obtained by using boosting without context. The weak learners we use for the compatibility functions have a similar form:

$g_c(b) = \alpha \left( \sum_{c'=1}^{C} b_{c'} * W_{c'} \right) + \beta \quad (11)$

where $b_{c'}$ is the image formed by the beliefs at all pixels for class c′. This convolution corresponds to Eq. 8, in which the node i is one pixel (x, y) of class c. The binary kernels (graph fragments) W define, for each node (x, y) of object class c, all the nodes from which it will receive messages. These kernels are chosen by sampling patches of various sizes from the labeling of images from the training set. This allows generating complicated patterns of connectivity that reflect the statistics of object co-occurrences in the training set. The overall incoming message is given by adding the kernels obtained at each boosting round. (This is the key difference from mutual boosting [4], where the incoming message is just the output of a single weak learner; thus, in mutual boosting, previously learned inter-class connections are only used once.) Although it would seem to take O(t) time to compute $G^t$, we can precompute a single equivalent kernel W′, so at runtime the overall complexity is still linear in the number of boosting rounds, O(T):

$G^t_{x,y,c} = \sum_{c'=1}^{C} b_{c'} * \left( \sum_{n=1}^{t} \alpha_n W^n_{c'} \right) + \sum_n \beta_n \;\stackrel{\mathrm{def}}{=}\; \sum_{c'=1}^{C} b_{c'} * W'_{c'} + \beta'$

[Figure 4: Street scene. The BRF is trained to detect cars, buildings and the road. (a) Incoming messages to a car node. (b) Compatibilities (W′) between the car, building and road classes. (c) A car out of context (outside 3rd-floor windows) is less of a car. (d) Evolution of the beliefs for the car nodes (b) and labeling (S) for road, building, car.]

In Fig. 4(a-b), we show the structures of the graph and the weights W′ defined by $G^T$ for a BRF trained to detect cars, buildings and roads in street scenes. 3.2 Learning and inference For training we used a labeled dataset of office and street scenes with about 100 images in each set. During training, in the first 5 rounds we update only the local potentials, to allow local evidence to accrue. After the 5th iteration we also start updating the compatibility functions. At each round, we update only the local potential and compatibility function associated with the single object class that reduces the multiclass cost the most. This allows objects that need many features to have more complicated local potentials. The algorithm learns to first detect easy (and large) objects, since these reduce the error of all classes the fastest. The easy-to-detect objects can then pass information to the harder ones. For instance, in office scenes, the system first detects screens, then keyboards, and finally computer mice. Fig. 5 illustrates this behavior on the test set. A similar behavior is obtained for the car detector (Fig. 4(d)). The detection of building and road provides strong constraints for the locations of the car. 3.3 Cascade of classifiers with BRFs The BRF can be turned into a cascade [12] by thresholding the beliefs. Computations can then be reduced by doing the convolutions (required for computing f and g) only at pixels that are still candidates for the presence of the target. At each round we update a binary rejection mask for each object class, $R^t_{x,y,c}$, by thresholding the beliefs at round t: $R^t_{x,y,c} = R^{t-1}_{x,y,c}\, \delta(b^t_{x,y,c} > \theta^t_c)$. 
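The mask update can be sketched as follows. The function name is our own, and the threshold θ is estimated here as an order statistic of the beliefs at labeled positive pixels, so that at most the allowed fraction of true detections is rejected (the validation-based threshold choice is an assumption on our part):

```python
import numpy as np

def update_rejection_mask(R_prev, beliefs, labels, miss_rate=0.01):
    """One cascade round: R^t = R^{t-1} * [b^t >= theta], with theta chosen
    so that at most `miss_rate` of the labeled positive pixels fall below it.

    R_prev, beliefs, labels: arrays of the same shape (labels: 1 = object)."""
    pos = np.sort(beliefs[labels > 0])
    k = int(miss_rate * pos.size)        # number of positives allowed to miss
    theta = pos[k] if pos.size else 0.0
    return R_prev * (beliefs >= theta)
```

With a 1% allowed miss rate, all confidently detected positives stay active while low-belief background pixels are permanently dropped from further convolutions.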
A pixel in the rejection mask is set to zero when we can decide that the object is not present (when $b^t_{x,y,c}$ is below the threshold $\theta^t_c \simeq 0$), and it is set to 1 when more processing is required. The threshold $\theta^t_c$ is chosen so that the percentage of missed detections is below a predefined level (we use 1%). Similarly, we can define a detection mask that indicates pixels at which we decide the object is present. The mask is then used when computing the features v(I) and messages G, by applying the convolutions only to the pixels not yet classified. We denote these operators as $\otimes_R$ and $*_R$.

[Figure 5: Top: in this desk scene, it is easy to identify objects like the screen, keyboard and mouse, even though the local information is sometimes insufficient; input image, ground truth and output labeling for screen, keyboard and mouse. Middle: the evolution of the beliefs (b, F and G) during detection for a test image, from t = 0 to t = 50. Bottom: the average evolution of the area under the ROC for the three objects on 120 test images, for BRF versus boosting.]

This results in a more efficient classifier with only a slight decrease of performance. In Fig. 6 we compare the reduction of the search space when implementing a cascade using independent boosting (which reduces to Viola and Jones [12]), and when using BRFs. We see that for objects for which context is the main source of information, like the mouse, the reduction in search space is much more dramatic using BRFs than using boosting alone. 4 Conclusion The proposed BRF algorithm combines boosting and CRFs, providing an algorithm that is easy for both training and inference. 
We have demonstrated object detection in cluttered scenes by exploiting contextual relationships between objects. The BRF algorithm is computationally efficient and provides a natural extension of the cascade of classifiers by integrating evidence from other objects in order to quickly reject certain image regions. The BRFs' densely connected graphs, which efficiently collect information over large image regions, provide an alternative framework to nearest-neighbor grids for vision problems. Acknowledgments This work was sponsored in part by the Nippon Telegraph and Telephone Corporation as part of the NTT/MIT Collaboration Agreement, by BAE Systems, and by DARPA contract DABT63-99-1-0012.

[Figure 6: Contextual information reduces the search space in the framework of a cascade and improves performance. The search space is defined as the percentage of pixels that require further processing before a decision can be reached at each round. BRFs provide better performance and require fewer computations. The graphs (search space per round, and ROC curves of detection rate versus false-alarm rate for screen, keyboard and mouse) correspond to the average results on a test set of 120 images.]

References [1] E. H. Adelson. On seeing stuff: the perception of materials by humans and machines. In Proc. SPIE, volume 4299, pages 1–12, 2001. [2] E. Borenstein and S. Ullman. Class-specific, top-down segmentation. In Proc. European Conf. on Computer Vision, 2002. [3] T. Dietterich, A. Ashenfelter, and Y. Bulatov. Training conditional random fields via gradient tree boosting. In Intl. Conf. on Machine Learning, 2004. [4] M. Fink and P. Perona. Mutual boosting for contextual influence. In Advances in Neural Info. Proc. Systems, 2003. [5] J. Friedman, T. Hastie, and R. 
Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of statistics, 28(2):337–374, 2000. [6] Xuming He, Richard Zemel, and Miguel Carreira-Perpinan. Multiscale conditional random fields for image labelling. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2004. [7] S. Kumar and M. Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. In IEEE Conf. on Computer Vision and Pattern Recognition, 2003. [8] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Intl. Conf. on Machine Learning, 2001. [9] K. Murphy, A. Torralba, and W. Freeman. Using the forest to see the trees: a graphical model relating features, objects and scenes. In Advances in Neural Info. Proc. Systems, 2003. [10] R. Schapire. The boosting approach to machine learning: An overview. In MSRI Workshop on Nonlinear Estimation and Classification, 2001. [11] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In Advances in Neural Info. Proc. Systems, 2003. [12] P. Viola and M. Jones. Robust real-time object detection. Intl. J. Computer Vision, 57(2):137– 154, 2004.
|
2004
|
116
|
2,526
|
An Auditory Paradigm for Brain–Computer Interfaces N. Jeremy Hill1, T. Navin Lal1, Karin Bierig1 Niels Birbaumer2 and Bernhard Schölkopf1 1Max Planck Institute for Biological Cybernetics, Spemannstraße 38, 72076 Tübingen, Germany. {jez|navin|bierig|bs}@tuebingen.mpg.de 2Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Gartenstraße 29, 72074 Tübingen, Germany. niels.birbaumer@uni-tuebingen.de Abstract Motivated by the particular problems involved in communicating with “locked-in” paralysed patients, we aim to develop a brain-computer interface that uses auditory stimuli. We describe a paradigm that allows a user to make a binary decision by focusing attention on one of two concurrent auditory stimulus sequences. Using Support Vector Machine classification and Recursive Channel Elimination on the independent components of averaged event-related potentials, we show that an untrained user’s EEG data can be classified with an encouragingly high level of accuracy. This suggests that it is possible for users to modulate EEG signals in a single trial by the conscious direction of attention, well enough to be useful in BCI. 1 Introduction The aim of research into brain-computer interfaces (BCIs) is to allow a person to control a computer using signals from the brain, without the need for any muscular movement—for example, to allow a completely paralysed patient to communicate. Total or near-total paralysis can result in cases of brain-stem stroke, cerebral palsy, and Amyotrophic Lateral Sclerosis (ALS, also known as Lou Gehrig’s disease). It has been shown that some patients in a “locked-in” state, in which most cognitive functions are intact despite complete paralysis, can learn to communicate via an interface that interprets electrical signals from the brain, measured externally by electro-encephalogram (EEG) [1]. 
Successful approaches to such BCIs include using feedback to train the patient to modulate slow cortical potentials (SCPs) to meet a fixed criterion [1], machine classification of signals correlated with imagined muscle movements, recorded from motor and pre-motor cortical areas [2, 3], and detection of an event-related potential (ERP) in response to a visual stimulus event [4]. The experience of clinical groups applying BCI is that different paradigms work to varying degrees with different patients. For some patients, long immobility and the degeneration of the pyramidal cells of the motor cortex may make it difficult to produce imagined-movement signals. Another concern is that in very severe cases, the entire visual modality becomes unreliable: the eyes cannot adjust focus, the fovea cannot be moved to inspect different locations in the visual scene, meaning that most of a given image will stimulate peripheral regions of retina which have low spatial resolution, and since the responses of retinal ganglion cells that form the input to the visual system are temporally band-pass, complete immobility of the eye means that steady visual signals will quickly fade [5]. Thus, there is considerable motivation to add to the palette of available BCI paradigms by exploring EEG signals that occur in response to auditory stimuli—a patient’s sense of hearing is often uncompromised by their condition. Here, we report the results of an experiment on healthy subjects, designed to develop a BCI paradigm in which a user can make a binary choice. We attempt to classify EEG signals that occur in response to two simultaneous auditory stimulus streams. To communicate a binary decision, the subject focuses attention on one of the two streams, left or right. Hillyard et al. [6] and others reported in the 60’s and 70’s that selective attention in a dichotic listening task caused a measurable modulation of EEG signals (see [7, 8] for a review). 
This modulation was significant when signals were averaged over a large number of instances, but our aim is to discover whether single trials are classifiable, using machine-learning algorithms, with a high enough accuracy to be useful in a BCI. 2 Stimuli and methods EEG signals were recorded from 15 healthy untrained subjects (9 female, 6 male) between the ages of 20 and 38, using 39 silver chloride electrodes, referenced to the ears. An additional EOG electrode was positioned lateral to and slightly below the left eye, to record eye movement artefacts—blinks and horizontal and vertical saccades all produced clearly identifiable signals on the EOG channel. The signals were filtered by an analog band-pass filter between 0.1 and 40 Hz, before being sampled at 256 Hz. Subjects sat 1.5m from a computer monitor screen, and performed eight 10-minute blocks each consisting of 50 trials. On each trial, the appearance of a fixation point on screen was followed after 1 sec by an arrow pointing left or right (25 left, 25 right in each block, in random order). The arrow disappeared after 500 msec, after which there was a pause of 500 msec, and then the auditory stimulus was presented, lasting 4 seconds. 500 msec after the end of the auditory stimulus, the fixation point disappeared and there was a pause of between 2 and 4 seconds for the subject to relax. While the fixation point was present, subjects were asked to keep their gaze fixed on it, to blink as little as possible, and not to swallow or make any other movements (we wished to ensure that, as far as possible, our signals were free of artefacts from signals that a paralysed patient would be unable to produce). The auditory stimulus consisted of two periodic sequences of 50-msec-long squarewave beeps, one presented from a speaker to the left of the subject, and the other from a speaker to the right. 
Each sequence contained “target” and “non-target” beeps: the first three in the sequence were always non-targets, after which they could be targets with independent probability 0.3. The right-hand sequence consisted of eight beeps of frequencies 1500 Hz (non-target) and 1650 Hz (target), repeating with a period of 490 msec. The left-hand sequence consisted of seven beeps of frequencies 800 Hz (non-target) and 880 Hz (target), starting 70 msec after the start of the right-hand sequence and repeating with a period of 555 msec. According to the direction of the arrow on each trial, subjects were instructed to count the number of target beeps in either the left or the right sequence. In the pause between trials, they were instructed to report the number of target beeps using a numeric keypad.¹

[Figure 1: Schematic illustration of the acoustic stimuli used in the experiment (left and right sound signals of different periods, with deviant tones of longer duration or absent), and of the averaging process used in preprocessing method A, illustrated by showing what would happen if the sound signals themselves were averaged.]

The sequences differed in location and pitch in order to help the subjects focus their attention on one sequence and ignore the other. 
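A stimulus stream with these parameters is easy to synthesize. The sketch below is our own reconstruction (function names, sampling rate and amplitude are assumptions, and deviant beats are omitted):

```python
import numpy as np

def beep(freq, dur=0.050, fs=8000):
    """A square-wave beep of the given frequency and duration."""
    t = np.arange(int(dur * fs)) / fs
    return np.sign(np.sin(2 * np.pi * freq * t))

def make_sequence(n_beats, period, f_std, f_tgt, fs=8000, p_target=0.3, rng=None):
    """One periodic beep stream: the first three beeps are always
    non-targets; later beeps are targets with independent probability 0.3."""
    rng = rng or np.random.default_rng()
    out = np.zeros(int(n_beats * period * fs))
    for b in range(n_beats):
        f = f_tgt if (b >= 3 and rng.random() < p_target) else f_std
        start = int(b * period * fs)
        tone = beep(f, fs=fs)
        out[start:start + tone.size] = tone
    return out

# right stream: 8 beeps, 490 ms period, 1500/1650 Hz;
# left stream: 7 beeps, 555 ms period, 800/880 Hz, delayed by 70 ms
right = make_sequence(8, 0.490, 1500, 1650)
left = np.concatenate([np.zeros(int(0.070 * 8000)),
                       make_sequence(7, 0.555, 800, 880)])
```

Playing `left` and `right` through separate speakers reproduces the dichotic arrangement; the different periods are what later allow the two streams' ERPs to be separated by epoch averaging.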
The task of reporting the number of target beeps was instituted in order to keep the subjects alert, and to make the task more concrete, because subjects in pilot experiments found that just being asked “listen to the left” or “listen to the right” was too vague a task demand to perform well over the course of 400 repetitions.² The regular repetition of the beeps, at the two different periods, was designed to allow the average ERP to a left-hand beep on a single trial to be examined with minimal contamination by ERPs to right-hand beeps, and vice versa: figure 2 illustrates that, when the periods of one signal are averaged, signals correlated with that sequence add in phase, whereas signals correlated with the other sequence spread out, out of phase. Comparison of the average response to a left beat with the average response to a right beat, on a single trial, should thus emphasize any modulating effect of the direction of attention on the ERP, of the kind described by Hillyard et al. [6].

¹ In order to avoid contamination of the EEG signals with movement artefacts, a few practice trials were performed before the first block, so that subjects learned to wait until the fixation point was out before looking at the keypad or beginning the hand movement toward it.
² Although a paralysed patient would clearly be unable to give responses in this way, it is hoped that this extra motivation would not be necessary.

An additional stimulus feature was designed to investigate whether mismatch negativity (MMN) could form a useful basis for a BCI. Mismatch negativity is a difference between the ERP to standard stimuli and the ERP to deviant stimuli, i.e. rare stimulus events (with probability of occurrence typically around 0.1) which differ in some manner from the more regular standards. MMN is treated in detail by Näätänen [9]. 
It has been associated with the distracting effect of the occurrence of a deviant while processing standards, and while it occurs to stimuli outside as well as inside the focus of attention, there is evidence to suggest that this distraction effect is larger the more similar the (task-irrelevant) deviant stimulus is to the (task-relevant) standards [10]. Thus there is the possibility that a deviant stimulus (say, a longer beep) inserted into the sequence to which the subject is attending (same side, same pitch) might elicit a larger MMN signal than a deviant in the unattended sequence. To explore this, after at least two standard beats of each trial, one of the beats (randomly chosen, with the constraint that the epoch following the deviant on the left should not overlap with the epoch following the deviant on the right) was made to deviate on each trial. (Note the frequencies of occurrence of the deviants were 1/7 and 1/8 rather than the ideal 1/10: the double constraint of having manageably short trials and a reasonable epoch length meant that the number of beeps in the left and right sequences was limited to seven and eight respectively, and clearly to use MMN in BCI, every trial has to have at least one deviant in each sequence.) For 8 subjects, the deviant beat was simply a silent beat—a disruptive pause in the otherwise regular sequence. For the remaining 7 subjects, the deviant beat was a beep lasting 100 msec instead of the usual 50 msec (as in the distraction paradigm of Schröger and Wolff [10], the difference between deviant and standard is on a task-irrelevant dimension—in our case duration, the task being to discriminate pitch). A sixteenth subject, in the long-deviant condition, had to be eliminated because of poor signal quality. 3 Analysis As a first step in analyzing the data, the raw EEG signals were examined by eye for each of the 400 trials of each of the subjects. 
Trials were rejected if they contained obvious large artefact signals caused by blinks or saccades (visible in the EOG and across most of the frontal positions), small periodic eye movements, or other muscle movements (neck and brow, judged from electrode positions O9 and O10, Fp1, Fpz and Fp2). Between 6 and 228 trials had to be rejected out of 400, depending on the subject. One of two alternative preprocessing methods was then used. In order to look for effects of the attention-modulation reported by Hillyard et al., method (A) took the average ERP in response to standard beats (discarding the first beat). In order to look for possible attention-modulation of MMN, method (B) subtracted the average response to standards from the response to the deviant beat. In both methods, the average ERP signal to beats on the left was concatenated with the average ERP signal following beats on the right, as depicted in figure 2 (for illustrative purposes the figure uses the sound signal itself, rather than an ERP). For each trial, either preprocessing method resulted in a signal of 142 (left) + 125 (right) = 267 time samples for each of 40 channels (39 EEG channels plus one EOG), for a total of 10680 input dimensions to the classifier. The classifier used was a linear hard-margin Support Vector Machine (SVM) [11]. To evaluate its performance, the trials from a single subject were split into ten non-overlapping partitions of equal size: each such partition was used in turn as a test set for evaluating the performance of the classifier trained on the other 90% of the trials.
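Preprocessing method (A) amounts to averaging the standard-beat epochs on each side and concatenating the two averages. A minimal sketch follows; the epoch arrays here are random stand-ins, and the counts of 5 left and 6 right standards (the first beat and the deviant discarded from the 7-beep left and 8-beep right trains) are an assumption consistent with the "5 or 6 epochs" mentioned below.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random stand-ins for one trial's standard-beat epochs:
# 5 left epochs of 142 samples, 6 right epochs of 125 samples,
# each over 40 channels (39 EEG + 1 EOG).
left_epochs = rng.standard_normal((5, 142, 40))
right_epochs = rng.standard_normal((6, 125, 40))

# Method (A): average the ERPs to standards on each side, then
# concatenate the left and right averages along the time axis.
avg_left = left_epochs.mean(axis=0)                    # (142, 40)
avg_right = right_epochs.mean(axis=0)                  # (125, 40)
trial_signal = np.concatenate([avg_left, avg_right])   # (267, 40)

x = trial_signal.reshape(-1)   # 267 * 40 = 10680 classifier inputs
```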
Before training, linear Independent Component Analysis (ICA) was carried out on the training set in order to perform blind source separation—this is a common technique in the analysis of EEG data [12, 13], since signals measured through the intervening skull, meninges and cerebro-spinal fluid are of low spatial resolution, and the activity measured from neighbouring EEG electrodes can be assumed to be highly correlated mixtures of the underlying sources. For the purposes of the ICA, the concatenation of all the preprocessed signals from one EEG channel, from all trials in the training partition, was treated as a single mixture signal. A 40-by-40 separating matrix was obtained using the stabilized deflation algorithm from version 2.1 of FastICA [14]. This matrix, computed only from the training set, was then used to separate the signals in both the training set and the test set. Then, the signals were centered and normalized: for each averaged (unmixed) ERP in each of the 40 ICs of each trial, the mean was subtracted, and the signal was divided by its 2-norm. Thus the entry K_{ij} in the kernel matrix of the SVM was proportional to the sum of the coefficients of correlation between corresponding epochs in trials i and j. The SVM was then trained and tested. Single-trial error rate was estimated as the mean proportion of misclassified test trials across the ten folds. For comparison, the classification was also performed on the mixture signals without ICA, and with and without the normalizing step. Results are shown in Table 1. Due to space constraints, standard error values for the estimated error rates are not shown: standard error was typically ±0.025, and maximally ±0.04. It can be seen that the best error rate obtainable with a given subject varies according to the subject, between 3% and 37%, in a way that is not entirely explained by the differences in the numbers of good (artefact-free) trials available. ICA generally improved the results, by anything up to 14%.
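The centering and normalization step is what turns the SVM's linear kernel into a sum of correlation coefficients. A minimal sketch, with random data standing in for the unmixed ERPs (in the paper the 40-by-40 unmixing matrix would come from FastICA fitted on the training partition only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random stand-in for preprocessed trials: 20 trials, 40 independent
# components (after unmixing), 267 time samples each.
X = rng.standard_normal((20, 40, 267))

# Centre each component's signal and divide by its 2-norm, per trial.
X = X - X.mean(axis=2, keepdims=True)
X = X / np.linalg.norm(X, axis=2, keepdims=True)

# Flattening and taking inner products yields a kernel matrix whose
# entry K[i, j] is the sum over components of correlation coefficients
# between corresponding epochs in trials i and j.
F = X.reshape(len(X), -1)
K = F @ F.T
```

Because each of the 40 normalized components has unit norm, every diagonal entry of K equals 40.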
Preprocessing method (B) generally performed poorly (minimum 19% error, and generally over 35%). Any attention-dependent modulation of an MMN signal is apparently too small relative to the noise (signals from method B were generally noisier than those from method A, because the latter, averaged over 5 or 6 epochs within a trial, are subtracted from signals that come from only one epoch per trial in order to produce the method B average). For preprocessing method A, normalization generally produced a small improvement. Thus, promising results can be obtained using the average ERP in response to standard beeps, using ICA followed by normalization (fourth results column): error rates of 5–15% for some subjects are comparable with the performance of, for example, well-trained patients in an SCP paradigm [1], and correspond to information transfer rates of 0.4–0.7 bits per trial (say, 4–7 bits per minute). Note that, despite the fact that this method does not use the ERPs that occur in response to deviant beats, the results for subjects in the silent-deviant condition were generally better than for those in the long-deviant condition. It may be that the more irregular-sounding sequences with silent beats forced the subjects to concentrate harder in order to perform the counting task—alternatively, it may simply be that this group of subjects could concentrate less well (an interpretation which is also suggested by the fact that more trials had to be rejected from their data sets). In order to examine the extent to which the dimensionality of the classification problem could be reduced, recursive feature elimination [15] was performed (limited now to preprocessing method A with ICA and normalization). For each of ten folds, ICA and normalization were performed, then an SVM was trained and tested.
For each independent component j, an elimination criterion value $c_j = \sum_{i \in F_j} w_i^2$ was computed, where w is the hyperplane normal vector of the trained SVM, and F_j is the set of indices to features that are part of component j. The IC with the lowest criterion score c_j was deemed to be the least influential for classification, and the corresponding features F_j were removed. Then the SVM was re-trained and re-tested, and the elimination process iterated until one channel remained. The removal of batches of features in this way is similar to the Recursive Channel Elimination approach to BCI introduced by Lal et al. [3], except that independent components are removed instead of mixtures (a convenient acronym would therefore be RICE, for Recursive Independent Component Elimination).

Table 1: SVM classification error rates: the best rates for each of the preprocessing methods, A and B (see text), are in bold. The symbol ∥·∥ is used to denote normalization during pre-processing as described in the text, and the symbol · is used to denote no normalization.

        deviant      #           Method A                  Method B
subj.  duration    good      no ICA       ICA         no ICA       ICA
        (msec)    trials     ·    ∥·∥    ·    ∥·∥     ·    ∥·∥    ·    ∥·∥
CM         0       326     0.08  0.06  0.06  0.04   0.36  0.35  0.26  0.25
CN         0       250     0.26  0.19  0.28  0.14   0.43  0.44  0.38  0.40
GH         0       198     0.34  0.27  0.35  0.22   0.41  0.41  0.39  0.43
JH         0       348     0.21  0.19  0.14  0.08   0.31  0.42  0.28  0.35
KT         0       380     0.23  0.21  0.15  0.07   0.41  0.36  0.35  0.34
KW         0       394     0.18  0.14  0.06  0.03   0.34  0.39  0.19  0.23
TD         0       371     0.22  0.18  0.15  0.10   0.35  0.39  0.29  0.28
TT         0       367     0.32  0.31  0.33  0.32   0.40  0.42  0.39  0.43
AH        100      353     0.22  0.22  0.17  0.16   0.41  0.41  0.45  0.46
AK        100      172     0.35  0.31  0.34  0.22   0.50  0.46  0.50  0.42
CG        100      271     0.37  0.29  0.31  0.28   0.51  0.47  0.48  0.44
CH        100      375     0.31  0.28  0.26  0.22   0.49  0.46  0.46  0.44
DK        100      241     0.34  0.34  0.35  0.30   0.45  0.44  0.42  0.40
KB        100      363     0.21  0.21  0.15  0.10   0.42  0.47  0.39  0.41
SK        100      239     0.47  0.43  0.40  0.37   0.46  0.49  0.45  0.51
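The elimination loop can be sketched as follows. This is a toy stand-in, not the paper's pipeline: the data are random, with four "components" of ten features each of which only component 0 carries class information, and a ridge-regression hyperplane stands in for the hard-margin SVM; the criterion c_j and the retrain-and-drop loop are as described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 60 trials, 4 "components" x 10 features; only component 0
# is informative (an assumption for illustration).
y = np.repeat([-1.0, 1.0], 30)
X = rng.standard_normal((60, 4, 10))
X[:, 0, :] += y[:, None] * 2.0
groups = [list(range(j * 10, (j + 1) * 10)) for j in range(4)]
feats = X.reshape(60, -1)

def hyperplane(A, y, lam=1.0):
    """Ridge least-squares stand-in for the trained SVM's normal vector w."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

remaining, order = list(range(4)), []
while len(remaining) > 1:
    cols = [i for j in remaining for i in groups[j]]
    w = hyperplane(feats[:, cols], y)
    # criterion c_j = sum of squared hyperplane weights over component j
    c = {j: np.sum(w[k * 10:(k + 1) * 10] ** 2)
         for k, j in enumerate(remaining)}
    weakest = min(remaining, key=c.get)   # least influential component
    order.append(weakest)
    remaining.remove(weakest)
order.append(remaining[0])                # last surviving component
```

On this toy data the informative component survives all elimination rounds, so it appears last in the elimination order.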
Results for the two subject groups are plotted in the left and right panels of figure 3, showing estimated error rates averaged over ten folds against the number of ICs used for classification. Each subject's initials, together with the number of usable trials that subject performed, are printed to the right of the corresponding curve.³

³ RICE was also carried out using the full 400 trials for each subject (results not shown). Despite the (sometimes drastic) reduction in the number of trials, rejection by eye of artefact trials did not raise the classification error rate by an appreciable amount. The one exception was subject SK, for whom the probability of mis-classification increased by about 0.1 when 161 trials containing strong movement signals were removed—clearly this subject's movements were classifiably dependent on whether he was attending to the left or to the right.

It can be seen that a fairly large number of ICs (around 20–25 out of the 40) contribute to the classification: this may indicate that the useful information in the EEG signals is diffused fairly widely between the areas of the brain from which we are detecting signals (indeed, this is in accordance with much auditory-ERP and -MMN research, in which strong signals are often measured at the vertex, quite far from the auditory cortex [6, 7, 8, 9]). One of the motivations for reducing the dimensionality of the data is to determine whether performance can be improved as irrelevant noise is eliminated, and as the probability of overfitting decreases. However, these factors do not seem to limit performance on the current data: for most subjects, performance does not improve as features are eliminated, instead remaining roughly constant until fewer than 20–25 ICs remain. A possible exception is KT, whose performance may improve by 2–3% after elimination of 20 components, and a clearer exception
is CG, for whom elimination of 25 components yields an improvement of roughly 10%.

[Figure 3: Results of recursive independent component elimination. Estimated classification error rate against the number of ICs retained (5 to 40), for the deviant duration = 0 group (left panel) and the deviant duration = 100 msec group (right panel); each curve is labelled with the subject's initials and usable-trial count.]

The ranking returned by the RICE method is somewhat difficult to interpret, not least because each fold of the procedure can compute a different ICA decomposition, whose independent components are not necessarily readily identifiable with one another. A thorough analysis is not possible here—however, with the mixture weightings for many ICs spread very widely around the electrode array, we found no strong evidence for or against the particular involvement of muscle movement artefact signals in the classification.

4 Conclusion

Despite wide variation in performance between subjects, which is to be expected in the analysis of EEG data, our classification results suggest that it is possible for a user with no previous training to direct conscious attention, and thereby modulate the event-related potentials that occur in response to auditory stimuli reliably enough, on a single trial, to provide a useful basis for a BCI. The information used by the classifier seems to be diffused fairly widely over the scalp. While the ranking from recursive independent component elimination did not reveal any evidence of an overwhelming contribution from artefacts related to muscle activity, it is not possible to rule out completely the involvement of such artefacts—possibly the only way to be sure of this is to implement the interface with locked-in patients, preparations for which are underway.

Acknowledgments

Many thanks to Prof. Kuno Kirschfeld and Bernd Battes for the use of their laboratory.
References

[1] N. Birbaumer, A. Kübler, N. Ghanayim, T. Hinterberger, J. Perelmouter, J. Kaiser, I. Iversen, B. Kotchoubey, N. Neumann, and H. Flor. The Thought Translation Device (TTD) for Completely Paralyzed Patients. IEEE Transactions on Rehabilitation Engineering, 8(2):190–193, June 2000.
[2] G. Pfurtscheller, C. Neuper, A. Schlögl, and K. Lugger. Separability of EEG signals recorded during right and left motor imagery using adaptive autoregressive parameters. IEEE Transactions on Rehabilitation Engineering, 6(3):316–325, 1998.
[3] T. N. Lal, M. Schröder, T. Hinterberger, J. Weston, M. Bogdan, N. Birbaumer, and B. Schölkopf. Support Vector Channel Selection in BCI. IEEE Transactions on Biomedical Engineering, Special Issue on Brain-Computer Interfaces, 51(6):1003–1010, June 2004.
[4] E. Donchin, K. M. Spencer, and R. Wijesinghe. The mental prosthesis: Assessing the speed of a P300-based brain-computer interface. IEEE Transactions on Rehabilitation Engineering, 8:174–179, 2000.
[5] L. A. Riggs, F. Ratliff, J. C. Cornsweet, and T. N. Cornsweet. The disappearance of steadily fixated visual test objects. Journal of the Optical Society of America, 43:495–501, 1953.
[6] S. A. Hillyard, R. F. Hink, V. L. Schwent, and T. W. Picton. Electrical signs of selective attention in the human brain. Science, 182:177–180, 1973.
[7] R. Näätänen. Processing negativity: an evoked-potential reflection of selective attention. Psychological Bulletin, 92(3):605–640, 1982.
[8] R. Näätänen. The role of attention in auditory information processing as revealed by event-related potentials and other brain measures of cognitive function. Behavioral and Brain Sciences, 13:201–288, 1990.
[9] R. Näätänen. Attention and Brain Function. Erlbaum, Hillsdale NJ, 1992.
[10] E. Schröger and C. Wolff. Behavioral and electrophysiological effects of task-irrelevant sound change: a new distraction paradigm. Cognitive Brain Research, 7:71–87, 1998.
[11] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, USA, 2002.
[12] K. R. Müller, J. Kohlmorgen, A. Ziehe, and B. Blankertz. Decomposition algorithms for analysing brain signals. In S. Haykin, editor, Adaptive Systems for Signal Processing, Communications and Control, pages 105–110, 2000.
[13] A. Delorme and S. Makeig. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including Independent Component Analysis. Journal of Neuroscience Methods, 134:9–21, 2004.
[14] A. Hyvärinen. Fast and robust fixed-point algorithms for Independent Component Analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[15] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene Selection for Cancer Classification using Support Vector Machines. Journal of Machine Learning Research, 3:1439–1461, 2003.
2004
Worst-Case Analysis of Selective Sampling for Linear-Threshold Algorithms∗

Nicolò Cesa-Bianchi, DSI, University of Milan, cesa-bianchi@dsi.unimi.it
Claudio Gentile, Università dell'Insubria, gentile@dsi.unimi.it
Luca Zaniboni, DTI, University of Milan, zaniboni@dti.unimi.it

Abstract

We provide a worst-case analysis of selective sampling algorithms for learning linear threshold functions. The algorithms considered in this paper are Perceptron-like algorithms, i.e., algorithms which can be efficiently run in any reproducing kernel Hilbert space. Our algorithms exploit a simple margin-based randomized rule to decide whether to query the current label. We obtain selective sampling algorithms achieving on average the same bounds as those proven for their deterministic counterparts, but using far fewer labels. We complement our theoretical findings with an empirical comparison on two text categorization tasks. The outcome of these experiments is largely predicted by our theoretical results: our selective sampling algorithms tend to perform as well as the algorithms receiving the true label after each classification, while observing in practice substantially fewer labels.

1 Introduction

In this paper, we consider learning binary classification tasks with partially labelled data via selective sampling. A selective sampling algorithm (e.g., [3, 12, 7] and references therein) is an on-line learning algorithm that receives a sequence of unlabelled instances, and decides whether or not to query the label of the current instance based on instances and labels observed so far. The idea is to let the algorithm determine which labels are most useful to its inference mechanism, so that redundant examples can be discarded on the fly and labels can be saved. The overall goal of selective sampling is to fit real-world scenarios where labels are scarce or expensive.
As a by-now classical example, in a web-searching task, collecting web pages is a fairly automated process, but assigning them a label (a set of topics) often requires time-consuming and costly human expertise. In these cases, it is clearly important to devise learning algorithms having the ability to exploit the label information as much as possible. Furthermore, when we consider kernel-based algorithms [23, 9, 21], saving labels directly implies saving support vectors in the currently built hypothesis, which, in turn, implies saving running time in both training and test phases. Many algorithms have been proposed in the literature to cope with the broad task of learning with partially labelled data, working under both probabilistic and worst-case assumptions, for either on-line or batch settings. These range from active learning algorithms [8, 22], to the query-by-committee algorithm [12], to the adversarial "apple tasting" and label-efficient algorithms investigated in [16] and [17, 6], respectively.

∗ The authors gratefully acknowledge partial support by the PASCAL Network of Excellence under EC grant no. 506778. This publication only reflects the authors' views.

In this paper we present a worst-case analysis of two Perceptron-like selective sampling algorithms. Our analysis relies on and contributes to a well-established way of studying linear-threshold algorithms within the mistake bound model of on-line learning (e.g., [18, 15, 11, 13, 14, 5]). We show how to turn the standard versions of the (first-order) Perceptron algorithm [20] and the second-order Perceptron algorithm [5] into selective sampling algorithms exploiting a randomized margin-based criterion (inspired by [6]) to select labels, while preserving in expectation the same mistake bounds. In a sense, this line of research complements an earlier work on selective sampling [7], where a second-order kind of algorithm was analyzed under precise stochastic assumptions about the way data are generated.
In this paper, by contrast, we avoid any assumption whatsoever on the data-generating process, but we are still able to prove meaningful statements about the label efficiency features of our algorithms. In order to give some empirical evidence for our analysis, we made some experiments on two medium-size text categorization tasks. These experiments confirm our theoretical results, and show the effectiveness of our margin-based label selection rule.

2 Preliminaries, notation

An example is a pair (x, y), where x ∈ R^n is an instance vector and y ∈ {−1, +1} is the associated binary label. A training set S is any finite sequence of examples S = ((x_1, y_1), . . . , (x_T, y_T)) ∈ (R^n × {−1, +1})^T. We say that S is linearly separable if there exists a vector u ∈ R^n such that $y_t u^\top x_t > 0$ for t = 1, . . . , T. We consider the following selective sampling variant of a standard on-line learning model (e.g., [18, 24, 19, 15] and references therein). This variant has been investigated in [6] for a version of Littlestone's Winnow algorithm [18, 15]. Learning proceeds on-line in a sequence of trials. In the generic trial t the algorithm receives instance x_t from the environment, outputs a prediction ŷ_t ∈ {−1, +1} about the label y_t associated with x_t, and decides whether or not to query the label y_t. No matter what the algorithm decides, we say that the algorithm has made a prediction mistake if ŷ_t ≠ y_t. We measure the performance of the algorithm by the total number of mistakes it makes on S (including the trials where the true label remains hidden). Given a comparison class of predictors, the goal of the algorithm is to bound the amount by which this total number of mistakes differs, on an arbitrary sequence S, from some measure of the performance of the best predictor in hindsight within the comparison class.
Since we are dealing with (zero-threshold) linear-threshold algorithms, it is natural to assume the comparison class to be the set of all (zero-threshold) linear-threshold predictors, i.e., all (possibly normalized) vectors u ∈ R^n. Given a margin value γ > 0, we measure the performance of u on S by its cumulative hinge loss¹ [11, 13] $\sum_{t=1}^{T} D_\gamma(u; (x_t, y_t))$, where $D_\gamma(u; (x_t, y_t)) = \max\{0, \gamma - y_t u^\top x_t\}$. Broadly speaking, the goal of the selective sampling algorithm is to achieve the best bound on the number of mistakes with as few queried labels as possible. As in [6], our algorithms exploit a margin-based randomized rule to decide which labels to query. Thus, our mistake bounds are actually worst-case over the training sequence and average-case over the internal randomization of the algorithms. All expectations occurring in this paper are w.r.t. this randomization.

¹ The cumulative hinge loss measures to what extent hyperplane u separates S at margin γ. This is also called the soft margin in the SVM literature [23, 9, 21].

3 The algorithms and their analysis

As a simple example, we start by turning the classical Perceptron algorithm [20] into a worst-case selective sampling algorithm. The algorithm, described in Figure 1, has a real parameter b > 0 which might be viewed as a noise parameter, ruling the extent to which a linear threshold model fits the data at hand. The algorithm maintains a vector v ∈ R^n (whose initial value is zero).

ALGORITHM: Selective sampling Perceptron algorithm.
Parameter: b > 0. Initialization: v_0 = 0; k = 1.
For t = 1, 2, . . . do:
  1. get instance vector x_t ∈ R^n and set $r_t = v_{k-1}^\top \hat{x}_t$, with $\hat{x}_t = x_t / \|x_t\|$;
  2. predict with ŷ_t = SGN(r_t) ∈ {−1, +1};
  3. draw a Bernoulli random variable Z_t ∈ {0, 1} of parameter $\frac{b}{b + |r_t|}$;
  4. if Z_t = 1 then: (a) ask for label y_t ∈ {−1, +1}; (b) if ŷ_t ≠ y_t then update as follows: v_k = v_{k−1} + y_t x̂_t, k ← k + 1.
Figure 1: The selective sampling (first-order) Perceptron algorithm.
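A runnable sketch of the Figure 1 procedure follows. The toy linearly separable stream is an illustrative assumption (the paper's experiments use text data), and SGN(0) is taken as +1 here.

```python
import numpy as np

def selective_perceptron(xs, ys, b, rng):
    """Selective sampling Perceptron (Figure 1): query the label with
    probability b / (b + |r_t|); update only on queried mistakes."""
    v = np.zeros(xs.shape[1])
    mistakes = queries = 0
    for x, y in zip(xs, ys):
        x_hat = x / np.linalg.norm(x)
        r = v @ x_hat
        y_pred = 1 if r >= 0 else -1          # SGN(r_t)
        if y_pred != y:
            mistakes += 1                     # counted even if y_t stays hidden
        if rng.random() < b / (b + abs(r)):   # Bernoulli Z_t
            queries += 1                      # label y_t is revealed
            if y_pred != y:
                v = v + y * x_hat             # Perceptron additive update
    return mistakes, queries

rng = np.random.default_rng(4)
u = np.array([1.0, -0.5, 2.0])                # hidden target hyperplane
xs = rng.standard_normal((500, 3))
ys = np.where(xs @ u >= 0, 1, -1)
mistakes, queries = selective_perceptron(xs, ys, b=0.5, rng=rng)
```

Note that mistakes are counted on every trial, queried or not, exactly as in the performance measure defined above.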
In each trial t the algorithm observes an instance vector x_t ∈ R^n and predicts the binary label y_t through the sign of the margin value $r_t = v_{k-1}^\top \hat{x}_t$. Then the algorithm decides whether to ask for the label y_t through a simple randomized rule: a coin with bias b/(b + |r_t|) is flipped; if the coin turns up heads (Z_t = 1 in Figure 1) then the label y_t is revealed. Moreover, on a prediction mistake (ŷ_t ≠ y_t) the algorithm updates vector v_k according to the usual Perceptron additive rule. On the other hand, if either the coin turns up tails or ŷ_t = y_t no update takes place. Notice that k is incremented only when an update occurs. Thus, at the end of trial t, subscript k counts the number of updates made so far (plus one). In the following theorem we prove that our selective sampling version of the Perceptron algorithm can achieve, in expectation, the same mistake bound as the standard Perceptron's using fewer labels. See Remark 1 for a discussion.

Theorem 1. Let S = ((x_1, y_1), (x_2, y_2), . . . , (x_T, y_T)) ∈ (R^n × {−1, +1})^T be any sequence of examples and U_T be the (random) set of update trials for the algorithm in Figure 1 (i.e., the set of trials t ≤ T such that ŷ_t ≠ y_t and Z_t = 1). Then the expected number of mistakes made by the algorithm in Figure 1 is upper bounded by

$\inf_{\gamma>0}\,\inf_{u\in\mathbb{R}^n}\left(\frac{2b+1}{2b}\,\mathbb{E}\left[\sum_{t\in U_T}\frac{1}{\gamma}D_\gamma(u;(\hat{x}_t,y_t))\right]+\frac{(2b+1)^2}{8b}\,\frac{\|u\|^2}{\gamma^2}\right).$

The expected number of labels queried by the algorithm is equal to $\sum_{t=1}^{T}\mathbb{E}\left[\frac{b}{b+|r_t|}\right]$.

Proof. Let M_t be the Bernoulli variable which is one iff ŷ_t ≠ y_t and denote by k(t) the value of the update counter k in trial t just before the update k ← k + 1. Our goal is then to bound $\mathbb{E}\left[\sum_{t=1}^{T}M_t\right]$ from above. Consider the case when trial t is such that M_t Z_t = 1. Then one can verify by direct inspection that choosing $r_t = v_{k(t-1)}^\top\hat{x}_t$ (as in Figure 1) yields

$y_t\,u^\top\hat{x}_t - y_t\,r_t = \tfrac{1}{2}\|u-v_{k(t-1)}\|^2 - \tfrac{1}{2}\|u-v_{k(t)}\|^2 + \tfrac{1}{2}\|v_{k(t-1)}-v_{k(t)}\|^2,$

holding for any u ∈ R^n.
On the other hand, if trial t is such that M_t Z_t = 0 we have v_{k(t−1)} = v_{k(t)}. Hence we conclude that the equality

$M_t Z_t\left(y_t\,u^\top\hat{x}_t - y_t\,r_t\right) = \tfrac{1}{2}\|u-v_{k(t-1)}\|^2 - \tfrac{1}{2}\|u-v_{k(t)}\|^2 + \tfrac{1}{2}\|v_{k(t-1)}-v_{k(t)}\|^2$

actually holds for all trials t. We sum over t = 1, . . . , T while observing that M_t Z_t = 1 implies both ‖v_{k(t−1)} − v_{k(t)}‖ = 1 and y_t r_t ≤ 0. Recalling that v_{k(0)} = 0 and rearranging we obtain

$\sum_{t=1}^{T} M_t Z_t\left(y_t\,u^\top\hat{x}_t + |r_t| - \tfrac{1}{2}\right) \le \tfrac{1}{2}\|u\|^2, \quad \forall u \in \mathbb{R}^n.$ (1)

Now, since the previous inequality holds for any comparison vector u ∈ R^n, we stretch u to $\frac{b+1/2}{\gamma}\,u$, being γ > 0 a free parameter. Then, by the very definition of $D_\gamma(u;(\hat{x}_t,y_t))$,

$\frac{b+1/2}{\gamma}\,y_t\,u^\top\hat{x}_t \;\ge\; \frac{b+1/2}{\gamma}\left(\gamma - D_\gamma(u;(\hat{x}_t,y_t))\right) \quad \forall \gamma > 0.$

Plugging into (1) and rearranging,

$\sum_{t=1}^{T} M_t Z_t\,(b+|r_t|) \;\le\; \left(b+\tfrac{1}{2}\right)\sum_{t\in U_T}\tfrac{1}{\gamma}D_\gamma(u;(\hat{x}_t,y_t)) + \frac{(2b+1)^2}{8\gamma^2}\,\|u\|^2.$ (2)

From Figure 1 we see that $\mathbb{E}[Z_t \mid Z_1,\ldots,Z_{t-1}] = \frac{b}{b+|r_t|}$. Therefore, taking expectations on both sides of (2),

$\mathbb{E}\left[\sum_{t=1}^{T} M_t Z_t\,(b+|r_t|)\right] = \sum_{t=1}^{T}\mathbb{E}\Big[\mathbb{E}\big[M_t Z_t\,(b+|r_t|) \mid Z_1,\ldots,Z_{t-1}\big]\Big] = \sum_{t=1}^{T}\mathbb{E}\Big[M_t\,(b+|r_t|)\,\mathbb{E}[Z_t \mid Z_1,\ldots,Z_{t-1}]\Big] = \mathbb{E}\left[\sum_{t=1}^{T} M_t\right] b.$

Replacing back into (2) and dividing by b proves the claimed bound on $\mathbb{E}[\sum_{t=1}^{T} M_t]$. The value of $\mathbb{E}[\sum_{t=1}^{T} Z_t]$ (the expected number of queried labels) trivially follows from $\mathbb{E}[\sum_{t=1}^{T} Z_t] = \mathbb{E}[\sum_{t=1}^{T}\mathbb{E}[Z_t \mid Z_1,\ldots,Z_{t-1}]]$.

ALGORITHM: Selective sampling second-order Perceptron algorithm.
Parameter: b > 0. Initialization: A_0 = I; v_0 = 0; k = 1.
For t = 1, 2, . . . do:
  1. get x_t ∈ R^n and set $r_t = v_{k-1}^\top (A_{k-1} + \hat{x}_t\hat{x}_t^\top)^{-1}\hat{x}_t$, with $\hat{x}_t = x_t/\|x_t\|$;
  2. predict with ŷ_t = SGN(r_t) ∈ {−1, +1};
  3. draw a Bernoulli random variable Z_t ∈ {0, 1} of parameter
     $\frac{b}{\,b + |r_t| + \frac{1}{2}\,r_t^2\left(1 + \hat{x}_t^\top A_{k-1}^{-1}\hat{x}_t\right)}\,;$ (3)
  4. if Z_t = 1 then: (a) ask for label y_t ∈ {−1, +1}; (b) if ŷ_t ≠ y_t then update as follows: v_k = v_{k−1} + y_t x̂_t, A_k = A_{k−1} + x̂_t x̂_t^⊤, k ← k + 1.
Figure 2: The selective sampling second-order Perceptron algorithm.
We now consider the selective sampling version of the second-order Perceptron algorithm, as defined in [5]. See Figure 2. Unlike the first-order algorithm, the second-order algorithm maintains a vector v ∈ R^n and a matrix A ∈ R^{n×n} (whose initial value is the identity matrix I). The algorithm predicts through the sign of the margin quantity $r_t = v_{k-1}^\top (A_{k-1} + \hat{x}_t\hat{x}_t^\top)^{-1}\hat{x}_t$, and decides whether to ask for the label y_t through a randomized rule similar to the one in Figure 1. The analysis follows the same pattern as the proof of Theorem 1. A key step in this analysis is a one-trial progress equation developed in [10] for a regression framework. See also [4]. Again, the comparison between the second-order Perceptron's bound and the one contained in Theorem 2 reveals that the selective sampling algorithm can achieve, in expectation, the same mistake bound (see Remark 1) using fewer labels.

Theorem 2. Using the notation of Theorem 1, the expected number of mistakes made by the algorithm in Figure 2 is upper bounded by

$\inf_{\gamma>0}\,\inf_{u\in\mathbb{R}^n}\left(\mathbb{E}\left[\sum_{t\in U_T}\frac{1}{\gamma}D_\gamma(u;(\hat{x}_t,y_t))\right] + \frac{b}{2\gamma^2}\,u^\top\mathbb{E}\left[A_{k(T)}\right]u + \frac{1}{2b}\sum_{i=1}^{n}\mathbb{E}\left[\ln(1+\lambda_i)\right]\right),$

where λ_1, . . . , λ_n are the eigenvalues of the (random) correlation matrix $\sum_{t\in U_T}\hat{x}_t\hat{x}_t^\top$ and $A_{k(T)} = I + \sum_{t\in U_T}\hat{x}_t\hat{x}_t^\top$ (thus 1 + λ_i is the i-th eigenvalue of A_{k(T)}). The expected number of labels queried by the algorithm is equal to $\sum_{t=1}^{T}\mathbb{E}\left[\frac{b}{\,b+|r_t|+\frac{1}{2}r_t^2\left(1+\hat{x}_t^\top A_{k-1}^{-1}\hat{x}_t\right)}\right]$.

Proof sketch. The proof proceeds along the same lines as the proof of Theorem 1. Thus we only emphasize the main differences. In addition to the notation given there, we define U_t as the set of update trials up to time t, i.e., U_t = {i ≤ t : M_i Z_i = 1}, and R_t as the (random) function $R_t(u) = \tfrac{1}{2}\|u\|^2 + \sum_{i\in U_t}\tfrac{1}{2}\left(y_i - u^\top\hat{x}_i\right)^2$.
When trial t is such that M_t Z_t = 1 we can exploit a result contained in [10] for linear regression (proof of Theorem 3 therein), where it is essentially shown that choosing $r_t = v_{k(t)-1}^\top A_{k(t)}^{-1}\hat{x}_t$ (as in Figure 2) yields

$\tfrac{1}{2}(y_t - r_t)^2 = \inf_{u\in\mathbb{R}^n} R_t(u) - \inf_{u\in\mathbb{R}^n} R_{t-1}(u) + \tfrac{1}{2}\left(\hat{x}_t^\top A_{k(t)}^{-1}\hat{x}_t - r_t^2\,\hat{x}_t^\top A_{k(t)-1}^{-1}\hat{x}_t\right).$ (4)

On the other hand, if trial t is such that M_t Z_t = 0 we have U_t = U_{t−1}, thus $\inf_{u} R_{t-1}(u) = \inf_{u} R_t(u)$. Hence the equality

$\tfrac{1}{2}M_t Z_t\left((y_t - r_t)^2 + r_t^2\,\hat{x}_t^\top A_{k(t)-1}^{-1}\hat{x}_t\right) = \inf_{u\in\mathbb{R}^n} R_t(u) - \inf_{u\in\mathbb{R}^n} R_{t-1}(u) + \tfrac{1}{2}M_t Z_t\,\hat{x}_t^\top A_{k(t)}^{-1}\hat{x}_t$ (5)

holds for all trials t. We sum over t = 1, . . . , T, and observe that by definition $R_T(u) = \tfrac{1}{2}\|u\|^2 + \sum_{t=1}^{T}\tfrac{M_t Z_t}{2}\left(y_t - u^\top\hat{x}_t\right)^2$ and $R_0(u) = \tfrac{1}{2}\|u\|^2$ (thus $\inf_{u} R_0(u) = 0$). After some manipulation one can see that (5) implies

$\sum_{t=1}^{T} M_t Z_t\left(y_t\,u^\top\hat{x}_t + |r_t| + \tfrac{1}{2}r_t^2\left(1 + \hat{x}_t^\top A_{k(t)-1}^{-1}\hat{x}_t\right)\right) \le \tfrac{1}{2}\,u^\top A_{k(T)}u + \sum_{t=1}^{T}\tfrac{1}{2}M_t Z_t\,\hat{x}_t^\top A_{k(t)}^{-1}\hat{x}_t,$ (6)

holding for any u ∈ R^n. We continue by elaborating on (6). First, as in [4, 10, 5], we upper bound the quadratic terms $\hat{x}_t^\top A_{k(t)}^{-1}\hat{x}_t$ by² $\ln\frac{\det(A_{k(t)})}{\det(A_{k(t)-1})}$. This gives

$\sum_{t=1}^{T}\tfrac{1}{2}M_t Z_t\,\hat{x}_t^\top A_{k(t)}^{-1}\hat{x}_t \le \tfrac{1}{2}\ln\frac{\det(A_{k(T)})}{\det(A_0)} = \tfrac{1}{2}\sum_{i=1}^{n}\ln(1+\lambda_i).$

Second, as in the proof of Theorem 1, we stretch the comparison vector u ∈ R^n to $\frac{b}{\gamma}u$ and introduce hinge loss terms. We obtain:

$\sum_{t=1}^{T} M_t Z_t\left(b + |r_t| + \tfrac{1}{2}r_t^2\left(1 + \hat{x}_t^\top A_{k(t)-1}^{-1}\hat{x}_t\right)\right) \le b\sum_{t\in U_T}\tfrac{1}{\gamma}D_\gamma(u;(\hat{x}_t,y_t)) + \frac{b^2}{2\gamma^2}\,u^\top A_{k(T)}u + \tfrac{1}{2}\sum_{i=1}^{n}\ln(1+\lambda_i).$ (7)

The bounds on $\mathbb{E}[\sum_{t=1}^{T} M_t]$ and $\mathbb{E}[\sum_{t=1}^{T} Z_t]$ can now be obtained by following the proof of Theorem 1.

Remark 1. The bounds in Theorems 1 and 2 depend on the choice of parameter b. As a matter of fact, the optimal tuning of this parameter is easily computed. Let us set for brevity $\hat{D}_\gamma(u;S) = \mathbb{E}\left[\sum_{t\in U_T}\frac{1}{\gamma}D_\gamma(u;(\hat{x}_t,y_t))\right]$. Choosing³ $b = \frac{1}{2}\sqrt{1 + \frac{4\gamma^2}{\|u\|^2}\,\hat{D}_\gamma(u;S)}$ in Theorem 1 gives the following bound on the expected number of mistakes:

$\inf_{u\in\mathbb{R}^n}\left(\hat{D}_\gamma(u;S) + \frac{\|u\|^2}{2\gamma^2} + \frac{\|u\|}{2\gamma}\sqrt{\hat{D}_\gamma(u;S) + \frac{\|u\|^2}{4\gamma^2}}\right).$
(8)

This is an expectation version of the mistake bound for the standard (first-order) Perceptron algorithm [14]. Notice that in the special case when the data are linearly separable with margin γ∗ the optimal tuning simplifies to b = 1/2 and yields the familiar Perceptron bound ‖u‖²/(γ∗)². On the other hand, if we set $b = \gamma\sqrt{\frac{\sum_{i=1}^{n}\mathbb{E}\ln(1+\lambda_i)}{u^\top\mathbb{E}[A_{k(T)}]u}}$ in Theorem 2 we are led to the bound

$\inf_{u\in\mathbb{R}^n}\left(\hat{D}_\gamma(u;S) + \frac{1}{\gamma}\sqrt{\left(u^\top\mathbb{E}\left[A_{k(T)}\right]u\right)\sum_{i=1}^{n}\mathbb{E}\ln(1+\lambda_i)}\right),$ (9)

which is an expectation version of the mistake bound for the (deterministic) second-order Perceptron algorithm, as proven in [5]. As it turns out, (8) and (9) might be even sharper than their deterministic counterparts. In fact, the set of update trials U_T is on average significantly smaller than the one for the deterministic algorithms. This tends to shrink the three terms $\hat{D}_\gamma(u;S)$, $u^\top\mathbb{E}[A_{k(T)}]u$, and $\sum_{i=1}^{n}\mathbb{E}\ln(1+\lambda_i)$, the main ingredients of the selective sampling bounds.

² Here det denotes the determinant.
³ Clearly, this tuning relies on information not available ahead of time, since it depends on the whole sequence of examples. The same holds for the choice of b giving rise to (9).

Remark 2. Like any Perceptron-like algorithm, the algorithms in Figures 1 and 2 can be efficiently run in any given reproducing kernel Hilbert space (e.g., [9, 21, 23]), just by turning them into equivalent dual forms. This is actually what we did in the experiments reported in the next section.

4 Experiments

The empirical evaluation of our algorithms was carried out on two datasets of free-text documents. The first dataset is made up of the first (in chronological order) 40,000 newswire stories from Reuters Corpus Volume 1 (RCV1) [2]. The resulting set of examples was classified over 101 categories. The second dataset is a specific subtree of the OHSUMED corpus of medical abstracts [1]: the subtree rooted in "Quality of Health Care" (MeSH code N05.712).
From this subtree we randomly selected a subset of 40,000 abstracts. The resulting number of categories was 94. We performed a standard preprocessing on the datasets – details will be given in the full paper. Two kinds of experiments were made on each dataset. In the first experiment we compared the selective sampling algorithms in Figures 1 and 2 (for different values of b), with the standard second-order Perceptron algorithm (requesting all labels). Such a comparison was devoted to studying the extent to which a reduced number of label requests might lead to performance degradation. In the second experiment, we compared variable vs. constant label-request rate. That is, we fixed a few values for parameter b, ran the selective sampling algorithm in Figure 2, and computed the fraction of labels requested over the training set. Call this fraction p̂ = p̂(b). We then ran a second-order selective sampling algorithm with (constant) label request probability equal to p̂ (independent of t). The aim of this experiment was to investigate the effectiveness of a margin-based selective sampling criterion, as opposed to a random one. Figure 3 summarizes the results we obtained on RCV1 (the results on OHSUMED turned out to be similar, and are therefore omitted from this paper). For the purpose of this graphical representation, we selected the 50 most frequent categories from RCV1, those with frequency larger than 1%.
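The matched-budget comparison can be sketched with a tiny first-order stand-in (the paper's experiments use the second-order algorithm on text data; everything below is an illustrative assumption): run the margin-based sampler, measure its overall label fraction p̂, then rerun with the constant request probability p̂.

```python
import numpy as np

def run(xs, ys, query_prob, seed):
    """First-order selective sampler with a pluggable label-request
    rule: query_prob(r) maps the current margin r to a probability."""
    rng = np.random.default_rng(seed)
    v = np.zeros(xs.shape[1])
    mistakes = queries = 0
    for x, y in zip(xs, ys):
        x = x / np.linalg.norm(x)
        r = v @ x
        wrong = (1 if r >= 0 else -1) != y
        mistakes += wrong
        if rng.random() < query_prob(r):
            queries += 1
            if wrong:
                v = v + y * x
    return mistakes, queries

rng = np.random.default_rng(5)
u = rng.standard_normal(5)                     # hidden target hyperplane
xs = rng.standard_normal((1000, 5))
ys = np.where(xs @ u >= 0, 1, -1)

b = 0.1
m_margin, q_margin = run(xs, ys, lambda r: b / (b + abs(r)), seed=6)
p_hat = q_margin / len(xs)                     # matched overall label budget
m_fixed, q_fixed = run(xs, ys, lambda r: p_hat, seed=7)
```

Both runs then observe roughly the same number of labels, isolating the effect of *where* the margin-based rule spends its label budget.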
The standard second-order algorithm is denoted by 2ND-ORDER-ALL-LABELS, the selective sampling algorithms in Figures 1 and 2 are denoted by 1ST-ORDER and 2ND-ORDER, respectively, whereas the second-order algorithm with constant label request is denoted by 2ND-ORDER-FIXED-BIAS.⁴ As evinced by Figure 3(a), there is a range of values for the parameter b that makes 2ND-ORDER achieve almost the same performance as 2ND-ORDER-ALL-LABELS, but with a substantial reduction in the total number of queried labels.⁵ In Figure 3(b) we report the results of running 2ND-ORDER, 1ST-ORDER and 2ND-ORDER-FIXED-BIAS after choosing values for b that make the average F-measure achieved by 2ND-ORDER just slightly larger than those achieved by the other two algorithms. We then compared the resulting label request rates and found 2ND-ORDER to be largely the best among the three algorithms (its instantaneous label rate after 40,000 examples is less than 19%). We made similar experiments for specific categories in RCV1. On the most frequent ones (such as category 70 – Figure 3(c)) this behavior gets emphasized. Finally, in Figure 3(d) we report a direct macroaveraged F-measure comparison

⁴We omitted to report on the first-order algorithms 1ST-ORDER-ALL-LABELS and 1ST-ORDER-FIXED-BIAS, since they are always outperformed by their corresponding second-order algorithms.
⁵Notice that the figures are plotting instantaneous label rates, hence the overall fraction of queried labels is obtained by integration.
Figure 3: Instantaneous F-measure and instantaneous label-request rate on the RCV1 dataset. We solved a binary classification problem for each class and then (macro)averaged the results. All curves tend to flatten after about 24,000 examples (out of 40,000). (a) Instantaneous macroaveraged F-measure of 2ND-ORDER (for three values of b) and their corresponding label-request curves. For the sake of comparison, we also included the F-measure of 2ND-ORDER-ALL-LABELS. (b) Comparison among 2ND-ORDER, 1ST-ORDER and 2ND-ORDER-FIXED-BIAS. (c) Same comparison on a specific category. (d) F-measure of 2ND-ORDER vs.
F-measure of 2ND-ORDER-FIXED-BIAS for 5 values of the parameter b, after 40,000 examples.

between 2ND-ORDER and 2ND-ORDER-FIXED-BIAS for 5 values of b. On the x-axis are the resulting 5 values of the constant bias p̂(b). As expected, 2ND-ORDER outperforms 2ND-ORDER-FIXED-BIAS, though the difference between the two tends to shrink as b (or, equivalently, p̂(b)) gets larger.

5 Conclusions and open problems
We have introduced new Perceptron-like selective sampling algorithms for learning linear-threshold functions. We analyzed these algorithms in a worst-case on-line learning setting, providing bounds on both the expected number of mistakes and the expected number of labels requested. Our theoretical investigation naturally arises from the traditional way margin-based algorithms are analyzed in the mistake bound model of on-line learning [18, 15, 11, 13, 14, 5]. This investigation suggests that our worst-case selective sampling algorithms can achieve on average the same accuracy as their more standard relatives, while allowing a substantial label saving. These theoretical results are corroborated by our empirical comparison on textual data, where we have shown that: (1) the selective sampling algorithms tend to be unaffected by observing fewer and fewer labels; (2) if we fix ahead of time the total number of label observations, the margin-driven way of distributing these observations over the training set is far more effective than a random one. We close with two simple open questions. (1) Our selective sampling algorithms depend on a scale parameter b that has a significant influence on their practical performance. Is there any principled way of adaptively tuning b so as to reduce the algorithms' sensitivity to tuning parameters? (2) Theorems 1 and 2 do not make any explicit statement about the number of weight updates/support vectors computed by our selective sampling algorithms.
We would like to see a theoretical argument that enables us to combine the bound on the number of mistakes with that on the number of labels, giving rise to a meaningful upper bound on the number of updates.

References
[1] The OHSUMED test collection. URL: medir.ohsu.edu/pub/ohsumed/.
[2] Reuters corpus volume 1. URL: about.reuters.com/researchandstandards/corpus/.
[3] Atlas, L., Cohn, R., and Ladner, R. (1990). Training connectionist networks with queries and selective sampling. In NIPS 2. MIT Press.
[4] Azoury, K.S., and Warmuth, M.K. (2001). Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211–246.
[5] Cesa-Bianchi, N., Conconi, A., and Gentile, C. (2002). A second-order Perceptron algorithm. In Proc. 15th COLT, pp. 121–137. LNAI 2375, Springer.
[6] Cesa-Bianchi, N., Lugosi, G., and Stoltz, G. (2004). Minimizing regret with label efficient prediction. In Proc. 17th COLT, to appear.
[7] Cesa-Bianchi, N., Conconi, A., and Gentile, C. (2003). Learning probabilistic linear-threshold classifiers via selective sampling. In Proc. 16th COLT, pp. 373–386. LNAI 2777, Springer.
[8] Campbell, C., Cristianini, N., and Smola, A. (2000). Query learning with large margin classifiers. In Proc. 17th ICML, pp. 111–118. Morgan Kaufmann.
[9] Cristianini, N., and Shawe-Taylor, J. (2001). An Introduction to Support Vector Machines. Cambridge University Press.
[10] Forster, J. (1999). On relative loss bounds in generalized linear regression. In Proc. 12th Int. Symp. FCT, pp. 269–280. Springer.
[11] Freund, Y., and Schapire, R. E. (1999). Large margin classification using the perceptron algorithm. Machine Learning, 37(3), 277–296.
[12] Freund, Y., Seung, S., Shamir, E., and Tishby, N. (1997). Selective sampling using the query by committee algorithm. Machine Learning, 28(2/3):133–168.
[13] Gentile, C., and Warmuth, M. (1998). Linear hinge loss and average margin. In NIPS 10, MIT Press, pp. 225–231.
[14] Gentile, C. (2003). The robustness of the p-norm algorithms. Machine Learning, 53(3), 265–299.
[15] Grove, A.J., Littlestone, N., and Schuurmans, D. (2001). General convergence results for linear discriminant updates. Machine Learning, 43(3), 173–210.
[16] Helmbold, D.P., Littlestone, N., and Long, P.M. (2000). Apple tasting. Information and Computation, 161(2), 85–139.
[17] Helmbold, D.P., and Panizza, S. (1997). Some label efficient learning results. In Proc. 10th COLT, pp. 218–230. ACM Press.
[18] Littlestone, N. (1988). Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2(4), 285–318.
[19] Littlestone, N., and Warmuth, M.K. (1994). The weighted majority algorithm. Information and Computation, 108(2), 212–261.
[20] Rosenblatt, F. (1958). The Perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Review, 65, 386–408.
[21] Schölkopf, B., and Smola, A. (2002). Learning with Kernels. MIT Press.
[22] Tong, S., and Koller, D. (2000). Support vector machine active learning with applications to text classification. In Proc. 17th ICML. Morgan Kaufmann.
[23] Vapnik, V.N. (1998). Statistical Learning Theory. Wiley.
[24] Vovk, V. (1990). Aggregating strategies. In Proc. 3rd COLT, pp. 371–383. Morgan Kaufmann.
|
2004
|
118
|
2,528
|
Markov Networks for Detecting Overlapping Elements in Sequence Data
Joseph Bockhorst, Dept. of Computer Sciences, University of Wisconsin, Madison, WI 53706, joebock@cs.wisc.edu
Mark Craven, Dept. of Biostatistics and Medical Informatics, University of Wisconsin, Madison, WI 53706, craven@biostat.wisc.edu

Abstract
Many sequential prediction tasks involve locating instances of patterns in sequences. Generative probabilistic language models, such as hidden Markov models (HMMs), have been successfully applied to many of these tasks. A limitation of these models, however, is that they cannot naturally handle cases in which pattern instances overlap in arbitrary ways. We present an alternative approach, based on conditional Markov networks, that can naturally represent arbitrarily overlapping elements. We show how to efficiently train and perform inference with these models. Experimental results from a genomics domain show that our models are more accurate at locating instances of overlapping patterns than are baseline models based on HMMs.

1 Introduction
Hidden Markov models (HMMs) and related probabilistic sequence models have been among the most accurate methods used for sequence-based prediction tasks in genomics, natural language processing and other problem domains. One key limitation of these models, however, is that they cannot represent general overlaps among sequence elements in a concise and natural manner. We present a novel approach to modeling and predicting overlapping sequence elements that is based on undirected Markov networks. Our work is motivated by the task of predicting DNA sequence elements involved in the regulation of gene expression in bacteria. Like HMM-based methods, our approach is able to represent and exploit relationships among different sequence elements of interest. In contrast to HMMs, however, our approach can naturally represent sequence elements that overlap in arbitrary ways.
We describe and evaluate our approach in the context of predicting a bacterial genome's genes and regulatory "signals" (together, its regulatory elements). Part of the process of understanding a given genome is to assemble a "parts list", often using computational methods, of its regulatory elements. Predictions, in this case, entail specifying the start and end coordinates of subsequences of interest. It is common in bacterial genomes for these important sequence elements to overlap.

Figure 1: (a) Example arrangement of two genes, three promoters and one terminator in a DNA sequence. (b) Topology of an HMM for predicting these elements. Large circles represent element-specific sub-models and small gray circles represent inter-element sub-models, one for each allowed pair of adjacent elements. Due to the overlapping elements, there is no path through the HMM consistent with the configuration in (a).

Our approach to predicting overlapping sequence elements, which is based on discriminatively trained undirected graphical models called conditional Markov networks [5, 10] (also called conditional random fields), uses two key steps to make a set of predictions. In the first step, candidate elements are generated by having a set of models independently make predictions. In the second step, a Markov network is constructed to decide which candidate predictions to accept. Consider the task of predicting gene, promoter, and terminator elements encoded in bacterial DNA. Figure 1(a) shows an example arrangement of these elements in a DNA sequence. Genes are DNA sequences that encode information for constructing proteins. Promoters and terminators are DNA sequences that regulate transcription, the first step in the synthesis of a protein from a gene. Transcription begins at a promoter, proceeds downstream (left-to-right in Figure 1(a)), and ends at a terminator.
Regulatory elements often overlap each other, for example prom2 and prom3, or gene1 and prom2, in Figure 1. One technique for predicting these elements is first to train a probabilistic sequence model for each element type (e.g. [2, 9]) and then to "scan" an input sequence with each model in turn. Although this approach can predict overlapping elements, it is limited since it ignores inter-element dependencies. Other methods, based on HMMs (e.g. [11, 1]), explicitly consider these dependencies. Figure 1(b) shows an example topology of such an HMM. Given an input sequence, this HMM defines a probability distribution over parses: partitionings of the sequence into subsequences corresponding to elements and the regions between them. These models are not naturally suited to representing overlapping elements. For the case shown in Figure 1(a), for example, even if the subsequences for gene1 and prom2 match their respective sub-models very well, since both elements cannot be in the same parse there is a competition between the predictions of gene1 and prom2. One could expand the state set to include states for specific overlap situations; however, the number of states increases exponentially with the number of overlap configurations. Alternatively, one could use the factorized state representation of factorial HMMs [4]. These models, however, assume a fixed number of loosely connected processes evolving in parallel, which is not a good match to our genomics domain. Like HMMs, our method, called CMN-OP (conditional Markov networks for overlapping patterns), employs element-specific sub-models and probabilistic constraints on neighboring elements qualitatively expressed in a graph. The key difference between CMN-OP and HMMs is the probability distributions they define for an input sequence.
While, as mentioned above, an HMM defines a probability distribution over partitions of the sequence, a CMN-OP defines a probability distribution over all possible joint arrangements of elements in an input sequence. Figure 2 illustrates this distinction.

Figure 2: An illustration of the difference in the sample spaces on which probability distributions over labelings are defined by (a) HMMs and (b) CMN-OP models. The left side of (a) shows a sequence of length eight for which an HMM has predicted that an element of interest occupies two subsequences, [1:3] and [6:7]. The darker subsequences, [4:5] and [8:8], represent sequence regions between predicted elements. The right side of (a) shows the corresponding event in the sample space of the HMM, which associates one label with each position. The left side of (b) shows four predicted elements made by a CMN-OP model. The right side of (b) illustrates the corresponding event in the CMN-OP sample space. Each square corresponds to a subsequence, and an event in this sample space assigns a (possibly empty) label to each subsequence.

2 Models
A conditional Markov network [5, 10] (CMN) defines the conditional probability distribution Pr(Y|X), where X is a set of observable input random variables and Y is a set of output random variables. As with standard Markov networks, a CMN consists of a qualitative graphical component G = (V, E), with vertex set V and edge set E, that encodes a set of conditional independence assertions, along with a quantitative component in the form of a set of potentials Φ over the cliques of G. In CMNs, V = X ∪ Y. We denote an assignment of values to a set of random variables U with u.
Each clique q = (X_q, Y_q) in the clique set Q(G) has a potential function φ_q(x_q, y_q) ∈ Φ that assigns a non-negative number to each joint setting of (X_q, Y_q). A CMN (G, Φ) defines the conditional probability distribution

Pr(y|x) = \frac{1}{Z(x)} \prod_{q \in Q(G)} φ_q(x_q, y_q), where Z(x) = \sum_{y'} \prod_{q \in Q(G)} φ_q(x_q, y'_q)

is the x-dependent normalization factor called the partition function. One benefit of CMNs for classification tasks is that they are typically discriminatively trained by maximizing a function based on the conditional likelihood Pr(Y|X) over a training set, rather than the joint likelihood Pr(Y, X). A common representation for the potentials φ_q(y_q, x_q) is a log-linear model:

φ_q(y_q, x_q) = \exp\{ \sum_b w_q^b f_q^b(y_q, x_q) \} = \exp\{ w_q^\top · f_q(y_q, x_q) \}.

Here w_q^b is the weight of feature f_q^b, and w_q and f_q are column vectors of q's weights and features. Now we show how we use CMNs to predict elements in observation sequences. Given a sequence x of length L, our task is to identify the types and locations of all instances of patterns in P = {P_1, ..., P_N} that are present in x, where P is a set of pattern types. In the genomics domain, x is a DNA sequence and P is a set of regulatory elements such as {gene, promoter, terminator}. A match m of a pattern to x specifies a subsequence x_{i:j} and a pattern type P_k ∈ P. We denote the set of all matches of pattern types in P to x with M(P, x). We call a subset C = (m_1, m_2, ..., m_M) of M(P, x) a configuration.

Figure 3: (a) The structure of the CMN-OP induced for the sequence x of length L. The a-th pattern match Y_a is conditionally independent of its non-neighbors given its neighbors X, Y_{a−1} and Y_{a+1}. (b) The interaction graph we use in the regulatory element prediction task. Vertices are the pattern types along with START and END. Edges connect pattern types that may be adjacent. Edges from START connect to pattern types that may be the first match; edges into END come from pattern types that may be the last match.

Matches in C are allowed to overlap; however, we assume that no two matches in C have the same start index¹. Thus, the maximum size of a configuration C is L, and the elements of C may be ordered by start position such that m_a ≤ m_{a+1}. Our models define a conditional probability distribution over configurations given an input sequence x. Given a sequence x of length L, the output random variables of our models are Y = (Y_1, Y_2, ..., Y_L, Y_{L+1}). We represent a configuration C = (m_1, m_2, ..., m_M) with Y in the following way. If a is less than or equal to the configuration size M, we assign Y_a to the a-th match in C (Y_a = m_a); otherwise we set Y_a equal to a special value null. Note that Y_{L+1} will always be null; it is included for notational convenience. Our models define the conditional distribution Pr(Y|X). Our models assume that a pattern match is independent of other matches given its neighbors. That is, Y_a is independent of Y_{a′} for a′ < a − 1 or a′ > a + 1 given X, Y_{a−1} and Y_{a+1}. This is analogous to the HMM assumption that the next state depends only on the current state. The conditional Markov network structure associated with this assumption is shown in Figure 3(a). The cliques in this graph are {Y_a, Y_{a+1}, X} for 1 ≤ a ≤ L. We denote the clique {Y_a, Y_{a+1}, X} with q_a. We define the clique potential of q_a for a ≠ 1 as the product of a pattern match term g(y_a, x) and a pattern interaction term h(y_a, y_{a+1}, x). The functions g() and h() are shared among all cliques, so φ_{q_a}(y_a, y_{a+1}, x) = g(y_a, x) × h(y_a, y_{a+1}, x) for 2 ≤ a ≤ L. The first clique q_1 includes an additional start placement term α(y_1, x) that scores the type and position of the first match y_1.
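The defining equation Pr(y|x) = (1/Z(x)) ∏_q φ_q(x_q, y_q) with log-linear potentials can be checked on a toy chain; the features and weights below are hypothetical, chosen only to make the normalization concrete:

```python
import itertools
import math

# Toy chain CMN over Y = (Y1, Y2, Y3), each in {0, 1}, conditioned on a
# single scalar observation x. Cliques are the edges (Y1,Y2) and (Y2,Y3).
x = 1.0

def phi(ya, yb):
    """Log-linear clique potential exp(w . f) with two hypothetical features."""
    f_agree = 1.0 if ya == yb else 0.0   # neighbors take the same label
    f_obs = x if ya == 1 else 0.0        # observation-dependent feature
    return math.exp(0.8 * f_agree + 0.5 * f_obs)

def prob(y):
    """Pr(y|x): product of clique potentials divided by the partition function."""
    score = phi(y[0], y[1]) * phi(y[1], y[2])
    Z = sum(phi(v[0], v[1]) * phi(v[1], v[2])
            for v in itertools.product((0, 1), repeat=3))
    return score / Z
```

Summing `prob` over all eight labelings returns 1, which is exactly the role of the partition function Z(x).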
To ensure that real matches come before any null settings and that additional null settings do not affect Pr(y|x), we require that g(null, x) = 1, h(null, null, x) = 1 and h(null, y_a, x) = 0 for all x and y_a ≠ null. The pattern match term measures the agreement between the matched subsequence and the pattern type associated with y_a. In the genomics domain our representation of the sequence match term is based on regulatory element specific HMMs. The pattern interaction term measures the compatibility between the types and spacing (or overlap) of adjacent matches. A Conditional Markov Network for Overlapping Patterns (CMN-OP) = (g, h, α) specifies a pattern match function g, pattern interaction function h and start placement function α that define the conditional distribution

Pr(y|x) = \frac{1}{Z(x)} \prod_{a=1}^{L} φ_{q_a}(y_a, y_{a+1}, x) = \frac{α(y_1)}{Z(x)} \prod_{a=1}^{L} g(y_a, x)\, h(y_a, y_{a+1}, x),

where Z(x) is the normalizing partition function. Using the log-linear representation for g() and h() we have

Pr(y|x) = \frac{α(y_1)}{Z(x)} \exp\{ \sum_{a=1}^{L} w_g^\top · f_g(y_a, x) + w_h^\top · f_h(y_a, y_{a+1}, x) \}.

Here w_g, f_g, w_h and f_h are g()'s and h()'s weights and features.

¹We only need to require configurations to be ordered sets. We make this slightly more stringent assumption to simplify the description of the model.

2.1 Representation
Our representation of the pattern match function g() is based on HMMs. We construct an HMM with parameters Θ_k for each pattern type P_k, along with a single background HMM with parameters Θ_B. The pattern match score of y_a ≠ null with subsequence x_{i:j} and pattern type P_k is the odds Pr(x_{i:j}|Θ_k) / Pr(x_{i:j}|Θ_B). We have a feature f_g^k(y_a, x) for each pattern type P_k whose value is the logarithm of the odds if the pattern associated with y_a is P_k, and zero otherwise. Currently, the weights w_g are not trained and are fixed at 1. So, w_g^\top · f_g(y_a, x) = f_g^k(y_a, x) = \log( Pr(x_{i:j}|Θ_k) / Pr(x_{i:j}|Θ_B) ), where P_k is the pattern of y_a.
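The pattern match feature is simply a log-odds of two HMM likelihoods. In the sketch below, the likelihood values are passed in directly as placeholders for Pr(x_{i:j}|Θ_k) and Pr(x_{i:j}|Θ_B), which in the actual system would come from the element-specific and background HMMs:

```python
import math

def pattern_match_feature(lik_pattern, lik_background):
    """f_g^k = log(Pr(x[i:j] | Theta_k) / Pr(x[i:j] | Theta_B)).

    Positive when the element-specific HMM explains the subsequence
    better than the background HMM, negative otherwise.
    """
    return math.log(lik_pattern / lik_background)
```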
Our representation of the pattern interaction function h() consists of two components: (i) a directed graph I, called the interaction graph, that contains a vertex for each pattern type in P along with special vertices START and END, and (ii) a set of weighted features for each edge in I. The interaction graph encodes qualitative domain knowledge about allowable orderings of pattern types. The value of h(y_a, y_{a+1}, x) = w_h^\top · f_h(y_a, y_{a+1}, x) is non-zero only if there is an edge in I from the pattern type associated with y_a to the pattern type associated with y_{a+1}. Thus, any configuration with non-zero probability corresponds to a path through I. Figure 3(b) shows the interaction graph we use to predict bacterial regulatory elements. It asserts that between the start positions of two genes there may be no element starts, a single terminator start, or zero or more promoter starts, with the requirement that all promoters start after the start of the terminator. Note that in CMN-OP models the interaction graph indicates legal orderings over the start positions of matches, not over complete matches as in an HMM. Each of the pattern interaction features f ∈ f_h is associated with an edge in the interaction graph I. Each edge e in I has a single bias feature f_e^b and a set of distance features f_e^D. The value of f_e^b(y_a, y_{a+1}, x) is 1 if the pattern types connected by e correspond to the types associated with y_a and y_{a+1}, and 0 otherwise. The distance features for edge e provide a discretized representation of the distance between (or amount of overlap of) two adjacent matches of types consistent with e. We associate each distance feature f_e^r ∈ f_e^D with a range r. The value of f_e^r(y_a, y_{a+1}, x) is 1 if the (possibly negative) difference between the start position of y_{a+1} and the end position of y_a is in r; otherwise it is 0. The set of ranges for a given edge are non-overlapping.
So, h(y_a, y_{a+1}, x) = \exp( w_h^\top · f_h(y_a, y_{a+1}, x) ) = \exp( w_e^b + w_e^r ), where e is the edge for y_a and y_{a+1}, w_e^b is the weight of the bias feature f_e^b, and w_e^r is the weight of the single distance feature f_e^r whose range contains the spacing between the matches of y_a and y_{a+1}.

3 Inference and Training
Given a trained model with weights w and an input sequence x, the inference task is to determine properties of the distribution Pr(y|x). Since the cliques of a CMN-OP form a chain, we could perform exact inference with the belief propagation (BP) algorithm [8]. The number of joint settings in one clique grows as O(L⁴), however, giving BP a running time of O(L⁵), which is impractical for longer sequences. The exact inference procedure we use, which is inspired by the energy minimization algorithm for pictorial structures [3], runs in O(L²) time. Our inference procedure exploits two properties of our representation of the pattern interaction function h(). First, we use the invariance of h(y_a, y_{a+1}, x) to the start position of y_a and the end position of y_{a+1}. In this section, we make this explicit by writing h(y_a, y_{a+1}, x) as h(k, k′, d), where k and k′ are the pattern types of y_a and y_{a+1} respectively, and d is the distance between (or overlap of, if negative) y_a and y_{a+1}. The second property we use is the fact that the difference between h(k, k′, d) and h(k, k′, d + 1) is non-zero only if d is the maximum value of the range of one of the distance features f_e^r ∈ f_e^D associated with the edge e = k → k′. The inference procedure we use for our CMN-OP models consists of a forward pass and a backward pass. Due to space limitations, we only describe the key aspects of the forward pass.
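To make the exp(w_e^b + w_e^r) form of the interaction term concrete, here is a sketch of h for a single interaction-graph edge; the bin boundaries and weights are hypothetical, and a spacing outside every bin is treated as incompatible (h = 0), which is one possible reading of the construction:

```python
import math

# Hypothetical distance bins (in bases) and weights for one edge k -> k'.
# Negative distances mean the two adjacent matches overlap.
DISTANCE_BINS = [(-50, -1), (0, 20), (21, 200)]
W_BIAS = 0.3
W_DIST = [-0.5, 1.2, 0.1]

def h_edge(start_next, end_prev):
    """Interaction term exp(w_b + w_r), where w_r is the weight of the
    single distance bin containing d = start(next) - end(prev)."""
    d = start_next - end_prev
    for (lo, hi), w in zip(DISTANCE_BINS, W_DIST):
        if lo <= d <= hi:
            return math.exp(W_BIAS + w)
    return 0.0  # spacing falls outside all bins for this edge
```

Because the bins are non-overlapping, at most one distance feature fires, matching the single-w_e^r form above.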
The forward pass fills an L × L × N matrix F, where we define F(i, j, k) to be the sum of the scores of all partial configurations ỹ that end with y∗, where y∗ is the match of x_{i:j} to P_k:

F(i, j, k) ≡ g(y∗, x) \sum_{ỹ} α(y_1, x) \prod_{y_a ∈ (ỹ \setminus y∗)} g(y_a, x)\, h(y_a, y_{a+1}, x).

Here ỹ = (y_1, y_2, ..., y∗) and \setminus denotes set difference. F has a recursive formulation:

F(i, j, k) = g_k(y∗, x) \Big[ α_k(i) + \sum_{i′=1}^{i−1} \sum_{j′=i′}^{L} \sum_{k′=1}^{N} F(i′, j′, k′)\, h(k′, k, i − j′) \Big].

The triple sum is over all possible adjacent previous matches. Due to the first property of h just discussed, the value of the triple sum for setting F(i, j, k) and F(i, j′, k) is the same for any j′. We cache the value of the triple sum in the L × N matrix F_in, where F_in(i, k) holds the value needed for setting F(i, j′, k) for any j′. We begin the forward pass with i = 1 and set the values of F(1, j, k) for all j and k before incrementing i. After i is incremented, we use the second property of h to update F_in in time O(N²B), which is independent of the sequence length L, where B is the number of "bins" used in our discretized representation of distance. The overall time complexity of the forward pass is O(LN²B + L²N). The first term is for updating F_in and the second term is for the constant-time setting of the O(L²N) elements of F. If the sequence length L dominates N and B, as it does in the gene regulation domain, the effective running time is O(L²). Training involves estimating the weights w from a training set D. An element d of D is a pair (x_d, ŷ_d), where x_d is a fully observable sequence and ŷ_d is a partially observable configuration for x_d. To help avoid overfitting, we assume a zero-mean Gaussian prior over the weights and optimize the log of the MAP objective function following Taskar et al. [10]:

L(w, D) = \sum_{d ∈ D} \log Pr(ŷ_d | x_d) − \frac{w^\top · w}{2σ²}.
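The F recursion can be exercised end-to-end on a toy instance. The scoring functions g, h, α below are hypothetical stand-ins for the learned terms, and this naive version keeps the full triple sum rather than the F_in cache:

```python
L, N = 3, 2  # toy sequence length and number of pattern types

def g(k, i, j):        # pattern match term for x[i:j] vs. pattern k (toy values)
    return 1.0 + 0.1 * k + 0.01 * (j - i)

def h(k1, k2, d):      # interaction term, d = start(next) - end(prev) (toy values)
    return 1.0 / (1.0 + abs(d) + 0.5 * abs(k1 - k2))

def alpha(k, i):       # start placement term (toy values)
    return 0.5 + 0.1 * i + 0.05 * k

matches = [(i, j, k) for i in range(1, L + 1)
                     for j in range(i, L + 1)
                     for k in range(N)]

# F[(i, j, k)]: sum of scores of all partial configurations (matches ordered
# by strictly increasing start) ending with the match of x[i:j] to pattern k.
F = {}
for i in range(1, L + 1):   # ascending start, so earlier-start entries are ready
    for j in range(i, L + 1):
        for k in range(N):
            inner = alpha(k, i)
            for (i2, j2, k2) in matches:
                if i2 < i:  # any previous match with a strictly smaller start
                    inner += F[(i2, j2, k2)] * h(k2, k, i - j2)
            F[(i, j, k)] = g(k, i, j) * inner
```

Brute-force enumeration of all ordered match chains gives the same values, which is a useful sanity check before adding the F_in cache that brings the cost down to O(L²).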
The value of the gradient ∇L(w, D) in the direction of weight w ∈ w is:

\frac{∂L(w, D)}{∂w} = \sum_{d ∈ D} \big( E[C_w | x_d, ŷ_d] − E[C_w | x_d] \big) − \frac{w}{σ²},

where C_w is a random variable representing the number of times the binary feature of w is 1. The expectation is relative to the Pr(y|x) defined by the current setting of w. The value in the summation is the difference between the expected number of times w is used given both x and ŷ and the expected number of times w is used given just x. The last term is the shrinking effect of the prior. With the gradient in hand, we can use any of a number of optimization procedures to set w. We use the quasi-Newton method BFGS [6].

4 Empirical Evaluation
In this section we evaluate our Markov network approach by applying it to recognize regulatory signals in the E. coli genome. Our hypothesis is that the CMN-OP models will provide more accurate predictions than either of two baselines: (i) predicting the signals independently, and (ii) predicting the signals using an HMM. All three approaches we evaluate – the Markov networks and the two baselines – employ two submodels [1]. The first submodel is an HMM that is used to predict candidate promoters and the second submodel is a stochastic context free grammar (SCFG) that is used to predict candidate terminators.

Figure 4: Precision-recall curves for the CMN-OP, HMM and SCAN models on (a) the promoter localization task, (b) the terminator localization task and (c) the terminator localization task for terminators known to overlap genes or promoters.
The first baseline approach, which we refer to as SCAN, involves "scanning" a promoter model and a terminator model along each sequence being processed, and at each position producing a score indicating the likelihood that a promoter or terminator starts at that position. With this baseline, each prediction is made independently of all other predictions. The second baseline is an HMM, similar to the one depicted in Figure 1(b). The HMM that we use here does not contain the gene submodel shown in Figure 1(b) because the sequences we use in our experiments do not contain entire genes. We have the HMM and CMN-OP models make terminator and promoter predictions for each position in each test sequence. We do this using posterior decoding, which involves having a model compute the probability that a promoter (terminator) ends at a specified position given that the model explains the sequence. The data set we use consists of 2,876 subsequences of the E. coli genome that collectively contain 471 known promoters and 211 known terminators. Using tenfold cross-validation, we evaluate the three methods by considering how well each method is able to localize predicted promoters and terminators in the test sequences. Under this evaluation criterion, a correct prediction predicts a promoter (terminator) within k bases of an actual promoter (terminator). We set k to 10 for promoters and to 25 for terminators. For all methods, we plot precision-recall (PR) curves by varying a threshold on the prediction confidences. Recall is defined as TP/(TP + FN), and precision is defined as TP/(TP + FP), where TP is the number of true positive predictions, FN is the number of false negatives, and FP is the number of false positives. Figures 4(a) and 4(b) show PR curves for the promoter and terminator localization tasks, respectively. In both cases, the HMM and CMN-OP models are clearly superior to the SCAN models.
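The localization criterion and the precision/recall definitions can be sketched as follows; the greedy one-to-one matching between predictions and actual sites is an assumption, since the exact matching protocol is not spelled out here:

```python
def localization_pr(predicted, actual, k):
    """Precision, recall and F-measure when a prediction counts as a true
    positive if it falls within k bases of a not-yet-matched actual site.

    The greedy one-to-one pairing below is an assumed protocol.
    """
    unmatched = sorted(actual)
    tp = 0
    for p in sorted(predicted):
        hit = next((a for a in unmatched if abs(p - a) <= k), None)
        if hit is not None:
            unmatched.remove(hit)   # each actual site matches at most once
            tp += 1
    fp = len(predicted) - tp
    fn = len(actual) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```

Sweeping a confidence threshold over the predictions and recording (precision, recall) at each setting yields the PR curves of Figure 4.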
This result indicates the value of taking the regularities of relationships among these signals into account when making predictions. For the case of localizing terminators, the CMN-OP PR curve dominates the curve for the HMMs. The difference is not so marked for promoter localization, however: although the CMN-OP curve is better at high recall levels, the HMM curve is somewhat better at low recall levels. Overall, we conclude that these results show the benefits of representing relationships among predicted signals (as is done in the HMMs and CMN-OP models) and of being able to represent and predict overlapping signals. Figure 4(c) shows the PR curves specifically for a set of filtered test sets in which each actual terminator overlaps either a gene or a promoter. These curves indicate that the CMN-OP models have a particular advantage in these cases.

5 Conclusion
We have presented an approach, based on Markov networks, that is able to naturally represent and predict overlapping sequence elements. Our approach first generates a set of candidate elements by having a set of models independently make predictions. Then, we construct a Markov network to decide which candidate predictions to accept. We have empirically validated our approach by using it to recognize promoter and terminator "signals" in a bacterial genome. Our experiments demonstrate that our approach provides more accurate predictions than baseline HMM models. Although we describe and evaluate our approach in the context of genomics, we believe that it has other applications as well. Consider, for example, the task of segmenting and indexing audio and video streams [7]. We might want to annotate segments of a stream that correspond to specific types of events or to particular individuals who appear or are speaking. Clearly, there might be overlapping events and appearances of people, and moreover, there are likely to be dependencies among events and appearances.
Any problem with these two properties is a good candidate for our Markov-network approach. Acknowledgments This research was supported in part by NSF grant IIS-0093016, and NIH grants T15-LM07359-01 and R01-LM07050-01. References [1] J. Bockhorst, Y. Qiu, J. Glasner, M. Liu, F. Blattner, and M. Craven. Predicting bacterial transcription units using sequence and expression data. Bioinformatics, 19(Suppl. 1):i34–i43, 2003. [2] M. Ermolaeva, H. Khalak, O. White, H. Smith, and S. Salzberg. Prediction of transcription terminators in bacterial genomes. J. of Molecular Biology, 301:27–33, 2000. [3] P. Felzenszwalb and D. Huttenlocher. Efficient matching of pictorial structures. In Proc. of the 2000 IEEE Conf. on Computer Vision and Pattern Recognition, pages 66–75. [4] Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. Machine Learning, 29:245–273, 1997. [5] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of the 18th Internat. Conf. on Machine Learning, pages 282–289, Williamstown, MA, 2001. Morgan Kaufmann. [6] R. Malouf. A comparison of algorithms for maximum entropy parameter estimation. Sixth Workshop on Computational Language Learning (CoNLL), 2002. [7] National Institute of Standards and Technology. TREC video retrieval evaluation (TRECVID), 2004. http://www-nlpir.nist.gov/projects/t01v/. [8] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA, 1988. [9] A. Pedersen, P. Baldi, S. Brunak, and Y. Chauvin. Characterization of prokaryotic and eukaryotic promoters using hidden Markov models. In Proc. of the 4th International Conf. on Intelligent Systems for Molecular Biology, pages 182–191, St. Louis, MO, 1996. AAAI Press. [10] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proc. of the 18th International Conf.
on Uncertainty in Artificial Intelligence, Edmonton, Alberta, 2002. Morgan Kaufmann. [11] T. Yada, Y. Totoki, T. Takagi, and K. Nakai. A novel bacterial gene-finding system with improved accuracy in locating start codons. DNA Research, 8(3):97–106, 2001.
|
2004
|
119
|
2,529
|
Maximal Margin Labeling for Multi-Topic Text Categorization Hideto Kazawa, Tomonori Izumitani, Hirotoshi Taira and Eisaku Maeda NTT Communication Science Laboratories Nippon Telegraph and Telephone Corporation 2-4 Hikaridai, Seikacho, Sorakugun, Kyoto 619-0237 Japan {kazawa,izumi,taira,maeda}@cslab.kecl.ntt.co.jp Abstract In this paper, we address the problem of statistical learning for multi-topic text categorization (MTC), whose goal is to choose all relevant topics (a label) from a given set of topics. The proposed algorithm, Maximal Margin Labeling (MML), treats all possible labels as independent classes and learns a multi-class classifier on the induced multi-class categorization problem. To cope with the data sparseness caused by the huge number of possible labels, MML combines some prior knowledge about label prototypes and a maximal margin criterion in a novel way. Experiments with multi-topic Web pages show that MML outperforms existing learning algorithms including Support Vector Machines. 1 Multi-topic Text Categorization (MTC) This paper addresses the problem of learning for multi-topic text categorization (MTC), whose goal is to select all topics relevant to a text from a given set of topics. In MTC, multiple topics may be relevant to a single text. We thus call a set of topics a label, and say that a text is assigned a label, not a topic. In almost all previous text categorization studies (e.g. [1, 2]), the label is predicted by judging each topic’s relevance to the text. In this decomposition approach, the features specific to a topic, not a label, are regarded as important features. However, the approach may result in inefficient learning, as we will explain in the following example. Imagine an MTC problem of scientific papers where quantum computing papers are assigned the multi-topic label “quantum physics (QP) & computer science (CS)”. (QP and CS are topics in this example.)
Since there are some words specific to quantum computing such as “qbit”, one can say that efficient MTC learners should use such words to assign the label QP & CS. However, the decomposition approach is likely to ignore these words since they are only specific to a small portion of the whole QP or CS papers (there are many more QP and CS papers than quantum computing papers), and therefore are not discriminative features for either topic QP or CS. (Footnote 1: Qbit is a unit of quantum information; it frequently appears in the quantum computing literature, but is rarely seen elsewhere.)

Table 1: Notation
  x (∈ R^d)                      A document vector
  t1, t2, . . . , tl             Topics
  T                              The set of all topics
  L, λ (⊂ T)                     A label
  L[j]                           The binary representation of L: 1 if tj ∈ L and 0 otherwise
  Λ (= 2^T)                      The set of all possible labels
  {(xi, Li)}, i = 1, . . . , m   Training samples

Parametric Mixture Model (PMM) [3] adopts another approach to MTC. It is assumed in PMM that multi-topic texts are generated from a mixture of topic-specific word distributions. Its decision on labeling is done at once, not separately for each topic. However, PMM also has a problem with multi-topic specific features such as “qbit”, since it is impossible for texts to have such features given PMM’s mixture process. These problems with multi-topic specific features are caused by dependency assumptions between labels, which are explicitly or implicitly made in existing methods. To solve these problems, we propose Maximal Margin Labeling, which treats labels as independent classes and learns a multi-class classifier on the induced multi-class problem. In this paper, we first discuss why multi-class classifiers cannot be directly applied to MTC in Section 2. We then propose MML in Section 3, and address implementation issues in Section 4. In Section 5, MML is experimentally compared with existing methods using a collection of multi-topic Web pages. We summarize this paper in Section 6.
2 Solving MTC as a Multi-Class Categorization To discuss why existing multi-class classifiers do not work in MTC, we start from the multi-class classifier proposed in [4]. Hereafter we use the notation given in Table 1. The multi-class classifier in [4] categorizes an object into the class whose prototype vector is the closest to the object’s feature vector. By substituting label for class, the classifier can be written as follows:

f(x) = \arg\max_{\lambda \in \Lambda} \langle x, m_\lambda \rangle_X \quad (1)

where ⟨·, ·⟩_X is the inner product of R^d, and m_λ ∈ R^d is the prototype vector of label λ. Following a similar argument as in [4], the prototype vectors are learned by solving the following maximal margin problem (Footnote 2: In Eq. (2), we penalize all violations of the margin constraints. On the other hand, Crammer and Singer penalize only the largest violation of the margin constraint for each training sample [4]. We chose the “penalize-all” approach since it leads to an optimization problem without equality constraints (see Eq. (7)), which is much easier to solve than the one in [4].):

\min_M \; \frac{1}{2}\|M\|^2 + C \sum_{i=1}^{m} \sum_{\lambda \in \Lambda,\, \lambda \neq L_i} \xi_i^\lambda
\quad \text{s.t.} \quad \langle x_i, m_{L_i} \rangle_X - \langle x_i, m_\lambda \rangle_X \ge 1 - \xi_i^\lambda \;\; \text{for } 1 \le i \le m,\; \forall \lambda \neq L_i, \quad (2)

where M is the prototype matrix whose columns are the prototype vectors, and ‖M‖ is the Frobenius norm of M. Note that Eq. (1) and Eq. (2) cover not only training samples’ labels, but also all possible labels. This is because labels unseen in training samples may be relevant to test samples. In usual multi-class problems, such unseen labels seldom exist. In MTC, however, the number of labels is generally very large (e.g. one of our datasets has 1,054 labels (Table 2)), and unseen labels often exist. Thus it is necessary to consider all possible labels in Eq. (1) and Eq. (2), since it is impossible to know which unseen labels are present in the test samples. There are two problems with Eq. (1) and Eq. (2). The first problem is that they involve the prototype vectors of seldom or never seen labels.
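A minimal sketch of the prototype classifier in Eq. (1); the label prototypes below are toy values for the QP/CS example from Section 1, not learned ones:

```python
def classify_prototype(x, prototypes):
    """Eq. (1): assign x the label whose prototype vector has the largest
    inner product with x; `prototypes` maps labels to prototype vectors."""
    def inner(u, v):
        return sum(a * b for a, b in zip(u, v))
    return max(prototypes, key=lambda lab: inner(x, prototypes[lab]))

# Toy prototypes (illustrative values only).
protos = {
    frozenset({"QP"}): [1.0, 0.0],
    frozenset({"CS"}): [0.0, 1.0],
    frozenset({"QP", "CS"}): [0.7, 0.7],
}
assert classify_prototype([1.0, 1.0], protos) == frozenset({"QP", "CS"})
```

Learning the prototypes themselves is what the margin problem of Eq. (2) addresses.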
Without the help of prior knowledge about where the prototype vectors should be, it is impossible to obtain appropriate prototype vectors for such labels. The second problem is that these equations are computationally too demanding since they involve combinatorial maximization and summation over all possible labels, whose number can be quite large. (For example, the number is around 2^30 in the datasets used in our experiments.) We will address the first problem in Section 3 and the second problem in Section 4.

3 Maximal Margin Labeling In this section, we incorporate some prior knowledge about the location of prototype vectors into Eq. (1) and Eq. (2), and propose a novel MTC learning algorithm, Maximal Margin Labeling (MML). As prior knowledge, we simply assume that the prototype vectors of similar labels should be placed close to each other. Based on this assumption, we first rewrite Eq. (1) to yield

f(x) = \arg\max_{\lambda \in \Lambda} \langle M^T x, e_\lambda \rangle_L, \quad (3)

where ⟨·, ·⟩_L is the inner product of R^{|Λ|} and {e_λ}_{λ∈Λ} is the orthonormal basis of R^{|Λ|}. The classifier of Eq. (3) can be interpreted as a two-step process: the first step is to map the vector x into R^{|Λ|} by M^T, and the second step is to find the e_λ closest to the image M^T x. Then we replace {e_λ}_{λ∈Λ} with (generally) non-orthogonal vectors {φ(λ)}_{λ∈Λ} whose geometrical configuration reflects label similarity. More formally speaking, we use vectors {φ(λ)}_{λ∈Λ} that satisfy the condition

\langle \varphi(\lambda_1), \varphi(\lambda_2) \rangle_S = S(\lambda_1, \lambda_2) \quad \text{for } \forall \lambda_1, \lambda_2 \in \Lambda, \quad (4)

where ⟨·, ·⟩_S is an inner product of the vector space spanned by {φ(λ)}_{λ∈Λ}, and S is a Mercer kernel [5] on Λ × Λ that serves as a similarity measure between labels. We call the vector space spanned by {φ(λ)} V_S. With this replacement, MML’s classifier is written as follows:

f(x) = \arg\max_{\lambda \in \Lambda} \langle W x, \varphi(\lambda) \rangle_S, \quad (5)

where W is a linear map from R^d to V_S. W is the solution of the following problem:

\min_W \; \frac{1}{2}\|W\|^2 + C \sum_{i=1}^{m} \sum_{\lambda \in \Lambda,\, \lambda \neq L_i} \xi_i^\lambda
\quad \text{s.t.} \quad \left\langle W x_i, \frac{\varphi(L_i) - \varphi(\lambda)}{\|\varphi(L_i) - \varphi(\lambda)\|} \right\rangle_S \ge 1 - \xi_i^\lambda, \;\; \xi_i^\lambda \ge 0 \;\; \text{for } 1 \le i \le m,\; \forall \lambda \neq L_i. \quad (6)
Note that if φ(λ) is replaced by e_λ, Eq. (6) becomes identical to Eq. (2) except for a scale factor. Thus Eq. (5) and Eq. (6) are natural extensions of the multi-class classifier in [4]. We call the MTC classifier of Eq. (5) and Eq. (6) “Maximal Margin Labeling (MML)”. Figure 1 explains the margin (the inner product in Eq. (6)) in MML. The margin represents the distance from the image of the training sample x_i to the boundary between the correct label L_i and a wrong label λ. MML optimizes the linear map W so that the smallest margin between all training samples and all possible labels becomes maximal, along with a penalty C for the case that samples penetrate into the margin. [Figure 1: Maximal Margin Labeling]

Dual Form For numerical computation, the following Wolfe dual form of Eq. (6) is more convenient. (We omit its derivation due to space limits.)

\max_{\alpha_i^\lambda} \; \sum_{i,\lambda} \alpha_i^\lambda - \frac{1}{2} \sum_{i,\lambda} \sum_{i',\lambda'} \alpha_i^\lambda \alpha_{i'}^{\lambda'} (x_i \cdot x_{i'}) \, \frac{S(L_i, L_{i'}) - S(L_i, \lambda') - S(\lambda, L_{i'}) + S(\lambda, \lambda')}{2 \sqrt{(1 - S(L_i, \lambda))(1 - S(L_{i'}, \lambda'))}}
\quad \text{s.t.} \quad 0 \le \alpha_i^\lambda \le C \;\; \text{for } 1 \le i \le m,\; \forall \lambda \neq L_i, \quad (7)

where we denote \sum_{i=1}^{m} \sum_{\lambda \in \Lambda, \lambda \neq L_i} by \sum_{i,\lambda}, and the α_i^λ are the dual variables corresponding to the first inequality constraints in Eq. (6). Note that Eq. (7) does not contain φ(λ): all the computations involving φ can be done through the label similarity S. Additionally, x_i only appears in inner products, and therefore can be replaced by any kernel of x. Using the solution α_i^λ of Eq. (7), MML’s classifier in Eq. (5) can be written as follows:

f(x) = \arg\max_{L \in \Lambda} \sum_{i,\lambda} \alpha_i^\lambda (x \cdot x_i) \, \frac{S(L_i, L) - S(\lambda, L)}{\sqrt{2(1 - S(L_i, \lambda))}}. \quad (8)

Label Similarity As examples of label similarity, we use two similarity measures: the Dice measure and the cosine measure.

Dice measure: S_D(\lambda_1, \lambda_2) = \frac{2|\lambda_1 \cap \lambda_2|}{|\lambda_1| + |\lambda_2|} = \frac{2 \sum_{j=1}^{l} \lambda_1[j]\lambda_2[j]}{\sum_{j=1}^{l} \lambda_1[j] + \sum_{j=1}^{l} \lambda_2[j]}. \quad (9)

Cosine measure: S_C(\lambda_1, \lambda_2) = \frac{|\lambda_1 \cap \lambda_2|}{\sqrt{|\lambda_1|}\,\sqrt{|\lambda_2|}} = \frac{\sum_{j=1}^{l} \lambda_1[j]\lambda_2[j]}{\sqrt{\sum_{j=1}^{l} \lambda_1[j]}\,\sqrt{\sum_{j=1}^{l} \lambda_2[j]}}. \quad (10)

4 Efficient Implementation 4.1 Approximation in Learning Eq. (7) contains the sum over all possible labels.
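Before turning to efficiency, the label similarities of Eqs. (9) and (10) and the embedding condition of Eq. (4) can be made concrete. The sketch below computes Dice and cosine similarity on topic sets, builds the Gram matrix for a handful of toy labels, and recovers explicit vectors φ via a Cholesky factorization; the labels and the use of numpy are illustrative choices, not the paper's implementation (which never materializes φ, working only through S):

```python
import math
import numpy as np

def dice(l1, l2):
    """Dice similarity S_D between two labels (sets of topics), Eq. (9)."""
    return 2 * len(l1 & l2) / (len(l1) + len(l2))

def cosine(l1, l2):
    """Cosine similarity S_C between two labels, Eq. (10)."""
    return len(l1 & l2) / (math.sqrt(len(l1)) * math.sqrt(len(l2)))

# Toy label set; in general a label is any non-empty subset of topics.
labels = [{"QP"}, {"CS"}, {"QP", "CS"}]

# Gram matrix S[i, j] = S_D(labels[i], labels[j]).
S = np.array([[dice(a, b) for b in labels] for a in labels])

# A Cholesky factor Phi satisfies Phi @ Phi.T == S, so its rows serve as
# explicit vectors phi(lambda) realizing Eq. (4) for this finite label set.
Phi = np.linalg.cholesky(S)
assert np.allclose(Phi @ Phi.T, S)
```

The factorization requires the Gram matrix to be positive definite, which holds for this toy example; the Mercer-kernel property of S is what makes such embeddings possible in general.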
As the number of topics (l) increases, this summation rapidly becomes intractable since |Λ| grows exponentially as 2^l. (Footnote 3: The discussion of label similarity is easily extended to include the case that both λ_1 and λ_2 are empty, although we omit that case due to space limits.) To circumvent this problem, we approximate the sum over all possible labels in Eq. (7) by the partial sum over the α_i^λ with |(L_i ∩ λ^c) ∪ (L_i^c ∩ λ)| = 1, i.e. over labels λ that differ from L_i in exactly one topic, and set all the other α_i^λ to zero. This approximation reduces the burden of the summation quite a lot: the number of summands is reduced from 2^l to l, which is a huge reduction especially when many topics exist. To understand the rationale behind the approximation, first note that α_i^λ is the dual variable corresponding to the first inequality constraint (the margin constraint) in Eq. (7). Thus α_i^λ is non-zero if and only if W x_i falls in the margin between φ(L_i) and φ(λ). We assume that this margin violation mainly occurs when φ(λ) is “close” to φ(L_i), i.e. |(L_i ∩ λ^c) ∪ (L_i^c ∩ λ)| = 1. If this assumption holds well, the proposed approximation of the sum will lead to a good approximation of the exact solution.

4.2 Polynomial Time Algorithms for Classification The classification of MML (Eq. (8)) involves combinatorial maximization over all possible labels, so it can be a computationally demanding process. However, efficient classification algorithms are available when either the cosine measure or the Dice measure is used as label similarity. Eq. (8) can be divided into subproblems by the number of topics in a label:

f(x) = \arg\max_{L \in \{\hat{L}_1, \hat{L}_2, \ldots, \hat{L}_l\}} g(x, L), \quad (11)
\hat{L}_n = \arg\max_{L \in \Lambda,\, |L| = n} g(x, L), \quad (12)

where, with n = |L|,

g(x, L) = \sum_{j=1}^{l} c_n[j] \, L[j], \qquad
c_n[j] = \sum_{i,\lambda} \frac{\alpha_i^\lambda (x \cdot x_i)}{\sqrt{2(1 - S_D(L_i, \lambda))}} \left( \frac{2 L_i[j]}{|L_i| + n} - \frac{2 \lambda[j]}{|\lambda| + n} \right) \;\; \text{if } S_D \text{ is used},
\qquad
c_n[j] = \sum_{i,\lambda} \frac{\alpha_i^\lambda (x \cdot x_i)}{\sqrt{2(1 - S_C(L_i, \lambda))}} \left( \frac{L_i[j]}{\sqrt{|L_i|}\sqrt{n}} - \frac{\lambda[j]}{\sqrt{|\lambda|}\sqrt{n}} \right) \;\; \text{if } S_C \text{ is used}. \quad (13)

The computational cost of Eq. (13) for all j is O(n_α l) (where n_α is the number of non-zero α), and that of Eq. (12) is O(l log l).
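Since g(x, L) in Eq. (13) is linear in the topic indicators L[j] once the label size n is fixed, the inner maximization of Eq. (12) is solved by simply taking the n largest coefficients c_n[j]. A sketch, where the coefficient values are illustrative (a real implementation would compute them from the dual solution via Eq. (13)):

```python
def best_label_of_size(c_n, n):
    """Solve Eq. (12): among all labels with exactly n topics, the maximizer
    of sum_j c_n[j] * L[j] is the set of the n largest-scoring topics."""
    ranked = sorted(range(len(c_n)), key=lambda j: c_n[j], reverse=True)
    return set(ranked[:n])

def classify_mml(all_c):
    """Solve Eq. (11): take the best candidate label over every size n.
    all_c[n-1] is the coefficient vector c_n for label size n."""
    candidates = [best_label_of_size(c_n, n + 1)
                  for n, c_n in enumerate(all_c)]
    # Score each size-n candidate with its own coefficient vector c_n.
    return max(candidates,
               key=lambda L: sum(all_c[len(L) - 1][j] for j in L))
```

Sorting gives the O(l log l) per-size cost quoted above, and looping over the l possible sizes yields the overall polynomial bound.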
Thus the total cost of the classification by Eq. (11) is O(n_α l² + l² log l). On the other hand, n_α is O(ml) under the approximation described above. Therefore, the classification can be done within O(ml³) computational steps, which is a significant reduction over the brute-force search in Eq. (8).

5 Experiments In this section, we report experiments that compared MML to PMM [3], SVM [6], and BoosTexter [2] using a collection of Web pages. (Footnote 5: For each topic, an SVM classifier is trained to predict whether the topic is relevant (positive) or irrelevant (negative) to input documents.) We used a normalized linear kernel k(x, x′) = x · x′ / (‖x‖‖x′‖) in MML and SVM. As for BoosTexter, “real abstaining AdaBoost.MH” was used as the weak learner.

5.1 Experimental Setup The datasets used in our experiment represent the Web page collection used in [3] (Table 2). The Web pages were collected through the hyperlinks from Yahoo!’s top directory

  Dataset Name (Abbrev.)      #Text   #Voc   #Tpc   #Lbl   Label Size Frequency (%)
                                                           1     2     3     4    >=5
  Arts & Humanities (Ar)      7,484  23,146   26     599   55.6  30.5   9.7  2.8  1.4
  Business & Economy (Bu)    11,214  21,924   30     233   57.6  28.8  11.0  1.7  0.8
  Computers & Internet (Co)  12,444  34,096   33     428   69.8  18.2   7.8  3.0  1.1
  Education (Ed)             12,030  27,534   33     511   66.9  23.4   7.3  1.9  0.6
  Entertainment (En)         12,730  32,001   21     337   72.3  21.1   4.5  1.0  1.1
  Health (He)                 9,205  30,605   32     335   53.2  34.0   9.5  2.4  0.9
  Recreation (Rc)            12,828  30,324   22     530   69.2  23.1   5.6  1.4  0.6
  Reference (Rf)              8,027  39,679   33     275   85.5  12.6   1.5  0.3  0.1
  Science (Si)                6,428  37,187   40     457   68.0  22.3   7.3  1.9  0.5
  Social Science (SS)        12,111  52,350   39     361   78.4  17.0   3.7  0.7  0.3
  Society & Culture (SC)     14,512  31,802   27   1,054   59.6  26.1   9.2  2.9  2.2

Table 2: A summary of the Web page datasets. “#Text” is the number of texts in the dataset, “#Voc” the number of vocabularies (i.e. features), “#Tpc” the number of topics, “#Lbl” the number of labels, and “Label Size Frequency” is the relative frequency of each label size.
(Label size is the number of topics in a label.)

(www.yahoo.com), and then divided into 11 datasets by Yahoo!’s top category. Each page is labeled with the Yahoo! second-level sub-categories from which the page is hyperlinked. (Thus, the sub-categories are topics in our terms.) See [3] for more details about the collection. Then the Web pages were converted into three types of feature vectors: (a) binary vectors, where each feature indicates the presence (absence) of a term by 1 (0); (b) TF vectors, where each feature is the number of appearances of a term (term frequency); and (c) TF×IDF vectors, where each feature is the product of term frequency and inverse document frequency [7]. To select the best combinations of feature types and learning parameters, such as the penalty C for MML, the learners were trained on 2,000 Web pages with all combinations of feature and parameter listed in Table 3, and then were evaluated by labeling F-measure on independently drawn development data. The combinations which achieved the best labeling F-measures (underlined in Table 3) were used in the following experiments.

Table 3: Candidate feature types and learning parameters. (R is the number of weak hypotheses.) The underlined features and parameters were selected for the evaluation with the test data.
  Method  Feature Type  Parameter
  MML     TF, TF×IDF    C = 0.1, 1, 10
  PMM     TF            Model1, Model2
  SVM     TF, TF×IDF    C = 0.1, 1, 10
  Boost   Binary        R = {2, 4, 6, 8, 10} × 10^3

5.2 Evaluation Measures We used three measures to evaluate labeling performance: labeling F-measure, exact match ratio, and retrieval F-measure. In the following definitions, {L_i^pred}_{i=1}^{n} and {L_i^true}_{i=1}^{n} denote the predicted labels and the true labels, respectively. Labeling F-measure: the labeling F-measure F_L evaluates the average labeling performance while taking partial matches into account:

F_L = \frac{1}{n} \sum_{i=1}^{n} \frac{2 |L_i^{pred} \cap L_i^{true}|}{|L_i^{pred}| + |L_i^{true}|} = \frac{1}{n} \sum_{i=1}^{n} \frac{2 \sum_{j=1}^{l} L_i^{pred}[j] \, L_i^{true}[j]}{\sum_{j=1}^{l} (L_i^{pred}[j] + L_i^{true}[j])}. \quad (14)
  Dataset   Labeling F-measure          Exact Match Ratio           Retrieval F-measure
            MD   MC   PM   SV   BO      MD   MC   PM   SV   BO      MD   MC   PM   SV   BO
  Ar       0.55 0.44 0.50 0.46 0.38    0.44 0.32 0.21 0.29 0.22    0.30 0.26 0.24 0.29 0.22
  Bu       0.80 0.81 0.75 0.76 0.75    0.63 0.62 0.48 0.57 0.53    0.25 0.27 0.20 0.29 0.20
  Co       0.62 0.59 0.61 0.55 0.47    0.51 0.46 0.35 0.41 0.34    0.27 0.25 0.19 0.30 0.17
  Ed       0.56 0.43 0.51 0.48 0.37    0.45 0.34 0.19 0.30 0.23    0.25 0.23 0.21 0.25 0.16
  En       0.64 0.52 0.61 0.54 0.49    0.55 0.44 0.31 0.42 0.36    0.37 0.33 0.30 0.35 0.29
  He       0.74 0.74 0.66 0.67 0.60    0.58 0.53 0.34 0.47 0.39    0.35 0.35 0.23 0.35 0.26
  Rc       0.63 0.46 0.55 0.49 0.44    0.54 0.38 0.25 0.37 0.33    0.47 0.39 0.36 0.40 0.33
  Rf       0.67 0.58 0.63 0.56 0.50    0.60 0.51 0.39 0.49 0.41    0.29 0.25 0.24 0.25 0.16
  Si       0.61 0.54 0.52 0.47 0.39    0.52 0.43 0.22 0.36 0.28    0.37 0.35 0.28 0.31 0.19
  SS       0.73 0.71 0.66 0.64 0.59    0.65 0.60 0.45 0.55 0.49    0.36 0.35 0.18 0.31 0.15
  SC       0.60 0.55 0.54 0.49 0.44    0.44 0.40 0.21 0.32 0.27    0.29 0.28 0.25 0.26 0.20
  Avg      0.65 0.58 0.59 0.56 0.49    0.54 0.46 0.31 0.41 0.35    0.32 0.30 0.24 0.31 0.21

Table 4: The performance comparison by labeling F-measure (left), exact match ratio (middle) and retrieval F-measure (right). The bold figures are the best ones among the five methods, and the underlined figures the second best ones. MD, MC, PM, SV, and BO represent MML with S_D, MML with S_C, PMM, SVM and BoosTexter, respectively.

Exact Match Ratio: the exact match ratio EX counts only exact matches between the predicted label and the true label:

EX = \frac{1}{n} \sum_{i=1}^{n} I[L_i^{pred} = L_i^{true}], \quad (15)

where I[S] is 1 if the statement S is true and 0 otherwise. Retrieval F-measure: for real tasks, it is also important to evaluate retrieval performance, i.e. how accurately classifiers can find relevant texts for a given topic. The retrieval F-measure F_R measures the average retrieval performance over all topics:

F_R = \frac{1}{l} \sum_{j=1}^{l} \frac{2 \sum_{i=1}^{n} L_i^{pred}[j] \, L_i^{true}[j]}{\sum_{i=1}^{n} (L_i^{pred}[j] + L_i^{true}[j])}. \quad (16)
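The three evaluation measures of Eqs. (14)-(16) can be sketched on binary label matrices (one row per sample, one column per topic); the function names are illustrative:

```python
def labeling_f(pred, true):
    """Labeling F-measure F_L (Eq. (14)): per-sample F, averaged over samples."""
    n = len(pred)
    return sum(2 * sum(p & t for p, t in zip(pi, ti)) / (sum(pi) + sum(ti))
               for pi, ti in zip(pred, true)) / n

def exact_match(pred, true):
    """Exact match ratio EX (Eq. (15)): fraction of perfectly predicted labels."""
    return sum(pi == ti for pi, ti in zip(pred, true)) / len(pred)

def retrieval_f(pred, true):
    """Retrieval F-measure F_R (Eq. (16)): per-topic F, averaged over topics."""
    n, l = len(pred), len(pred[0])
    return sum(2 * sum(pred[i][j] & true[i][j] for i in range(n)) /
               sum(pred[i][j] + true[i][j] for i in range(n))
               for j in range(l)) / l
```

Note that F_L averages over samples while F_R averages over topics, which is why a method can rank differently under the two measures (as PMM does in Table 4). This sketch assumes each sample has at least one predicted and one true topic, and each topic appears at least once.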
(Footnote 6: F_R is called “the macro average of F-measures” in the text categorization community.)

5.3 Results First we trained the classifiers with 2,000 randomly chosen samples. We then calculated the three evaluation measures on 3,000 other randomly chosen samples. This process was repeated five times, and the resulting averaged values are shown in Table 4. Table 4 shows that MML with the Dice measure outperforms the other methods in labeling F-measure and exact match ratio. The MMLs also show the best performance with regard to retrieval F-measure, although the margins over the other methods are not as large as those observed in labeling F-measure and exact match ratio. Note that no classifier except MML with the Dice measure achieves good results on all three measures. For example, PMM shows high labeling F-measures, but its performance is rather poor when evaluated by retrieval F-measure. As the second experiment, we evaluated the classifiers trained with 250–2,000 training samples on the same test samples. Figure 2 shows each measure averaged over all datasets. It is observed that the MMLs generalize well even when the training data is small. An interesting point is that MML with the cosine measure achieves rather high labeling F-measures and retrieval F-measure with training data of smaller size. Such high performance, however, does not persist when trained on larger data. [Figure 2: The learning curves of labeling F-measure (left), exact match ratio (middle) and retrieval F-measure (right). MD, MC, PM, SV, BO mean the same as in Table 4.]

6 Conclusion In this paper, we proposed a novel learning algorithm for multi-topic text categorization. The algorithm, Maximal Margin Labeling, embeds labels (sets of topics) into a similarity-induced vector space, and learns a large margin classifier in that space. To overcome the demanding computational cost of MML, we provided an approximation method for learning and efficient classification algorithms.
In experiments on a collection of Web pages, MML outperformed other methods including SVM and showed better generalization. Acknowledgement The authors would like to thank Naonori Ueda, Kazumi Saito and Yuji Kaneda of Nippon Telegraph and Telephone Corporation for providing PMM’s code and the datasets. References [1] Thorsten Joachims. Text categorization with support vector machines: learning with many relevant features. In Claire Nédellec and Céline Rouveirol, editors, Proc. of the 10th European Conference on Machine Learning, number 1398, pages 137–142, 1998. [2] Robert E. Schapire and Yoram Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135–168, 2000. [3] Naonori Ueda and Kazumi Saito. Parametric mixture models for multi-topic text. In Advances in Neural Information Processing Systems 15, pages 1261–1268, 2003. [4] Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265–292, 2001. [5] Klaus-Robert Müller, Sebastian Mika, Gunnar Rätsch, Koji Tsuda, and Bernhard Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, 2001. [6] Vladimir N. Vapnik. Statistical Learning Theory. John Wiley & Sons, Inc., 1998. [7] Ricardo Baeza-Yates and Berthier Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999.
|
2004
|
12
|
2,530
|
Co-Validation: Using Model Disagreement on Unlabeled Data to Validate Classification Algorithms Omid Madani, David M. Pennock, Gary W. Flake Yahoo! Research Labs 3rd floor, Pasadena Ave. Pasadena, CA 91103 {madani|pennockd|flakeg}@yahoo-inc.com Abstract In the context of binary classification, we define disagreement as a measure of how often two independently-trained models differ in their classification of unlabeled data. We explore the use of disagreement for error estimation and model selection. We call the procedure co-validation, since the two models effectively (in)validate one another by comparing results on unlabeled data, which we assume is relatively cheap and plentiful compared to labeled data. We show that per-instance disagreement is an unbiased estimate of the variance of error for that instance. We also show that disagreement provides a lower bound on the prediction (generalization) error, and a tight upper bound on the “variance of prediction error”, or the variance of the average error across instances, where variance is measured across training sets. We present experimental results on several data sets exploring co-validation for error estimation and model selection. The procedure is especially effective in active learning settings, where training sets are not drawn at random and cross validation overestimates error. 1 Introduction Balancing hypothesis-space generality with predictive power is one of the central tasks in inductive learning. The difficulties that arise in seeking an appropriate tradeoff go by a variety of names—overfitting, data snooping, memorization, no free lunch, bias-variance tradeoff, etc.—and lead to a number of known solution techniques or philosophies, including regularization, minimum description length, model complexity penalization (e.g., BIC, AIC), Ockham’s razor, training with noise, ensemble methods (e.g., boosting), structural risk minimization (e.g., SVMs), cross validation, hold-out validation, etc. 
All of these methods in some way attempt to estimate or control the prediction (generalization) error of an induced function on unseen data. In this paper, we explore a method of error estimation that we call co-validation. The method trains two independent functions that in a sense validate (or invalidate) one another by examining their mutual rate of disagreement across a set of unlabeled data. In Section 2, we formally define disagreement. The measure simultaneously reflects notions of algorithm stability, model capacity, and problem complexity. For example, empirically we find that disagreement goes down when we increase the training set size, reduce the model’s capacity (complexity), or reduce the inherent difficulty of the learning problem. Intuitively, the higher the disagreement rate, the higher the average error rate of the learner, where the average is taken over both test instances and training subsets. Therefore disagreement is a measure of the fitness of the learner to the learning task. However, as researchers have noted in relation to various measures of learner stability in general [Kut02], while robust learners (i.e., algorithms with low prediction error) are stable, a stable learning algorithm does not necessarily have low prediction error. In the same vein, we show and explain that the disagreement measure provides only lower bounds on error. Still, our empirical results give evidence that disagreement can be a useful estimate in certain circumstances. Since we require a source of unlabeled data—preferably a large source in order to accurately measure disagreement—we assume a semi-supervised setting where unlabeled data is relatively cheap and plentiful while labeled data is scarce or expensive. This scenario is often realistic, most notably for text classification. We focus on the binary classification setting and analyze 0/1 error. 
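The co-validation procedure itself can be sketched as follows. The one-dimensional threshold learner is a toy stand-in for whatever learning algorithm A is being validated, and in practice the two halves would be a random split of the labeled data:

```python
def disagreement_rate(train, unlabeled, learn):
    """Co-validation sketch: train two models on disjoint halves of the
    labeled data and measure how often they disagree on unlabeled points."""
    half = len(train) // 2
    f1, f2 = learn(train[:half]), learn(train[half:])
    return sum(f1(x) != f2(x) for x in unlabeled) / len(unlabeled)

def learn_threshold(samples):
    """Toy 1-d learner: threshold halfway between the two class means.
    Assumes both classes are present in `samples`."""
    zeros = [x for x, y in samples if y == 0]
    ones = [x for x, y in samples if y == 1]
    t = (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2
    return lambda x: int(x >= t)
```

Points near the decision boundary, where the two independently trained thresholds differ, are exactly where the models disagree; well-separated points yield zero disagreement.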
In practice, cross validation—especially leave-one-out cross validation—often provides an accurate and reliable error estimate. In fact, under the usual assumption that training and test data both arise from the same distribution, k-fold cross validation provides an unbiased estimate of prediction error (for functions trained on m(1 −1/k) many instances, m being the total number of labeled instances). However, in many situations, training data may actually arise from a different distribution than test data. One extreme example of this is active learning, where training samples are explicitly chosen to be maximally informative, using a process that is neither independent nor reflective of the test distribution. Even beyond active learning, in practice the process of gathering data and obtaining labels often may bias the training set, for example because some inputs are cheaper or easier to label, or are more readily available or obvious to the data collector, etc. In these cases, the error estimate obtained from cross validation may not yield an accurate measure of the prediction error of the learned function, and model selection based on cross validation may suffer. Empirically we find that in active learning settings, disagreement often provides a more accurate estimate of prediction error and is more useful as a guide for model selection. Related to the problem of (average) error estimation is the problem of error variance estimation: both variance across test instances and variance across functions (i.e., training sets). Even if a learning algorithm exhibits relatively low average error, if it exhibits high variance, the algorithm may be undesirable depending on the end-user’s risk tolerance. Variance is also useful for algorithm comparison, to determine whether observed error differences are statistically significant. 
For variance estimation, cross validation is on much less solid footing: in fact, Bengio and Grandvalet [BG03] recently proved an impossibility result showing that no method exists for producing an unbiased estimate of the variance of cross validation error in a pure supervised setting with labeled training data only. In this work, we show how disagreement relates to certain measures of variance. First, the disagreement on a particular instance provides an unbiased estimate of the variance of error on that instance. Second, disagreement provides an upper bound on the variance of prediction error (the type of variance useful for algorithm comparison). The paper is organized as follows. In § 2 we formally define disagreement and prove how it lower-bounds prediction error and upper-bounds variance of prediction error. In § 3 we empirically explore how error estimates and model selection strategies that we devise based on disagreement compare against cross validation in standard (iid) learning settings and in active learning settings. In § 4 we discuss related work. We conclude in § 5.

2 Error, Variance, and Disagreement Denote the set of input instances by X. Each instance x ∈ X is a vector of feature attributes. Each instance has a unique true classification or label y_x ∈ {0, 1}, in general unknown to the learner. Let Z* = {(x, y_x)}^m be a set of m labeled training instances provided to the learner. The learner is an algorithm A : Z* → F that inputs labeled instances and outputs a function f ∈ F, where F is the set of all functions (classifiers) that A may output (the hypothesis space). Each f ∈ F is a function that maps instances x to labels {0, 1}. The goal of the algorithm is to choose f ∈ F to minimize 0/1 error (defined below) on future unlabeled test instances.
We assume the training set size is fixed at some m > 0, and we take expectations over one or both of two distributions: (1) the distribution X over instances in X, and (2) the distribution F induced over the functions F when learner A is trained on training sets of size m obtained by sampling from X. The 0/1 error e_{x,f} of a given function f on a given instance x equals 1 if and only if the function misclassifies the instance, and 0 otherwise; that is, e_{x,f} = 1{f(x) ≠ y_x}. We define the expected prediction error e of algorithm A as e = E_{f,x} e_{f,x}, where the expectation is taken over instances drawn from X (x ∼ X) and functions drawn from F (f ∼ F). The variance of prediction error σ² is useful for comparing different learners (e.g., [BG03]). Let e_f denote the 0/1 error of function f (i.e., e_f = E_x e_{x,f}). Then

\sigma^2 = E_f\big((e_f - e)^2\big) = E_f(e_f^2) - e^2.

Define the disagreement between two classifiers f_1 and f_2 on instance x as 1{f_1(x) ≠ f_2(x)}. The disagreement rate of learner A is then

d = E_{x, f_1, f_2} \, 1\{f_1(x) \neq f_2(x)\}, \quad (1)

where, recall, the expectation is taken over x ∼ X, f_1 ∼ F, f_2 ∼ F (with respect to training sets of some fixed size m). Let d_x be the (expected) disagreement at x when we sample functions from F: d_x = E_{f_1,f_2} 1{f_1(x) ≠ f_2(x)}. Similarly, let e_x and σ_x² denote respectively the error and variance at x: e_x = P(f(x) ≠ y_x) = E_f 1{f(x) ≠ y_x} = E_f e_{f,x}, and σ_x² = VAR(e_{f,x}) = E_f[(1{f(x) ≠ y_x} − e_x)²] = e_x(1 − e_x). (The last equality follows from the fact that e_{f,x} is a Bernoulli/binary random variable.) Now we can establish the connection between disagreement and the variance of error (of the learner) at instance x:

d_x = E_{f_1,f_2} \, 1\{(f_1(x) = y_x \text{ and } f_2(x) \neq y_x) \text{ or } (f_1(x) \neq y_x \text{ and } f_2(x) = y_x)\}
    = P\big((f_1(x) = y_x \text{ and } f_2(x) \neq y_x) \text{ or } (f_1(x) \neq y_x \text{ and } f_2(x) = y_x)\big)
    = 2 P(f_1(x) = y_x \text{ and } f_2(x) \neq y_x) = 2 e_x (1 - e_x) \;\Rightarrow\; \sigma_x^2 = d_x / 2. \quad (2)
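The per-instance identity d_x = 2 e_x (1 − e_x) in Eq. (2) can be verified by exact enumeration over the four joint outcomes of two independently drawn functions:

```python
def disagreement_at_x(e_x):
    """Exact d_x: enumerate the outcomes of two independent functions that
    each err on x with probability e_x; they disagree iff exactly one errs."""
    d = 0.0
    for err1 in (0, 1):
        for err2 in (0, 1):
            # Joint probability of this (err1, err2) outcome, by independence.
            p = (e_x if err1 else 1 - e_x) * (e_x if err2 else 1 - e_x)
            if err1 != err2:
                d += p
    return d

for e_x in (0.0, 0.1, 0.3, 0.5):
    assert abs(disagreement_at_x(e_x) - 2 * e_x * (1 - e_x)) < 1e-12
```

The two disagreeing outcomes are mutually exclusive and equally probable, which is where the factor of 2 comes from.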
The derivations follow from the fact that the expectation of a Bernoulli random variable is the same as its probability of being 1, that the two events above (the event (f1(x) = y_x and f2(x) ≠ y_x) and the event (f1(x) ≠ y_x and f2(x) = y_x)) are mutually exclusive and have equal probability, and that the classifications f1(x) and f2(x) are conditionally independent given x (the two functions are picked independently of one another). Furthermore,

d = E_x E_{f1,f2}[1{f1(x) ≠ f2(x)}] = E_x d_x = 2 E_x(σ_x²) = 2 E_x[e_x(1 − e_x)] = 2(e − E_x e_x²),

and therefore:

d/2 = e − E_x e_x².    (3)

2.1 Bounds on Variance via Disagreement

The variance of prediction error σ² can be used to test the significance of the difference in two learners' error rates. Bengio and Grandvalet [BG03] show that there is no unbiased estimator of the variance of k-fold cross-validation in the supervised setting. We can see from Equation 2 that having access to disagreement at a given instance x (labeled or not) does yield the variance of error at that instance. Thus disagreement obtained via 2-fold training gives us an unbiased estimator of σ_x², the variance of prediction error at instance x, for functions trained on m/2 instances. (Note that for unbiasedness, none of the functions should have been trained on the given instance.) Of course, to compare different algorithms on a given instance, one also needs the average error at that instance. In terms of overall variance of prediction error σ² (where error is averaged across instances and variance is taken across functions), there exist scenarios in which σ² is 0 but d is not (when the error rates of the different functions learned are the same but their errors on individual instances are negatively correlated), and scenarios in which σ² = d/2 ≠ 0. In fact, disagreement yields an upper bound:

Theorem 1 d ≥ 2σ².

Proof (sketch).
We show that the result holds for any finite sampling of functions and instances. Consider the binary (0/1) matrix M where the rows correspond to instances, the columns correspond to functions, and the entries are the binary-valued errors (entry M_{i,j} = 1{f_j(x_i) ≠ y_{x_i}}), the rows and columns being sampled from X and F respectively. The average error is then the fraction of 1 entries, and variances and disagreement can also be readily defined for the matrix. We show the inequality holds for any such n × n matrix for any n. This establishes the theorem (by using limiting arguments). Treat the 1 entries (matrix cells) as vertices in a graph, where an edge exists between two 1 entries if they share a column or a row. For a fixed number of 1 entries N (N ≤ n²), we show the difference between disagreement and variance is minimized when the number of edges is maximized. We establish that the configuration maximizing the number of edges occurs when all the 1 entries form a compact formation, that is, all the matrix entries in row i are filled before filling row i+1 with 1s. Finally, we show that for such a difference-minimizing configuration, the difference remains nonnegative. □

In typical small training sample size cases, when the errors are nonzero and not entirely correlated (the pattern of 1s in the matrix is scattered), d/2 can be significantly larger than σ². With increasing training size, the functions learned tend to make the same errors, and d and σ² both approach 0.

2.2 Bounds on Error via Disagreement

From Jensen's inequality, we have that E_x e_x² ≥ (E_x e_x)² = e²; therefore, using Eq. 3, we conclude that d/2 ≤ e − e². This implies that

(1 − √(1 − 2d))/2 ≤ e ≤ (1 + √(1 − 2d))/2.    (4)

The upper bound derived is often not informative, as it is greater than 0.5, and often we know the error is less than 0.5. Let e_l = (1 − √(1 − 2d))/2.
We next discuss whether/when e_l can be far from the actual error, and the related question of whether we can derive a good upper bound, or just a good estimator, of error using a measure based on disagreement. When the functions generated by the learner make correlated and frequent mistakes, e_l can be far from the true error. The extreme case of this is a learner that always outputs a constant function. In order to account for weak but stable learners, the error lower bound should be complemented with some measure that ensures that the learner is actually adapting (i.e., doing its job!). We explore using the training (empirical) error for this purpose. Let ẽ denote the average training error of the algorithm: ẽ = E_f ẽ_f = E_f (1/m) Σ_{x_i ∈ Z*} 1{f(x_i) ≠ y_{x_i}}, where Z* is the training set that yielded f. Define ê = max(ẽ, e_l). We explore ê as a candidate criterion for model selection, which we compare against the cross-validation criterion in § 3. Note that a learner can exhibit low disagreement and low training error, yet still have high prediction error. For example, the learner could memorize the training data and output a constant on all other instances. (Though when disagreement is exactly zero, the test error equals the training error.) A measure of self-disagreement within the labeled training set, defined by Lange et al. [LBRB02], in conjunction with the empirical training error does yield an upper bound. Still, we find empirically that, when using SVMs, naive Bayes, or logistic regression, disagreement on unlabeled data does not tend to wildly underestimate error, even though it is theoretically possible.
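The quantities of § 2 can be checked numerically. The sketch below is our own illustration, not from the paper: it computes d and σ² directly from random binary error matrices as in the proof of Theorem 1, verifies d ≥ 2σ², and evaluates the lower bound e_l of (4) against the average error of the same matrix.

```python
import math
import random

def disagreement_and_variance(M):
    """M is a binary error matrix: rows = instances, columns = functions,
    M[i][j] = 1 iff function j errs on instance i. Returns (d, sigma^2)."""
    n_inst, n_fun = len(M), len(M[0])
    # For binary labels, two functions disagree at x iff exactly one errs,
    # so d_x = 2 * e_x * (1 - e_x), as in Equation 2.
    d = sum(2 * (sum(row) / n_fun) * (1 - sum(row) / n_fun) for row in M) / n_inst
    # sigma^2 is the variance across functions of the per-function error e_f.
    ef = [sum(M[i][j] for i in range(n_inst)) / n_inst for j in range(n_fun)]
    e = sum(ef) / n_fun
    var = sum((v - e) ** 2 for v in ef) / n_fun
    return d, var

def error_lower_bound(d):
    """e_l from bound (4): (1 - sqrt(1 - 2d)) / 2."""
    return (1 - math.sqrt(max(0.0, 1 - 2 * d))) / 2

random.seed(0)
for _ in range(200):  # Theorem 1 and bound (4) on random 6x6 error matrices
    M = [[random.randint(0, 1) for _ in range(6)] for _ in range(6)]
    d, var = disagreement_and_variance(M)
    e = sum(map(sum, M)) / 36
    assert d >= 2 * var - 1e-12       # Theorem 1: d >= 2*sigma^2
    assert error_lower_bound(d) <= e + 1e-12  # bound (4): e_l <= e
```

For matrices whose 1 entries fill complete rows (the compact formations in the proof sketch), both d and σ² vanish and the bound is tight.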
3 Experiments

We conducted experiments on the "20 Newsgroups" and Reuters-21578 text categorization datasets, and the Votes, Chess, Adult, and Optics datasets from the UCI collection [BKM98].1 We chose two categorization tasks from the newsgroups sets: (1) identifying Baseball documents in a collection containing both Baseball and Hockey documents (2000 total documents), and (2) identifying alt.atheism documents from among the alt.atheism, soc.religion.christian, and talk.religion.misc collections (3000 documents). For the Reuters set, we chose documents belonging to one of the top 10 categories of the corpus (9410 documents), and we attempt to discriminate the "Earn" (3964) and "Acq" (2369) categories respectively from the remaining nine. These categories are large enough that 0/1 error remains a reasonable measure. We used the bow library for stemming and stop words, kept features up to 3-grams, and used l2-normalized frequency counts [McC96]. The Votes, Chess, Adult, and Optics datasets have respectively 435, 3197, 32561, and 1800 instances. These datasets give us some representation of the various types of learning problems. All our data sets use a nonnegative feature-value representation. We used support vector machines with polynomial kernels available from the libsvm library [CL01] in all our experiments.2 For the error estimation experiments, we used linear SVMs with a C value of 10. For the model selection experiments, we used the polynomial degree as the model selection parameter.

3.1 Error Estimation

We first examine the use of disagreement for error estimation, both in the standard setting where training and test samples are uniformly iid and in an active learning scenario. For each of several training set sizes for each data set, we computed average results and standard deviations across thirty trials. In each trial, we first generate a training set, sampled either uniformly iid or actively, then set aside 20% of the remaining instances as the test set.
Next, we partition the training set into equal halves, train an SVM on each half, and compute the disagreement rate between the two SVMs across the set of (unlabeled) data that has not been designated for the training or test set (80% of total − m instances). We repeat this inner loop of partitioning, dual training, and disagreement computation thirty times and take averages. We examined the utility of our disagreement bound (4) as an estimate of the true test error of the algorithm trained on the full data set ("trueE"). We also examined using the maximum of the training error ("trainE") and lower bound on error from our disagreement measure ("disE") as an estimate of trueE ("maxDtE = max(trainE, disE)"). Note that disE and trainE are respectively unbiased empirical estimates of expected disagreement d and expected training error ẽ of § 2 for the standard setting. Since our disagreement measure is actually a bound on half error (i.e., error averaged over training sets of size m/2), we also compare against two-fold cross-validation error ("2cvE"), and the true test error of the two functions obtained from training on the two halves ("1/2trueE").

1Available from http://www.ics.uci.edu/ and http://www.daviddlewis.com/resources/testcollections/
2We observed similar results in error estimation using linear logistic regression and Naive Bayes learners in preliminary experiments.

Figure 1: 0/1 error vs. training set size for a linear SVM on the Baseball vs. Hockey dataset (curves: trueE, 1/2trueE, 2cvE, disE, trainE, maxDtE). (a) Random training set. (b) Actively picked.
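The inner loop just described (split the labeled set in half, train on each half, average disagreement over the unlabeled pool) can be sketched as follows. This is our own minimal illustration with a stand-in 1-D threshold learner in place of an SVM; the learner, data, and names are hypothetical.

```python
import random
import statistics

def threshold_learner(labeled):
    """Stand-in base learner: a 1-D decision stump whose threshold is the
    midpoint between the classes. The paper trains SVMs; any classifier
    with the same fit/predict shape fits here."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    t = (min(pos) + max(neg)) / 2 if pos and neg else 0.0
    return lambda x: 1 if x >= t else 0

def disagreement_rate(labeled, unlabeled, trials=30):
    """Repeatedly partition the labeled set into equal halves, train a
    classifier on each half, and average their disagreement over the
    unlabeled pool (an empirical estimate of disE)."""
    rates = []
    for _ in range(trials):
        s = labeled[:]
        random.shuffle(s)
        half = len(s) // 2
        f1 = threshold_learner(s[:half])
        f2 = threshold_learner(s[half:])
        rates.append(sum(f1(x) != f2(x) for x in unlabeled) / len(unlabeled))
    return statistics.mean(rates)

# Hypothetical separable 1-D data: negatives below 0, positives above.
random.seed(0)
labeled = [(random.uniform(-1.0, -0.2), 0) for _ in range(20)] + \
          [(random.uniform(0.2, 1.0), 1) for _ in range(20)]
unlabeled = [random.uniform(-1.0, 1.0) for _ in range(500)]
d = disagreement_rate(labeled, unlabeled)
el = (1 - max(0.0, 1 - 2 * d) ** 0.5) / 2   # bound (4) lower estimate
```

Note that, as in the paper, d estimates disagreement for functions trained on half the labeled data, so el tracks the half-sample error.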
Figure 2: Plots of ratios during active learning, across the datasets Baseball, Religion, Earn, Acq, Adult, Chess, Votes, and Digit 1 (Optics): (a) (2cvE − trueE)/(disE − trueE), (b) disE/trueE, (c) disE/1/2trueE.

In the standard scenario, when the training set is chosen uniformly at random from the corpus, leave-one-out cross-validated error ("looE") is generally a very good estimate of trueE, while 2cvE is a good estimate for 1/2trueE. For all the data sets, as expected, our error estimate maxDtE underestimates 1/2trueE. A representative example is shown in Figure 1(a). In the active learning scenario, the training set is chosen in an attempt to maximize information, and the choice of each new instance depends on the set of previously chosen instances. Often this means that especially difficult instances are chosen (or at least instances whose labels are difficult to infer from the current training set). Thus cross validation naturally overestimates the difficulty of the learning task and so may greatly overestimate error. On the other hand, an approximate model of active learning is that the instances are iid sampled from a hard distribution. This ignores the sequential nature of active learning. Measuring disagreement on the easier test distribution via subsampling the training set may remain a good estimator of the actual test error. We used linear SVMs as the basis for our active learning procedure. In each trial, we begin with a random training set of size 10, and then grow the labeled set by using the uncertainty sampling technique.
We computed the various error measures at regular intervals.3 A representative plot of errors during active learning is given in Fig. 1(b). In all the datasets experimented with, we have observed the same pattern: the error estimate using disagreement provides a much better estimate of 1/2trueE and trueE than does 2cvE (Fig. 2a), and can be used as an indication of the error and the progress of active learning. Note that while we have not computed looE in the error-estimation experiments, Fig. 1(b) indicates that 2cvE is not a good estimator of trueE at size m/2 either, and this has been the case in all our experiments. We have observed that disE estimates 1/2trueE best (Fig. 2c). The estimation performance may degrade towards the end of active learning when the learner converges (disagreement approaches 0). However, we have observed that both 1/2trueE (obtained via subsampling) and disE tend to overestimate the actual error of the active learner even at half the training size (e.g., Fig. 1(b)). This observation underlines the importance of taking the sequential nature of active learning into account.

3We could use a criterion based on disagreement for selective sampling, but we have not thoroughly explored this option.

Figure 3: (a) An example where maxDtE performs particularly well as a model selection criterion, tracking the true error curve (vs. SVM polynomial degree) more closely than looE or 2cvE. (b) A summary of all experiments plotting looE versus maxDtE on a log-log scale: points above the diagonal indicate maxDtE outperforming looE.

3.2 Model Selection

We explore various criteria for selecting the expected best among twenty SVMs, each trained using a different polynomial degree kernel.
For each data set, we manually identify an interval of polynomial degrees that seems to include the error minimum,4 then choose twenty degrees equally spaced within that interval. We compare our disagreement-based estimate maxDtE with the cross-validation estimates looE and 2cvE as model selection criteria. In each trial, we identify the polynomial degree that is expected to be best according to each criterion, then train an SVM at that degree on the full training set. We compare trueE at the degree selected by each criterion against trueE at the actual optimal degree. In the standard uniform iid scenario, though cross validation often does fail as a model selection criterion for regression problems, it seems that cross validation in general is hard to beat for classification problems [SS02]. We find that both looE and 2cvE modestly outperform maxDtE as model selection criteria, though maxDtE is often competitive. We are exploring using the maximum of cross validation and maxDtE as an alternative, with preliminary evidence of a slight advantage over cross validation alone. In an active learning setting, even though cross validation overestimates error, it is theoretically possible that cross validation would still function well to identify the best or near-best model. However, our experiments suggest that the performance of cross validation as a model selection criterion indeed degrades under active learning. In this situation, maxDtE serves as a consistently better model selection criterion. Figure 3(a) shows an example where maxDtE performs particularly well. The active learning model selection experiments proceed as follows. For each data set, we use one run of active learning to identify 200 ordered and actively-picked instances. For each training size m ∈ {25, 50, 100, 200}, we run thirty experiments using a random shuffling of the size-m prefix of the 200 actively-picked instances.
In each trial and for each of the twenty polynomial degrees, we measure trueE and looE, then run an inner loop of thirty random partitionings and dual trainings to measure average d, expE, 2cvE, and 1/2trueE. Disagreements and errors are measured across the full test set (total − m instances), so this is a transductive learning setting. Figure 3(b) summarizes the results. We observe that model selection based on disagreement often outperforms model selection based on cross-validation, and at times significantly so. Across 26 experiments, the win-loss-tie record of maxDtE versus 2cvE was 16-5-5, the record of maxDtE versus looE was 18-6-2, and the record of 2cvE versus looE was 15-9-2.

4Although for fractional degrees less than 1 the kernel matrix is not guaranteed to be positive semi-definite, we included such ranges whenever the range included the error minimum. Non-integral degrees greater than 1 do not pose a problem, as the feature values in all our problem representations are nonnegative.

4 Related Work

Previous work has already shown that using various measures of stability on unlabeled data is useful for ensemble learning, model selection, and regularization, both in supervised and unsupervised learning [KV95, Sch97, SS02, BC03, LBRB02, LRBB04]. Metric-based methods for model selection are complementary to our approach in that they are designed to prefer models/algorithms that behave similarly on the labeled and unlabeled data [Sch97, SS02, BC03], while disagreement is a measure of self-consistency on the same dataset (in this paper, unlabeled data only). Consequently, our method is also applicable to scenarios in which the test and training distributions are different. Lange et al. [LBRB02, LRBB04] also explore disagreement on unlabeled data, establishing robust model selection techniques based on disagreement for clustering.
Theoretical work on algorithmic stability focuses on deriving generalization bounds given that the algorithm has certain inherent stability properties [KN02].

5 Conclusions and Future Work

Two advantages of co-validation over traditional techniques are: (1) disagreement can be measured to almost arbitrary accuracy, assuming unlabeled data is plentiful, and (2) disagreement is measured on unlabeled data drawn from the same distribution as test instances, the extreme case of which is transductive learning, where the unlabeled and test instances coincide. In this paper we derived bounds on certain measures of error and variance based on disagreement, then examined empirically when co-validation might be useful. We found co-validation particularly useful in active learning settings. Future goals include extending the theory to active learning, precision/recall, algorithm comparison (using variance), ensemble learning, and regression. We plan to compare semi-supervised and transductive learning, and consider procedures to generate fictitious unlabeled data.

References

[BC03] Y. Bengio and N. Chapados. Extensions to metric-based model selection. Journal of Machine Learning Research, 2003.
[BG03] Y. Bengio and Y. Grandvalet. No unbiased estimator of the variance of k-fold cross-validation. In NIPS, 2003.
[BKM98] C.L. Blake, E. Keogh, and C.J. Merz. UCI repository of machine learning databases, 1998.
[CL01] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[KN02] S. Kutin and P. Niyogi. Almost-everywhere algorithmic stability and generalization error. In UAI, 2002.
[Kut02] S. Kutin. Algorithmic stability and ensemble-based learning. PhD thesis, University of Chicago, 2002.
[KV95] A. Krogh and J. Vedelsby. Neural network ensembles, cross validation, and active learning. In NIPS, 1995.
[LBRB02] T. Lange, M. Braun, V. Roth, and J. Buhmann. Stability-based model selection.
In NIPS, 2002.
[LRBB04] T. Lange, V. Roth, M. Braun, and J. Buhmann. Stability based validation of clustering algorithms. Neural Computation, 16, 2004.
[McC96] A. K. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow, 1996.
[Sch97] D. Schuurmans. A new metric-based approach to model selection. In AAAI, 1997.
[SS02] D. Schuurmans and F. Southey. Metric-based methods for adaptive model selection and regularization. Machine Learning, pages 51–84, 2002.
Co-Training and Expansion: Towards Bridging Theory and Practice

Maria-Florina Balcan, Computer Science Dept., Carnegie Mellon Univ., Pittsburgh, PA 15213, ninamf@cs.cmu.edu
Avrim Blum, Computer Science Dept., Carnegie Mellon Univ., Pittsburgh, PA 15213, avrim@cs.cmu.edu
Ke Yang, Computer Science Dept., Carnegie Mellon Univ., Pittsburgh, PA 15213, yangke@cs.cmu.edu

Abstract

Co-training is a method for combining labeled and unlabeled data when examples can be thought of as containing two distinct sets of features. It has had a number of practical successes, yet previous theoretical analyses have needed very strong assumptions on the data that are unlikely to be satisfied in practice. In this paper, we propose a much weaker "expansion" assumption on the underlying data distribution, that we prove is sufficient for iterative co-training to succeed given appropriately strong PAC-learning algorithms on each feature set, and that to some extent is necessary as well. This expansion assumption in fact motivates the iterative nature of the original co-training algorithm, unlike stronger assumptions (such as independence given the label) that allow a simpler one-shot co-training to succeed. We also heuristically analyze the effect on performance of noise in the data. Predicted behavior is qualitatively matched in synthetic experiments on expander graphs.

1 Introduction

In machine learning, it is often the case that unlabeled data is substantially cheaper and more plentiful than labeled data, and as a result a number of methods have been developed for using unlabeled data to try to improve performance, e.g., [15, 2, 6, 11, 16]. Co-training [2] is a method that has had substantial success in scenarios in which examples can be thought of as containing two distinct yet sufficient feature sets. Specifically, a labeled example takes the form (⟨x1, x2⟩, ℓ), where x1 ∈ X1 and x2 ∈ X2 are the two parts of the example, and ℓ is the label.
One further assumes the existence of two functions c1, c2 over the respective feature sets such that c1(x1) = c2(x2) = ℓ. Intuitively, this means that each example contains two “views,” and each view contains sufficient information to determine the label of the example. This redundancy implies an underlying structure of the unlabeled data (since they need to be “consistent”), and this structure makes the unlabeled data informative. In particular, the idea of iterative co-training [2] is that one can use a small labeled sample to train initial classifiers h1, h2 over the respective views, and then iteratively bootstrap by taking unlabeled examples ⟨x1, x2⟩for which one of the hi is confident but the other is not — and using the confident hi to label such examples for the learning algorithm on the other view, improving the other classifier. As an example for webpage classification given in [2], webpages contain text (x1) and have hyperlinks pointing to them (x2). From a small labeled sample, we might learn a classifier h2 that says that if a link with the words “my advisor” points to a page, then that page is probably a positive example of faculty-member-home-page; so, if we find an unlabeled example with this property we can use h2 to label the page for the learning algorithm that uses the text on the page itself. This approach and its variants have been used for a variety of learning problems, including named entity classification [3], text classification [10, 5], natural language processing [13], large scale document classification [12], and visual detectors [8]. Co-training effectively requires two distinct properties of the underlying data distribution in order to work. The first is that there should at least in principle exist low error classifiers c1, c2 on each view. 
The second is that these two views should on the other hand not be too highly correlated — we need to have at least some examples where h1 is confident but h2 is not (or vice versa) for the co-training algorithm to actually do anything. Unfortunately, previous theoretical analyses have needed to make strong assumptions of this second type in order to prove their guarantees. These include “conditional independence given the label” used by [2] and [4], or the assumption of “weak rule dependence” used by [1]. The primary contribution of this paper is a theoretical analysis that substantially relaxes the strength of this second assumption to just a form of “expansion” of the underlying distribution (a natural analog of the graph-theoretic notions of expansion and conductance) that we show in some sense is a necessary condition for co-training to succeed as well. However, we will need a fairly strong assumption on the learning algorithms: that the hi they produce are never “confident but wrong” (formally, the algorithms are able to learn from positive data only), though we give a heuristic analysis of the case when this does not hold. One key feature of assuming only expansion on the data is that it specifically motivates the iterative nature of the co-training algorithm. Previous assumptions that had been analyzed imply such a strong form of expansion that even a “one-shot” version of co-training will succeed (see Section 2.2). In fact, the theoretical guarantees given in [2] are exactly of this type. However, distributions can easily satisfy our weaker condition without allowing one-shot learning to work as well, and we describe several natural situations of this form. An additional property of our results is that they are algorithmic in nature. That is, if we have sufficiently strong efficient PAC-learning algorithms for the target function on each feature set, we can use them to achieve efficient PAC-style guarantees for co-training as well. 
However, as mentioned above, we need a stronger assumption on our base learning algorithms than used by [2] (see Section 2.1). We begin by formally defining the expansion assumption we will use, connecting it to standard graph-theoretic notions of expansion and conductance. We then prove the statement that ϵ-expansion is sufficient for iterative co-training to succeed, given strong enough base learning algorithms over each view, proving bounds on the number of iterations needed to converge. In Section 4.1, we heuristically analyze the effect of imperfect feature sets on co-training accuracy. Finally, in Section 4.2, we present experiments on synthetic expander graph data that qualitatively bear out our analyses.

2 Notations, Definitions, and Assumptions

We assume that examples are drawn from some distribution D over an instance space X = X1 × X2, where X1 and X2 correspond to two different "views" of an example. Let c denote the target function, and let X^+ and X^− denote the positive and negative regions of X respectively (for simplicity we assume we are doing binary classification). For most of this paper we assume that each view in itself is sufficient for correct classification; that is, c can be decomposed into functions c1, c2 over each view such that D has no probability mass on examples x such that c1(x1) ≠ c2(x2). For i ∈ {1, 2}, let X_i^+ = {x_i ∈ X_i : c_i(x_i) = 1}, so we can think of X^+ as X_1^+ × X_2^+, and let X_i^− = X_i − X_i^+. Let D^+ and D^− denote the marginal distributions of D over X^+ and X^− respectively. In order to discuss iterative co-training, we need to be able to talk about a hypothesis being confident or not confident on a given example. For convenience, we will identify "confident" with "confident about being positive". This means we can think of a hypothesis h_i as a subset of X_i, where x_i ∈ h_i means that h_i is confident that x_i is positive, and x_i ∉ h_i means that h_i has no opinion.
As in [2], we will abstract away the initialization phase of co-training (how labeled data is used to generate an initial hypothesis) and assume we are given initial sets S_1^0 ⊆ X_1^+ and S_2^0 ⊆ X_2^+ such that Pr_{⟨x1,x2⟩∼D}(x1 ∈ S_1^0 or x2 ∈ S_2^0) ≥ ρ_init for some ρ_init > 0. The goal of co-training will be to bootstrap from these sets using unlabeled data. Now, to prove guarantees for iterative co-training, we make two assumptions: that the learning algorithms used in each of the two views are able to learn from positive data only, and that the distribution D^+ is expanding as defined in Section 2.2 below.

2.1 Assumption about the base learning algorithms on the two views

We assume that the learning algorithms on each view are able to PAC-learn from positive data only. Specifically, for any distribution D_i^+ over X_i^+, and any given ϵ, δ > 0, given access to examples from D_i^+ the algorithm should be able to produce a hypothesis h_i such that (a) h_i ⊆ X_i^+ (so h_i only has one-sided error), and (b) with probability 1 − δ, the error of h_i under D_i^+ is at most ϵ. Algorithms of this type can be naturally thought of as predicting either "positive with confidence" or "don't know", fitting our framework. Examples of concept classes learnable from positive data only include conjunctions, k-CNF, and axis-parallel rectangles; see [7]. For instance, for the case of axis-parallel rectangles, a simple algorithm that achieves this guarantee is just to output the smallest rectangle enclosing the positive examples seen. If we wanted to consider algorithms that could be confident in both directions (rather than just confident about being positive) we could instead use the notion of "reliable, useful" learning due to Rivest and Sloan [14]. However, fewer classes of functions are learnable in this manner. In addition, a nice feature of our assumption is that we will only need D^+ to expand and not D^−.
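The smallest-enclosing-rectangle learner just mentioned takes only a few lines; below is a minimal sketch (the 2-D sampling setup and names are our own illustration, not from the paper). The hypothesis is confident ("positive") inside the learned rectangle and says "don't know" outside, so it has one-sided error and is never confident but wrong.

```python
import random

def learn_rectangle(positives):
    """Learn from positive data only: return the indicator of the smallest
    axis-parallel rectangle enclosing the positive examples seen."""
    dims = range(len(positives[0]))
    lo = [min(p[i] for p in positives) for i in dims]
    hi = [max(p[i] for p in positives) for i in dims]
    # Confident-positive inside the rectangle, "don't know" outside.
    return lambda x: all(lo[i] <= x[i] <= hi[i] for i in range(len(x)))

# Illustration: target rectangle [0,1]^2; positives sampled uniformly.
random.seed(0)
positives = [(random.random(), random.random()) for _ in range(500)]
h = learn_rectangle(positives)
# One-sided error: h never claims positive outside the target rectangle.
assert not h((1.5, 0.5)) and not h((-0.2, 0.3))
# Error under D+ (fraction of fresh positives h fails to claim) shrinks
# as the positive sample grows.
fresh = [(random.random(), random.random()) for _ in range(1000)]
miss_rate = sum(not h(p) for p in fresh) / len(fresh)
```

With 500 training positives, miss_rate is small (the learned rectangle is within O(1/m) per side of the target), illustrating requirement (b).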
This is especially natural if the positive class has a large amount of cohesion (e.g., it consists of all documents about some topic Y) but the negatives do not (e.g., all documents about all other topics). Note that we are effectively assuming that our algorithms are correct when they are confident; we relax this in our heuristic analysis in Section 4.

2.2 The expansion assumption for the underlying distribution

For S1 ⊆ X1 and S2 ⊆ X2, let boldface S_i (i = 1, 2) denote the event that an example ⟨x1, x2⟩ has x_i ∈ S_i. So, if we think of S1 and S2 as our confident sets in each view, then Pr(S1 ∧ S2) denotes the probability mass on examples for which we are confident about both views, and Pr(S1 ⊕ S2) denotes the probability mass on examples for which we are confident about just one. In this section, all probabilities are with respect to D^+. We say:

Definition 1 D^+ is ϵ-expanding if for any S1 ⊆ X_1^+, S2 ⊆ X_2^+, we have

Pr(S1 ⊕ S2) ≥ ϵ · min( Pr(S1 ∧ S2), Pr(S̄1 ∧ S̄2) ),

where S̄_i denotes the complement of S_i. We say that D^+ is ϵ-expanding with respect to hypothesis class H1 × H2 if the above holds for all S1 ∈ H1 ∩ X_1^+, S2 ∈ H2 ∩ X_2^+ (here we denote by H_i ∩ X_i^+ the set {h ∩ X_i^+ : h ∈ H_i} for i = 1, 2).

To get a feel for this definition, notice that ϵ-expansion is in some sense necessary for iterative co-training to succeed, because if S1 and S2 are our confident sets and do not expand, then we might never see examples for which one hypothesis could help the other.1 In Section 3 we show that Definition 1 is in fact sufficient. To see how much weaker this definition is than previously-considered requirements, it is helpful to consider a slightly stronger kind of expansion that we call "left-right expansion".

Definition 2 We say D^+ is ϵ-right-expanding if for any S1 ⊆ X_1^+, S2 ⊆ X_2^+, if Pr(S1) ≤ 1/2 and Pr(S2|S1) ≥ 1 − ϵ then Pr(S2) ≥ (1 + ϵ) Pr(S1).

1However, ϵ-expansion requires every pair to expand and so it is not strictly necessary.
If there were occasional pairs (S1, S2) that did not expand, but such pairs were rare and unlikely to be encountered as confident sets in the co-training process, we might still be OK. We say D^+ is ϵ-left-expanding if the above holds with indices 1 and 2 reversed. Finally, D^+ is ϵ-left-right-expanding if it has both properties. It is not immediately obvious, but left-right expansion in fact implies Definition 1 (see Appendix A), though the converse is not necessarily true. We introduce this notion, however, for two reasons. First, it is useful for intuition: if S_i is our confident set in X_i^+ and this set is small (Pr(S_i) ≤ 1/2), and we train a classifier that learns from positive data on the conditional distribution that S_i induces over X_{3−i} until it has error ≤ ϵ on that distribution, then the definition implies the confident set on X_{3−i} will have noticeably larger probability than S_i; so it is clear why this is useful for co-training, at least in the initial stages. Secondly, this notion helps clarify how our assumptions are much less restrictive than those considered previously. Specifically:

Independence given the label: Independence given the label implies that for any S1 ⊆ X_1^+ and S2 ⊆ X_2^+ we have Pr(S2|S1) = Pr(S2). So, if Pr(S2|S1) ≥ 1 − ϵ, then Pr(S2) ≥ 1 − ϵ as well, even if Pr(S1) is tiny. This means that not only does S1 expand by a (1 + ϵ) factor as in Def. 2, but in fact it expands to nearly all of X_2^+.

Weak dependence: Weak dependence [1] is a relaxation of conditional independence that requires only that for all S1 ⊆ X_1^+, S2 ⊆ X_2^+ we have Pr(S2|S1) ≥ α Pr(S2) for some α > 0. This seems much less restrictive. However, notice that if Pr(S2|S1) ≥ 1 − ϵ, then Pr(S̄2|S1) ≤ ϵ, which implies by the definition of weak dependence (applied with S̄2 in place of S2) that Pr(S̄2) ≤ ϵ/α and therefore Pr(S2) ≥ 1 − ϵ/α. So, again (for sufficiently small ϵ), even if S1 is very small, it expands to nearly all of X_2^+.
This means that, as with conditional independence, if one has an algorithm over X2 that PAC-learns from positive data only, and one trains it over the conditional distribution given by S1, then by driving down its error on this conditional distribution one can perform co-training in just one iteration.

2.2.1 Connections to standard graph-theoretic notions of expansion

Our definition of ϵ-expansion (Definition 1) is a natural analog of the standard graph-theoretic notion of edge-expansion or conductance. A Markov chain is said to have high conductance if under the stationary distribution, for any set of states S of probability at most 1/2, the probability mass on transitions exiting S is at least ϵ times the probability of S; e.g., see [9]. A graph has high edge-expansion if the random walk on the graph has high conductance. Since the stationary distribution of this walk can be viewed as having equal probability on every edge, this is equivalent to saying that for any partition of the graph into two pieces (S, V − S), the number of edges crossing the partition should be at least an ϵ fraction of the number of edges in the smaller half. To connect this to Definition 1, think of S as S1 ∧ S2. It is well known that, for example, a random degree-3 bipartite graph with high probability is expanding, and this in fact motivates our synthetic data experiments of Section 4.2.

2.2.2 Examples

We now give two simple examples that satisfy ϵ-expansion but not weak dependence.

Example 1: Suppose X = R^d × R^d and the target function on each view is an axis-parallel rectangle. Suppose a random positive example from D+ looks like a pair ⟨x1, x2⟩ such that x1 and x2 are each uniformly distributed in their rectangles but in a highly dependent way: specifically, x2 is identical to x1 except that a random coordinate has been "re-randomized" within the rectangle.
This distribution does not satisfy weak dependence (for any sets S and T that are disjoint along all axes we have Pr(T|S) = 0), but it is not hard to verify that D+ is ϵ-expanding for ϵ = Ω(1/d).

Example 2: Imagine that we have a learning problem such that the data in X1 falls into n different clusters: the positive class is the union of some of these clusters and the negative class is the union of the others. Imagine that this likewise is true if we look at X2, and for simplicity suppose that every cluster has the same probability mass. Independence given the label would say that given that x1 is in some positive cluster Ci in X1, x2 is equally likely to be in any of the positive clusters Cj in X2. But suppose we have something much weaker: each Ci in X1 is associated with only 3 Cj's in X2 (i.e., given that x1 is in Ci, x2 will only be in one of these Cj's). This distribution clearly will not even have the weak dependence property. However, say we have a learning algorithm that assumes everything in the same cluster has the same label (so the hypothesis space H consists of all rules that do not split clusters). If the graph of which clusters are associated with which is an expander graph, then the distributions will be expanding with respect to H. In particular, given a labeled example x, the learning algorithm will generalize to x's entire cluster Ci; this will then be propagated over to nodes in the associated clusters Cj in X2, and so on.

3 The Main Result

We now present our main result. We assume that D+ is ϵ-expanding (ϵ > 0) with respect to hypothesis class H1 × H2, that we are given initial confident sets S1^0 ⊆ X1+, S2^0 ⊆ X2+ such that Pr(S1^0 ∨ S2^0) ≥ ρinit, that the target function can be written as ⟨c1, c2⟩ with c1 ∈ H1, c2 ∈ H2, and that on each of the two views we have algorithms A1 and A2 for learning from positive data only. The iterative co-training that we consider proceeds in rounds.
Let S1^i ⊆ X1 and S2^i ⊆ X2 be the confident sets in each view at the start of round i. We construct S2^{i+1} by feeding into A2 examples according to D2 conditioned on S1^i ∨ S2^i. That is, we take unlabeled examples from D such that at least one of the current predictors is confident, and feed them into A2 as if they were positive. We run A2 with error and confidence parameters given in the theorem below. We simultaneously do the same with A1, creating S1^{i+1}. After a pre-determined number of rounds N (specified in Theorem 1), the algorithm terminates and outputs the predictor that labels examples ⟨x1, x2⟩ as positive if x1 ∈ S1^{N+1} or x2 ∈ S2^{N+1}, and negative otherwise.

We begin by stating two lemmas that will be useful in our analysis. For both of these lemmas, let S1, T1 ⊆ X1+, S2, T2 ⊆ X2+, where Sj, Tj ∈ Hj. All probabilities are with respect to D+.

Lemma 1 Suppose Pr(S1 ∧ S2) ≤ Pr(S̄1 ∧ S̄2), Pr(T1 | S1 ∨ S2) ≥ 1 − ϵ/8 and Pr(T2 | S1 ∨ S2) ≥ 1 − ϵ/8. Then Pr(T1 ∧ T2) ≥ (1 + ϵ/2) Pr(S1 ∧ S2).

Proof: From Pr(T1 | S1 ∨ S2) ≥ 1 − ϵ/8 and Pr(T2 | S1 ∨ S2) ≥ 1 − ϵ/8 we get that Pr(T1 ∧ T2) ≥ (1 − ϵ/4) Pr(S1 ∨ S2). Since Pr(S1 ∧ S2) ≤ Pr(S̄1 ∧ S̄2), it follows from the expansion property that Pr(S1 ∨ S2) = Pr(S1 ⊕ S2) + Pr(S1 ∧ S2) ≥ (1 + ϵ) Pr(S1 ∧ S2). Therefore, Pr(T1 ∧ T2) ≥ (1 − ϵ/4)(1 + ϵ) Pr(S1 ∧ S2), which implies that Pr(T1 ∧ T2) ≥ (1 + ϵ/2) Pr(S1 ∧ S2).

Lemma 2 Suppose Pr(S1 ∧ S2) > Pr(S̄1 ∧ S̄2) and let γ = 1 − Pr(S1 ∧ S2). If Pr(T1 | S1 ∨ S2) ≥ 1 − γϵ/8 and Pr(T2 | S1 ∨ S2) ≥ 1 − γϵ/8, then Pr(T1 ∧ T2) ≥ (1 + γϵ/8) Pr(S1 ∧ S2).

Proof: From Pr(T1 | S1 ∨ S2) ≥ 1 − γϵ/8 and Pr(T2 | S1 ∨ S2) ≥ 1 − γϵ/8 we get that Pr(T1 ∧ T2) ≥ (1 − γϵ/4) Pr(S1 ∨ S2). Since Pr(S1 ∧ S2) > Pr(S̄1 ∧ S̄2), it follows from the expansion property that Pr(S1 ⊕ S2) ≥ ϵ Pr(S̄1 ∧ S̄2). Therefore γ = Pr(S1 ⊕ S2) + Pr(S̄1 ∧ S̄2) ≥ (1 + ϵ) Pr(S̄1 ∧ S̄2) ≥ (1 + ϵ)(1 − Pr(S1 ∨ S2)), and so Pr(S1 ∨ S2) ≥ 1 − γ/(1 + ϵ). This implies Pr(T1 ∧ T2) ≥ (1 − γϵ/4)(1 − γ/(1 + ϵ)) ≥ (1 − γ)(1 + γϵ/8). So, we have Pr(T1 ∧ T2) ≥ (1 + γϵ/8) Pr(S1 ∧ S2).
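As a concrete illustration (ours, not from the paper), the expansion condition that drives these lemmas can be checked by brute force on a small finite distribution: `expansion` below (a hypothetical helper name) returns the largest ϵ for which Pr(S1 ⊕ S2) ≥ ϵ · min(Pr(S1 ∧ S2), Pr(S̄1 ∧ S̄2)) holds over all subset pairs.

```python
from itertools import chain, combinations

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def expansion(dist):
    """Largest eps such that Pr(S1 xor S2) >= eps * min(Pr(S1 and S2),
    Pr(not-S1 and not-S2)) for every subset pair; dist maps (x1, x2) -> prob."""
    X1 = {x1 for x1, _ in dist}
    X2 = {x2 for _, x2 in dist}
    best = float('inf')
    for S1 in map(set, powerset(sorted(X1))):
        for S2 in map(set, powerset(sorted(X2))):
            both = sum(p for (a, b), p in dist.items() if a in S1 and b in S2)
            neither = sum(p for (a, b), p in dist.items()
                          if a not in S1 and b not in S2)
            one = 1.0 - both - neither            # Pr(S1 xor S2)
            if min(both, neither) > 1e-12:        # skip degenerate pairs
                best = min(best, one / min(both, neither))
    return best
```

On two independent uniform binary views this returns 2, while on two perfectly correlated views it returns 0, matching the intuition that correlated confident sets never help each other.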
Theorem 1 Let ϵfin and δfin be the (final) desired accuracy and confidence parameters. Then we can achieve error rate ϵfin with probability 1 − δfin by running co-training for N = O((1/ϵ) log(1/ϵfin) + (1/ϵ)(1/ρinit)) rounds, each time running A1 and A2 with accuracy and confidence parameters set to ϵ·ϵfin/8 and δfin/(2N) respectively.

Proof Sketch: Assume that, for i ≥ 1, S1^i ⊆ X1+ and S2^i ⊆ X2+ are the confident sets in each view after step i − 1 of co-training. Define pi = Pr(S1^i ∧ S2^i), qi = Pr(S̄1^i ∧ S̄2^i), and γi = 1 − pi, with all probabilities with respect to D+. We are interested in bounding Pr(S1^i ∨ S2^i), but since technically it is easier to bound Pr(S1^i ∧ S2^i), we will instead show that pN ≥ 1 − ϵfin with probability 1 − δfin, which obviously implies that Pr(S1^N ∨ S2^N) is at least as good. By the guarantees on A1 and A2, after each round we get that with probability 1 − δfin/N, we have Pr(S1^{i+1} | S1^i ∨ S2^i) ≥ 1 − ϵfin·ϵ/8 and Pr(S2^{i+1} | S1^i ∨ S2^i) ≥ 1 − ϵfin·ϵ/8. In particular, this implies that with probability 1 − δfin/N, we have p1 = Pr(S1^1 ∧ S2^1) ≥ (1 − ϵ/4)·Pr(S1^0 ∨ S2^0) ≥ (1 − ϵ/4)ρinit. Consider now i ≥ 1. If pi ≤ qi, since with probability 1 − δfin/N we have Pr(S1^{i+1} | S1^i ∨ S2^i) ≥ 1 − ϵ/8 and Pr(S2^{i+1} | S1^i ∨ S2^i) ≥ 1 − ϵ/8, using Lemma 1 we obtain that with probability 1 − δfin/N, we have Pr(S1^{i+1} ∧ S2^{i+1}) ≥ (1 + ϵ/2) Pr(S1^i ∧ S2^i). Similarly, by applying Lemma 2, we obtain that if pi > qi and γi ≥ ϵfin, then with probability 1 − δfin/N we have Pr(S1^{i+1} ∧ S2^{i+1}) ≥ (1 + γiϵ/8) Pr(S1^i ∧ S2^i). Assume now that the learning algorithms A1 and A2 were successful on all N rounds; note that this happens with probability at least 1 − δfin. The above observations imply that so long as pi ≤ 1/2 (so γi ≥ 1/2), we have pi+1 ≥ (1 + ϵ/16)^i (1 − ϵ/4) ρinit. This means that after N1 = O((1/ρinit)·(1/ϵ)) iterations of co-training we get to a situation where pN1 > 1/2.
At this point, notice that every 8/ϵ rounds, γ drops by at least a factor of 2; that is, if γi ≤ 1/2^k then γ_{i+8/ϵ} ≤ 1/2^{k+1}. So, after a total of O((1/ϵ) log(1/ϵfin) + (1/ϵ)(1/ρinit)) rounds, we have a predictor of the desired accuracy with the desired confidence.

4 Heuristic Analysis of Error propagation and Experiments

So far, we have assumed the existence of perfect classifiers on each view: there are no examples ⟨x1, x2⟩ with x1 ∈ X1+ and x2 ∈ X2− or vice-versa. In addition, we have assumed that given correctly-labeled positive examples as input, our learning algorithms are able to generalize in a way that makes only 1-sided error (i.e., they are never "confident but wrong"). In this section we give a heuristic analysis of the case when these assumptions are relaxed, along with several synthetic experiments on expander graphs.

4.1 Heuristic Analysis of Error propagation

Given confident sets S1^i ⊆ X1 and S2^i ⊆ X2 at the i-th iteration, let us define their purity (precision) as puri = PrD(c(x) = 1 | S1^i ∨ S2^i) and their coverage (recall) to be covi = PrD(S1^i ∨ S2^i | c(x) = 1). Let us also define their "opposite coverage" to be oppi = PrD(S1^i ∨ S2^i | c(x) = 0). Previously, we assumed oppi = 0 and therefore puri = 1. However, if we imagine that there is an η fraction of examples on which the two views disagree, and that positive and negative regions expand uniformly at the same rate, then even if initially opp0 = 0, it is natural to assume the following form of increase in cov and opp:

    covi+1 = min( covi(1 + ϵ(1 − covi)) + η·(oppi+1 − oppi), 1 ),    (1)
    oppi+1 = min( oppi(1 + ϵ(1 − oppi)) + η·(covi+1 − covi), 1 ).    (2)
[Figure 1: Co-training with noise rates 0.1, 0.01, and 0.001 respectively (n = 5000). Solid line indicates overall accuracy; green (dashed, increasing) curve is accuracy on positives (covi); red (dashed, decreasing) curve is accuracy on negatives (1 − oppi).]

That is, this corresponds to both the positive and negative parts of the confident region expanding in the way given in the proof of Theorem 1, with an η fraction of the new edges going to examples of the other label. By examining (1) and (2), we can make a few simple observations. First, initially when coverage is low, every O(1/ϵ) steps we get roughly cov ← 2·cov and opp ← 2·opp + η·cov. So, we expect coverage to increase exponentially and purity to drop linearly. However, once coverage gets large and begins to saturate, if purity is still high at this time it will begin dropping rapidly as the exponential increase in oppi causes oppi to catch up with covi. In particular, a calculation (omitted) shows that if D is 50/50 positive and negative, then overall accuracy increases up to the point when covi + oppi = 1, and then drops from then on. This qualitative behavior is borne out in our experiments below.

4.2 Experiments

We performed experiments on synthetic data along the lines of Example 2, with noise added as in Section 4.1. Specifically, we create a 2n-by-2n bipartite graph. Nodes 1 to n on each side represent positive clusters, and nodes n + 1 to 2n on each side represent negative clusters.
We connect each node on the left to three nodes on the right: each neighbor is chosen with probability 1 − η to be a random node of the same class, and with probability η to be a random node of the opposite class. We begin with an initial confident set S1 ⊆ X1+ and then propagate confidence through rounds of co-training, monitoring the percentage of the positive class covered, the percentage of the negative class mistakenly covered, and the overall accuracy. Plots of three experiments are shown in Figure 1, for different noise rates (0.1, 0.01, and 0.001). As can be seen, these qualitatively match what we expect: coverage increases exponentially, but accuracy on negatives (1 − oppi) drops exponentially too, though somewhat delayed. At some point there is a crossover where covi = 1 − oppi, which as predicted roughly corresponds to the point at which overall accuracy starts to drop.

5 Conclusions

Co-training is a method for using unlabeled data when examples can be partitioned into two views such that (a) each view in itself is at least roughly sufficient to achieve good classification, and yet (b) the views are not too highly correlated. Previous theoretical work has required instantiating condition (b) in a very strong sense: as independence given the label, or a form of weak dependence. In this work, we argue that the "right" condition is something much weaker: an expansion property on the underlying distribution (over positive examples) that we show is sufficient and to some extent necessary as well. The expansion property is especially interesting because it directly motivates the iterative nature of many of the practical co-training-based algorithms, and our work is the first rigorous analysis of iterative co-training in a setting that demonstrates its advantages over one-shot versions.

Acknowledgements: This work was supported in part by NSF grants CCR-0105488, NSF-ITR CCR-0122581, and NSF-ITR IIS-0312814.

References

[1] S. Abney. Bootstrapping.
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 360–367, 2002.
[2] A. Blum and T. M. Mitchell. Combining labeled and unlabeled data with co-training. In Proc. 11th Annual Conference on Computational Learning Theory, pages 92–100, 1998.
[3] M. Collins and Y. Singer. Unsupervised models for named entity classification. In SIGDAT Conf. Empirical Methods in NLP and Very Large Corpora, pages 189–196, 1999.
[4] S. Dasgupta, M. L. Littman, and D. McAllester. PAC generalization bounds for co-training. In Advances in Neural Information Processing Systems 14. MIT Press, 2001.
[5] R. Ghani. Combining labeled and unlabeled data for text classification with a large number of categories. In Proceedings of the IEEE International Conference on Data Mining, 2001.
[6] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning, pages 200–209, 1999.
[7] M. Kearns, M. Li, and L. Valiant. Learning Boolean formulae. JACM, 41(6):1298–1328, 1995.
[8] A. Levin, P. Viola, and Y. Freund. Unsupervised improvement of visual detectors using co-training. In Proc. 9th IEEE International Conf. on Computer Vision, pages 626–633, 2003.
[9] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, 1995.
[10] K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In Proc. ACM CIKM Int. Conf. on Information and Knowledge Management, pages 86–93, 2000.
[11] K. Nigam, A. McCallum, S. Thrun, and T. M. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134, 2000.
[12] S. Park and B. Zhang. Large scale unstructured document classification using unlabeled data and syntactic information. In PAKDD 2003, LNCS vol. 2637, pages 88–99. Springer, 2003.
[13] D. Pierce and C. Cardie.
Limitations of Co-Training for natural language learning from large datasets. In Proc. Conference on Empirical Methods in NLP, pages 1–9, 2001.
[14] R. Rivest and R. Sloan. Learning complicated concepts reliably and usefully. In Proceedings of the 1988 Workshop on Computational Learning Theory, pages 69–79, 1988.
[15] D. Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Meeting of the Association for Computational Linguistics, pages 189–196, 1995.
[16] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proc. 20th International Conf. Machine Learning, pages 912–912, 2003.

A Relating the definitions

We show here how Definition 2 implies Definition 1.

Theorem 2 If D+ satisfies ϵ-left-right expansion (Definition 2), then it also satisfies ϵ′-expansion (Definition 1) for ϵ′ = ϵ/(1 + ϵ).

Proof: We will prove the contrapositive. Suppose there exist S1 ⊆ X1+, S2 ⊆ X2+ such that Pr(S1 ⊕ S2) < ϵ′ min(Pr(S1 ∧ S2), Pr(S̄1 ∧ S̄2)). Assume without loss of generality that Pr(S1 ∧ S2) ≤ Pr(S̄1 ∧ S̄2). Since Pr(S1 ∧ S2) + Pr(S̄1 ∧ S̄2) + Pr(S1 ⊕ S2) = 1, it follows that Pr(S1 ∧ S2) ≤ 1/2 − Pr(S1 ⊕ S2)/2. Assume Pr(S1) ≤ Pr(S2). This implies that Pr(S1) ≤ 1/2, since Pr(S1) + Pr(S2) = 2 Pr(S1 ∧ S2) + Pr(S1 ⊕ S2) and so Pr(S1) ≤ Pr(S1 ∧ S2) + Pr(S1 ⊕ S2)/2. Now notice that

    Pr(S2 | S1) = Pr(S1 ∧ S2)/Pr(S1) ≥ Pr(S1 ∧ S2)/(Pr(S1 ∧ S2) + Pr(S1 ⊕ S2)) > 1/(1 + ϵ′) ≥ 1 − ϵ.

But Pr(S2) ≤ Pr(S1 ∧ S2) + Pr(S1 ⊕ S2) < (1 + ϵ′) Pr(S1 ∧ S2) ≤ (1 + ϵ) Pr(S1), and so Pr(S2) < (1 + ϵ) Pr(S1). Similarly, if Pr(S2) ≤ Pr(S1) we get a failure of expansion in the other direction. This completes the proof.
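To complement the analysis, here is a small self-contained reimplementation of the Section 4.2 synthetic experiment (ours; the function name, seeding choice, and exact bookkeeping are illustrative, not from the paper): 2n clusters per view with the first n positive, three random right-neighbours per left cluster, each crossing the class boundary with probability η, and confidence propagated through the bipartite cluster graph round by round.

```python
import random

def run_cotrain(n, eta, rounds, seed=0):
    """Clusters 0..n-1 are positive, n..2n-1 negative, in each view.
    Each left cluster links to 3 right clusters; a link crosses the
    class boundary with probability eta."""
    rng = random.Random(seed)
    adj_l = {u: set() for u in range(2 * n)}    # left cluster -> right clusters
    adj_r = {v: set() for v in range(2 * n)}    # right cluster -> left clusters
    for u in range(2 * n):
        for _ in range(3):
            same = rng.random() >= eta          # does this edge stay in-class?
            if (u < n) == same:
                v = rng.randrange(0, n)         # a positive right cluster
            else:
                v = rng.randrange(n, 2 * n)     # a negative right cluster
            adj_l[u].add(v)
            adj_r[v].add(u)
    S1, S2 = {0}, set()                         # seed: one confident positive left cluster
    stats = []
    for _ in range(rounds):
        S2 |= {v for u in S1 for v in adj_l[u]} # right view generalizes confident examples
        S1 |= {u for v in S2 for u in adj_r[v]} # and propagates back to the left view
        cov = sum(1 for u in S1 if u < n) / n   # coverage of the positive class
        opp = sum(1 for u in S1 if u >= n) / n  # mistaken coverage of the negatives
        stats.append((cov, opp))
    return stats
```

With η = 0 coverage only grows and no negative cluster is ever covered; with η > 0, opposite coverage eventually becomes positive and catches up, qualitatively reproducing the crossover behavior of Figure 1.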
Probabilistic Inference of Alternative Splicing Events in Microarray Data

Ofer Shai, Brendan J. Frey, and Quaid D. Morris
Dept. of Electrical & Computer Engineering, University of Toronto, Toronto, ON

Qun Pan, Christine Misquitta, and Benjamin J. Blencowe
Banting & Best Dept. of Medical Research, University of Toronto, Toronto, ON

Abstract

Alternative splicing (AS) is an important and frequent step in mammalian gene expression that allows a single gene to specify multiple products, and is crucial for the regulation of fundamental biological processes. The extent of AS regulation, and the mechanisms involved, are not well understood. We have developed a custom DNA microarray platform for surveying AS levels on a large scale. We present here a generative model for the AS Array Platform (GenASAP) and demonstrate its utility for quantifying AS levels in different mouse tissues. Learning is performed using a variational expectation maximization algorithm, and the parameters are shown to correctly capture expected AS trends. A comparison of the results obtained with a well-established but low-throughput experimental method demonstrates that AS levels obtained from GenASAP are highly predictive of AS levels in mammalian tissues.

1 Biological diversity through alternative splicing

Current estimates place the number of genes in the human genome at approximately 30,000, which is a surprisingly small number when one considers that the genome of yeast, a single-celled organism, has 6,000 genes. The number of genes alone cannot account for the complexity and cell specialization exhibited by higher eukaryotes (i.e. mammals, plants, etc.). Some of that added complexity can be achieved through the use of alternative splicing, whereby a single gene can be used to code for a multitude of products. Genes are segments of the double-stranded DNA that contain the information required by the cell for protein synthesis.
That information is coded using an alphabet of 4 (A, C, G, and T), corresponding to the four nucleotides that make up the DNA. In what is known as the central dogma of molecular biology, DNA is transcribed to RNA, which in turn is translated into proteins. Messenger RNA (mRNA) is synthesized in the nucleus of the cell and carries the genomic information to the ribosome. In eukaryotes, genes are generally comprised of both exons, which contain the information needed by the cell to synthesize proteins, and introns, sometimes referred to as spacer DNA, which are spliced out of the pre-mRNA to create mature mRNA.

[Figure 1: Four types of AS. Boxes represent exons and lines represent introns, with the possible splicing alternatives indicated by the connectors. (a) Single cassette exon inclusion/exclusion. C1 and C2 are constitutive exons (exons that are included in all isoforms) and flank a single alternative exon (A). The alternative exon is included in one isoform and excluded in the other. (b) Alternative 3' (or donor) and alternative 5' (acceptor) splicing sites. Both exons are constitutive, but may contain alternative donor and/or acceptor splicing sites. (c) Mutually exclusive exons. One of the two alternative exons (A1 and A2) may be included in the isoform, but not both. (d) Intron inclusion. An intron may be included in the mature mRNA strand.]

An estimated 35%-75% of human genes [1] can be spliced to yield different combinations of exons (called isoforms), a phenomenon referred to as alternative splicing (AS). There are four major types of AS, as shown in Figure 1. Many multi-exon genes may undergo more than one alternative splicing event, resulting in many possible isoforms from a single gene.
Low measurements tend to have extremely low signal-to-noise ratio (SNR) [7] and probes often bind to sequences that are very similar, but not identical, to the one for which they were designed (a process referred to as cross-hybridization). Additionally, probes exhibit somewhat varying hybridization efficiency, and sequences exhibit varying labelling efficiency.

[Figure 2: Each alternative splicing event is studied using six probes. Probes were chosen to measure the expression levels of each of the three exons involved in the event. Additionally, 3 probes are used that target the junctions that are formed by each of the two isoforms. The inclusion isoform would express the junctions formed by C1 and A, and A and C2, while the exclusion isoform would express the junction formed by C1 and C2.]

To design our data sets, we mined public sequence databases and identified exons that were strong candidates for exhibiting AS (the details of that analysis are provided elsewhere [4, 3]). Of the candidates, 3,126 potential AS events in 2,647 unique mouse genes were selected for the design of an Agilent custom oligonucleotide microarray. The arrays were hybridized with unamplified mRNA samples extracted from 10 wild-type mouse tissues (brain, heart, intestine, kidney, liver, lung, salivary gland, skeletal muscle, spleen, and testis). Each AS event has six target probes on the arrays, chosen from regions of the C1 exon, C2 exon, A exon, C1:A splice junction, A:C2 splice junction, and C1:C2 splice junction, as shown in Figure 2.

2 Unsupervised discovery of alternative splicing

With the exception of the probe measuring the alternative exon, A (Figure 2), all probes measure sequences that occur in both isoforms.
For example, while the sequence of the probe measuring the junction A:C1 is designed to measure the inclusion isoform, half of it corresponds to a sequence that is found in the exclusion isoform. We can therefore safely assume that the measured intensity at each probe is a result of a certain amount of both isoforms binding to the probe. Due to the generally assumed linear relationship between the abundance of mRNA hybridized at a probe and the fluorescent intensity measured, we model the measured intensity as a weighted sum of the overall abundance of the two isoforms. A stronger assumption is that of a single, consistent hybridization profile for both isoforms across all probes and all slides. Ideally, one would prefer to estimate an individual hybridization profile for each AS event studied across all slides. However, in our current setup, the number of tissues is small (10), resulting in two difficulties. First, the number of parameters is very large when compared to the number of data points using this model, and second, a portion of the events do not exhibit tissue-specific alternative splicing within our small set of tissues. While the first hurdle could be accounted for using Bayesian parameter estimation, the second cannot.

2.1 GenASAP - a generative model for alternative splicing array platform

Using the setup described above, the expression vector x, containing the six microarray measurements as real numbers, can be decomposed as a linear combination of the abundance of the two splice isoforms, represented by the real vector s, with some added noise: x = Λs + noise, where Λ is a 6×2 weight matrix containing the hybridization profiles for the two isoforms across the six probes.

[Figure 3: Graphical model for alternative splicing. Each measurement in the observed expression profile, x, is generated by either using a scale factor, r, on a linear combination of the isoforms, s, or drawing randomly from an outlier model. For a detailed description of the model, see text.]

Note that we may not have a negative amount of a given isoform, nor can the presence of an isoform deduct from the measured expression, and so both s and Λ are constrained to be positive. Expression levels measured by microarrays have previously been modelled as having expression-dependent noise [7]. To address this, we rewrite the above formulation as

    x = r(Λs + ε),    (1)

where r is a scale factor and ε is a zero-mean normally distributed random variable with a diagonal covariance matrix, Ψ, denoted as p(ε) = N(ε; 0, Ψ). The prior distribution for the abundance of the splice isoforms is given by a truncated normal distribution, denoted as p(s) ∝ N(s; 0, I)[s ≥ 0], where [·] is an indicator function such that [s ≥ 0] = 1 if ∀i, si ≥ 0, and [s ≥ 0] = 0 otherwise. Lastly, there is a need to account for aberrant observations (e.g. due to faulty probes, flakes of dust, etc.) with an outlier model. The complete GenASAP model (shown in Figure 3) accounts for the observations as the outcome of either applying equation (1) or an outlier model. To avoid degenerate cases and ensure meaningful and interpretable results, the number of faulty probes considered for each AS event may not exceed two, as indicated by the filled-in square constraint node in Figure 3. The distribution of x conditional on the latent variables, s, r, and o, is:

    p(x | s, r, o) = ∏i N(xi; rΛi s, r²Ψi)^[oi=0] · N(xi; Ei, Vi)^[oi=1],    (2)

where oi ∈ {0, 1} is a Bernoulli random variable indicating if the measurement at probe xi is the result of the AS model or the outlier model, parameterized by p(oi = 1) = γi. The parameters of the outlier model, E and V, are not optimized and are set to the mean and variance of the data.
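To make the generative process concrete, here is a small sketch (ours, not the authors' code) that samples one expression profile from equation (1). The weight matrix is the "intuitive" hybridization pattern described later in the paper, with made-up unit weights; the truncated N(0, I)[s ≥ 0] prior is sampled by taking absolute values of standard normals, which is exact because the restriction of a zero-mean normal to the positive orthant is the half-normal.

```python
import numpy as np

# Rows: C1, C2, A, C1:A, A:C2, C1:C2 probes; columns: inclusion, exclusion isoform.
# Unit weights are purely illustrative.
LAMBDA = np.array([[1.0, 1.0],   # C1 body probe: both isoforms
                   [1.0, 1.0],   # C2 body probe: both isoforms
                   [1.0, 0.0],   # A body probe: inclusion only
                   [1.0, 0.0],   # C1:A junction: inclusion only
                   [1.0, 0.0],   # A:C2 junction: inclusion only
                   [0.0, 1.0]])  # C1:C2 junction: exclusion only

def sample_event(Lam, Psi_diag, r, rng):
    """Draw (x, s) from x = r * (Lam @ s + eps), eq. (1), ignoring outliers."""
    s = np.abs(rng.standard_normal(Lam.shape[1]))        # truncated-normal isoform abundances
    eps = rng.standard_normal(Lam.shape[0]) * np.sqrt(Psi_diag)
    return r * (Lam @ s + eps), s
```

Setting Psi_diag to zeros yields a noise-free profile that is exactly the scaled mixture of the two isoform columns, which is a convenient sanity check on the linearity assumption.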
2.2 Variational learning in the GenASAP model

To infer the posterior distribution over the splice isoform abundances while at the same time learning the model parameters, we use a variational expectation-maximization (EM) algorithm. EM maximizes the log likelihood of the data by iteratively estimating the posterior distribution of the model given the data in the expectation (E) step, and maximizing the log likelihood with respect to the parameters, while keeping the posterior fixed, in the maximization (M) step. Variational EM is used when, as in the case of GenASAP, the exact posterior is intractable. Variational EM minimizes the free energy of the model, defined as the KL-divergence between the joint distribution of the latent and observed variables and the approximation to the posterior under the model parameters [5, 6]. We approximate the true posterior using the Q distribution given by

    Q({s(t)}, {o(t)}, {r(t)}) = ∏_{t=1..T} Q(r(t)) Q(o(t) | r(t)) ∏_i Q(si(t) | oi(t), r(t))
                              = ∏_{t=1..T} Z(t)⁻¹ ρ(t) ω(t) N(s(t); µro(t)d, Σro(t)d) [s(t) ≥ 0],    (3)

where Z(t) is a normalization constant, the superscript d indicates that Σ is constrained to be diagonal, and there are T i.i.d. AS events. For computational efficiency, r is selected from a finite set, r ∈ {r1, r2, ..., rC}, with uniform probability. The variational free energy is given by

    F(Q, P) = Σ_r Σ_o ∫_s Q({s(t)}, {o(t)}, {r(t)}) log [ Q({s(t)}, {o(t)}, {r(t)}) / P({s(t)}, {o(t)}, {r(t)}, {x(t)}) ].    (4)

Variational EM minimizes the free energy by iteratively updating the Q distribution's variational parameters (ρ(t), ω(t), µro(t)d, and Σro(t)d) in the E-step, and the model parameters (Λ, Ψ, {r1, r2, ..., rC}, and γ) in the M-step. The resulting updates are too long to be shown in the context of this paper and are discussed in detail elsewhere [3]. A few particular points regarding the E-step are worth covering in detail here.

If the prior on s were a full normal distribution, there would be no need for a variational approach, and exact EM would be possible. For a truncated normal distribution, however, the mixing proportions Q(r)Q(o|r) cannot be calculated analytically except for the case where s is scalar, necessitating the diagonality constraint. Note that if Σ were allowed to be a full covariance matrix, equation (3) would be the true posterior, and we could find the sufficient statistics of Q(s(t) | o(t), r(t)):

    µro(t)   = (I + Λᵀ(I − O(t))ᵀΨ⁻¹(I − O(t))Λ)⁻¹ Λᵀ(I − O(t))ᵀΨ⁻¹ x(t) r(t)⁻¹    (5)
    Σro(t)⁻¹ = I + Λᵀ(I − O(t))ᵀΨ⁻¹(I − O(t))Λ                                     (6)

where O(t) is a diagonal matrix with elements Oi,i = oi. Furthermore, it can be easily shown that the optimal settings for µd and Σd approximating a normal distribution with full covariance Σ and mean µ are

    µd_optimal   = µ             (7)
    Σd_optimal⁻¹ = diag(Σ⁻¹)     (8)

In the truncated case, equation (8) is still true. Equation (7) does not hold, though, and µd_optimal cannot be found analytically. In our experiments, we found that using equation (7) still decreases the free energy every E-step, and it is significantly more efficient than using, for example, a gradient descent method to compute the optimal µd.

[Figure 4: (a) An intuitive set of weights. Based on the biological background, one would expect to see the inclusion isoform hybridize to the probes measuring C1, C2, A, C1:A, and A:C2, while the exclusion isoform hybridizes to C1, C2, and C1:C2. (b) The learned set of weights closely agrees with the intuition, and captures cross-hybridization between the probes.]

[Figure 5: Three examples of data cases and their predictions. (a) The data does not follow our notion of single cassette exon AS, but the AS level is predicted accurately by the model. (b) The probe C1:A is marked as an outlier, allowing the model to predict the other probes accurately. (c) Two probes are marked as outliers, and the model is still successful in predicting the AS levels.]

3 Making biological predictions about alternative splicing

The results presented in this paper were obtained using two stages of learning. In the first step, the weight matrix, Λ, is learned on a subset of the data that is selected for quality. Two selection criteria were used: (a) sequencing data was used to select those cases for which, with high confidence, no other AS event is present (Figure 1), and (b) probe sets were selected for high expression, as determined by a set of negative controls. The second selection criterion is motivated by the common assumption that low intensity measurements are of lesser quality (see Section 1.1). In the second step, Λ is kept fixed, and we introduce the additional constraint that the noise is isotropic (Ψ = ψI) and learn on the entire data set. The constraint on the noise is introduced to prevent the model from using only a subset of the six probes for making the final set of predictions. We show a typical learned set of weights in Figure 4. The weights fit well with our intuition of what they should be to capture the presence of the two isoforms. Moreover, the learned weights account for the specific trends in the data. Examples of model prediction based on the microarray data are shown in Figure 5.
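The sufficient statistics of eqs. (5)-(8) translate directly into code. The sketch below (ours, with hypothetical names) computes the Gaussian mean and precision of Q(s | o, r) for one event, before the positivity truncation is taken into account, along with the diagonal approximation of eq. (8).

```python
import numpy as np

def posterior_stats(x, Lam, Psi_diag, r, o):
    """Eqs. (5)-(6): mean and diagonal-approximated variances of the
    untruncated Q(s | o, r); o is a 0/1 vector flagging outlier probes."""
    keep = np.diag(1.0 - o)                        # (I - O): zero out outlier probes
    Psi_inv = np.diag(1.0 / Psi_diag)
    prec = np.eye(Lam.shape[1]) + Lam.T @ keep @ Psi_inv @ keep @ Lam   # eq. (6)
    mu = np.linalg.solve(prec, Lam.T @ keep @ Psi_inv @ x) / r          # eq. (5)
    var_diag = 1.0 / np.diag(prec)                 # eq. (8): diagonal approximation
    return mu, var_diag
```

In the near-noiseless limit the posterior mean collapses onto the generating isoform abundances, since the precision is dominated by the data term ΛᵀΨ⁻¹Λ.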
Due to the nature of the microarray data, we do not expect all the inferred abundances to be equally good, and we devised a scoring criterion that ranks each AS event based on its fit to the model. Intuitively, given two input vectors that are equivalent up to a scale factor, with inferred MAP estimations that are equal up to the same scale factor, we would like their scores to be identical. The scoring criterion used, therefore, is Σ_k (x_k − rΛ_k s)² / (x_k + rΛ_k s)², where the MAP estimations for r and s are used. This scoring criterion can be viewed as proportional to the sum of noise-to-signal ratios, as estimated using the two values given by the observation and the model's best prediction of that observation.

Rank    Pearson's correlation coefficient   False positive rate
500     0.94                                0.11
1000    0.95                                0.08
2000    0.95                                0.05
5000    0.79                                0.20
10000   0.79                                0.25
15000   0.78                                0.29
20000   0.75                                0.32
30000   0.65                                0.42

Table 1: Model performance evaluated at various ranks. Using 180 RT-PCR measurements, we are able to predict the model's performance at various ranks. Two evaluation criteria are used: Pearson's correlation coefficient between the model's predictions and the RT-PCR measurements, and the false positive rate, where a prediction is considered a false positive if it is more than 15% away from the RT-PCR measurement.

Since it is the relative amount of the isoforms that is of most interest, we need to use the inferred distribution of the isoform abundances to obtain an estimate for the relative levels of AS. It is not immediately clear how this should be done. We do, however, have RT-PCR measurements for 180 AS events to guide us (see Figure 6 for details).
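The scoring criterion above is straightforward to compute from the MAP estimates; a small sketch (function and variable names are ours):

```python
import numpy as np

def as_score(x, r, Lam, s):
    """Rank score sum_k (x_k - r*Lam_k s)^2 / (x_k + r*Lam_k s)^2,
    evaluated at the MAP estimates of r and s (shapes are illustrative:
    x is (K,), Lam is (K, D), s is (D,), r is a scalar)."""
    pred = r * (Lam @ s)                       # model's best prediction per probe
    return np.sum((x - pred) ** 2 / (x + pred) ** 2)
```

A perfect reconstruction scores 0, and the score is unchanged when x, and the MAP estimate of r, are rescaled together, matching the invariance the text asks for.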
Using the top 50 ranked RT-PCR measurements, we fit three parameters, {a1, a2, a3}, such that the proportion of the exclusion isoform present, p, is given by p = a1 s2 / (s1 + a2 s2) + a3, where s1 is the MAP estimate of the abundance of the inclusion isoform, s2 is the MAP estimate of the abundance of the exclusion isoform, and the RT-PCR measurements are used as the target p. The parameters are fitted using gradient descent on a least squared error (LSE) evaluation criterion. We used two criteria to evaluate the quality of the AS model predictions. Pearson's correlation coefficient (PCC) is used to evaluate the overall ability of the model to correctly estimate trends in the data. PCC is invariant to affine transformation and so is independent of the transformation parameters a1 and a3 discussed above, while the parameter a2 was found to affect PCC very little. The PCC stays above 0.75 for the top two thirds of ranked predictions. The second evaluation criterion used is the false positive rate, where a prediction is considered a false positive if it is more than 15% away from the RT-PCR measurement. This allows us to say, for example, that if a prediction is within the top 10000, we are 75% confident that it is within 15% of the actual levels of AS.

4 Summary

We designed a novel AS model for the inference of the relative abundance of two alternatively spliced isoforms from six measurements. Unsupervised learning in the model is performed using a structured variational EM algorithm, which correctly captures the underlying structure of the data, as suggested by its biological nature. The AS model, though presented here for cassette exon AS events, can be used to learn any type of AS and, with a simple adjustment, multiple types. The predictions obtained from the AS model are currently being used to verify various claims about the role of AS in evolution and functional genomics, and to help identify sequences that affect the regulation of AS.
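The calibration step described in Section 3 above, fitting {a1, a2, a3} so that p = a1 s2 / (s1 + a2 s2) + a3 matches the RT-PCR targets by gradient descent on squared error, can be sketched as follows. The learning rate, iteration count, and initialization are our assumptions; the paper does not specify them:

```python
import numpy as np

def fit_calibration(s1, s2, p_target, lr=0.01, steps=5000):
    """Fits {a1, a2, a3} of p = a1 * s2 / (s1 + a2 * s2) + a3 by gradient
    descent on the mean squared error against RT-PCR targets (a sketch;
    hyper-parameters and names are our assumptions)."""
    a1, a2, a3 = 1.0, 1.0, 0.0
    for _ in range(steps):
        q = s2 / (s1 + a2 * s2)               # intermediate ratio
        err = a1 * q + a3 - p_target          # residuals
        # chain-rule gradients of the mean squared error
        g1 = 2 * np.mean(err * q)
        g2 = 2 * np.mean(err * a1 * (-s2 * s2) / (s1 + a2 * s2) ** 2)
        g3 = 2 * np.mean(err)
        a1, a2, a3 = a1 - lr * g1, a2 - lr * g2, a3 - lr * g3
    return a1, a2, a3
```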
[Figure 6(a) data, % exclusion isoform across nine tissues (Intestine, Testis, Kidney, Salivary, Brain, Spleen, Liver, Muscle, Lung). RT-PCR measurements: 27, 24, 26, 26, 51, 75, 60, 85, 100. AS model predictions: 14, 22, 27, 32, 47, 46, 66, 78, 63. Panel (b): scatter plot of RT-PCR measurement vs. AS model prediction.]

Figure 6: (a) Sample RT-PCR. RNA extracted from the cell is reverse-transcribed to DNA, amplified and labelled with radioactive or fluorescent molecules. The sample is pulled through a viscous gel in an electric field (DNA, being an acid, is negatively charged). Shorter strands travel further through the gel than longer ones, resulting in two distinct bands, corresponding to the two isoforms, when exposed to a photosensitive or x-ray film. (b) A scatter plot showing the RT-PCR measurements as compared to the AS model predictions. The plot shows all available RT-PCR measurements with a rank of 8000 or better.

The AS model presented assumes a single weight matrix for all data cases. This is an oversimplified view of the data, and current work is being carried out on identifying probe-specific expression profiles. However, due to the low dimensionality of the problem (10 tissues, six probes per event), care must be taken to avoid overfitting and to ensure meaningful interpretations.

Acknowledgments

We would like to thank Wen Zhang, Naveed Mohammad, and Timothy Hughes for their contributions in generating the data set. This work was funded in part by operating and infrastructure grants from the CIHR and CFI, operating grants from NSERC, and a Premier's Research Excellence Award.

References

[1] J. M. Johnson et al. Genome-wide survey of human alternative pre-mRNA splicing with exon junction microarrays. Science, 302:2141–44, 2003.
[2] L. Cartegni et al. Listening to silence and understanding nonsense: exonic mutations that affect splicing. Nature Gen. Rev., 3:285–98, 2002.
[3] Q. Pan et al.
Revealing global regulatory features of mammalian alternative splicing using a quantitative microarray platform. Molecular Cell, 16(6):929–41, 2004.
[4] Q. Pan et al. Alternative splicing of conserved exons is frequently species-specific in human and mouse. Trends Gen., in press, 2005.
[5] M. I. Jordan, Z. Ghahramani, T. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[6] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models. MIT Press, Cambridge, 1998.
[7] D. M. Rocke and B. Durbin. A model for measurement error for gene expression arrays. Journal of Computational Biology, 8(6):557–69, 2001.
|
2004
|
122
|
2,533
|
Semi-supervised Learning by Entropy Minimization Yves Grandvalet ∗ Heudiasyc, CNRS/UTC 60205 Compiègne cedex, France grandval@utc.fr Yoshua Bengio Dept. IRO, Université de Montréal Montreal, Qc, H3C 3J7, Canada bengioy@iro.umontreal.ca Abstract We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables unlabeled data to be incorporated into standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solution benefits from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the “cluster assumption”. Finally, we also illustrate that the method can be far superior to manifold learning in high dimensional spaces. 1 Introduction In the classical supervised learning classification framework, a decision rule is to be learned from a learning set Ln = {(xi, yi)}_{i=1}^{n}, where each example is described by a pattern xi ∈ X and by the supervisor’s response yi ∈ Ω = {ω1, . . . , ωK}. We consider semi-supervised learning, where the supervisor’s responses are limited to a subset of Ln. In the terminology used here, semi-supervised learning refers to learning a decision rule on X from labeled and unlabeled data. However, the related problem of transductive learning, i.e. of predicting labels on a set of predefined patterns, is addressed as a side issue. Semi-supervised problems occur in many applications where labeling is performed by human experts.
They have been receiving much attention during the last few years, but some important issues are unresolved [10]. In the probabilistic framework, semi-supervised learning can be modeled as a missing data problem, which can be addressed by generative models such as mixture models thanks to the EM algorithm and extensions thereof [6]. Generative models apply to the joint density of patterns and class (X, Y ). They have appealing features, but they also have major drawbacks. Their estimation is much more demanding than that of discriminative models, since the model of P(X, Y ) is exhaustive, hence necessarily more complex than the model of P(Y |X). [Footnote ∗: This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence IST-2002-506778. This publication only reflects the authors’ views.] More parameters are to be estimated, resulting in more uncertainty in the estimation process. The generative model, being more precise, is also more likely to be misspecified. Finally, the fitness measure is not discriminative, so that better models are not necessarily better predictors of class labels. These difficulties have led to proposals aiming at processing unlabeled data in the framework of supervised classification [1, 5, 11]. Here, we propose an estimation principle applicable to any probabilistic classifier, aiming at making the most of unlabeled data when they are beneficial, while providing control over their contribution so as to make the learning scheme robust. 2 Derivation of the Criterion 2.1 Likelihood We first recall how the semi-supervised learning problem fits into standard supervised learning by using the maximum (conditional) likelihood estimation principle.
The learning set is denoted Ln = {(xi, zi)}_{i=1}^{n}, where z ∈ {0, 1}^K denotes the dummy variable representing the actually available labels (while y represents the precise and complete class information): if xi is labeled ωk, then zik = 1 and ziℓ = 0 for ℓ ≠ k; if xi is unlabeled, then ziℓ = 1 for ℓ = 1, . . . , K. We assume that labeling is missing at random, that is, for all unlabeled examples, P(z|x, ωk) = P(z|x, ωℓ) for any (ωk, ωℓ) pair, which implies

P(ωk|x, z) = zk P(ωk|x) / Σ_{ℓ=1}^{K} zℓ P(ωℓ|x) .   (1)

Assuming independent examples, the conditional log-likelihood of (Z|X) on the observed sample is then

L(θ; Ln) = Σ_{i=1}^{n} [ log( Σ_{k=1}^{K} zik fk(xi; θ) ) + h(zi) ] ,   (2)

where h(z), which does not depend on P(X, Y ), is only affected by the missingness mechanism, and fk(x; θ) is the model of P(ωk|x) parameterized by θ. This criterion is a concave function of fk(xi; θ), and for simple models such as the ones provided by logistic regression, it is also concave in θ, so that the global solution can be obtained by numerical optimization. Maximizing (2) corresponds to maximizing the complete likelihood if no assumption whatsoever is made on P(X) [6]. Provided the fk(xi; θ) sum to one, the likelihood is not affected by unlabeled data: unlabeled data convey no information. In the maximum a posteriori (MAP) framework, Seeger remarks that unlabeled data are useless regarding discrimination when the priors on P(X) and P(Y |X) factorize [10]: observing x does not inform about y, unless the modeler assumes so. Benefitting from unlabeled data requires assumptions of some sort on the relationship between X and Y . In the Bayesian framework, this will be encoded by a prior distribution. As there is no such thing as a universally relevant prior, we should look for an induction bias exploiting unlabeled data when the latter is known to convey information. 2.2 When Are Unlabeled Examples Informative?
Theory provides little support for the numerous experimental results [5, 7, 8] showing that unlabeled examples can help the learning process. Learning theory is mostly developed at the two extremes of the statistical paradigm: in parametric statistics, where examples are known to be generated from a known class of distributions, and in the distribution-free Structural Risk Minimization (SRM) or Probably Approximately Correct (PAC) frameworks. Semi-supervised learning, in the terminology used here, does not fit the distribution-free frameworks: no positive statement can be made without distributional assumptions, as for some distributions P(X, Y ) unlabeled data are non-informative while supervised learning is an easy task. In this regard, generalizing from labeled and unlabeled data may differ from transductive inference. In parametric statistics, theory has shown the benefit of unlabeled examples, either for specific distributions [9], or for mixtures of the form P(x) = pP(x|ω1) + (1 − p)P(x|ω2), where the estimation problem is essentially reduced to that of estimating the mixture parameter p [4]. These studies conclude that the (asymptotic) information content of unlabeled examples decreases as classes overlap.¹ Thus, the assumption that classes are well separated is sensible if we expect to take advantage of unlabeled examples. The conditional entropy H(Y |X) is a measure of class overlap, which is invariant to the parameterization of the model. This measure is related to the usefulness of unlabeled data where labeling is indeed ambiguous. Hence, we will measure the conditional entropy of class labels conditioned on the observed variables

H(Y |X, Z) = −E_{XYZ}[log P(Y |X, Z)] ,   (3)

where E_{XYZ} denotes the expectation with respect to (X, Y, Z). In the Bayesian framework, assumptions are encoded by means of a prior on the model parameters.
Stating that we expect a high conditional entropy does not uniquely define the form of the prior distribution, but the latter can be derived by resorting to the maximum entropy principle.² Let (θ, ψ) denote the model parameters of P(X, Y, Z); the maximum entropy prior verifying E_{ΘΨ}[H(Y |X, Z)] = c, where the constant c quantifies how small the entropy should be on average, takes the form

P(θ, ψ) ∝ exp(−λ H(Y |X, Z)) ,   (4)

where λ is the positive Lagrange multiplier corresponding to the constant c. Computing H(Y |X, Z) requires a model of P(X, Y, Z), whereas the choice of the diagnosis paradigm is motivated by the possibility of limiting modeling to conditional probabilities. We circumvent the need for additional modeling by applying the plug-in principle, which consists in replacing the expectation with respect to (X, Z) by the sample average. This substitution, which can be interpreted as “modeling” P(X, Z) by its empirical distribution, yields

H_emp(Y |X, Z; Ln) = −(1/n) Σ_{i=1}^{n} Σ_{k=1}^{K} P(ωk|xi, zi) log P(ωk|xi, zi) .   (5)

This empirical functional is plugged into (4) to define an empirical prior on parameters θ, that is, a prior whose form is partly defined from data [2]. 2.3 Entropy Regularization Recalling that fk(x; θ) denotes the model of P(ωk|x), the model of P(ωk|x, z) (1) is defined as follows:

gk(x, z; θ) = zk fk(x; θ) / Σ_{ℓ=1}^{K} zℓ fℓ(x; θ) .

For labeled data, gk(x, z; θ) = zk, and for unlabeled data, gk(x, z; θ) = fk(x; θ). From now on, we drop the reference to parameter θ in fk and gk to lighten notation. The

[Footnote 1: This statement, given explicitly by [9], is also formalized, though not stressed, by [4], where the Fisher information for unlabeled examples at the estimate p̂ is clearly a measure of the overlap between class conditional densities: I_u(p̂) = ∫ (P(x|ω1) − P(x|ω2))² / (p̂ P(x|ω1) + (1 − p̂) P(x|ω2)) dx.]
[Footnote 2: Here, maximum entropy refers to the construction principle which enables distributions to be derived from constraints, not to the content of priors regarding entropy.]
MAP estimate is the maximizer of the posterior distribution, that is, the maximizer of

C(θ, λ; Ln) = L(θ; Ln) − λ H_emp(Y |X, Z; Ln)
            = Σ_{i=1}^{n} log( Σ_{k=1}^{K} zik fk(xi) ) + λ Σ_{i=1}^{n} Σ_{k=1}^{K} gk(xi, zi) log gk(xi, zi) ,   (6)

where the constant terms in the log-likelihood (2) and log-prior (4) have been dropped. While L(θ; Ln) is only sensitive to labeled data, H_emp(Y |X, Z; Ln) is only affected by the value of fk(x) on unlabeled data. Note that the approximation H_emp (5) of H (3) breaks down for wiggly functions fk(·) with abrupt changes between data points (where P(X) is bounded from below). As a result, it is important to constrain fk(·) in order to enforce the closeness of the two functionals. In the following experimental section, we imposed a smoothness constraint on fk(·) by adding to the criterion C (6) a penalizer with its corresponding Lagrange multiplier ν.

3 Related Work

Self-Training. Self-training [7] is an iterative process in which a learner imputes the labels of examples that have been classified with confidence in the previous step. Amini et al. [1] analyzed this technique and showed that it is equivalent to a version of the classification EM algorithm, which minimizes the likelihood deprived of the entropy of the partition. In the context of conditional likelihood with labeled and unlabeled examples, the criterion is

Σ_{i=1}^{n} [ log( Σ_{k=1}^{K} zik fk(xi) ) + Σ_{k=1}^{K} gk(xi) log gk(xi) ] ,

which is recognized as an instance of the criterion (6) with λ = 1. Self-confident logistic regression [5] is another algorithm optimizing the criterion for λ = 1. Using smaller λ values is expected to have two benefits: first, the influence of unlabeled examples can be controlled, in the spirit of the EM-λ [8], and second, slowly increasing λ defines a scheme similar to deterministic annealing, which should help the optimization process to avoid poor local minima of the criterion.
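As a concrete illustration of criterion (6), a minimal numpy version for a binary logistic regression model could look like the following. The label encoding (an all-ones row of Z marks an unlabeled point) and all names are ours, and the small constant inside the log is a numerical guard not present in the paper:

```python
import numpy as np

def criterion(theta, X, Z, lam):
    """Minimum-entropy criterion C(theta, lambda) of eq. (6) for binary
    logistic regression (a sketch; theta is the weight vector and X is
    assumed to carry a bias column)."""
    p1 = 1.0 / (1.0 + np.exp(-(X @ theta)))   # model posterior of class 2
    F = np.column_stack([1.0 - p1, p1])       # f_k(x_i; theta), rows sum to one
    num = Z * F                               # z_ik f_k(x_i)
    G = num / num.sum(axis=1, keepdims=True)  # g_k(x_i, z_i)
    ll = np.sum(np.log(num.sum(axis=1)))      # conditional log-likelihood L
    ent = np.sum(G * np.log(G + 1e-12))       # sum_i sum_k g log g
    return ll + lam * ent
```

Maximizing this over θ with any gradient-based optimizer gives minimum entropy logistic regression; λ = 0 recovers plain supervised logistic regression and λ = 1 corresponds to the self-training criterion above.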
Minimum entropy methods. Minimum entropy regularizers have been used in other contexts to encode learnability priors (e.g. [3]). In a sense, H_emp can be seen as a poor man's way to generalize this approach to continuous input spaces. This empirical functional was also used by Zhu et al. [13, Section 6] as a criterion to learn weight function parameters in the context of transduction on manifolds. Input-Dependent Regularization. Our criterion differs from input-dependent regularization [10, 11] in that it is expressed only in terms of P(Y |X, Z) and does not involve P(X). However, we stress that for unlabeled data, the regularizer agrees with the complete likelihood provided P(X) is small near the decision surface. Indeed, whereas a generative model would maximize log P(X) on the unlabeled data, our criterion minimizes the conditional entropy on the same points. In addition, when the model is regularized (e.g. with weight decay), the conditional entropy is prevented from being too small close to the decision surface. This favors putting the decision surface in a low-density area. 4 Experiments 4.1 Artificial Data In this section, we chose a simple experimental setup in order to avoid artifacts stemming from optimization problems. Our goal is to check to what extent supervised learning can be improved by unlabeled examples, and whether minimum entropy can compete with generative models, which are usually advocated in this framework. The minimum entropy regularizer is applied to the logistic regression model. It is compared to logistic regression fitted by maximum likelihood (ignoring unlabeled data) and logistic regression with all labels known. The former shows what has been gained by handling unlabeled data, and the latter provides the “crystal ball” performance obtained by guessing all labels correctly.
All hyper-parameters (weight-decay for all logistic regression models plus the λ parameter (6) for minimum entropy) are tuned by ten-fold cross-validation. Minimum entropy logistic regression is also compared to the classic EM algorithm for Gaussian mixture models (two means and one common covariance matrix estimated by maximum likelihood on labeled and unlabeled examples, see e.g. [6]). Bad local maxima of the likelihood function are avoided by initializing EM with the parameters of the true distribution when the latter is a Gaussian mixture, or with maximum likelihood parameters on the (fully labeled) test sample when the distribution departs from the model. This initialization advantages EM, since it is guaranteed to pick, among all local maxima of the likelihood, the one which is in the basin of attraction of the optimal value. Furthermore, this initialization prevents interference that may result from the “pseudo-labels” given to unlabeled examples at the first E-step. In particular, “label switching” (i.e. badly labeled clusters) is avoided at this stage. Correct joint density model. In the first series of experiments, we consider two-class problems in a 50-dimensional input space. Each class is generated with equal probability from a normal distribution. Class ω1 is normal with mean (a, a, . . . , a) and unit covariance matrix. Class ω2 is normal with mean −(a, a, . . . , a) and unit covariance matrix. The parameter a tunes the Bayes error, which varies from 1 % to 20 % (1 %, 2.5 %, 5 %, 10 %, 20 %). The learning sets comprise nl labeled examples (nl = 50, 100, 200) and nu unlabeled examples (nu = nl × (1, 3, 10, 30, 100)). Overall, 75 different setups are evaluated, and for each one, 10 different training samples are generated. Generalization performance is estimated on a test set of size 10 000. This benchmark provides a comparison for the algorithms in a situation where unlabeled data are known to convey information.
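For reproducibility, the first benchmark can be sampled in a few lines; the function below is our sketch of the setup just described (equal class priors, means ±(a, . . . , a), unit covariance), not the authors' code:

```python
import numpy as np

def sample_benchmark(n_l, n_u, a, d=50, rng=None):
    """Two-class benchmark: class omega_1 ~ N(+a*1, I_d), class omega_2 ~
    N(-a*1, I_d), equal priors. Returns labeled data, labels (0 for
    omega_1, 1 for omega_2), and an unlabeled pool (our sketch)."""
    rng = np.random.default_rng(rng)
    def draw(n):
        y = rng.integers(0, 2, size=n)             # equal class priors
        means = np.where(y[:, None] == 0, a, -a)   # rows of +/- (a, ..., a)
        return means * np.ones(d) + rng.standard_normal((n, d)), y
    Xl, yl = draw(n_l)
    Xu, _ = draw(n_u)                              # true labels discarded
    return Xl, yl, Xu
```

The class labels of the unlabeled pool are drawn and then thrown away, so the pool follows the same marginal P(x) as the labeled sample.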
Besides the favorable initialization of the EM algorithm to the optimal parameters, EM benefits from the correctness of the model: data were generated according to the model, that is, two Gaussian subpopulations with identical covariances. The logistic regression model is only compatible with the joint distribution, which is a weaker fulfillment than correctness. As there is no modeling bias, differences in error rates are only due to differences in estimation efficiency. The overall error rates (averaged over all settings) are in favor of minimum entropy logistic regression (14.1 ± 0.3 %). EM (15.6 ± 0.3 %) does worse on average than logistic regression (14.9 ± 0.3 %). For reference, the average Bayes error rate is 7.7 %, and logistic regression reaches 10.4 ± 0.1 % when all examples are labeled. Figure 1 provides more informative summaries than these raw numbers. The plots represent the error rates (averaged over nl) versus the Bayes error rate and the nu/nl ratio. The first plot shows that, as asymptotic theory suggests [4, 9], unlabeled examples are mostly informative when the Bayes error is low. This observation validates the relevance of the minimum entropy assumption. This graph also illustrates the consequence of the demanding parametrization of generative models. Mixture models are outperformed by the simple logistic regression model when the sample size is low, since their number of parameters grows quadratically (vs. linearly) with the number of input features. The second plot shows that the minimum entropy model quickly takes advantage of unlabeled data when classes are well separated. With nu = 3nl, the model considerably improves upon the one discarding unlabeled data. At this stage, the generative models do not perform well, as the number of available examples is low compared to the number of parameters in the model.
[Figure 1: two test-error plots; axes as described in the caption.] Figure 1: Left: test error vs. Bayes error rate for nu/nl = 10; right: test error vs. nu/nl ratio for 5 % Bayes error (a = 0.23). Test errors of minimum entropy logistic regression (◦) and mixture models (+). The errors of logistic regression (dashed) and logistic regression with all labels known (dash-dotted) are shown for reference.

However, for very large sample sizes, with 100 times more unlabeled examples than labeled examples, the generative approach eventually becomes more accurate than the diagnosis approach. Misspecified joint density model. In a second series of experiments, the setup is slightly modified by letting the class-conditional densities be corrupted by outliers. For each class, the examples are generated from a mixture of two Gaussians centered on the same mean: a unit variance component gathers 98 % of the examples, while the remaining 2 % are generated from a large variance component, where each variable has a standard deviation of 10. The mixture model used by EM is slightly misspecified since it is a simple Gaussian mixture. The results, displayed in the left-hand side of Figure 2, should be compared with the right-hand side of Figure 1. The generative model dramatically suffers from the misspecification and behaves worse than logistic regression for all sample sizes. The unlabeled examples first have a beneficial effect on test error, then have a detrimental effect when they overwhelm the number of labeled examples. On the other hand, the diagnosis models behave smoothly as in the previous case, and the minimum entropy criterion's performance improves.

[Figure 2: two plots of test error (%) vs. the nu/nl ratio (1, 3, 10, 30, 100).] Figure 2: Test error vs. nu/nl ratio for a = 0.23. Average test errors for minimum entropy logistic regression (◦) and mixture models (+).
The test error rates of logistic regression (dotted) and logistic regression with all labels known (dash-dotted) are shown for reference. Left: experiment with outliers; right: experiment with uninformative unlabeled data.

The last series of experiments illustrates the robustness with respect to the cluster assumption, by testing on distributions where unlabeled examples are not informative, and where a low density of P(X) does not indicate a boundary region. The data is drawn from two Gaussian clusters as in the first series of experiments, but the label is now independent of the clustering: an example x belongs to class ω1 if x2 > x1 and belongs to class ω2 otherwise; the Bayes decision boundary now separates each cluster in its middle. The mixture model is unchanged. It is now far from the model used to generate the data. The right-hand-side plot of Figure 2 shows that the favorable initialization of EM does not prevent the model from being fooled by unlabeled data: its test error steadily increases with the amount of unlabeled data. On the other hand, the diagnosis models behave well, and the minimum entropy algorithm is not distracted by the two clusters; its performance is nearly identical to that of training with labeled data only (cross-validation provides λ values close to zero), which can be regarded as the ultimate performance in this situation. Comparison with manifold transduction. Although our primary goal is to infer a decision function, we also provide comparisons with a transduction algorithm of the “manifold family”. We chose the consistency method of Zhou et al. [12] for its simplicity. As suggested by the authors, we set α = 0.99 and the scale parameter σ² was optimized on test results [12]. The results are reported in Table 1. The experiments are limited due to the memory requirements of the consistency method in our naive MATLAB implementation. Table 1: Error rates (%) of minimum entropy (ME) vs.
consistency method (CM), for a = 0.23, nl = 50, and (a) pure Gaussian clusters, (b) Gaussian clusters corrupted by outliers, (c) a class boundary separating one Gaussian cluster.

nu        50           150          500          1500
a) ME   10.8 ± 1.5    9.8 ± 1.9    8.8 ± 2.0    8.3 ± 2.6
a) CM   21.4 ± 7.2   25.5 ± 8.1   29.6 ± 9.0   26.8 ± 7.2
b) ME    8.5 ± 0.9    8.3 ± 1.5    7.5 ± 1.5    6.6 ± 1.5
b) CM   22.0 ± 6.7   25.6 ± 7.4   29.8 ± 9.7   27.7 ± 6.8
c) ME    8.7 ± 0.8    8.3 ± 1.1    7.2 ± 1.0    7.2 ± 1.7
c) CM   51.6 ± 7.9   50.5 ± 4.0   49.3 ± 2.6   50.2 ± 2.2

The results are extremely poor for the consistency method, whose error is far above that of minimum entropy, and which shows no sign of improvement as the sample of unlabeled data grows. Furthermore, when classes do not correspond to clusters, the consistency method performs random class assignments. In fact, our setup, which was designed for the comparison of global classifiers, is extremely unfavorable to manifold methods, since the data is truly 50-dimensional. In this situation, local methods suffer from the “curse of dimensionality”, and many more unlabeled examples would be required to get sensible results. Hence, these results mainly illustrate that manifold learning is not the best choice in semi-supervised learning for truly high-dimensional data. 4.2 Facial Expression Recognition We now consider an image recognition problem, consisting in recognizing seven (balanced) classes corresponding to the universal emotions (anger, fear, disgust, joy, sadness, surprise and neutral). The patterns are gray level images of frontal faces, with standardized positions. The data set comprises 375 such pictures made of 140 × 100 pixels. We tested kernelized logistic regression (Gaussian kernel), its minimum entropy version, nearest neighbor and the consistency method. We repeatedly (10 times) sampled 1/10 of the dataset to provide the labeled part, and used the remainder for testing.
Although (α, σ²) were chosen to minimize the test error, the consistency method performed poorly, with 63.8 ± 1.3 % test error (compared to 86 % error for random assignments). Nearest neighbor gets similar results, with 63.1 ± 1.3 % test error, and kernelized logistic regression (ignoring unlabeled examples) improves on this, reaching 53.6 ± 1.3 %. Minimum entropy kernelized logistic regression achieves 52.0 ± 1.9 % error (compared to about 20 % error for humans on this database). The scale parameter chosen for kernelized logistic regression (by ten-fold cross-validation) amounts to using a global classifier. Again, the local methods fail. This may be explained by the fact that the database contains several pictures of each person, with different facial expressions. Hence, local methods are likely to pick up the same identity instead of the same expression, while global methods are able to learn the relevant directions. 5 Discussion We propose to tackle the semi-supervised learning problem in the supervised learning framework by using the minimum entropy regularizer. This regularizer is motivated by theory, which shows that unlabeled examples are mostly beneficial when classes have small overlap. The MAP framework provides a means to control the weight of unlabeled examples, and thus to depart from optimism when unlabeled data tend to harm classification. Our proposal encompasses self-learning as a particular case, as minimizing entropy increases the confidence of the classifier output. It also approaches the solution of transductive large margin classifiers in another limiting case, as minimizing entropy is a means to drive the decision boundary away from learning examples. The minimum entropy regularizer can be applied to both local and global classifiers. As a result, it can improve over manifold learning when the dimensionality of data is effectively high, that is, when data do not lie on a low-dimensional manifold.
Also, our experiments suggest that the minimum entropy regularization may be a serious contender to generative models. It compares favorably to these mixture models in three situations: for small sample sizes, where the generative model cannot completely benefit from the knowledge of the correct joint model; when the joint distribution is (even slightly) misspecified; when the unlabeled examples turn out to be non-informative regarding class probabilities. 
2004
Detecting Significant Multidimensional Spatial Clusters Daniel B. Neill, Andrew W. Moore, Francisco Pereira, and Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 {neill,awm,fpereira,t.mitchell}@cs.cmu.edu Abstract Assume a uniform, multidimensional grid of bivariate data, where each cell of the grid has a count ci and a baseline bi. Our goal is to find spatial regions (d-dimensional rectangles) where the ci are significantly higher than expected given bi. We focus on two applications: detection of clusters of disease cases from epidemiological data (emergency department visits, over-the-counter drug sales), and discovery of regions of increased brain activity corresponding to given cognitive tasks (from fMRI data). Each of these problems can be solved using a spatial scan statistic (Kulldorff, 1997), where we compute the maximum of a likelihood ratio statistic over all spatial regions, and find the significance of this region by randomization. However, computing the scan statistic for all spatial regions is generally computationally infeasible, so we introduce a novel fast spatial scan algorithm, generalizing the 2D scan algorithm of (Neill and Moore, 2004) to arbitrary dimensions. Our new multidimensional multiresolution algorithm allows us to find spatial clusters up to 1400x faster than the naive spatial scan, without any loss of accuracy. 1 Introduction One of the core goals of modern statistical inference and data mining is to discover patterns and relationships in data. In many applications, however, it is important not only to discover patterns, but to distinguish those patterns that are significant from those that are likely to have occurred by chance. This is particularly important in epidemiological applications, where a rise in the number of disease cases in a region may or may not be indicative of an emerging epidemic. 
In order to decide whether further investigation is necessary, epidemiologists must know not only the location of a possible outbreak, but also some measure of the likelihood that an outbreak is occurring in that region. Similarly, when investigating brain imaging data, we want to not only find regions of increased activity, but determine whether these increases are significant or due to chance fluctuations. More generally, we are interested in spatial data mining problems where the goal is detection of overdensities: spatial regions with high counts relative to some underlying baseline. In the epidemiological datasets, the count is some quantity (e.g. number of disease cases, or units of cough medication sold) in a given area, where the baseline is the expected value of that quantity based on historical data. In the brain imaging datasets, our count is the total fMRI activation in a given set of voxels under the experimental condition, while our baseline is the total activation in that set of voxels under the null or control condition. We consider the case in which data has been aggregated to a uniform, d-dimensional grid. For the fMRI data, we have three spatial dimensions; for the epidemiological data, we have two spatial dimensions but also use several other quantities (time, patients’ age and gender) as “pseudo-spatial” dimensions; this is discussed in more detail below. In the general case, let G be a d-dimensional grid of cells, with size N1 × N2 × ... × Nd. Each cell si ∈G (where i is a d-dimensional vector) is associated with a count ci and a baseline bi. Our goal is to search over all d-dimensional rectangular regions S ⊆G, and find regions where the total count C(S) = ∑S ci is higher than expected, given the baseline B(S) = ∑S bi. In addition to discovering these high-density regions, we must also perform statistical testing to determine whether these regions are significant. 
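To make the notation concrete, here is a toy NumPy grid (synthetic numbers, not from the paper's datasets) with C(S) and B(S) read off as sums over a rectangular slice:

```python
import numpy as np

# Toy 2-D grid: per-cell counts c_i and baselines b_i (synthetic numbers)
counts = np.array([[1, 2, 1],
                   [1, 8, 9],
                   [0, 7, 1]], dtype=float)
baselines = np.ones_like(counts)

# Rectangular region S covering rows 1-2 and columns 1-2
S = (slice(1, 3), slice(1, 3))
C_S = counts[S].sum()     # C(S) = sum of c_i over S  -> 25.0
B_S = baselines[S].sum()  # B(S) = sum of b_i over S  -> 4.0
```

Here C(S)/B(S) is far above the ratio in the rest of the grid, which is exactly the kind of overdensity the scan statistic is designed to flag.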
As is necessary in the scan statistics framework, we focus on finding the single, most significant region; the method can be iterated (removing each significant cluster once it is found) to find multiple significant regions. 1.1 Likelihood ratio statistics Our basic model assumes that counts ci are generated by an inhomogeneous Poisson process with mean qbi, where q (the underlying ratio of count to baseline) may vary spatially. We wish to detect hyper-rectangular regions S such that q is significantly higher inside S than outside S. To do so, for a given region S, we assume that q = qin uniformly for cells si ∈S, and q = qout uniformly for cells si ∈G−S. We then test the null hypothesis H0(S): qin ≤(1+ε)qout against the alternative hypothesis H1(S): qin > (1+ε)qout. If ε = 0, this is equivalent to the classical spatial scan statistic [1-2]: we are testing for regions where qin is greater than qout. However, in many real-world applications (including the epidemiological and fMRI datasets discussed later) we expect some fluctuation in the underlying baseline; thus, we do not want to detect all deviations from baseline, but only those where the amount of deviation is greater than some threshold. For example, a 10% increase in disease cases in some region may not be interesting to epidemiologists, even if the underlying population is large enough to conclude that this is a “real” (statistically significant) increase in q. By increasing ε, we can focus the scan statistic on regions with larger ratios of count to baseline. For example, we can use the scan statistic with ε = 0.25 to test for regions where qin is more than 25% higher than qout. Following Kulldorff [1], our spatial scan statistic is the maximum, over all regions S, of the ratio of the likelihoods under the alternative and null hypotheses. 
Taking logs for convenience, we have:

Dε(S) = log [ sup{qin > (1+ε)qout} ∏si∈S P(ci ∼ Po(qin bi)) ∏si∈G−S P(ci ∼ Po(qout bi)) / sup{qin ≤ (1+ε)qout} ∏si∈S P(ci ∼ Po(qin bi)) ∏si∈G−S P(ci ∼ Po(qout bi)) ]

= sgn · [ C(S) log( C(S) / ((1+ε)B(S)) ) + (Ctot − C(S)) log( (Ctot − C(S)) / (Btot − B(S)) ) − Ctot log( Ctot / (Btot + εB(S)) ) ]

where C(S) and B(S) are the count and baseline of the region S under consideration, Ctot and Btot are the total count and baseline of the entire grid G, and sgn = +1 if C(S)/B(S) > (1+ε)(Ctot − C(S))/(Btot − B(S)) and −1 otherwise. Then the scan statistic Dε,max is equal to the maximum Dε(S) over all spatial regions (d-dimensional rectangles) under consideration. We note that our statistical and computational methods are not limited to the Poisson model given here; any model of null and alternative hypotheses such that the resulting statistic D(S) satisfies the conditions given in [4] can be used for the fast spatial scan. 1.2 Randomization testing Once we have found the highest scoring region S∗ = argmaxS D(S) of grid G, we must still determine the statistical significance of this region. Since the exact distribution of the test statistic Dmax is only known in special cases, in general we must find the region's p-value by randomization. To do so, we run a large number R of random replications, where a replica has the same underlying baselines bi as G, but counts are randomly drawn from the null hypothesis H0(S∗). More precisely, we pick ci ∼ Po(qbi), where q = qin = (1+ε) Ctot/(Btot + εB(S∗)) for si ∈ S∗, and q = qout = Ctot/(Btot + εB(S∗)) for si ∈ G − S∗. The number of replicas G′ with Dmax(G′) ≥ Dmax(G), divided by the total number of replications R, gives us the p-value for our most significant region S∗. If this p-value is less than α (where α is the false positive rate, typically chosen to be 0.05 or 0.1), we can conclude that the discovered region is statistically significant at level α. 
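In code, the closed-form score is a few lines. The sketch below (the function name and the 0·log 0 = 0 guard are our assumptions) takes precomputed C(S), B(S) and the grid totals:

```python
import numpy as np

def d_eps(C, B, C_tot, B_tot, eps=0.0):
    """Generalized likelihood ratio score D_eps(S), from the closed form in the text.
    Assumes 0 < B < B_tot; x*log(x/y) is taken as 0 when x == 0."""
    def xlog(x, y):
        return x * np.log(x / y) if x > 0 else 0.0
    # sgn = +1 when the region's count/baseline ratio exceeds (1+eps) times the outside ratio
    sgn = 1.0 if C / B > (1 + eps) * (C_tot - C) / (B_tot - B) else -1.0
    return sgn * (xlog(C, (1 + eps) * B)
                  + xlog(C_tot - C, B_tot - B)
                  - C_tot * np.log(C_tot / (B_tot + eps * B)))
```

With ε = 0 and a region whose count-to-baseline ratio matches the rest of the grid, the score is zero; denser regions score higher.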
1.3 The naive spatial scan The simplest method of finding Dmax is to compute D(S) for all rectangular regions of sizes k1 × k2 × ... × kd, where 1 ≤ kj ≤ Nj. Since there are a total of ∏j=1..d (Nj − kj + 1) regions of each size, there are a total of O(∏j=1..d Nj^2) regions to examine. We can compute D(S) for any region S in constant time, by first finding the count C(S) and baseline B(S), then computing D.^1 This allows us to compute Dmax of a grid G in O(∏j=1..d Nj^2) time. However, significance testing by randomization also requires us to find Dmax for each replica G′, and compare this to Dmax(G); thus the total complexity is multiplied by the number of replications R. When the size of the grid is large, as is the case for the epidemiological and fMRI datasets we are considering, this naive approach is computationally infeasible. Instead, we apply our “overlap-multiresolution partitioning” algorithm [3-4], generalizing this method from two-dimensional to d-dimensional datasets. This reduces the complexity to O(∏j=1..d Nj log Nj) in cases where the most significant region S∗ has a sufficiently high ratio of count to baseline, and (as we show in Section 3) typically results in tens to thousands of times speedup over the naive approach. We note that this fast spatial scan algorithm is exact (always finds the correct value of Dmax and the corresponding region S∗); the speedup results from the observation that we do not need to search a given set of regions if we can prove that none of them have score > Dmax. Thus we use a top-down, branch-and-bound approach: we maintain the current maximum score of the regions we have searched so far, calculate upper bounds on the scores of subregions contained in a given region, and prune regions whose upper bounds are less than the current value of Dmax. When searching a replica grid, we care only whether Dmax of the replica grid is greater than Dmax(G). 
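For intuition, here is a self-contained two-dimensional version of the naive scan, using cumulative-sum arrays so each region's count and baseline cost a constant number of lookups (an illustrative sketch with the statistic inlined; names and guards are ours):

```python
import numpy as np

def naive_scan_2d(counts, baselines, eps=0.0):
    """Exhaustive scan over all axis-aligned rectangles of a 2-D grid.
    Returns (best score, (i1, i2, j1, j2)) with half-open row/column ranges.
    Illustrative only: O((N1*N2)^2) regions, as in the text."""
    def xlog(x, y):
        return x * np.log(x / y) if x > 0 else 0.0
    # Padded cumulative sums: any rectangle's total is four array lookups.
    Cc = np.pad(counts.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    Bc = np.pad(baselines.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    C_tot, B_tot = Cc[-1, -1], Bc[-1, -1]
    n1, n2 = counts.shape
    best, best_region = -np.inf, None
    for i1 in range(n1):
        for i2 in range(i1 + 1, n1 + 1):
            for j1 in range(n2):
                for j2 in range(j1 + 1, n2 + 1):
                    C = Cc[i2, j2] - Cc[i1, j2] - Cc[i2, j1] + Cc[i1, j1]
                    B = Bc[i2, j2] - Bc[i1, j2] - Bc[i2, j1] + Bc[i1, j1]
                    if B == 0 or B == B_tot:
                        continue  # score undefined for empty or whole-grid baselines
                    sgn = 1.0 if C / B > (1 + eps) * (C_tot - C) / (B_tot - B) else -1.0
                    D = sgn * (xlog(C, (1 + eps) * B)
                               + xlog(C_tot - C, B_tot - B)
                               - C_tot * np.log(C_tot / (B_tot + eps * B)))
                    if D > best:
                        best, best_region = D, (i1, i2, j1, j2)
    return best, best_region
```

On a toy grid with a dense 2×2 block of counts against a uniform baseline, the scan recovers exactly that block as the highest-scoring region.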
Thus we can use Dmax of the original grid for pruning on the replicas, and can stop searching a replica if we find a region with score > Dmax(G). 2 Overlap-multiresolution partitioning As in [4], we use a multiresolution search method which relies on an overlap-kd tree data structure. The overlap-kd tree, like kd-trees [5] and quadtrees [6], is a hierarchical, space-partitioning data structure. The root node of the tree represents the entire space under consideration (i.e. the entire grid G), and each other node represents a subregion of the grid. Each non-leaf node of a d-dimensional overlap-kd tree has 2d children, an “upper” and a “lower” child in each dimension. For example, in three dimensions, a node has six children: upper and lower children in the x, y, and z dimensions. The overlap-kd tree is different from the standard kd-tree and quadtree in that adjacent regions overlap: rather than splitting the region in half along each dimension, each child instead contains more than half the area of the parent region. For example, a 64 × 64 × 64 grid will have six children: two of size 48 × 64 × 64, two of size 64 × 48 × 64, and two of size 64 × 64 × 48. In general, let region S have size k1 × k2 × ... × kd. Then the two children of S in dimension j (for j = 1...d) have size k1 × ... × kj−1 × fj kj × kj+1 × ... × kd, where 1/2 < fj < 1. This partitioning (for the two-dimensional case, where f1 = f2 = 3/4) is illustrated in Figure 1. ^1 An old trick makes it possible to compute the count and baseline of any rectangular region in time constant in N: we first form a d-dimensional array of the cumulative counts, then compute each region's count by adding/subtracting at most 2^d cumulative counts. Note that because of the exponential dependence on d, these techniques suffer from the “curse of dimensionality”: neither the naive spatial scan nor the fast spatial scan discussed below is appropriate for very high dimensional datasets. 
Note that there is a region SC common to all of these children; we call this region the center of S. When we partition region S in this manner, it can be proved that any subregion of S either a) is contained entirely in (at least) one of S1...S2d, or b) contains the center region SC. Figure 1 illustrates each of these possibilities, for the simple case of d = 2. [Figure 1: Overlap-multires partitioning of region S (for d = 2). Any subregion of S either a) is contained in some Si, i = 1...4, or b) contains SC.] Now we can search all subregions of S by recursively searching S1...S2d, then searching all of the regions contained in S which contain the center SC. There may be a large number of such “outer regions,” but since we know that each such region contains the center, we can place very tight bounds on the score of these regions, often allowing us to prune most or all of them. Thus the basic outline of our search procedure (ignoring pruning, for the moment) is:

overlap-search(S) {
  call base-case-search(S)
  define child regions S_1..S_2d, center S_C as above
  call overlap-search(S_i) for i=1..2d
  for all S' such that S' is contained in S and contains S_C,
    call base-case-search(S')
}

The fractions fi are selected based on the current sizes ki of the region being searched: if ki = 2^m, then fi = 3/4, and if ki = 3 × 2^m, then fi = 2/3. For simplicity, we assume that all Ni are powers of two, and thus all region sizes ki will fall into one of these two cases. Repeating this partitioning recursively, we obtain the overlap-kd tree structure. For d = 2, the first two levels of the overlap-kd tree are shown in Figure 2. [Figure 2: The first two levels of the two-dimensional overlap-kd tree. Each node represents a gridded region (denoted by a thick rectangle) of the entire dataset (thin square and dots).] The overlap-kd tree has several useful properties, which we present here without proof. 
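As a concrete check of this size bookkeeping (helper names are ours, not from the paper), the 2d overlapping children of a region can be generated directly from the rule for fi:

```python
def split_fraction(k):
    """Return f as (num, den): 3/4 when k = 2^m, 2/3 when k = 3*2^m."""
    m = k
    while m % 2 == 0 and m > 1:
        m //= 2
    if m == 1:
        return 3, 4
    if m == 3:
        return 2, 3
    raise ValueError("region size must be 2^m or 3*2^m")

def child_sizes(size):
    """Sizes of the 2d children of a region: in each dimension j, both the
    upper and the lower child shrink dimension j by the factor f_j."""
    children = []
    for j, k in enumerate(size):
        num, den = split_fraction(k)
        shrunk = list(size)
        shrunk[j] = (k * num) // den
        children.append(tuple(shrunk))  # upper child in dimension j
        children.append(tuple(shrunk))  # lower child (same size, shifted)
    return children
```

Calling child_sizes((64, 64, 64)) reproduces the example in the text: six children, two each of sizes 48×64×64, 64×48×64, and 64×64×48.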
First, for every rectangular region S ⊆ G, either S is a gridded region (contained in the overlap-kd tree), or there exists a unique gridded region S′ such that S is an outer region of S′ (i.e. S is contained in S′, and contains the center region of S′). This means that, if overlap-search is called exactly once for each gridded region^2, and no pruning is done, then base-case-search will be called exactly once for every rectangular region S ⊆ G. In practice, we will prune many regions, so base-case-search will be called at most once for every rectangular region, and every region will be either searched or pruned. The second nice property of our overlap-kd tree is that the total number of gridded regions is O(∏j=1..d Nj log Nj). This implies that, if we are able to prune (almost) all outer regions, we can find Dmax of the grid in O(∏j=1..d Nj log Nj) time rather than O(∏j=1..d Nj^2). In fact, we may not even need to search all gridded regions, so in many cases the search will be even faster. ^2 As in [4], we use “lazy expansion” to ensure that gridded regions are not multiply searched. 2.1 Score bounds and pruning We now consider which regions can be pruned (discarded without searching) during our multiresolution search procedure. First, given some region S, we must calculate an upper bound on the scores D(S′) for regions S′ ⊂ S. More precisely, we are interested in two upper bounds: a bound on the score of all subregions S′ ⊂ S, and a bound on the score of the outer subregions of S (those regions contained in S and containing its center SC). If the first bound is less than or equal to Dmax, we can prune region S completely; we do not need to search any (gridded or outer) subregion of S. If only the second bound is less than or equal to Dmax, we do not need to search the outer subregions of S, but we must recursively call overlap-search on the gridded children of S. If both bounds are greater than Dmax, we must both recursively call overlap-search and search the outer regions. 
Score bounds are calculated based on various pieces of information about the subregions of S, including: upper and lower bounds bmax, bmin on the baseline of subregions S′; an upper bound dmax on the ratio C/B of S′; an upper bound dinc on the ratio C/B of S′ − SC; and a lower bound dmin on the ratio C/B of S − S′. We also know the count C and baseline B of region S, and the count ccenter and baseline bcenter of region SC. Let cin and bin be the count and baseline of S′. To find an upper bound on D(S′), we must calculate the values of cin and bin which maximize D subject to the given constraints: (cin − ccenter)/(bin − bcenter) ≤ dinc, cin/bin ≤ dmax, (C − cin)/(B − bin) ≥ dmin, and bmin ≤ bin ≤ bmax. The solution to this maximization problem is derived in [4], and (since scores are based only on count and baseline rather than the size and shape of the region) it applies directly to the multidimensional case. The bounds on baselines and ratios C/B are first calculated using global values (as a fast, “first-pass” pruning technique). For the remaining, unpruned regions, we calculate tighter bounds using the quartering method of [4], and use these to prune more regions. 2.2 Related work Our work builds most directly on the results of Kulldorff [1], who presents the two-dimensional spatial scan framework and the classical (ε = 0) likelihood ratio statistic. It also extends [4], in which we present the two-dimensional fast spatial scan. Our major extensions in the present work are twofold: the d-dimensional fast spatial scan, and the generalized likelihood ratio statistics Dε. A variety of other cluster detection techniques exist in the literature on epidemiology [1-3, 7-8], brain imaging [9-11], and machine learning [12-15]. The machine learning literature focuses on heuristic or approximate cluster-finding techniques, which typically cannot deal with spatially varying baselines, and more importantly, give no information about the statistical significance of the clusters found. 
Our technique is exact (in that it calculates the maximum of the likelihood ratio statistic over all hyper-rectangular spatial regions), and uses a powerful statistical test to determine significance. Nevertheless, other methods in the literature have some advantages over the present approach, such as applicability to high-dimensional data and fewer assumptions on the underlying model. The fMRI literature generally tests significance on a per-voxel basis (after applying some method of spatial smoothing); clusters must then be inferred by grouping individually significant voxels, and (with the exception of [10]) no per-cluster false positive rate is guaranteed. The epidemiological literature focuses on detecting significant circular, two-dimensional clusters, and thus cannot deal with multidimensional data or elongated regions. Detection of elongated regions is extremely important in both epidemiology (because of the need to detect windborne or waterborne pathogens) and brain imaging (because of the “folded sheet” structure of the brain); the present work, as well as [4], allows detection of such clusters. 3 Results We now describe results of our fast spatial scan algorithm on three sets of real-world data: two sets of epidemiological data (from emergency department visits and over-the-counter drug sales), and one set of fMRI data. Before presenting these results, we wish to emphasize three main points. First, the extension of scan statistics from two-dimensional to d-dimensional datasets dramatically increases the scope of problems for which these techniques can be used. In addition to datasets with more than two spatial dimensions (for example, the fMRI data, which consists of a 3D picture of the brain), we can also examine data with a temporal component (as in the OTC dataset), or where we wish to take demographic information into account (as in the ED dataset). 
Second, in all of these datasets, the use of the broader class of likelihood ratio statistics Dε (instead of only the classical scan statistic ε = 0) allows us to focus our search on smaller, denser regions rather than slight (but statistically significant) increases over a large area. Third, as our results here will demonstrate, the fast spatial scan gains huge performance improvements over the naive approach, making the use of the scan statistic feasible in these large, real-world datasets. Our first test set was a database of (anonymized) Emergency Department data collected from Western Pennsylvania hospitals in the period 1999-2002. This dataset contains a total of 630,000 records, each representing a single ED visit and giving the latitude and longitude of the patient’s home location to the nearest 1/3 mile (a sufficiently low resolution to ensure anonymity). Additionally, a record contains information about the patient’s gender and age decile. Thus we map records into a four-dimensional grid, consisting of two spatial dimensions (longitude, latitude) and two “pseudo-spatial” dimensions (patient gender and age decile). This has several advantages over the traditional (two-dimensional) spatial scan. First, our test has higher power to detect syndromes which affect differing patient demographics to different extents. For example, if a disease primarily strikes male infants, we might find a cluster with gender = male and age decile = 0 in some spatial region, and this cluster may not be detectable from the combined data. Second, our method accounts correctly for multiple hypothesis testing. If we were to instead perform a separate test at level α on each combination of gender and age decile, the overall false positive rate would be much higher than α. 
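Aggregating records into such a multidimensional grid is a one-liner with NumPy's histogramdd; the records below are synthetic stand-ins for the (anonymized) ED data, so the coordinate ranges are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical ED-style records: longitude, latitude, gender (0/1), age decile (0-7)
records = np.column_stack([
    rng.uniform(-80.5, -79.5, n),   # longitude
    rng.uniform(40.0, 41.0, n),     # latitude
    rng.integers(0, 2, n),          # gender
    rng.integers(0, 8, n),          # age decile
])
# Bin every record into one cell of a 128 x 128 x 2 x 8 grid
grid, edges = np.histogramdd(records, bins=(128, 128, 2, 8))
```

Each record lands in exactly one cell, so grid.sum() equals the number of records; a baseline grid built the same way from historical records completes the input to the scan.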
We mapped the ED dataset to a 128 × 128 × 2 × 8 grid, with the first two coordinates corresponding to longitude and latitude, the third coordinate corresponding to the patient’s gender, and the fourth coordinate corresponding to the patient’s age decile. We tested for spatial clustering of “recent” disease cases: the count of a cell was the number of ED visits in that spatial region, for patients of that age and gender, in 2002, and the baseline was the total number of ED visits in that spatial region, for patients of that age and gender, over the entire temporal period 1999-2002. We used the Dε scan statistic with values of ε ranging from 0 to 1.0. For the classical scan statistic (ε = 0), we found a region of size 35×34×2×8; thus the most significant region was spatially localized but cut across all genders and age groups. The region had C = 3570 and B = 6409, as compared to C/B = 0.05 outside the region, and thus this is clearly an overdensity. This was confirmed by the algorithm, which found the region statistically significant (p-value 0/100). With the three other values of ε, the algorithm found almost the same region (35 × 33 × 2 × 8, C = 3566, B = 6390) and again found it statistically significant (p-value 0/100). For all values of ε, the fast scan statistic found the most significant region hundreds of times faster than the naive spatial scan (see Table 1): while the naive approach required approximately 12 hours per replication, the fast scan searched each replica in approximately 2 minutes, plus 100 minutes to search the original grid. Thus the fast algorithm achieved speedups of 235-325x over the naive approach for the entire run (i.e. searching the original grid and 100 replicas) on the ED dataset. Our second test set was a nationwide database of retail sales of over-the-counter cough and cold medication. Sales figures were reported by zip code; the data covered 5000 zip codes across the U.S. 
In this case, our goal was to see if the spatial distribution of sales in a given week (February 7-14, 2004) was significantly different than the spatial distribution of sales during the previous week, and to identify a significant cluster of increased sales if one exists. Since we wanted to detect clusters even if they were only present for part of the week, we used the date (Feb. 7-14) as a third dimension. This is similar to the retrospective space-time scan statistic of [16], which also uses time as a third dimension. However, that algorithm searches over cylinders rather than hyper-rectangles, and thus cannot detect spatially elongated clusters. The count of a cell was taken to be the number of sales in that spatial region on that day; to adjust for day-of-week effects, the baseline of a cell was taken to be the number of sales in that spatial region on the day one week prior (Jan. 31-Feb. 7). Thus we created a 128 × 128 × 8 grid, where the first two coordinates were derived from the longitude and latitude of that zip code, and the third coordinate was temporal, based on the date. For this dataset, the classical scan statistic (ε = 0) found a region of size 123 × 76 from February 7-11. Unfortunately, since the ratio C/B was only 0.99 inside the region (as compared to 0.96 outside) this region would not be interesting to an epidemiologist.

Table 1: Performance of algorithm, real-world datasets

test                    ε     sec/orig  sec/rep  speedup  regions (orig)  regions (rep)
ED (128×128×2×8,        0     6140      126      x235     358M            622K
    7.35B regions)      0.25  6035      100      x275     352M            339K
                        0.5   5994      102      x272     348M            362K
                        1.0   5607      79.6     x325     334M            336K
OTC (128×128×8,         0     4453      195      x48      302M            2.46M
    2.45B regions)      0.25  429       123      x90      12.2M           1.39M
                        0.5   334       51       x210     8.65M           350K
                        1.0   229       5.9      x1400    4.40M           <10
fMRI (64×64×16,         0     880       384      x7       39.9M           14.0M
    588M regions)       0.01  597       285      x9       35.2M           10.4M
                        0.02  558       188      x14      33.1M           6.65M
                        0.03  547       97.3     x27      32.3M           3.93M
                        0.04  538       30.0     x77      31.9M           1.44M
                        0.05  538       13.1     x148     31.7M           310K
Nevertheless, the region was found to be significant (p-value 0/100) because of the large total baseline. Thus, in this case, the classical scan statistic finds a large region of very slight overdensity rather than a smaller, denser region, and thus is not as useful for detecting epidemics. For ε = 0.25 and ε = 0.5, the scan statistic found a much more interesting region: a 4 × 1 region on February 9 where C = 882 and B = 240. In this region, the number of sales of cough medication was 3.7x its expected value; the region’s p-value was computed to be 0/100, so this is a significant overdensity. For ε = 1, the region found was almost the same, consisting of three of these four cells, with C = 825 and B = 190. Again the region was found to be significant (p-value 0/100). For this dataset, the naive approach took approximately three hours per replication. The fast scan statistic took between six seconds and four minutes per replication, plus ten minutes to search the original grid, thus obtaining speedups of 48-1400x on the OTC dataset. Our third and final test set was a set of fMRI data, consisting of two “snapshots” of a subject’s brain under null and experimental conditions respectively. The experimental condition was from a test [9] where the subject is given words, one at a time; he must read these words and identify them as verbs or nouns. The null condition is the subject’s average brain activity while fixating on a cursor, before any words are presented. Each snapshot consists of a 64 × 64 × 16 grid of voxels, with a reading of fMRI activation for the subset of the voxels where brain activity is occurring. In this case, the count of a cell is the fMRI activation for that voxel under the experimental condition, and the baseline is the activation for that voxel under the null condition. For voxels with no brain activity, we have ci = bi = 0. 
For the fMRI dataset, the amount of change between activated and non-activated regions is small, and thus we used values of ε ranging from 0 to 0.05. For the classical scan statistic (ε = 0) our algorithm found a 23×20×11 region, and again found this region significant (p-value 0/100). However, this is another example where the classical scan statistic finds a region which is large (1/4 of the entire brain) and only slightly increased in count: C/B = 1.007 inside the region and C/B = 1.002 outside the region. For ε = 0.01, we find a more interesting cluster: a 5 × 10 × 1 region in the visual cortex containing four non-zero voxels.^3 For this region C/B = 1.052, a large increase, and the region is significant at α = 0.1 (p-value 10/100) though not at α = 0.05. For ε = 0.02, we find the same region, but conclude that it is not significant (p-value 32/100). For ε = 0.03 and ε = 0.04, we find a 3 × 2 × 1 region with C/B = 1.065, but this region is not significant (p-values 61/100 and 89/100 respectively). Similarly, for ε = 0.05, we find a single voxel with C/B = 1.075, but again it is not significant (p-value 91/100). For this dataset, the naive approach took approximately 45 minutes per replication. The fast scan statistic took between 13 seconds and six minutes per replication, thus obtaining speedups of 7-148x on the fMRI dataset. Thus we have demonstrated (through tests on a variety of real-world datasets) that the fast multidimensional spatial scan statistic has significant performance advantages over the naive approach, resulting in speedups up to 1400x without any loss of accuracy. This makes it feasible to apply scan statistics in a variety of application domains, including the spatial and spatio-temporal detection of disease epidemics (taking demographic information into account), as well as the detection of regions of increased brain activity in fMRI data. 
We are currently examining each of these application domains in more detail, and investigating which statistics are most useful for each domain. The generalized likelihood ratio statistics presented here are a first step toward this: by adjusting the parameter ε, we can “tune” the statistic to detect smaller and denser, or larger but less dense, regions as desired, and our statistical significance test is adjusted accordingly. We believe that the combination of fast computational algorithms and more powerful statistical tests presented here will enable the multidimensional spatial scan statistic to be useful in these and many other applications.

References
[1] M. Kulldorff. 1997. A spatial scan statistic. Communications in Statistics: Theory and Methods 26(6), 1481-1496.
[2] M. Kulldorff. 1999. Spatial scan statistics: models, calculations, and applications. In Glaz and Balakrishnan, eds. Scan Statistics and Applications. Birkhauser: Boston, 303-322.
[3] D. B. Neill and A. W. Moore. 2003. A fast multi-resolution method for detection of significant spatial disease clusters. In Advances in Neural Information Processing Systems 16.
[4] D. B. Neill and A. W. Moore. 2004. Rapid detection of significant spatial clusters. To be published in Proc. 10th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining.
[5] J. L. Bentley. 1975. Multidimensional binary search trees used for associative searching. Comm. ACM 18, 509-517.
[6] R. A. Finkel and J. L. Bentley. 1974. Quadtrees: a data structure for retrieval on composite keys. Acta Informatica 4, 1-9.
[7] S. Openshaw, et al. 1988. Investigation of leukemia clusters by use of a geographical analysis machine. Lancet 1, 272-273.
[8] L. A. Waller, et al. 1994. Spatial analysis to detect disease clusters. In N. Lange, ed. Case Studies in Biometry. Wiley, 3-23.
[9] T. Mitchell et al. 2003. Learning to detect cognitive states from brain images. Machine Learning, in press.
[10] M. Perone Pacifico et al. 2003. False discovery rates for random fields. Carnegie Mellon University Dept. of Statistics, Technical Report 771.
[11] K. Worsley et al. 2003. Detecting activation in fMRI data. Stat. Meth. in Medical Research 12, 401-418.
[12] R. Agrawal, et al. 1998. Automatic subspace clustering of high dimensional data for data mining applications. Proc. ACM-SIGMOD Intl. Conference on Management of Data, 94-105.
[13] J. H. Friedman and N. I. Fisher. 1999. Bump hunting in high dimensional data. Statistics and Computing 9, 123-143.
[14] S. Goil, et al. 1999. MAFIA: efficient and scalable subspace clustering for very large data sets. Northwestern University, Technical Report CPDC-TR-9906-010.
[15] W. Wang, et al. 1997. STING: a statistical information grid approach to spatial data mining. Proc. 23rd Conference on Very Large Databases, 186-195.
[16] M. Kulldorff. 1998. Evaluating cluster alarms: a space-time scan statistic and brain cancer in Los Alamos. Am. J. Public Health 88, 1377-1380.

^3 In a longer run on a different subject, where we iterate the scan statistic to pick out multiple significant regions, we found significant clusters in Broca’s and Wernicke’s areas in addition to the visual cortex. This makes sense given the nature of the experimental task; however, more data is needed before we can draw conclusive cross-subject comparisons.
|
2004
|
124
|
2,535
|
Message Errors in Belief Propagation Alexander T. Ihler, John W. Fisher III, and Alan S. Willsky Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology ihler@mit.edu, fisher@csail.mit.edu, willsky@mit.edu

Abstract

Belief propagation (BP) is an increasingly popular method of performing approximate inference on arbitrary graphical models. At times, even further approximations are required, whether from quantization or other simplified message representations or from stochastic approximation methods. Introducing such errors into the BP message computations has the potential to adversely affect the solution obtained. We analyze this effect with respect to a particular measure of message error, and show bounds on the accumulation of errors in the system. This leads both to convergence conditions and error bounds in traditional and approximate BP message passing.

1 Introduction

Graphical models and message-passing algorithms defined on graphs are a growing field of research. In particular, the belief propagation (BP, or sum-product) algorithm has become a popular means of solving inference problems exactly or approximately. One part of its appeal is its optimality for tree-structured graphical models (models which contain no loops). However, it is also widely applied to graphical models with cycles. In these cases it may not converge, and if it does, its solution is approximate; in practice, however, these approximations are often good. Recently, further justifications for loopy belief propagation have been developed, including a few convergence results for graphs with cycles [1–3]. The approximate nature of loopy BP is often a more than acceptable price for efficient inference; in fact, it is sometimes desirable to make additional approximations.
There may be a number of reasons for this—for example, when exact message representation is computationally intractable, the messages may be approximated stochastically [4] or deterministically by discarding low-likelihood states [5]. For BP involving continuous, non-Gaussian potentials, some form of approximation is required to obtain a finite parametrization for the messages [6–8]. Additionally, graph simplification by edge removal may be regarded as a coarse form of message approximation. Finally, one may wish to approximate the messages and reduce their representation size for another reason—to decrease the communications required for distributed inference applications. In a distributed environment, one may approximate the transmitted message to reduce its representational cost [9], or discard it entirely if it is deemed “sufficiently similar” to the previously sent version [10]. This may significantly reduce the amount of communication required. Given that message approximation may be desirable, we would like to know what effect the introduced errors have on our overall solution. To characterize the effect in graphs with cycles, we analyze the deviation from a solution given by “exact” loopy BP (not, as is typically considered, the deviation of loopy BP from the true marginal distributions). In the process, we also develop some results on the convergence of loopy BP. Section 3 describes the major themes of the paper; but first we provide a brief summary of belief propagation. 2 Graphical Models and Belief Propagation Graphical models provide a convenient means of representing conditional independence relations among large numbers of random variables. Specifically, each node s in a graph is associated with a random variable xs, while the set of edges E is used to describe the conditional dependency structure of the variables. 
A distribution satisfies the conditional independence relations specified by an undirected graph if it factors into a product of potential functions ψ defined on the cliques (fully-connected subsets) of the graph; the converse is also true if p(x) is strictly positive [11]. Here we consider graphs with at most pairwise interactions (a typical assumption in BP), where the distribution factors according to

p(x) = ∏_{(s,t)∈E} ψ_{st}(x_s, x_t) ∏_s ψ_s(x_s)   (1)

The goal of belief propagation [12], or BP, is to compute the marginal distribution p(x_t) at each node t. BP takes the form of a message-passing algorithm between nodes, expressed in terms of an update to the outgoing message from each node t to each neighbor s in terms of the (previous iteration’s) incoming messages from t’s neighbors Γ_t,

m_{ts}(x_s) ∝ ∫ ψ_{ts}(x_t, x_s) ψ_t(x_t) ∏_{u∈Γ_t\s} m_{ut}(x_t) dx_t   (2)

Typically each message is normalized so as to integrate to unity (and we assume that such normalization is possible). At any iteration, one may calculate the belief at node t by

M^i_t(x_t) ∝ ψ_t(x_t) ∏_{u∈Γ_t} m^i_{ut}(x_t)   (3)

For tree-structured graphical models, belief propagation can be used to efficiently perform exact marginalization. Specifically, the iteration (2) converges in a finite number of iterations (at most the length of the longest path in the graph), after which the belief (3) equals the correct marginal p(x_t). However, as observed by [12], one may also apply belief propagation to arbitrary graphical models by following the same local message passing rules at each node and ignoring the presence of cycles in the graph; this procedure is typically referred to as “loopy” BP.

Figure 1: For a graph with cycles, one may show an equivalence between n iterations of loopy BP and the n-level computation tree (shown here for n = 3 and rooted at node 1; example from [2]).

For loopy BP, the sequence of messages defined by (2) is not guaranteed to converge to a fixed point after any number of iterations.
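For concreteness, the updates (2)–(3) for discrete variables might be sketched as follows. The graph encoding, the synchronous update schedule, and the toy chain at the end are our own illustrative choices, not the authors’ implementation:

```python
import numpy as np

def loopy_bp(psi_pair, psi_node, iters=50):
    """Discrete BP.  psi_pair[(s, t)] is psi_st indexed [x_s, x_t];
    psi_node[t] is psi_t; returns normalized beliefs per node."""
    neighbors = {t: set() for t in psi_node}
    for (s, t) in psi_pair:
        neighbors[s].add(t)
        neighbors[t].add(s)
    # msgs[(t, s)] is m_ts(x_s), the message from t to s; start uniform
    msgs = {(t, s): np.ones(len(psi_node[s])) / len(psi_node[s])
            for s in neighbors for t in neighbors[s]}
    for _ in range(iters):
        new = {}
        for (t, s) in msgs:
            prod = psi_node[t].copy()          # psi_t(x_t) * prod of m_ut(x_t), u != s
            for u in neighbors[t] - {s}:
                prod = prod * msgs[(u, t)]
            # psi_ts indexed [x_t, x_s]
            psi = psi_pair[(t, s)] if (t, s) in psi_pair else psi_pair[(s, t)].T
            m = psi.T @ prod                   # eq. (2): sum over x_t
            new[(t, s)] = m / m.sum()          # normalize to sum to one
        msgs = new
    # eq. (3): belief is psi_t times all incoming messages
    beliefs = {}
    for t in neighbors:
        b = psi_node[t].copy()
        for u in neighbors[t]:
            b = b * msgs[(u, t)]
        beliefs[t] = b / b.sum()
    return beliefs

# toy chain 0 - 1 - 2 (a tree, so BP is exact); potential values are invented
psi_node = {0: np.array([0.6, 0.4]), 1: np.array([0.5, 0.5]), 2: np.array([0.3, 0.7])}
psi_pair = {(0, 1): np.array([[1.2, 0.8], [0.8, 1.2]]),
            (1, 2): np.array([[1.5, 0.5], [0.5, 1.5]])}
beliefs = loopy_bp(psi_pair, psi_node, iters=10)
```

On this tree the beliefs match the exact marginals after two iterations; on a graph with cycles the same code runs unchanged, with the caveats discussed next.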
Under relatively mild conditions, one may guarantee the existence of fixed points [13]. However, they may not be unique, nor are the results exact (the belief M^i_t does not converge to the true marginal). In practice, however, the procedure often arrives at a reasonable set of approximations to the correct marginal distributions. It is sometimes convenient to think of loopy BP in terms of its computation tree [2]. The n-level computation tree rooted at some node t is a tree-structured “unrolling” of the graph, so that n iterations of loopy BP on the original graph is equivalent at the node t to exact inference on the computation tree. An example of this structure is shown in Figure 1.

Figure 2: (a) A message m(x), solid, and its approximation m̂(x), dashed. (b) Their log-ratio log m(x)/m̂(x); log d(e) characterizes their similarity by measuring the error’s dynamic range.

3 Overview of Results

To orient the reader, we lay out the order and general results which are obtained in this paper. We begin by considering multiplicative error functions which describe the difference between a “true” message m(x) (typically meaning consistent with some BP fixed-point) and some approximation m̂(x) = m(x) · e(x). We apply a particular functional measure d(e) (defined below) and show how this measure behaves with respect to the BP equations (2) and (3). When applied to traditional BP, this results in a novel sufficient condition for its convergence to a unique solution, specifically

max_{(s,t)∈E} ∑_{u∈Γ_t\s} (d(ψ_{ut})² − 1) / (d(ψ_{ut})² + 1) < 1,   (4)

and may be further improved in most cases. The condition (4) is shown to be slightly stronger than the sufficient condition given in [2].
More importantly, however, the method in which it is derived allows us to generalize to many other situations:

• The condition (4) is easily improved for graphs with irregular geometry or potential strengths.
• The method also provides a bound on the distance between any two BP fixed points.
• The same methodology may be applied to the case of quantized or otherwise approximated messages, yielding bounds on the ensuing error (our original motivation).
• By regarding message errors as a stochastic process and applying a few additional assumptions, a similar analysis obtains alternate, tighter estimates (though not necessarily bounds) of performance.

4 Message Approximations

In order to discuss the effects and propagation of errors introduced to the BP messages, we first require a measure of the difference between two messages. Although there are certainly other possibilities, it is very natural to consider the message deviations (which we denote e_{ts}) to be multiplicative, or additive in the log-domain, and examine a measure of the error’s dynamic range:

m̂_{ts}(x_s) = m_{ts}(x_s) e_{ts}(x_s),   d(e_{ts}) = max_{a,b} √(e_{ts}(a)/e_{ts}(b))   (5)

Then, we have that m_{ts}(x) = m̂_{ts}(x) ∀x if and only if log d(e_{ts}) = 0. This measure may also be related to more traditional error measures, including an absolute error on log m(x), a floating-point precision on m(x), and the Kullback-Leibler divergence D(m(x)∥m̂(x)); for details, see [14]. In this light our analysis of message approximation (Section 5.3) may be equivalently regarded as a statement about the required precision for an accurate implementation of loopy BP. Figure 2 shows an example message m(x) and approximation m̂(x) along with their associated error e(x).

To facilitate our analysis, we split the message update operation (2) into two parts. In the first, we focus on the message products

M_{ts}(x_t) ∝ ψ_t(x_t) ∏_{u∈Γ_t\s} m_{ut}(x_t),   M_t(x_t) ∝ ψ_t(x_t) ∏_{u∈Γ_t} m_{ut}(x_t)   (6)

where as usual, the proportionality constant is chosen to normalize M.
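As a concrete illustration of the measure (5), the following sketch computes d(e) for a perturbed message; the message values are invented for the example:

```python
import numpy as np

def dyn_range_error(m_hat, m):
    """d(e) for e(x) = m_hat(x)/m(x), per eq. (5): max_{a,b} sqrt(e(a)/e(b))."""
    e = np.asarray(m_hat, float) / np.asarray(m, float)
    return float(np.sqrt(e.max() / e.min()))

# an illustrative message and a perturbed copy of it
m = np.array([0.70, 0.20, 0.10])
m_hat = np.array([0.66, 0.22, 0.12])
d = dyn_range_error(m_hat, m)          # > 1, since m_hat differs from m

# log d(e) = 0 exactly when m_hat is proportional to m:
# the measure is invariant to the normalization constant
assert np.isclose(dyn_range_error(2.5 * m, m), 1.0)
```

This invariance to scaling is what makes d(·) convenient here, since BP messages are only defined up to normalization.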
We show the message error metric is (sub-)additive, i.e. that the errors in each incoming message (at most) add in their impact on M. The second operation is the message convolution

m_{ts}(x_s) ∝ ∫ ψ_{ts}(x_t, x_s) M_{ts}(x_t) dx_t   (7)

where M is a normalized message or product of messages. We demonstrate a level of contraction, that is, the approximation of m_{ts} is measurably better than the approximation of M_{ts} used to construct it. We use the convention that lowercase quantities (m_{ts}, e_{ts}, ...) refer to messages and message errors, while uppercase ones (M_{ts}, E_{ts}, M_t, ...) refer to products of messages or errors—all incoming messages to node t (M_t and E_t), or all except the one from s (M_{ts} and E_{ts}). Due to space constraints, many omitted details and proofs can be found in [14].

4.1 Additivity and Error Contraction

The log of (5) is sub-additive, since for several incoming messages {m̂_{ut}(x)} we have

log d(E_{ts}) = log d(M̂_{ts}/M_{ts}) = log d(∏ e_{ut}) ≤ ∑ log d(e_{ut})   (8)

We may also derive a minimum rate of contraction on the errors. We consider the message from t to s; since all quantities in this section relate to m_{ts} and M_{ts} we suppress the subscripts. The error measure d(e) is given by

d(e)² = d(m̂/m)² = max_{a,b} [∫ ψ(x_t, a) M(x_t) E(x_t) dx_t / ∫ ψ(x_t, a) M(x_t) dx_t] · [∫ ψ(x_t, b) M(x_t) dx_t / ∫ ψ(x_t, b) M(x_t) E(x_t) dx_t]   (9)

subject to certain constraints, such as positivity of the messages and potentials. Since

∀f, g > 0,   ∫ f(x) dx / ∫ g(x) dx ≤ max_x f(x)/g(x)   (10)

we can directly obtain the two bounds:

d(e)² ≤ d(E)²   and   d(e)² ≤ d(ψ)⁴   (11)

Figure 3: Bounds on the error output d(e) as a function of the error in the product of incoming messages d(E).

where we have extended the measure d(·) to functions of two variables (describing a minimum rate of mixing across the potential) by

d(ψ)² = max_{a,b,c,d} ψ(a, b) / ψ(c, d).   (12)

However, with some work one may show [14] the stronger measure of contraction,

d(e) ≤ (d(ψ)² d(E) + 1) / (d(ψ)² + d(E)).   (13)

Sketch of proof: While the full proof is rather involved, we outline the procedure here. First, use (10) to show that the maximum of (9) given d(ψ) is attained by potentials of the form ψ(x, a) ∝ 1 + K χ_A(x) and ψ(x, b) ∝ 1 + K χ_B(x), where K = d(ψ)² − 1 and χ_A and χ_B take on only values {0, 1}, along with a similar form for E(x). Then define the variables M_A = ∫ M(x)χ_A(x), M_{AE} = ∫ M(x)χ_A(x)χ_E(x), etc., and optimize given the constraints 0 ≤ M_A, M_B, M_E ≤ 1, M_{AE} ≤ min[M_A, M_E], and M_{BE} ≥ max[0, M_E − (1 − M_B)] (where the last constraint arises from the fact that M_E + M_B − M_{BE} ≤ 1). Simplifying and taking the square root yields (13).

The bound (13) is shown in Figure 3; note that it improves both error bounds (11), shown as straight lines. In the next section, we use (8)–(13) to analyze the behavior of loopy BP.

5 Implications in Graphs with Cycles

We begin by examining loopy BP with exact message passing, using the previous results to derive a new sufficient condition for convergence to a unique fixed point. When this condition is not satisfied, we instead obtain a bound on the relative distances between any two fixed points of the loopy BP equations. We then consider the effect of introducing additional errors into the messages passed at each iteration, showing sufficient conditions for this operation to converge, and a bound on the resulting error from exact loopy BP.

5.1 Convergence of Loopy BP & Fixed Point Distance

Tatikonda and Jordan [2] showed that the convergence and fixed points of loopy BP may be considered in terms of a Gibbs measure on the graph’s computation tree, implying that loopy BP is guaranteed to converge if the graph satisfies Dobrushin’s condition [15].
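(Before continuing, note that the contraction (13) is easy to check numerically. The sketch below draws random discrete potentials, message products, and errors — all values illustrative — and verifies the bound against the discrete analogue of the convolution (7).)

```python
import numpy as np

rng = np.random.default_rng(0)

def d_range(v):
    """d(.) of a positive vector, per eq. (5)."""
    return np.sqrt(v.max() / v.min())

K = 4
for _ in range(500):
    psi = rng.uniform(0.5, 2.0, size=(K, K))   # psi_ts(x_t, x_s), random instance
    M = rng.uniform(0.1, 1.0, size=K)          # incoming product M_ts(x_t)
    E = rng.uniform(0.5, 2.0, size=K)          # incoming error E_ts(x_t)
    m = psi.T @ M                              # exact convolution, eq. (7)
    m_hat = psi.T @ (M * E)                    # convolution of the perturbed product
    d_e = d_range(m_hat / m)                   # outgoing error
    d_psi2 = psi.max() / psi.min()             # d(psi)^2, eq. (12)
    d_E = d_range(E)
    bound = (d_psi2 * d_E + 1) / (d_psi2 + d_E)   # contraction bound, eq. (13)
    assert d_e <= bound + 1e-9                 # (13) holds on every instance
    assert bound <= d_E + 1e-9                 # and improves on (11)
```

A numerical check of this kind is of course no substitute for the proof, but it makes the strict improvement of (13) over the straight-line bounds (11) easy to see.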
Dobrushin’s condition is a global measure and difficult to verify; given in [2] is a sufficient condition (often called Simon’s condition):

max_t ∑_{u∈Γ_t} log d(ψ_{ut}) < 1   (14)

where d(ψ) is defined as in (12). Using the previous section’s analysis, we may argue something slightly stronger. Let us take the “true” messages m_{ts} to be any fixed point of BP, and “approximate” them at each iteration by performing loopy BP from some arbitrary initial conditions. Now suppose that the largest message-product error log d(E_{ut}) in any node u with parent t at level i of the computation tree (corresponding to iteration n − i out of n total iterations of loopy BP) is bounded above by some constant log ϵ_i. Note that this is trivially true (at any i) for the constant log ϵ_i = max_{(u,t)∈E} |Γ_t| log d(ψ_{ut})². Now, we may bound d(E_{ts}) at any replicate of node t with parent s on level i − 1 of the tree by

log d(E_{ts}) ≤ g_{ts}(log ϵ_i) = ∑_{u∈Γ_t\s} log [(d(ψ_{ut})² ϵ_i + 1) / (d(ψ_{ut})² + ϵ_i)]   (15)

and we may define log ϵ_{i−1} = max_{t,s} g_{ts}(log ϵ_i) to bound the error at level i − 1. Loopy BP will converge if the sequence ϵ_i, ϵ_{i−1}, ... is strictly decreasing for all ϵ > 1, i.e. g_{ts}(z) < z for all z > 0. This is guaranteed by the conditions g_{ts}(0) = 0, g′_{ts}(0) < 1 and g″_{ts}(z) < 0. The first is easy to show, the third can be verified by algebra, and the condition g′_{ts}(0) < 1 can be rewritten to give the convergence criterion

max_{(s,t)∈E} ∑_{u∈Γ_t\s} (d(ψ_{ut})² − 1) / (d(ψ_{ut})² + 1) < 1   (16)

We may relate (16) to Simon’s condition (14) by expanding the set Γ_t \ s to the larger Γ_t and noting that log x ≥ (x² − 1)/(x² + 1) for all x ≥ 1, with equality as x → 1. Doing so, we see that Simon’s condition is sufficient to guarantee (16), but that (16) may be true (implying convergence) when Simon’s condition is not satisfied. The improvement over Simon’s condition becomes negligible as connectivity increases (assuming the graph has approximately equal-strength potentials), but can be significant for low connectivity.
For example, if the graph consists of a single loop then each node t has at most two neighbors. In this case, the contraction (16) tells us that the outgoing message in either direction is always closer to the BP fixed point than the incoming message. Thus we obtain the result of [1], that (for finite-strength potentials) BP always converges to a unique fixed point on graphs containing a single loop. Simon’s condition, on the other hand, is too loose to demonstrate this fact.

If the condition (16) is not satisfied, then the sequence {ϵ_i} is not always decreasing and there may be multiple fixed points. In this case, the sequence {ϵ_i} as defined will decrease until it reaches the largest value ϵ such that max_{t,s} g_{ts}(log ϵ) = log ϵ. Since the choice of initialization was arbitrary, we may opt to initialize to any other fixed point, and observe that the difference E_t between these two fixed point beliefs is bounded by

log d(E_t) ≤ ∑_{u∈Γ_t} log [(d(ψ_{ut})² ϵ + 1) / (d(ψ_{ut})² + ϵ)]   (17)

Figure 4: Two small (5 × 5) grids, with (a) all equal-strength potentials log d(ψ)² = α and (b) several weaker ones (log d(ψ)² = .5α, thin lines). The methods described provide bounds (c) on the distance between any two fixed points as a function of potential strength α, all of which improve on Simon’s condition (curves: simple bound, grids (a) and (b); nonuniform bound, grid (a); nonuniform bound, grid (b)). See text for details.

Thus, the fixed points of BP lie in some potentially small set. If log ϵ is small (the condition (16) is nearly satisfied) then although we cannot guarantee convergence to a unique fixed point, we can guarantee that every fixed point and our estimate are all mutually close (in a log-ratio sense).
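The single-loop case can be checked mechanically. In the sketch below (the encoding is our own: edges keyed by frozensets, strengths given as log d(ψ)²), condition (16) certifies convergence of a four-node loop with strong potentials, while Simon’s condition (14) fails on the same graph:

```python
import numpy as np

def simons_condition(neighbors, log_dpsi2):
    """Simon's condition (14): max_t sum_{u in Gamma_t} log d(psi_ut) < 1."""
    return max(sum(0.5 * log_dpsi2[frozenset((u, t))] for u in nbrs)
               for t, nbrs in neighbors.items()) < 1.0

def condition_16(neighbors, log_dpsi2):
    """Condition (16): max over directed edges (s,t) of
    sum_{u in Gamma_t \\ s} (d(psi_ut)^2 - 1)/(d(psi_ut)^2 + 1) < 1."""
    worst = 0.0
    for t, nbrs in neighbors.items():
        for s in nbrs:
            d2 = [np.exp(log_dpsi2[frozenset((u, t))]) for u in nbrs - {s}]
            worst = max(worst, sum((x - 1) / (x + 1) for x in d2))
    return worst < 1.0

# a single loop of four nodes with strong potentials, log d(psi)^2 = 3
neighbors = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
log_dpsi2 = {frozenset(e): 3.0 for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
# each sum in (16) has a single term (e^3 - 1)/(e^3 + 1) < 1, so (16) holds,
# whereas Simon's sum 0.5*3 + 0.5*3 = 3 >= 1 fails
```

Since each term in (16) is strictly below one, any finite-strength single loop passes, matching the claim above.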
5.2 Improving the Bounds by Path-counting

If we are willing to put a bit more effort into our bound computation, we may be able to improve it. In particular, the proofs of (16)–(17) assume that, as a message error propagates through the graph, repeated convolution with only the strongest set of potentials is possible. But often even if the worst potentials are quite strong, every cycle which contains them also contains several weaker potentials. Using an iterative algorithm much like BP itself, we may obtain a more globally aware estimate of error propagation.

Let us consider a message-passing procedure (potentially performed offline) in which node t passes a (scalar) bound υ^i_{ts} on the message error d(e^i_{ts}) at iteration i to its neighbor s. The bound may be initialized to υ¹_{ts} = d(ψ_{ts})², and the next iteration’s (updated) outgoing bound is given by the pair of equations

log υ^{i+1}_{ts} = log [(d(ψ_{ts})² ϵ^i_{ts} + 1) / (d(ψ_{ts})² + ϵ^i_{ts})],   log ϵ^i_{ts} = ∑_{u∈Γ_t\s} log υ^i_{ut}   (18)

Here, as in Section 5.1, ϵ^i_{ts} bounds the error d(E_{ts}) in the product of incoming messages. If (18) converges to log υ^i_{ts} → 0 for all t, s we may guarantee a unique fixed point for loopy BP; if not, we may compute log ϵ^i_t = ∑_{Γ_t} log υ^i_{ut} to obtain a bound on the belief error at each node t. If every node is identical (same number of neighbors, same potential strengths) this yields the same bound as (17); however, if the graph or potential strengths are inhomogeneous it provides a strictly stronger bound on loopy BP convergence and errors.

This situation is illustrated in Figure 4—we specify two 5×5 grids in terms of their potential strengths and compute bounds on the log-range of their fixed point beliefs. (While potential strength does not completely specify the graphical model, it is sufficient for all the bounds considered here.) One grid (a) has equal-strength potentials log d(ψ)² = α, while the other has many weaker potentials (α/2).
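The iteration (18) lends itself to a direct implementation. In this sketch (graph encoding ours), `log_delta = 0` reproduces (18), while a positive value adds the per-message distortion considered in Section 5.3:

```python
import numpy as np

def propagate_bounds(neighbors, log_dpsi2, log_delta=0.0, iters=500):
    """Iterate eq. (18) (plus an optional extra distortion log_delta).
    Returns the belief-error bounds log eps_t = sum_{u in Gamma_t} log ups_ut."""
    ups = {(t, s): log_dpsi2[frozenset((t, s))]       # init: ups^1_ts = d(psi_ts)^2
           for t, nbrs in neighbors.items() for s in nbrs}
    for _ in range(iters):
        new = {}
        for (t, s) in ups:
            log_eps = sum(ups[(u, t)] for u in neighbors[t] - {s})
            d2, eps = np.exp(log_dpsi2[frozenset((t, s))]), np.exp(log_eps)
            new[(t, s)] = np.log((d2 * eps + 1) / (d2 + eps)) + log_delta
        ups = new
    return {t: sum(ups[(u, t)] for u in neighbors[t]) for t in neighbors}

# a weakly coupled loop: condition (16) holds, so the bounds contract to zero
neighbors = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
log_dpsi2 = {frozenset(e): 0.5 for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
belief_bounds = propagate_bounds(neighbors, log_dpsi2)   # all essentially zero
```

On an inhomogeneous graph (different strengths per edge), the same routine produces the per-node, path-aware bounds described above.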
The worst-case bounds are the same (since both have a node with four strong neighbors), shown as the solid curve in (c). However, the dashed curves show the estimate of (18), which improves only slightly for the strongly coupled graph (a) but considerably for the weaker graph (b). All three bounds give considerably more information than Simon’s condition (dotted vertical line).

5.3 Introducing additional errors

As discussed in the introduction, we may wish to introduce or allow additional errors in our messages at each stage, in order to improve the computational or communication efficiency of the algorithm. This may be the result of an actual distortion imposed on the message (perhaps to decrease its complexity, for example quantization), from censoring the message update (reusing the message from the previous iteration) when the two are sufficiently similar, or from approximating or quantizing the model parameters (potential functions). Any of these additional errors can be easily incorporated into our framework. If at each iteration, we introduce an additional (perhaps stochastic) error to each message which has a dynamic range bounded by some constant δ, the relationships of (18) become

log υ^{i+1}_{ts} = log [(d(ψ_{ts})² ϵ^i_{ts} + 1) / (d(ψ_{ts})² + ϵ^i_{ts})] + log δ,   log ϵ^i_{ts} = ∑_{u∈Γ_t\s} log υ^i_{ut}   (19)

and gives a bound on the steady-state error (distance from a fixed point) in the system.

5.4 Stochastic Analysis

Unfortunately, the above bounds are often pessimistic compared to actual performance. By treating the perturbations as stochastic we may obtain a more realistic estimate (though no longer a strict bound) on the resulting error. Specifically, let us describe the error functions log e_{ts}(x_s) for each x_s as a random variable with mean zero and variance σ²_{ts}. By assuming that the errors in each incoming message are uncorrelated, we obtain additivity of their variances: Σ²_{ts} = ∑_{u∈Γ_t\s} σ²_{ut}.
The assumption of uncorrelated errors is clearly questionable, since propagation around loops may couple the incoming message errors, but it is common in quantization analysis, and we shall see that it appears reasonable in practice. We would also like to estimate the contraction of variance incurred in the convolution step. We may do so by applying a simple sigma-point quadrature (“unscented”) approximation [16], in which the standard deviation of the convolved function m_{ts}(x_s) is estimated by applying the same nonlinearity (13) to the standard deviation of the error on the incoming product M_{ts}. Thus, similarly to (18) and (19), we have

σ²_{ts} = (log [(d(ψ_{ts})² λ_{ts} + 1) / (λ_{ts} + d(ψ_{ts})²)])² + (log δ)²,   (log λ_{ts})² = ∑_{u∈Γ_t\s} σ²_{ut}   (20)

The steady-state solution of (20) yields an estimate of the variances of the log-belief log p_t by σ²_t = ∑_{u∈Γ_t} σ²_{ut}; this estimate is typically much smaller than the bound (18) due to the strict sub-additive relationship between the standard deviations. Although it is not a bound, using a Chebyshev-like argument we may conclude that, for example, the 2σ_t distance will be greater than the typical errors observed in practice.

6 Experiments

We demonstrate the error bounds for perturbed messages with a set of Monte Carlo trials. In particular, for each trial we construct a binary-valued 5 × 5 grid with uniform potential strengths, which are either (1) all positively correlated, or (2) randomly chosen to be positively or negatively correlated (equally likely); we also assign random single-node potentials to each x_s. We then run a quantized version of BP, rounding each log-message to discrete values separated by 2 log δ (ensuring that the newly introduced error satisfies d(e) ≤ δ). Figure 5 shows the maximum belief error in each of 100 trials of this procedure for various values of δ. Also shown are the bound on belief error developed in Section 5.3 and the 2σ estimate computed assuming uncorrelated message errors.
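The rounding step used in these experiments might be sketched as follows (the grid construction itself is omitted; the message values are illustrative):

```python
import numpy as np

def quantize_log_message(m, log_delta):
    """Round log m(x) to a grid with spacing 2*log(delta) and renormalize;
    the induced multiplicative error then satisfies d(e) <= delta, since the
    rounding error on log m is at most log(delta) in magnitude and
    normalization shifts all log-errors equally."""
    step = 2.0 * log_delta
    m_hat = np.exp(np.round(np.log(m) / step) * step)
    return m_hat / m_hat.sum()

rng = np.random.default_rng(1)
log_delta = 0.05
m = rng.dirichlet(np.ones(5))               # an illustrative normalized message
m_hat = quantize_log_message(m, log_delta)
e = m_hat / m
log_d_e = 0.5 * np.log(e.max() / e.min())   # log d(e), per eq. (5)
assert log_d_e <= log_delta + 1e-12
```

Applying this rounding after every update of a BP run, and recording the worst belief error, reproduces the kind of experiment summarized in Figure 5.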
As can be seen, the stochastic estimate is often a much tighter, more accurate assessment of error, but it does not possess the same strong theoretical guarantees. Since, as observed in analysis of quantization and stability in digital filtering [17], the errors introduced by quantization are typically close to independent, the assumptions of the stochastic estimate are reasonable, and empirically we observe that the estimate and actual errors behave similarly.

Figure 5: Maximum belief errors incurred as a function of the quantization error, for (a) log d(ψ)² = .25 and (b) log d(ψ)² = 1. The scatterplot indicates the maximum error measured in the graph for each of 200 Monte Carlo runs (markers distinguish positively correlated from mixed-correlation potentials); this is strictly bounded above by the solution of (18), solid, and bounded with high probability (assuming uncorrelated errors) by (20), dashed.

7 Conclusions

We have described a particular measure of distortion on BP messages and shown that it is sub-additive and measurably contractive, leading to sufficient conditions for loopy BP to converge to a unique fixed point. Furthermore, this enables analysis of quantized, stochastic, or other approximate forms of BP, yielding sufficient conditions for convergence and bounds on the deviation from exact message passing. Assuming the perturbations are uncorrelated can often give tighter estimates of the resulting error. For additional details as well as some further consequences and extensions, see [14].

The authors would like to thank Erik Sudderth, Martin Wainwright, Tom Heskes, and Lei Chen for many helpful discussions. This research was supported in part by MIT Lincoln Laboratory under Lincoln Program 2209-3023 and by ODDR&E MURI through ARO grant DAAD19-00-0466.

References

[1] Y. Weiss.
Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1), 2000. [2] S. Tatikonda and M. Jordan. Loopy belief propagation and Gibbs measures. In UAI, 2002. [3] T. Heskes. On the uniqueness of loopy belief propagation fixed points. To appear in Neural Computation, 2004. [4] D. Koller, U. Lerner, and D. Angelov. A general algorithm for approximate inference and its application to hybrid Bayes nets. In UAI 15, pages 324–333, 1999. [5] J. M. Coughlan and S. J. Ferreira. Finding deformable shapes using loopy belief propagation. In ECCV 7, May 2002. [6] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In CVPR, 2003. [7] M. Isard. PAMPAS: Real-valued graphical models for computer vision. In CVPR, 2003. [8] T. Minka. Expectation propagation for approximate Bayesian inference. In UAI, 2001. [9] A. T. Ihler, J. W. Fisher III, and A. S. Willsky. Communication-constrained inference. Technical Report TR-2601, Laboratory for Information and Decision Systems, 2004. [10] L. Chen, M. Wainwright, M. Cetin, and A. Willsky. Data association based on optimization in graphical models with application to sensor networks. Submitted to Mathematical and Computer Modeling, 2004. [11] P. Clifford. Markov random fields in statistics. In G. R. Grimmett and D. J. A. Welsh, editors, Disorder in Physical Systems, pages 19–32. Oxford University Press, Oxford, 1990. [12] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, 1988. [13] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. Technical Report 2004-040, MERL, May 2004. [14] A. T. Ihler, J. W. Fisher III, and A. S. Willsky. Message errors in belief propagation. Technical Report TR-2602, Laboratory for Information and Decision Systems, 2004. [15] Hans-Otto Georgii. Gibbs measures and phase transitions. Studies in Mathematics.
de Gruyter, Berlin / New York, 1988. [16] S. Julier and J. Uhlmann. A general method for approximating nonlinear transformations of probability distributions. Technical report, RRG, Dept. of Eng. Science, Univ. of Oxford, 1996. [17] A. Willsky. Relationships between digital signal processing and control and estimation theory. Proc. IEEE, 66(9):996–1017, September 1978.
|
2004
|
125
|
2,536
|
Methods Towards Invasive Human Brain Computer Interfaces Thomas Navin Lal1, Thilo Hinterberger2, Guido Widman3, Michael Schr¨oder4, Jeremy Hill1, Wolfgang Rosenstiel4, Christian E. Elger3, Bernhard Sch¨olkopf1 and Niels Birbaumer2,5 1 Max-Planck-Institute for Biological Cybernetics, T¨ubingen, Germany {navin,jez,bs}@tuebingen.mpg.de 2 Eberhard Karls University, Dept. of Medical Psychology and Behavioral Neurobiology, T¨ubingen, Germany {thilo.hinterberger,niels.birbaumer}@uni-tuebingen.de 3 University of Bonn, Department of Epileptology, Bonn, Germany {guido.widman,christian.elger}@ukb.uni-bonn.de 4 Eberhard Karls University, Dept. of Computer Engineering, T¨ubingen, Germany {schroedm,rosenstiel}@informatik.uni-tuebingen.de 5 Center for Cognitive Neuroscience, University of Trento, Italy Abstract During the last ten years there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions, most human BCIs are based on extracranial electroencephalography (EEG). However, reported bit rates are still low. One reason for this is the low signal-to-noise ratio of the EEG [16]. We are currently investigating if BCIs based on electrocorticography (ECoG) are a viable alternative. In this paper we present the method and examples of intracranial EEG recordings of three epilepsy patients with electrode grids placed on the motor cortex. The patients were asked to repeatedly imagine movements of two kinds, e.g., tongue or finger movements. We analyze the classifiability of the data using Support Vector Machines (SVMs) [18,21] and Recursive Channel Elimination (RCE) [11]. 1 Introduction Completely paralyzed patients cannot communicate despite intact cognitive functions. The disease Amyotrophic Lateral Sclerosis (ALS) for example, leads to complete paralysis of the voluntary muscular system caused by the degeneration of the motor neurons. 
Birbaumer et al. [1, 9] developed a Brain Computer Interface (BCI), called the Thought Translation Device (TTD), which is used by several paralyzed patients. In order to use the interface, patients have to learn to voluntarily regulate their Slow Cortical Potentials (SCP). The system then allows its users to write text on the screen of a computer or to surf the web. Although it presents a major breakthrough, the system has two disadvantages. Not all patients manage to control their SCP. Furthermore the bit rate is quite low: a well-trained user requires about 30 seconds to write one character.

Figure 1: The left picture schematically shows the position of the 8x8 electrode grid of patient II. It was placed on the right hemisphere. As shown in the right picture the electrodes are connected to the amplifier via cables that are passed through the skull.

Recently there has been increasing interest in EEG-based BCIs in the machine learning community. In contrast to the TTD, in many BCI-systems the computer learns rather than the system’s user [2, 5, 11]. Most such BCIs require a data collection phase during which the subject repeatedly produces brain states of clearly separable locations. Machine learning techniques like Support Vector Machines or Fisher Discriminant are applied to the data to derive a classifying function. This function can be used in online applications to identify the different brain states produced by the subject. The majority of BCIs are based on extracranial EEG-recordings during imagined limb movements. We restrict ourselves to mentioning just a few publications [14, 15, 17, 22]. Movement-related cortical potentials in humans on the basis of electrocorticographical data have also been studied, e.g. by [20]. Very recently the first work describing BCIs based on electrocorticographic recordings was published [6, 13]. Successful approaches have been developed using BCIs based on single unit, multiunit or field potentials recordings of primates.
Serruya et al. taught monkeys to control a cursor on the basis of potentials from 7-30 motor cortex neurons [19]. The BCI developed by [3] enables monkeys to reach and grasp using a robot arm. Their system is based on recordings from frontoparietal cell ensembles. Driven by the success of BCIs for primates based on single unit or multiunit recordings, we are currently developing a BCI system that is based on ECoG recordings, as described in the present paper.

2 Electrocorticography and Epilepsy

All patients presented suffer from a focal epilepsy. The epileptic focus - the part of the brain which is responsible for the seizures - is removed by resection. Prior to surgery, the epileptic focus has to be localized. In some complicated cases, this must be done by placing electrodes onto the surface of the cortex as well as into deeper regions of the brain. The skull over the region of interest is removed, the electrodes are positioned and the incision is sutured. The electrodes are connected to a recording device via cables (cf. Figure 1). Over a period of 5 to 14 days, ECoG is continuously recorded until the patient has had enough seizures to precisely localize the focus [10]. Prior to surgery, the parts of the cortex that are covered by the electrodes are identified by electric stimulation of the electrodes. In the current setup, the patients keep the electrode implants for one to two weeks. After the implantation surgery, several days of recovery and follow-up examinations are needed. Due to the tight time constraints, it is therefore not possible to run long experiments. Furthermore, most of the patients cannot concentrate for a long period of time. Therefore only a small amount of data could be collected.

Table 1: Positions of implanted electrodes. All three patients had an electrode grid implanted that partly covered the right or the left motor cortex.

patient | implanted electrodes                                  | task                           | trials
I       | 64-grid right hemisphere, two 4-strip interhemisphere | left vs. right hand            | 200
II      | 64-grid right hemisphere                              | little left finger vs. tongue  | 150
III     | 20-grid central, four 16-strips frontal               | little right finger vs. tongue | 100

3 Experimental Situation and Data Acquisition

The experiments were performed in the department of epileptology of the University of Bonn. We recorded ECoG data from three epileptic patients with a sampling rate of 1000 Hz. The electrode grids were placed on the cortex under the dura mater and covered the primary motor and premotor area as well as the fronto-temporal region of either the right or left hemisphere. The grid sizes ranged from 20 to 64 electrodes. Furthermore, two of the patients had additional electrodes implanted on other parts of the cortex (cf. Table 1). The imagery tasks were chosen such that the involved parts of the brain

• were covered by the electrode grid
• were represented spatially separately in the primary motor cortex.

The expected well-localized signal in motor-related tasks suggested discrimination tasks using imagination of hand, little finger, or tongue movements. The patients were seated in a bed facing a monitor and were asked to repeatedly imagine two different movements. At the beginning of each trial, a small fixation cross was displayed in the center of the screen. The 4-second imagination phase started with a cue that was presented in the form of a picture showing either a tongue or a little finger for patients II and III. The cue for patient I was an arrow pointing left or right. There was a short break between the trials. The images which were used as a cue are shown in Figure 2.

4 Preprocessing

Starting half a second after the visualization of the task cue, we extracted a window of length 1.5 seconds from the data of each electrode. For every trial and every electrode we thus obtained an EEG sequence that consisted of 1500 samples. The linear trend was removed from every sequence.
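A minimal numpy sketch of this preprocessing (window extraction plus detrending, followed by the order-3 autoregressive fit described next; the paper uses a forward-backward AR estimator, which is approximated here with a plain least-squares fit, so the function and variable names are illustrative):

```python
import numpy as np

def preprocess_trial(ecog, fs=1000, cue_offset=0.5, win_len=1.5, order=3):
    """Window, detrend, and AR-fit one trial.

    ecog: (n_channels, n_samples) array for one trial, cue at sample 0.
    Returns a feature vector of length order * n_channels.
    """
    start = int(cue_offset * fs)               # 0.5 s after the cue
    stop = start + int(win_len * fs)           # 1.5 s window -> 1500 samples
    features = []
    for x in ecog[:, start:stop].astype(float):
        # remove the linear trend with a least-squares line fit
        t = np.arange(x.size)
        slope, intercept = np.polyfit(t, x, 1)
        x = x - (slope * t + intercept)
        # AR(order) by least squares: x[t] ~ sum_k a_k * x[t-k]
        # (a simple stand-in for the forward-backward estimator of the paper)
        X = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
        a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
        features.append(a)
    return np.concatenate(features)            # one point in R^(3n) for order=3
```

Concatenating these AR coefficients over all channels, together with the task label in {+1, -1}, yields one training point as described below.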
Following [8, 11, 15], we fitted a forward-backward autoregressive model of order three to each sequence. The concatenated model parameters of the channels, together with the descriptor of the imagined task (i.e. +1, -1), form one training point. For a given number n of EEG channels, a training point (x, y) is therefore a point in R^{3n} × {−1, 1}.

5 Channel Selection

The number of available training points is relatively small compared to the dimensionality of the data. The data of patient III, for example, consists of only 100 training points of dimension 252. This is a typical setting in which feature selection methods can improve classification accuracy.

Figure 2: The patients were asked to repeatedly imagine two different movements that are represented separately at the primary cortex, e.g. tongue and little finger movements. This figure shows two stimuli that were used as a cue for imagery. The trial structure is shown on the right. The imagination phase lasted four seconds. We extracted segments of 1.5 seconds from the ECoG recordings for the analysis.

Lal et al. [11] recently introduced a feature selection method for the special case of EEG data. Their method is based on Recursive Feature Elimination (RFE) [7]. RFE is a backward feature selection method. Starting with the full data set, features are iteratively removed from the data until a stopping criterion is met. In each iteration a Support Vector Machine (SVM) is trained and its weight vector is analyzed. The feature that corresponds to the smallest weight vector entry is removed. Recursive Channel Elimination (RCE) [11] treats the features that belong to the data of one channel in a consistent way. As in RFE, in every iteration one SVM is trained. The evaluation criterion that determines which of the remaining channels will be removed is the mean of the weight vector entries that correspond to a channel's features. All features of the channel with the smallest mean value are removed from the data.
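The elimination loop just described might be sketched as follows, assuming scikit-learn's LinearSVC as the linear SVM and using the mean absolute weight per channel as the elimination score (the exact scoring in [11] may differ in detail):

```python
import numpy as np
from sklearn.svm import LinearSVC

def recursive_channel_elimination(X, y, n_channels, feats_per_channel):
    """Rank channels by repeatedly dropping the channel whose features carry
    the smallest mean |SVM weight|.

    X: (n_trials, n_channels * feats_per_channel); channel c owns columns
    c*feats_per_channel : (c+1)*feats_per_channel.
    Returns channel indices, best-ranked first.
    """
    remaining = list(range(n_channels))
    eliminated = []                                  # filled worst-first
    while remaining:
        cols = np.concatenate([np.arange(c * feats_per_channel,
                                         (c + 1) * feats_per_channel)
                               for c in remaining])
        svm = LinearSVC(C=1.0, max_iter=10000).fit(X[:, cols], y)
        w = np.abs(svm.coef_.ravel())
        # mean absolute weight per remaining channel
        scores = w.reshape(len(remaining), feats_per_channel).mean(axis=1)
        eliminated.append(remaining.pop(int(np.argmin(scores))))
    return eliminated[::-1]                          # last survivor ranks first
```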
The output of RCE is a list of ranked channels.

6 Data Analysis

To begin with, we are interested in how well SVMs can learn from small ECoG data sets. Furthermore, we would like to understand how localized the classification-relevant information is, i.e. how many recording positions are necessary to obtain high classification accuracy. We compare how well SVMs can generalize given the data of different subsets of ECoG channels:

(i) the complete data, i.e. all channels;
(ii) the subset of channels suggested by RCE. In this setting we use the list of ranked channels from RCE in the following way: for every l in the range of one to the total number of channels, we calculate a 10-fold cross-validation error on the data of the l best-ranked channels. We use the subset of channels which leads to the lowest error estimate;
(iii) the two best-ranked channels by RCE. The underlying assumption here is that the classification-relevant information is extremely localized and that two correctly chosen channels contain sufficient information for classification purposes;
(iv) two channels drawn at random.

Throughout the paper we use linear SVMs. For regularization purposes we use a ridge on the kernel matrix, which corresponds to a 2-norm penalty on the slack variables [4].

Figure 3: This plot shows ECoG recordings from 4 channels (C1-C4) while the patient was imagining movements. The distance between two horizontal lines corresponds to 100 µV. The amplitude of the recordings ranges roughly from -100 µV to +100 µV, which is on the order of five to ten times the amplitude measured with extracranial EEG.

To evaluate the classification performance of an SVM that is trained on a specific subset of channels, we calculate its prediction error on a separate test set. We use a double cross-validation scheme; the following procedure is repeated 50 times: We randomly split the data into a training set (80%) and a test set (20%).
Via 10-fold cross-validation on the training set we estimate all parameters for the different considered subsets (i)-(iv):

(i) The ridge is estimated.
(ii) On the basis of the training set, RCE suggests a subset of channels. We restrict the training set as well as the test set to these channels. A ridge value is then estimated from the restricted training set.
(iii) We restrict the training set and the test set to the 2 best-ranked channels by RCE. The ridge is then estimated on the restricted training set.
(iv) The ridge is estimated.

We then train an SVM on the (restricted) training set using the estimated ridge. The trained model is tested on the (restricted) test set. For (i)-(iv) we obtain 50 test error estimates from the 50 repetitions for each patient. Table 2 summarizes the results.

7 Results

The results in Table 2 show that the generalization ability can be significantly increased by RCE. For patient I the error decreases from 38% to 24% when using the channel subsets suggested by RCE. On average, RCE selects channel subsets of size 5.8. For patient II the number of channels is reduced to one third, but the channel selection process does not yield an increased accuracy. The error of 40% can be reduced to 23% for patient III using on average 5 channels selected by RCE. For patients I and III the choice of the 2 best-ranked channels leads to a much lower error as well. The direct comparison of the results using the two best-ranked channels to two randomly chosen channels shows how well the RCE ranking method works: for patient III the error drops from chance level for two random channels to 18% using the two best-ranked channels. The reason why there is such a big difference in performance for patient III when comparing (i) and (iii) might be that, of the 84 electrodes, only 20 are located over or close to the motor cortex. RCE successfully identifies the important electrodes.
In contrast to patient III, the electrodes of patient II are all more or less located close to the motor cortex. This explains why data from two randomly drawn channels can yield a classification rate better than chance. Furthermore, patient II had the fewest electrodes implanted, and thus the chance of randomly choosing an electrode close to an important location is higher than for the other two patients.

Table 2: Classification Results. We compare the classification accuracy of SVMs trained on the data of different channel subsets: (i) all ECoG channels, (ii) the subset determined by Recursive Channel Elimination (RCE), (iii) the subset consisting of the two best-ranked channels by RCE and (iv) two randomly drawn channels. The mean errors of 50 repetitions are given along with the standard deviations. The test error can be significantly reduced by RCE for two of the three patients. Using the two best-ranked channels by RCE also yields good results for two patients. SVMs trained on two random channels show performance better than chance only for patient II.

        all channels (i)          RCE cross-val. (ii)       RCE top 2 (iii)  random 2 (iv)
pat     #channels  error          #channels  error          error            error
I       74         0.382 ± 0.071  5.8        0.243 ± 0.063  0.244 ± 0.078    chance level
II      64         0.257 ± 0.076  21.5       0.268 ± 0.080  0.309 ± 0.086    0.419 ± 0.123
III     84         0.4 ± 0.1      5.0        0.233 ± 0.13   0.175 ± 0.078    chance level

8 Discussion

We recorded ECoG data from three epilepsy patients during a motor imagery experiment. Although only a small amount of data was collected, the following conclusions can be drawn:

• The data of all three patients is reasonably well classifiable. The error rates range from 17.5% to 23.3%. This is still high compared to the best error rates from BCIs based on extracranial EEG, which are as low as 10% (e.g. [12]). Please note that we used only 1.5 seconds of data from each trial and that very few training points (100-200) were available. Furthermore, extracranial EEG has been studied and developed for a number of years.
• Recursive Channel Elimination (RCE) shows very good performance. RCE successfully identifies subsets of ECoG channels that lead to good classification performance. On average, RCE leads to a significantly improved classification rate compared to a classifier that is based on the data of all available channels.

• Poor classification rates using two randomly drawn channels and high classification rates using the two best-ranked channels by RCE suggest that classification-relevant information is focused on small parts of the cortex and depends on the location of the physiological function.

• The best-ranked RCE channels correspond well with the results from the electric stimulation (cf. Figure 4).

9 Ongoing Work and Further Research

Although our preliminary results indicate that invasive Brain Computer Interfaces may be feasible, a number of questions need to be investigated in further experiments. For instance, it is still an open question whether the patients are able to adjust to a trained classifier and whether the classifying function can be transferred from session to session. Moreover, experiments that are based on tasks different from motor imagery need to be implemented and tested. It is quite conceivable that the tasks that have been found to work well for extracranial EEG are not ideal for ECoG.

Figure 4: Electric stimulation of the implanted electrodes helps to identify the parts of the cortex that are covered by the electrode grid. This information is necessary for the surgery. The red (solid) dots on the left picture mark the motor cortex of patient II as identified by the electric stimulation method. The positions marked with yellow crosses correspond to the epileptic focus. The red points on the right image are the best-ranked channels by Recursive Channel Elimination (RCE). The RCE channels correspond well to the results from the electric stimulation diagnosis.
Likewise, it is unclear whether our preprocessing and machine learning methods, originally developed for extracranial EEG data, are well adapted to the different type of data that ECoG delivers.

Acknowledgements

This work was supported in part by the Deutsche Forschungsgemeinschaft (SFB 550, B5 and grant RO 1030/12), by the National Institute of Health (D.31.03765.2), and by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. T.N.L. was supported by a grant from the Studienstiftung des deutschen Volkes. Special thanks go to Theresa Cooke.

References

[1] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor. A spelling device for the paralysed. Nature, 398:297-298, 1999.

[2] B. Blankertz, G. Curio, and K. Müller. Classifying single trial EEG: Towards brain computer interfacing. In T.K. Leen, T.G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems, volume 14, Cambridge, MA, USA, 2001. MIT Press.

[3] J.M. Carmena, M.A. Lebedev, R.E. Crist, J.E. O'Doherty, D.M. Santucci, D. Dimitrov, P.G. Patil, C.S. Henriquez, and M.A. Nicolelis. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology, 1(2), 2003.

[4] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273-297, 1995.

[5] J. del R. Millan, F. Renkens, J. Mourino, and W. Gerstner. Noninvasive brain-actuated control of a mobile robot by human EEG. IEEE Transactions on Biomedical Engineering, Special Issue on Brain-Computer Interfaces, 51(6):1026-1033, June 2004.

[6] B. Graimann, J.E. Huggins, S.P. Levine, and G. Pfurtscheller. Towards a direct brain interface based on human subdural recordings and wavelet packet analysis. IEEE Transactions on Biomedical Engineering, 51(6):954-962, 2004.

[7] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik.
Gene selection for cancer classification using support vector machines. Journal of Machine Learning Research, 3:1439-1461, March 2003.

[8] S. Haykin. Adaptive Filter Theory. Prentice-Hall International, Inc., Upper Saddle River, NJ, USA, 1996.

[9] T. Hinterberger, J. Kaiser, A. Kübler, N. Neumann, and N. Birbaumer. The Thought Translation Device and its Applications to the Completely Paralyzed. In Diebner, Druckrey, and Weibel, editors, Sciences of the Interfaces. Genista-Verlag, Tübingen, 2001.

[10] J. Engel Jr. Presurgical evaluation protocols. In Surgical Treatment of the Epilepsies, pages 740-742. Raven Press Ltd., New York, 2nd edition, 1993.

[11] T.N. Lal, M. Schröder, T. Hinterberger, J. Weston, M. Bogdan, N. Birbaumer, and B. Schölkopf. Support Vector Channel Selection in BCI. IEEE Transactions on Biomedical Engineering, Special Issue on Brain-Computer Interfaces, 51(6):1003-1010, June 2004.

[12] S. Lemm, C. Schäfer, and G. Curio. BCI Competition 2003 - Data Set III: Probabilistic Modeling of Sensorimotor mu-Rhythms for Classification of Imaginary Hand Movements. IEEE Transactions on Biomedical Engineering, Special Issue on Brain-Computer Interfaces, 51(6):1077-1080, June 2004.

[13] E.C. Leuthardt, G. Schalk, J.R. Wolpaw, J.G. Ojemann, and D.W. Moran. A brain-computer interface using electrocorticographic signals in humans. Journal of Neural Engineering, 1:63-71, 2004.

[14] D.J. McFarland, L.M. McCane, S.V. David, and J.R. Wolpaw. Spatial filter selection for EEG-based communication. Electroencephalography and Clinical Neurophysiology, 103:386-394, 1997.

[15] G. Pfurtscheller, C. Neuper, A. Schlögl, and K. Lugger. Separability of EEG signals recorded during right and left motor imagery using adaptive autoregressive parameters. IEEE Transactions on Rehabilitation Engineering, 6(3):316-325, 1998.

[16] J. Raethjen, M. Lindemann, M. Dümpelmann, R. Wenzelburger, H. Stolze, G. Pfister, C.E. Elger, J. Timmer, and G. Deuschl.
Corticomuscular coherence in the 6-15 Hz band: is the cortex involved in the generation of physiologic tremor? Experimental Brain Research, 142:32-40, 2002.

[17] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Transactions on Rehabilitation Engineering, 8(4):441-446, 2000.

[18] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, USA, 2002.

[19] M.D. Serruya, N.G. Hatsopoulos, L. Paninski, M.R. Fellows, and J.P. Donoghue. Instant neural control of a movement signal. Nature, 416:141-142, 2002.

[20] C. Toro, G. Deuschl, R. Thatcher, S. Sato, C. Kufta, and M. Hallett. Event-related desynchronization and movement-related cortical potentials on the ECoG and EEG. Electroencephalography and Clinical Neurophysiology, 5:380-389, 1994.

[21] V.N. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, USA, 1998.

[22] R. Wolpaw and D.J. McFarland. Multichannel EEG-based brain-computer communication. Electroencephalography and Clinical Neurophysiology, 90:444-449, 1994.
A Three Tiered Approach for Articulated Object Action Modeling and Recognition

Le Lu, Gregory D. Hager
Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218
lelu/hager@cs.jhu.edu

Laurent Younes
Center of Imaging Science, Johns Hopkins University, Baltimore, MD 21218
younes@cis.jhu.edu

Abstract

Visual action recognition is an important problem in computer vision. In this paper, we propose a new method to probabilistically model and recognize actions of articulated objects, such as hand or body gestures, in image sequences. Our method consists of three levels of representation. At the low level, we first extract a feature vector invariant to scale and in-plane rotation by using the Fourier transform of a circular spatial histogram. Then, spectral partitioning [20] is utilized to obtain an initial clustering; this clustering is then refined using a temporal smoothness constraint. Gaussian mixture model (GMM) based clustering and density estimation in the subspace of linear discriminant analysis (LDA) are then applied to thousands of image feature vectors to obtain an intermediate level representation. Finally, at the high level we build a temporal multiresolution histogram model for each action by aggregating the clustering weights of sampled images belonging to that action. We discuss how this high level representation can be extended to achieve temporal scaling invariance and to include bigram or multigram transition information. Both image clustering and action recognition/segmentation results are given to show the validity of our three tiered representation.

1 Introduction

Articulated object action modeling, tracking and recognition have been important research issues in the computer vision community for decades. Past approaches [3, 13, 4, 6, 23, 2] have used many different kinds of direct image observations, including color, edges, contours and moments [14], to fit a hand or body's shape model and motion parameters.
In this paper, we propose to learn a small set of object appearance descriptors, and then to build an aggregated temporal representation of the clustered object descriptors over time. There are several obvious reasons to base gesture or motion recognition on a time sequence of observations. First, most hand or body postures are ambiguous. For example, in American Sign Language, 'D' and 'G', and 'H' and 'U' have indistinguishable appearance from some viewpoints. Furthermore, these gestures are difficult to track from frame to frame due to motion blur, lack of features, and complex self-occlusions. By modeling hand/body gesture recognition as a sequential learning problem, appropriate discriminative information can be retrieved and more action categories can be handled. In related work, Darrell and Pentland [7] use dynamic time warping (DTW) to align and recognize a space-time gesture against a stored library. To build the library, key views are selected from an incoming video by choosing views that have low correlation with all current views. This approach is empirical and does not guarantee any sort of global consistency of the chosen views. As a result, recognition may be unstable. In comparison, our method describes image appearances uniformly and clusters them globally from a training set containing different gestures. For static hand posture recognition, Tomasi et al. [24] apply vector quantization methods to cluster images of different postures and different viewpoints. This is a feature-based approach, with thousands of features extracted for each image. However, clustering in a high dimensional space is very difficult and can be unstable. We argue that fewer, more global features are adequate for the purposes of gesture recognition. Furthermore, the circular histogram representation has an adjustable spatial resolution to accommodate differing appearance complexities, and it is translation, rotation, and scale invariant.
In other work, [27, 9] recognize human actions at a distance by computing motion information between images and relying on the temporal correlation of motion vectors across sequences. Our work also makes use of motion information, but does not rely exclusively on it. Rather, we combine appearance and motion cues to increase sensitivity beyond what either can provide alone. Since our method is based on the temporal aggregation of image clusters as a histogram to recognize an action, it can also be considered a temporal texton-like method [17, 16]. One advantage of the aggregated histogram model in a time series is that it is straightforward to accommodate temporal scaling by using a sliding window. In addition, higher order models corresponding to bigrams or trigrams of simpler "gestemes" can also be naturally employed to extend the descriptive power of the method. In summary, there are four principal contributions in this paper. First, we propose a new scale/rotation-invariant hand image descriptor which is stable, compact and representative. Second, we introduce a method for the sequential smoothing of clustering results. Third, we show that LDA/GMM with spectral partitioning initialization is an effective way to learn well-formed probability densities for clusters. Finally, we recognize image sequences as actions efficiently based on a flexible histogram model. We also discuss improvements to the method by incorporating motion information.

2 A Three Tiered Approach

We propose a three tiered approach for dynamic action modeling comprising low level feature extraction, intermediate level feature vector clustering and high level histogram recognition, as shown in Figure 1.
[Figure 1 depicts the pipeline as a block diagram. Low level (rotation invariant feature extraction): probabilistic foreground map (GMM color segmentation, probabilistic appearance modeling, or dynamic texture segmentation by GPCA), feature extraction via a circular/Fourier representation, and feature dimension reduction via variance analysis. Intermediate level (clustering representation for image frames): framewise clustering via spectral segmentation (initialization), GMM/LDA density modeling for clusters, and temporally constrained clustering (refinement). High level (aggregated histogram model for action recognition): temporally aggregated multiresolution histograms, unigram to bigram/multigram histogram models, and temporal pyramids with scaling.]
Figure 1: Diagram of a three tier approach for dynamic articulated object action modeling.

Figure 2: (a) Image after background subtraction. (b) GMM based color segmentation. (c) Circular histogram for feature extraction.

2.1 Low Level: Rotation Invariant Feature Extraction

In the low level image processing, our goals are to locate the region of interest in an image and to extract a scale and in-plane rotation invariant feature vector as its descriptor. In order to accomplish this, a reliable and stable foreground model of the target in question is needed. Depending on the circumstances, a Gaussian mixture model (GMM) for segmentation [15], probabilistic appearance modeling [5], or dynamic object segmentation by Generalized Principal Component Analysis (GPCA) [25] are possible solutions. In this paper, we apply a GMM for hand skin color segmentation. We fit the GMM by first performing a simple background subtraction to obtain a noisy foreground containing a hand object (shown in Figure 2 (a)). From this, more than 1 million RGB pixels are used to train skin and non-skin color density models with 10 Gaussian kernels for each class. Having done this, for new images a probability density ratio Pskin/Pnonskin of the two classes is computed. If Pskin/Pnonskin is larger than 1, the pixel is considered skin (foreground) and otherwise background. A morphological operator is then used to clean up this initial segmentation and create a binary mask for the hand object. We then compute the centroid and second central moments of this 2D mask. A circle is defined about the target by setting its center at the centroid and its radius to 2.8 times the largest eigenvalue of the second central moment matrix (covering over 99% of the skin pixels in Figure 2 (c)). This circle is then divided into 6 concentric annuli, which contain 1, 2, 4, 8, 16, 32 bins from inner to outer, respectively.
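A numpy sketch of this circular binning. Several details here are our assumptions rather than the paper's: equal-width annuli, the square root of the largest moment eigenvalue as the scale unit for the 2.8 factor, and mean foreground probability per bin followed by a per-annulus Fourier power spectrum (the value assignment and Fourier step are described in the next paragraph); all names are illustrative:

```python
import numpy as np

# Assumed layout: 6 concentric annuli with 1, 2, 4, 8, 16, 32 angular bins.
BINS = [1, 2, 4, 8, 16, 32]

def circular_descriptor(p_skin, mask):
    """Translation/scale/rotation-invariant descriptor of a foreground map.

    p_skin: (H, W) per-pixel skin probability; mask: (H, W) boolean foreground.
    Returns a vector of length sum(BINS) = 63.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                       # centroid of the mask
    cov = np.cov(np.stack([ys - cy, xs - cx]))          # second central moments
    radius = 2.8 * np.sqrt(np.linalg.eigvalsh(cov).max())
    H, W = mask.shape
    yy, xx = np.mgrid[0:H, 0:W]
    r = np.hypot(yy - cy, xx - cx)
    theta = np.mod(np.arctan2(yy - cy, xx - cx), 2 * np.pi)
    ring = np.minimum((r / radius * len(BINS)).astype(int), len(BINS) - 1)
    feature = []
    for k, nb in enumerate(BINS):
        sector = np.minimum((theta / (2 * np.pi) * nb).astype(int), nb - 1)
        vec = np.zeros(nb)
        for b in range(nb):
            sel = (ring == k) & (sector == b) & (r <= radius)
            vec[b] = p_skin[sel].mean() if sel.any() else 0.0
        # Fourier power spectrum makes each annulus rotation invariant
        feature.append(np.abs(np.fft.fft(vec)) ** 2)
    return np.concatenate(feature)
```

Because the center and radius come from the mask itself, translating or rescaling the hand leaves the descriptor unchanged; the power spectrum removes the dependence on in-plane rotation.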
Since the position and size of this circular histogram are determined by the color segmentation, it is translation and scale invariant. We then normalize the density values so that Pskin + Pnonskin = 1 for every pixel within the foreground mask (Figure 2) over the hand region. For each bin of the circular histogram, we calculate the mean Pskin (−log(Pskin) or −log(Pskin/Pnonskin) are also possible choices) of the pixels in that bin as its value. The values of all bins along each circle form a vector, and a 1D Fourier transform is applied to this vector. The power spectra of all annuli are ordered into a linear list, producing a feature vector f(t) of 63 dimensions representing the appearance of a hand image.[1] Note that the use of the Fourier power spectrum of the annuli makes the representation rotation invariant.

[1] An optional dimension reduction of the feature vectors can be achieved by eliminating dimensions which have low variance. The feature values of those dimensions do not change much in the data and are therefore non-informative.

2.2 Intermediate Level: Clustering Presentation for Image Frames

After the low level processing, we obtain a scale and rotation invariant feature vector as an appearance representation for each image frame. The temporal evolution of these feature vectors represents actions. However, not all the images are actually unique in appearance. At the intermediate level, we cluster images from a set of feature vectors. This frame-wise clustering is critical for dimension reduction and for the stability of the high level recognition.

Initializing Clusters by Spectral Segmentation

There are two critical problems with clustering algorithms: determining the true number of clusters and initializing each cluster. Here we use a spectral clustering method [20, 22, 26, 18] to solve both problems. We first build the affinity matrix of pairwise distances between feature vectors.[2]
We then perform a singular value decomposition of the affinity matrix with proper normalization [20]. The number of clusters is determined by choosing the n dominant eigenvalues. The corresponding eigenvectors are taken as an orthogonal subspace for all the data. To get n cluster centers, we take the approach of [20] and choose vectors that minimize the absolute value of the cosine between any two cluster centers:

    ID(k) = rand(0, N)                                                      if k = 1
    ID(k) = argmin_{t=1..N} \sum_{c=1}^{k-1} |cos(f^n(ID(c)), f^n(t))|      if n >= k > 1       (1)

where f^n(t) is the feature vector of image frame t after the numerical normalization in [20], and ID(k) is the image frame number chosen as the center of cluster k. N is the number of images used for spectral clustering. For better clustering results, multiple restarts are used for initialization. Unlike [18], we find this simple clustering procedure is sufficient to obtain a good set of clusters from only a few restarts. After initialization, K-means [8] is used to smooth the centers. Let C_1(t) denote the class label for image t, and let g(c) = f(ID(c)), c = 1..n, denote the cluster centers.

Refinement: Temporally Constrained Clustering

Spectral clustering methods are designed for an unordered "bag" of feature vectors, but, in our case, the temporal ordering of the images is an important source of information. In particular, the stability of appearance is easily measured by computing the motion energy[3] between two frames. Let M(t) denote the motion energy between frames t and t-1. Define T_{k,j} = {t | C_1(t) = k, C_1(t-1) = j} and M̄(k, j) = \sum_{t \in T_{k,j}} M(t) / |T_{k,j}|. We now create a regularized clustering cost function as

    C_2(t) = argmax_{c=1..n} [ e^{-||f(t)-g(c)||} / \sum_{c'=1}^{n} e^{-||f(t)-g(c')||}
             + λ · e^{-||g(c)-g(C_2(t-1))|| / M(t)} / \sum_{c'=1}^{n} e^{-||g(c')-g(C_2(t-1))|| / M̄(c', C_2(t-1))} ]       (2)

where λ is the weighting parameter. Here the motion energy M(t) plays the role of the temperature T in simulated annealing.
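A sketch of the single left-to-right scan implied by Eq. (2). Reading M(t) as a divisor in the exponent (a temperature) is our interpretation of the annealing analogy, and M̄ is assumed precomputed from the initial labels; names are illustrative:

```python
import numpy as np

def smooth_labels(f, g, M, Mbar, lam=1.0):
    """Greedy scan: assign each frame the label maximizing the Eq. (2) score
    given the (already fixed) label of the previous frame.

    f: (T, d) frame features; g: (n, d) cluster centers;
    M: (T,) motion energy between frame t and t-1 (M[0] unused);
    Mbar: (n, n) mean motion energy per observed label transition;
    lam: weight of the temporal-smoothness term.
    """
    T, n = f.shape[0], g.shape[0]
    dist = np.linalg.norm(f[:, None] - g[None], axis=2)      # (T, n)
    appear = np.exp(-dist)
    labels = np.empty(T, dtype=int)
    labels[0] = int(np.argmax(appear[0] / appear[0].sum()))  # appearance only
    for t in range(1, T):
        prev = labels[t - 1]
        gd = np.linalg.norm(g - g[prev], axis=1)             # ||g(c)-g(prev)||
        trans = np.exp(-gd / max(M[t], 1e-8))                # temperature M(t)
        denom = np.exp(-gd / np.maximum(Mbar[:, prev], 1e-8)).sum()
        score = appear[t] / appear[t].sum() + lam * trans / denom
        labels[t] = int(np.argmax(score))
    return labels
```

With a tiny motion energy the transition term strongly favors keeping the previous label, while a large motion energy flattens it and lets the appearance term decide, matching the annealing intuition.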
When it is high (strong motion between frames), the motion continuity condition is relaxed and the labels of successive frames can change freely; when it is low, the smoothness term constrains the possible transitions to classes with low M̄(k, j). With this in place, we scan through the sequence searching for the C_2(t) of maximum value, given that C_2(t-1) is already fixed.[4] This temporal smoothing is most relevant for images with motion; static frames are already stably clustered and their cluster labels therefore do not change.

[2] The exponent of either the Euclidean distance or the cosine distance between two feature vectors can be used in this case.
[3] A simple method is to compute the motion energy as the Sum of Squared Differences (SSD) obtained by subtracting the two Pskin density masses of successive images.
[4] Note that M̄(k, j) changes after scanning the labels of the image sequence once, so more iterations could be used to achieve more accurate temporal smoothness of C_2(t), t = 1..N. In our experiments, more iterations do not change the result much.

GMM for Density Modeling and Smoothing

Given the clusters, we build a probability density model for each. A Gaussian mixture model [11, 8] is used to gain a good local relaxation based on the initial clustering result provided by the above method and good generalization for new data. Due to the curse of dimensionality, it is difficult to obtain a good estimate of a high dimensional density function with limited and highly varied training data. We introduce an iterative method incorporating Linear Discriminant Analysis (LDA) [8] and a GMM in an EM-like fashion to perform dimension reduction. The initial clustering labels help to build the scatter matrices for LDA. The optimal projection matrix of LDA is then obtained from the decomposition of the clusters' scatter matrices [8]. The original feature vectors can then be projected into a low dimensional space, which improves the estimation of the multivariate Gaussian density functions.
With the new clustering result from the GMM, LDA's scatter matrices and projection matrix can be re-estimated, and the GMM can be re-fitted in the new LDA subspace. This loop converges within 3-5 iterations in our experiments. Intuitively, LDA projects the data into a low dimensional subspace where the image clusters are well separated, which helps to obtain a good parameter estimate for the GMM with limited data. Given a more accurate GMM, more accurate clustering results are obtained, which in turn yields a better estimate of the LDA. A theoretical proof of convergence is under way. After this process, we have a Gaussian density model for each cluster.

2.3 High Level: Aggregated Histogram Model for Action Recognition

Given a set of n clusters, define w(t) = [p_{c1}(f(t)), p_{c2}(f(t)), ..., p_{cn}(f(t))]^T, where p_x(y) denotes the density value of the vector y with respect to the GMM for cluster x. An action is then a trajectory [w(t1), w(t1+1), ..., w(t2)]^T in R^n. For recognition purposes, we want to calculate discriminative statistics from each trajectory. One natural way is to use its mean over time,

    H_{t1,t2} = \sum_{t=t1}^{t2} w(t) / (t2 - t1 + 1),

which is a temporally weighted histogram. Note that the bins of the histogram H_{t1,t2} correspond precisely to the trained clusters. From the training set, we aggregate the cluster weights of the images within a given hand action to form a histogram model. In this way, a temporal image sequence corresponding to one action is represented by a single vector. Matching two actions then amounts to computing the similarity of two histograms, for which several metrics exist. Here we use the Bhattacharyya similarity metric [1], which has several useful properties: it approximates the χ2 test statistic with a fixed bias; it is self-consistent; it does not suffer from the singularity problem when matching empty histogram bins; and its value is properly bounded within [0, 1].
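Aggregating the per-frame cluster densities into H_{t1,t2} is then a simple mean over time; the normalization to unit sum is our addition so that the Bhattacharyya metric stays within [0, 1]:

```python
import numpy as np

def action_histogram(frame_densities, normalize=True):
    """H_{t1,t2}: mean over time of w(t) = [p_c1(f(t)), ..., p_cn(f(t))].

    frame_densities: (T, n) array; row t holds the density of frame t under
    each of the n per-cluster GMMs.
    """
    H = frame_densities.mean(axis=0)
    if normalize:
        H = H / H.sum()     # unit mass, so the Bhattacharyya metric is bounded
    return H
```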
Assume we have a library of action histograms H*_1, H*_2, ..., H*_M; the class label of a new action Ĥ_{t1,t2} is determined by

    L(Ĥ_{t1,t2}) = argmin_{l=1..M} D(H*_l, Ĥ_{t1,t2}),   D(H*_l, Ĥ_{t1,t2}) = [ 1 − Σ_{c=1}^{n} √( H*_l(c) · Ĥ_{t1,t2}(c) ) ]^{1/2}.   (3)

This method is low cost because only one exemplar per action category is needed. One problem is that all sequence information has been compressed: for example, we cannot distinguish an opening hand gesture from a closing one using a single histogram. This problem can be solved by subdividing the sequence, and hence the histogram model, into m parts:

    H^m_{t1,t2} = [ H_{t1,(t1+t2)/m}, ..., H_{(t1+t2)(m−1)/m, t2} ]^T.

In the extreme case where each subsequence is a single frame, the histogram model becomes exactly the vector form of the representative surface. We intend to classify hand actions that differ only in speed into the same category. To achieve this, the image frames within a hand action can be sub-sampled to build a set of temporal pyramids. In order to segment hand gestures from a long video sequence, we create several sliding windows with different frame sampling rates; the proper time-scaling magnitude is found by searching for the best fit over the temporal pyramids. Taken together, the histogram representation provides an adjustable multi-resolution description of actions. A Hidden Markov Model (HMM) with discrete observations could also be employed to train models for different hand actions, but it would require more template samples per gesture class. The histogram recognition method has the additional advantage that it does not depend on extremely accurate frame-wise clustering: a small proportion of incorrect labels does not affect the matching value much. In comparison, in an HMM trained with few samples, outliers seriously impact the accuracy of learning.
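The matching rule of Equation (3) and the sequence subdivision can be implemented directly. The helper names below are illustrative, histograms are assumed normalised to sum to 1, and equal-length chunks are an assumed boundary handling for the subdivision:

```python
import numpy as np

def bhattacharyya_distance(h1, h2):
    """D(H1, H2) = [1 - sum_c sqrt(H1(c) * H2(c))]^(1/2), as in Equation (3)."""
    bc = np.sum(np.sqrt(np.asarray(h1) * np.asarray(h2)))  # Bhattacharyya coefficient
    return float(np.sqrt(max(0.0, 1.0 - bc)))              # clamp tiny round-off

def classify(h_new, library):
    """Index of the library histogram nearest to h_new (the argmin of Eq. 3)."""
    return int(np.argmin([bhattacharyya_distance(h, h_new) for h in library]))

def subdivided_histogram(w_seq, m):
    """The subdivided model H^m: split the trajectory w(t1)..w(t2) into m
    contiguous parts and average each, then concatenate the part histograms."""
    parts = np.array_split(np.asarray(w_seq, dtype=float), m)
    return np.concatenate([p.mean(axis=0) for p in parts])
```

Identical normalised histograms give D = 0 and disjoint ones give D = 1, matching the stated [0, 1] bound.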
From the viewpoint of considering hand actions as a language process, our model integrates individual observations (labelling each frame with a set of learned clusters) from different time slots; the label transitions between successive frames are not used to describe the temporal sequence. By subdividing the histogram, we extend the representation to contain bigram, trigram, etc. information.

3 Results

We have tested our three-tiered method on the problem of recognizing sequences of hand spelling gestures.

Framewise clustering. We first evaluate the low-level representation of single images and the intermediate clustering algorithms. A training set of 3015 images is used. The frame-to-frame motion energy is used to label images as static or dynamic. For spectral clustering, 3–4 restarts from both the dynamic and the static set are sufficient to cover all the modes in the training set. Then, temporal smoothing is employed, and a Gaussian density is calculated for each cluster in a 10-dimensional subspace of the LDA projection. As a result, 24 clusters are obtained, comprising 16 static and 8 dynamic modes. Figure 3 shows the 5 frames closest to the mean of the probability density of clusters 1, 3, 19, 5, 13, 8, 21, 15, 6, and 12. It can be seen that the clustering results are insensitive to artifacts of skin segmentation. From Figure 3, it is also clear that dynamic modes have significantly larger covariance determinants than static ones. A study of the eigenvalues of the covariance matrices shows that their superellipsoid shapes extend over 2–3 dimensions for static clusters and 6–8 dimensions for dynamic clusters. Taken together, this means that static clusters are quite tight, while dynamic clusters contain much more in-class variation. As Figure 4(c) shows, dynamic clusters gain more weight during the smoothing process incorporating the temporal constraint and the subsequent GMM refinement.

Figure 3: Image clustering results after low and intermediate level processing.
Action recognition and segmentation. For testing images, we first project their feature vectors into the LDA subspace. Then, the GMM is used to compute their weights with respect to each cluster. We manually choose 100 sequences for testing purposes and compute their similarities with respect to a library of 25 gestures. The length of the action sequences was 9–38 frames. The temporal scale of actions in the same category ranged from 1 to 2.4. The recognition rates were 90% and 93% without/with temporal smoothing (Equation 2). Including the top three candidates, the recognition rates increase to 94% and 96%, respectively. We also used the learned model and a sliding window with temporal scaling to segment actions from a 6034-frame video sequence containing dynamic gestures and static hand postures. The similarity matrix among the 123 actions found in the video is shown in Figure 4(d); 106 out of 123 actions (86.2%) are correctly segmented and recognized.

Figure 4: (a) Affinity matrix of 3015 images. (b) Affinity matrices of cluster centroids (from upper left to lower right) after spectral clustering, temporal smoothing, and GMM. (c) Labelling results of 3015 images (red squares are frames whose labels changed with the smoothing process after spectral clustering). (d) The similarity matrix of segmented hand gestures; the letters are labels of gestures.

Integrating motion information. As noted previously, our method cannot distinguish opening from closing hand gestures without temporally subdividing histograms. An alternative solution is to integrate motion information⁵ between frames. Motion feature vectors are also clustered, which results in a joint (appearance and motion) histogram model for actions. We assume independence of the two cues and therefore simply concatenate the two histograms into a single action representation. In our preliminary experiments, both motion integration and histogram subdivision are comparably effective at recognizing gestures with opposite directions.
4 Conclusion and Discussion

We have presented a method for classifying the motion of articulated gestures using LDA/GMM-based clustering methods and a histogram-based model of temporal evolution. Using this model, we have obtained very good recognition results from a relatively coarse representation of appearance and motion in images. There are three main ways to improve the performance of histogram-based classification: adaptive binning, adaptive subregions, and adaptive weighting [21]. In our approach, adaptive binning of the histogram is learned automatically by our clustering algorithms; adaptive subregions are realized by subdividing action sequences to enrich the histogram's descriptive capacity in the temporal domain; and adaptive weighting is obtained from the trained weights of the Gaussian kernels in the GMM. Our future work will focus on building a larger hand action database containing 50–100 categories for more extensive testing, and on extending the representation to include other types of image information (e.g. contour information). Also, by finding an effective foreground segmentation module, we intend to apply the same methods to other applications such as recognizing stylized human body motion.

⁵Motion information can be extracted by first aligning two hand blobs, subtracting the two skin-color density masses, then using the same circular histogram of Section 2.1 to extract a feature vector for the positive and negative density residues respectively. Another simple way is to subtract two frames' feature vectors directly.

References

[1] F. Aherne, N. Thacker, and P. Rockett, The Bhattacharyya Metric as an Absolute Similarity Measure for Frequency Coded Data, Kybernetika, 34:4, pp. 363-368, 1998. [2] V. Athitsos and S. Sclaroff, Estimating 3D Hand Pose From a Cluttered Image, CVPR, 2003. [3] M. Brand, Shadow Puppetry, ICCV, 1999. [4] R. Bowden and M.
Sarhadi, A Non-linear Model of Shape and Motion for Tracking Finger Spelt American Sign Language, Image and Vision Computing, 20:597-607, 2002. [5] T. Cootes, G. Edwards and C. Taylor, Active Appearance Models, IEEE Trans. PAMI, 23:6, pp. 681-685, 2001. [6] D. Cremers, T. Kohlberger and C. Schnörr, Shape Statistics in Kernel Space for Variational Image Segmentation, Pattern Recognition, 36:1929-1943, 2003. [7] T. J. Darrell and A. P. Pentland, Recognition of Space-Time Gestures using a Distributed Representation, MIT Media Laboratory Vision and Modeling TR-197. [8] R. O. Duda, P. E. Hart and D. G. Stork, Pattern Classification, Wiley Interscience, 2002. [9] A. Efros, A. Berg, G. Mori and J. Malik, Recognizing Action at a Distance, ICCV, pp. 726-733, 2003. [10] W. T. Freeman and E. H. Adelson, The Design and Use of Steerable Filters, IEEE Trans. PAMI, 13:9, pp. 891-906, 1991. [11] T. Hastie and R. Tibshirani, Discriminant Analysis by Gaussian Mixtures, Journal of the Royal Statistical Society Series B, 58(1):155-176. [12] W. Hawkins, P. Leichner and N. Yang, The Circular Harmonic Transform for SPECT Reconstruction and Boundary Conditions on the Fourier Transform of the Sinogram, IEEE Trans. on Medical Imaging, 7:2, 1988. [13] A. Heap and D. Hogg, Wormholes in Shape Space: Tracking through Discontinuous Changes in Shape, ICCV, 1998. [14] M. K. Hu, Visual Pattern Recognition by Moment Invariants, IEEE Trans. Inform. Theory, 8:179-187, 1962. [15] M. J. Jones and J. M. Rehg, Statistical Color Models with Application to Skin Detection, Int. J. of Computer Vision, 46:1, pp. 81-96, 2002. [16] B. Julesz, Textons, the Elements of Texture Perception, and their Interactions, Nature, 290:91-97, 1981. [17] T. Leung and J. Malik, Representing and Recognizing the Visual Appearance of Materials using Three-dimensional Textons, Int. Journal of Computer Vision, 41:1, pp. 29-44, 2001. [18] M. Meila and J. Shi, Learning Segmentation with Random Walk, NIPS, 2001. [19] B. Moghaddam and A.
Pentland, Probabilistic Visual Learning for Object Representation, IEEE Trans. PAMI 19:7, 1997. [20] A. Ng, M. Jordan and Y. Weiss, On Spectral Clustering: Analysis and an algorithm, NIPS, 2001. [21] S. Satoh, Generalized Histogram: Empirical Optimization of Low Dimensional Features for Image Matching, ECCV, 2004. [22] J. Shi and J. Malik, Normalized Cuts and Image Segmentation, IEEE Trans. on PAMI, 2000. [23] B. Stenger, A. Thayananthan, P. H. S. Torr, and R. Cipolla, Filtering Using a Tree-Based Estimator, ICCV, II:1063-1070, 2003. [24] C. Tomasi, S. Petrov and A. Sastry, 3D tracking = classification + interpolation, ICCV, 2003. [25] R. Vidal and R. Hartley, Motion Segmentation with Missing Data using PowerFactorization and GPCA, CVPR, 2004. [26] Y. Weiss, Segmentation using eigenvectors: A Unifying view. ICCV, 1999. [27] Lihi Zelnik-Manor and Michal Irani, Event-based video analysis, CVPR, 2001.
Temporal-Difference Networks Richard S. Sutton and Brian Tanner Department of Computing Science University of Alberta Edmonton, Alberta, Canada T6G 2E8 {sutton,btanner}@cs.ualberta.ca Abstract We introduce a generalization of temporal-difference (TD) learning to networks of interrelated predictions. Rather than relating a single prediction to itself at a later time, as in conventional TD methods, a TD network relates each prediction in a set of predictions to other predictions in the set at a later time. TD networks can represent and apply TD learning to a much wider class of predictions than has previously been possible. Using a random-walk example, we show that these networks can be used to learn to predict by a fixed interval, which is not possible with conventional TD methods. Secondly, we show that if the interpredictive relationships are made conditional on action, then the usual learning-efficiency advantage of TD methods over Monte Carlo (supervised learning) methods becomes particularly pronounced. Thirdly, we demonstrate that TD networks can learn predictive state representations that enable exact solution of a non-Markov problem. A very broad range of inter-predictive temporal relationships can be expressed in these networks. Overall we argue that TD networks represent a substantial extension of the abilities of TD methods and bring us closer to the goal of representing world knowledge in entirely predictive, grounded terms. Temporal-difference (TD) learning is widely used in reinforcement learning methods to learn moment-to-moment predictions of total future reward (value functions). In this setting, TD learning is often simpler and more data-efficient than other methods. But the idea of TD learning can be used more generally than it is in reinforcement learning. TD learning is a general method for learning predictions whenever multiple predictions are made of the same event over time, value functions being just one example. 
The most pertinent of the more general uses of TD learning have been in learning models of an environment or task domain (Dayan, 1993; Kaelbling, 1993; Sutton, 1995; Sutton, Precup & Singh, 1999). In these works, TD learning is used to predict future values of many observations or state variables of a dynamical system. The essential idea of TD learning can be described as “learning a guess from a guess”. In all previous work, the two guesses involved were predictions of the same quantity at two points in time, for example, of the discounted future reward at successive time steps. In this paper we explore a few of the possibilities that open up when the second guess is allowed to be different from the first. To be more precise, we must make a distinction between the extensive definition of a prediction, expressing its desired relationship to measurable data, and its TD definition, expressing its desired relationship to other predictions. In reinforcement learning, for example, state values are extensively defined as an expectation of the discounted sum of future rewards, while they are TD defined as the solution to the Bellman equation (a relationship to the expectation of the value of successor states, plus the immediate reward). It’s the same prediction, just defined or expressed in different ways. In past work with TD methods, the TD relationship was always between predictions with identical or very similar extensive semantics. In this paper we retain the TD idea of learning predictions based on others, but allow the predictions to have different extensive semantics. 1 The Learning-to-predict Problem The problem we consider in this paper is a general one of learning to predict aspects of the interaction between a decision making agent and its environment. At each of a series of discrete time steps t, the environment generates an observation ot ∈O, and the agent takes an action at ∈A. 
Whereas A is an arbitrary discrete set, we assume without loss of generality that o_t can be represented as a vector of bits. The action and observation events occur in sequence, o_1, a_1, o_2, a_2, o_3, ..., with each event of course dependent only on those preceding it. This sequence will be called experience. We are interested in predicting not just each next observation but more general, action-conditional functions of future experience, as discussed in the next section. In this paper we use a random-walk problem with seven states, with left and right actions available in every state:

[Diagram: states 1–7 in a row; the observation bit is 1 at the two end states and 0 at the five interior states.]

The observation upon arriving in a state consists of a special bit that is 1 only at the two ends of the walk and, in the first two of our three experiments, seven additional bits explicitly indicating the state number (only one of them is 1). This is a continuing task: reaching an end state does not end or interrupt experience. Although the sequence depends deterministically on action, we assume that the actions are selected randomly with equal probability so that the overall system can be viewed as a Markov chain. The TD networks introduced in this paper can represent a wide variety of predictions, far more than can be represented by a conventional TD predictor. In this paper we take just a few steps toward more general predictions. In particular, we consider variations of the problem of prediction by a fixed interval. This is one of the simplest cases that cannot otherwise be handled by TD methods. For the seven-state random walk, we will predict the special observation bit some number of discrete steps in advance, first unconditionally and then conditioned on action sequences.

2 TD Networks

A TD network is a network of nodes, each representing a single scalar prediction. The nodes are interconnected by links representing the TD relationships among the predictions and to the observations and actions.
These links determine the extensive semantics of each prediction—its desired or target relationship to the data. They represent what we seek to predict about the data as opposed to how we try to predict it. We think of these links as determining a set of questions being asked about the data, and accordingly we call them the question network. A separate set of interconnections determines the actual computational process—the updating of the predictions at each node from their previous values and the current action and observation. We think of this process as providing the answers to the questions, and accordingly we call them the answer network. The question network provides targets for a learning process shaping the answer network and does not otherwise affect the behavior of the TD network. It is natural to consider changing the question network, but in this paper we take it as fixed and given. Figure 1a shows a suggestive example of a question network. The three squares across the top represent three observation bits. The node labeled 1 is directly connected to the first observation bit and represents a prediction that that bit will be 1 on the next time step. The node labeled 2 is similarly a prediction of the expected value of node 1 on the next step. Thus the extensive definition of Node 2’s prediction is the probability that the first observation bit will be 1 two time steps from now. Node 3 similarly predicts the first observation bit three time steps in the future. Node 4 is a conventional TD prediction, in this case of the future discounted sum of the second observation bit, with discount parameter γ. Its target is the familiar TD target, the data bit plus the node’s own prediction on the next time step (with weightings 1 −γ and γ respectively). Nodes 5 and 6 predict the probability of the third observation bit being 1 if particular actions a or b are taken respectively. 
Node 7 is a prediction of the average of the first observation bit and Node 4's prediction, both on the next step. This is the first case where it is not easy to see or state the extensive semantics of the prediction in terms of the data. Node 8 predicts another average, this time of Nodes 4 and 5, and the question it asks is even harder to express extensively. One could continue in this way, adding more and more nodes whose extensive definitions are difficult to express but which would nevertheless be completely defined as long as these local TD relationships are clear. The thinner links shown entering some nodes are meant to be a suggestion of the entirely separate answer network determining the actual computation (as opposed to the goals) of the network. In this paper we consider only simple question networks such as the left column of Figure 1a and the action-conditional tree form shown in Figure 1b.

Figure 1: The question networks of two TD networks. (a) a question network discussed in the text, and (b) a depth-2 fully-action-conditional question network used in Experiments 2 and 3. Observation bits are represented as squares across the top while actual nodes of the TD network, each corresponding to a separate prediction, are below. The thick lines represent the question network and the thin lines in (a) suggest the answer network (the bulk of which is not shown). Note that all of these nodes, arrows, and numbers are completely different and separate from those representing the random-walk problem on the preceding page.

More formally and generally, let y^i_t ∈ [0, 1], i = 1, ..., n, denote the prediction of the ith node at time step t. The column vector of predictions y_t = (y^1_t, ..., y^n_t)^T is updated according to a vector-valued function u with modifiable parameter W:

    y_t = u(y_{t−1}, a_{t−1}, o_t, W_t) ∈ ℜ^n.    (1)

The update function u corresponds to the answer network, with W being the weights on its links.
Before detailing that process, we turn to the question network, the defining TD relationships between nodes. The TD target z^i_t for y^i_t is an arbitrary function z^i of the successive predictions and observations. In vector form we have¹

    z_t = z(o_{t+1}, ỹ_{t+1}) ∈ ℜ^n,    (2)

where ỹ_{t+1} is just like y_{t+1}, as in (1), except calculated with the old weights before they are updated on the basis of z_t:

    ỹ_t = u(y_{t−1}, a_{t−1}, o_t, W_{t−1}) ∈ ℜ^n.    (3)

(This temporal subtlety also arises in conventional TD learning.) For example, for the nodes in Figure 1a we have z^1_t = o^1_{t+1}, z^2_t = y^1_{t+1}, z^3_t = y^2_{t+1}, z^4_t = (1−γ)o^2_{t+1} + γy^4_{t+1}, z^5_t = z^6_t = o^3_{t+1}, z^7_t = ½o^1_{t+1} + ½y^4_{t+1}, and z^8_t = ½y^4_{t+1} + ½y^5_{t+1}. The target functions z^i are only part of specifying the question network. The other part has to do with making them potentially conditional on action and observation. For example, Node 5 in Figure 1a predicts what the third observation bit will be if action a is taken. To arrange for such semantics we introduce a new vector c_t of conditions, c^i_t, indicating the extent to which y^i_t is held responsible for matching z^i_t, thus making the ith prediction conditional on c^i_t. Each c^i_t is determined as an arbitrary function c^i of a_t and y_t. In vector form we have:

    c_t = c(a_t, y_t) ∈ [0, 1]^n.    (4)

For example, for Node 5 in Figure 1a, c^5_t = 1 if a_t = a, otherwise c^5_t = 0. Equations (2–4) correspond to the question network. Let us now turn to defining u, the update function for y_t mentioned earlier and which corresponds to the answer network. In general u is an arbitrary function approximator, but for concreteness we define it to be of a linear form

    y_t = σ(W_t x_t)    (5)

where x_t ∈ ℜ^m is a feature vector, W_t is an n × m matrix, and σ is the n-vector form of the identity function (Experiments 1 and 2) or the S-shaped logistic function σ(s) = 1/(1+e^{−s}) (Experiment 3).
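The example targets for the eight nodes of Figure 1a can be written out directly. The function name and the default γ below are illustrative; the target expressions themselves follow the list above:

```python
import numpy as np

def z_figure_1a(o_tp1, y_tp1, gamma=0.9):
    """Question-network targets (Eq. 2) for the Figure 1a nodes, given the
    next observation bits o_{t+1} and the old-weight predictions y~_{t+1}."""
    o = np.asarray(o_tp1, dtype=float)
    y = np.asarray(y_tp1, dtype=float)
    return np.array([
        o[0],                                  # z1: next value of first observation bit
        y[0],                                  # z2: Node 1's prediction at t+1
        y[1],                                  # z3: Node 2's prediction at t+1
        (1 - gamma) * o[1] + gamma * y[3],     # z4: conventional TD target
        o[2],                                  # z5: third bit (conditioned on action a via c)
        o[2],                                  # z6: third bit (conditioned on action b via c)
        0.5 * o[0] + 0.5 * y[3],               # z7: average of bit 1 and Node 4
        0.5 * y[3] + 0.5 * y[4],               # z8: average of Nodes 4 and 5
    ])
```

Note that the action-conditioning of Nodes 5 and 6 lives in the condition vector c_t, not in the targets, which is why z^5 and z^6 are identical here.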
The feature vector is an arbitrary function of the preceding action, observation, and node values:

    x_t = x(a_{t−1}, o_t, y_{t−1}) ∈ ℜ^m.    (6)

For example, x_t might have one component for each observation bit, one for each possible action (one of which is 1, the rest 0), and n more for the previous node values y_{t−1}. The learning algorithm for each component w^{ij}_t of W_t is

    w^{ij}_{t+1} − w^{ij}_t = α (z^i_t − y^i_t) c^i_t ∂y^i_t/∂w^{ij}_t,    (7)

where α is a step-size parameter. The timing details may be clarified by writing the sequence of quantities in the order in which they are computed:

    y_t, a_t, c_t, o_{t+1}, x_{t+1}, ỹ_{t+1}, z_t, W_{t+1}, y_{t+1}.    (8)

Finally, the target in the extensive sense for y_t is

    y*_t = E_{t,π}[ (1 − c_t) · y*_t + c_t · z(o_{t+1}, y*_{t+1}) ],    (9)

where · represents component-wise multiplication and π is the policy being followed, which is assumed fixed.

¹In general, z is a function of all the future predictions and observations, but in this paper we treat only the one-step case.

3 Experiment 1: n-step Unconditional Prediction

In this experiment we sought to predict the observation bit precisely n steps in advance, for n = 1, 2, 5, 10, and 25. In order to predict n steps in advance, of course, we also have to predict n−1 steps in advance, n−2 steps in advance, and so on, all the way down to predicting one step ahead. This is specified by a TD network consisting of a single chain of predictions like the left column of Figure 1a, but of length 25 rather than 3. Random-walk sequences were constructed by starting at the center state and then taking random actions for 50, 100, 150, and 200 steps (100 sequences each). We applied a TD network and a corresponding Monte Carlo method to this data. The Monte Carlo method learned the same predictions, but learned them by comparing them to the actual outcomes in the sequence (instead of z^i_t in (7)). This involved significant additional complexity to store the predictions until their corresponding targets were available.
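The TD-network side of this comparison follows the update of Equations (5)–(8), which can be sketched as a single learning step. This is an illustrative sketch, not the authors' code; the question network z_fn, the condition vector, and the feature vectors are supplied by the caller:

```python
import numpy as np

def td_network_step(W, x_t, x_tp1, o_tp1, z_fn, c_t, alpha, logistic=True):
    """One TD-network learning step: y_t = sigma(W x_t); the targets z_t are
    computed from the new observation o_{t+1} and the old-weight prediction
    y~_{t+1} = sigma(W x_{t+1}); each weight then moves by
    alpha * (z - y) * c * dy/dw (Eq. 7)."""
    sigma = (lambda s: 1.0 / (1.0 + np.exp(-s))) if logistic else (lambda s: s)
    y_t = sigma(W @ x_t)
    y_tilde = sigma(W @ x_tp1)            # computed with the old weights W_t
    z_t = z_fn(o_tp1, y_tilde)            # question-network targets (Eq. 2)
    grad = y_t * (1.0 - y_t) if logistic else np.ones_like(y_t)
    delta = alpha * (z_t - y_t) * c_t * grad   # per-node error signal
    return W + np.outer(delta, x_t), y_t       # since dy^i/dw_ij = grad_i * x_j
```

For the logistic case the derivative ∂y^i/∂w^{ij} = y^i(1−y^i) x^j; for the identity case it is simply x^j, which recovers the linear update used in Experiments 1 and 2.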
Both algorithms used feature vectors of 7 binary components, one for each of the seven states, all of which were zero except for the one corresponding to the current state. Both algorithms formed their predictions linearly (σ(·) was the identity) and unconditionally (ci t = 1 ∀i, t). In an initial set of experiments, both algorithms were applied online with a variety of values for their step-size parameter α. Under these conditions we did not find that either algorithm was clearly better in terms of the mean square error in their predictions over the data sets. We found a clearer result when both algorithms were trained using batch updating, in which weight changes are collected “on the side” over an experience sequence and then made all at once at the end, and the whole process is repeated until convergence. Under batch updating, convergence is to the same predictions regardless of initial conditions or α value (as long as α is sufficiently small), which greatly simplifies comparison of algorithms. The predictions learned under batch updating are also the same as would be computed by least squares algorithms such as LSTD(λ) (Bradtke & Barto, 1996; Boyan, 2000; Lagoudakis & Parr, 2003). The errors in the final predictions are shown in Table 1. For 1-step predictions, the Monte-Carlo and TD methods performed identically of course, but for longer predictions a significant difference was observed. The RMSE of the Monte Carlo method increased with prediction length whereas for the TD network it decreased. The largest standard error in any of the numbers shown in the table is 0.008, so almost all of the differences are statistically significant. TD methods appear to have a significant data-efficiency advantage over non-TD methods in this prediction-by-n context (and this task) just as they do in conventional multi-step prediction (Sutton, 1988). 
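Batch updating as just described can be sketched generically: the increments are collected "on the side" over the whole sequence and applied only at the end of each sweep. The increment function is supplied by the caller (here it is not tied to a particular question network), and the tolerance and sweep limit are illustrative:

```python
import numpy as np

def train_batch(experience, W0, increment, tol=1e-8, max_sweeps=1000):
    """Repeat full sweeps over the experience sequence, accumulating the
    per-transition weight changes and applying them all at once at the end
    of each sweep, until the weights stop changing."""
    W = np.asarray(W0, dtype=float).copy()
    for _ in range(max_sweeps):
        dW = np.zeros_like(W)
        for step in experience:
            dW += increment(W, step)      # collected, not yet applied
        if np.abs(dW).max() < tol:
            break                         # converged
        W = W + dW
    return W
```

As noted above, for a sufficiently small step size the batch fixed point is independent of the initial weights, which is what makes batch results convenient for comparing algorithms.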
Time    1-step   2-step          5-step          10-step         25-step
Steps   MC/TD    MC      TD      MC      TD      MC      TD      MC      TD
50      0.205    0.219   0.172   0.234   0.159   0.249   0.139   0.297   0.129
100     0.124    0.133   0.100   0.160   0.098   0.168   0.079   0.187   0.068
150     0.089    0.103   0.073   0.121   0.076   0.130   0.063   0.153   0.054
200     0.076    0.084   0.060   0.109   0.065   0.112   0.056   0.118   0.049

Table 1: RMSE of Monte-Carlo and TD-network predictions of various lengths and for increasing amounts of training data on the random-walk example with batch updating.

4 Experiment 2: Action-conditional Prediction

The advantage of TD methods should be greater for predictions that apply only when the experience sequence unfolds in a particular way, such as when a particular sequence of actions is made. In a second experiment we sought to learn n-step-ahead predictions conditional on action selections. The question network for learning all 2-step-ahead predictions is shown in Figure 1b. The upper two nodes predict the observation bit conditional on taking a left action (L) or a right action (R). The lower four nodes correspond to the two-step predictions; e.g., the second lower node is the prediction of what the observation bit will be if an L action is taken followed by an R action. These predictions are the same as the e-tests used in some of the work on predictive state representations (Littman, Sutton & Singh, 2002; Rudary & Singh, 2003). In this experiment we used a question network like that in Figure 1b except of depth four, consisting of 30 (2+4+8+16) nodes. The conditions for each node were set to 0 or 1 depending on whether the action taken on the step matched that indicated in the figure. The feature vectors were as in the previous experiment. Now that we are conditioning on action, the problem is deterministic and α can be set uniformly to 1. A Monte Carlo prediction can be learned only when its corresponding action sequence occurs in its entirety, but then it is complete and accurate in one step.
The TD network, on the other hand, can learn from incomplete sequences but must propagate them back one level at a time. First the one-step predictions must be learned, then the two-step predictions from them, and so on. The results for online and batch training are shown in Tables 2 and 3. As anticipated, the TD network learns much faster than Monte Carlo with both online and batch updating. Because the TD network learns its n-step predictions based on its (n−1)-step predictions, it has a clear advantage for this task. Once the TD network has seen each action in each state, it can quickly learn any prediction 2, 10, or 1000 steps in the future. Monte Carlo, on the other hand, must sample actual sequences, so each exact action sequence must be observed.

Time    1-Step   2-Step          3-Step          4-Step
Step    MC/TD    MC      TD      MC      TD      MC      TD
100     0.153    0.222   0.182   0.253   0.195   0.285   0.185
200     0.019    0.092   0.044   0.142   0.054   0.196   0.062
300     0.000    0.040   0.000   0.089   0.013   0.139   0.017
400     0.000    0.019   0.000   0.055   0.000   0.093   0.000
500     0.000    0.019   0.000   0.038   0.000   0.062   0.000

Table 2: RMSE of the action-conditional predictions of various lengths for Monte-Carlo and TD-network methods on the random-walk problem with online updating.

Time Steps   MC       TD
50           53.48%   17.21%
100          30.81%   4.50%
150          19.26%   1.57%
200          11.69%   0.14%

Table 3: Average proportion of incorrect action-conditional predictions for batch-updating versions of Monte-Carlo and TD-network methods, for various amounts of data, on the random-walk task. All differences are statistically significant.

5 Experiment 3: Learning a Predictive State Representation

Experiments 1 and 2 showed advantages for TD learning methods in Markov problems. The feature vectors in both experiments provided complete information about the nominal state of the random walk. In Experiment 3, on the other hand, we applied TD networks to a non-Markov version of the random-walk example, in particular one in which only the special observation bit was visible and not the state number.
In this case it is not possible to make accurate predictions based solely on the current action and observation; the previous time step's predictions must be used as well. As in the previous experiment, we sought to learn n-step predictions using action-conditional question networks of depths 2, 3, and 4. The feature vector x_t consisted of three parts: a constant 1, four binary features to represent the pair of action a_{t−1} and observation bit o_t, and n more features corresponding to the components of y_{t−1}. The feature vectors were thus of length m = 11, 19, and 35 for the three depths. In this experiment, σ(·) was the S-shaped logistic function. The initial weights W_0 and predictions y_0 were both 0. Fifty random-walk sequences were constructed, each of 250,000 time steps, and presented to TD networks of the three depths, with a range of step-size parameters α. We measured the RMSE of all predictions made by the networks (computed from knowledge of the task) and also the "empirical RMSE," the error in the one-step prediction for the action actually taken on each step. We found that in all cases the errors approached zero over time, showing that the problem was completely solved. Figure 2 shows some representative learning curves for the depth-2 and depth-4 TD networks.

Figure 2: Prediction performance on the non-Markov random walk with depth-4 TD networks (and one depth-2 network) with various step-size parameters, averaged over 50 runs and 1000 time-step bins. [Plot: empirical RMS error (0 to 0.3) versus time steps (0 to 250K) for α = .1, .25, .5, .75, plus a depth-2 network at α = .5.]

The "bump" most clearly seen with small step sizes is reliably present and may be due to predictions of different lengths being learned at different times. In ongoing experiments on other non-Markov problems we have found that TD networks do not always find such complete solutions.
Other problems seem to require more than one step of history information (the one-step-preceding action and observation), though less than would be required using history information alone. Our results as a whole suggest that TD networks may provide an effective alternative learning algorithm for predictive state representations (Littman et al., 2002). Previous algorithms have been found to be effective on some tasks but not on others (e.g., Singh et al., 2003; Rudary & Singh, 2004; James & Singh, 2004). More work is needed to assess the range of effectiveness and learning rate of TD methods vis-a-vis previous methods, and to explore their combination with history information.

6 Conclusion

TD networks suggest a large set of possibilities for learning to predict, and in this paper we have begun exploring the first few. Our results show that even in a fully observable setting there may be significant advantages to TD methods when learning TD-defined predictions. Our action-conditional results show that TD methods can learn dramatically faster than other methods. TD networks allow the expression of many new kinds of predictions whose extensive semantics is not immediately clear, but which are ultimately fully grounded in data. It may be fruitful to further explore the expressive potential of TD-defined predictions. Although most of our experiments have concerned the representational expressiveness and efficiency of TD-defined predictions, it is also natural to consider using them as state, as in predictive state representations. Our experiments suggest that this is a promising direction and that TD learning algorithms may have advantages over previous learning methods. Finally, we note that adding nodes to a question network produces new predictions and thus may be a way to address the discovery problem for predictive representations.
Acknowledgments The authors gratefully acknowledge the ideas and encouragement they have received in this work from Satinder Singh, Doina Precup, Michael Littman, Mark Ring, Vadim Bulitko, Eddie Rafols, Anna Koop, Tao Wang, and all the members of the rlai.net group. References Boyan, J. A. (2000). Technical update: Least-squares temporal difference learning. Machine Learning 49:233–246. Bradtke, S. J. and Barto, A. G. (1996). Linear least-squares algorithms for temporal difference learning. Machine Learning 22(1/2/3):33–57. Dayan, P. (1993). Improving generalization for temporal difference learning: The successor representation. Neural Computation 5(4):613–624. James, M. and Singh, S. (2004). Learning and discovery of predictive state representations in dynamical systems with reset. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 417–424. Kaelbling, L. P. (1993). Hierarchical learning in stochastic domains: Preliminary results. In Proceedings of the Tenth International Conference on Machine Learning, pp. 167–173. Lagoudakis, M. G. and Parr, R. (2003). Least-squares policy iteration. Journal of Machine Learning Research 4(Dec):1107–1149. Littman, M. L., Sutton, R. S. and Singh, S. (2002). Predictive representations of state. In Advances in Neural Information Processing Systems 14:1555–1561. Rudary, M. R. and Singh, S. (2004). A nonlinear predictive state representation. In Advances in Neural Information Processing Systems 16:855–862. Singh, S., Littman, M. L., Jong, N. K., Pardoe, D. and Stone, P. (2003). Learning predictive state representations. In Proceedings of the Twentieth International Conference on Machine Learning, pp. 712–719. Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning 3:9–44. Sutton, R. S. (1995). TD models: Modeling the world at a mixture of time scales. In A. Prieditis and S. Russell (eds.), Proceedings of the Twelfth International Conference on Machine Learning, pp.
531–539. Morgan Kaufmann, San Francisco. Sutton, R. S., Precup, D. and Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112:181–211.
Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits Wolfgang Maass, Robert Legenstein, Nils Bertschinger Institute for Theoretical Computer Science Technische Universität Graz A-8010 Graz, Austria {maass, legi, nilsb}@igi.tugraz.at Abstract What makes a neural microcircuit computationally powerful? Or more precisely, which measurable quantities could explain why one microcircuit C is better suited for a particular family of computational tasks than another microcircuit C′? We propose in this article quantitative measures for evaluating the computational power and generalization capability of a neural microcircuit, and apply them to generic neural microcircuit models drawn from different distributions. We validate the proposed measures by comparing their prediction with direct evaluations of the computational performance of these microcircuit models. This procedure is applied first to microcircuit models that differ with regard to the spatial range of synaptic connections and with regard to the scale of synaptic efficacies in the circuit, and then to microcircuit models that differ with regard to the level of background input currents and the level of noise on the membrane potential of neurons. In this case the proposed method allows us to quantify differences in the computational power and generalization capability of circuits in different dynamic regimes (UP- and DOWN-states) that have been demonstrated through intracellular recordings in vivo. 1 Introduction Rather than constructing particular microcircuit models that carry out particular computations, we pursue in this article a different strategy, which is based on the assumption that the computational function of cortical microcircuits is not fully genetically encoded, but rather emerges through various forms of plasticity (“learning”) in response to the actual distribution of signals that the neural microcircuit receives from its environment.
From this perspective the question about the computational function of cortical microcircuits C turns into the questions: a) What functions (i.e. maps from circuit inputs to circuit outputs) can the circuit C learn to compute? b) How well can the circuit C generalize a specific learned computational function to new inputs? We propose in this article a conceptual framework and quantitative measures for the investigation of these two questions. In order to make this approach feasible, in spite of numerous unknowns regarding synaptic plasticity and the distribution of electrical and biochemical signals impinging on a cortical microcircuit, we make in the present first step of this approach the following simplifying assumptions: 1. Particular neurons (“readout neurons”) learn via synaptic plasticity to extract specific information encoded in the spiking activity of neurons in the circuit. 2. We assume that the cortical microcircuit itself is highly recurrent, but that the impact of feedback that a readout neuron might send back into this circuit can be neglected (footnote 1). 3. We assume that synaptic plasticity of readout neurons enables them to learn arbitrary linear transformations. More precisely, we assume that the input to such a readout neuron can be approximated by a term ∑_{i=1}^{n−1} wixi(t), where n − 1 is the number of presynaptic neurons, xi(t) results from the output spike train of the ith presynaptic neuron by filtering it according to the low-pass filtering property of the membrane of the readout neuron (footnote 2), and wi is the efficacy of the synaptic connection. Thus wixi(t) models the time course of the contribution of previous spikes from the ith presynaptic neuron to the membrane potential at the soma of this readout neuron. We will refer to the vector x(t) as the circuit state at time t.
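The low-pass filtering that turns a presynaptic spike train into a state component xi(t) can be sketched as follows. The exponential kernel and the time constant are illustrative assumptions (a simple exponential decay is one common model of the membrane's filtering); the paper itself only states that the membrane acts as a low-pass filter.

```python
import numpy as np

def filtered_trace(spike_times, t_grid, tau=0.03):
    """Exponentially filtered spike train:
    x(t) = sum over spikes t_k <= t of exp(-(t - t_k) / tau).
    This models the decaying contribution of previous spikes to the
    readout's membrane potential (kernel choice is an assumption)."""
    x = np.zeros_like(t_grid)
    for tk in spike_times:
        mask = t_grid >= tk
        x[mask] += np.exp(-(t_grid[mask] - tk) / tau)
    return x

t_grid = np.linspace(0.0, 0.5, 501)            # 0-500 ms, 1 ms resolution
x = filtered_trace([0.1, 0.2, 0.35], t_grid)   # three spikes (times in s)
```

A readout's input is then the weighted sum of such traces over all presynaptic neurons, i.e. the dot product of the weight vector with the circuit state x(t).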
Under these unpleasant but apparently unavoidable simplifying assumptions we propose new quantitative criteria based on rigorous mathematical principles for evaluating a neural microcircuit C with regard to questions a) and b). We will compare in sections 4 and 5 the predictions of these quantitative measures with the actual computational performance achieved by 132 different types of neural microcircuit models, for a fairly large number of different computational tasks. All microcircuit models that we consider are based on biological data for generic cortical microcircuits (as described in section 3), but have different settings of their parameters. 2 Measures for the kernel-quality and generalization capability of neural microcircuits One interesting measure for probing the computational power of a neural circuit is the pairwise separation property considered in [Maass et al., 2002]. This measure tells us to what extent the current circuit state x(t) reflects details of the input stream that occurred some time back in the past (see Fig. 1). Both circuit 2 and circuit 3 could be described as being chaotic since state differences resulting from earlier input differences persist. The “edge-of-chaos” [Langton, 1990] lies somewhere between points 1 and 2 according to Fig. 1c). But the best computational performance occurs between points 2 and 3 (see Fig. 2b)). Hence the “edge-of-chaos” is not a reliable predictor of computational power for circuits of spiking neurons. In addition, most real-world computational tasks require that the circuit gives a desired output not just for 2, but for a fairly large number m of significantly different inputs. One could of course test whether a circuit C can separate each of the m(m−1)/2 pairs of Footnote 1: This assumption is best justified if such a readout neuron is located for example in another brain area that receives massive input from many neurons in this microcircuit and only has diffuse backwards projection.
But it is certainly problematic and should be addressed in future elaborations of the present approach. Footnote 2: One can be even more realistic and filter it also by a model for the short-term dynamics of the synapse into the readout neuron, but this turns out to make no difference for the analysis proposed in this article. Figure 1: Pointwise separation property for different types of neural microcircuit models as specified in section 3. Each circuit C was tested for two arrays u and v of 4 input spike trains at 20 Hz over 3 s that differed only during the first second. a) Euclidean differences between resulting circuit states xu(t) and xv(t) for t = 3 s, averaged over 20 circuits C and 20 pairs u, v for each indicated value of λ and Wscale (see section 3). b) Temporal evolution of ∥xu(t) − xv(t)∥ for 3 different circuits with values of λ, Wscale according to the 3 points marked in panel a) (λ = 1.4, 2, 3 and Wscale = 0.3, 0.7, 2 for circuits 1, 2, and 3 respectively). c) Pointwise separation along a straight line between point 1 and point 2 of panel a). such inputs. But even if the circuit can do this, we do not know whether a neural readout from such a circuit would be able to produce given target outputs for these m inputs. Therefore we propose here the linear separation property as a more suitable quantitative measure for evaluating the computational power of a neural microcircuit (or more precisely: the kernel-quality of a circuit; see below). To evaluate the linear separation property of a circuit C for m different inputs u1, . . . , um (which are in this article always functions of time, i.e.
input streams such as for example multiple spike trains) we compute the rank of the n × m matrix M whose columns are the circuit states xui(t0) resulting at some fixed time t0 for the preceding input stream ui. If this matrix has rank m, then it is guaranteed that any given assignment of target outputs yi ∈ R at time t0 for the inputs ui can be implemented by this circuit C (in combination with a linear readout). In particular, each of the 2^m possible binary classifications of these m inputs can then be carried out by a linear readout from this fixed circuit C. Obviously such insight is much more informative than a demonstration that some particular classification task can be carried out by such a circuit C. If the rank of this matrix M has a value r < m, then this value r can still be viewed as a measure for the computational power of this circuit C, since r is the number of “degrees of freedom” that a linear readout has in assigning target outputs yi to these inputs ui (in a way which can be made mathematically precise with concepts of linear algebra). Note that this rank-measure for the linear separation property of a circuit C may be viewed as an empirical measure for its kernel-quality, i.e. for the complexity and diversity of nonlinear operations carried out by C on its input stream in order to boost the classification power of a subsequent linear decision-hyperplane (see [Vapnik, 1998]). Obviously the preceding measure addresses only one component of the computational performance of a neural circuit C. Another component is its capability to generalize a learnt computational function to new inputs. Mathematical criteria for generalization capability are derived in [Vapnik, 1998] (see ch. 4 of [Cherkassky and Mulier, 1998] for a compact account of results relevant for our arguments).
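The linear separation measure reduces to a numerical rank computation on the matrix of circuit states. A minimal sketch follows; the random matrices stand in for the state vectors xui(t0) of a simulated circuit, which are not reproduced here.

```python
import numpy as np

def linear_separation_rank(states):
    """states: (n, m) matrix M whose columns are circuit states x_ui(t0)
    for m input streams. Rank m means any assignment of target outputs
    (hence all 2^m binary classifications) is realizable by a linear
    readout; rank r < m leaves r degrees of freedom."""
    return np.linalg.matrix_rank(states)

rng = np.random.default_rng(1)
n, m = 50, 20

# generic, well-separated states: full rank m
M = rng.standard_normal((n, m))
rank = linear_separation_rank(M)

# states confined to a 5-dimensional subspace (a "poor kernel"):
# rank 5 < m, i.e. fewer degrees of freedom for the readout
B = rng.standard_normal((n, 5))
M_low = B @ rng.standard_normal((5, m))
rank_low = linear_separation_rank(M_low)
```

For a real circuit one would collect the filtered state vectors at time t0 for each input stream as the columns of the matrix and apply the same rank computation.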
According to this mathematical theory one can quantify the generalization capability of any learning device in terms of the VC-dimension of the class H of hypotheses that are potentially used by that learning device. More precisely: if VC-dimension(H) is substantially smaller than the size of the training set Strain, one can prove that this learning device generalizes well, in the sense that the hypothesis (or input-output map) produced by this learning device is likely to have for new examples an error rate which is not much higher than its error rate on Strain, provided that the new examples are drawn from the same distribution as the training examples (see equ. 4.22 in [Cherkassky and Mulier, 1998]). We apply this mathematical framework to the class HC of all maps from a set Suniv of inputs u into {0, 1} which can be implemented by a circuit C. More precisely: HC consists of all maps from Suniv into {0, 1} that a linear readout from circuit C with fixed internal parameters (weights etc.) but arbitrary weights w ∈ Rn of the readout (that classifies the circuit input u as belonging to class 1 if w · xu(t0) ≥ 0, and to class 0 if w · xu(t0) < 0) could possibly implement. Footnote 3: The VC-dimension (of a class H of maps H from some universe Suniv of inputs into {0, 1}) is defined as the size of the largest subset S ⊆ Suniv which can be shattered by H. One says that S ⊆ Suniv is shattered by H if for every map f : S → {0, 1} there exists a map H in H such that H(u) = f(u) for all u ∈ S; this means that every possible binary classification of the inputs u ∈ S can be carried out by some hypothesis H in H.
Whereas it is very difficult to achieve tight theoretical bounds for the VC-dimension of even much simpler neural circuits, see [Bartlett and Maass, 2003], one can efficiently estimate the VC-dimension of the class HC that arises in our context for some finite ensemble Suniv of inputs (that contains all examples used for training or testing) by using the following mathematical result (which can be proved with the help of Radon’s Theorem): Theorem 2.1 Let r be the rank of the n × s matrix consisting of the s vectors xu(t0) for all inputs u in Suniv (we assume that Suniv is finite and contains s inputs). Then r ≤VC-dimension(HC) ≤r + 1. We propose to use the rank r defined in Theorem 2.1 as an estimate of VC-dimension(HC), and hence as a measure that informs us about the generalization capability of a neural microcircuit C. It is assumed here that the set Suniv contains many noisy variations of the same input signal, since otherwise learning with a randomly drawn training set Strain ⊆Suniv has no chance to generalize to new noisy variations. Note that each family of computational tasks induces a particular notion of what aspects of the input are viewed as noise, and what input features are viewed as signals that carry information which is relevant for the target output for at least one of these computational tasks. For example for computations on spike patterns some small jitter in the spike timing is viewed as noise. For computations on firing rates even the sequence of interspike intervals and temporal relations between spikes that arrive from different input sources are viewed as noise, as long as these input spike trains represent the same firing rates. Examples for both families of computational tasks will be discussed in this article. 
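Theorem 2.1 suggests estimating the generalization measure by the same rank computation, applied to an ensemble of noisy variations of a few base inputs; the difference of the two ranks is then the heuristic combined predictor used later (Fig. 3c). A sketch under synthetic data follows: the state vectors, the noise level, and the rank tolerance are illustrative assumptions (in practice the tolerance should reflect the noise floor of the states).

```python
import numpy as np

def rank_measure(states, tol=None):
    # shared rank computation; by Theorem 2.1, r <= VC-dim(H_C) <= r + 1
    return np.linalg.matrix_rank(states, tol=tol)

rng = np.random.default_rng(2)
n = 80                                   # circuit state dimension

# kernel-quality: states for 30 genuinely different input streams
kernel_states = rng.standard_normal((n, 30))
kernel_rank = rank_measure(kernel_states)

# generalization: states for 60 noisy variations of only 4 base inputs;
# a circuit that mostly ignores the noise yields a low rank here
base = rng.standard_normal((n, 4))
noisy = np.repeat(base, 15, axis=1) + 0.01 * rng.standard_normal((n, 60))
vc_estimate = rank_measure(noisy, tol=1.0)   # tolerance above the noise floor

# heuristic combined predictor: high kernel rank, low VC estimate is best
predictor = kernel_rank - vc_estimate
```

A high kernel rank with a low VC estimate indicates a circuit that separates genuinely different inputs while collapsing noisy variants of the same input, which is the regime the paper identifies as computationally useful.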
3 Models for generic cortical microcircuits We test the validity of the proposed measures by comparing their predictions with direct evaluations of the computational performance for a large variety of models for generic cortical microcircuits consisting of 540 neurons. We used leaky integrate-and-fire neurons (footnote 4) and biologically quite realistic models for dynamic synapses (footnote 5). Neurons (20 % of which were randomly chosen to be inhibitory) were located on the grid points of a 3D grid of dimensions 6 × 6 × 15 with edges of unit length. The probability of a synaptic connection from neuron a to neuron b was proportional to exp(−D²(a, b)/λ²), where D(a, b) is the Euclidean distance between a and b, and λ regulates the spatial scaling of synaptic connectivity. Synaptic efficacies w were chosen randomly from distributions that reflect biological data (as in [Maass et al., 2002]), with a common scaling factor Wscale. Footnote 4: Membrane voltage Vm modeled by τm dVm/dt = −(Vm − Vresting) + Rm · (Isyn(t) + Ibackground + Inoise), where τm = 30 ms is the membrane time constant, Isyn models synaptic inputs from other neurons in the circuit, Ibackground models a constant unspecific background input, and Inoise models noise in the input. Footnote 5: Short-term synaptic dynamics was modeled according to [Markram et al., 1998], with distributions of synaptic parameters U (initial release probability), D (time constant for depression), F (time constant for facilitation) chosen to reflect empirical data (see [Maass et al., 2002] for details). Figure 2: Performance of different types of neural microcircuit models for classification of spike patterns.
a) In the top row are two examples of the 80 spike patterns that were used (each consisting of 4 Poisson spike trains at 20 Hz over 200 ms), and in the bottom row are examples of noisy variations (Gaussian jitter with SD 10 ms) of these spike patterns which were used as circuit inputs. b) Fraction of examples (for 200 test examples) that were correctly classified by a linear readout (trained by linear regression with 500 training examples). Results are shown for 90 different types of neural microcircuits C with λ varying on the x-axis and Wscale on the y-axis (20 randomly drawn circuits and 20 target classification functions randomly drawn from the set of 2^80 possible classification functions were tested for each of the 90 different circuit types, and resulting correctness-rates were averaged; the mean SD of the results is 0.028). Points 1, 2, 3 defined as in Fig. 1. Linear readouts from circuits with n − 1 neurons were assumed to compute a weighted sum ∑_{i=1}^{n−1} wixi(t) + w0 (see section 1). In order to simplify notation we assume that the vector x(t) contains an additional constant component x0(t) = 1, so that one can write w · x(t) instead of ∑_{i=1}^{n−1} wixi(t) + w0. In the case of classification tasks we assume that the readout outputs 1 if w · x(t) ≥ 0, and 0 otherwise. 4 Evaluating the influence of synaptic connectivity on computational performance Neural microcircuits were drawn from the distribution described in section 3 for 10 different values of λ (which scales the number and average distance of synaptically connected neurons) and 9 different values of Wscale (which scales the efficacy of all synaptic connections). 20 microcircuit models C were drawn for each of these 90 different assignments of values to λ and Wscale. For each circuit a linear readout was trained to perform one (randomly chosen) out of 2^80 possible classification tasks on noisy variations u of 80 fixed spike patterns as circuit inputs u.
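The readout used throughout this section, trained by linear regression and thresholded at w · x ≥ 0, can be sketched as follows. The synthetic "circuit states" and the hypothetical ground-truth weight vector are illustrative stand-ins for the simulated circuits; targets are coded as ±1 so that the sign threshold at 0 applies directly.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_train, n_test = 21, 500, 200

def make_states(k):
    # synthetic circuit states with constant component x0 = 1 prepended,
    # so the bias w0 is absorbed into the weight vector
    return np.hstack([np.ones((k, 1)), rng.standard_normal((k, n - 1))])

w_true = rng.standard_normal(n)               # hypothetical target rule
X_train, X_test = make_states(n_train), make_states(n_test)
y_train = np.where(X_train @ w_true >= 0, 1.0, -1.0)

# train by linear regression (least squares) on the +/-1 targets
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# classify: readout outputs 1 if w . x >= 0, and 0 otherwise
pred = X_test @ w >= 0
truth = X_test @ w_true >= 0
accuracy = np.mean(pred == truth)
```

For the actual experiments the states would come from the filtered spiking activity of the 540-neuron circuit at the readout time, with 500 training and 200 test examples as in Fig. 2b.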
The target performance for any such circuit input was to output at time t = 100 ms the class (0 or 1) of the spike pattern from which the preceding circuit input had been generated (for some arbitrary partition of the 80 fixed spike patterns into two classes). Each spike pattern u consisted of 4 Poisson spike trains over 200 ms. Performance results are shown in Fig. 2b for 90 different types of neural microcircuit models. We now test the predictive quality of the two proposed measures for the computational power of a microcircuit on spike patterns. One should keep in mind that the proposed measures do not attempt to test the computational capability of a circuit for one particular computational task, but for any distribution on Suniv and for a very large (in general infinitely large) family of computational tasks that only have in common a particular bias regarding which aspects of the incoming spike trains may carry information that is relevant for the target output of computations, and which aspects should be viewed as noise. Figure 3: Values of the proposed measures for computations on spike patterns. a) Kernel-quality for spike patterns of 90 different circuit types (average over 20 circuits, mean SD = 13; for each circuit, the average over 5 different sets of spike patterns was used; see footnote 6). b) Generalization capability for spike patterns: estimated VC-dimension of HC (for a set Suniv of inputs u consisting of 500 jittered versions of 4 spike patterns), for 90 different circuit types (average over 20 circuits, mean SD = 14; for each circuit, the average over 5 different sets of spike patterns was used). c) Difference of both measures (mean SD = 5.3). This should be compared with actual computational performance plotted in Fig. 2b. Points 1, 2, 3 defined as in Fig. 1. Fig. 3a explains why the lower left part of the parameter map in Fig. 2b is less suitable for any such computation, since there the kernel-quality of the circuits is too low. Fig. 3b explains why the upper right part of the parameter map in Fig. 2b is less suitable, since a higher VC-dimension (for a training set of fixed size) entails poorer generalization capability. We are not aware of a theoretically founded way of combining both measures into a single value that predicts overall computational performance. But if one just takes the difference of both measures then the resulting number (see Fig. 3c) predicts quite well which types of neural microcircuit models perform well for the particular computational tasks considered in Fig. 2b. 5 Evaluating the computational power of neural microcircuit models in UP- and DOWN-states Data from numerous intracellular recordings suggest that neural circuits in vivo switch between two different dynamic regimes that are commonly referred to as UP- and DOWN-states. UP-states are characterized by a bombardment with synaptic inputs from recurrent activity in the circuit, resulting in a membrane potential whose average value is significantly closer to the firing threshold, but also has larger variance. We have simulated these different dynamic regimes by varying the background current Ibackground and the noise current Inoise. Fig. 4a shows that one can simulate in this way different dynamic regimes of the same circuit where the time course of the membrane potential qualitatively matches data from intracellular recordings in UP- and DOWN-states (see e.g. [Shu et al., 2003]).
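The effect of varying Ibackground and Inoise can be illustrated by Euler-integrating the membrane equation of footnote 4 in isolation. All parameter values besides τm = 30 ms are illustrative assumptions (units are arbitrary, and the synaptic input Isyn is omitted); the point is only that a larger background current moves the mean potential toward threshold and larger noise increases its variance, as in the UP-state traces of Fig. 4a.

```python
import numpy as np

def simulate_vm(i_background, i_noise_sd, T=1.0, dt=1e-3,
                tau_m=0.03, v_rest=0.0, r_m=1.0, seed=0):
    """Euler integration of
    tau_m dVm/dt = -(Vm - Vrest) + Rm * (I_background + I_noise(t)),
    with Gaussian noise current (sketch; Isyn omitted, arbitrary units)."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    v = np.empty(steps)
    v[0] = v_rest
    for t in range(1, steps):
        i_noise = i_noise_sd * rng.standard_normal()
        dv = (-(v[t - 1] - v_rest) + r_m * (i_background + i_noise)) / tau_m
        v[t] = v[t - 1] + dt * dv
    return v

down = simulate_vm(i_background=0.2, i_noise_sd=0.1)   # DOWN-like regime
up = simulate_vm(i_background=0.8, i_noise_sd=0.6)     # UP-like regime
```

After the transient, the UP-like trace fluctuates around a higher mean with larger variance, qualitatively matching the two regimes marked in Fig. 4b.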
We have tested the computational performance of circuits in 42 different dynamic regimes (for 7 values of Ibackground and 6 values of Inoise) with 3 complex nonlinear computations on firing rates of circuit inputs (footnote 7). Inputs u consisted of 4 Poisson spike trains with time-varying rates (drawn independently every 30 ms from the interval of 0 to 80 Hz for the first two and the second two of 4 input spike trains, see middle row of Fig. 4a for a sample). Let f1(t) (f2(t)) be the actual sum of rates, normalized to the interval [0, 1], for the first two (second two) input spike trains, computed from the time interval [t − 30 ms, t]. The computational tasks considered in Fig. 4 were to compute online (and in real-time) every 30 ms the function f1(t) · f2(t) (see panel e), to decide whether the value of the product f1(t) · f2(t) lies in the interval [0.1, 0.3] or lies outside of this interval (see panel f), and to decide whether the absolute value of the difference f1(t) − f2(t) is greater than 0.25 (see panel g). Footnote 6: The rank of the matrix consisting of 500 circuit states xu(t) for t = 200 ms was computed for 500 spike patterns over 200 ms as described in section 2, see Fig. 2a. Footnote 7: Computations on firing rates were chosen as benchmark tasks both because UP-states were conjectured to enhance the performance for such tasks, and because we want to show that the proposed measures are applicable to other types of computational tasks than those considered in section 4. Figure 4: Analysis of the computational power of simulated neural microcircuits in different dynamic regimes. a) Membrane potential (for a firing threshold of 15 mV) of two randomly selected neurons from circuits in the two parameter regimes marked in panel b), as well as spike rasters for the same two parameter regimes (with the actual circuit inputs shown between the two rows). b) Estimates of the kernel-quality for input streams u with 3^4 different combinations of firing rates from 0, 20, 40 Hz in the 4 input spike trains (mean SD = 12). c) Estimate of the VC-dimension for a set Suniv of inputs consisting of 200 different spike trains u that represent 2 different combinations of firing rates (mean SD = 4.6). d) Difference of measures from panels b and c (after scaling each linearly into a common range [0, 1]). e), f), g): Evaluation of the computational performance (correlation coefficient; all for test data; mean SD is 0.06, 0.04, and 0.03 for panels e), f), and g) respectively) of the same circuits in different dynamic regimes for computations involving multiplication and absolute value of differences of firing rates (see text). The theoretically predicted parameter regime with good computational performance for any computations on firing rates (see panel d) agrees quite well with the intersection of areas with good computational performance in panels e, f, g. We wanted to test whether the proposed measures for computational power and generalization capability were able to make reasonable predictions for this completely different parameter map, and for computations on firing rates instead of spike patterns. It turns out that also in this case the kernel-quality (Fig. 4b) explains why circuits in the dynamic regime corresponding to the left-hand side of the parameter map have inferior computational power for all three computations on firing rates (see Fig. 4 e,f,g).
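The three firing-rate targets can be computed directly from the input spike trains. In the sketch below, spike counts in the trailing 30 ms window stand in for the "actual sum of rates", and the normalization constant (2 × 80 Hz, the maximum summed rate of a pair of trains) is an assumption about how the rates are scaled to [0, 1].

```python
import numpy as np

def rate_targets(spike_trains, t, window=0.03, max_rate=160.0):
    """spike_trains: list of 4 arrays of spike times (in seconds).
    f1 (f2) = summed rate of the first (second) pair of trains in
    (t - window, t], normalized to [0, 1] (max_rate = 2 * 80 Hz)."""
    def pair_rate(trains):
        count = sum(np.sum((s > t - window) & (s <= t)) for s in trains)
        return min(count / window / max_rate, 1.0)
    f1 = pair_rate(spike_trains[:2])
    f2 = pair_rate(spike_trains[2:])
    return (f1 * f2,                         # target of panel e
            0.1 <= f1 * f2 <= 0.3,           # target of panel f
            abs(f1 - f2) > 0.25)             # target of panel g

# toy input: 4 spike trains, evaluated at t = 1.0 s
trains = [np.array([0.975, 0.985, 0.995]), np.array([0.99]),
          np.array([0.98]), np.array([])]
product, in_band, diff_large = rate_targets(trains, t=1.0)
```

A readout trained on these targets would receive the circuit state, not the raw rates; the functions above only define what the readout is asked to compute every 30 ms.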
The VC-dimension (Fig. 4c) explains the decline of computational performance in the right part of the parameter map. The difference of both measures (Fig. 4d) predicts quite well the dynamic regime where high performance is achieved for all three computational tasks considered in Fig. 4 e,f,g. Note that Fig. 4e has high performance in the upper right corner, in spite of a very high VC-dimension. This could be explained by the inherent bias of linear readouts to compute smooth functions on firing rates, which fits particularly well to this particular target output. If one estimates kernel-quality and VC-dimension for the same circuits, but for computations on sparse spike patterns (for an input ensemble Suniv similar to that in section 4), one finds that circuits at the lower left corner of this parameter map (corresponding to DOWN-states) are predicted to have better computational performance for these computations on sparse input. This agrees quite well with direct evaluations of computational performance (not shown). Hence the proposed quantitative measures may provide a theoretical foundation for understanding the computational function of different states of neural activity. 6 Discussion We have proposed a new method for understanding why one neural microcircuit C is computationally more powerful than another neural microcircuit C′. This method is in principle applicable not just to circuit models, but also to neural microcircuits in vivo and in vitro. Here it can be used to analyze (for example by optical imaging) for which family of computational tasks a particular microcircuit in a particular dynamic regime is well-suited. The main assumption of the method is that (approximately) linear readouts from neural microcircuits have the task to produce the actual outputs of specific computations.
We are not aware of specific theoretically founded rules for choosing the sizes of the ensembles of inputs for which the kernel-measure and the VC-dimension are to be estimated. Obviously both have to be chosen sufficiently large so that they produce a significant gradient over the parameter map under consideration (taking into account that their maximal possible value is bounded by the circuit size). To achieve theoretical guarantees for the performance of the proposed predictor of the generalization capability of a neural microcircuit one should apply it to a relatively large ensemble Suniv of circuit inputs (and the dimension n of circuit states should be even larger). But the computer simulations of 132 types of neural microcircuit models that were discussed in this article suggest that practically quite good prediction can already be achieved for a much smaller ensemble of circuit inputs. Acknowledgment: The work was partially supported by the Austrian Science Fund FWF, project # P15386, and PASCAL project # IST2002-506778 of the European Union. References [Bartlett and Maass, 2003] Bartlett, P. L. and Maass, W. (2003). Vapnik-Chervonenkis dimension of neural nets. In Arbib, M. A., editor, The Handbook of Brain Theory and Neural Networks, pages 1188–1192. MIT Press (Cambridge), 2nd edition. [Cherkassky and Mulier, 1998] Cherkassky, V. and Mulier, F. (1998). Learning from Data. Wiley, New York. [Langton, 1990] Langton, C. G. (1990). Computation at the edge of chaos. Physica D, 42:12–37. [Maass et al., 2002] Maass, W., Natschläger, T., and Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560. [Markram et al., 1998] Markram, H., Wang, Y., and Tsodyks, M. (1998). Differential signaling via the same axon of neocortical pyramidal neurons. PNAS, 95:5323–5328. [Shu et al., 2003] Shu, Y., Hasenstaub, A., and McCormick, D. A. (2003).
Turning on and off recurrent balanced cortical activity. Nature, 423:288–293. [Vapnik, 1998] Vapnik, V. N. (1998). Statistical Learning Theory. John Wiley (New York).
Conditional Random Fields for Object Recognition Ariadna Quattoni Michael Collins Trevor Darrell MIT Computer Science and Artificial Intelligence Laboratory Cambridge, MA 02139 {ariadna, mcollins, trevor}@csail.mit.edu Abstract We present a discriminative part-based approach for the recognition of object classes from unsegmented cluttered scenes. Objects are modeled as flexible constellations of parts conditioned on local observations found by an interest operator. For each object class the probability of a given assignment of parts to local features is modeled by a Conditional Random Field (CRF). We propose an extension of the CRF framework that incorporates hidden variables and combines class conditional CRFs into a unified framework for part-based object recognition. The parameters of the CRF are estimated in a maximum likelihood framework and recognition proceeds by finding the most likely class under our model. The main advantage of the proposed CRF framework is that it allows us to relax the assumption of conditional independence of the observed data (i.e. local features) often used in generative approaches, an assumption that might be too restrictive for a considerable number of object classes. 1 Introduction The problem that we address in this paper is that of learning object categories from supervised data. Given a training set of n pairs (xi, yi), where xi is the ith image and yi is the category of the object present in xi, we would like to learn a model that maps images to object categories. In particular, we are interested in learning to recognize rigid objects such as cars, motorbikes, and faces from one or more fixed view-points. The part-based models we consider represent images as sets of patches, or local features, which are detected by an interest operator such as that described in [4]. Thus an image xi can be considered to be a vector {xi,1, . . . , xi,m} of m patches. 
Each patch xi,j has a feature-vector representation φ(xi,j) ∈Rd; the feature vector might capture various features of the appearance of a patch, as well as features of its relative location and scale. This scenario presents an interesting challenge to conventional classification approaches in machine learning, as the input space xi is naturally represented as a set of feature-vectors {φ(xi,1), . . . , φ(xi,m)} rather than as a single feature vector. Moreover, the local patches underlying the local feature vectors may have complex interdependencies: for example, they may correspond to different parts of an object, whose spatial arrangement is important to the classification task. The most widely used approach for part-based object recognition is the generative model proposed in [1]. This classification system models the appearance, spatial relations and co-occurrence of local parts. One limitation of this framework is that to make the model computationally tractable one has to assume the independence of the observed data (i.e., local features) given their assignment to parts in the model. This assumption might be too restrictive for a considerable number of object classes made of structured patterns. A second limitation of generative approaches is that they require a model P(xi,j|hi,j) of patches xi,j given underlying variables hi,j (e.g., hi,j may be a hidden variable in the model, or may simply be yi). Accurately specifying such a generative model may be challenging – in particular in cases where patches overlap one another, or where we wish to allow a hidden variable hi,j to depend on several surrounding patches. A more direct approach may be to use a feature vector representation of patches, and to use a discriminative learning approach. We follow an approach of this type in this paper. 
Similar observations concerning the limitations of generative models have been made in the context of natural language processing, in particular in sequence-labeling tasks such as part-of-speech tagging [7, 5, 3], and in previous work on conditional random fields (CRFs) for vision [2]. In sequence-labeling problems for NLP each observation x_{i,j} is typically the j'th word of some input sentence, and h_{i,j} is a hidden state, for example representing the part-of-speech of that word. Hidden Markov models (HMMs), a generative approach, require a model of P(x_{i,j} | h_{i,j}), and this can be a challenging task when features such as word prefixes or suffixes are included in the model, or where h_{i,j} is required to depend directly on words other than x_{i,j}. This has led to research on discriminative models for sequence labeling such as MEMMs [7, 5] and conditional random fields (CRFs) [3]. A strong argument for these models as opposed to HMMs concerns their flexibility in terms of representation, in that they can incorporate essentially arbitrary feature-vector representations φ(x_{i,j}) of the observed data points. We propose a new model for object recognition based on Conditional Random Fields. We model the conditional distribution p(y|x) directly. A key difference of our approach from previous work on CRFs is that we make use of hidden variables in the model. In previous work on CRFs (e.g., [2, 3]) each "label" y_i is a sequence h_i = {h_{i,1}, h_{i,2}, . . . , h_{i,m}} of labels h_{i,j} for each observation x_{i,j}. The label sequences are typically taken to be fully observed on training examples. In our case the labels y_i are unstructured labels from some fixed set of object categories, and the relationship between y_i and each observation x_{i,j} is not clearly defined. Instead, we model intermediate part-labels h_{i,j} as hidden variables in the model. The model defines conditional probabilities P(y, h | x), and hence indirectly P(y | x) = Σ_h P(y, h | x), using a CRF.
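To make the marginalization over hidden part labels concrete, here is a minimal brute-force sketch. The `psi` argument is a hypothetical stand-in for the potential Ψ(y, h, x; θ), and the tiny label and part sets are illustrative; real models use belief propagation rather than enumeration.

```python
import itertools

import numpy as np

def conditional_prob(psi, y, x, labels, parts, m):
    """Brute-force P(y | x) for a hidden-variable CRF.

    psi(y, h, x) -> float plays the role of the potential Psi(y, h, x; theta);
    labels is the set Y, parts the set H, and m the number of patches.
    Enumeration over all |H|^m part assignments -- illustrative only.
    """
    def z(label):
        # Z(label | x) = sum over all part assignments h of exp{Psi}
        return sum(np.exp(psi(label, h, x))
                   for h in itertools.product(parts, repeat=m))

    return z(y) / sum(z(yp) for yp in labels)
```

With a constant potential all labels come out equiprobable; a learned potential reweights them, which is the behaviour the model relies on.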
Dependencies between the hidden variables h are modeled by an undirected graph over these variables. The result is a model where inference and parameter estimation can be carried out using standard graphical model algorithms such as belief propagation [6].

2 The Model

2.1 Conditional Random Fields with Hidden Variables

Our task is to learn a mapping from images x to labels y. Each y is a member of a set Y of possible image labels, for example, Y = {background, car}. We take each image x to be a vector of m "patches" x = {x_1, x_2, . . . , x_m}.¹ Each patch x_j is represented by a feature vector φ(x_j) ∈ R^d. For example, in our experiments each x_j corresponds to a patch that is detected by the feature detector in [4]; Section 3 gives details of the feature-vector representation φ(x_j) for each patch. Our training set consists of labeled images (x_i, y_i) for i = 1 . . . n, where each y_i ∈ Y, and each x_i = {x_{i,1}, x_{i,2}, . . . , x_{i,m}}. For any image x we also assume a vector of "parts" variables h = {h_1, h_2, . . . , h_m}. These variables are not observed on training examples, and will therefore form a set of hidden variables in the model.

¹Note that the number of patches m can vary across images, and did vary in our experiments. For convenience we use notation where m is fixed across different images; in reality it will vary across images, but this leads to minor changes to the model.

Each h_j is a member of H, where H is a finite set of possible parts in the model. Intuitively, each h_j corresponds to a labeling of x_j with some member of H. Given these definitions of image-labels y, images x, and part-labels h, we will define a conditional probabilistic model:

    P(y, h | x, θ) = exp{Ψ(y, h, x; θ)} / Σ_{y′,h} exp{Ψ(y′, h, x; θ)}.   (1)

Here θ are the parameters of the model, and Ψ(y, h, x; θ) ∈ R is a potential function parameterized by θ. We will discuss the choice of Ψ shortly. It follows that

    P(y | x, θ) = Σ_h P(y, h | x, θ) = Σ_h exp{Ψ(y, h, x; θ)} / Σ_{y′,h} exp{Ψ(y′, h, x; θ)}.
(2)

Given a new test image x, and parameter values θ* induced from a training sample, we take the label for the image to be arg max_{y∈Y} P(y | x, θ*). Following previous work on CRFs [2, 3], we use the following objective function in training the parameters:

    L(θ) = Σ_i log P(y_i | x_i, θ) − (1/2σ²) ||θ||²   (3)

The first term in Eq. 3 is the log-likelihood of the data. The second term is the log of a Gaussian prior with variance σ², i.e., P(θ) ∼ exp(−(1/2σ²) ||θ||²). We will use gradient ascent to search for the optimal parameter values, θ* = arg max_θ L(θ), under this criterion. We now turn to the definition of the potential function Ψ(y, h, x; θ). We assume an undirected graph structure, with the hidden variables {h_1, . . . , h_m} corresponding to vertices in the graph. We use E to denote the set of edges in the graph, and we write (j, k) ∈ E to signify that there is an edge in the graph between variables h_j and h_k. In this paper we assume that E is a tree.² We define Ψ to take the following form:

    Ψ(y, h, x; θ) = Σ_{j=1}^m Σ_l f¹_l(j, y, h_j, x) θ¹_l + Σ_{(j,k)∈E} Σ_l f²_l(j, k, y, h_j, h_k, x) θ²_l   (4)

where f¹_l, f²_l are functions defining the features in the model, and θ¹_l, θ²_l are the components of θ. The f¹ features depend on single hidden-variable values in the model; the f² features can depend on pairs of values. Note that Ψ is linear in the parameters θ, and the model in Eq. 1 is a log-linear model. Moreover, the features respect the structure of the graph, in that no feature depends on more than two hidden variables h_j, h_k, and if a feature does depend on variables h_j and h_k there must be an edge (j, k) in the graph E. Assuming that the edges in E form a tree, and that Ψ takes the form in Eq. 4, exact methods exist for inference and parameter estimation in the model. This follows because belief propagation [6] can be used to calculate the following quantities in O(|E||Y|) time:

    ∀y ∈ Y:  Z(y | x, θ) = Σ_h exp{Ψ(y, h, x; θ)}

    ∀y ∈ Y, j ∈ 1 . . . m, a ∈ H:  P(h_j = a | y, x, θ) = Σ_{h : h_j = a} P(h | y, x, θ)

    ∀y ∈ Y, (j, k) ∈ E, a, b ∈ H:  P(h_j = a, h_k = b | y, x, θ) = Σ_{h : h_j = a, h_k = b} P(h | y, x, θ)

²This will allow exact methods for inference and parameter estimation in the model, for example using belief propagation. If E contains cycles then approximate methods, such as loopy belief propagation, may be necessary for inference and parameter estimation.

The first term Z(y | x, θ) is a partition function defined by a summation over the h variables. Terms of this form can be used to calculate P(y | x, θ) = Z(y | x, θ) / Σ_{y′} Z(y′ | x, θ). Hence inference (calculation of arg max_y P(y | x, θ)) can be performed efficiently in the model. The second and third terms are marginal distributions over individual variables h_j or pairs of variables h_j, h_k corresponding to edges in the graph. The next section shows that the gradient of L(θ) can be defined in terms of these marginals, and hence can be calculated efficiently.

2.2 Parameter Estimation Using Belief Propagation

This section considers estimation of the parameters θ* = arg max L(θ) from a training sample, where L(θ) is defined in Eq. 3. In our work we used a conjugate-gradient method to optimize L(θ) (note that due to the use of hidden variables, L(θ) has multiple local optima, and our method is therefore not guaranteed to reach the globally optimal point). In this section we describe how the gradient of L(θ) can be calculated efficiently. Consider the likelihood term contributed by the i'th training example, defined as:

    L_i(θ) = log P(y_i | x_i, θ) = log ( Σ_h exp{Ψ(y_i, h, x_i; θ)} / Σ_{y′,h} exp{Ψ(y′, h, x_i; θ)} )   (5)

We first consider derivatives with respect to the parameters θ¹_l corresponding to features f¹_l(j, y, h_j, x) that depend on single hidden variables.
Taking derivatives gives

    ∂L_i(θ)/∂θ¹_l = Σ_h P(h | y_i, x_i, θ) ∂Ψ(y_i, h, x_i; θ)/∂θ¹_l − Σ_{y′,h} P(y′, h | x_i, θ) ∂Ψ(y′, h, x_i; θ)/∂θ¹_l

                  = Σ_h P(h | y_i, x_i, θ) Σ_{j=1}^m f¹_l(j, y_i, h_j, x_i) − Σ_{y′,h} P(y′, h | x_i, θ) Σ_{j=1}^m f¹_l(j, y′, h_j, x_i)

                  = Σ_{j,a} P(h_j = a | y_i, x_i, θ) f¹_l(j, y_i, a, x_i) − Σ_{y′,j,a} P(h_j = a, y′ | x_i, θ) f¹_l(j, y′, a, x_i)

It follows that ∂L_i(θ)/∂θ¹_l can be expressed in terms of components P(h_j = a | x_i, θ) and P(y | x_i, θ), which can be calculated using belief propagation, provided that the graph E forms a tree structure. A similar calculation gives

    ∂L_i(θ)/∂θ²_l = Σ_{(j,k)∈E, a,b} P(h_j = a, h_k = b | y_i, x_i, θ) f²_l(j, k, y_i, a, b, x_i) − Σ_{y′, (j,k)∈E, a,b} P(h_j = a, h_k = b, y′ | x_i, θ) f²_l(j, k, y′, a, b, x_i)

hence ∂L_i(θ)/∂θ²_l can also be expressed in terms of expressions that can be calculated using belief propagation.

2.3 The Specific Form of our Model

We now turn to the specific form for the model in this paper. We define

    Ψ(y, h, x; θ) = Σ_j φ(x_j) · θ(h_j) + Σ_j θ(y, h_j) + Σ_{(j,k)∈E} θ(y, h_j, h_k)   (6)

Here θ(k) ∈ R^d for k ∈ H is a parameter vector corresponding to the k'th part label. The inner product φ(x_j) · θ(h_j) can be interpreted as a measure of the compatibility between patch x_j and part-label h_j. Each parameter θ(y, k) ∈ R for k ∈ H, y ∈ Y can be interpreted as a measure of the compatibility between part k and label y. Finally, each parameter θ(y, k, l) ∈ R for y ∈ Y and k, l ∈ H measures the compatibility between an edge with labels k and l and the label y. It is straightforward to verify that the definition in Eq. 6 can be written in the same form as Eq. 4. Hence belief propagation can be used for inference and parameter estimation in the model. The patches x_{i,j} in each image are obtained using the SIFT detector [4]. Each patch x_{i,j} is then represented by a feature vector φ(x_{i,j}) that incorporates a combination of SIFT and relative location and scale features.
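The structure of the gradient (a feature expectation conditional on the observed label, minus the full model expectation) can be illustrated on a toy log-linear model. Here `feat` is a hypothetical feature map standing in for the summed features f_l, and brute-force enumeration over h stands in for the belief-propagation marginals used in practice:

```python
import itertools

import numpy as np

def log_linear_grad(theta, x, y_obs, feat, labels, parts, m):
    """Gradient of log P(y_obs | x, theta) for Psi = theta . f(y, h, x).

    Computed as E[f | y_obs, x] - E[f | x], the difference of feature
    expectations appearing in the derivation above (brute force over h).
    """
    configs = list(itertools.product(parts, repeat=m))

    def joint(y, h):
        return np.exp(theta @ feat(y, h, x))

    Z = sum(joint(y, h) for y in labels for h in configs)
    Zy = sum(joint(y_obs, h) for h in configs)
    e_cond = sum(joint(y_obs, h) * feat(y_obs, h, x) for h in configs) / Zy
    e_full = sum(joint(y, h) * feat(y, h, x)
                 for y in labels for h in configs) / Z
    return e_cond - e_full
```

A useful sanity check is to compare this analytic gradient against finite differences of log P(y_obs | x, θ); the two agree to numerical precision on small models.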
The tree E is formed by running a minimum spanning tree algorithm over the parts h_{i,j}, where the cost of an edge in the graph between h_{i,j} and h_{i,k} is taken to be the distance between x_{i,j} and x_{i,k} in the image. Note that the structure of E will vary across different images. Our choice of E encodes our assumption that parts conditioned on features that are spatially close are more likely to be dependent. In the future we plan to experiment with the minimum spanning tree approach under other definitions of edge cost. We also plan to investigate more complex graph structures that involve cycles, which may require approximate methods such as loopy belief propagation for parameter estimation and inference.

3 Experiments

We carried out three sets of experiments on a number of different data sets.³ The first two experiments consisted of training a two-class model (object vs. background) to distinguish between a category from a single viewpoint and background. The third experiment consisted of training a multi-class model to distinguish between n classes. The only parameter that was adjusted in the experiments was the scale of the images upon which the interest point detector was run. In particular, we adjusted the scale on the car side data set: in this data set the images were too small, and without this adjustment the detector would fail to find a significant number of features. For the experiments we randomly split each data set into three separate data sets: training, validation and testing. We use the validation data set to set the variance parameter σ² of the Gaussian prior.

3.1 Results

In Figure 2.a we show how the number of parts in the model affects performance. In the case of the car side data set, the ten-part model shows a significant improvement compared to the five-part model, while for the car rear data set the performance improvement obtained by increasing the number of parts is not as significant.
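The minimum-spanning-tree construction of E described at the start of this section can be sketched with Prim's algorithm over patch locations. The quadratic inner loop and the `(parent, child)` edge representation are illustrative choices, not the authors' implementation:

```python
import numpy as np

def mst_edges(positions):
    """Minimum spanning tree over patch locations via Prim's algorithm.

    Edge cost is the Euclidean distance between patch positions in the
    image, mirroring the edge-cost definition in the text. Returns a
    list of (parent, child) index pairs.
    """
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))  # pairwise distance matrix
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:               # cheapest edge leaving the tree
            for j in range(n):
                if j in in_tree:
                    continue
                if best is None or d[i, j] < d[best[0], best[1]]:
                    best = (i, j)
        edges.append(best)
        in_tree.append(best[1])
    return edges
```

Because the tree has exactly one fewer edge than there are patches, belief propagation over it remains exact, as required by the model.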
Figure 2.b shows a performance comparison with previous approaches [1] tested on the same data sets (though on a different partition). We observe an improvement of between 2 % and 5 % for all data sets. Figures 3 and 4 show results for the multi-class experiments. Notice that random performance for the animal data set would be 25 % across the diagonal. The model exhibits best performance for the Leopard data set, for which the presence of part 1 alone is a clear predictor of the class. This shows again that our model can learn discriminative part distributions for each class. Figure 3 shows results for a multi-view experiment where the task is to distinguish between two different views of a car and background.

³The images were obtained from http://www.vision.caltech.edu/html-files/archive.html and the car side images from http://l2r.cs.uiuc.edu/~cogcomp/Data/Car/. Notice that, since our algorithm does not currently allow for the recognition of multiple instances of an object, we test it on a partition of the training set at http://l2r.cs.uiuc.edu/~cogcomp/Data/Car/ and not on the testing set at that site. The animals data set is a subset of Caltech's 101 categories data set.

Figure 1: Examples of the most likely assignment of parts to features for the two-class experiments (car data set).

(a)
Data set  | 5 parts | 10 parts
Car Side  | 94 %    | 99 %
Car Rear  | 91 %    | 91.7 %

(b)
Data set  | Our Model | Others [1]
Car Side  | 99 %      |
Car Rear  | 94.6 %    | 90.3 %
Face      | 99 %      | 96.4 %
Plane     | 96 %      | 90.2 %
Motorbike | 95 %      | 92.5 %

Figure 2: (a) Equal Error Rates for the car side and car rear experiments with different numbers of parts. (b) Comparative Equal Error Rates.

Figure 1 displays the Viterbi labeling⁴ for a set of example images, showing the most likely assignment of local features to parts in the model. Figure 6 shows the mean and variance of each part's location for car side images and background images.
The mean and variance of each part's location for the car side images were calculated in the following manner: first, we find for every image classified as class a the most likely part assignment under our model; second, we calculate the mean and variance of the positions of all local features that were assigned to the same part. Similarly, Figure 5 shows part counts among the Viterbi paths assigned to examples of a given class. As can be seen in Figure 6, while the mean location of a given part in the background images and the mean location of the same part in the car images are very similar, the parts in the car have a much tighter distribution, which seems to suggest that the model is learning the shape of the object. As shown in Figure 5, the model has also learnt discriminative part distributions for each class; for example, the presence of part 1 seems to be a clear predictor for the car class. In general, part assignments seem to rely on a combination of appearance and relative location. Part 1, for example, is assigned to wheel-like patterns located on the left of the object.

⁴This is the labeling h* = arg max_h P(h | y, x, θ), where x is an image and y is the label for the image under the model.

Data set  | Precision | Recall
Car Side  | 87.5 %    | 98 %
Car Rear  | 87.4 %    | 86.5 %

Figure 3: Precision and recall results for the 3-class experiment.

Data set  | Leopards | Llamas | Rhinos | Pigeons
Leopards  | 91 %     | 2 %    | 0 %    | 7 %
Llamas    | 0 %      | 50 %   | 27 %   | 23 %
Rhinos    | 0 %      | 40 %   | 46 %   | 14 %
Pigeons   | 0 %      | 30 %   | 20 %   | 50 %

Figure 4: Confusion table for the 4-class experiment.

However, the parts might not carry semantic meaning. It appears that the model has learnt a vocabulary of very general parts with significant variability in appearance and learns to discriminate between classes by capturing the most likely arrangement of these parts for each class.
In some cases the model relies more heavily on relative location than appearance, because the appearance information might not be very useful for discriminating between the two classes. One reason for this is that the detector produces a large number of false detections, making the appearance data too noisy for discrimination. The fact that the model is able to cope with this lack of discriminating appearance information illustrates its flexible data-driven nature. This can be a desirable property of a general object recognition system, because for some object classes appearance is the important discriminant (i.e., in textured classes) while for others shape may be important (i.e., in geometrically constrained classes). One noticeable difference between our model and similar part-based models is that our model learns large parts composed of small local features. This is not surprising given how the part dependencies were built (i.e., through their position in the minimum spanning tree): the potential functions defined on pairs of hidden variables tend to smooth the allocation of parts to patches.

Figure 5: Graph showing part counts for the background (left) and car side images (right).

4 Conclusions and Further Work

In this work we have presented a novel approach that extends the CRF framework by incorporating hidden variables and combining class-conditional CRFs into a unified framework for object recognition. Similarly to CRFs and other maximum entropy models, our approach allows us to combine arbitrary observation features for training discriminative classifiers with hidden variables. Furthermore, by making some assumptions about the joint distribution of hidden variables, one can derive efficient training algorithms based on dynamic programming.
Figure 6: (a) Graph showing mean and variance of locations for the different parts for the car side images; (b) mean and variance of part locations for the background images.

The main limitation of our model is that it is dependent on the feature detector picking up discriminative features of the object. Furthermore, our model might learn to discriminate between classes based on the statistics of the feature detector and not the true underlying data, to which it has no access. This is not a desirable property, since it assumes the feature detector to be consistent. As future work we would like to incorporate the feature detection process into the model.

References

[1] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 264-271, June 2003.
[2] S. Kumar and M. Hebert. Discriminative random fields: A framework for contextual interaction in classification. In IEEE Int. Conference on Computer Vision, volume 2, pages 1150-1157, June 2003.
[3] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. Int. Conf. on Machine Learning, 2001.
[4] D. Lowe. Object recognition from local scale-invariant features. In IEEE Int. Conference on Computer Vision, 1999.
[5] A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In ICML-2000, 2000.
[6] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[7] A. Ratnaparkhi. A maximum entropy part-of-speech tagger. In EMNLP, 1996.
| 2004 | 13 |
| 2,541 |
The Variational Ising Classifier (VIC) algorithm for coherently contaminated data Oliver Williams Dept. of Engineering University of Cambridge omcw2@cam.ac.uk Andrew Blake Microsoft Research Ltd. Cambridge, UK Roberto Cipolla Dept. of Engineering University of Cambridge Abstract There has been substantial progress in the past decade in the development of object classifiers for images, for example of faces, humans and vehicles. Here we address the problem of contaminations (e.g. occlusion, shadows) in test images which have not explicitly been encountered in training data. The Variational Ising Classifier (VIC) algorithm models contamination as a mask (a field of binary variables) with a strong spatial coherence prior. Variational inference is used to marginalize over contamination and obtain robust classification. In this way the VIC approach can turn a kernel classifier for clean data into one that can tolerate contamination, without any specific training on contaminated positives. 1 Introduction Recent progress in discriminative object detection, especially for faces, has yielded good performance and efficiency [1, 2, 3, 4]. Such systems are capable of classifying those positives that can be generalized from positive training data. This is restrictive in practice in that test data may contain distortions that take it outside the strict ambit of the training positives. One example would be lighting changes (to a face) but this can be addressed reasonably effectively by a normalizing transformation applied to training and test images; doing so is common practice in face classification. Other sorts of disruption are not so easily factored out. A prime example is partial occlusion. The aim of this paper is to extend a classifier trained on clean positives to accept also partially occluded positives, without further training. The approach is to capture some of the regularity inherent in a typical pattern of contamination, namely its spatial coherence. 
This can be thought of as extending the generalizing capability of a classifier to tolerate the sorts of image distortion that occur as a result of contamination. As done previously in one-dimension, for image contours [5], the Variational Ising Classifier (VIC) models contamination explicitly as switches with a strong coherence prior in the form of an Ising model, but here over the full two-dimensional image array. In addition, the Ising model is loaded with a bias towards non-contamination. The aim is to incorporate these hidden contamination variables into a kernel classifier such as [1, 3]. In fact the Relevance Vector Machine (RVM) is particularly suitable [6] as it is explicitly probabilistic, so that contamination variables can be incorporated as a hidden layer of random variables. i edge neighbours of i Figure 1: The 2D Ising model is applied over a graph with edges e ∈Υ between neighbouring pixels (connected 4-wise). Classification is done by marginalization over all possible configurations of the hidden variable array, and this is made tractable by variational (mean field) inference. The inference scheme makes use of “hallucination” to fill in parts of the object that are unobserved due to occlusion. Results of VIC are given for face detection. First we show that the classifier performance is not significantly damaged by the inclusion of contamination variables. Then a contaminated test set is generated using real test images and computer generated contaminations. Over this test data the VIC algorithm does indeed perform significantly better than a conventional classifier (similar to [4]). The hidden variable layer is shown to operate effectively, successfully inferring areas of contamination. Finally, inference of contamination is shown working on real images with real contaminations. 2 Bayesian modelling of contamination Classification requires P(F|I), the posterior for the proposition F that an object is present given the image data intensity array I. 
This can be computed in terms of likelihoods

    P(F | I) = P(I | F) P(F) / [ P(I | F) P(F) + P(I | F̄) P(F̄) ]   (1)

so then the test P(F | I) > 1/2 becomes

    log P(I | F) − log P(I | F̄) > t   (2)

where t is a prior-dependent threshold that controls the tradeoff between positive and negative classification errors. Suppose we are given a likelihood P(I | θ, F) for the presence of a face given contamination θ, an array of binary "observation" variables corresponding to each pixel I_j of I, such that θ_j = 0 indicates contamination at that pixel, whereas θ_j = 1 indicates a successfully observed pixel. Then, in principle,

    P(I | F) = Σ_θ P(I | θ, F) P(θ),   (3)

(making the reasonable assumption P(θ | F) = P(θ), that the pattern of contamination is object independent) and similarly for log P(I | F̄). The marginalization itself is intractable, requiring a summation over all 2^N possible configurations of θ, for images with N pixels. Approximating that marginalization is dealt with in the next section. In the meantime, there are two other problems to deal with: specifying the prior P(θ); and specifying the likelihood under contamination P(I | θ, F) given only training data for the unoccluded object.

2.1 Prior over contaminations

The prior contains two terms: the first expresses the belief that contamination will occur in coherent regions of a subimage. This takes the form of an Ising model [7] with energy U_I(θ) that penalizes adjacent pixels which differ in their labelling (see Figure 1); the second term U_C biases generally against contamination a priori, and its balance with the first term is mediated by the constant λ. The total prior energy is then

    U(θ) = U_I(θ) + λ U_C(θ) = Σ_{e∈Υ} [1 − δ(θ_{e1} − θ_{e2})] + λ Σ_j δ(θ_j),   (4)

where δ(x) = 1 if x = 0 and 0 otherwise, and e1, e2 are the indices of the pixels at either end of edge e ∈ Υ (Figure 1).
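The prior energy of Eq. 4 on a 4-connected pixel grid can be sketched directly; `theta` is a binary mask array and `lam` the bias constant λ (names are illustrative):

```python
import numpy as np

def prior_energy(theta, lam):
    """U(theta) of Eq. 4: count of disagreeing 4-connected neighbour
    pairs (Ising term) plus lam times the number of contaminated
    pixels (theta_j = 0)."""
    # each horizontal/vertical neighbour pair is one edge e in Upsilon
    ising = np.sum(theta[1:, :] != theta[:-1, :]) \
          + np.sum(theta[:, 1:] != theta[:, :-1])
    return float(ising) + lam * float(np.sum(theta == 0))
```

A uniform mask costs nothing under the Ising term, while isolated contaminated pixels pay both for their disagreeing edges and for the bias λ, which is what makes spatially coherent contamination the cheaper explanation.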
The prior energy determines a probability via a temperature constant 1/T_0 [7]:

    P(θ) ∝ e^{−U(θ)/T_0} = e^{−U_I(θ)/T_0} e^{−λ U_C(θ)/T_0}   (5)

2.2 Relevance vector machine

An unoccluded classifier P(F | I, θ = 1) can be learned from training data using a Relevance Vector Machine (RVM) [6], trained on a database of frontal face and non-face images [8] (see Section 4 for details). The probabilistic properties of the RVM make it a good choice when (later) it comes to marginalising over θ. For now we consider how to construct the likelihood itself. First the conventional, unoccluded case is considered, for which the posterior P(F | I) is learned from positive and negative examples. Kernel functions [9] are computed between a candidate image I and a subset of relevance vectors {x_k}, retained from the training set. Gaussian kernels are used here to compute

    y(I) = Σ_k w_k exp( −α Σ_j (I_j − x_{kj})² ),   (6)

where w_k are learned weights, and x_{kj} is the jth pixel of the kth relevance vector. Then the posterior is computed via the logistic sigmoid function as

    P(F | I, θ = 1) = σ(y(I)) = 1 / (1 + e^{−y(I)}),   (7)

and finally the unoccluded data-likelihood would be

    P(I | F, θ = 1) ∝ σ(y(I)) / P(F).   (8)

2.3 Hallucinating appearance

The aim now is to derive the occluded likelihood from the unoccluded case, where the contamination mask is known, without any further training. To do this, (8) must be extended to give P(I | F, θ) for arbitrary masks θ, despite the fact that the pixels I_j from the object are not observed wherever θ_j = 0. In principle one should take into account all possible (or at least probable) values for the occluded pixels. Here, for simplicity, a single fixed hallucination is substituted for occluded pixels, and then we proceed as if those values had actually been observed.
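Eqs. 6-7 can be sketched in a few lines; `X` holds the relevance vectors as rows and `w` the learned weights (all names are illustrative, and a trained RVM would supply the actual values):

```python
import numpy as np

def rvm_posterior(I, X, w, alpha):
    """Eqs. 6-7: Gaussian kernel score between image I and the
    relevance vectors (rows of X), passed through the logistic sigmoid."""
    k = np.exp(-alpha * np.sum((I - X) ** 2, axis=1))  # one kernel per x_k
    y = float(np.sum(w * k))                           # y(I) of Eq. 6
    return 1.0 / (1.0 + np.exp(-y))                    # sigma(y(I)) of Eq. 7
```

With a zero weight vector the score y(I) is zero and the posterior is exactly 0.5, the decision boundary of the sigmoid.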
This gives

    P(I | F, θ) ∝ σ(ỹ(I, θ)) / P(F)   (9)

where ỹ(θ, I) = y(Ĩ(I, θ, F)) and

    Ĩ(I, θ, F)_j = I_j if θ_j = 1, and (E[I | F])_j otherwise,   (10)

in which E[I | F] is a fixed hallucination, conditioned on the model F, and computed as a sample mean over training instances.

3 Approximate marginalization of θ by mean field

At this point we return to the task of marginalising over θ (3) to obtain P(I | F) and P(I | F̄) for use in classification (2). Due to the connectedness of neighbouring pixels in the Ising prior (Figure 1), P(I, θ | F) is a Markov Random Field (MRF) [7]. The marginalized likelihood P(I | F) could be estimated by Gibbs sampling [10], but that takes tens of minutes to converge in our experiments. The following section describes a mean field approximation which converges in a few seconds. The mean field algorithm is given here for P(I | F) but must be repeated also for P(I | F̄), simply substituting F̄ for F throughout.

3.1 Variational approximation

Mean field approximation is a form of variational approximation [11] and transforms an inference problem into the optimization of a functional J:

    J(Q) = log P(I | F) − KL[Q(θ) ∥ P(θ | F, I)],   (11)

where KL is the Kullback-Leibler divergence

    KL[Q(θ) ∥ P(θ | F, I)] = Σ_θ Q(θ) log [ Q(θ) / P(θ | F, I) ].

The objective functional J(Q) is a lower bound on the log-marginal probability log P(I | F) [11]; when it is maximized at Q*, it gives both the marginal likelihood J(Q*) = log P(I | F) and the posterior distribution Q*(θ) = P(θ | F, I) over hidden variables. Following [11], J(Q) is simplified using Bayes' rule: J(Q) = H(Q) + E_Q[log P(I, θ | F)], where H(·) is the entropy of a distribution [12] and E_Q[g(θ)] = Σ_θ Q(θ) g(θ) denotes the expectation of a function g with respect to Q(θ). A form of Q(θ) must be chosen that makes the maximization of J(Q) tractable. For mean-field approximation, Q(θ) is modelled as a pixel-wise product of factors: Q(θ) = Π_i Q_i(θ_i).
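The hallucination substitution of Eq. 10 amounts to a masked blend of the observed image with the mean appearance (a minimal sketch; `mean_face` plays the role of E[I | F]):

```python
import numpy as np

def hallucinate(I, theta, mean_face):
    """Eq. 10: keep observed pixels (theta_j = 1) and substitute the
    mean-face appearance wherever the mask marks contamination."""
    return np.where(theta == 1, I, mean_face)
```

Because the hallucination is a fixed array, this substitution costs nothing inside the optimization loop, which is the advantage noted later in the text.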
It is now possible to maximize J iteratively with respect to each marginal Q_i(θ_i) in turn, giving the mean field update [11]:

    Q_i ← (1/Z_i) exp{ E_{Q|θi}[log P(I, θ | F)] },   (12)

where

    Z_i = Σ_{θi} exp{ E_{Q|θi}[log P(I, θ | F)] }

is the partition function and E_{Q|θi}[·] is the expectation with respect to Q given θ_i:

    E_{Q|θi}[g(θ)] = Σ_{{θ}_{j≠i}} ( Π_{j≠i} Q_j(θ_j) ) g(θ).

3.2 Taking expectations over P(I, θ | F)

To perform the expectation required in (12), the log-joint distribution is written as:

    log P(I, θ | F) = −log(1 + e^{−ỹ(θ,I)}) − U_I(θ)/T_0 − λ U_C(θ)/T_0 + const.

The conditional expectation E_{Q|θi} in (12) is found efficiently from the complete expectations by replacing only terms in θ_i. Likewise, when one factor of Q changes (12), the complete expectations may be updated without recomputing them ab initio. For brevity, we give the expressions for the complete expectations only. For the prior this is simply:

    E_Q[U(θ)] = Σ_{e∈Υ} Σ_{θe} Q_e(θ_e) [1 − δ(θ_{e1} − θ_{e2})] + λ Σ_j Q_j(θ_j = 0).   (13)

For the likelihood it is more difficult. Saul et al. [13] show how to approximate the expectation over the sigmoid function by introducing a dummy variable ξ:

    E_Q[log(1 + e^{−ỹ(θ,I)})] ≤ −ξ E_Q[ỹ(θ, I)] + log{ E_Q[e^{ξ ỹ(θ,I)}] + E_Q[e^{(ξ−1) ỹ(θ,I)}] }.

The Gaussian RBF in (6) means that it is not feasible to compute the expectation¹ E_Q[e^{ξ ỹ(θ,I)}], so a simpler approximation is used:

    E_Q[log σ(ỹ(θ, I))] ≈ log σ(E_Q[ỹ(θ, I)]),

where

    E_Q[ỹ(θ, I)] = Σ_k w_k Π_j Σ_{θj} Q_j(θ_j) exp( −α (Ĩ(I, θ, F)_j − x_{kj})² ).   (14)

4 Results and discussion

The mean field algorithm described above is capable only of local optimization of J(Q). A symptom of this is that it exhibits spontaneous symmetry breaking [11], setting the contamination field to either all contaminated or all uncontaminated. This is alleviated through careful initialization. By performing iterations initially at a high temperature, T_h, the prior is weakened.
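As a toy illustration of the update (12), here is a sequential mean-field sweep for the Ising/bias prior alone (no likelihood term), where `q[i, j]` approximates Q_j(θ_j = 1); the function names and grid size are illustrative, not the paper's implementation:

```python
import numpy as np

def mean_field_ising(T, lam, shape=(8, 8), iters=50):
    """Sequential mean-field updates for the Ising + bias prior alone.

    q approximates Q_i(theta_i = 1) at each pixel; each update is the
    normalized exponential of the expected local energy, as in Eq. 12
    (non-local energy terms cancel in the normalization)."""
    q = np.full(shape, 0.5)
    for _ in range(iters):
        for i in range(shape[0]):
            for j in range(shape[1]):
                nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                nbrs = [(a, b) for a, b in nbrs
                        if 0 <= a < shape[0] and 0 <= b < shape[1]]
                nb = sum(q[a, b] for a, b in nbrs)
                # expected disagreement energy for theta_ij = 1 vs 0;
                # the bias lam is paid only when theta_ij = 0
                e1 = len(nbrs) - nb
                e0 = nb + lam
                q[i, j] = 1.0 / (1.0 + np.exp(-(e0 - e1) / T))
    return q
```

With no bias the symmetric state q = 0.5 is a fixed point; a positive bias against contamination drives the field coherently towards "uncontaminated", the behaviour the annealing schedule in the text is designed to control.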
The temperature is then progressively decreased, on a linear annealing schedule [10], until the modelled prior temperature T0 is reached. Figure 2 shows pseudo-code for the VIC algorithm. Note also that an advantage of hallucinating appearance from the mean face is that the hallucination process requires no computation within the optimization loop. For 19 × 19 subimages, the average time taken for the VIC algorithm to converge is 4 seconds. However, this is an unoptimized Matlab implementation; a C++ implementation is anticipated to be at least 10 times faster. The training set used for the RVM [8] contains subimages of registered faces and non-faces which were histogram equalized [14] to reduce the effect of different lighting, with their pixel values scaled to the range [0, 1]. The same is done to each test subimage I. The RVM was trained using 1500 face examples and 1500 non-face examples². Parameters were set as follows: the RBF width parameter in (6) is α = 0.05; the contamination cost is λ = 0.2; and the temperature constants are Th = 2.5, T0 = 1.5 and ∆T = 0.2. As a by-product of the VIC algorithm, the posterior pattern P(θ|F, I) of contamination is approximately inferred as the value of Q which maximizes J. Figure 3 shows some results of this. As might be expected, for a non-face, the algorithm hallucinates an intact face with total contamination (for example, row 4 of the figure); but of course the marginalized posterior probability P(F|I) is very small in such a case.

4.1 Classifier

To assess the classification performance of the VIC, contaminated positives were automatically generated (figure 4).
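The preprocessing step described above — histogram equalization followed by scaling of pixel values to [0, 1] — might be sketched as follows (a NumPy illustration assuming integer-valued input images; not the authors' code):

```python
import numpy as np

def equalize_and_scale(img, levels=256):
    """Histogram-equalize an integer image and scale the result to [0, 1]."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]            # normalised cumulative histogram, values in (0, 1]
    return cdf[img]           # map each pixel through the CDF

# Tiny 2x3 example with 4 grey levels (values invented).
img = np.array([[0, 0, 1], [2, 3, 3]], dtype=np.uint8)
out = equalize_and_scale(img, levels=4)
assert out.min() >= 0.0 and out.max() <= 1.0
```

Mapping each pixel through the normalised cumulative histogram flattens the intensity distribution and lands directly in [0, 1], which is the effect the lighting normalisation needs.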
These were combined with pure faces and pure non-faces (none of which were used in the training set) and tested to produce the Receiver Operating Characteristic (ROC) curves given in Figure 4, for the unaltered RVM acting on the contaminated set and for the new contamination-tolerant VIC outlined in this paper. For comparison, points are shown for a boosted cascade of classifiers [15], which is a publicly available detector based on the system of Viola and Jones [4]. The curve shown for the RVM against an uncontaminated test set confirms that contamination does make the classification task considerably harder.

¹The term exp[ξỹ(θ, I)] = exp[ξ Σ_k w_k Π_j e^{−α d_j(I, x_k | θ_j)}] does not factorize across pixels.
²These sizes are limited in practice by the complexity of the training algorithm [6].

Require: Candidate image region I
Require: Parameters Th, T0, ∆T, λ
Require: RVM weights and examples w_k, x_k
Require: Mean face appearance Ī
  Initialize Q_i(θ_i = 1) ← 0.5 ∀i
  Compute E_Q[U(θ)] (13)
  Compute E_Q[ỹ(θ, I)] (14)
  T ← Th
  while T > T0 do
    while Q not converged do
      for all image locations i do
        Compute conditional expectations E_{Q|θ_i}[U(θ)] and E_{Q|θ_i}[ỹ(θ, I)]
        Compute E_{Q|θ_i}[log P(I, θ|F)] = log σ(E_{Q|θ_i}[ỹ(θ, I)]) − E_{Q|θ_i}[U(θ)]
        Compute partition Z_i = Σ_{θ_i} exp E_{Q|θ_i}[log P(I, θ|F)]
        Update Q_i(θ_i) ← (1/Z_i) exp E_{Q|θ_i}[log P(I, θ|F)]
        Update complete expectations E_Q[U(θ)] and E_Q[ỹ(θ, I)]
      end for
      T ← T − ∆T
    end while
  end while

Figure 2: Pseudo-code for the VIC algorithm.

Figure 3: Partially occluded images with inferred areas of probable contamination (dark). Columns: input I, hallucinated image, contamination field Q(θ = 1).

Figure 4: ROC curves (false positive rate against true positive rate) for the RVM with no contamination, the RVM, the VIC, the boosted cascade, and the cascade with no contamination. Also shown are some of the contaminated positives used to generate the curves. These were made by sampling contamination patterns from the prior and using them to mix a face and a non-face artificially.

Figure 5 shows some natural face images that the boosted cascade [15] fails to detect, either because of occlusion or due to a degree of deviation from the frontal pose. The VIC algorithm detects them successfully, however.

Figure 5: Images that the boosted cascade [15] failed to detect as faces: the VIC algorithm produces higher posterior face probability by labelling certain regions with unusual appearance (e.g. due to 3D rotation) as contaminated. Columns: input I, hallucinated image, contamination field Q(θ = 1).

4.2 Discussion

Figure 4 shows that by modelling the contamination field explicitly, the VIC detector improves on the performance, over a contaminated test set, both of a plain RVM and of a boosted cascade detector. The algorithm is relatively expensive to execute compared, say, with the contamination-free RVM. However, this could be mitigated by cascading [4], in which a simple and efficient classifier, tuned to return a high rate of false positives for all objects, contaminated and non-contaminated, would make a preliminary sweep of a test image. The contamination-tolerant VIC algorithm would then be applied to the candidate subimages that remain, thereby concentrating computational power on just a few locations. Figure 5 illustrates the operation of the contamination mechanism on real images, all of which are detected as faces by the VIC algorithm but missed by the boosted cascade. There is no occlusion in these examples but rotations have distorted the appearance of certain features. The VIC algorithm deals with this by labelling the distortions as contaminated areas, and hallucinating face-like texture in their place. In conclusion, we have developed the VIC algorithm for object detection in the presence of coherently contaminated data. Contamination is modelled as coherent via an Ising prior, and is marginalized out by variational inference.
Experiments show that VIC classifies contaminated images more robustly than classifiers designed for clean data. It is worth pointing out that the approach of the VIC algorithm is not limited to RVMs. Any probabilistic detector for which it is possible to estimate the expectation (14) could be modified in a similar way to deal with spatially coherent contamination. Future work will address: improved efficiency by incorporating the VIC into a cascade of simple classifiers; alternatives to data hallucination using marginalization over missing data, if a tractable means of doing this can be found.

References
[1] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: An application to face detection. Proc. Conf. Computer Vision and Pattern Recognition, pages 130–136, 1997.
[2] H.A. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):23–38, 1998.
[3] S. Romdhani, P. Torr, B. Schölkopf, and A. Blake. Computationally efficient face detection. In Proc. Int. Conf. on Computer Vision, volume 2, pages 524–531, 2001.
[4] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proc. Conf. Computer Vision and Pattern Recognition, 2001.
[5] J. MacCormick and A. Blake. Spatial dependence in the observation of visual contours. In Proc. European Conf. on Computer Vision, pages 765–781, 1998.
[6] M.E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.
[7] R. Kindermann and J.L. Snell. Markov Random Fields and Their Applications. American Mathematical Society, 1980.
[8] CBCL face database #1. MIT Center for Biological and Computational Learning: http://www.ai.mit.edu/projects/cbcl.
[9] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). MIT Press, 2001.
[10] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. on Pattern Analysis and Machine Intelligence, 6(6):721–741, 1984.
[11] T. Jaakkola. Tutorial on variational approximation methods. In Advanced Mean Field Methods: Theory and Practice. MIT Press, 2000.
[12] T. Cover and J. Thomas. Elements of Information Theory. John Wiley & Sons, 1991.
[13] L. Saul, T. Jaakkola, and M. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61–76, 1996.
[14] A.K. Jain. Fundamentals of Digital Image Processing. Prentice-Hall, New Jersey, 1989.
[15] R. Lienhart and J. Maydt. An extended set of Haar-like features for rapid object detection. In Proc. IEEE ICIP, volume 1, pages 900–903, 2002.
Generalization Error and Algorithmic Convergence of Median Boosting
Balázs Kégl
Department of Computer Science and Operations Research, University of Montreal
CP 6128 succ. Centre-Ville, Montréal, Canada H3C 3J7
kegl@iro.umontreal.ca

Abstract
We have recently proposed an extension of ADABOOST to regression that uses the median of the base regressors as the final regressor. In this paper we extend theoretical results obtained for ADABOOST to median boosting and to its localized variant. First, we extend recent results on efficient margin maximizing to show that the algorithm can converge to the maximum achievable margin within a preset precision in a finite number of steps. Then we provide confidence-interval-type bounds on the generalization error.

1 Introduction
In a recent paper [1] we introduced MEDBOOST, a boosting algorithm that trains base regressors and returns their weighted median as the final regressor. In another line of research, [2, 3] extended ADABOOST to boost localized or confidence-rated experts with input-dependent weighting of the base classifiers. In [4] we propose a synthesis of the two methods, which we call LOCMEDBOOST. In this paper we analyze the algorithmic convergence of MEDBOOST and LOCMEDBOOST, and provide bounds on the generalization error. We start by describing the algorithm in its most general form, and extend the result of [1] on the convergence of the robust (marginal) training error (Section 2). The robustness of the regressor is measured in terms of the dispersion of the expert population, and with respect to the underlying average confidence estimate. In Section 3, we analyze the algorithmic convergence. In particular, we extend recent results [5] on efficient margin maximizing to show that the algorithm can converge to the maximum achievable margin within a preset precision in a finite number of steps.
In Section 4, we provide confidence-interval-type bounds on the generalization error by generalizing results obtained for ADABOOST [6, 2, 3]. As in the case of ADABOOST, the bounds justify the algorithmic objective of minimizing the robust training error. Note that the omitted proofs can be found in [4].

2 The LOCMEDBOOST algorithm and the convergence result

For the formal description, let the training data be Dn = {(x1, y1), . . . , (xn, yn)}, where data points (xi, yi) are from the set R^d × R. The algorithm maintains a weight distribution w^(t) = (w^(t)_1, . . . , w^(t)_n) over the data points. The weights are initialized uniformly in line 1, and are updated in each iteration in line 13 (Figure 1).

LOCMEDBOOST(Dn, Cϵ(y′, y), BASE(Dn, w), ϱ, T)
 1  w ← (1/n, . . . , 1/n)
 2  for t ← 1 to T
 3      (h^(t), κ^(t)) ← BASE(Dn, w)                ▷ see (1)
 4      for i ← 1 to n
 5          θi ← 1 − 2Cϵ(h^(t)(xi), yi)             ▷ base rewards
 6          κi ← κ^(t)(xi)                          ▷ base confidences
 7      α^(t) ← arg min_α e^{ϱα} Σ_{i=1}^n w^(t)_i e^{−ακiθi}
 8      if α^(t) = ∞                                ▷ κiθi ≥ ϱ for all i = 1, . . . , n
 9          return f^(t)(·) = med_{α,κ(·)} h(·)
10      if α^(t) < 0                                ▷ equivalent to Σ_{i=1}^n w^(t)_i κiθi < ϱ
11          return f^(t−1)(·) = med_{α,κ(·)} h(·)
12      for i ← 1 to n
13          w^(t+1)_i ← w^(t)_i exp(−α^(t)κiθi) / Σ_{j=1}^n w^(t)_j exp(−α^(t)κjθj) = w^(t)_i exp(−α^(t)κiθi) / Z^(t)
14  return f^(T)(·) = med_{α,κ(·)} h(·)

Figure 1: The pseudocode of the LOCMEDBOOST algorithm. Dn is the training data, Cϵ(y′, y) ≥ I{|y − y′| > ϵ} is the cost function, BASE(Dn, w) is the base regression algorithm, ϱ is the robustness parameter, and T is the number of iterations.

We suppose that we are given a base learner algorithm BASE(Dn, w) that, in each iteration t, returns a base hypothesis that consists of a real-valued base regressor h^(t) ∈ H and a non-negative base confidence function κ^(t) ∈ K.
In general, the base learner should attempt to minimize the base objective

e^(t)_1(Dn) = 2 Σ_{i=1}^n w^(t)_i κ^(t)(xi) Cϵ(h^(t)(xi), yi) − κ̄^(t), (1)

where Cϵ(y, y′) is an ϵ-dependent loss function satisfying

Cϵ(y, y′) ≥ C^(0−1)_ϵ(y, y′) = I{|y − y′| > ϵ},¹ (2)

and

κ̄^(t) = Σ_{i=1}^n wi κ^(t)(xi) (3)

is the average confidence of κ^(t) on the training set. Intuitively, e^(t)_1(Dn) is a mixture of the two objectives of error minimization and confidence maximization. The first term is a weighted regression loss where the weight of a point xi is the product of its "constant" weight w^(t)_i and the confidence κ^(t)(xi) of the base hypothesis. Minimizing this term means to place the high-confidence region of the base regressor into areas where the regression error is small. On the other hand, the minimization of the second term drives the high-confidence region of the base regressor into dense areas. After Theorem 1, we will explain the derivation of the base objective (1). To simplify the notation in Figure 1 and in Theorem 1 below, we define the base rewards θ^(t)_i and the base confidences κ^(t)_i for each training point (xi, yi), i = 1, . . . , n, base regressor h^(t), and base confidence function κ^(t), t = 1, . . . , T, as

θ^(t)_i = 1 − 2Cϵ(h^(t)(xi), yi)  and  κ^(t)_i = κ^(t)(xi), (4)

respectively.² After computing the base rewards and the base confidences in lines 5 and 6, the algorithm sets the weight α^(t) of the base regressor h^(t) to the value that minimizes the exponential loss

E^(t)_ϱ(α) = e^{ϱα} Σ_{i=1}^n w^(t)_i e^{−ακiθi}, (5)

where ϱ is a robustness parameter that has a role in keeping the algorithm in its operating range, in avoiding over- and underfitting, and in maximizing the margin (Section 3). If κiθi ≥ ϱ for all training points, then α^(t) = ∞ and E^(t)_ϱ(α^(t)) = 0, so the algorithm returns the actual regressor (line 9).

¹The indicator function I{A} is 1 if its argument A is true and 0 otherwise.
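Because E^(t)_ϱ(α) in (5) is convex in α, the minimization in line 7 can be carried out by any bracketed search; the sketch below uses a plain ternary search over a fixed interval (all numbers are illustrative):

```python
import math

def exp_loss(alpha, w, kappa_theta, rho):
    """E_rho(alpha) = e^{rho*alpha} * sum_i w_i * e^{-alpha*kappa_i*theta_i}, cf. (5)."""
    return math.exp(rho * alpha) * sum(
        wi * math.exp(-alpha * kt) for wi, kt in zip(w, kappa_theta))

def line_search(w, kappa_theta, rho, lo=0.0, hi=10.0, iters=200):
    """Ternary search: valid because E_rho is convex in alpha."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if exp_loss(m1, w, kappa_theta, rho) < exp_loss(m2, w, kappa_theta, rho):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Toy base rewards kappa_i * theta_i whose average edge exceeds rho,
# so a positive alpha with E_rho(alpha) < 1 must exist (E_rho(0) = 1).
w = [0.25, 0.25, 0.25, 0.25]
kt = [0.9, 0.8, -0.2, 0.6]
alpha = line_search(w, kt, rho=0.1)
assert alpha > 0 and exp_loss(alpha, w, kt, 0.1) < 1.0
```

The final assertion mirrors the remark after (5): when the edge Σ_i w_i κ_i θ_i is above ϱ, the minimizing α^(t) is positive and drives the loss below 1.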
Intuitively, this means that the capacity of the set of base hypotheses is too large, so we are overfitting. If α^(t) < 0, the algorithm returns the regressor up to the last iteration (line 11). Intuitively, this means that the capacity of the set of base hypotheses is too small, so we cannot find a new base regressor that would decrease the training loss. In general, α^(t) can be found easily by line search because of the convexity of E^(t)_ϱ(α). In some special cases, α^(t) can be computed analytically. In lines 9, 11, or 14, the algorithm returns the weighted median of the base regressors. For the analysis of the algorithm, we formally define the final regressor in a more general manner. First, let α̃^(t) = α^(t) / Σ_{j=1}^T α^(j) be the normalized coefficient of the base hypothesis (h^(t), κ^(t)), and let

c^(T)(x) = Σ_{t=1}^T α̃^(t) κ^(t)(x) = [Σ_{t=1}^T α^(t) κ^(t)(x)] / [Σ_{t=1}^T α^(t)] (6)

be the average confidence function³ after the Tth iteration. Let f^(T)_{ρ+}(x) and f^(T)_{ρ−}(x) be the weighted (1 + ρ/c^(T)(x))/2- and (1 − ρ/c^(T)(x))/2-quantiles, respectively, of the base regressors h^(1)(x), . . . , h^(T)(x) with respective weights α^(1)κ^(1)(x), . . . , α^(T)κ^(T)(x) (Figure 2(a)). Formally, for any ρ ∈ R, if −c^(T)(x) < ρ < c^(T)(x), let

f^(T)_{ρ+}(x) = min_j { h^(j)(x) : [Σ_{t=1}^T α^(t)κ^(t)(x) I{h^(j)(x) < h^(t)(x)}] / [Σ_{t=1}^T α^(t)κ^(t)(x)] < (1 − ρ/c^(T)(x))/2 }, (7)

f^(T)_{ρ−}(x) = max_j { h^(j)(x) : [Σ_{t=1}^T α^(t)κ^(t)(x) I{h^(j)(x) > h^(t)(x)}] / [Σ_{t=1}^T α^(t)κ^(t)(x)] < (1 − ρ/c^(T)(x))/2 }, (8)

otherwise (including the case when c^(T)(x) = 0) let f^(T)_{ρ+}(x) = ρ · (+∞) and f^(T)_{ρ−}(x) = ρ · (−∞)⁴. Then the weighted median is defined as f^(T)(·) = med_{α,κ(·)} h(·) = f^(T)_{0+}(·).

²Note that we will omit the iteration index (t) where it does not cause confusion.
³Not to be confused with κ̄^(t) in (3), which is the average base confidence over the training data.
⁴In the degenerate case we define 0 · ∞ = 0/0 = ∞.
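A weighted quantile in the spirit of (7)–(8) can be computed by sorting the base predictions and accumulating their weights α^(t)κ^(t)(x); the sketch below uses one simple tie-breaking convention, which differs slightly from the strict inequalities in (7)–(8):

```python
def weighted_quantile(values, weights, q):
    """Smallest value whose cumulative normalised weight reaches q."""
    total = sum(weights)
    pairs = sorted(zip(values, weights))
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= q * total:
            return v
    return pairs[-1][0]

def weighted_median(values, weights):
    return weighted_quantile(values, weights, 0.5)

# Base regressor outputs h^(t)(x) with weights alpha_t * kappa_t(x)
# (illustrative numbers): one outlier does not move the median.
h = [1.0, 2.0, 3.0, 10.0]
w = [1.0, 1.0, 1.0, 0.5]
assert weighted_median(h, w) == 2.0
```

The example illustrates the robustness motivation behind taking a median rather than a weighted mean: the outlying base prediction 10.0 leaves the combined prediction untouched.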
Figure 2: (a) Weighted (1 + ρ/c^(T)(x))/2- and (1 − ρ/c^(T)(x))/2-quantiles, and the weighted median of linear base regressors with equal weights α^(t) = 1/9, constant base confidence functions κ(x) ≡ 1, and ρ/c^(T)(x) ≡ 0.25. (b) ρ-robust ϵ-precise regressor.

To assess the final regressor f^(T)(·), we say that f^(T)(·) is ρ-robust ϵ-precise on (xi, yi) if and only if f^(T)_{ρ+}(xi) ≤ yi + ϵ and f^(T)_{ρ−}(xi) ≥ yi − ϵ. For ρ ≥ 0, this condition is equivalent to both quantiles being in the "ϵ-tube" around yi (Figure 2(b)). In the rest of this section we show that the algorithm minimizes the relative frequency of training points on which f^(T)(·) is not ϱ-robust ϵ-precise. Formally, let the ρ-robust ϵ-precise training error of f^(T) be defined as

L^(ρ)(f^(T)) = (1/n) Σ_{i=1}^n I{ f^(T)_{ρ+}(xi) > yi + ϵ ∨ f^(T)_{ρ−}(xi) < yi − ϵ }.⁵ (9)

If ρ = 0, L^(0)(f^(T)) gives the relative frequency of training points on which the regressor f^(T) has a larger L1 error than ϵ. If we have equality in (2), this is exactly the average loss of the regressor f^(T) on the training data. A small value for L^(0)(f^(T)) indicates that the regressor predicts most of the training points with ϵ-precision, whereas a small value for L^(ρ)(f^(T)) with a positive ρ suggests that the prediction is not only precise but also robust in the sense that a small perturbation of the base regressors and their weights will not increase L^(0)(f^(T)). For classification with bi-valued base classifiers h : R^d → {−1, 1}, the definition (9) (with ϵ = 1) recovers the traditional notion of robust training error, that is, L^(ρ)(f^(T)) is the relative frequency of data points with margin smaller than ρ. The following theorem upper bounds the ρ-robust ϵ-precise training error L^(ρ) of the regressor f^(T) output by LOCMEDBOOST.
Theorem 1 Let L^(ρ)(f^(T)) be defined as in (9) and suppose that condition (2) holds for the loss function Cϵ(·, ·). Define the base rewards θ^(t)_i and the base confidences κ^(t)_i as in (4). Let w^(t)_i be the weight of training point xi after the tth iteration (updated in line 13 in Figure 1), and let α^(t) be the weight of the base regressor h^(t)(·) (computed in line 7 in Figure 1). Then for all ρ ∈ R,

L^(ρ)(f^(T)) ≤ Π_{t=1}^T E^(t)_ρ(α^(t)), (10)

where E^(t)_ρ(α^(t)) is defined in (5).

The proof is based on the observation that if the median of the base regressors goes further than ϵ from the real response yi at training point xi, then most of the base regressors must also be far from yi, giving small base rewards to this point. The goal of LOCMEDBOOST is to minimize L^(ρ)(f^(T)) at ρ = ϱ so, in view of Theorem 1, our goal in each iteration t is to minimize E^(t)_ϱ in (5). To derive the base objective (1), we follow the two-step functional gradient descent procedure [7]: first we maximize the negative gradient −E′_ϱ(α) at α = 0, then we do a line search to determine α^(t). Using this approach, the base objective becomes e1(Dn) = −Σ_{i=1}^n w^(t)_i κiθi, which is identical to (1). Note that since E^(t)_ϱ(α) is convex and E^(t)_ϱ(0) = 1, a positive α^(t) means that min_α E^(t)_ϱ(α) = E^(t)_ϱ(α^(t)) < 1, so the condition in line 10 in Figure 1 guarantees that the upper bound of (10) decreases in each step.

⁵For the sake of simplicity, in the notation we suppress the fact that L^(ρ) depends on the whole sequence of base regressors, base confidences, and weights, not only on the final regressor f^(T).
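The robust training error L^(ρ) of (9) can be evaluated directly from weighted quantiles of the base predictions. A sketch for the special case κ ≡ 1 and equal coefficients, so that c(x) ≡ 1 (the numbers are toy values, and the quantile uses a simpler convention than the strict inequalities of (7)–(8)):

```python
def weighted_quantile(values, weights, q):
    """Smallest value whose cumulative normalised weight reaches q."""
    total = sum(weights)
    pairs = sorted(zip(values, weights))
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= q * total:
            return v
    return pairs[-1][0]

def robust_error(preds, y, eps, rho):
    """Fraction of points whose (1+rho)/2 and (1-rho)/2 quantiles of the
    base predictions leave the eps-tube around y_i, cf. (9)."""
    bad = 0
    for i, yi in enumerate(y):
        col = [p[i] for p in preds]        # h^(t)(x_i) over the T regressors
        w = [1.0] * len(col)               # kappa == 1, equal alpha
        hi = weighted_quantile(col, w, (1 + rho) / 2)
        lo = weighted_quantile(col, w, (1 - rho) / 2)
        if hi > yi + eps or lo < yi - eps:
            bad += 1
    return bad / len(y)

# Three base regressors evaluated at two points (illustrative numbers):
# the median prediction for point 2 falls outside its eps-tube.
preds = [[1.0, 5.0], [1.1, 0.0], [0.9, 0.2]]
y = [1.0, 1.0]
assert robust_error(preds, y, eps=0.2, rho=0.0) == 0.5
```

With ρ = 0 both quantiles collapse to the weighted median, so the computed value is exactly the relative frequency of points whose median prediction has L1 error above ϵ, as described after (9).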
A too small ϱ means that the algorithm can overfit and stop in line 9. In binary classification this is an unrealistic situation: it means that there is a base classifier that correctly classifies all data points. On the other hand, it can happen easily in the abstaining classifier/regressor model, when κ^(t)(x) = 0 on a possibly large input region. In this case, a base classifier can correctly classify (or a base regressor can give positive base rewards θi to) all data points on which it does not abstain, so if ϱ = 0, the algorithm stops in line 9. At the other end of the spectrum, a large ϱ can make the algorithm underfit and stop in line 11, so one needs to set ϱ carefully in order to avoid early stopping in lines 9 or 11. From the point of view of generalization, ϱ also has an important role as a regularization parameter. A larger ϱ decreases the stepsize α^(t) in the functional gradient view. From another aspect, a larger ϱ decreases the effective capacity of the class of base hypotheses by restricting the set of admissible base hypotheses to those having small errors. In general, ϱ has a potential role in balancing between over- and underfitting so, in practice, we suggest that it be validated together with the number of iterations T and other possible complexity parameters of the base hypotheses. In the context of ADABOOST, there have been several proposals to set ϱ in an adaptive way to effectively maximize the minimum margin. In the rest of this section, we extend the analysis of marginal boosting [5] to this general case. Although the aggressive maximization of the minimum margin can lead to overfitting, the analysis can provide valuable insight into the understanding of LOCMEDBOOST and so it can guide the setting of ϱ in practice. For the sake of simplicity, let us assume that base hypotheses (h, κ) come from a finite set⁶ HN with cardinality N, and let H^(t) = {(h^(1), κ^(1)), . . . , (h^(t), κ^(t))} be the set of base hypotheses after the tth iteration.
Let us define the edge of the base hypothesis (h, κ) ∈ HN as⁷

γ_{(h,κ)}(w) = Σ_{i=1}^n wi κi θi = Σ_{i=1}^n wi κ(xi) (1 − 2Cϵ(h(xi), yi)),

and the maximum edge in the tth iteration as γ*^(t) = max_{(h,κ)∈HN} γ_{(h,κ)}(w^(t)). Note that γ_{(h,κ)}(w) = −e1(Dn), so with this terminology, the objective of the base learner is to maximize the edge γ^(t) = γ_{(h^(t),κ^(t))}(w^(t)) (if the maximum is achieved, then γ^(t) = γ*^(t)), and the algorithm stops in line 11 if the edge γ^(t) is less than ϱ. On the other hand, let us define the margin on a point (x, y) as the average reward⁸

ρ_{(x,y)}(α) = Σ_{j=1}^N α̃^(j) κ^(j) θ^(j) = Σ_{j=1}^N α̃^(j) κ^(j)(x) (1 − 2Cϵ(h^(j)(x), y)).

Let us denote the minimum margin over the data points in the tth iteration by

ρ*^(t) = min_{(x,y)∈Dn} ρ_{(x,y)}(α^(t−1)), (11)

where α^(t−1) = (α^(1), . . . , α^(t−1)) is the vector of base hypothesis coefficients up to the (t−1)th iteration. It is easy to see that in each iteration, the maximum edge over the base hypotheses is at least the minimum margin over the training points:

γ*^(t) = max_{(h,κ)∈HN} γ_{(h,κ)}(w^(t)) ≥ min_{(x,y)∈Dn} ρ_{(x,y)}(α^(t−1)) = ρ*^(t).

Moreover, as several authors (e.g., [5]) noted in the context of ADABOOST, by the Min-Max Theorem of von Neumann [8] we have

γ* = min_w max_{(h,κ)∈HN} γ_{(h,κ)}(w) = max_α min_{(x,y)∈Dn} ρ_{(x,y)}(α) = ρ*,

so the minimum achievable maximal edge by any weighting over the training points is equal to the maximum achievable minimal margin by any weighting over the base hypotheses. To converge to ρ* within a factor ν in finite time, [5] sets ϱ^(t)_RW = min_{j=1,...,t} γ^(j) − ν, and shows that ρ*^(t) exceeds ρ* − ν after ⌈(2 log n)/ν²⌉ + 1 steps. In the following, we extend these results to the general case of LOCMEDBOOST.

⁶The analysis can be extended to infinite base sets along the lines of [5].
⁷For the sake of simplicity, in the notation we suppress the dependence of γ_{(h,κ)} on Dn.
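The inequality γ*^(t) ≥ ρ*^(t) holds for any point weighting w and any coefficient vector α̃, because replacing the maximizing hypothesis by an α̃-average can only decrease the edge, and a w-average over points is never below the per-point minimum. A quick numerical check on random reward matrices (κ ≡ 1, binary rewards; the matrix sizes are arbitrary):

```python
import random

random.seed(0)

def max_edge(M, w):
    """gamma* = max over base hypotheses h of sum_i w_i * M[i][h]."""
    return max(sum(w[i] * M[i][h] for i in range(len(w)))
               for h in range(len(M[0])))

def min_margin(M, a):
    """rho* = min over points i of sum_h a_h * M[i][h]."""
    return min(sum(a[h] * M[i][h] for h in range(len(a)))
               for i in range(len(M)))

for _ in range(100):
    n, N = 5, 4
    M = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(n)]
    w = [random.random() for _ in range(n)]
    w = [x / sum(w) for x in w]      # normalise to a distribution over points
    a = [random.random() for _ in range(N)]
    a = [x / sum(a) for x in a]      # normalise to a distribution over hypotheses
    assert max_edge(M, w) >= min_margin(M, a) - 1e-12
```

Here M[i][h] plays the role of the reward κ(x_i)(1 − 2Cϵ(h(x_i), y_i)); the assertion never fails, in agreement with the min-max relation quoted from von Neumann [8].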
First we define the minimum and maximum achievable base rewards by

ρmin = min_{(h,κ)∈HN} min_{(x,y)∈Dn} κ(x)(1 − 2Cϵ(h(x), y)), (12)
ρmax = max_{(h,κ)∈HN} max_{(x,y)∈Dn} κ(x)(1 − 2Cϵ(h(x), y)), (13)

respectively. Let A = ρmax − ρmin, γ̃^(t) = γ^(t) − ρmin, and ϱ̃^(t) = ϱ^(t) − ρmin.⁹

Lemma 1 (Generalization of Lemma 3 in [5]) Assume that ρmin ≤ ϱ^(t) ≤ γ^(t). Then

E^(t)_{ϱ^(t)}(α^(t)) ≤ exp( −[ϱ̃^(t)/A] log[ϱ̃^(t)/γ̃^(t)] − [(A − ϱ̃^(t))/A] log[(A − ϱ̃^(t))/(A − γ̃^(t))] ). (14)

Finite convergence of LOCMEDBOOST, both with ϱ^(t) = ϱ = const. and with an adaptive ϱ^(t) = ϱ^(t)_RW, is based on the following general result.

Theorem 2 Assume that ϱ^(t) ≤ γ^(t) − ν. Let ρ = Σ_{t=1}^T α̃^(t)ϱ^(t). Then L^(ρ)(f^(T)) = 0 (so ρ*^(t) > ρ) after at most T = ⌈(A² log n)/(2ν²)⌉ + 1 iterations.

The first consequence is the convergence of LOCMEDBOOST with a constant ϱ.

Corollary 1 (Generalization of Corollary 4 in [5]) Assume that the weak learner always achieves an edge γ^(t) ≥ ρ*. If ρmin ≤ ϱ < ρ*, then ρ*^(t) > ϱ after at most T = ⌈(A² log n)/(2(ρ* − ϱ)²)⌉ + 1 steps.

The second corollary shows that if ϱ is set adaptively to ϱ^(t)_RW then the minimum margin ρ*^(t) will converge to ρ* within a precision ν in a finite number of steps.

Corollary 2 (Generalization of Theorem 6 in [5]) Assume that the weak learner always achieves an edge γ^(t) ≥ ρ*. If ρmin ≤ ϱ^(t) = γ^(t) − ν, ν > 0, then ρ*^(t) > ρ* − ν after at most T = ⌈(A² log n)/(2ν²)⌉ + 1 iterations.

⁸For the sake of simplicity, in the notation we suppress the dependence of ρ_{(x,y)} on HN.
⁹In binary classification, ρmin = −1, ρmax = 1, A = 2, γ̃^(t) = 1 + γ^(t), and ϱ̃^(t) = 1 + ϱ^(t).

4 The generalization error

In this section we extend probabilistic bounds on the generalization error obtained for ADABOOST [6], confidence-rated ADABOOST [2], and localized boosting [3]. Here we suppose that the data set Dn is generated independently according to a distribution D over R^d × R.
The results provide bounds on the confidence-interval-type error

L(f^(T)) = P_D[ |f^(T)(X) − Y| > ϵ ],

where (X, Y) is a random point generated according to D independently from the points in Dn. The bounds state that with a large probability,

L(f^(T)) < L^(ρ)(f^(T)) + C(n, ρ, H, K),

where the complexity term C depends on the size or the pseudo-dimension of the base regressor set H, and the smoothness of the base confidence functions in K. As in the case of ADABOOST, these bounds qualitatively justify the minimization of the robust training error L^(ρ)(f^(T)). Let C be the set of combined regressors obtained as a weighted median of base regressors from H, that is,

C = { f(·) = med_{α,κ(·)} h(·) : h ∈ H^N, α ∈ R₊^N, κ ∈ K^N, N ∈ Z₊ }.

In the simplest case, we assume that H is finite and base coefficients are constant.

Theorem 3 (Generalization of Theorem 1 in [6]) Let D be a distribution over R^d × R, and let Dn be a sample of n points generated independently at random according to D. Assume that the base regressor set H is finite, and K contains only κ(x) ≡ 1. Then with probability 1 − δ over the random choice of the training set Dn, any f ∈ C satisfies the following bound for all ρ > 0:

L(f) < L^(ρ)(f) + O( (1/√n) [ (log n log |H|)/ρ² + log(1/δ) ]^{1/2} ).

Similarly to the proof of Theorem 1 in [6], we construct a set C_N that contains unweighted medians of N base functions from H, then approximate f by g(·) = med₁(h₁(·), . . . , h_N(·)) ∈ C_N, where the base functions h_i are selected randomly according to the coefficient distribution α̃. We then separate the one-sided error into two terms by

P_D[f(X) > Y + ϵ] ≤ P_D[g_{(ρ/2)+}(X) > Y + ϵ] + P_D[g_{(ρ/2)+}(X) ≤ Y + ϵ | f(X) > Y + ϵ],

and then upper bound the two terms as in [6]. The second theorem extends the first to the case of infinite base regressor sets.

Theorem 4 (Generalization of Theorem 2 of [6]) Let D be a distribution over R^d × R, and let Dn be a sample of n points generated independently at random according to D.
Assume that the base regressor set H has pseudodimension p, and K contains only κ(x) ≡ 1. Then with probability 1 −δ over the random choice of the training set Dn, any f ∈C satisfies the following bound for all ρ > 0: L(f) < L(ρ)(f) + O 1 √n p log2(n/p) ρ2 + log 1 δ 1/2! . The proof goes as in Theorem 3 and in Theorem 2 in [6] until we upper bound the shatter coefficient of the set A = n (x, y) : g ρ 2 +(x) > y + ϵ : g ∈CN, ρ = 0, 4 N , . . . , 2N N o by (N/2 + 1)(en/p)pN where p is the pseudodimension of H (or the VC dimension of H+ = {(x, y) : h(x) > y} : h ∈H ). In the most general case K can contain smooth functions. Theorem 5 (Generalization of Theorem 1 of [3]) Let D be a distribution over Rd × R, and let Dn be a sample of n points generated independently at random according to D. Assume that the base regressor set H has pseudodimension p, and K contains functions κ(x) which are lower bounded by a constant a, and which satisfy for all x, x′ ∈Rd the Lipschitz condition |κ(x) −κ(x′)| ≤L∥x −x′∥∞. Then with probability 1 −δ over the random choice of the training set Dn, any f ∈C satisfies the following bound for all ρ > 0: L(f) < L(ρ)(f) + O 1 √n (L/(aρ))dp log2(n/p) ρ2 + log 1 δ 1/2! . 5 Conclusion In this paper we have analyzed the algorithmic convergence of LOCMEDBOOST by generalizing recent results on efficient margin maximization, and provided bounds on the generalization error by extending similar bounds obtained for ADABOOST. References [1] B. K´egl, “Robust regression by boosting the median,” in Proceedings of the 16th Conference on Computational Learning Theory, Washington, D.C., 2003, pp. 258–272. [2] R. E. Schapire and Y. Singer, “Improved boosting algorithms using confidence-rated predictions,” Machine Learning, vol. 37, no. 3, pp. 297–336, 1999. [3] R. Meir, R. El-Yaniv, and S. Ben-David, “Localized boosting,” in Proceedings of the 13th Annual Conference on Computational Learning Theory, 2000, pp. 190–199. [4] B. 
Kégl, "Confidence-rated regression by boosting the median," Tech. Rep. 1241, Department of Computer Science, University of Montreal, 2004.
[5] G. Rätsch and M. K. Warmuth, "Efficient margin maximizing with boosting," Journal of Machine Learning Research (submitted), 2003.
[6] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee, "Boosting the margin: a new explanation for the effectiveness of voting methods," Annals of Statistics, vol. 26, no. 5, pp. 1651–1686, 1998.
[7] L. Mason, P. Bartlett, J. Baxter, and M. Frean, "Boosting algorithms as gradient descent," in Advances in Neural Information Processing Systems, vol. 12, pp. 512–518. The MIT Press, 2000.
[8] J. von Neumann, "Zur Theorie der Gesellschaftsspiele," Math. Ann., vol. 100, pp. 295–320, 1928.
Supervised graph inference
Jean-Philippe Vert
Centre de Géostatistique, Ecole des Mines de Paris
35 rue Saint-Honoré, 77300 Fontainebleau, France
Jean-Philippe.Vert@mines.org
Yoshihiro Yamanishi
Bioinformatics Center, Institute for Chemical Research, Kyoto University
Uji, Kyoto 611-0011, Japan
yoshi@kuicr.kyoto-u.ac.jp

Abstract
We formulate the problem of graph inference where part of the graph is known as a supervised learning problem, and propose an algorithm to solve it. The method involves the learning of a mapping of the vertices to a Euclidean space where the graph is easy to infer, and can be formulated as an optimization problem in a reproducing kernel Hilbert space. We report encouraging results on the problem of metabolic network reconstruction from genomic data.

1 Introduction
The problem of graph inference, or graph reconstruction, is to predict the presence or absence of edges between a set of points known to form the vertices of a graph, the prediction being based on observations about the points. This problem has recently drawn a lot of attention in computational biology, where the reconstruction of various biological networks, such as gene or molecular networks, from genomic data is a core prerequisite of the recent field of systems biology, which aims at investigating the structures and properties of such networks. As an example, the in silico reconstruction of protein interaction networks [1], gene regulatory networks [2] or metabolic networks [3] from large-scale data generated by high-throughput technologies, including genome sequencing or microarrays, is one of the main challenges of current systems biology. Various approaches have been proposed to solve the network inference problem. Bayesian [2] or Petri networks [4] are popular frameworks to model the gene regulatory or the metabolic network, and include methods to infer the network from data such as gene expression or metabolite concentrations [2].
In other cases, such as inferring protein interactions from gene sequences or gene expression, these models are less relevant, and more direct approaches involving the prediction of edges between “similar” nodes have been tested [5, 6]. These approaches are unsupervised, in the sense that they base their prediction on prior knowledge about which edges should be present for a given set of points; this prior knowledge might for example be based on a model of conditional independence in the case of Bayesian networks, or on the assumption that edges should connect similar points. The actual situations we are confronted with, however, can often be expressed in a supervised framework: besides the data about the vertices, part of the network is already known. This is obviously the case with all the network examples discussed above, and the real challenge is to denoise the observed subgraph, if errors are assumed to be present, and to infer new edges involving in particular nodes outside of the observed subgraph. In order to clarify this point, let us take the example of an actual network inference problem that we treat in the experiment below: the inference of the metabolic network from various genomic data. The metabolic network is a graph of genes that involves only a subset of all the genes of an organism, known as enzymes. Enzymes can catalyze chemical reactions, and an edge between two enzymes indicates that they can catalyze two successive reactions. For most organisms, this graph is partially known, because many enzymes have already been characterized. However, many enzymes are also missing, and the problem is to detect uncharacterized enzymes and place them in their correct location in the metabolic network. Mathematically speaking, this means adding new edges involving new points, and possibly modifying edges in the known graph to remove mistakes from our current knowledge.
In this contribution we propose an algorithm for supervised graph inference, i.e., to infer a graph from observations about the vertices and from the knowledge of part of the graph. Several attempts have already been made to formalize the network inference problem as a supervised machine learning problem [1, 7], but these attempts consist in predicting each edge independently of the others using algorithms for supervised classification. We propose below a radically different setting, where the known subgraph is used to extract a new representation for the vertices, as points in a vector space, where the structure of the graph is easier to infer than from the original observations. The edge inference engine in the vector space is very simple (edges are inferred between nodes with similar representations), and the learning step is limited to the construction of the mapping of the nodes onto the vector space. 2 The supervised graph inference problem Let us formally define the supervised graph inference problem. We suppose an undirected simple graph G = (V, E) is given, where V = (v_1, ..., v_n) ∈ 𝒱^n is a set of vertices and E ⊂ V × V is a set of edges. The problem is, given an additional set of vertices V' = (v'_1, ..., v'_m) ∈ 𝒱^m, to infer a set of edges E' ⊂ V' × (V ∪ V') ∪ (V ∪ V') × V' involving the nodes in V'. In many situations of interest, in particular gene networks, the additional nodes V' might be known in advance, but we do not make this assumption here, to keep the setting as general as possible. For the applications we have in mind, the vertices can be represented in 𝒱 by a variety of data types, including but not limited to biological sequences, molecular structures, expression profiles or metabolite concentrations.
In order to allow this diversity and take advantage of recent work on positive definite kernels on general sets [8], we will assume that 𝒱 is a set endowed with a positive definite kernel k, that is, a symmetric function k : 𝒱² → R satisfying Σ_{i,j=1}^{p} a_i a_j k(x_i, x_j) ≥ 0 for any p ∈ N, (a_1, ..., a_p) ∈ R^p and (x_1, ..., x_p) ∈ 𝒱^p. 3 From distance learning to graph inference Suppose first that a graph must be inferred on p points (x_1, ..., x_p) in the Euclidean space R^d, without further information than that “similar points” should be connected. Then the simplest strategy to predict edges between the points is to put an edge between vertices that are at a distance from each other smaller than a fixed threshold δ. More or fewer edges can be inferred by varying the threshold. We call this strategy the “direct” strategy. We now propose to cast the supervised graph inference problem as a two-step procedure: • map the original points to a Euclidean space through a mapping f : 𝒱 → R^d; • apply the direct strategy to infer the network on the points {f(v), v ∈ V ∪ V'}. While the second part of this procedure is fixed, the first part can be optimized by supervised learning of f using the known network. To do so we require the mapping f to map adjacent vertices in the known graph to nearby positions in R^d, in order to ensure that the known graph can be recovered to some extent by the direct strategy. Stated this way, the problem of learning f appears similar to a problem of distance learning that has been raised in the context of clustering [9], an important difference being that we need to define a new representation of the points, and therefore a new (Euclidean) distance, not only for the points in the training set, but also for points unknown during training. Given a function f : 𝒱 → R, a possible criterion to assess whether connected (resp. disconnected) vertices are mapped onto similar (resp.
dissimilar) points in R is the following: R(f) = [ Σ_{(u,v)∈E} (f(u) − f(v))² − Σ_{(u,v)∉E} (f(u) − f(v))² ] / Σ_{(u,v)∈V²} (f(u) − f(v))². (1) A small value of R(f) ensures that connected vertices tend to be closer than disconnected vertices (in a quadratic error sense). Observe that the normalization by the denominator ensures the invariance of R(f) with respect to a scaling of f by a constant, which is consistent with the fact that the direct strategy itself is invariant with respect to a scaling of the points. Let us denote by f_V = (f(v_1), ..., f(v_n))^⊤ ∈ R^n the values taken by f on the training set, and by L the combinatorial Laplacian of the graph G, i.e., the n × n matrix where L_{i,j} is equal to −1 (resp. 0) if i ≠ j and vertices v_i and v_j are connected (resp. disconnected), and L_{i,i} = −Σ_{j≠i} L_{i,j}. If we restrict f_V to have zero mean (Σ_{v∈V} f(v) = 0), then the criterion (1) can be rewritten as follows: R(f) = 4 f_V^⊤ L f_V / (f_V^⊤ f_V) − 2. The obvious minimum of R(f) under the constraint Σ_{v∈V} f(v) = 0 is reached for any function f such that f_V is equal to the second smallest eigenvector of L (the smallest eigenvector of L being the constant vector). However, this only defines the values of f on the points of V, but leaves indeterminacy on the values of f outside of V. Moreover, any arbitrary choice of f under a single constraint on f_V is likely to be a mapping that overfits the known graph at the expense of the capacity to infer the unknown edges. To overcome both issues, we propose to regularize the criterion (1) by a smoothness functional on f, a classical approach in statistical learning [10, 11]. A convenient setting is to assume that f belongs to the reproducing kernel Hilbert space (r.k.h.s.) H defined by the kernel k on 𝒱, and to use the norm of f in the r.k.h.s. as a regularization operator.
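The Laplacian form of the criterion can be checked numerically. Below is a minimal sketch, not from the paper, that builds the combinatorial Laplacian exactly as defined and verifies that the quadratic form f_V^⊤ L f_V equals the sum of squared differences over the edges, i.e., the connected-pair term appearing in criterion (1); the toy path graph and the helper name are our illustrative assumptions.

```python
import numpy as np

def combinatorial_laplacian(n, edges):
    # L[i, j] = -1 if i != j and (i, j) is an edge, 0 otherwise;
    # L[i, i] = -(sum of the off-diagonal entries of row i).
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

# Toy graph: a path 0-1-2-3, and a zero-mean vector f_V.
edges = [(0, 1), (1, 2), (2, 3)]
L = combinatorial_laplacian(4, edges)
f = np.array([-1.5, -0.5, 0.5, 1.5])

quad = f @ L @ f                                    # f_V^T L f_V
direct = sum((f[i] - f[j]) ** 2 for i, j in edges)  # edge-wise squared differences
```

For this path graph both quantities equal 3.0, so the Laplacian quadratic form recovers exactly the edge term of the criterion.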
The regularized criterion to be minimized becomes: min_{f∈H₀} ( f_V^⊤ L f_V + λ ||f||²_H ) / ( f_V^⊤ f_V ), (2) where H₀ = { f ∈ H : Σ_{v∈V} f(v) = 0 } is the subset of H orthogonal to the function x ↦ Σ_{v∈V} k(x, v) in H and λ is a regularization parameter. We note that [12] have recently and independently proposed a similar formulation in the context of clustering. The regularization parameter controls the trade-off between minimizing the original criterion (1) and ensuring that the solution has a small norm in the r.k.h.s. When λ varies, the solution to (2) varies between two extremes: • When λ is small, f_V tends to the second smallest eigenvector of the Laplacian L. The regularization ensures that f is well defined as a function 𝒱 → R, but f is likely to overfit the known graph. • When λ is large, the solution to (2) converges to the first kernel principal component (up to a scaling) [13], whatever the graph. Even though no supervised learning is performed in this case, one can observe that the resulting transformation, when the first d kernel principal components are kept, is similar to the operation performed in spectral clustering [14, 15], where points are mapped onto the first few eigenvectors of a similarity matrix before being clustered. Before showing how (2) is solved in practice, we must complete the picture by explaining how the mapping f : 𝒱 → R^d is obtained. First note that the criterion in (2) is defined up to a scaling of the functions, and the solution is therefore a direction in the r.k.h.s. In order to extract a function, an additional constraint must be set, such as imposing the norm ||f||_{H_V} = 1, or imposing Σ_{v∈V} f(v)² = 1. The first solution corresponds to an orthogonal projection onto the direction selected in the r.k.h.s. (which would for example give the same result as kernel PCA for large λ), while the second solution would provide a sphering of the data.
We tested both possibilities in practice and found very little difference, with however slightly better results for the first solution (imposing ||f||_{H_V} = 1). Second, the problem (2) only defines a one-dimensional feature. In order to get a d-dimensional representation of the vertices, we propose to iterate the minimization of (2) under orthogonality constraints in the r.k.h.s., that is, we recursively define the i-th feature f_i for i = 1, ..., d by: f_i = argmin_{f ∈ H₀, f ⊥ {f_1, ..., f_{i−1}}} ( f_V^⊤ L f_V + λ ||f||²_H ) / ( f_V^⊤ f_V ). (3) 4 Implementation Let k_V be the kernel obtained by centering k on the set V, i.e., k_V(x, y) = k(x, y) − (1/n) Σ_{v∈V} k(x, v) − (1/n) Σ_{v∈V} k(y, v) + (1/n²) Σ_{(v,v')∈V²} k(v, v'), and let H_V be the r.k.h.s. associated with k_V. Then it can easily be checked that H_V = H₀, where H₀ is defined in the previous section as the subset of H of the functions with zero mean on V. A simple extension of the representer theorem [10] in the r.k.h.s. H_V shows that for any i = 1, ..., d, the solution to (3) has an expansion of the form: f_i(x) = Σ_{j=1}^{n} α_{i,j} k_V(v_j, x), for some vector α_i = (α_{i,1}, ..., α_{i,n})^⊤ ∈ R^n. The corresponding vector f_{i,V} can be written in terms of α_i by f_{i,V} = K_V α_i, where K_V is the Gram matrix of the kernel k_V on the set V, i.e., [K_V]_{i,j} = k_V(v_i, v_j) for i, j = 1, ..., n. K_V is obtained from the Gram matrix K of the original kernel k by the classical formula K_V = (I − U) K (I − U), I being the n × n identity matrix and U the constant n × n matrix [U]_{i,j} = 1/n for i, j = 1, ..., n [13]. Besides, the norm in H_V is equal to ||f_i||²_{H_V} = α_i^⊤ K_V α_i, and the orthogonality constraint between f_i and f_j in H_V translates into α_i^⊤ K_V α_j = 0. As a result, the problem (2) is equivalent to the following: α_i = argmin_{α ∈ R^n, α^⊤K_V α_1 = ... = α^⊤K_V α_{i−1} = 0} ( α^⊤ K_V L K_V α + λ α^⊤ K_V α ) / ( α^⊤ K_V² α ).
(4) Setting the differential of (4) with respect to α to zero, we see that the first vector α_1 must solve the following generalized eigenvector problem with the smallest (non-negative) generalized eigenvalue: (K_V L K_V + λ K_V) α = µ K_V² α. (5) This shows that α_1 must solve the following problem: (L K_V + λ I) α = µ K_V α, (6) up to the addition of a vector ϵ satisfying K_V ϵ = 0. Hence any solution of (5) differs from a solution of (6) by such an ϵ, which however does not change the corresponding function f ∈ H_V. It is therefore enough to solve (6) in order to find the first vector α_1. K being positive semidefinite, the other generalized eigenvectors of (6) are conjugate with respect to K_V, so it can easily be checked that the d vectors α_1, ..., α_d solving (4) are in fact the d smallest generalized eigenvectors of (6). In practice, for large n, the generalized eigenvector problem (6) can be solved by first performing an incomplete Cholesky decomposition of K_V, see e.g. [16]. Figure 1: ROC score for different numbers of features and regularization parameters (axes: number of features vs. log2 regularization parameter), in a 5-fold cross-validation experiment with the integrated kernel; panels: (a) train vs train, (b) test vs (train + test), (c) test vs test; the color scale is adjusted to highlight the variations inside each figure, the performance increasing from blue to red. 5 Experiment We tested the supervised graph inference method described in the previous section on the problem of inferring a gene network of interest in computational biology: the metabolic gene network, with enzymes present in an organism as vertices, and edges between two enzymes when they can catalyze successive chemical reactions [17]. Focusing on the budding yeast S.
cerevisiae, the network corresponding to our current knowledge was extracted from the KEGG database [18]. The resulting network contains 769 vertices and 7404 edges. In order to infer it, various independent data about the genes can be used. We focus on three sources of data, likely to contain useful information to infer the graph: a set of 157 gene expression measurements obtained from DNA microarrays [19, 20], the phylogenetic profiles of the genes [21] as vectors of 145 bits indicating the presence or absence of each gene in 145 fully sequenced genomes, and their localization in the cell determined experimentally [22] as vectors of 23 bits indicating the presence of each gene in each of 23 compartments of the cell. In each case a Gaussian RBF kernel was used to represent the data as a kernel matrix. We denote these three datasets as “exp”, “phy” and “loc” below. Additionally, we considered a fourth kernel obtained by summing the first three kernels. This is a simple approach to data integration that has proved to be useful in [23], for example. This integrated kernel is denoted “int” below. We performed 5-fold cross-validation experiments as follows. For each random split of the set of genes into 80% (training set) and 20% (test set), the features are learned from the subgraph with genes from the training set as vertices. The edges involving genes in the test set are then predicted among all possible interactions involving the test set. The performance of the inference is estimated in terms of ROC curves (the percentage of actual edges correctly predicted, plotted as a function of the percentage of predicted edges that are not actually present), and in terms of the area under the ROC curve normalized between 0 and 1. Notice that the set of possible interactions to be predicted is made of interactions between two genes in the test set, on the one hand, and between one gene in the test set and one gene in the training set, on the other hand.
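The kernel-centering step of Section 4, K_V = (I − U) K (I − U), which underlies all the Gram matrices used in these experiments, is worth a quick sanity check: every row and column of K_V must sum to zero, reflecting the zero-mean constraint defining H_V. A minimal sketch (the random toy Gram matrix is our illustrative assumption, not data from the paper):

```python
import numpy as np

def center_gram(K):
    # K_V = (I - U) K (I - U), with U the constant matrix U[i, j] = 1/n.
    n = K.shape[0]
    U = np.full((n, n), 1.0 / n)
    I = np.eye(n)
    return (I - U) @ K @ (I - U)

# Toy positive semidefinite Gram matrix built from random features.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
K = X @ X.T
KV = center_gram(K)
# Rows and columns of K_V sum to zero, and K_V stays symmetric.
```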
As it might be more challenging to infer an edge in the former case, we compute two performances: first on the edges involving two nodes in the test set, and second on the edges involving at least one vertex in the test set. The algorithm contains 2 free parameters: the number d of features to be kept, and the regularization parameter λ that prevents overfitting of the known graph. We varied λ among the values 2^i, for i = −5, ..., 8, and d between 1 and 100. Figure 1 displays the performance in terms of ROC index for the graph inference with the integrated kernel, for different values of d and λ. On the training set, it can be seen that increasing λ constantly decreases the performance of the graph reconstruction, which is natural since smaller values of λ are expected to overfit the training graph. These results however justify that the criterion (1), although not directly related to the ROC index of the graph reconstruction procedure, is a useful criterion to optimize. As an example, for very small values of λ, the ROC index on the training set is above 96%. The results on the test vs. test and on the test vs. (train + test) experiments show that overfitting indeed occurs for small λ values, and that there is an optimum, both in terms of d and λ. The slight difference between the performance landscapes in the experiments “test vs. test” and “test vs. (train + test)” shows that the first one is indeed more difficult than the latter, where some form of overfitting is likely to occur (in the mapping of the vertices in the training set). In particular the “test vs. test” setting seems to be more sensitive to the number of features selected than the other setting. The absolute values of the ROC scores when 20 features are selected, for varying λ, are shown in figure 2. For all kernels tested, overfitting occurs at small λ values, and an optimum exists (around λ = 2 ∼ 10). The performance in the setting “test vs.
(train+test)” is consistently better than that in the setting “test vs. test”. Finally, and more interestingly, the inference with the integrated kernel outperforms the inference with each individual kernel. This is further highlighted in figure 3, where the ROC curves obtained for 20 features and λ = 2 are shown. Figure 2: ROC scores for different regularization parameters when 20 features are selected. Different pictures represent different kernels: (a) expression kernel, (b) localization kernel, (c) phylogenetic kernel, (d) integrated kernel. In each picture, the dashed blue line, dash-dot red line and continuous black line correspond respectively to the ROC index on the training vs training set, the test vs (training + test) set, and the test vs test set. Figure 3: ROC curves (true positives (%) vs false positives (%)) with 20 features selected and λ = 2 for the various kernels (Kexp, Kloc, Kphy, Kint, Krand): (a) test vs. (train+test), (b) test vs. test. References [1] R. Jansen, H. Yu, D. Greenbaum, Y. Kluger, N. J. Krogan, S. Chung, A. Emili, M. Snyder, J. F. Greenblatt, and M. Gerstein. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science, 302(5644), 2003. [2] N. Friedman, M. Linial, I. Nachman, and D. Pe’er. Using Bayesian networks to analyze expression data. Journal of Computational Biology, 7:601–620, 2000. [3] M. Kanehisa. Prediction of higher order functional networks from genomic data. Pharmacogenomics, 2(4):373–385, 2001. [4] A. Doi, H. Matsuno, M. Nagasaki, and S. Miyano.
Hybrid Petri net representation of gene regulatory network. In Proceedings of PSB 5, pages 341–352, 2000. [5] E. M. Marcotte, M. Pellegrini, H.-L. Ng, D. W. Rice, T. O. Yeates, and D. Eisenberg. Detecting protein function and protein-protein interactions from genome sequences. Science, 285(5428):751–753, 1999. [6] F. Pazos and A. Valencia. Similarity of phylogenetic trees as indicator of protein-protein interaction. Protein Engineering, 9(14):609–614, 2001. [7] J. R. Bock and D. A. Gough. Predicting protein-protein interactions from primary structure. Bioinformatics, 17:455–460, 2001. [8] B. Schölkopf, K. Tsuda, and J.-P. Vert. Kernel methods in computational biology. MIT Press, 2004. [9] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In NIPS 15, pages 505–512. MIT Press, 2003. [10] G. Wahba. Spline Models for Observational Data. Series in Applied Mathematics, Vol. 59, SIAM, Philadelphia, 1990. [11] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219–269, 1995. [12] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from examples. Technical Report TR-2004-06, University of Chicago, 2004. [13] B. Schölkopf, A. J. Smola, and K.-R. Müller. Kernel principal component analysis. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 327–352. MIT Press, 1999. [14] Y. Weiss. Segmentation using eigenvectors: a unifying view. In Proceedings of the IEEE International Conference on Computer Vision, pages 975–982. IEEE Computer Society, 1999. [15] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS 14, pages 849–856. MIT Press, 2002. [16] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1–48, 2002. [17] J.-P. Vert and M.
Kanehisa. Graph-driven features extraction from microarray data using diffusion kernels and kernel CCA. In NIPS 15. MIT Press, 2003. [18] M. Kanehisa, S. Goto, S. Kawashima, and A. Nakaya. The KEGG databases at GenomeNet. Nucleic Acids Research, 30:42–46, 2002. [19] P. T. Spellman, G. Sherlock, M. Q. Zhang, K. Anders, M. B. Eisen, P. O. Brown, D. Botstein, and B. Futcher. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell, 9:3273–3297, 1998. [20] M. Eisen, P. Spellman, P. O. Brown, and D. Botstein. Cluster analysis and display of genome-wide expression patterns. PNAS, 95:14863–14868, 1998. [21] M. Pellegrini, E. M. Marcotte, M. J. Thompson, D. Eisenberg, and T. O. Yeates. Assigning protein functions by comparative genome analysis: protein phylogenetic profiles. PNAS, 96(8):4285–4288, 1999. [22] W. K. Huh, J. V. Falvo, C. Gerke, A. S. Carroll, R. W. Howson, J. S. Weissman, and E. K. O’Shea. Global analysis of protein localization in budding yeast. Nature, 425:686–691, 2003. [23] Y. Yamanishi, J.-P. Vert, A. Nakaya, and M. Kanehisa. Extraction of correlated gene clusters from multiple genomic data by generalized kernel canonical correlation analysis. Bioinformatics, 19:i323–i330, 2003.
|
2004
|
132
|
2,544
|
Confidence Intervals for the Area under the ROC Curve Corinna Cortes Google Research 1440 Broadway New York, NY 10018 corinna@google.com Mehryar Mohri Courant Institute, NYU 719 Broadway New York, NY 10003 mohri@cs.nyu.edu Abstract In many applications, good ranking is a highly desirable performance for a classifier. The criterion commonly used to measure the ranking quality of a classification algorithm is the area under the ROC curve (AUC). To report it properly, it is crucial to determine an interval of confidence for its value. This paper provides confidence intervals for the AUC based on a statistical and combinatorial analysis using only simple parameters such as the error rate and the number of positive and negative examples. The analysis is distribution-independent: it makes no assumption about the distribution of the scores of negative or positive examples. The results are of practical use and can be viewed as the equivalent for the AUC of the standard confidence intervals given in the case of the error rate. They are compared with previous approaches in several standard classification tasks, demonstrating the benefits of our analysis. 1 Motivation In many machine learning applications, the ranking quality of a classifier is critical. For example, the ordering of the list of relevant documents returned by a search engine or a document classification system is essential. The criterion widely used to measure the ranking quality of a classification algorithm is the area under an ROC curve (AUC). But, to measure and report the AUC properly, it is crucial to determine an interval of confidence for its value, as is customary for the error rate and other measures. It is also important to make the computation of the confidence interval practical by relying only on a small number of simple parameters. In the case of the error rate, such intervals are often derived from just the sample size N.
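For concreteness, the kind of interval alluded to here is the standard normal-approximation (Wald) interval computed from k errors out of N examples. The sketch below is ours, not part of the paper; the function name and the z = 1.96 default (roughly 95% confidence) are illustrative choices.

```python
import math

def error_rate_interval(k, N, z=1.96):
    # Wald interval for the error rate e = k/N, clipped to [0, 1].
    e = k / N
    half = z * math.sqrt(e * (1 - e) / N)
    return max(0.0, e - half), min(1.0, e + half)

lo, hi = error_rate_interval(k=30, N=1000)  # roughly (0.019, 0.041)
```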
We present an extensive theoretical analysis of the AUC and show that a similar confidence interval can be derived for its value using only simple parameters such as the error rate k/N, the number of positive examples m, and the number of negative examples n = N −m. Thus, our results extend to AUC the computation of confidence intervals from a small number of readily available parameters. Our analysis is distribution-independent in the sense that it makes no assumption about the distribution of the scores of negative or positive examples. The use of the error rate helps determine tight confidence intervals. This contrasts with existing approaches presented in the statistical literature [11, 5, 2] which are based either on weak distribution-independent assumptions resulting in too loose confidence intervals, or strong distribution-dependent assumptions leading to tight but unsafe confidence intervals. We show that our results are of practical use. We also compare them with previous approaches in several standard classification tasks demonstrating the benefits of our analysis. Our results are also useful for testing the statistical significance of the difference of the AUC values of two classifiers. The paper is organized as follows. We first introduce the definition of the AUC, its connection with the Wilcoxon-Mann-Whitney statistic (Section 2), and briefly review some essential aspects of the existing literature related to the computation of confidence intervals for the AUC. Our computation of the expected value and variance of the AUC for a fixed error rate requires establishing several combinatorial identities. Section 4 presents some existing identities and gives the proof of novel ones useful for the computation of the variance. Section 5 gives the reduced expressions for the expected value and variance of the AUC for a fixed error rate. These can be efficiently computed and used to determine our confidence intervals for the AUC (Section 6). 
Section 7 reports the result of the comparison of our method with previous approaches, including empirical results for several standard tasks. 2 Definition and Properties of the AUC The Receiver Operating Characteristics (ROC) curves were originally introduced in signal detection theory [6] in connection with the study of radio signals, and have been used since then in many other applications, in particular for medical decision-making. Over the last few years, they have found increased interest in the machine learning and data mining communities for model evaluation and selection [14, 13, 7, 12, 16, 3]. The ROC curve for a binary classification problem plots the true positive rate as a function of the false positive rate. The points of the curve are obtained by sweeping the classification threshold from the most positive classification value to the most negative. For a fully random classification, the ROC curve is a straight line connecting the origin to (1, 1). Any improvement over random classification results in an ROC curve at least partially above this straight line. The AUC is defined as the area under the ROC curve. Consider a binary classification task with m positive examples and n negative examples. Let C be a fixed classifier that outputs a strictly ordered list for these examples. Let x_1, ..., x_m be the output of C on the positive examples and y_1, ..., y_n its output on the negative examples, and denote by 1_X the indicator function of a set X. Then, the AUC, A, associated to C is given by: A = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} 1_{x_i > y_j}, (1) which is the value of the Wilcoxon-Mann-Whitney statistic [10]. Thus, the AUC is closely related to the ranking quality of the classification. It can be viewed as a measure based on pairwise comparisons between classifications of the two classes. It is an estimate of the probability P_xy that the classifier ranks a randomly chosen positive example higher than a negative example.
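Equation (1) translates directly into code; a minimal sketch, assuming scores without ties (matching the strictly ordered list above):

```python
def auc_wmw(pos_scores, neg_scores):
    # AUC of Equation (1): the fraction of positive/negative pairs
    # (x_i, y_j) with x_i > y_j (Wilcoxon-Mann-Whitney statistic).
    m, n = len(pos_scores), len(neg_scores)
    wins = sum(1 for x in pos_scores for y in neg_scores if x > y)
    return wins / (m * n)

# A perfect ranking gives A = 1; one inversion lowers the AUC.
a_perfect = auc_wmw([0.9, 0.8], [0.3, 0.1])  # -> 1.0
a_mixed = auc_wmw([0.9, 0.2], [0.3, 0.1])    # -> 0.75 (3 of 4 pairs correct)
```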
With a perfect ranking, all positive examples are ranked higher than the negative ones and A = 1. Any deviation from this ranking decreases the AUC, and the expected AUC value for a random ranking is 0.5. 3 Overview of Related Work This section briefly describes some previous distribution-dependent approaches presented in the statistical literature to derive confidence intervals for the AUC and compares them to our method. The starting point for these analyses is a formula giving the variance of the AUC, A, for a fixed distribution of the scores P_x of the positive examples and P_y of the negative examples [10, 1]: σ²_A = ( A(1 − A) + (m − 1)(P_xxy − A²) + (n − 1)(P_xyy − A²) ) / (mn), (2) where P_xxy is the probability that the classifier ranks two randomly chosen positive examples higher than a negative one, and P_xyy the probability that it ranks two randomly chosen negative examples lower than a positive one. To compute the variance exactly using Equation 2, the distributions P_x and P_y must be known. Hanley and McNeil [10] argue in favor of exponential distributions, loosely claiming that this upper-bounds the variance of normal distributions with various means and ratios of variances. They show that for exponential distributions P_xxy = A/(2 − A) and P_xyy = 2A²/(1 + A). The resulting confidence intervals are of course relatively tight, but their validity is questionable since they are based on a strong assumption about the distributions of the positive and negative scores that may not hold in many cases. An alternative considered by several authors to the exact computation of the variance is to determine instead the maximum of the variance over all possible continuous distributions with the same expected value of the AUC. For all such distributions, one can fix m and n and compute the expected AUC and its variance.
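Equation (2), together with Hanley and McNeil's exponential-distribution expressions for P_xxy and P_xyy, can be evaluated in a few lines; a minimal sketch (the function name is ours):

```python
def hanley_mcneil_variance(A, m, n):
    # Equation (2) specialized to the exponential-distribution assumption:
    # P_xxy = A/(2 - A), P_xyy = 2A^2/(1 + A).
    p_xxy = A / (2 - A)
    p_xyy = 2 * A * A / (1 + A)
    return (A * (1 - A)
            + (m - 1) * (p_xxy - A * A)
            + (n - 1) * (p_xyy - A * A)) / (m * n)

v = hanley_mcneil_variance(0.5, 10, 10)  # ≈ 0.0175
```

As expected, the variance shrinks as m and n grow.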
The maximum variance is denoted by σ²_max and is given by [5, 2]: σ²_max = A(1 − A) / min{m, n} ≤ 1 / (4 min{m, n}). (3) Unfortunately, this often yields loose confidence intervals of limited practical use. Our approach for computing the mean and variance of the AUC is distribution-independent and inspired by the machine learning literature, where analyses typically center on the error rate. We require only that the error rate be measured, and compute the mean and variance of the AUC over all distributions P_x and P_y that maintain the same error rate. Our approach is in line with that of [5, 2] but it crucially avoids considering the maximum of the variance. We show that it is possible to compute directly the mean and variance of the AUC assigning equal weight to all the possible distributions. Of course, one could argue that not all distributions P_x and P_y are equally probable, but since these distributions are highly problem-dependent, we find it risky to make any general assumption on the distributions and thereby limit the validity of our results. Our approach is further justified empirically by the experiments reported in the last section. 4 Combinatorial Analysis The analysis of the statistical properties of the AUC given a fixed error rate requires various combinatorial calculations. This section describes several of the combinatorial identities that are used in our computation of the confidence intervals. For all q ≥ 0, let X_q(k, m, n) be defined by: X_q(k, m, n) = Σ_{x=0}^{k} x^q C(M, x) C(M', x'), (4) where C(a, b) denotes the binomial coefficient, M = m − (k − x) + x, M' = n + (k − x) − x, and x' = k − x. In previous work, we derived the following two identities, which we used to compute the expected value of the AUC [4]: X_0(k, m, n) = Σ_{x=0}^{k} C(n + m + 1, x), X_1(k, m, n) = Σ_{x=0}^{k} [((k − x)(m − n) + k)/2] C(n + m + 1, x). To simplify the expression of the variance of the AUC, we need to compute X_2(k, m, n).
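The identities for X_0 and X_1 are easy to verify numerically against definition (4) using exact integer arithmetic; the sketch below is ours (function names are illustrative):

```python
from math import comb

def X_direct(q, k, m, n):
    # Definition (4): M = m - (k - x) + x, M' = n + (k - x) - x, x' = k - x.
    total = 0
    for x in range(k + 1):
        M, Mp = m - (k - x) + x, n + (k - x) - x
        total += x ** q * comb(M, x) * comb(Mp, k - x)
    return total

def X0_closed(k, m, n):
    return sum(comb(m + n + 1, x) for x in range(k + 1))

def X1_closed(k, m, n):
    # Sum of ((k - x)(m - n) + k)/2 * C(m+n+1, x); the total is an
    # even integer divided by 2, so integer division is exact.
    s = sum(((k - x) * (m - n) + k) * comb(m + n + 1, x) for x in range(k + 1))
    return s // 2

# Example: k = 2, m = 4, n = 3 gives X_0 = 37 and X_1 = 42 both ways.
assert X_direct(0, 2, 4, 3) == X0_closed(2, 4, 3) == 37
assert X_direct(1, 2, 4, 3) == X1_closed(2, 4, 3) == 42
```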
Proposition 1 Let k, m, n be non-negative integers such that k ≤ min{m, n}; then: X_2(k, m, n) = Σ_{x=0}^{k} P_2(k, m, n, x) C(m + n + 1, x), (5) where P_2 is the following 4th-degree polynomial: P_2(k, m, n, x) = ((k − x)/12) (−2x³ + 2x²(2m − n + 2k − 4) + x(−3m² + 3nm + 3m − 5km − 2k² + 2 + k + nk + 6n) + (3(k − 1)m² − 3nm(k − 1) + 6km + 5m + k²m + 8n + 8 − 9nk + 3k + k² + k²n)). Proof. The proof of the proposition is left to a longer version of this paper. 5 Expectation and Variance of the AUC This section presents the expression of the expectation and variance of the AUC for a fixed error rate k/N, assuming that all classifications or rankings with k errors are equiprobable. For a given classification, there may be x, 0 ≤ x ≤ k, false positive examples. Since the number of errors is fixed, there are x' = k − x false negative examples. The expression X_q discussed in the previous section represents the q-th moment of x over all classifications with exactly k errors. In previous work, we gave the exact expression of the expectation of the AUC for a fixed number of errors k: Proposition 2 ([4]) Assume that a binary classification task with m positive examples and n negative examples is given. Then, the expected value of the AUC, A, over all classifications with k errors is given by: E[A] = 1 − k/(m + n) − ((n − m)²(m + n + 1))/(4mn) · ( k/(m + n) − Σ_{x=0}^{k−1} C(m + n, x) / Σ_{x=0}^{k} C(m + n + 1, x) ). Note that the two sums in this expression cannot be further simplified since they are known not to admit a closed form [9]. We also gave the expression of the variance of the AUC in terms of the function F defined for all Y by: F(Y) = Σ_{x=0}^{k} C(M, x) C(M', x') Y / Σ_{x=0}^{k} C(M, x) C(M', x'). (6) The following proposition reproduces that result: Proposition 3 ([4]) Assume that a binary classification task with m positive examples and n negative examples is given.
Then, the variance of the AUC, A, over all classifications with k errors is given by:

\sigma^2(A) = F\left( \Big(1 - \frac{\frac{x}{n} + \frac{k-x}{m}}{2}\Big)^2 \right) - F\left( 1 - \frac{\frac{x}{n} + \frac{k-x}{m}}{2} \right)^2 + F\left( \frac{m x^2 + n(k-x)^2 + (m(m+1)x + n(n+1)(k-x)) - 2x(k-x)(m+n+1)}{12 m^2 n^2} \right).

Because of the products of binomial terms, the computation of the variance using this expression is inefficient even for relatively small values of m and n. This expression can, however, be reduced using the identities presented in the previous section, which leads to significantly more efficient computations that we have been using in all our experiments.

Corollary 1 ([4]) Assume that a binary classification task with m positive examples and n negative examples is given. Then, the variance of the AUC, A, over all classifications with k errors is given by:

\sigma^2(A) = \frac{(m+n+1)(m+n)(m+n-1) T \big( (m+n-2) Z_4 - (2m - n + 3k - 10) Z_3 \big)}{72 m^2 n^2}
 + \frac{(m+n+1)(m+n) T \big( m^2 - nm + 3km - 5m + 2k^2 - nk + 12 - 9k \big) Z_2}{48 m^2 n^2}
 - \frac{(m+n+1)^2 (m-n)^4 Z_1^2}{16 m^2 n^2}
 - \frac{(m+n+1) Q_1 Z_1}{72 m^2 n^2}
 + \frac{k Q_0}{144 m^2 n^2}

with:

Z_i = \frac{\sum_{x=0}^{k-i} \binom{m+n+1-i}{x}}{\sum_{x=0}^{k} \binom{m+n+1}{x}}, \quad T = 3((m-n)^2 + m + n) + 2,

and:

Q_0 = (m+n+1) T k^2 + ((-3n^2 + 3mn + 3m + 1) T - 12(3mn + m + n) - 8) k + (-3m^2 + 7m + 10n + 3nm + 10) T - 4(3mn + m + n + 1)

Q_1 = T k^3 + 3(m-1) T k^2 + ((-3n^2 + 3mn - 3m + 8) T - 6(6mn + m + n)) k + (-3m^2 + 7(m+n) + 3mn) T - 2(6mn + m + n)

Proof. The expression of the variance given in Proposition 3 requires the computation of Xq(k, m, n), q = 0, 1, 2. Using the identities giving the expressions of X0 and X1, and Proposition 1, which provides the expression of X2, σ²(A) can be reduced to the expression given by the corollary.

6 Theory and Analysis

Our estimate of the confidence interval for the AUC is based on a simple and natural assumption. The main idea for its computation is the following. Assume that a confidence interval E = [e1, e2] is given for the error rate of a classifier C over a sample S, with the confidence level 1 − ϵ.
This interval may have been derived from a binomial model of C, which is a standard assumption for determining a confidence interval for the error rate, or from any other model used to compute that interval. For a given error rate e ∈ E, or equivalently for a given number of misclassifications, we can use the expectation and variance computed in the previous section and Chebyshev's inequality to predict a confidence interval Ae for the AUC at the confidence level 1 − ϵ'. Since our equiprobable model for the classifications is independent of the model used to compute the interval of confidence for the error rate, we can use E and Ae, e ∈ E, to compute a confidence interval for the AUC at the level (1 − ϵ)(1 − ϵ').

Theorem 1 Let C be a binary classifier and let S be a data sample of size N with m positive examples and n negative examples, N = m + n. Let E = [e1, e2] be a confidence interval for the error rate of C over S at the confidence level 1 − ϵ. Then, for any ϵ', 0 ≤ ϵ' ≤ 1, we can compute a confidence interval for the AUC value of the classifier C at the confidence level (1 − ϵ)(1 − ϵ') that depends only on ϵ, ϵ', m, n, and the interval E.

Proof. Let k1 = Ne1 and k2 = Ne2 be the numbers of errors associated to the error rates e1 and e2, and let IK be the interval IK = [k1, k2]. For a fixed k ∈ IK, by Proposition 2 and Corollary 1, we can compute the exact value of the expectation E[Ak] and variance σ²(Ak) of the AUC Ak. Using Chebyshev's inequality, for any k ∈ IK and any ϵk > 0,

P\left( |A_k - E[A_k]| \geq \frac{\sigma(A_k)}{\sqrt{\epsilon_k}} \right) \leq \epsilon_k   (7)

where E[Ak] and σ(Ak) are the expressions given in Proposition 2 and Corollary 1, which depend only on k, m, and n. Let α1 and α2 be defined by:

\alpha_1 = \min_{k \in I_K} \left( E[A_k] - \frac{\sigma(A_k)}{\sqrt{\epsilon_k}} \right), \quad \alpha_2 = \max_{k \in I_K} \left( E[A_k] + \frac{\sigma(A_k)}{\sqrt{\epsilon_k}} \right)   (8)

α1 and α2 depend only on IK (i.e., on e1 and e2), on m, n, and the ϵk. Let IA be the confidence interval defined by IA = [α1, α2], and let ϵk = ϵ' for all k ∈ IK.
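The quantities in this proof can be made concrete with a small sketch (function names are ours; for simplicity, and only for small m and n, the moments are computed brute-force from the equiprobable split model rather than via Corollary 1, using the conditional mean AUC 1 − (x/n + (k−x)/m)/2 that appears inside F in Proposition 3):

```python
from math import comb, sqrt

def auc_moments(k, m, n):
    # Mean and std of A over all classifications with k errors, computed
    # directly from the equiprobable split model as in Proposition 3
    # (a stand-in for the closed forms of Proposition 2 and Corollary 1).
    xs = range(k + 1)
    w = [comb(m - (k - x) + x, x) * comb(n + (k - x) - x, k - x) for x in xs]
    den = sum(w)
    F = lambda vals: sum(wi * v for wi, v in zip(w, vals)) / den
    a = [1 - (x / n + (k - x) / m) / 2 for x in xs]   # conditional mean AUC
    third = [(m * x**2 + n * (k - x)**2 + m * (m + 1) * x
              + n * (n + 1) * (k - x) - 2 * x * (k - x) * (m + n + 1))
             / (12 * m**2 * n**2) for x in xs]
    var = F([ai * ai for ai in a]) - F(a) ** 2 + F(third)
    return F(a), sqrt(var)

def auc_confidence_interval(k1, k2, m, n, eps_prime):
    # Eq. (8): Chebyshev interval per k in I_K = [k1, k2], then min/max.
    bounds = [auc_moments(k, m, n) for k in range(k1, k2 + 1)]
    a1 = min(mu - sd / sqrt(eps_prime) for mu, sd in bounds)
    a2 = max(mu + sd / sqrt(eps_prime) for mu, sd in bounds)
    return a1, a2

# The brute-force mean matches Proposition 2, e.g. for k = 2, m = 3, n = 2:
mu, _ = auc_moments(2, 3, 2)
assert abs(mu - (1 - 2/5 - (1 * 6) / (4 * 3 * 2) * (2/5 - 6/22))) < 1e-12
a1, a2 = auc_confidence_interval(1, 3, 20, 15, 0.05)
assert a1 < a2
```

With k = 0 (a perfect ranking) the model collapses to a single split, so the mean is 1 and the variance vanishes.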
Using the fact that the confidence interval E is independent of our equiprobability model for fixed-k AUC values, and Bayes' rule:

P(A \in I_A) = \sum_{k \in \mathbb{R}^+} P(A \in I_A \mid K = k) P(K = k)   (9)
 \geq \sum_{k \in I_K} P(A \in I_A \mid K = k) P(K = k)   (10)
 \geq (1 - \epsilon') \sum_{k \in I_K} P(K = k) \geq (1 - \epsilon')(1 - \epsilon)   (11)

where we used the property of Eq. 7 and the definitions of the intervals IK and IA. Thus, IA constitutes a confidence interval for the AUC value of C at the confidence level (1 − ϵ)(1 − ϵ'). In practice, the confidence interval E is often determined as a result of the assumption that C follows a binomial law. This leads to the following theorem.

Figure 1: Comparison of the standard deviations for three different methods with: (a) m = n = 500; (b) m = 400 and n = 200. The curves are obtained by computing the expected AUC and its standard deviations for different values of the error rate using the maximum-variance approach (Eq. 3), our distribution-independent method, and the distribution-dependent approach of Hanley [10].

Theorem 2 Let C be a binary classifier, let S be a data sample of size N with m positive examples and n negative examples, N = m + n, and let k0 be the number of misclassifications of C on S. Assume that C follows a binomial law. Then, for any ϵ, 0 ≤ ϵ ≤ 1, we can compute a confidence interval for the AUC value of the classifier C at the confidence level 1 − ϵ that depends only on ϵ, k0, m, and n.

Proof. Assume that C follows a binomial law with coefficient p. Then, Chebyshev's inequality yields:

P(|C - E[C]| \geq \eta) \leq \frac{p(1-p)}{N \eta^2} \leq \frac{1}{4 N \eta^2}   (12)

Thus,

E = \left[ \frac{k_0}{N} - \frac{1}{2\sqrt{(1 - \sqrt{1-\epsilon}) N}}, \; \frac{k_0}{N} + \frac{1}{2\sqrt{(1 - \sqrt{1-\epsilon}) N}} \right]

forms a confidence interval for the error rate of C at the confidence level \sqrt{1-\epsilon}.
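This Chebyshev-based error-rate interval is straightforward to compute; a minimal sketch (the function name is ours):

```python
from math import sqrt

def error_rate_interval(k0, N, eps):
    # Eq. (12): P(|C - E[C]| >= eta) <= 1/(4*N*eta^2). Setting that bound
    # equal to 1 - sqrt(1 - eps) gives the half-width eta, so the interval
    # [k0/N - eta, k0/N + eta] holds at confidence level sqrt(1 - eps).
    eta = 1 / (2 * sqrt((1 - sqrt(1 - eps)) * N))
    return k0 / N - eta, k0 / N + eta

e1, e2 = error_rate_interval(100, 1000, 0.1)
assert e1 < 100 / 1000 < e2
```

For k0 = 100, N = 1000, and ϵ = 0.1 this gives a half-width of about 0.07 around the observed error rate 0.1.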
By Theorem 1, we can compute for the AUC value a confidence interval at the level (1 − (1 − \sqrt{1-\epsilon}))(1 − (1 − \sqrt{1-\epsilon})) = 1 − ϵ, depending only on ϵ, m, n, and the interval E, i.e., on k0, N = m + n, and ϵ. For large N, we can use the normal approximation of the binomial law to determine a finer interval E. Indeed, for large N,

P(|C - E[C]| \geq \eta) \leq 2\Phi(2\sqrt{N}\eta)   (13)

with \Phi(u) = \int_u^{\infty} \frac{e^{-x^2/2}}{\sqrt{2\pi}} dx. Thus,

E = \left[ \frac{k_0}{N} - \frac{\Phi^{-1}\big(\frac{1-\sqrt{1-\epsilon}}{2}\big)}{2\sqrt{N}}, \; \frac{k_0}{N} + \frac{\Phi^{-1}\big(\frac{1-\sqrt{1-\epsilon}}{2}\big)}{2\sqrt{N}} \right]

is the confidence interval for the error rate at the confidence level \sqrt{1-\epsilon}.

For simplicity, in the proof of Theorem 2, ϵk was chosen to be a constant (ϵk = ϵ'), but, in general, it can be another function of k, leading to tighter confidence intervals. The results presented in the next section were obtained with ϵk = a0 exp((k − k0)²/(2a1²)), where a0 and a1 are constants selected so that inequality (11) is verified.

7 Experiments and Comparisons

The analysis in the previous section provides a principled method for computing a confidence interval for the AUC value of a classifier C at the confidence level 1 − ϵ that depends only on k, n, and m. As already discussed, other expressions found in the statistical literature lead to either too loose or unsafely narrow confidence intervals, based on questionable assumptions on the probability functions Px and Py [10, 15]. Figure 1 shows a comparison of the standard deviations obtained using the maximum-variance approach (Eq.
3), the distribution-dependent expression from [10], and our distribution-independent method for various error rates.

NAME          m+n   n/(m+n)  AUC   k/(m+n)  σindep  σA      σdep    σmax
pima          368   0.63     0.70  0.24     0.0297  0.0440  0.0269  0.0392
yeast         700   0.67     0.63  0.26     0.0277  0.0330  0.0215  0.0317
credit        303   0.54     0.87  0.13     0.0176  0.0309  0.0202  0.0281
internet-ads  1159  0.17     0.85  0.05     0.0177  0.0161  0.0176  0.0253
page-blocks   2473  0.10     0.84  0.03     0.0164  0.0088  0.0161  0.0234
ionosphere    201   0.37     0.85  0.13     0.0271  0.0463  0.0306  0.0417

Table 1: Accuracy and AUC values for AdaBoost [8] and estimated standard deviations for several datasets from the UC Irvine repository. σindep is the distribution-independent standard deviation obtained using our method (Theorem 2). σA is given by Eq. (2) with the values of A, Pxxy, and Pxyy derived from the data. σdep is the distribution-dependent standard deviation of Hanley [10], which is based on assumptions that may not always hold. σmax is defined by Eq. (3). All results were obtained on a randomly selected test set of size m + n.

For m = n = 500, our distribution-independent method consistently leads to tighter confidence intervals (Fig. 1 (a)). It also leads to tighter confidence intervals for AUC values greater than .75 for the uneven distribution m = 400 and n = 200 (Fig. 1 (b)). For lower AUC values, the distribution-dependent approach produces tighter intervals, but its underlying assumptions may not hold. A different comparison was made using several datasets available from the UC Irvine repository (Table 1). The table shows that our estimates of the standard deviations (σindep) are in general close to, or tighter than, the distribution-dependent standard deviation σdep of Hanley [10]. This is despite the fact that we make no assumption about the distributions of positive and negative examples. In contrast, Hanley's method is based on specific assumptions about these distributions.
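As a quick consistency check, the σmax column in Table 1 follows directly from Eq. (3). A sketch (m and n here are rounded back out of the table's m + n and n/(m+n) columns, so they are approximate):

```python
from math import sqrt

def sigma_max(A, m, n):
    # Eq. (3): the maximum AUC standard deviation given only A, m, and n.
    return sqrt(A * (1 - A) / min(m, n))

# pima: m + n = 368 with n/(m+n) = 0.63, so roughly m = 136, n = 232; A = 0.70.
assert abs(sigma_max(0.70, 136, 232) - 0.0392) < 5e-4
```

The value 0.0393 agrees with the table's 0.0392 up to the rounding of m and n.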
Plots of the actual ranking distribution demonstrate that these assumptions are often violated, however. Thus, the relatively good performance of Hanley's approach on several data sets can be viewed as fortuitous and is not general. Our distribution-independent method provides tight confidence intervals, in some cases tighter than those derived from σA, in particular because it exploits the information provided by the error rate. Our analysis can also be used to determine whether the difference between the AUC values produced by two classifiers is statistically significant, by checking whether the AUC value of one falls within the confidence interval of the other.

8 Conclusion

We presented principled techniques for computing useful confidence intervals for the AUC from simple parameters: the error rate and the negative and positive sample sizes. We demonstrated the practicality of these confidence intervals by comparing them to previous approaches in several tasks. We also derived the exact expression of the variance of the AUC for a fixed k, which can be of interest in other analyses related to the AUC.

The Wilcoxon-Mann-Whitney statistic is a general measure of the quality of a ranking that is an estimate of the probability that the classifier ranks a randomly chosen positive example higher than a negative example. One could argue that accuracy at the top or the bottom of the ranking is of higher importance. This, however, contrary to some belief, is already captured to a certain degree by the definition of the Wilcoxon-Mann-Whitney statistic, which penalizes more errors at the top or the bottom of the ranking. It is, however, an interesting research problem to determine how to incorporate this bias in a stricter way, in the form of a score-specific weight in the ranking measure, a weighted Wilcoxon-Mann-Whitney statistic, or how to compute the corresponding expected value and standard deviation in a general way and design machine learning algorithms to optimize such a measure.
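For completeness, the Wilcoxon-Mann-Whitney statistic discussed above is simply the fraction of (positive, negative) pairs that the classifier ranks correctly, with ties counted as one half, as in this sketch (our own function name):

```python
def wmw_auc(pos_scores, neg_scores):
    # Fraction of (positive, negative) score pairs ranked correctly;
    # ties contribute 1/2, following the usual WMW convention.
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos_scores for q in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

assert wmw_auc([3, 5], [1, 4]) == 0.75   # 3 of 4 pairs ordered correctly
assert wmw_auc([2, 2], [2, 2]) == 0.5    # all ties: chance-level AUC
```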
A preliminary analysis suggests, however, that the calculation of the expectation and the variance is likely to be extremely complex in that case. Finally, it could also be interesting, but difficult, to adapt our results to the distribution-dependent case and compare them to those of [10].

Acknowledgments

We thank Rob Schapire for pointing out to us the problem of the statistical significance of the AUC, Daryl Pregibon for the reference to [11], and Saharon Rosset for various discussions about the topic of this paper.

References

[1] D. Bamber. The Area above the Ordinal Dominance Graph and the Area below the Receiver Operating Characteristic Graph. Journal of Math. Psychology, 12, 1975.
[2] Z. W. Birnbaum and O. M. Klose. Bounds for the Variance of the Mann-Whitney Statistic. Annals of Mathematical Statistics, 38, 1957.
[3] J-H. Chauchat, R. Rakotomalala, M. Carloz, and C. Pelletier. Targeting Customer Groups using Gain and Cost Matrix; a Marketing Application. Technical report, ERIC Laboratory - University of Lyon 2, 2001.
[4] Corinna Cortes and Mehryar Mohri. AUC Optimization vs. Error Rate Minimization. In Advances in Neural Information Processing Systems (NIPS 2003), volume 16, Vancouver, Canada, 2004. MIT Press.
[5] D. Van Dantzig. On the Consistency and Power of Wilcoxon's Two Sample Test. In Koninklijke Nederlandse Akademie van Wetenschappen, Series A, volume 54, 1951.
[6] J. P. Egan. Signal Detection Theory and ROC Analysis. Academic Press, 1975.
[7] C. Ferri, P. Flach, and J. Hernández-Orallo. Learning Decision Trees Using the Area Under the ROC Curve. In Proceedings of the 19th International Conference on Machine Learning. Morgan Kaufmann, 2002.
[8] Yoav Freund and Robert E. Schapire. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. In Proceedings of the Second European Conference on Computational Learning Theory, volume 2, 1995.
[9] Ronald L. Graham, Donald E. Knuth, and Oren Patashnik. Concrete Mathematics.
Addison-Wesley, Reading, Massachusetts, 1994.
[10] J. A. Hanley and B. J. McNeil. The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve. Radiology, 1982.
[11] E. L. Lehmann. Nonparametrics: Statistical Methods Based on Ranks. Holden-Day, San Francisco, California, 1975.
[12] M. C. Mozer, R. Dodier, M. D. Colagrosso, C. Guerra-Salcedo, and R. Wolniewicz. Prodding the ROC Curve: Constrained Optimization of Classifier Performance. In Neural Information Processing Systems (NIPS 2002). MIT Press, 2002.
[13] C. Perlich, F. Provost, and J. Simonoff. Tree Induction vs. Logistic Regression: A Learning Curve Analysis. Journal of Machine Learning Research, 2003.
[14] F. Provost and T. Fawcett. Analysis and Visualization of Classifier Performance: Comparison under Imprecise Class and Cost Distributions. In Proceedings of the Third International Conference on Knowledge Discovery and Data Mining. AAAI, 1997.
[15] Saharon Rosset. Ranking-Methods for Flexible Evaluation and Efficient Comparison of 2-Class Models. Master's thesis, Tel-Aviv University, 1999.
[16] L. Yan, R. Dodier, M. C. Mozer, and R. Wolniewicz. Optimizing Classifier Performance via the Wilcoxon-Mann-Whitney Statistic. In Proceedings of the International Conference on Machine Learning, 2003.
Maximising Sensitivity in a Spiking Network

Anthony J. Bell, Redwood Neuroscience Institute, 1010 El Camino Real, Suite 380, Menlo Park, CA 94025, tbell@rni.org
Lucas C. Parra, Biomedical Engineering Department, City College of New York, New York, NY 10033, parra@ccny.cuny.edu

Abstract

We use unsupervised probabilistic machine learning ideas to try to explain the kinds of learning observed in real neurons, the goal being to connect abstract principles of self-organisation to known biophysical processes. For example, we would like to explain Spike Timing-Dependent Plasticity (see [5, 6] and Figure 3A) in terms of information theory. Starting out, we explore the optimisation of a network sensitivity measure related to maximising the mutual information between input spike timings and output spike timings. Our derivations are analogous to those in ICA, except that the sensitivity of output timings to input timings is maximised, rather than the sensitivity of output 'firing rates' to inputs. ICA and related approaches have been successful in explaining the learning of many properties of early visual receptive fields in rate-coding models, and we are hoping for similar gains in understanding of spike coding in networks, and how this is supported, in principled probabilistic ways, by cellular biophysical processes. For now, in our initial simulations, we show that our derived rule can learn synaptic weights which can unmix, or demultiplex, mixed spike trains. That is, it can recover independent point processes embedded in distributed correlated input spike trains, using an adaptive single-layer feedforward spiking network.

1 Maximising Sensitivity.

In this section, we will follow the structure of the ICA derivation [4] in developing the spiking theory. We cannot claim, as before, that this gives us an information maximisation algorithm, for reasons that we will delay addressing until Section 3.
But for now, to first develop our approach, we will explore an interim objective function called sensitivity, which we define as the log Jacobian of how input spike timings affect output spike timings.

1.1 How to maximise the effect of one spike timing on another.

Consider a spike in neuron j at time tl that has an effect on the timing of another spike in neuron i at time tk. The neurons are connected by a weight wij. We use i and j to index neurons, and k and l to index spikes, but sometimes for convenience we will use spike indices in place of neuron indices. For example, wkl, the weight between an input spike l and an output spike k, is naturally understood to be just the corresponding wij.

Figure 1: Firing time tk is determined by the time of threshold crossing. A change of an input spike time dtl affects, via a change of the membrane potential du, the time of the output spike by dtk.

In the simplest version of the Spike Response Model [7], spike l has an effect on spike k that depends on the time-course of the evoked EPSP or IPSP, which we write as Rkl(tk − tl). In general, this Rkl models both synaptic and dendritic linear responses to an input spike, and thus models synapse type and location. For learning, we need only consider the value of this function when an output spike, k, occurs. In this model, depicted in Figure 1, a neuron adds up its spiking inputs until its membrane potential, ui(t), reaches threshold at time tk. This threshold we will often, again for convenience, write as u_k ≡ u_i(t_k, {t_l}), and it is given by a sum over spikes l:

u_k = \sum_l w_{kl} R_{kl}(t_k - t_l) .   (1)

To maximise timing sensitivity, we need to determine the effect of a small change in the input firing time tl on the output firing time tk. (A related problem is tackled in [2].) When tl is changed by a small amount dtl, the membrane potential will change as a result.
This change in the membrane potential leads to a change in the time of threshold crossing dtk. The contribution to the membrane potential, du, due to dtl is (∂uk/∂tl)dtl, and the change in du corresponding to a change dtk is (∂uk/∂tk)dtk. We can relate these two effects by noting that the total change of the membrane potential du has to vanish, because uk is defined as the potential at threshold. ie:

du = \frac{\partial u_k}{\partial t_k} dt_k + \frac{\partial u_k}{\partial t_l} dt_l = 0 .   (2)

This is the total differential of the function uk = u(tk, {tl}), and is a special case of the implicit function theorem. Rearranging this:

\frac{dt_k}{dt_l} = -\frac{\partial u_k}{\partial t_l} \Big/ \frac{\partial u_k}{\partial t_k} = -w_{kl} \dot{R}_{kl} / \dot{u}_k .   (3)

Now, to connect with the standard ICA derivation [4], recall the 'rate' (or sigmoidal) neuron, for which yi = gi(ui) and u_i = \sum_j w_{ij} x_j. For this neuron, the output dependence on input is ∂yi/∂xj = wij g'_i, while the learning gradient is:

\frac{\partial}{\partial w_{ij}} \log \frac{\partial y_i}{\partial x_j} = \frac{1}{w_{ij}} - f_i(u_i) x_j   (4)

where the 'score functions', fi, are defined in terms of a density estimate on the summed inputs: f_i(u_i) = -\frac{\partial}{\partial u_i} \log g'_i = -\frac{\partial}{\partial u_i} \log \hat{p}(u_i). The analogous learning gradient for the spiking case, from (3), is:

\frac{\partial}{\partial w_{ij}} \log \frac{dt_k}{dt_l} = \frac{1}{w_{ij}} - \frac{\sum_a j(a) \dot{R}_{ka}}{\dot{u}_k}   (5)

where j(a) = 1 if spike a came from neuron j, and 0 otherwise. Comparing the two cases in (4) and (5), we see that the input variable xj has become the temporal derivative of the sum of the EPSPs coming from synapse j, and the output variable (or score function) fi(ui) has become \dot{u}_k^{-1}, the inverse of the temporal derivative of the membrane potential at threshold. It is intriguing (A) to see this quantity appear as analogous to the score function in the ICA likelihood model, and (B) to speculate that experiments could show that this 'voltage slope at threshold' is a hidden factor in STDP data, explaining some of the scatter in Figure 3A. In other words, an STDP datapoint should lie on a 2-surface in a 3D space of {∆w, ∆t, \dot{u}_k}.
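The sensitivity in Eq. (3) can be checked numerically. The sketch below simulates the threshold crossing of Eq. (1) for an assumed alpha-function spike response (the paper leaves R general, and the function names are ours), and compares a finite-difference estimate of dt_k/dt_l against the formula; the overall sign depends on whether Ṙ denotes the derivative with respect to t_k or t_l, and below Ṙ is taken as dR/ds:

```python
import math

def R(s, tau=1.0):
    # Alpha-function spike response (an assumption; the paper leaves R general).
    return (s / tau) * math.exp(1 - s / tau) if s > 0 else 0.0

def dR(s, tau=1.0):
    # dR/ds for the alpha function above.
    return (1 / tau) * math.exp(1 - s / tau) * (1 - s / tau) if s > 0 else 0.0

def spike_time(weights, in_times, theta=1.2):
    # First upward threshold crossing of u(t) = sum_l w_l R(t - t_l):
    # coarse scan, then bisection refinement.
    u = lambda t: sum(w * R(t - tl) for w, tl in zip(weights, in_times))
    t = 0.0
    while u(t) < theta:
        t += 1e-4
    lo, hi = t - 1e-4, t
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if u(mid) < theta else (lo, mid)
    return 0.5 * (lo + hi)

w, tl = [1.0, 0.8], [0.0, 0.5]
tk = spike_time(w, tl)

# Predicted sensitivity of t_k to the second input spike time (Eq. 3):
udot = sum(wi * dR(tk - ti) for wi, ti in zip(w, tl))
pred = w[1] * dR(tk - tl[1]) / udot

# Finite-difference estimate: move the second input spike by delta.
delta = 1e-5
fd = (spike_time(w, [0.0, 0.5 + delta]) - tk) / delta
assert abs(fd - pred) / abs(pred) < 1e-2
```

With these parameters the output spike lands between the two inputs' peak responses, and delaying the second input by dt delays the output by roughly 0.76 dt, as both estimates agree.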
Incidentally, \dot{u}_k shows up in any learning rule optimising an objective function involving output spike timings.

1.2 How to maximise the effect of N spike timings on N other ones.

Now we deal with the case of a 'square' single-layer feedforward mapping between spike timings. There can be several input and output neurons, but here we ignore which neurons are spiking, and just look at how the input timings affect the output timings. This is captured in a Jacobian matrix of all timing dependencies, which we call T. The entries of this matrix are T_{kl} ≡ ∂t_k/∂t_l. A multivariate version of the sensitivity measure introduced in the previous section is the log of the absolute determinant of the timing matrix, ie: log |T|. The full derivation for the gradient ∇_W log |T| is in the Appendix. Here, we again draw out the analogy between Square ICA [4] and this gradient, as follows. Square ICA with a network y = g(Wx) is:

\Delta W \propto \nabla_W \log |J| = W^{-1} - f(u) x^T   (6)

where the Jacobian J has entries ∂y_i/∂x_j, and the score functions are now f_i(u) = -\frac{\partial}{\partial u_i} \log \hat{p}(u) for the general likelihood case, with \hat{p}(u) = \prod_i g'_i being the special case of ICA. We will now split the gradient in (6) according to the chain rule:

\nabla_W \log |J| = [\nabla_J \log |J|] \otimes [\nabla_W J]   (7)
 = J^{-T} \otimes \left[ J_{kl} \left( \frac{i(k)\, j(l)}{w_{kl}} - f_k(u)\, x_j \right) \right] .   (8)

In this equation, i(k) = δ_{ik} and j(l) = δ_{jl}. The right-hand term is a 4-tensor with entries ∂J_{kl}/∂w_{ij}, and ⊗ is defined as A \otimes B_{ij} = \sum_{kl} A_{kl} B_{klij}. We write the gradient this way to preserve, in the second term, the independent structure of the 1 → 1 gradient term in (4), and to separate a difficult derivation into two easy parts. The structure of (8) holds up when we move to the spiking case, giving:

\nabla_W \log |T| = [\nabla_T \log |T|] \otimes [\nabla_W T]   (9)
 = T^{-T} \otimes \left[ T_{kl} \left( \frac{i(k)\, j(l)}{w_{kl}} - \frac{\sum_a j(a) \dot{R}_{ka}}{\dot{u}_k} \right) \right]   (10)

where i(k) is now defined as being 1 if spike k occurred in neuron i, and 0 otherwise. j(l) and j(a) are analogously defined.
Because the T matrix is much bigger than the J matrix, and because its entries are more complex, here the similarity ends. When (10) is evaluated for a single weight influencing a single spike coupling (see the Appendix for the full derivation), it yields:

\Delta w_{kl} \propto \frac{\partial \log |T|}{\partial w_{kl}} = \frac{T_{kl}}{w_{kl}} \left( [T^{-1}]_{lk} - 1 \right) ,   (11)

This is a non-local update involving a matrix inverse at each step. In the ICA case of (6), such an inverse was removed by the Natural Gradient transform (see [1]), but in the spike-timing case this has turned out not to be possible, because of the additional asymmetry introduced into the T matrix (as opposed to the J matrix) by the \dot{R}_{kl} term in (3).

2 Results.

Nonetheless, this learning rule can be simulated. It requires running the network for a while to generate spikes (and a corresponding T matrix), and then, for each input/output spike coupling, the corresponding synapse is updated according to (11). When this is done, and the weights learn, it is clear that something has been sacrificed by ignoring the issue of which neurons are producing the spikes. Specifically, the network will often put all the output spikes on one output neuron, with the rates of the others falling to zero. It is happy to do this if a large log |T| can thereby be achieved, because we have not included this 'which neuron' information in the objective. We will address these and other problems in Section 3, but now we report on our simulation results on demultiplexing.

2.1 Demultiplexing spike trains.

An interesting possibility in the brain is that 'patterns' are embedded in spatially distributed spike timings that are input to neurons. Several patterns could be embedded in single input trains. This is called multiplexing. To extract and propagate these patterns, the neurons must demultiplex these inputs using their threshold nonlinearity. Demultiplexing is the 'point process' analog of the unmixing of independent inputs in ICA.
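The update (11) can be sanity-checked numerically. For a synthetic timing matrix of the form T_{kl} = w_{kl} Ṙ_{kl} / Σ_a w_{ka} Ṙ_{ka} (fixed positive Ṙ values chosen only for this check, which is the structure of (3) with u̇_k = Σ_a w_{ka} Ṙ_{ka}), the formula matches a finite-difference gradient of log |det T|:

```python
import numpy as np

# Stand-in values of R-dot at the spike couplings, and the weights
# (both chosen arbitrarily for this check, not from the paper).
Rdot = np.array([[1.0, 0.7, 1.3], [0.6, 1.2, 0.9], [1.1, 0.8, 0.5]])
W = np.array([[0.9, 1.1, 0.8], [1.2, 0.7, 1.0], [0.6, 1.3, 0.9]])

def T_of(W):
    U = W * Rdot
    return U / U.sum(axis=1, keepdims=True)   # row k divided by u-dot_k

def logdet(W):
    return np.log(abs(np.linalg.det(T_of(W))))

T = T_of(W)
Tinv = np.linalg.inv(T)
k, l = 1, 2
pred = T[k, l] / W[k, l] * (Tinv[l, k] - 1.0)   # Eq. (11)

# Central finite difference of log|det T| with respect to w_kl.
h = 1e-6
Wp, Wm = W.copy(), W.copy()
Wp[k, l] += h
Wm[k, l] -= h
fd = (logdet(Wp) - logdet(Wm)) / (2 * h)
assert abs(fd - pred) < 1e-5 * max(1.0, abs(pred))
```

The same check goes through for any (k, l), which is consistent with the appendix derivation of (11).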
We have been able to robustly achieve demultiplexing, as we now report. We simulated a feed-forward network with 3 integrate-and-fire neurons and inputs from 3 presynaptic neurons. Learning followed (11), where we replace the inverse by the pseudo-inverse computed on the spikes generated during 0.5 s. The pseudo-inverse is necessary because, even though on average the learning matches the number of output spikes to the number of input spikes, the matrix T is still not usually square, and so its actual inverse cannot be taken. In addition, in these simulations, an additional term is introduced in the learning to make sure all the output neurons fire with equal probability. This partially counters the disregard of the 'which neuron' information, which we explained above. Assuming a Poisson spike count n_i for the i-th output neuron with equal firing rate \bar{n}_i, it is easy to derive an approximate term that will control the spike count, \sum_i (\bar{n}_i - n_i). The target firing rates \bar{n}_i were set to match the "source" spike train in this example. The network learns to demultiplex mixed spike trains, as shown in Figure 2. This demultiplexing is a robust property of learning using (11) with this new spike-controlling term. Finally, what about the spike-timing dependence of the observed learning? Does it match experimental results? The comparison is made in Figure 3, and the answer is no. There is a timing-dependent transition between depression and potentiation in our result

Figure 2: Unmixed spike trains. The input (top left) are 3 spike trains which are a mixture of three independent Poisson processes (bottom left). The network unmixes the spike trains to approximately recover the original (center left).
In this example, 19 spikes correspond to the original, with 4 deletions and 2 insertions. The two panels at the right show the mixing (top) and the synaptic weight matrix after training (bottom).

in Figure 3B, but it is not a sharp transition like the experimental result in Figure 3A. In addition, it does not transition at zero (ie: when tk − tl = 0), but at a time offset by the rise time of the EPSPs. In earlier experiments, in which we transformed the gradient in (11) by an approximate inverse Hessian to get an approximate Natural Gradient method, a sharp transition did emerge in simulations. However, the approximate inverse Hessian was singular, and we had to de-emphasise this result. It does suggest, however, that if the Natural Gradient transform can be usefully done on some variant of this learning rule, it may well be what accounts for the sharp transition effect of STDP.

3 Discussion

Although these derivations started out smoothly, the reader possibly shares the authors' frustration at the approximations involved here. Why isn't this simple, like ICA? Why don't we just have a nice maximum spikelihood model, ie: a density estimation algorithm for multivariate point processes, as ICA was a model in continuous space? We are going to be explicit about the problems now, and will propose a direction where the solution may lie. The over-riding problem is that we are unable to claim that, in maximising log |T|, we are maximising the mutual information between inputs and outputs, because:

1. The Invertability Problem. Algorithms such as ICA which maximise log Jacobians can only be called Infomax algorithms if the network transformation is both deterministic and invertable. The Spike Response Model is deterministic, but it is not invertable in general. When not invertable, the key formula (considering here vectors of input and output timings, t_in and t_out) is transformed from simple to complex.
ie:

p(t_{out}) = \frac{p(t_{in})}{|T|} \quad \text{becomes} \quad p(t_{out}) = \int_{\text{solns } t_{in}} \frac{p(t_{in})}{|T|} \, dt_{in}   (12)

Thus, when not invertable, we need to know the Jacobians of all the inputs that could have caused an output (called here 'solns'), something we simply don't know.

2. The 'Which Neuron' Problem. Instead of maximising the mutual information I(t_out, t_in), we should be maximising I(ti_out, ti_in), where the vector ti is the timing vector, t, with the vector, i, of corresponding neuron indices, concatenated. Thus, 'who spiked?' should be included in the analysis, as it is part of the information.

Figure 3: Dependence of synaptic modification on pre/post inter-spike interval. Left (A): From Froemke & Dan, Nature (2002). Dependence of synaptic modification on pre/post inter-spike interval in cat L2/3 visual cortical pyramidal cells in slice. Naturalistic spike trains. Each point represents one experiment. Right (B): According to Equation (11). Each point corresponds to a spike pair between approximately 100 input and 100 output spikes.

3. The Predictive Information Problem. In ICA, since there was no time involved, we did not have to worry about mutual informations over time between inputs and outputs. But in the spiking model, output spikes may well have (predictive) mutual information with future input spikes, as well as the usual (causal) mutual information with past input spikes. The former has been entirely missing from our analysis so far.

These temporal and spatial information dependencies missing in our analysis so far are thrown into a different light by a single empirical observation, which is that Spike Timing-Dependent Plasticity is not just a feedforward computation like the Spike Response Model.
Specifically, there must be at least a statistical, if not a causal, relation between a real synapse's plasticity and its neuron's output spike timings, for Figure 3B to look like it does. It seems we have to confront the need for both a 'memory' (or reconstruction) model, such as the T we have thus far dealt with, in which output spikes talk about past inputs, and a 'prediction' model, in which they talk about future inputs. This is most easily understood from the point of view of Barber & Agakov's variational Infomax algorithm [3]. They argue for optimising a lower bound on mutual information, which, for our neurons, would be expressed using an inverse model \hat{p}, as follows:

\tilde{I}(ti_{in}, ti_{out}) = H(ti_{in}) - \langle \log \hat{p}(ti_{in} \mid ti_{out}) \rangle_{p(ti_{in}, ti_{out})} \leq I(ti_{in}, ti_{out})   (13)

In a feedforward model, H(ti_in) may be disregarded in taking gradients, leading us to the optimisation of a 'memory-prediction' model \hat{p}(ti_in | ti_out) related to something supposedly happening in dendrites, somas and at synapses. In trying to guess what this might be, it would be nice if the math worked out. We need a square Jacobian matrix, T, so that |T| = \hat{p}(ti_in | ti_out) can be our memory/prediction model. Now let's rename our feedforward timing Jacobian T ('up the dendritic trees') as \overrightarrow{T}, and let's fantasise that there is some, as yet unspecified, feedback Jacobian \overleftarrow{T} ('down the dendritic trees'), which covers electrotonic influences as they spread from soma to synapse, and which \overrightarrow{T} can be combined with by some operation '⊗' to make things square. Imagine further that doing this yields a memory/prediction model on the inputs. Then the T we are looking for is \overrightarrow{T} \otimes \overleftarrow{T}, and the memory-prediction model is:

\hat{p}(ti_{in} \mid ti_{out}) = \overrightarrow{T} \otimes \overleftarrow{T}

Ideally, the entries of \overrightarrow{T} should be as before, ie: \overrightarrow{T}_{kl} = \partial t_k / \partial t_l. What should the entries of \overleftarrow{T} be?
Becoming just one step more concrete, suppose \overleftarrow{T} had entries \overleftarrow{T}_{lk} = \partial c_l / \partial t_k, where c_l is some, as yet unspecified, value, or process, occurring at an input synapse when spike l comes in. What seems clear is that ⊗ should combine the correctly tensorised forms of \overrightarrow{T} and \overleftarrow{T} (giving them each 4 indices ijkl), so that T = \overrightarrow{T} \otimes \overleftarrow{T} sums over the spikes k and l to give an I × J matrix, where I is the number of output neurons and J the number of input neurons. Then our quantity, T, would represent all dependencies of input neuronal activity on output activity, summed over spikes. Further, we imagine that \overleftarrow{T} contains reverse (feedback) electrotonic transforms from soma to synapse, \overleftarrow{R}_{lk}, that are somehow symmetrically related to the feedforward Spike Responses from synapse to soma, which we now rename \overrightarrow{R}_{kl}. Thinking for a moment in terms of somatic k and synaptic l, voltages V, currents I and linear cable theory, the synapse-to-soma transform \overrightarrow{R}_{kl} would be related to an impedance in V_k = I_l \overrightarrow{Z}_{kl}, while the soma-to-synapse transform \overleftarrow{R}_{lk} would be related to an admittance in I_l = V_k \overleftarrow{Y}_{lk} [8]. The symmetry in these equations is that \overrightarrow{Z}_{kl} is just the inverse conjugate of \overleftarrow{Y}_{lk}. Finally, then, what is c_l? And what is its relation to the calcium concentration, [Ca^{2+}]_l, at a synapse, when spike l comes in? These questions naturally follow from considering the experimental data, since it is known that the calcium level at synapses is the critical integrating factor in determining whether potentiation or depression occurs [5].

4 Appendix: Gradient of log |T| for the full Spike Response Model.

Here we give full details of the gradient for Gerstner's Spike Response Model [7]. This is a general model for which Integrate-and-Fire is a special case.
In this model the effect of a presynaptic spike at time $t_l$ on the membrane potential at time t is described by a postsynaptic potential or spike response, which may also depend on the time that has passed since the last output spike $t_{k-1}$; hence the spike response is written as $R(t - t_{k-1}, t - t_l)$. This response is weighted by the synaptic strength $w_l$. Excitatory or inhibitory synapses are determined by the sign of $w_l$. Refractoriness is incorporated by adding a hyper-polarizing contribution (spike-afterpotential) to the membrane potential in response to the last preceding spike, $\eta(t - t_{k-1})$. The membrane potential as a function of time is therefore given by

$$u(t) = \eta(t - t_{k-1}) + \sum_l w_l R(t - t_{k-1}, t - t_l). \quad (14)$$

We have ignored here potential contributions from external currents, which can easily be included without modifying the following derivations. The output firing times $t_k$ are defined as the times for which $u(t)$ reaches the firing threshold from below. We consider a dynamic threshold, $\vartheta(t - t_{k-1})$, which may depend on the time since the last spike $t_{k-1}$; the output spike times are then defined implicitly by:

$$t = t_k : \quad u(t) = \vartheta(t - t_{k-1}) \quad \text{and} \quad \frac{du(t)}{dt} > 0. \quad (15)$$

For this more general model, $T_{kl}$ is given by

$$T_{kl} = \frac{dt_k}{dt_l} = -\left(\frac{\partial u}{\partial t_k} - \frac{\partial \vartheta}{\partial t_k}\right)^{-1} \frac{\partial u}{\partial t_l} = \frac{w_{kl}\,\dot R(t_k - t_{k-1}, t_k - t_l)}{\dot u(t_k) - \dot\vartheta(t_k - t_{k-1})}, \quad (16)$$

where $\dot R(s, t)$, $\dot u(t)$, and $\dot\vartheta(t)$ are derivatives with respect to t. The dependence of $T_{kl}$ on $t_{k-1}$ should be implicitly assumed; it has been omitted to simplify the notation. Now we compute the derivative of $\log |T|$ with respect to $w_{kl}$. For any matrix T we have $\partial \log |T| / \partial T_{ab} = [T^{-1}]_{ba}$. Therefore:

$$\frac{\partial \log |T|}{\partial w_{kl}} = \sum_{ab} \frac{\partial \log |T|}{\partial T_{ab}} \frac{\partial T_{ab}}{\partial w_{kl}} = \sum_{ab} [T^{-1}]_{ba} \frac{\partial T_{ab}}{\partial w_{kl}}. \quad (17)$$

Utilising the Kronecker delta $\delta_{ab}$ (1 if $a = b$, else 0), the derivative of (16) with respect to $w_{kl}$ gives:

$$\frac{\partial T_{ab}}{\partial w_{kl}} = \frac{\partial}{\partial w_{kl}} \left[ \frac{w_{ab}\,\dot R(t_a - t_{a-1}, t_a - t_b)}{\dot\eta(t_a - t_{a-1}) + \sum_c w_{ac} \dot R(t_a - t_{a-1}, t_a - t_c) - \dot\vartheta(t_a - t_{a-1})} \right]$$
$$= \frac{\delta_{ak}\delta_{bl}\,\dot R(t_a - t_{a-1}, t_a - t_b)}{\dot u(t_a) - \dot\vartheta(t_a - t_{a-1})} - \frac{w_{ab}\,\dot R(t_a - t_{a-1}, t_a - t_b)\,\delta_{ak}\,\dot R(t_a - t_{a-1}, t_a - t_l)}{\big(\dot u(t_a) - \dot\vartheta(t_a - t_{a-1})\big)^2} = \delta_{ak} T_{ab} \left( \frac{\delta_{bl}}{w_{ab}} - \frac{T_{al}}{w_{al}} \right). \quad (18)$$

Therefore:

$$\frac{\partial \log |T|}{\partial w_{kl}} = \sum_{ab} [T^{-1}]_{ba}\, \delta_{ak} T_{ab} \left( \frac{\delta_{bl}}{w_{ab}} - \frac{T_{al}}{w_{al}} \right) \quad (19)$$
$$= \frac{T_{kl}}{w_{kl}} \left( [T^{-1}]_{lk} - \sum_b [T^{-1}]_{bk} T_{kb} \right) = \frac{T_{kl}}{w_{kl}} \left( [T^{-1}]_{lk} - 1 \right). \quad (20)$$

Acknowledgments

We are grateful for inspirational discussions with Nihat Ay, Michael Eisele, Hong Hui Yu, Jim Crutchfield, Jeff Beck, Surya Ganguli, Sophie Deneve, David Barber, Fabian Theis, Tony Zador and Arunava Banerjee. AJB thanks all RNI colleagues for many such discussions.

References

[1] Amari S-I. 1997. Natural gradient works efficiently in learning, Neural Computation, 10, 251-276
[2] Banerjee A. 2001. On the Phase-Space Dynamics of Systems of Spiking Neurons. Neural Computation, 13, 161-225
[3] Barber D. & Agakov F. 2003. The IM Algorithm: A Variational Approach to Information Maximization. Advances in Neural Information Processing Systems 16, MIT Press.
[4] Bell A.J. & Sejnowski T.J. 1995. An information maximization approach to blind separation and blind deconvolution, Neural Computation, 7, 1129-1159
[5] Dan Y. & Poo M-m. 2004. Spike timing-dependent plasticity of neural circuits, Neuron, 44, 23-30
[6] Froemke R.C. & Dan Y. 2002. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature, 416: 433-438
[7] Gerstner W. & Kistler W.M. 2002. Spiking neuron models, Camb. Univ. Press
[8] Zador A.M., Agmon-Snir H. & Segev I. 1995. The morphoelectrotonic transform: a graphical approach to dendritic function, J. Neurosci., 15(3): 1669-1682
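As a numeric sanity check, the matrix identity $\partial \log |T| / \partial T_{ab} = [T^{-1}]_{ba}$ that drives Equation 17 can be verified by finite differences. A minimal sketch (the test matrix, indices and step size are arbitrary choices, not anything from the model):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)   # arbitrary well-conditioned matrix
Tinv = np.linalg.inv(T)

a, b, eps = 1, 2, 1e-6
Tp, Tm = T.copy(), T.copy()
Tp[a, b] += eps
Tm[a, b] -= eps

# central finite difference of log|det T| in the (a, b) entry
fd = (np.log(abs(np.linalg.det(Tp))) - np.log(abs(np.linalg.det(Tm)))) / (2 * eps)
analytic = Tinv[b, a]   # the identity: note the transposed indices

assert abs(fd - analytic) < 1e-5
```

The same check, applied entrywise, also exercises the chain-rule steps that lead from (17) to (20).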
2004
Probabilistic computation in spiking populations Richard S. Zemel Dept. of Comp. Sci. Univ. of Toronto Quentin J. M. Huys Gatsby CNU UCL Rama Natarajan Dept. of Comp. Sci. Univ. of Toronto Peter Dayan Gatsby CNU UCL Abstract As animals interact with their environments, they must constantly update estimates about their states. Bayesian models combine prior probabilities, a dynamical model and sensory evidence to update estimates optimally. These models are consistent with the results of many diverse psychophysical studies. However, little is known about the neural representation and manipulation of such Bayesian information, particularly in populations of spiking neurons. We consider this issue, suggesting a model based on standard neural architecture and activations. We illustrate the approach on a simple random walk example, and apply it to a sensorimotor integration task that provides a particularly compelling example of dynamic probabilistic computation. Bayesian models have been used to explain a gamut of experimental results in tasks which require estimates to be derived from multiple sensory cues. These include a wide range of psychophysical studies of perception;13 motor action;7 and decision-making.3,5 Central to Bayesian inference is that computations are sensitive to uncertainties about afferent and efferent quantities, arising from ignorance, noise, or inherent ambiguity (e.g., the aperture problem), and that these uncertainties change over time as information accumulates and dissipates. Understanding how neurons represent and manipulate uncertain quantities is therefore key to understanding the neural instantiation of these Bayesian inferences. 
Most previous work on representing probabilistic inference in neural populations has focused on the representation of static information.1,12,15 These encompass various strategies for encoding and decoding uncertain quantities, but do not readily generalize to real-world dynamic information processing tasks, particularly the most interesting cases with stimuli changing over the same timescale as spiking itself.11 Notable exceptions are the recent, seminal, but, as we argue, representationally restricted, models proposed by Gold and Shadlen,5 Rao,10 and Deneve.4 In this paper, we first show how probabilistic information varying over time can be represented in a spiking population code. Second, we present a method for producing spiking codes that facilitate further processing of the probabilistic information. Finally, we show the utility of this method by applying it to a temporal sensorimotor integration task.

1 TRAJECTORY ENCODING AND DECODING

We assume that population spikes R(t) arise stochastically in relation to the trajectory X(t) of an underlying (but hidden) variable. We use $X_T$ and $R_T$ for the whole trajectory and spike trains, respectively, from time 0 to T. The spikes $R_T$ constitute the observations and are assumed to be probabilistically related to the signal by a tuning function $f(X, \theta_i)$:

$$P(R(i, T) \mid X(T)) \propto f(X, \theta_i) \quad (1)$$

for the spike train of the ith neuron, with parameters $\theta_i$. Therefore, via standard Bayesian inference, $R_T$ determines a distribution over the hidden variable at time T, $P(X(T) \mid R_T)$. We first consider a version of the dynamics and input coding that permits an analytical examination of the impact of spikes. Let X(t) follow a stationary Gaussian process such that the joint distribution $P(X(t_1), X(t_2), \ldots, X(t_m))$ is Gaussian for any finite collection of times, with a covariance matrix which depends on time differences: $C_{tt'} = c(|t - t'|)$. The function $c(|\Delta t|)$ controls the smoothness of the resulting random walks.
Then,

$$P(X(T) \mid R_T) \propto p(X(T)) \int d\mathcal{X}(T)\, P(R_T \mid \mathcal{X}(T))\, P(\mathcal{X}(T) \mid X(T)) \quad (2)$$

where $P(\mathcal{X}(T) \mid X(T))$ is the distribution over the whole trajectory $\mathcal{X}(T)$ conditional on the value of X(T) at its end point. If $R_T$ are a set of conditionally independent inhomogeneous Poisson processes, we have

$$P(R_T \mid \mathcal{X}(T)) \propto \prod_{i\tau} f(X(t_{i\tau}), \theta_i)\, \exp\Big(-\sum_i \int d\tau\, f(X(\tau), \theta_i)\Big), \quad (3)$$

where $t_{i\tau}$, for all $\tau$, are the spike times of neuron i in $R_T$. Let $\chi = [X(t_{i\tau})]$ be the vector of stimulus positions at the times at which we observed a spike and $\Theta = [\theta(t_{i\tau})]$ be the vector of spike positions. If the tuning functions are Gaussian, $f(X, \theta_i) \propto \exp(-(X - \theta_i)^2 / 2\sigma^2)$, and sufficiently dense that $\sum_i \int d\tau\, f(X, \theta_i)$ is independent of X (a standard assumption in population coding), then $P(R_T \mid \mathcal{X}(T)) \propto \exp(-\|\chi - \Theta\|^2 / 2\sigma^2)$, and in Equation 2 we can marginalize out $\mathcal{X}(T)$ except at the spike times $t_{i\tau}$:

$$P(X(T) \mid R_T) \propto p(X(T)) \int d\chi\, \exp\Big( -\tfrac{1}{2} [\chi, X(T)]^{\mathsf T} C^{-1} [\chi, X(T)] - \frac{\|\chi - \Theta\|^2}{2\sigma^2} \Big) \quad (4)$$

where C is the block covariance matrix between $X(t_{i\tau})$ and $X(T)$ at the spike times $[t_{i\tau}]$ and the final time T. This Gaussian integral gives $P(X(T) \mid R_T) \sim \mathcal{N}(\mu(T), \nu(T))$, with

$$\mu(T) = C_{Tt}(C_{tt} + I\sigma^2)^{-1}\Theta = k\Theta, \qquad \nu(T) = C_{TT} - kC_{tT}. \quad (5)$$

$C_{TT}$ is the (T, T)th element of the covariance matrix and $C_{Tt}$ is similarly a row vector. The dependence of $\mu$ on past spike times is specified chiefly by the inverse covariance matrix, and acts as an effective kernel (k). This kernel is not stationary, since it depends on factors such as the local density of spiking in the spike train $R_T$. For example, consider the case where X(t) evolves according to a diffusion process with drift:

$$dX = -\alpha X\, dt + \sigma_\epsilon\, dN(t) \quad (6)$$

where $\alpha$ prevents it from wandering too far and N(t) is zero-mean white Gaussian noise, so that $\sigma_\epsilon^2$ scales the variance. Figure 1A shows sample kernels for this process. Inspection of Figure 1A reveals some important traits.
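Equation 5 can be evaluated directly once a covariance function is fixed. The sketch below assumes an exponential covariance $c(|\Delta t|) = \sigma_x^2 \exp(-|\Delta t|/\tau)$; the spike times, preferred positions and constants are made-up illustrations, not values from the paper:

```python
import numpy as np

tau, sx2, sigma2 = 0.05, 1.0, 0.1                # assumed covariance scale and tuning variance
c = lambda dt: sx2 * np.exp(-np.abs(dt) / tau)   # stationary covariance c(|t - t'|)

t_spk = np.array([0.01, 0.03, 0.04, 0.08])   # spike times t_{i tau} (illustrative)
theta = np.array([-0.2, 0.1, 0.15, 0.4])     # preferred positions of the neurons that spiked
T = 0.1                                      # current time

Ctt = c(t_spk[:, None] - t_spk[None, :])     # covariance among the spike times
CTt = c(T - t_spk)                           # covariance between T and the spike times

k = CTt @ np.linalg.inv(Ctt + sigma2 * np.eye(len(t_spk)))   # effective kernel (Eq. 5)
mu = k @ theta                               # posterior mean  mu(T)
nu = c(0.0) - k @ CTt                        # posterior variance nu(T)
```

The entries of `k` are exactly the spike weights whose decay with `T - t_spk` is discussed next: the kernel depends on the whole spike pattern through the inverse of `Ctt`.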
First, the kernel magnitude decreases monotonically as the time span between the spike and the current time T grows, matching the intuition that recent spikes play a more significant role in determining the posterior over X(T). Second, the kernel is nearly exponential, with a time constant that depends on the time constant of the covariance function and the density of the spikes; two settings of these parameters produced the two groupings of kernels in the figure. Finally, the fully adaptive kernel k can be locally well approximated by a metronomic kernel k⟨R⟩ (shown in red in Figure 1A) that assumes regular spiking. This takes advantage of the general fact, indicated by the grouping of kernels, that the kernel depends weakly on the actual spike pattern, but strongly on the average rate. The merits of the metronomic kernel are that it is stationary and only depends on a single mean rate rather than the full spike train $R_T$. It also justifies the form of decoder used for the network model in the next section.6

Figure 1: Exact and approximate spike decoding with the Gaussian process prior. Spikes are shown in yellow, the true stimulus in green, and $P(X(T)|R_T)$ in gray; blue: exact inference with the nonstationary kernel; red: approximate inference with regular spiking. A: Kernel samples for a diffusion process as defined by Equations 5 and 6. B, C: Mean and variance of the inference. D: Exact inference with the full kernel k. E: Approximation based on the metronomic kernel k⟨R⟩ (Equation 7).

Figure 1D shows an example of how well Equation 5 specifies a distribution over X(t) through very few spikes.
Finally, Figure 1E shows a factorized approximation with the stationary kernel, similar to that used by Hinton and Brown6 and in our recurrent network:

$$\hat P(X(t) \mid R(t)) \propto \prod_i f(X, \theta_i)^{\sum_{j=0}^{t} k^s_j t_{ij}} = \exp(-E(X(t), R(t), t)). \quad (7)$$

By design, the mean is captured very well, but not the variance, which in this example grows too rapidly for long interspike intervals (Figure 1B, C). Using a slower kernel improves performance on the variance, but at the expense of the mean. We thus turn to the network model with recurrent connections that are available to reinstate the spike-conditional characteristics of the full kernel.

2 NETWORK MODEL FORMULATION

Above we considered how population spikes $R_T$ specify a distribution over X(T). We now extend this to consider how interconnected populations of neurons can specify distributions over time-varying variables. We frame the problem and our approach in terms of a two-level network, connecting one population of neurons to another; this construction is intended to apply to any level of processing. The network maps input population spikes R(t) to output population spikes S(t), where input and output evolve over time. As with the input spikes, $S_T$ indicates the output spike trains from time 0 to T, and these output spikes are assumed to determine a distribution over a related hidden variable. For the recurrent and feedforward computation in the network, we start with the deceptively simple goal9 of producing output spikes in such a way that the distribution $Q(X(T) \mid S_T)$ they imply over the same hidden variable X(T) as the input faithfully matches $P(X(T) \mid R_T)$. This might seem a strange goal, since one could surely just listen to the input spikes. However, in order for the output spikes to track the hidden variable, the dynamics of the interactions between the neurons must explicitly capture the dynamics of the process X(T). Once this 'identity mapping' problem has been solved, more general, complex computations can be performed with ease.
We illustrate this on a multisensory integration task, tracking a hidden variable that depends on multiple sensory cues. The aim of the recurrent network is to take the spikes R(t) as inputs and produce output spikes that capture the probabilistic dynamics. We proceed in two steps. We first consider the probabilistic decoding process which turns $S_T$ into $Q(X(t) \mid S_T)$. Then we discuss the recurrent and feedforward processing that produce appropriate $S_T$ given this decoder. Note that this decoding process is not required for the network processing; it instead provides a computational objective for the spiking dynamics in the system. We use a simple log-linear decoder based on a spatiotemporal kernel:6

$$Q(X(T) \mid S_T) \propto \exp(-E(X(T), S_T, T)), \quad \text{where} \quad (8)$$
$$E(X, S_T, T) = \sum_j \sum_{\tau=0}^{T} S(j, T - \tau)\, \phi_j(X, \tau) \quad (9)$$

is an energy function, and the spatiotemporal kernels are assumed separable: $\phi_j(X, \tau) = g_j(X)\psi(\tau)$. The spatial kernel $g_j(X)$ is related to the receptive field $f(X, \theta_j)$ of neuron j, and the temporal kernel $\psi(\tau)$ to k⟨R_T⟩. The dynamics of processing in the network follows a standard recurrent neural architecture for modeling cortical responses, in the case that network inputs R(t) and outputs S(t) are spikes. The effect of a spike on other neurons in the network is assumed to have some simple temporal dynamics, described here again by the temporal kernel $\psi(\tau)$:

$$r_i(T) = \sum_{\tau=0}^{T} R(i, T - \tau)\psi(\tau), \qquad s_j(T) = \sum_{\tau=0}^{T} S(j, T - \tau)\psi(\tau),$$

where T is the extent of the kernel. The response of an output neuron is governed by a stochastic spiking rule, where the probability that neuron j spikes at time t is given by:

$$P(S(j, t) = 1) = \sigma(u_j(t)) = \sigma\Big( \sum_i w_{ij} r_i(t) + \sum_k v_{kj} s_k(t - 1) \Big), \quad (10)$$

where $\sigma(\cdot)$ is the logistic function, and W and V are the feedforward and recurrent weights.
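A minimal simulation loop for the stochastic spiking rule of Equation 10, using an exponential temporal kernel $\psi(\tau) = \exp(-\kappa\tau)$; the weight values, population sizes and input rate here are arbitrary placeholders, not learned parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, kappa = 20, 20, 2.0
W = rng.normal(0.0, 0.5, (n_out, n_in))    # feedforward weights (illustrative)
V = rng.normal(0.0, 0.5, (n_out, n_out))   # recurrent weights (illustrative)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

u = np.zeros(n_out)          # exponentially filtered drive to each output unit
S_prev = np.zeros(n_out)     # output spikes from the previous step
for t in range(50):
    R = (rng.random(n_in) < 0.1).astype(float)   # Poisson-like input spikes
    # with psi(tau) = exp(-kappa tau):
    #   u(T) = (W.R(T) + V.S(T-1)) + exp(-kappa) u(T-1)
    u = (W @ R + V @ S_prev) + np.exp(-kappa) * u
    p_spike = sigmoid(u)                          # Equation 10
    S_prev = (rng.random(n_out) < p_spike).astype(float)
```

The exponential kernel makes the sums $r_i$ and $s_j$ computable recursively, which is what gives the leaky integrate-and-fire correspondence discussed next.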
If $\psi(\tau) = \exp(-\kappa\tau)$, then $u_j(T) = \psi(0)(W_j \cdot R(T) + V_j \cdot S(T)) + \psi(1) u_j(T - 1)$; this corresponds to a discretization of the standard dynamics for the membrane potential of a leaky integrate-and-fire neuron, $\tau\, du_j/dt = -\gamma u_j + WR + VS$, where the leak $\gamma$ is determined by the temporal kernel. The task of the network is to make $Q(X(T) \mid S_T)$ of Equation 8 match $P(X(T) \mid R_T)$ coming from one of the two models above (exact dynamic or approximate stationary kernel). We measure the discrepancy using the Kullback-Leibler (KL) divergence:

$$J = \sum_T \mathrm{KL}\big[ P(X(T) \mid R_T) \,\|\, Q(X(T) \mid S_T) \big] \quad (11)$$

and, as a proof of principle in the experiments below, find optimal W and V by minimizing the KL divergence J using back-propagation through time (BPTT). In order to implement this in the most straightforward way, we convert the stochastic spiking rule (Equation 10) to a deterministic rule via the mean-field assumption: $S_j(t) = \sigma(\sum_i w_{ij} r_i(t) + \sum_k v_{kj} s_k(t - 1))$. The gradients are tedious, but can be neatly expressed in a temporally recursive form. Note that our current focus is on the representational capability of the system, rather than its learning. Our results establish that the system can faithfully represent the posterior distribution. We return to the issue of more plausible learning rules below. The resulting network can be seen as a dynamic spiking analogue of the recurrent network scheme of Pouget et al.:9 both methods formulate feedforward and recurrent connections so that a simple decoding of the output can match optimal but complex decoding applied to the inputs. A further advantage of the scheme proposed here is that it facilitates downstream processing of the probabilistic information, as the objective encourages the formation of distributions at the output that factorize across the units.

3 RELATED MODELS

Ideas about the representation of probabilistic information in spiking neurons are in vogue.
One treatment considers Poisson spiking in populations with regular tuning functions, assuming that stimuli change slowly compared with the inter-spike intervals.8 This leads to a Kalman filter account with much formal similarity to the models of $P(X(T) \mid R_T)$. However, because of the slow timescale, recurrent dynamics can be allowed to settle to an underlying attractor. In another approach, the spiking activity of either a single neuron4 or a pair of neurons5 is considered as reporting (logarithmic) probabilistic information about an underlying binary hypothesis. A third treatment proposes that a population of neurons directly represents the (logarithmic) probability over the state of a hidden Markov model.10 Our method is closely related to the latter two models. Like Deneve's,4 we consider the transformation of input spikes to output spikes with a fixed assumed decoding scheme, so that the dynamics of an underlying process is captured. Our decoding mechanism produces something like the predictive coding apparent in Deneve's scheme, except that here a neuron need not spike, not only if it itself has recently spiked and thereby conveyed the appropriate information, but also if one of its population neighbors has recently spiked. This is explicitly captured by the recurrent interactions among the population. Our scheme also resembles Rao's10 approach in that it involves population codes. Our representational scheme is more general, however, in that the spatiotemporal decoder defines the relationship between output spikes and $Q(X(T) \mid S_T)$, whereas his method assumes a direct encoding, with each output neuron's activity proportional to $\log Q(X(T) \mid S_T)$. Our decoder can produce such a direct encoding if the spatial and temporal kernels are delta functions, but other kernels permit coordination amongst the population to take into account temporal effects, and to produce higher fidelity in the output distribution.

4 EXPERIMENTS

1. Random walk.
We describe two experiments. For ease of presentation and comparison, these simulations treat X(t) as a discrete variable, so that the encoding model is a hidden Markov model (HMM) rather than the Gaussian process defined above. The first is a random walk, as in Equation 6 and Figure 1, which allows us to make comparisons with the exact statistics. In a discrete setting, the walk parameters α and σϵ determine the entries in the transition matrix of the corresponding HMM; in a continuous one, the covariance function C of the Gaussian process. Input spikes are generated according to Gaussian tuning functions (Equation 1). In the recurrent network model, the spatiotemporal kernels are fixed: the spatial kernels are based on the regular locations of the output units j, gj(X) = |X −Xj|2/(1 + |X −Xj|2) and the temporal kernel is ψ(τ) = exp(−κτ), where κ = 2 is set to match the walk dynamics. In the following simulations, the network contained 20 inputs, 60 states, and 20 outputs. Results on two walk trajectories with different dynamics are shown in Figure 2. The network is trained on walks with parameters (α = 0.2, σϵ = 2) that force the state to move to and remain near the center. Figures 2A & B show that in intervals without input spikes, the inferred mean quickly shifts towards the center and remains there until evidence is received in the form of input spikes. The feedforward weights (Fig. 2F) show strong connections between an input unit and its corresponding output, while the learned recurrent weights (Fig. 2E) reflect the transition probabilities: units coding for extreme values have strong connections to those nearer the center, and units with preferred values near the center have strong self-connections. Fig. 2C&D) shows the results of testing this trained network on walks with different dynamics (α = 0.8, σϵ = 7). The resulting mismatch between the mean approximated trajectory (yellow line) and true stimulus (dashed line) (Fig. 2D), and the variance (Fig. 
2H), shows that the learned weights capture the input dynamics.

Figure 2: Comparison between full inference using the hidden Markov model and approximation using the network model. Top row: full inference (A, C) and approximation (B, D) results from two walks. Input spikes ($R_T$) are shown as green circles; output spikes ($S_T > 0.9$) as magenta stars; true stimulus as dashed line; mean inferred trajectory as red line; mean approximated trajectory as yellow line; distributions $P(X(t)|R_T)$ and $Q(X(t)|S_T)$ at each timestep in gray. Bottom row: feedforward weights (E); recurrent weights (F); variance of exact and approximate inference from walks 1 (G) and 2 (H).

2. Sensorimotor task. We next applied our framework to a recent experiment on probabilistic computation during sensorimotor processing.7 Here, human subjects tried to move a cursor on a display to a target by moving a (hidden) finger. The cursor was shown before the start of the movement; it was then hidden, except for one point of blurry visual feedback in the middle of the movement (with variances $0 = \sigma_0 < \sigma_L < \sigma_M < \sigma_\infty = \infty$). Unbeknownst to them, at the onset of movement the cursor was displaced by $\Delta X$, drawn from a prior distribution $P(\Delta X)$. The subjects must estimate $\Delta X$ in order to compensate for the displacement and land the cursor on the target.
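The Bayesian computation this task probes is, at its core, precision-weighted averaging of the learned prior with the blurred feedback. A minimal sketch, reading N(1, .5) as mean 1 and standard deviation .5 (an assumption), with an illustrative function name and test values:

```python
import numpy as np

mu_prior, var_prior = 1.0, 0.5 ** 2   # learned prior over displacement, read as N(1, .5)

def posterior_mean(x_vis, var_vis):
    """Precision-weighted combination of the Gaussian prior and visual feedback."""
    if np.isinf(var_vis):             # no feedback (sigma_inf): fall back on the prior
        return mu_prior
    w = var_prior / (var_prior + var_vis)
    return w * x_vis + (1.0 - w) * mu_prior

assert abs(posterior_mean(2.0, 1e-12) - 2.0) < 1e-6   # precise feedback dominates
assert posterior_mean(2.0, np.inf) == mu_prior        # prior dominates without feedback
assert mu_prior < posterior_mean(2.0, 0.5 ** 2) < 2.0 # intermediate blur: in between
```

With a bimodal prior the posterior mean is no longer linear in the observation, which is the non-linearity seen in the second experiment below.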
The key result is that subjects learned and used the prior information $P(\Delta X)$, and indeed integrated it with the visual information in a way that was appropriately sensitive to the amount of blur (Figure 3A). The authors showed that a simple Bayesian model could account for these data. We model a population representation of the 2D cursor position X(t) on the screen. Two spiking input codes, from vision ($R_T^v$) and from proprioception ($R_T^p$, present also in the absence of visual feedback), are mapped into an output code $S_T$ representing $P(X(t) \mid R_T^v, R_T^p)$. This is a neural instantiation of Bayesian cue combination, and also an extension of the previous model to the dynamic case. The first experiment involved a Gaussian prior distribution: $P(\Delta X) \sim \mathcal{N}(1, .5)$. During initial experience subjects learn about this prior from trajectories; we determine the parameters of the HMM. We use BPTT to learn feedforward weights for the input spikes from the different modalities, and recurrent weights between the output units. The input population had 64 units per modality, while the state space and output population each had 100 units. Input spikes were Poisson, based on tuning functions centered on a grid within the 2D space; spatiotemporal kernels were based on the (gridded) output units j. The model was tested in conditions directly matching the experiment, with the cursor and finger moving along a straight-line trajectory from the current model's estimate of the cursor position, $\langle X(t) \rangle_{Q(X(t)|S_T)}$, to the target location. The model captures the main effects of the experiment (see Figure 3) with respect to visual blur. The prior was ignored when the sensory evidence was precise ($\sigma_0$), it dominated on trials without feedback ($\sigma_\infty$), and the two

Figure 3: (a) Results for a typical subject from the first Körding-Wolpert experiment,7 for different degrees of blur in the visual feedback ($\sigma_{\{0,M,L,\infty\}}$).
The average lateral displacement of the cursor from the target location at the trial end is plotted as a function of the imposed displacement of the cursor from the finger location ($\Delta X$), which is drawn from $\mathcal{N}(1, .5)$. (b) Model results under the same testing conditions. See text for model details.

Figure 4: (a) Bimodal prior $P(\Delta X) \sim \mathcal{N}(\pm 2, 0.5)$ for cursor displacement in the second Körding-Wolpert experiment.7 (b) Results from human subjects. (c) Model results.

factors combined on intermediate degrees of blurriness. In the second experiment the prior was bimodal (Fig. 4A) and feedback was blurred ($\sigma_L$). For this prior, the final cursor location should be based on the more prevalent displacements, so responses based on optimal inference should be non-linear. Indeed, this is the case (Fig. 4B, C). Intuitively, the blurry visual feedback inadequately defines the true finger position, and thus the posterior $P(X(t) \mid R_T)$ is influenced by the learned bimodal prior; the network model accurately reconstructs this optimal posterior. Our model generalizes the simple Bayesian account, and suggests new avenues for predictions. The dynamic nature of the system permits modeling the integration of several visual cues during the trial, which may modify the posterior estimate of cursor location, as well as differential effects of the timing of visual feedback. The integration of cues in our model also allows it to capture interactions between them. Finally, its 2D nature allows our system to model other aspects of combining visual and proprioceptive cues, such as their varying and contrasting degrees of certainty across space.14

5 DISCUSSION

We proposed a spiking population coding scheme for representing and propagating uncertainty which operates at the fine-grained timescale of individual inter-spike intervals.
We motivated the key spatio-temporal spike kernels in the model from analytical results in a Gaussian process, and suggested two approximations to the exact decoding provided by these adaptive spatiotemporal kernels. The first is a regular stationary kernel while the second is a recurrent network model. We showed how gradient descent can set model parameters to match the requirements on the output distribution and capture the dynamics underlying a hidden variable. This is a dynamic and spiking extension of DPC,15 and a population extension of Deneve.4 We showed its proficiency by comparison with exact inference in a random walk, and a neural model that does not use a population code. The most important direction concerns biologically plausible learning in the full spiking form of the model. One possibility is to view spikes as a primitive action chosen by a neuron. In this case, we can use the analog of direct policy methods in partially observable Markov decision processes,2 with faithful tracking of X(t) leading to reward. It is also possible that simpler, Hebbian rules will suffice. A second future direction concerns inference of one variable from another using our spiking population code model. This problem involves marginalizing over intermediate variables, which is difficult in direct representations of distributions over these variables, due to approximating logs of sums with sums of logs;10 we are investigating how well our scheme can approximate this computation. We applied the model to a challenging sensorimotor integration task which has been used to demonstrate Bayesian inference. Since it offers a dynamic account, we can make a number of predictions about the consequences of variations to the experiment. Most interesting would be the case in which a bimodal likelihood is combined with a unimodal (or bimodal) prior (rather than vice-versa), or indeed two instances of visual feedback during the task. 
Acknowledgements

We thank Sophie Deneve and Jon Pillow for helpful discussions. RZ & RN funded by NSERC, CIHR NET program; PD & QH by Gatsby Charitable Fdtn., BIBA consortium, UCL MB/PhD program.

References

[1] Anderson C.H. & Van Essen D.C. (1994). Neurobiological computational systems. In: Computational Intelligence: Imitating Life, Zurada, Marks, Robinson (ed.), IEEE Press, 213-222.
[2] Baxter, J. & Bartlett, P. (2001). Infinite-horizon policy-gradient estimation. JAIR, 319-350.
[3] Carpenter, R.H.S. & Williams, M.L.L. (1995). Neural computation of log likelihood in the control of saccadic eye movements. Nature, 377: 59-62.
[4] Deneve, S. (2004). Bayesian inference in spiking neurons. NIPS-17.
[5] Gold, J.I. & Shadlen, M.N. (2001). Neural computations that underlie decisions about sensory stimuli. Trends in Cognitive Sciences, 5: 10-16.
[6] Hinton, G.E. & Brown, A.D. (2000). Spiking Boltzmann machines. NIPS-12: 122-129.
[7] Körding, K.P. & Wolpert, D. (2004). Bayesian integration in sensorimotor learning. Nature, 427: 244-247.
[8] Latham, P., Deneve, S., & Pouget, A. (2004). Optimal computation with attractor networks. J Physiology, Paris.
[9] Pouget, A., Zhang, K., Deneve, S., & Latham, P. (1998). Statistically efficient estimation using population codes. Neural Computation, 10: 373-401.
[10] Rao, R. (2004). Bayesian computation in recurrent neural circuits. Neural Computation, 16(1).
[11] Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1999). Spikes. MIT Press.
[12] Sahani, M. & Dayan, P. (2003). Doubly distributional population codes: Simultaneous representation of uncertainty and multiplicity. Neural Computation, 15.
[13] Saunders, J.A. & Knill, D.C. (2001). Perception of 3D surface orientation from skew symmetry. Vision Research, 41(24): 3163-3183.
[14] Van Beers, R.J., Sittig, A.C., & Denier, J.J. (1999). Integration of proprioceptive and visual position-information. J Neurophysiol, 81: 1355-1364.
[15] Zemel, R.S., Dayan, P. & Pouget, A. (1998).
Probabilistic interpretation of population codes. Neural Computation, 10, 403-430.
Theory of Localized Synfire Chain: Characteristic Propagation Speed of Stable Spike Patterns

Kosuke Hamaguchi, RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan, hammer@brain.riken.jp
Masato Okada, Dept. of Complexity Science and Engineering, University of Tokyo, Kashiwa, Chiba 277-8561, Japan, okada@brain.riken.jp
Kazuyuki Aihara, Institute of Industrial Science, University of Tokyo & ERATO Aihara Complexity Modeling Project, JST, Meguro, Tokyo 153-8505, Japan, aihara@sat.t.u-tokyo.ac.jp

Abstract

Repeated spike patterns have often been taken as evidence for the synfire chain, the phenomenon in which stable spike synchrony propagates through a feedforward network. The inter-spike intervals which represent a repeated spike pattern are influenced by the propagation speed of a spike packet. However, the relation between the propagation speed and the network structure is not well understood. While it is apparent that the propagation speed depends on the excitatory synapse strength, it might also be related to the spike patterns themselves. We analyze a feedforward network with Mexican-Hat-type connectivity (FMH) using the Fokker-Planck equation. We show that both a uniform and a localized spike packet are stable in the FMH in a certain parameter region. We also demonstrate that the propagation speed depends on the distinct firing patterns in the same network.

1 Introduction

Neurons transmit information through spikes, but how the information is encoded remains a matter of debate. The classical view is that the firing rates of neurons encode information, while a more recent view is that spatio-temporal spike patterns encode the information. For example, the synchrony of spikes observed in the cortex is thought to play functional roles in cognitive functions [1]. The mechanism of synchrony has been studied theoretically for several neural network models. In particular, the model of spike synchrony propagation through a feedforward network is called the synfire chain [2].
The mechanism of generating synchrony in a feedforward network can be described as follows. When feedforward connections are homogeneous with excitatory efficacy as a whole, post-synaptic neurons accept similar synaptic inputs. If neurons receive similar temporally modulated inputs, the resultant spike timings will also be similar, or roughly synchronized, even though the membrane potentials fluctuate because of noise [3]. The question in a feedforward network is whether the timing of spikes within a layer becomes more synchronized or not as the spike packet propagates through a sequence of neural layers.

Figure 1: Network architecture. Each layer consists of N neurons arranged in a circle. Each neuron projects its axon to a post-synaptic layer with Mexican-Hat-type connectivity.

Detailed analyses of activity propagation in feedforward networks have shown that homogeneous feedforward networks with excitatory synapses have a stable spike synchrony propagation mode [4, 5, 6]. Neurons, however, are embedded in more structured networks with excitatory and inhibitory synapses. Such network structure would thus generate inhomogeneous inputs to neurons, and whether spike synchrony is stable is not a trivial issue. One simple way to detect the synfire chain phenomenon would be to record from several neurons and find significant repeated patterns. If a spike packet propagates through a feedforward network, a statistically significant number of spike pairs would be found that have fixed relative time lags (or inter-spike intervals (ISIs)). Such correlated activity has been experimentally observed in the anterior forebrain pathway of songbirds [7], in the prefrontal cortex of primates [8], both in vivo and in vitro [9], and in an artificially constructed network in vitro [10].
To generate fixed ISIs by spike packet propagation, the propagation speed of a spike packet must be constant over several trials. The speed depends on spike patterns as well as on the structure of the network. Conventional homogeneous feedforward networks have only one stable spike pattern, namely a spatially uniform synchronized activity, but structured networks can generally produce spatially inhomogeneous spike patterns. In those networks, the relation between propagation speed and differences in the spike pattern is not well understood. It is therefore an important problem to study a biologically realistic, structured network in the context of the synfire chain. Among suggested network structures, Mexican-Hat-type (MH) connectivity is one of the most widely accepted as being representative of connectivity in the cortex [11]. Studies of a feedforward network with MH connectivity (FMH) have been reported [12]. In an FMH, a localized activity propagates through the network, and this network is preferable to a homogeneous feedforward network because it can transmit analog information regarding position [12], and both position and intensity [13]. However, no detailed analytical work on the structured synfire chain has been reported. In this paper, we use the Fokker-Planck equation to analyze the FMH. The Fokker-Planck method enables us to analyze the collective behavior of the membrane potentials in an identical neural population [14, 15]. When applied to the synfire chain, it permits a detailed analysis of the flow diagram, of the effect of the membrane potential distribution on spike packet evolution, and of the interaction of spike packets [5]. This paper thus examines the feedforward neural network model with Mexican-Hat-type connectivity. Our strategy is, first, to describe the evolution of firing states through order parameters, which allows us to measure macroscopic quantities of the network.
Second, we relate the input order parameters to the output ones through the Fokker-Planck equation. Finally, we analyze the evolution of spike packets with various shapes, and investigate stable firing patterns and their propagation speeds.

2 Model

We analyze the dynamics of a structured feedforward network composed of identical single-compartment, Leaky Integrate-and-Fire (LIF) neurons. Each neuron is aligned in a ring neural layer, and projects its axon to the next neural layer with the Mexican-Hat-type connectivity (Fig. 1). The input to one neuron generally includes both outputs from pre-synaptic neurons and a random noisy synaptic current from ongoing activity. If we assume that the thousands of synaptic background inputs that connect to one neuron are independent Poissonian, act through instantaneous synapses, and have small post-synaptic potential (PSP) amplitudes, we can approximate the sum of noisy background inputs as a Gaussian white noise fluctuating around the mean of the total noisy input. The membrane potential of a neuron at position θ at time t, which receives many random Poisson excitatory and inhibitory inputs, can be approximately described through a stochastic differential equation as follows:

$$C \frac{dv(\theta, t)}{dt} = I^{\alpha}_{\rm in}(\theta, t) - \frac{v(\theta, t)}{R} + \tilde{\mu} + D' \eta(t), \qquad (1)$$
$$I^{\alpha}_{\rm in}(\theta, t) = \int_{-\pi}^{\pi} \frac{d\theta'}{2\pi} \, W(\theta - \theta') \, r^{\alpha}(\theta', t), \qquad (2)$$
$$r^{\alpha}(\theta, t) = \int_{-\infty}^{0} dt' \, r(\theta, t - t') \, \alpha(t'), \qquad (3)$$
$$W(\theta) = W_0 + W_1 \cos(\theta), \qquad (4)$$

where C is the membrane capacitance, R is the membrane resistance, I^α_in(θ, t) is the synaptic current to a neuron at position θ, μ̃ is proportional to the mean of the total noisy input, and η(t) is a Gaussian random variable with ⟨η(t)⟩ = 0 and ⟨η(t)η(t′)⟩ = δ(t − t′). Here, D′ is the amplitude of the Gaussian noise. The input current I^α_in(θ, t) is obtained from the weighted sum of output currents r^α(θ, t) generated by pre-synaptic neurons. The synaptic current is derived from the convolution of the firing rate r(θ, t) with the PSP time course α(t).
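As a concrete reading of Eqs. (1)-(4), a single layer can be stepped with an Euler-Maruyama discretisation. This is only a minimal illustration, not the Fokker-Planck method the paper actually uses; the weight values W0, W1 and the time grid below are assumptions made for the example.

```python
import numpy as np

# Paper values: C = 100 pF, R = 100 MOhm, Vth = 15 mV, V0 = 0 mV.
# The weights W0, W1 and the grid sizes are assumed for this sketch.
C, R, V0, Vth = 100.0, 100.0, 0.0, 15.0
W0, W1 = 1.0, 1.5
N, dt = 200, 0.05                                  # neurons per layer, step (ms)
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

def mh_current(rate):
    """Eq. (2): I(theta) = (1/2pi) * integral of W(theta - theta') r(theta'),
    with the Mexican-Hat kernel W(theta) = W0 + W1 cos(theta) of Eq. (4)."""
    dth = 2.0 * np.pi / N
    Wmat = W0 + W1 * np.cos(theta[:, None] - theta[None, :])
    return (Wmat @ rate) * dth / (2.0 * np.pi)

def em_step(v, I_in, mu, D, rng):
    """One Euler-Maruyama step of Eq. (1), with threshold crossing and reset."""
    drift = (I_in - v / R + mu) / C
    v = v + drift * dt + (D / C) * np.sqrt(dt) * rng.standard_normal(N)
    spiked = v >= Vth
    v[spiked] = V0                                 # reset to V0 after a spike
    return v, spiked

rng = np.random.default_rng(0)
v = np.zeros(N)
rate = 500.0 + 350.0 * np.cos(theta)               # localized pre-synaptic rate
for _ in range(100):                               # 5 ms of dynamics
    v, spiked = em_step(v, mh_current(rate), mu=0.075, D=100.0, rng=rng)
```

The paper evolves the whole membrane-potential density Pθ(v, t) instead of individual trajectories, which removes the sampling noise of such a direct simulation.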
Here, α(t) = βα²t exp(−αt), where β is chosen such that a single EPSP generates a 0.0014 mV elevation from the resting potential. The Mexican-Hat-type connectivity consists of a uniform term W_0 and a spatial modulation term W_1 cos(θ). We set the reset potential and the threshold potential to V_0 and V_th, respectively. We start simulations from the stationary distribution of membrane potentials. The input to the initial layer is formulated in terms of the firing rate on a virtual pre-synaptic layer,

$$r(\theta, t) = \frac{r_0 + r_1 \cos(\theta)}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(t - \bar{t})^2}{2\sigma^2}\right), \qquad (5)$$

where r_0 and r_1 are input parameters, and the temporal profile of activity is assumed to be Gaussian with σ = 1 and t̄ = 10. Throughout this paper, parameter values are given as follows: C = 100 pF, R = 100 MΩ, V_th = 15 mV, V_0 = 0 mV, D′ = 100, μ̃ = 0.075 pA, α = 2, and β = 0.00017. Space is divided into 50 regions for the Fokker-Planck equation approach, and 10000 LIF neurons per layer are used in the simulations.

Figure 2: A: Dynamics of the membrane potential distribution at θ = 0. B: A snapshot of the membrane potential distributions over the range [−π, π] at t = 10.5. C: A snapshot of the membrane potential distribution averaged over position θ. Results from a numerical simulation (LIF) and from the Fokker-Planck equation (PDE) are shown.
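The PSP filtering of Eq. (3) with the stated kernel α(t) = βα²t e^{−αt} amounts to a causal convolution of the rate trace. A small sketch with the paper's α and β values (the time grid and kernel truncation are assumptions):

```python
import numpy as np

alpha, beta, dt = 2.0, 0.00017, 0.01               # paper's alpha, beta; dt assumed
t = np.arange(0.0, 20.0, dt)                       # time grid in ms

def alpha_kernel(s):
    """PSP time course alpha(s) = beta * alpha^2 * s * exp(-alpha * s), s >= 0."""
    return beta * alpha**2 * s * np.exp(-alpha * s)

def filter_rate(r):
    """Causal convolution of a rate trace with the PSP kernel (cf. Eq. (3))."""
    k = alpha_kernel(np.arange(0.0, 10.0, dt))     # truncated kernel support
    return np.convolve(r, k)[: len(r)] * dt

# Gaussian input pulse as in Eq. (5) with r0 = 500, sigma = 1, tbar = 10
r = 500.0 / np.sqrt(2.0 * np.pi) * np.exp(-((t - 10.0) ** 2) / 2.0)
r_alpha = filter_rate(r)
```

The filtered trace r_alpha peaks slightly after the raw pulse, reflecting the synaptic rise time of the α-kernel.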
3 Theory

The prerequisites for a full description of the network activity are the time series of order parameters at an arbitrary time t, defined as follows:

$$r_0(t) = \frac{1}{2\pi}\int d\theta \, r(\theta, t), \qquad r_1(t) = \sqrt{(r_c(t))^2 + (r_s(t))^2}, \qquad (6)$$
$$r_c(t) = \frac{1}{2\pi}\int d\theta \, \cos(\theta) \, r(\theta, t), \qquad r_s(t) = \frac{1}{2\pi}\int d\theta \, \sin(\theta) \, r(\theta, t), \qquad (7)$$

where r_0(t) is the population firing rate, and r_c(t) and r_s(t) are the coefficients of the Fourier transformation of the spatial firing pattern, which represent the spatial eccentricity of the activity at time t. r_c(t) and r_s(t) depend on the position of the localized activity, but r_1(t) does not. These order parameters play an important role in two ways. First, we can describe the input to a neuron at position θ in the next layer in a closed form of the order parameters. Second, their time integrals, which will be introduced later, become indices of the spike packet shape. Input currents are described with the order parameters as follows:

$$I^{\alpha}_{\rm in}(\theta, t) = W_0 \, r^{\alpha}_0(t) + W_1 \left( r^{\alpha}_c(t) \cos(\theta) + r^{\alpha}_s(t) \sin(\theta) \right), \qquad (8)$$
$$r^{\alpha}_{\{0,c,s\}}(t) = \int_{-\infty}^{0} dt' \, \alpha(t') \, r_{\{0,c,s\}}(t - t'). \qquad (9)$$

Given the time sequence of order parameters in the pre-synaptic layer, the order parameters in the post-synaptic layer are obtained through the following calculations. The analytical method we use here is the Fokker-Planck equation, which describes the distribution of the membrane potential of a pool of identical neurons as the probability density P_θ(v, t) of voltage v at time t. The suffix θ denotes that this neuron population is located at position θ. We assume that there is a large number of neurons at position θ. Equation (1) is equivalent to the Fokker-Planck equation [16] in the limit of a large neuron number N → ∞,

$$\frac{\partial}{\partial t} P_\theta(v, t) = \frac{\partial}{\partial v} J_\theta(v, t) + \delta(v - V_0) \, J_\theta(V_{\rm th}, t), \qquad (10)$$
$$J_\theta(v, t) = \left[ \frac{v}{\tau} - \frac{I^{\alpha}_{\rm in}(\theta, t) + \tilde{\mu}}{C} + \frac{\partial}{\partial v} D \right] P_\theta(v, t), \qquad (11)$$

where J_θ(v, t) is a probability flux and D = (1/2)(D'/C)². The boundary conditions are

$$P_\theta(V_{\rm th}, t) = 0, \qquad (12)$$
$$r(\theta, t) = J_\theta(V_0^+, t) - J_\theta(V_0^-, t) = J_\theta(V_{\rm th}, t). \qquad (13)$$

Equation (12) is the absorbing boundary condition at the threshold potential, and Eq. (13) is the current source at the reset potential. From Eq. (13), we obtain the firing rate of a post-synaptic neuron, r(θ, t). The Fokker-Planck equations are solved with a modified Chang-Cooper algorithm [17]. Figure 2 shows the actual distribution of the initial layer's membrane potentials and their dynamics when it receives virtual-layer activity with parameters r_0 = 500 and r_1 = 350. Figure 2A shows the evolution of the probability density P_{θ=0}(v, t); from white to black, the probability becomes higher. Figure 2B is a snapshot of the probability density at time t = 10.5 over the region from −π to π. As a result of the localized input injection, part of the neuronal membrane potentials is strongly depolarized. The membrane potential distribution averaged over the neural layer is illustrated in Fig. 2C; it shows the consistency between the numerical simulations and the Fokker-Planck equations. The probability flow dropping out from the threshold potential V_th is the firing rate. Combining these firing rates at each position θ with the definitions of the order parameters in Eqs. (6)-(7), the order parameters on the post-synaptic neural layer are calculated in turn; thus the closed forms of the order parameters are obtained. Spatio-temporal patterns of firing rates and dynamics of order parameters in response to a localized input (r_0 = 600, r_1 = 300) and a uniform input (r_0 = 900, r_1 = 0) are shown in Fig. 3. When the input is spatially localized, the spatio-temporal firing pattern is localized with a slightly distorted shape (Fig. 3A). On the other hand, when a uniform input is applied, the spatio-temporal firing pattern is uniform, as illustrated in Fig. 3B. We show an example of the time course of r_0(t) and r_1(t) in Fig. 3C and 3D for both the numerical simulation of 10,000 LIF neurons and the Fokker-Planck equation. The elevation of the time course of r_1(t) in Fig.
3C indicates the localized firing. In contrast, the uniform input generates no response in the r_1(t) parameter, as illustrated in Fig. 3D.

Figure 3: Activity profiles in response to a localized input (A,C) and a uniform input (B,D). A,B: Spatio-temporal pattern of the firing rates from neurons at positions −π to π. C,D: Time courses of the order parameters (r_0(t), r_1(t)) calculated from numerical simulations of a population of LIF neurons (squares and crosses) and from the Fokker-Planck equation (solid lines). The time course of the order parameters in response to a single stimulus is approximated by a Gaussian function; in C, the Gaussian approximation of r_0(t) is also shown. The variance of the Gaussian, σ, and the mean value, t̄, are used as the indices of a spike packet.

To quantitatively evaluate the spike packet shape and propagation speed, we define the indices r_0, r_1, and σ. r_0 and r_1 can be directly defined as

$$r_0 = \int dt \, r_0(t) - \text{spontaneous firing rate}, \qquad r_1 = \int dt \, r_1(t), \qquad (14)$$

where r_0 corresponds to the total population activity, and r_1 to the spatial eccentricity of the activity. r_0 and r_1 are a natural extension of an index used in the study of the synfire chain [4], in the sense that each index corresponds to the area under a time-varying parameter of the system, such as the population firing rate (r_0(t)) above the spontaneous firing rate, or the spatial eccentricity (r_1(t)). The basic idea of characterizing the spike packet is to approximate the firing rate curve by a Gaussian function [4], as in Eq. (5). Here, the approximating Gaussian curve is obtained by minimizing the mean squared error between r_0(t) and the Gaussian. We also use the index σ, obtained from the variance of the Gaussian, and t̄ as an index for the arrival time of the spike synchrony, taken from the peak time of the Gaussian (Fig. 3C).

4 Results

Our observation of the activities of the FMH with various parameter sets reveals two types of stable spike packets. Figure 4 shows the activity of the FMH with four characteristic parameter sets of W_0 and W_1. Here we use r_0 = 500 and r_1 = 350 for the upper figures as a localized input, and r_0 = 900 and r_1 = 0 for the lower ones as a uniform input. The common parameters are σ = 1 and t̄ = 2. When both W_0 and W_1 are small, no spike packet can propagate (Non-firing). When the uniform activation term W_0 is sufficiently strong, a uniform spike packet is stable (Uniform Activity). Note that even though the localized input elicits localized spike packets over several layers, the activity finally decays to the uniform spike packet. When the Mexican-Hat term W_1 is strong enough, only the localized spike packet is stable (Localized Activity). When W_0 and W_1 are balanced within a certain ratio, there exists a novel firing mode where both the uniform and the localized spike packet are stable depending on the initial layer input (Multi-stable). The results show that there are four types of phase and two types of spike packet in the FMH. The stability of a spike packet depends on W_0 and W_1. In addition, the difference in the arrival times of the propagating spike packets in the 8th layer shown in the Multi-stable phase indicates that the propagation speeds of spike packets may differ. It is apparent that the propagation speed depends on the strength of the excitatory synaptic efficacy; however, our results in the Multi-stable phase in Fig. 4 suggest that the spike pattern also determines the propagation speed. To investigate this effect, we plotted the propagation time ∆t̄, the difference between the arrival times t̄_post and t̄_pre, for various W_1 (Fig. 5B).
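The order parameters of Eqs. (6)-(7) and the packet indices are straightforward to compute from a discretised rate r(θ, t). The sketch below uses moments of r0(t) as a stand-in for the paper's least-squares Gaussian fit (for a truly Gaussian packet the two agree); the synthetic input is an assumption for illustration.

```python
import numpy as np

def order_parameters(r, theta):
    """Eqs. (6)-(7): r has shape (n_theta, n_time), theta uniform on [-pi, pi)."""
    dth = 2.0 * np.pi / len(theta)
    r0 = r.sum(axis=0) * dth / (2.0 * np.pi)
    rc = (np.cos(theta)[:, None] * r).sum(axis=0) * dth / (2.0 * np.pi)
    rs = (np.sin(theta)[:, None] * r).sum(axis=0) * dth / (2.0 * np.pi)
    return r0, np.sqrt(rc**2 + rs**2)

def packet_indices(t, r0_t):
    """Area, peak time tbar and width sigma of r0(t), computed from moments
    rather than from the paper's least-squares Gaussian fit."""
    dt = t[1] - t[0]
    area = r0_t.sum() * dt
    tbar = (t * r0_t).sum() * dt / area
    sigma = np.sqrt(((t - tbar) ** 2 * r0_t).sum() * dt / area)
    return area, tbar, sigma

# Synthetic localized packet: r(theta, t) = (500 + 350 cos(theta)) * g(t)
theta = np.linspace(-np.pi, np.pi, 50, endpoint=False)
t = np.arange(0.0, 20.0, 0.01)
g = np.exp(-((t - 10.0) ** 2) / 2.0) / np.sqrt(2.0 * np.pi)
r = (500.0 + 350.0 * np.cos(theta))[:, None] * g[None, :]
r0_t, r1_t = order_parameters(r, theta)
area, tbar, sigma = packet_indices(t, r0_t)
```

For this Gaussian-shaped packet the moment summaries recover t̄ ≈ 10 and σ ≈ 1, matching the input parameters of Eq. (5).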
The speed is analyzed after the spike packet indices r_0, r_1 and σ have converged. The convergence of spike packets is shown in the flow diagram in Fig. 5A for the (W_0, W_1) = (1, 1.5) case. In Fig. 5B, each triangle indicates the speed of the localized activity, and each circle corresponds to that of the uniform activity. Within the plotted region (W_1 = 1.4 ∼ 2), both the uniform and localized activities are stable, and no bursting activity is observed. The plot indicates that as W_1 rises, the propagation speed of the localized activity becomes higher. In contrast, the propagation speed of the uniform activity does not depend on W_1, because uniform activity generates r_c(t) = r_s(t) = 0.

Figure 4: Four characteristic FMH with different stable states, for (W_0, W_1) = (0.7, 1) (Non-firing), (1, 0.6) (Uniform Activity), (1, 1.4) (Multi-stable phase), and (0.7, 2.5) (Localized Activity). The evolutions of firing rate propagation through layers 1-8 are shown. The upper row panels show the response to an identical localized pulse input, and the lower row panels show the response to a uniform pulse input.

5 Summary

We have found that there are four phases in the W_0 − W_1 parameter space: Non-firing, Localized activity, Uniform activity, and Multi-stable. The Multi-stable phase is the most intriguing in that an identical network has completely different firing modes in response to different initial inputs. In this phase, the effect of the spike pattern on the propagation speed of the spike packet can be studied directly. By the analysis of the Fokker-Planck equation, we found that the propagation speed depends on the distinct firing patterns in the same network. This implies that observation of repeated spike patterns requires an appropriately controlled input if the network structure produces a multi-stable state.
The characteristic speed of the spike packet also suggests that the speed of information processing in the brain depends on the spiking pattern, i.e., on the representation of the information.

Figure 5: A: Flow diagram with parameters (W_0, W_1) = (1, 1.5), where both localized and uniform spike packets are stable. Two attractors, in the high-r_1 region (Localized) and in the high-r_0, r_1 = 0 region (Uniform), are shown. A sequence of arrows indicates the evolution of a spike packet in the (r_0, σ/τ, r_1) space. B: Plot of the propagation time ∆t̄, the time necessary for a spike packet to propagate across one neural layer. These results indicate that localized spike packets propagate more slowly than uniform ones.

Acknowledgment

This study is partially supported by the Advanced and Innovational Research Program in Life Sciences, Grant-in-Aid No. 15016023 for Scientific Research on Priority Areas (C) Advanced Brain Science Project, and Grant-in-Aid No. 14084212, from the Ministry of Education, Culture, Sports, Science, and Technology of the Japanese Government.

References

[1] C. M. Gray, P. König, A. K. Engel, and W. Singer, "Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties," Nature, vol. 338, pp. 334-337, 1989.
[2] M. Abeles, Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge UP, 1991.
[3] Z. Mainen and T. Sejnowski, "Reliability of spike timing in neocortical neurons," Science, vol. 268, pp. 1503-1506, 1995.
[4] M. Diesmann, M.-O. Gewaltig, and A. Aertsen, "Stable propagation of synchronous spiking in cortical neural networks," Nature, vol. 402, pp. 529-533, 1999.
[5] H. Câteau and T. Fukai, "Fokker-Planck approach to the pulse packet propagation in synfire chain," Neural Networks, vol. 14, pp. 675-685, 2001.
[6] W. M. Kistler and W. Gerstner, "Stable propagation of activity pulses in populations of spiking neurons," Neural Comp., vol. 14, pp. 987-997, 2002.
[7] R. Kimpo, F. Theunissen, and A. Doupe, "Propagation of correlated activity through multiple stages of a neural circuit," J. Neurosci., vol. 23, no. 13, pp. 5750-5761, 2003.
[8] M. Abeles, H. Bergman, E. Margalit, and E. Vaadia, "Spatiotemporal firing patterns in the frontal cortex of behaving monkeys," J. Neurophysiol., vol. 70, pp. 1629-1638, 1993.
[9] Y. Ikegaya, G. Aaron, R. Cossart, D. Aronov, I. Lampl, D. Ferster, and R. Yuste, "Synfire chains and cortical songs: Temporal modules of cortical activity," Science, vol. 304, pp. 559-564, 2004.
[10] A. Reyes, "Synchrony-dependent propagation of firing rate in iteratively constructed networks in vitro," Nature Neuroscience, vol. 6, pp. 593-599, 2003.
[11] D. H. Hubel and T. N. Wiesel, "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex," J. Physiol., vol. 160, pp. 106-154, 1962.
[12] M. C. W. van Rossum, G. G. Turrigiano, and S. B. Nelson, "Fast propagation of firing rates through layered networks of noisy neurons," J. Neurosci., vol. 22, pp. 1956-1966, 2002.
[13] K. Hamaguchi and K. Aihara, "Quantitative information transfer through layers of spiking neurons connected by Mexican-Hat-type connectivity," Neurocomputing, vol. 58-60, pp. 85-90, 2004.
[14] D. J. Amit and N. Brunel, "Dynamics of a recurrent network of spiking neurons before and following learning," Network: Computation in Neural Systems, vol. 8, no. 4, pp. 373-404, 1997.
[15] N. Brunel and V. Hakim, "Fast global oscillations in networks of integrate-and-fire neurons with low firing rates," Neural Comp., vol. 11, no. 7, pp. 1621-1671, 1999.
[16] H. Risken, The Fokker-Planck Equation. Springer-Verlag, 1996.
[17] J. S. Chang and G. Cooper, "A practical difference scheme for Fokker-Planck equations," J. Comp. Phys., vol. 6, pp. 1-16, 1970.
2004
Semi-supervised Learning on Directed Graphs Dengyong Zhou†, Bernhard Schölkopf†, and Thomas Hofmann‡† †Max Planck Institute for Biological Cybernetics 72076 Tuebingen, Germany {dengyong.zhou, bernhard.schoelkopf}@tuebingen.mpg.de ‡Department of Computer Science, Brown University Providence, RI 02912 USA th@cs.brown.edu

Abstract

Given a directed graph in which some of the nodes are labeled, we investigate the question of how to exploit the link structure of the graph to infer the labels of the remaining unlabeled nodes. To this end, we propose a regularization framework for functions defined over the nodes of a directed graph that forces the classification function to change slowly on densely linked subgraphs. A powerful, yet computationally simple, classification algorithm is derived within the proposed framework. The experimental evaluation on real-world Web classification problems demonstrates encouraging results that validate our approach.

1 Introduction

We consider semi-supervised classification problems on weighted directed graphs, in which some nodes in the graph are labeled as positive or negative, and where the task consists in classifying the unlabeled nodes. Typical examples of this kind are Web page categorization based on hyperlink structure [4, 11] and document classification or recommendation based on citation graphs [10], yet similar problems exist in other domains such as computational biology. For the sake of concreteness, we will mainly focus on the Web graph in the sequel, i.e. the considered graph represents a subgraph of the Web, where nodes correspond to Web pages and directed edges represent hyperlinks between them (cf. [3]). We refrain from utilizing attributes or features associated with each node, which may or may not be available in applications, but rather focus on the analysis of the connectivity of the graph as a means for classifying unlabeled nodes.
Such an approach inevitably needs to make some a priori assumptions about how connectivity and the categorization of individual nodes may be related in real-world graphs. The fundamental assumption of our framework is the category similarity of co-linked nodes in a directed graph. This is a slightly more complex concept than in the case of undirected (weighted) graphs [1, 18, 12, 15, 17], where a typical assumption is that an edge connecting two nodes will more or less increase the likelihood of the nodes belonging to the same category. Co-linkage, on the other hand, seems a more suitable and promising concept for directed graphs, as is witnessed by its successful use in Web page categorization [4] as well as in co-citation analysis for information retrieval [10]. Notice that co-linkage comes in two flavors: sibling structures, i.e. nodes with common parents, and co-parent structures, i.e. nodes with common children. In most Web and citation graph related applications, the first assumption, namely that nodes with highly overlapping parent sets are likely to belong to the same category, seems to be more relevant (cf. [4]), but in general this will depend on the specific application. One possible way of designing classifiers based on graph connectivity is to construct a kernel matrix based on pairwise links [11] and then to adopt a standard kernel method, e.g. Support Vector Machines (SVMs) [16], as the learning algorithm. However, a kernel matrix such as the one proposed in [11] only represents local relationships among nodes, and completely ignores the global structure of the graph. The idea of exploiting global rather than local graph structure is widely used in other Web-related techniques, including Web page ranking [2, 13], finding similar Web pages [7], and detecting Web communities [13, 9].
The major innovations of this paper are a general regularization framework on directed graphs, in which the directionality and the global relationships are taken into account, and a computationally attractive classification algorithm derived from the proposed regularization framework.

2 Regularization Framework

2.1 Preliminaries

A directed graph Γ = (V, E) consists of a set of vertices, denoted by V, and a set of edges, denoted by E ⊆ V × V. Each edge is an ordered pair of nodes [u, v] representing a directed connection from u to v. We do not allow self-loops, i.e. [v, v] ∉ E for all v ∈ V. In a weighted directed graph, a weight function w : V × V → R+ is associated with Γ, satisfying w([u, v]) = 0 if and only if [u, v] ∉ E. Typically, we can equip a directed graph with a canonical weight function by defining w([u, v]) ≡ 1 if and only if [u, v] ∈ E. The in-degree p(v) and out-degree q(v) of a vertex v ∈ V are defined as

$$p(v) \equiv \sum_{\{u \mid [u,v] \in E\}} w([u, v]), \quad \text{and} \quad q(v) \equiv \sum_{\{u \mid [v,u] \in E\}} w([v, u]). \qquad (1)$$

Let H(V) denote the space of functions f : V → R, which assign a real value f(v) to each vertex v. The function f can be represented as a column vector in R^{|V|}, where |V| denotes the number of vertices in V. The function space H(V) can be endowed with the usual inner product:

$$\langle f, g \rangle = \sum_{v} f(v) \, g(v). \qquad (2)$$

Accordingly, the norm induced by the inner product is ∥f∥ = √⟨f, f⟩.

2.2 Bipartite Graphs

A bipartite graph G = (H, A, L) is a special type of directed graph that consists of two sets of vertices, denoted by H and A respectively, and a set of edges (or links), denoted by L ⊆ H × A. In a bipartite graph, each edge connects a vertex in H to a vertex in A. Any directed graph Γ = (V, E) can be regarded as a bipartite graph using the following simple construction [14]: H ≡ {h | h ∈ V, q(h) > 0}, A ≡ {a | a ∈ V, p(a) > 0}, and L ≡ E. Figure 1 depicts the construction of the bipartite graph.
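Eq. (1) and the bipartite construction can be sketched in a few lines. The edge list below is hypothetical, chosen only so that its hub/authority split matches the sets quoted in the Figure 1 caption; the actual edges of Figure 1 are not reproduced here.

```python
from collections import defaultdict

# Hypothetical canonical-weight edge list ([u, v] means u -> v, with w = 1).
edges = [(1, 2), (1, 3), (3, 2), (5, 3), (5, 4), (6, 4)]
w = {e: 1.0 for e in edges}

def degrees(edges, w):
    """Eq. (1): in-degree p(v) and out-degree q(v) as weighted edge sums."""
    p, q = defaultdict(float), defaultdict(float)
    for u, v in edges:
        q[u] += w[(u, v)]
        p[v] += w[(u, v)]
    return p, q

p, q = degrees(edges, w)
H = sorted(v for v, d in q.items() if d > 0)   # hub set: positive out-degree
A = sorted(v for v, d in p.items() if d > 0)   # authority set: positive in-degree
# For this edge list, H == [1, 3, 5, 6] and A == [2, 3, 4]; vertex 3 is in both.
```

A vertex may land in both H and A, which is exactly the point of the construction: the edge set L is unchanged, only the roles of the endpoints are split.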
Notice that vertices of the original graph Γ may appear in both vertex sets H and A of the constructed bipartite graph. The intuition behind the construction of the bipartite graph is provided by the so-called hub and authority Web model introduced by Kleinberg [13]. The model distinguishes between two types of Web pages: authoritative pages, which are pages relevant to some topic, and hub pages, which are pages pointing to relevant pages. Note that some Web pages can simultaneously be both hub and authority pages (see Figure 1).

Figure 1: Constructing a bipartite graph from a directed one. Left: directed graph. Right: bipartite graph. The hub set is H = {1, 3, 5, 6} and the authority set is A = {2, 3, 4}. Notice that the vertex indexed by 3 is simultaneously in the hub and the authority set.

Hubs and authorities exhibit a mutual reinforcement relationship: a good hub node points to many good authorities, and a good authority node is pointed to by many good hubs. It is interesting to note that in general there is no direct link from one authority to another; it is the hub pages that glue together authorities on a common topic. Following Kleinberg's model, we suggestively call the vertex set H in the bipartite graph the hub set, and the vertex set A the authority set.

2.3 Smoothness Functionals

If two distinct vertices u and v in the authority set A are co-linked by a vertex h in the hub set H, as shown in the left panel of Figure 2, then we think that u and v are likely to be related, and the co-linkage strength induced by h between u and v can be measured by

$$c_h([u, v]) = \frac{w([h, u]) \, w([h, v])}{q(h)}. \qquad (3)$$

In addition, we define c_h(v, v) = 0 for all v in the authority set A and for all h in the hub set H. Such a relevance measure can be naturally understood in the setting of citation networks: if two articles are simultaneously cited by some other article, then this should make it more likely that both articles deal with a similar topic.
Moreover, the more articles cite both articles together, the more significant the connection. A natural question arising in this context is why the relevance measure is further normalized by the out-degree. Let us consider the following two Web sites: Yahoo! and kernel machines. General interest portals like Yahoo! consist of pages having a large number of diverse hyperlinks. The fact that two Web pages are co-linked by Yahoo! does not establish a significant connection between them. In contrast, the pages on the kernel machines Web site have much fewer hyperlinks, but the Web pages pointed to are closely related in topic. Let f denote a function defined on the authority set A. The smoothness of the function f can be measured by the following functional:

$$\Omega_A(f) = \frac{1}{2} \sum_{u,v} \sum_{h} c_h([u, v]) \left( \frac{f(u)}{\sqrt{p(u)}} - \frac{f(v)}{\sqrt{p(v)}} \right)^2. \qquad (4)$$

The smoothness functional penalizes large differences in function values for vertices in the authority set A that are strongly related. Notice that the function values are normalized by the in-degree.

Figure 2: Link and relevance. Left panel: vertices u and v in the authority set A are co-linked by vertex h in the hub set H. Right panel: vertices u and v in the hub set H co-link vertex a in the authority set A.

For the Web graph, the explanation is similar to the one given before. Many Web pages contain links to popular sites like the Google search engine. This does not mean, though, that all these Web pages share a common topic. However, if two Web pages point to a Web page like the one of the Learning with Kernels book, this is likely to express a common interest in kernel methods. Now define a linear operator T : H(A) → H(H) by

$$(Tf)(h) = \sum_{a} \frac{w([h, a])}{\sqrt{q(h) \, p(a)}} \, f(a). \qquad (5)$$

Then its adjoint T* : H(H) → H(A) is given by

$$(T^* f)(a) = \sum_{h} \frac{w([h, a])}{\sqrt{q(h) \, p(a)}} \, f(h). \qquad (6)$$

These two operators T and T* were also implicitly suggested by [8] for developing a new Web page ranking algorithm. Further define the operator S_A : H(A) → H(A) by composing T and T*, i.e.
$$S_A = T^* T, \qquad (7)$$

and the operator ∆_A : H(A) → H(A) by

$$\Delta_A = I - S_A, \qquad (8)$$

where I denotes the identity operator. Then we can show the following (see Appendix A for the proof):

Proposition 1. Ω_A(f) = ⟨f, ∆_A f⟩.

Comparing with the combinatorial Laplace operator defined on undirected graphs [5], we can think of the operator ∆_A as a Laplacian, but defined on the authority set of directed graphs. Note that Proposition 1 also shows that the Laplacian ∆_A is positive semi-definite. In fact, we can further show that the eigenvalues of the operator S_A are scattered in [0, 1], and accordingly the eigenvalues of the Laplacian ∆_A fall into [0, 1]. Similarly, if two distinct vertices u and v co-link vertex a in the authority set A, as shown in the right panel of Figure 2, then u and v are also thought to be related. The co-linkage strength between u and v induced by a can be measured by

$$c_a([u, v]) = \frac{w([u, a]) \, w([v, a])}{p(a)}, \qquad (9)$$

and the smoothness of a function f on the hub set H can be measured by

$$\Omega_H(f) = \frac{1}{2} \sum_{u,v} \sum_{a} c_a([u, v]) \left( \frac{f(u)}{\sqrt{q(u)}} - \frac{f(v)}{\sqrt{q(v)}} \right)^2. \qquad (10)$$

As before, one can define the operators S_H = T T* and ∆_H = I − S_H, leading to the corresponding statement:

Proposition 2. Ω_H(f) = ⟨f, ∆_H f⟩.

Convexly combining the two smoothness functionals (4) and (10), we obtain a smoothness measure for functions f defined on the whole vertex set V:

$$\Omega_\gamma(f) = \gamma \, \Omega_A(f) + (1 - \gamma) \, \Omega_H(f), \qquad 0 \le \gamma \le 1, \qquad (11)$$

where the parameter γ weighs the relative importance of Ω_A(f) and Ω_H(f). Extend the operator T to H(V) by defining (Tf)(v) = 0 if v is only in the authority set A and not in the hub set H. Similarly, extend T* by defining (T*f)(v) = 0 if v is only in the hub set H and not in the authority set A. Then, if the remaining operators are extended correspondingly, one can define the operator S_γ : H(V) → H(V) by

$$S_\gamma = \gamma S_A + (1 - \gamma) S_H, \qquad (12)$$

and the Laplacian on directed graphs ∆_γ : H(V) → H(V) by

$$\Delta_\gamma = I - S_\gamma. \qquad (13)$$

Clearly, ∆_γ = γ∆_A + (1 − γ)∆_H. By Propositions 1 and 2, it is easy to see that:

Proposition 3.
$\Omega_\gamma(f) = \langle f, \Delta_\gamma f \rangle$.
2.4 Regularization
Define a function $y$ in $H(V)$ with $y(v) = 1$ or $-1$ if vertex $v$ is labeled as positive or negative, and $y(v) = 0$ if it is not labeled. The classification problem can be regarded as the problem of finding a function $f$ which reproduces the target function $y$ to a sufficient degree of accuracy while being smooth in the sense quantified by the above smoothness functional. A formalization of this idea leads to the following optimization problem:
$$f^* = \operatorname*{argmin}_{f \in H(V)} \left\{ \Omega_\gamma(f) + \frac{\mu}{2}\|f - y\|^2 \right\}. \qquad (14)$$
The final classification of vertex $v$ is obtained as $\operatorname{sign} f^*(v)$. The first term in the braces is called the smoothness term or regularizer, which measures the smoothness of the function $f$, and the second term is called the fitting term, which measures its closeness to the given function $y$. The trade-off between these two competing terms is captured by a positive parameter $\mu$. Successively smoother solutions $f^*$ can be obtained by decreasing $\mu \to 0$.
Theorem 4. The solution $f^*$ of the optimization problem (14) satisfies $\Delta_\gamma f^* + \mu(f^* - y) = 0$.
Proof. By Proposition 3, we have $(\Delta_\gamma f)(v) = \left.\partial \Omega_\gamma(f)/\partial f\right|_v$. Differentiating the cost function in the braces of (14) with respect to the function $f$ completes the proof.
Corollary 5. The solution $f^*$ of the optimization problem (14) is $f^* = (1-\alpha)(I - \alpha S_\gamma)^{-1} y$, where $\alpha = 1/(1+\mu)$.
It is worth noting that the closed-form solution presented by Corollary 5 has the same form as the algorithm proposed by [17], which operates on undirected graphs.
3 Experiments
We considered the Web page categorization task on the WebKB dataset [6]. We only addressed a subset which contains the pages from four universities: Cornell, Texas, Washington, and Wisconsin. We removed pages without incoming or outgoing links, resulting in 858, 825, 1195 and 1238 pages respectively, for a total of 4116. These pages were manually classified into the following seven categories: student, faculty, staff, department, course, project and other.
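The whole method is available in closed form via Corollary 5, and Proposition 1 can be checked numerically along the way. The following sketch is our own illustrative code (not from the paper; all variable names are ours): it builds the operator $T$ of Eq. (5) from a random weighted adjacency matrix, verifies that the directly evaluated smoothness functional (4) equals the quadratic form $\langle f, (I - S_A)f \rangle$, and then computes the classification scores $f^* = (1-\alpha)(I - \alpha S_\gamma)^{-1} y$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n))               # W[h, a]: weight of directed edge h -> a
np.fill_diagonal(W, 0)
p, q = W.sum(axis=0), W.sum(axis=1)  # in-degrees p(a) and out-degrees q(h)

# Operator T of Eq. (5); S_A = T^T T and S_H = T T^T act on authorities/hubs
T = np.diag(1.0 / np.sqrt(q)) @ W @ np.diag(1.0 / np.sqrt(p))
S_A, S_H = T.T @ T, T @ T.T

# Numerical check of Proposition 1: Omega_A(f) == <f, (I - S_A) f>
f = rng.standard_normal(n)
omega = 0.5 * sum(
    W[h, u] * W[h, v] / q[h]         # co-linkage strength c_h([u, v])
    * (f[u] / np.sqrt(p[u]) - f[v] / np.sqrt(p[v])) ** 2
    for u in range(n) for v in range(n) for h in range(n))
print(np.isclose(omega, f @ (np.eye(n) - S_A) @ f))   # True

# Corollary 5 in closed form: f* = (1 - alpha)(I - alpha S_gamma)^{-1} y
gamma, alpha = 0.5, 0.95
S = gamma * S_A + (1 - gamma) * S_H                   # Eq. (12)
y = np.zeros(n); y[0], y[1] = 1.0, -1.0               # two labeled vertices
f_star = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, y)
labels = np.sign(f_star)                              # final classification
```

Solving the linear system directly stands in for inverting $(I - \alpha S_\gamma)$; on a large Web graph one would use a sparse iterative solver instead.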
We investigated two different classification tasks. The first illustrates the significance of connectivity information in classification, whereas the second stresses the importance of preserving the directionality of edges. We could assign a weight to each hyperlink according to the textual content of the web pages or the anchor text contained in the hyperlinks. However, here we are only interested in how much we can obtain from the link structure alone, and hence adopt the canonical weight function defined in Section 2.1. We first study an extreme classification problem: predicting which university the pages belong to from very few labeled training examples. Since pages within a university are well linked and cross-links between different universities are rare, we can expect that few training labels are enough to classify pages exactly based on link information only. For each of the universities, we in turn viewed the corresponding pages as positive examples and the pages from the remaining universities as negative examples. We randomly drew two pages as the training examples, under the constraint that there is at least one labeled instance for each class. Parameters were set to $\gamma = 0.50$ and $\alpha = 0.95$. (In fact, in this experiment the tuning parameters have almost no influence on the result.) Since the Web graph is not connected, some small isolated subgraphs may not contain labeled instances. The values of our classifying function on the pages contained in these subgraphs will be zero, and we simply treat these pages as negative examples. This is consistent with the search engine ranking techniques [2, 13]. We compare our method with SVMs using a kernel matrix $K$ constructed as $K = W^T W$ [11], where $W$ denotes the adjacency matrix of the Web graph and $W^T$ denotes the transpose of $W$.
The test errors, averaged over 100 training sample sets for both our method and SVMs, are summarized in the following table:

             Cornell        Texas          Washington     Wisconsin
our method   0.03 (±0.00)   0.02 (±0.01)   0.01 (±0.00)   0.02 (±0.00)
SVMs         0.42 (±0.03)   0.39 (±0.03)   0.40 (±0.02)   0.43 (±0.02)

However, to be fair, we should state that the kernel matrix that we used in the SVM may not be the best possible kernel matrix for this task — this is an ongoing research issue which is not the topic of the present paper. The other investigated task is to discriminate the student pages in a university from the non-student pages in the same university. As a baseline, we applied our regularization method on the undirected graph [17] obtained by treating links as undirected or bidirectional, i.e., the affinity matrix is defined to be $W^T + W$. We use AUC scores to measure the performance of the algorithms. The experimental results in Figure 3(a)-3(d) clearly demonstrate that taking the directionality of edges into account can yield substantial accuracy gains. In addition, we also studied the influence of different choices of the parameters $\gamma$ and $\alpha$; we used the Cornell Web for that purpose and sampled 10 labeled training pages. Figure 3(e) shows that relatively small values of $\alpha$ are more suitable. We think this is because the subgraphs in each university are quite small, limiting the information conveyed in the graph structure. The influence of $\gamma$ is shown in Figure 3(f). The performance curve shows that large values of $\gamma$ are preferable. This confirms the conjecture that the co-link structure among authority pages is much more important than that within the hub set.
[Figure 3 (plots omitted): panels (a) Cornell, (b) Texas, (c) Washington, (d) Wisconsin plot AUC against the number of labeled points (2-20) for the directed ($\gamma = 1$, $\alpha = 0.10$) and undirected ($\alpha = 0.10$) methods; panels (e) and (f) plot AUC against the parameters $\alpha$ and $\gamma$ on the Cornell data.]
Figure 3: Classification on the WebKB dataset. Figures (a)-(d) depict the AUC scores of the directed and undirected regularization methods on the classification problem student vs. non-student in each university. Figures (e)-(f) illustrate the influence of the different choices of the parameters $\alpha$ and $\gamma$.
4 Conclusions
We proposed a general regularization framework on directed graphs, which has been validated on a real-world Web data set. The remaining problem is how to choose suitable values for the parameters contained in this approach. In addition, it is worth noting that this framework can be applied without any essential changes to bipartite graphs, e.g. to graphs describing customers' purchase behavior in market basket analysis. Moreover, in the absence of labeled instances, this framework can be utilized in an unsupervised setting as a (spectral) clustering method for directed or bipartite graphs. Due to lack of space, we have not been able to give a thorough discussion of these topics.
Acknowledgments
We would like to thank David Gondek for his help on this work.
References
[1] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and regression on large graphs. In COLT, 2004.
[2] S. Brin and L. Page.
The anatomy of a large scale hypertextual web search engine. In Proc. 7th Intl. WWW Conf., 1998. [3] A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, and J. Wiener. Graph structure in the Web. In Proc. 9th Intl. WWW Conf., 2000. [4] S. Chakrabarti, B. Dom, and P. Indyk. Enhanced hypertext categorization using hyperlinks. In Proc. ACM SIGMOD Conf., 1998. [5] F. Chung. Spectral Graph Theory. Number 92 in Regional Conference Series in Mathematics. American Mathematical Society, 1997. [6] M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and S. Slattery. Learning to extract symbolic knowledge from the World Wide Web. In Proc. 15th National Conf. on Artificial Intelligence, 1998. [7] J. Dean and M. Henzinger. Finding related Web pages in the World Wide Web. In Proc. 8th Intl. WWW Conf., 1999. [8] C. Ding, X. He, P. Husbands, H. Zha, and H. D. Simon. PageRank, HITS and a unified framework for link analysis. In Proc. 25th ACM SIGIR Conf., 2001. [9] G. Flake, S. Lawrence, C. L. Giles, and F. Coetzee. Self-organization and identification of Web communities. IEEE Computer, 35(3):66–71, 2002. [10] C. Lee Giles, K. Bollacker, and S. Lawrence. CiteSeer: An automatic citation indexing system. In Proc. 3rd ACM Conf. on Digital Libraries, 1998. [11] T. Joachims, N. Cristianini, and J. Shawe-Taylor. Composite kernels for hypertext categorisation. In ICML, 2001. [12] T. Joachims. Transductive learning via spectral graph partitioning. In ICML, 2003. [13] J. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604–632, 1999. [14] R. Lempel and S. Moran. SALSA: the stochastic approach for link-structure analysis. ACM Transactions on Information Systems, 19(2):131–160, 2001. [15] A. Smola and R. Kondor. Kernels and regularization on graphs. In Learning Theory and Kernel Machines. Springer-Verlag, Berlin-Heidelberg, 2003. [16] V. N. Vapnik. Statistical learning theory. Wiley, NY, 1998. [17] D. 
Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2003.
[18] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
A Proof of Proposition 1
Expand the right-hand side of Eq. (4):
$$\Omega_A(f) = \sum_{u,v} \sum_h c_h([u,v]) \left( \frac{f^2(u)}{p(u)} - \frac{f(u)f(v)}{\sqrt{p(u)p(v)}} \right) = \sum_u \sum_v \sum_h c_h([u,v]) \frac{f^2(u)}{p(u)} - \sum_{u,v} \sum_h c_h([u,v]) \frac{f(u)f(v)}{\sqrt{p(u)p(v)}}. \qquad (15)$$
By substituting Eq. (3), the first term in the above equality can be rewritten as
$$\sum_u \sum_v \sum_h \frac{w([h,u])\,w([h,v])}{q(h)} \frac{f^2(u)}{p(u)} = \sum_u \sum_h \frac{w([h,u])}{p(u)} f^2(u) = \sum_u f^2(u). \qquad (16)$$
In addition, the second term in Eq. (15) can be transformed into
$$\sum_{u,v} \sum_h \frac{w([h,u])\,w([h,v])}{q(h)} \frac{f(u)f(v)}{\sqrt{p(u)p(v)}} = \sum_{u,v} \sum_h f(u) \frac{w([h,u])}{\sqrt{q(h)p(u)}} \frac{w([h,v])}{\sqrt{q(h)p(v)}} f(v). \qquad (17)$$
Substituting Eqs. (16) and (17) into (15), we have
$$\Omega_A(f) = \sum_u f^2(u) - \sum_{u,v} \sum_h f(u) \frac{w([h,u])}{\sqrt{q(h)p(u)}} \frac{w([h,v])}{\sqrt{q(h)p(v)}} f(v). \qquad (18)$$
This completes the proof.
|
2004
|
137
|
2,549
|
Class-size Independent Generalization Analysis of Some Discriminative Multi-Category Classification Methods
Tong Zhang
IBM T.J. Watson Research Center, Yorktown Heights, NY 10598
tzhang@watson.ibm.com
Abstract
We consider the problem of deriving class-size independent generalization bounds for some regularized discriminative multi-category classification methods. In particular, we obtain an expected generalization bound for a standard formulation of multi-category support vector machines. Based on the theoretical result, we argue that the formulation over-penalizes misclassification error, which in theory may lead to poor generalization performance. A remedy, based on a generalization of multi-category logistic regression (conditional maximum entropy), is then proposed, and its theoretical properties are examined.
1 Introduction
We consider the multi-category classification problem, where we want to find a predictor $p : X \to Y$, where $X$ is a set of possible inputs and $Y$ is a discrete set of possible outputs. In many applications, the output space $Y$ can be extremely large, and may be regarded as infinite for practical purposes. For example, in natural language processing and sequence analysis, the input can be an English sentence, and the output can be a parse or a translation of the sentence. For such applications, the number of potential outputs can be exponential in the length of the input sentence. As another example, in machine-learning-based web-page search and ranking, the input is a set of keywords and the output space consists of all web pages. In order to handle such application tasks, from the theoretical point of view we do not need to assume that the output space $Y$ is finite, so it is crucial to obtain generalization bounds that are independent of the size of $Y$. For such large-scale applications, one often has a routine that maps each $x \in X$ to a subset of candidates $\mathrm{GEN}(x) \subset Y$, so that the desired output associated with $x$ belongs to $\mathrm{GEN}(x)$.
For example, for web-page search, $\mathrm{GEN}(x)$ consists of all pages that contain one or more keywords in $x$. For sequence annotation, $\mathrm{GEN}(x)$ may include all annotation sequences that are consistent. Although the set $\mathrm{GEN}(x)$ may significantly reduce the size of the set of potential outputs $Y$, it can still be large. Therefore it is important that our learning bounds are independent of the size of $\mathrm{GEN}(x)$. We consider the general setting of learning in Hilbert spaces, since it includes the popular kernel methods. Let our feature space $H$ be a reproducing kernel Hilbert space with dot product $\cdot$. For a weight vector $w \in H$, we use the notation $\|w\|_H^2 = w \cdot w$. We associate each possible input/output pair $(x, y) \in X \times Y$ with a feature vector $f_{x,y} \in H$. Our classifier is characterized by a weight vector $w \in H$, with the following classification rule:
$$p_w(x) = \arg\max_{c \in \mathrm{GEN}(x)} w \cdot f_{x,c}. \qquad (1)$$
Note that computational issues are ignored in this paper. In particular, we assume that the above decision can be computed efficiently (either approximately or exactly) even when $\mathrm{GEN}(x)$ is large. In practice, this is often possible either by heuristic search or by dynamic programming (when $\mathrm{GEN}(x)$ has certain local-dependency structures). In this paper we are only interested in the learning performance, so we will not discuss the computational aspect. We assume that the input/output pair $(x, y) \in X \times Y$ is drawn from an unknown underlying distribution $D$. The quality of the predictor $w$ is measured by some loss function. In this paper, we focus on the expected classification error with respect to $D$:
$$\ell_D(w) = E_{(X,Y)}\, I(p_w(X), Y), \qquad (2)$$
where $(X, Y)$ is drawn from $D$, and $I$ is the standard 0-1 classification error: $I(Y', Y) = 0$ when $Y' = Y$ and $I(Y', Y) = 1$ when $Y' \ne Y$. The general setup we described above is useful for many application problems, and has been investigated, for example, in [2, 6]. The important issue of class-size independent (or weakly dependent) generalization analysis has also been discussed there.
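The classification rule (1) is just an arg-max over the candidate set. A minimal sketch (our own illustrative code; the candidate names and joint feature vectors are hypothetical):

```python
import numpy as np

def predict(w, feature_map, gen_x):
    """Rule (1): return arg max over c in GEN(x) of w . f_{x,c}.
    feature_map(c) returns the joint feature vector f_{x,c}."""
    scores = {c: float(np.dot(w, feature_map(c))) for c in gen_x}
    return max(scores, key=scores.get)

# Hypothetical 3-candidate example with 2-dimensional joint features
feats = {"a": np.array([1.0, 0.0]),
         "b": np.array([0.0, 1.0]),
         "c": np.array([0.5, 0.5])}
w = np.array([0.2, 0.9])
print(predict(w, feats.get, ["a", "b", "c"]))  # "b": score 0.9 beats 0.2 and 0.55
```

For the large candidate sets discussed above, the dictionary of scores would of course be replaced by heuristic search or dynamic programming over GEN(x).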
Consider a set of training data $S = \{(X_i, Y_i), i = 1, \ldots, n\}$, where we assume that for each $i$, $Y_i \in \mathrm{GEN}(X_i)$. We would like to find $\hat{w}_S \in H$ such that the classification error $\ell_D(\hat{w}_S)$ is as small as possible. This paper studies regularized discriminative learning methods that estimate a weight vector $\hat{w}_S \in H$ by solving the following optimization problem:
$$\hat{w}_S = \arg\min_{w \in H} \left[ \frac{1}{n} \sum_{i=1}^n L(w, X_i, Y_i) + \frac{\lambda}{2} \|w\|_H^2 \right], \qquad (3)$$
where $\lambda \ge 0$ is an appropriately chosen regularization parameter, and $L(w, X, Y)$ is a loss function which is convex in $w$. In this paper, we focus on loss functions of the following form:
$$L(w, X, Y) = \psi\!\left( \sum_{c \in \mathrm{GEN}(X)\setminus Y} \phi\big(w \cdot (f_{X,Y} - f_{X,c})\big) \right),$$
where $\psi$ and $\phi$ are appropriately chosen real-valued functions. Typically $\psi$ is chosen as an increasing function and $\phi$ as a decreasing function, selected so that (3) is a convex optimization problem. The intuition behind this method is that the resulting optimization formulation favors large values of $w \cdot (f_{X_i,Y_i} - f_{X_i,c})$ for all $c \in \mathrm{GEN}(X_i)\setminus Y_i$. Therefore, it favors a weight vector $w \in H$ such that $w \cdot f_{X_i,Y_i} = \arg\max_{c \in \mathrm{GEN}(X_i)} w \cdot f_{X_i,c}$, which encourages the correct classification rule in (1). The regularization term $\frac{\lambda}{2}\|w\|_H^2$ is included for capacity control, which has become standard practice in machine learning. Two of the most important methods used in practice, multi-category support vector machines [7] and penalized multi-category logistic regression (conditional maximum entropy with Gaussian smoothing [1]), can be regarded as special cases of (3). The purpose of this paper is to study their generalization behavior. In particular, we are interested in generalization bounds that are independent of the size of $\mathrm{GEN}(X_i)$.
2 Multi-category Support Vector Machines
We consider the multi-category support vector machine method proposed in [7].
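To make the loss template concrete: choosing $\psi$ as the identity and $\phi$ as the hinge gives the per-example multi-category SVM loss discussed next, while a soft-max-style pair gives the maximum entropy loss. The sketch below is our own illustrative code (toy vectors and names are hypothetical, not from the paper):

```python
import numpy as np

def template_loss(w, F_true, F_wrong,
                  psi=lambda z: z,
                  phi=lambda m: np.maximum(0.0, 1.0 - m)):
    """L(w, X, Y) = psi( sum_c phi(w . (f_{X,Y} - f_{X,c})) ).
    F_true: feature vector f_{X,Y}; F_wrong: rows are f_{X,c}, c != Y."""
    margins = (F_true - F_wrong) @ w   # one margin per competing output
    return psi(phi(margins).sum())

# Hypothetical example: two competitors with margins 3 and 0.5,
# so the hinge losses are h(3) = 0 and h(0.5) = 0.5
w = np.array([1.0, -1.0])
F_true = np.array([2.0, 0.0])
F_wrong = np.array([[0.0, 1.0], [1.5, 0.0]])
print(template_loss(w, F_true, F_wrong))   # 0.5
```

Note that $\psi$ increasing and $\phi$ decreasing preserve convexity in $w$, as the text requires for (3) to remain a convex problem.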
It is a special case of (3), with $\hat{w}_S$ computed from the following formula:
$$\hat{w}_S = \arg\min_{w \in H} \left[ \frac{1}{n} \sum_{i=1}^n \sum_{c \in \mathrm{GEN}(X_i)\setminus Y_i} h\big(w \cdot (f_{X_i,Y_i} - f_{X_i,c})\big) + \frac{\lambda}{2} \|w\|_H^2 \right], \qquad (4)$$
where $h(z) = \max(1 - z, 0)$ is the hinge loss used in the standard SVM formulation. From the asymptotic statistical point of view, this formulation has some drawbacks, in that there are cases where the method does not lead to a classifier that achieves the Bayes error [9] (inconsistency). A Bayes-consistent remedy has been proposed in [4]. However, the method based on (4) has some attractive properties, and has been successfully used for some practical problems. We are interested in the generalization performance of (4). As we shall see, this formulation performs very well in the linearly separable (or near-separable) case. Our analysis also reveals a problem of this method for non-separable problems. Specifically, the formulation over-penalizes classification error. Possible remedies will be suggested at the end of the section. We start with the following theorem, which specifies a generalization bound in a form often referred to as an oracle inequality. That is, it bounds the generalization performance of the SVM method (4) in terms of the best possible true multi-category SVM loss. The proof is left to Appendix B.
Theorem 2.1 Let $M = \sup_X \sup_{Y,Y' \in \mathrm{GEN}(X)} \|f_{X,Y} - f_{X,Y'}\|_H$. The expected generalization error of (4) can be bounded as:
$$E_S\, \ell_D(\hat{w}_S) \le E_S E_{(X,Y)} \sup_{c \in \mathrm{GEN}(X)\setminus Y} h\big(\hat{w}_S \cdot (f_{X,Y} - f_{X,c})\big) \le \frac{\max(\lambda n, M^2) + M^2}{\lambda n} \inf_{w \in H} \left[ E_{(X,Y)} \sum_{c \in \mathrm{GEN}(X)\setminus Y} h\big(w \cdot (f_{X,Y} - f_{X,c})\big) + \frac{\lambda n \|w\|_H^2}{2(n+1)} \right],$$
where $E_S$ is the expectation with respect to the training data. Note that the generalization bound does not depend on the size of $\mathrm{GEN}(X)$, which is what we want to achieve. The left-hand side of the theorem bounds the classification error of the multi-category SVM classifier in terms of $\sup_{c \in \mathrm{GEN}(X)\setminus Y} h(\hat{w}_S \cdot (f_{X,Y} - f_{X,c}))$, while the right-hand side is in terms of $\sum_{c \in \mathrm{GEN}(X)\setminus Y} h(w \cdot (f_{X,Y} - f_{X,c}))$. There is a mismatch here.
The latter is a very loose bound, since the summation over-counts classification errors when multiple errors are made at the same point. In fact, although the class-size dependency does not enter our generalization analysis, it may well enter the summation term $\sum_{c \in \mathrm{GEN}(X)\setminus Y} h(w \cdot (f_{X,Y} - f_{X,c}))$ when multiple errors are made at the same point. We believe that this is a serious flaw of the method, which we will try to remedy later. However, the bound can be quite tight in the near-separable case, when $\sum_{c \in \mathrm{GEN}(X)\setminus Y} h(\hat{w}_S \cdot (f_{X,Y} - f_{X,c}))$ is small. The following corollary gives such a result:
Corollary 2.1 Assume that there is a large-margin separator $w^* \in H$ such that for each data point $(X, Y)$, the following margin condition holds:
$$\forall c \in \mathrm{GEN}(X)\setminus Y : \quad w^* \cdot f_{X,Y} \ge w^* \cdot f_{X,c} + 1.$$
Then in the limit $\lambda \to 0$, the expected generalization error of (4) can be bounded as:
$$E_S\, \ell_D(\hat{w}_S) \le \frac{\|w^*\|_H^2}{n+1} \sup_X \sup_{Y,Y' \in \mathrm{GEN}(X)} \|f_{X,Y} - f_{X,Y'}\|_H^2,$$
where $E_S$ is the expectation with respect to the training data.
Proof. Just choose $w^*$ on the right-hand side of Theorem 2.1. □
The above result for (4) gives a class-size independent bound for large-margin separable problems. The bound generalizes a similar result for the two-class hard-margin SVM. It also matches a bound for the multi-class perceptron in [2]. To our knowledge, this is the first result showing that the generalization performance of a batch large-margin algorithm such as (4) can be class-size independent (at least in the separable case). Previous results in [2, 6], relying on covering number analysis, lead to bounds that depend on the size of $Y$ (although the result in [6] is of a different style). Our analysis also implies that the multi-category classification method (4) has good generalization behavior for separable problems.
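The over-counting is easy to see numerically: when several competitors violate the margin at the same point, the summed hinge in (4) charges the point once per violation, while a sup-style loss charges only the worst one. A small illustration with numbers of our own choosing:

```python
import numpy as np

# Hypothetical margins m_c = w . (f_{X,Y} - f_{X,c}) against four competing
# outputs; three of them beat the true output by the same amount
margins = np.array([-0.5, -0.5, -0.5, 2.0])
h = np.maximum(0.0, 1.0 - margins)    # hinge losses h(m_c)
print(h.sum())   # 4.5 -- the summed loss in (4) counts the error three times
print(h.max())   # 1.5 -- the sup over competitors counts it once
```
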
However, as pointed out earlier, for non-separable problems the formulation over-penalizes classification error, since the summation may count the classification error at a point multiple times when multiple mistakes are made at that point. A remedy is to replace the summation $\sum_{c \in \mathrm{GEN}(X_i)\setminus Y_i}$ in (4) by the sup operator $\sup_{c \in \mathrm{GEN}(X_i)\setminus Y_i}$, as we have used for bounding the classification error on the left-hand side of Theorem 2.1. This is done in [3]. However, like (4), the resulting formulation is also inconsistent. Instead of using a hard-sup operator, we may also use a soft-sup operator, which can possibly lead to consistency. For example, considering the equality $\sup_c |h_c| = \lim_{p \to \infty} (\sum_c |h_c|^p)^{1/p}$, we may approximate the right-hand-side limit with a large $p$. Another more interesting formulation is to consider $\sup_c h_c = \lim_{p \to \infty} p^{-1} \ln(\sum_c \exp(p h_c))$, which leads to a generalization of the conditional maximum entropy method.
3 Large Margin Discriminative Maximum Entropy Method
Based on the motivation given at the end of the last section, we propose the following generalization of maximum entropy (multi-category logistic regression) with a Gaussian prior (see [1]). It introduces a margin parameter into the standard maximum entropy formulation, and can be regarded as a special case of (3):
$$\hat{w}_S = \arg\min_{w \in H} \left[ \frac{1}{n} \sum_{i=1}^n \frac{1}{p} \ln\!\left( 1 + \sum_{c \in \mathrm{GEN}(X_i)\setminus Y_i} e^{p(\gamma - w \cdot (f_{X_i,Y_i} - f_{X_i,c}))} \right) + \frac{\lambda}{2} \|w\|_H^2 \right], \qquad (5)$$
where $\gamma$ is a margin parameter, and $p > 0$ is a scaling factor (which in theory can also be removed by a redefinition of $w$ and $\gamma$). If we choose $\gamma = 0$, then this formulation is equivalent to the standard maximum entropy method. If we pick the margin parameter $\gamma = 1$ and let $p \to \infty$, then
$$\frac{1}{p} \ln\!\left( 1 + \sum_{c \in \mathrm{GEN}(X_i)\setminus Y_i} e^{p(\gamma - w \cdot (f_{X_i,Y_i} - f_{X_i,c}))} \right) \to \sup_{c \in \mathrm{GEN}(X_i)\setminus Y_i} h\big(w \cdot (f_{X_i,Y_i} - f_{X_i,c})\big),$$
where $h(z) = \max(0, 1-z)$ is the hinge loss used in (4). In this case, the formulation reduces to (4) but with $\sum_{c \in \mathrm{GEN}(X_i)\setminus Y_i}$ replaced by $\sup_{c \in \mathrm{GEN}(X_i)\setminus Y_i}$.
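The soft-sup construction in (5) can be checked directly: with $\gamma = 1$, the per-example loss $\frac{1}{p}\ln(1 + \sum_c e^{p(1 - m_c)})$ decreases toward $\sup_c h(m_c)$ as the scaling factor $p$ grows. A sketch with margins of our own choosing (our own code; a numerically stable log-sum-exp is used to avoid overflow for large $p$):

```python
import numpy as np

def soft_sup_loss(margins, gamma=1.0, p=10.0):
    """Per-example loss from (5): (1/p) * log(1 + sum_c exp(p*(gamma - m_c)))."""
    z = np.append(p * (gamma - np.asarray(margins, float)), 0.0)  # "1 +" is exp(0)
    zmax = z.max()
    return (zmax + np.log(np.exp(z - zmax).sum())) / p            # stable log-sum-exp

margins = np.array([0.3, -0.2, 1.5])                 # hypothetical margins
hard = np.maximum(0.0, 1.0 - margins).max()          # sup_c h(m_c) = 1.2
for p in (1.0, 10.0, 1000.0):
    print(round(soft_sup_loss(margins, p=p), 4))     # approaches 1.2 from above
```

For fixed $p$ the loss exceeds the hard sup by at most $\frac{1}{p}\ln|\mathrm{GEN}(x)|$, which is the over-counting control discussed next.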
As discussed at the end of the last section, this solves the problem of over-counting the classification error. In general, even with a finite scaling factor $p$, the log transform in (5) guarantees that one penalizes misclassification error at most $\frac{1}{p} \ln |\mathrm{GEN}(X_i)|$ times at a point, where $|\mathrm{GEN}(X_i)|$ is the size of $\mathrm{GEN}(X_i)$, while in (4) one may potentially over-penalize $|\mathrm{GEN}(X_i)|$ times. Clearly this is a desirable effect for non-separable problems. Methods of the form (5) have many attractive properties. In particular, we are able to derive class-size independent generalization bounds for this method. The proof of the following theorem is given in Appendix C.
Theorem 3.1 Let $M = \sup_X \sup_{Y,Y' \in \mathrm{GEN}(X)} \|f_{X,Y} - f_{X,Y'}\|_H$. Define the loss $L(w, x, y)$ as
$$L(w, x, y) = \frac{1}{p} \ln\!\left( 1 + \sum_{c \in \mathrm{GEN}(x)\setminus y} e^{p(\gamma - w \cdot (f_{x,y} - f_{x,c}))} \right),$$
and let
$$Q_\lambda = \inf_{w \in H} \left[ E_{(X,Y)} L(w, X, Y) + \frac{\lambda n}{2(n+1)} \|w\|_H^2 \right].$$
The expected generalization error of (5) can be bounded as:
$$E_S E_{(X,Y)} L(\hat{w}_S, X, Y) \le Q_\lambda + \frac{M^2}{\lambda n} \big(1 - e^{-p Q_\lambda}\big),$$
where $E_S$ is the expectation with respect to the training data. Theorem 3.1 gives a class-size independent generalization bound for (5). Note that the left-hand side is the true loss of the $\hat{w}_S$ from (5), and the right-hand side is specified in terms of the best possible regularized true loss $Q_\lambda$, plus a penalty term that is no larger than $M^2/(\lambda n)$. It is clear that this generalization bound is class-size independent. Moreover, unlike in Theorem 2.1, the loss function on the left-hand side matches the loss function on the right-hand side in Theorem 3.1. These are not trivial properties. In fact, most learning methods do not have these desirable properties. We believe this is a great advantage of the maximum entropy-type discriminative learning method in (5). It implies that this class of algorithms is suitable for problems with a large number of classes. Moreover, we can see that the generalization performance is well behaved no matter what values of $p$ and $\gamma$ we choose.
If we take $\gamma = 0$ and $p = 1$, then we obtain a generalization bound for the popular maximum entropy method with a Gaussian prior, which has been widely used in natural language processing applications. To our knowledge, this is the first generalization bound derived for this method. Our result not only shows the importance of Gaussian-prior regularization, but also implies that the regularized conditional maximum entropy method has very desirable generalization behavior. Another interesting special case of (5) is to let $\gamma = 1$ and $p \to \infty$. For simplicity we only consider the case where $|\mathrm{GEN}(X)|$ is finite (but can be arbitrarily large). In this case, we note that
$$0 \le L(w, X, Y) - \sup_{c \in \mathrm{GEN}(X)\setminus Y} h\big(w \cdot (f_{X,Y} - f_{X,c})\big) \le \frac{\ln |\mathrm{GEN}(X)|}{p}.$$
We thus obtain from Theorem 3.1 the bound
$$E_S E_{(X,Y)} \sup_{c \in \mathrm{GEN}(X)\setminus Y} h\big(\hat{w}_S \cdot (f_{X,Y} - f_{X,c})\big) \le E_X \frac{\ln |\mathrm{GEN}(X)|}{p} + \frac{M^2}{\lambda n} + \inf_{w \in H} \left[ E_{(X,Y)} \sup_{c \in \mathrm{GEN}(X)\setminus Y} h\big(w \cdot (f_{X,Y} - f_{X,c})\big) + \frac{\lambda \|w\|_H^2}{2} \right].$$
Now we can take a sufficiently large $p$ such that the term $E_X \ln |\mathrm{GEN}(X)| / p$ becomes negligible. Letting $p \to \infty$, the result implies a bound for the SVM method in [3]. For non-separable problems, this bound is clearly superior to the SVM bound in Theorem 2.1, since the right-hand side replaces the summation $\sum_{c \in \mathrm{GEN}(X)\setminus Y}$ by the sup operator $\sup_{c \in \mathrm{GEN}(X)\setminus Y}$. In theory, this satisfactorily solves the problem of over-penalizing misclassification error. Moreover, an advantage over [3] is that for some $p$, consistency can be achieved. Our analysis also establishes a bridge between the Gaussian-smoothed maximum entropy method [1] and the SVM method in [3].
4 Conclusion
We studied the generalization performance of some regularized multi-category classification methods. In particular, we derived a class-size independent generalization bound for a standard formulation of multi-category support vector machines. Based on the theoretical investigation, we showed that this method works well for linearly separable problems.
However, it over-penalizes misclassification error, leading to loose generalization bounds in the non-separable case. A remedy, based on a generalization of the maximum entropy method, is proposed. Moreover, we are able to derive class-size independent bounds for the newly proposed formulation, which implies that this class of methods (including standard maximum entropy) is suitable for classification problems with a very large number of classes. We showed that, in theory, the new formulation provides a satisfactory solution to the problem of over-penalizing misclassification error.
A A general stability bound
The following lemma is essentially a variant of similar stability results for regularized learning systems used in [8, 10]. We include the proof sketch for completeness.
Lemma A.1 Consider a sequence of convex functions $L_i(w)$ for $i = 1, 2, \ldots$ Define for $k = 1, 2, \ldots$
$$w_k = \arg\min_w \left[ \sum_{i=1}^k L_i(w) + \frac{\lambda n}{2} \|w\|_H^2 \right].$$
Then for all $k \ge 1$, there exists a subgradient (cf. [5]) $\nabla L_{k+1}(w_{k+1})$ of $L_{k+1}$ at $w_{k+1}$ such that
$$w_{k+1} = -\frac{1}{\lambda n} \sum_{i=1}^{k+1} \nabla L_i(w_{k+1}), \qquad \|w_k - w_{k+1}\|_H \le \frac{1}{\lambda n} \|\nabla L_{k+1}(w_{k+1})\|_H.$$
Proof Sketch. The first equality is the first-order condition for the optimization problem [5] of which $w_{k+1}$ is the solution. Now, subtracting this condition at $w_k$ and $w_{k+1}$, we have:
$$-\lambda n (w_{k+1} - w_k) = \nabla L_{k+1}(w_{k+1}) + \sum_{i=1}^k \big( \nabla L_i(w_{k+1}) - \nabla L_i(w_k) \big).$$
Multiplying both sides by $w_{k+1} - w_k$, we obtain
$$-\lambda n \|w_{k+1} - w_k\|_H^2 = \nabla L_{k+1}(w_{k+1}) \cdot (w_{k+1} - w_k) + \sum_{i=1}^k \big( \nabla L_i(w_{k+1}) - \nabla L_i(w_k) \big) \cdot (w_{k+1} - w_k).$$
Note that $(\nabla L_i(w_{k+1}) - \nabla L_i(w_k)) \cdot (w_{k+1} - w_k) = d_{L_i}(w_k, w_{k+1}) + d_{L_i}(w_{k+1}, w_k)$, where $d_L(w, w') = L(w') - L(w) - \nabla L(w) \cdot (w' - w)$ is often called the Bregman divergence of $L$, which is well known to be non-negative for any convex function $L$ (this claim is also easy to verify by definition). We thus have $(\nabla L_i(w_{k+1}) - \nabla L_i(w_k)) \cdot (w_{k+1} - w_k) \ge 0$. It follows that
$$-\lambda n \|w_{k+1} - w_k\|_H^2 \ge \nabla L_{k+1}(w_{k+1}) \cdot (w_{k+1} - w_k) \ge -\|\nabla L_{k+1}(w_{k+1})\|_H \|w_{k+1} - w_k\|_H.$$
Canceling the factor $\|w_{k+1} - w_k\|_H$, we obtain the second inequality.
□
B Proof Sketch of Theorem 2.1
Consider training samples $(X_i, Y_i)$ for $i = 1, \ldots, n+1$. Let $\tilde{w}_k$ be the solution of (4) with the training sample $(X_k, Y_k)$ removed from the set (that is, the summation is $\sum_{i=1, i \ne k}^{n+1}$), and let $\tilde{w}$ be the solution of (4) but with the summation $\sum_{i=1}^n$ replaced by $\sum_{i=1}^{n+1}$. For notational simplicity, let $z_{k,c} = \tilde{w} \cdot (f_{X_k,Y_k} - f_{X_k,c})$ for $c \in \mathrm{GEN}(X_k)$. It follows from Lemma A.1 that
$$\|\tilde{w}\|_H^2 = -\frac{1}{\lambda n} \sum_{k=1}^{n+1} \sum_{c \in \mathrm{GEN}(X_k)\setminus Y_k} h'(z_{k,c})\, z_{k,c}, \qquad \|\tilde{w}_k - \tilde{w}\|_H \le -\frac{M}{\lambda n} \sum_{c \in \mathrm{GEN}(X_k)\setminus Y_k} h'(z_{k,c}),$$
where $h'(\cdot)$ denotes a subgradient of $h(\cdot)$. Therefore, using the inequality $-h'(z) \le h(z) - h'(z)z$, we have
$$\sup_{c \in \mathrm{GEN}(X_k)\setminus Y_k} \big[ h(\tilde{w}_k \cdot (f_{X_k,Y_k} - f_{X_k,c})) - h(z_{k,c}) \big] \le \|\tilde{w}_k - \tilde{w}\|_H M \le -\frac{M^2}{\lambda n} \sum_{c \in \mathrm{GEN}(X_k)\setminus Y_k} h'(z_{k,c}) \le \frac{M^2}{\lambda n} \sum_{c \in \mathrm{GEN}(X_k)\setminus Y_k} \big[ h(z_{k,c}) - h'(z_{k,c})\, z_{k,c} \big].$$
Summing over $k = 1, \ldots, n+1$, we obtain
$$\sum_{k=1}^{n+1} \sup_{c \in \mathrm{GEN}(X_k)\setminus Y_k} \big[ h(\tilde{w}_k \cdot (f_{X_k,Y_k} - f_{X_k,c})) - h(z_{k,c}) \big] \le \frac{M^2}{\lambda n} \sum_{k=1}^{n+1} \sum_{c \in \mathrm{GEN}(X_k)\setminus Y_k} \big[ h(z_{k,c}) - h'(z_{k,c})\, z_{k,c} \big] = \frac{M^2}{\lambda n} \sum_{k=1}^{n+1} \sum_{c \in \mathrm{GEN}(X_k)\setminus Y_k} h(z_{k,c}) + \|\tilde{w}\|_H^2 M^2.$$
Therefore, given an arbitrary $w \in H$, we have
$$\sum_{k=1}^{n+1} \sup_{c \in \mathrm{GEN}(X_k)\setminus Y_k} h(\tilde{w}_k \cdot (f_{X_k,Y_k} - f_{X_k,c})) \le \left(1 + \frac{M^2}{\lambda n}\right) \sum_{k=1}^{n+1} \sum_{c} h(z_{k,c}) + \|\tilde{w}\|_H^2 M^2 \le \max\!\left(1 + \frac{M^2}{\lambda n}, \frac{2M^2}{\lambda n}\right) \left[ \sum_{k=1}^{n+1} \sum_{c} h(z_{k,c}) + \frac{\lambda n}{2} \|\tilde{w}\|_H^2 \right] \le \max\!\left(1 + \frac{M^2}{\lambda n}, \frac{2M^2}{\lambda n}\right) \left[ \sum_{k=1}^{n+1} \sum_{c} h(w \cdot (f_{X_k,Y_k} - f_{X_k,c})) + \frac{\lambda n}{2} \|w\|_H^2 \right].$$
Now, taking the expectation with respect to the training data, we obtain the bound.
C Proof Sketch of Theorem 3.1
Similar to the proof of Theorem 2.1, we consider training samples $(X_i, Y_i)$ for $i = 1, \ldots, n+1$. Let $\tilde{w}_k$ be the solution of (5) with the training sample $(X_k, Y_k)$ removed from the set (that is, the summation is $\sum_{i=1, i \ne k}^{n+1}$), and let $\tilde{w}$ be the solution of (5) but with the summation $\sum_{i=1}^n$ replaced by $\sum_{i=1}^{n+1}$. It follows from Lemma A.1 that
$$\|\tilde{w}_k - \tilde{w}\|_H \le \frac{1}{\lambda n} \|\nabla L(\tilde{w}, X_k, Y_k)\|_H \le \frac{M}{\lambda n} \big(1 - e^{-p L(\tilde{w}, X_k, Y_k)}\big).$$
Therefore
$$L(\tilde{w}_k, X_k, Y_k) - L(\tilde{w}, X_k, Y_k) \le \frac{M^2}{\lambda n} \big(1 - e^{-p L(\tilde{w}, X_k, Y_k)}\big).$$
Now summing over $k$, we obtain
$$\frac{1}{n+1} \sum_{k=1}^{n+1} L(\tilde{w}_k, X_k, Y_k) \le \frac{1}{n+1} \sum_{k=1}^{n+1} L(\tilde{w}, X_k, Y_k) + \frac{M^2}{\lambda n} \left( 1 - \frac{1}{n+1} \sum_{k=1}^{n+1} e^{-p L(\tilde{w}, X_k, Y_k)} \right).$$
Taking the expectation with respect to the training data, and using the following Jensen inequality:
$$-E_S \frac{1}{n+1} \sum_{k=1}^{n+1} e^{-p L(\tilde{w}, X_k, Y_k)} \le -e^{-p\, E_S \frac{1}{n+1} \sum_{k=1}^{n+1} L(\tilde{w}, X_k, Y_k)},$$
we obtain
$$E_S E_{(X_k,Y_k)} L(\tilde{w}_k, X_k, Y_k) \le E_S \sum_{k=1}^{n+1} \frac{L(\tilde{w}, X_k, Y_k)}{n+1} + \frac{M^2}{\lambda n} \left( 1 - e^{-p\, E_S \sum_{k=1}^{n+1} \frac{L(\tilde{w}, X_k, Y_k)}{n+1}} \right).$$
Now, using the fact that $E_S \sum_{k=1}^{n+1} L(\tilde{w}, X_k, Y_k) \le (n+1) Q_\lambda$ (which follows from the optimality of $\tilde{w}$), we obtain the theorem.
References
[1] Stanley Chen and Ronald Rosenfeld. A survey of smoothing techniques for ME models. IEEE Trans. Speech and Audio Processing, 8:37–50, 2000.
[2] Michael Collins. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In IWPT, 2001. Available at http://www.ai.mit.edu/people/mcollins/publications.html.
[3] Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265–292, 2001.
[4] Y. Lee, Y. Lin, and G. Wahba. Multicategory support vector machines, theory, and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 99:67–81, 2004.
[5] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970.
[6] Ben Taskar, Carlos Guestrin, and Daphne Koller. Max-margin Markov networks. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[7] J. Weston and C. Watkins. Multi-class support vector machines. Technical Report CSD-TR-98-04, Royal Holloway, 1998.
[8] Tong Zhang. Leave-one-out bounds for kernel methods. Neural Computation, 15:1397–1437, 2003.
[9] Tong Zhang. Statistical analysis of some multi-category large margin classification methods.
Journal of Machine Learning Research, 5:1225–1251, 2004.
[10] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32:56–85, 2004. With discussion.
|
2004
|
138
|
2,550
|
ℓ0-norm Minimization for Basis Selection
David Wipf and Bhaskar Rao∗
Department of Electrical and Computer Engineering, University of California, San Diego, CA 92092
dwipf@ucsd.edu, brao@ece.ucsd.edu
Abstract
Finding the sparsest, or minimum ℓ0-norm, representation of a signal given an overcomplete dictionary of basis vectors is an important problem in many application domains. Unfortunately, the required optimization problem is often intractable because there is a combinatorial increase in the number of local minima as the number of candidate basis vectors increases. This deficiency has prompted most researchers to instead minimize surrogate measures, such as the ℓ1-norm, that lead to more tractable computational methods. The downside of this procedure is that we have now introduced a mismatch between our ultimate goal and our objective function. In this paper, we demonstrate a sparse Bayesian learning-based method of minimizing the ℓ0-norm while reducing the number of troublesome local minima. Moreover, we derive necessary conditions for local minima to occur via this approach and empirically demonstrate that there are typically many fewer for general problems of interest.
1 Introduction
Sparse signal representations from overcomplete dictionaries find increasing relevance in many application domains [1, 2]. The canonical form of this problem is given by
$$\min_w \|w\|_0, \quad \text{s.t. } t = \Phi w, \qquad (1)$$
where $\Phi \in \mathbb{R}^{N \times M}$ is a matrix whose columns represent an overcomplete basis (i.e., $\mathrm{rank}(\Phi) = N$ and $M > N$), $w$ is the vector of weights to be learned, and $t$ is the signal vector. The cost function being minimized is the ℓ0-norm of $w$ (i.e., a count of the nonzero elements of $w$). In this vein, we seek weight vectors whose entries are predominantly zero but that nonetheless allow us to accurately represent $t$.
While our objective function is not differentiable, several algorithms have nonetheless been derived that (i) converge almost surely to a solution that locally minimizes (1) and, more importantly, (ii) when initialized sufficiently close, converge to a maximally sparse solution that also globally optimizes an alternate objective function. For convenience, we will refer to these approaches as local sparsity maximization (LSM) algorithms. For example, procedures that minimize ℓp-norm-like diversity measures¹ have been developed such that, if p is chosen sufficiently small, we obtain an LSM algorithm [2, 3]. Likewise, a Gaussian entropy-based LSM algorithm called FOCUSS has been developed and successfully employed to solve neuromagnetic imaging problems [4]. A similar algorithm was later discovered in [5] from the novel perspective of a Jeffreys noninformative prior. While all of these methods are potentially very useful candidates for solving (1), they suffer from one significant drawback: as we have discussed in [6], every local minimum of (1) is also a local minimum of the LSM algorithms. Unfortunately, there are many local minima to (1). In fact, every basic feasible solution w∗ to t = Φw is such a local minimum.² To see this, we note that the value of ∥w∗∥0 at such a solution is less than or equal to N. Any other feasible solution can be written as w∗ + αw′, where w′ ∈ Null(Φ). For simplicity, if we assume that every subset of N columns of Φ is linearly independent (the unique representation property, URP), then w′ must necessarily have nonzero elements in locations that differ from w∗. Consequently, any solution in the neighborhood of w∗ will satisfy ∥w∗∥0 < ∥w∗ + αw′∥0. This ensures that all such w∗ represent local minima to (1).
∗This work was supported by an ARCS Foundation scholarship, DiMI grant 22-8376 and Nissan.
¹Minimizing a diversity measure is often equivalent to maximizing sparsity.
The number of basic feasible solutions is bounded between (M−1 choose N) + 1 and (M choose N); the exact number depends on t and Φ [4]. Regardless, when M ≫ N, we have a large number of local minima and, not surprisingly, we often converge to one of them using currently available LSM algorithms. One potential remedy is to employ a convex surrogate measure in place of the ℓ0-norm that leads to a more tractable optimization problem. The most common choice is the alternate norm ∥w∥1, which creates a unimodal optimization problem that can be solved via linear programming or interior point methods. The considerable price we must pay, however, is that the global minimum of this objective function need not coincide with the sparsest solutions to (1).³ As such, we may fail to recover the maximally sparse solution regardless of the initialization we use (unlike an LSM procedure). In this paper, we demonstrate an alternative algorithm for solving (1) using a sparse Bayesian learning (SBL) framework. Our objective is twofold. First, we prove that, unlike minimum ℓ1-norm methods, the global minimum of the SBL cost function is achieved only at the minimum ℓ0-norm solution to t = Φw. Later, we show that this method is locally minimized only at a subset of basic feasible solutions and therefore has fewer local minima than current LSM algorithms. 2 Sparse Bayesian Learning Sparse Bayesian learning was initially developed as a means of performing robust regression using a hierarchical prior that, empirically, has been observed to encourage sparsity [8]. The most basic formulation proceeds as follows. We begin with an assumed likelihood model of our signal t given fixed weights w,

p(t|w) = (2πσ²)^(−N/2) exp( −(1/2σ²) ∥t − Φw∥² ).   (2)

To provide a regularizing mechanism, we assume the parameterized weight prior

p(w; γ) = ∏_{i=1}^{M} (2πγᵢ)^(−1/2) exp( −wᵢ²/2γᵢ ),   (3)

where γ = [γ1, . . . , γM]ᵀ is a vector of M hyperparameters controlling the prior variance of each weight.
These hyperparameters (along with the error variance σ² if necessary) can be estimated from the data by marginalizing over the weights and then performing ML optimization. The marginalized pdf is given by

p(t; γ) = ∫ p(t|w) p(w; γ) dw = (2π)^(−N/2) |Σt|^(−1/2) exp( −½ tᵀ Σt⁻¹ t ),   (4)

where Σt ≜ σ²I + ΦΓΦᵀ and we have introduced the notation Γ ≜ diag(γ).⁴ This procedure is referred to as evidence maximization or type-II maximum likelihood [8]. Equivalently, and more conveniently, we may instead minimize the cost function

L(γ; σ²) = −log p(t; γ) ∝ log |Σt| + tᵀ Σt⁻¹ t   (5)

using the EM algorithm-based update rules for the (k+1)-th iteration given by

ŵ_(k+1) = E[ w | t; γ_(k) ] = ( ΦᵀΦ + σ²Γ⁻¹_(k) )⁻¹ Φᵀ t,   (6)
γ_(k+1) = E[ diag(wwᵀ) | t; γ_(k) ] = diag( ŵ_(k) ŵᵀ_(k) + ( σ⁻²ΦᵀΦ + Γ⁻¹_(k) )⁻¹ ).   (7)

Upon convergence to some γML, we compute weight estimates as ŵ = E[w|t; γML], allowing us to generate t̂ = Φŵ ≈ t. We now quantify the relationship between this procedure and ℓ0-norm minimization. 3 ℓ0-norm minimization via SBL Although SBL was initially developed in a regression context, it can nonetheless be easily adapted to handle (1) by fixing σ² to some ε and allowing ε → 0. To accomplish this we must reexpress the SBL iterations to handle the low noise limit. Applying standard matrix identities and the general result

lim_{ε→0} Uᵀ( εI + UUᵀ )⁻¹ = U†,   (8)

we arrive at the modified update rules

ŵ_(k) = Γ^{1/2}_(k) ( ΦΓ^{1/2}_(k) )† t,   (9)
γ_(k+1) = diag( ŵ_(k) ŵᵀ_(k) + [ I − Γ^{1/2}_(k) ( ΦΓ^{1/2}_(k) )† Φ ] Γ_(k) ),   (10)

where (·)† denotes the Moore-Penrose pseudo-inverse. We observe that all ŵ_(k) are feasible, i.e., t = Φŵ_(k) for all γ_(k).⁵ Also, upon convergence we can easily show that if γML is sparse, ŵ will also be sparse while maintaining feasibility.
²A basic feasible solution is a solution with at most N nonzero entries.
³In very restrictive settings, it has been shown that the minimum ℓ1-norm solution can equal the minimum ℓ0-norm solution [7]. But in practical situations, this result often does not apply.
Thus, we have potentially found an alternative way of solving (1) that is readily computable via the modified iterations above. Perhaps surprisingly, these update rules are equivalent to the Gaussian entropy-based LSM iterations derived in [2, 5], with the exception of the [I − Γ^{1/2}_(k)(ΦΓ^{1/2}_(k))†Φ]Γ_(k) term. A firm connection with ℓ0-norm minimization is realized when we consider the global minimum of L(γ; σ² = ε) in the limit as ε approaches zero. We will now quantify this relationship via the following theorem, which extends results from [6].

Theorem 1. Let W0 denote the set of weight vectors that globally minimize (1). Furthermore, let W(ε) be defined as the set of weight vectors

{ w∗∗ : w∗∗ = ( ΦᵀΦ + εΓ⁻¹∗∗ )⁻¹ Φᵀ t,  γ∗∗ = arg min_γ L(γ; σ² = ε) }.   (11)

Then in the limit as ε → 0, if w ∈ W(ε), then w ∈ W0.

A full proof of this result is available at [9]; however, we provide a brief sketch here. First, we know from [6] that every local minimum of L(γ; σ² = ε) is achieved at a basic feasible solution γ∗ (i.e., a solution with N or fewer nonzero entries), regardless of ε. Therefore, in our search for the global minimum, we need only examine the space of basic feasible solutions. As we allow ε to become sufficiently small, we show that

L(γ∗; σ² = ε) = (N − ∥γ∗∥0) log(ε) + O(1)   (12)

at any such solution. This result is minimized when ∥γ∗∥0 is as small as possible. A maximally sparse basic feasible solution, which we denote γ∗∗, can only occur with nonzero elements aligned with the nonzero elements of some w ∈ W0. In the limit as ε → 0, w∗∗ becomes feasible while maintaining the same sparsity profile as γ∗∗, leading to the stated result.
⁴We will sometimes use Γ and γ interchangeably when appropriate.
⁵This assumes that t is in the span of the columns of Φ associated with nonzero elements in γ, which will always be the case if t is in the span of Φ and all γ are initialized to nonzero values.
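The low-noise-limit updates (9)-(10) are easy to sketch with a pseudo-inverse; the sizes, seed, and the hypothetical 3-sparse generating vector below are illustrative, and the small clip is a numerical guard not present in the analytical updates.

```python
# Sketch of the modified SBL iterations (9)-(10): every iterate w_hat stays
# feasible (t = Phi w_hat), while the hyperparameters gamma progressively shrink.
import numpy as np

rng = np.random.default_rng(2)
N, M = 8, 16
Phi = rng.standard_normal((N, M))
w0 = np.zeros(M)
w0[[2, 9, 13]] = [1.5, -1.0, 0.7]        # illustrative sparse generator
t = Phi @ w0

gamma = np.ones(M)
for _ in range(200):
    g = np.sqrt(gamma)
    B = Phi * g                          # Phi Gamma^{1/2}
    A = np.linalg.pinv(B)                # (Phi Gamma^{1/2})^dagger
    w_hat = g * (A @ t)                  # eq. (9)
    P = A @ B                            # projection (Phi Gamma^{1/2})^dagger (Phi Gamma^{1/2})
    gamma = w_hat**2 + gamma * (1.0 - np.diag(P))   # diagonal of eq. (10)
    gamma = np.clip(gamma, 0.0, None)    # guard against tiny fp negatives

print(np.count_nonzero(gamma > 1e-3 * gamma.max()))  # surviving hyperparameters
```

Note that the diagonal of [I − Γ^{1/2}(ΦΓ^{1/2})†Φ]Γ reduces to γᵢ(1 − Pᵢᵢ) with P the projection above, which is what keeps the γᵢ nonnegative as they decay.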
This result demonstrates that the SBL framework can provide an effective proxy to direct ℓ0-norm minimization. More importantly, we will now show that the limiting SBL cost function, which we will henceforth denote

L(γ) ≜ lim_{ε→0} L(γ; σ² = ε) = log |ΦΓΦᵀ| + tᵀ( ΦΓΦᵀ )⁻¹ t,   (13)

need not have the same problematic local minima profile as other methods. 4 Analysis of Local Minima Thus far, we have demonstrated that there is a close affiliation between the limiting SBL framework and the minimization problem posed by (1). We have not, however, provided any concrete reason why SBL should be preferred over current LSM methods of finding sparse solutions. In fact, this preference is not established until we carefully consider the problem of convergence to local minima. As already mentioned, the problem with current methods of minimizing ∥w∥0 is that every basic feasible solution unavoidably becomes a local minimum. However, what if we could somehow eliminate all or most of these extrema? For example, consider the alternate objective function f(w) ≜ min(∥w∥0, N), leading to the optimization problem

min_w f(w),   s.t. t = Φw.   (14)

While the global minimum remains unchanged, we observe that all local minima occurring at non-degenerate basic feasible solutions have been effectively removed.⁶ In other words, at any solution w∗ with N nonzero entries, we can always add a small component αw′ ∈ Null(Φ) without increasing f(w), since f(w) can never be greater than N. Therefore, we are free to move from basic feasible solution to basic feasible solution without increasing f(w). Also, the rare degenerate basic solutions that do remain, even if suboptimal, are sparser by definition. Therefore, locally minimizing our new problem (14) is clearly superior to locally minimizing (1). But how can we implement such a minimization procedure, even approximately, in practice?
Although we cannot remove all non-degenerate local minima and still retain computational feasibility, it is possible to remove many of them, providing some measure of approximation to (14). This is effectively what is accomplished using SBL, as will be demonstrated below. Specifically, we will derive necessary conditions required for a non-degenerate basic feasible solution to represent a local minimum to L(γ). We will then show that these conditions are frequently not satisfied, implying that there are potentially many fewer local minima. Thus, locally minimizing L(γ) comes closer to (locally) minimizing (14) than current LSM methods, which in turn is closer to globally minimizing ∥w∥0. 4.1 Necessary Conditions for Local Minima As previously stated, all local minima to L(γ) must occur at basic feasible solutions γ∗. Now suppose that we have found a (non-degenerate) γ∗ with associated w∗ computed via (9) and we would like to assess whether or not it is a local minimum to our SBL cost function. For convenience, let w̃ denote the N nonzero elements of w∗ and Φ̃ the associated columns of Φ (therefore, t = Φ̃w̃ and w̃ = Φ̃⁻¹t). Intuitively, it would seem likely that if we are not at a true local minimum, then there must exist at least one additional column of Φ not in Φ̃, e.g., some x, that is somehow aligned with or in some respect similar to t. Moreover, the significance of this potential alignment must be assessed relative to Φ̃. But how do we quantify this relationship for the purposes of analyzing local minima? As it turns out, a useful metric for comparison is realized when we decompose x with respect to Φ̃, which forms a basis in ℝᴺ under the URP assumption. For example, we may form the decomposition x = Φ̃ṽ, where ṽ is a vector of weights analogous to w̃.
⁶A degenerate basic feasible solution has strictly less than N nonzero entries; however, the vast majority of local minima are non-degenerate, containing exactly N nonzero entries.
As will be shown below, the similarity required between x and t (needed for establishing the existence of a local minimum) may then be realized by comparing the respective weights ṽ and w̃. In more familiar terms, this is analogous to suggesting that similar signals have similar Fourier expansions. Loosely, we may expect that if ṽ is ‘close enough’ to w̃, then x is sufficiently close to t (relative to all other columns in Φ̃) such that we are not at a local minimum. We formalize this idea via the following theorem:

Theorem 2. Let Φ satisfy the URP and let γ∗ represent a vector of hyperparameters with N and only N nonzero entries and associated basic feasible solution w̃ = Φ̃⁻¹t. Let X denote the set of M − N columns of Φ not included in Φ̃ and V the set of weights given by { ṽ : ṽ = Φ̃⁻¹x, x ∈ X }. Then γ∗ is a local minimum of L(γ) only if

Σ_{i≠j} ( ṽᵢ ṽⱼ ) / ( w̃ᵢ w̃ⱼ ) < 0   ∀ ṽ ∈ V.   (15)

Proof: If γ∗ truly represents a local minimum of our cost function, then the following condition must hold for all x ∈ X:

∂L(γ∗)/∂γx ≥ 0,   (16)

where γx denotes the hyperparameter corresponding to the basis vector x. In words, we cannot reduce L(γ∗) along a positive gradient because this would push γx below zero. Using the matrix inversion lemma, the determinant identity, and some algebraic manipulations, we arrive at the expression

∂L(γ∗)/∂γx = xᵀBx / (1 + γx xᵀBx) − ( tᵀBx / (1 + γx xᵀBx) )²,   (17)

where B ≜ (Φ̃Γ̃Φ̃ᵀ)⁻¹. Since we have assumed that we are at a local minimum, it is straightforward to show that Γ̃ = diag(w̃)², leading to the expression

B = Φ̃⁻ᵀ diag(w̃)⁻² Φ̃⁻¹.   (18)

Substituting this expression into (17) and evaluating at the point γx = 0, the above gradient reduces to

∂L(γ∗)/∂γx = ṽᵀ( diag(w̃⁻¹ w̃⁻ᵀ) − w̃⁻¹ w̃⁻ᵀ )ṽ,   (19)

where w̃⁻¹ ≜ [w̃₁⁻¹, . . . , w̃ᴺ⁻¹]ᵀ. This leads directly to the stated theorem. ■ This theorem provides a useful picture of what is required for local minima to exist and, more importantly, why many basic feasible solutions are not local minima.
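The condition (15) is cheap to evaluate numerically. The sketch below checks it for one excluded column at a basic feasible solution; the tiny 2×3 dictionary mirrors the setting of the paper's Figure 1 and is an illustrative choice.

```python
# Check the necessary condition of Theorem 2: gamma* can be a local minimum of
# L(gamma) only if sum_{i != j} v_i v_j / (w_i w_j) < 0 for every excluded x.
import numpy as np

def theorem2_violated(Phi_tilde, x, t):
    """True if column x rules out a local minimum at this basic feasible solution."""
    w = np.linalg.solve(Phi_tilde, x * 0 + t)   # w~ = Phi~^{-1} t
    v = np.linalg.solve(Phi_tilde, x)           # v~ = Phi~^{-1} x
    a = v / w
    s = a.sum() ** 2 - (a ** 2).sum()           # = sum_{i != j} a_i a_j
    return s >= 0                               # condition (15) fails

Phi_tilde = np.array([[1.0, 0.0],
                      [0.0, 1.0]])              # columns phi_1, phi_2
t = np.array([1.0, 1.0])                        # t inside the cone of phi_1, phi_2

x_inside = np.array([0.8, 0.6])                 # x penetrates the cone containing t
x_outside = np.array([-0.6, 0.8])               # x outside the cone

print(bool(theorem2_violated(Phi_tilde, x_inside, t)))    # True: not a local minimum
print(bool(theorem2_violated(Phi_tilde, x_outside, t)))   # False: condition holds
```

The identity Σ_{i≠j} aᵢaⱼ = (Σᵢaᵢ)² − Σᵢaᵢ² with aᵢ = ṽᵢ/w̃ᵢ avoids the explicit double sum; when x lies in the convex cone of Φ̃ all aᵢ are positive, so the sum is nonnegative and the condition is violated, matching the geometric picture in Section 4.2.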
Moreover, there are several convenient ways in which we can interpret this result to accommodate a more intuitive perspective. 4.2 A Simple Geometric Interpretation In general terms, if the signs of each of the elements in a given ṽ match up with w̃, then the specified condition will be violated and we cannot be at a local minimum. We can illustrate this geometrically as follows. To begin, we note that our cost function L(γ) is invariant with respect to reflections of any basis vectors about the origin, i.e., we can multiply any column of Φ by −1 and the cost function does not change. Returning to a candidate local minimum with associated Φ̃, we may therefore assume, without loss of generality, that Φ̃ ≡ Φ̃ diag(sgn(w̃)), giving us the decomposition t = Φ̃w̃, w̃ > 0. Under this assumption, we see that t is located in the convex cone formed by the columns of Φ̃. We can infer that if any x ∈ X (i.e., any column of Φ not in Φ̃) lies in this convex cone, then the associated coefficients ṽ must all be positive by definition (likewise, by a similar argument, any x in the convex cone of −Φ̃ leads to the same result). Consequently, Theorem 2 ensures that we are not at a local minimum. The simple 2D example shown in Figure 1 helps to illustrate this point.

Figure 1: 2D example with a 2 × 3 dictionary Φ (i.e., N = 2 and M = 3) and a basic feasible solution using the columns Φ̃ = [φ1 φ2]. Left: In this case, x = φ3 does not penetrate the convex cone containing t, and we do not satisfy the conditions of Theorem 2. This configuration does represent a minimizing basic feasible solution. Right: Now x is in the cone and therefore we know that we are not at a local minimum; but this configuration does represent a local minimum to current LSM methods.
Alternatively, we can cast this geometric perspective in terms of relative cone sizes. For example, let C_Φ̃ represent the convex cone (and its reflection) formed by Φ̃. Then we are not at a local minimum to L(γ) if there exists a second convex cone C formed from a subset of columns of Φ such that t ∈ C ⊂ C_Φ̃, i.e., C is a tighter cone containing t. In Figure 1 (right), we obtain a tighter cone by swapping x for φ2. While certainly useful, we must emphasize that in higher dimensions these geometric conditions are much weaker than (15): even if no x lies in the convex cone of Φ̃, we may still not be at a local minimum. In fact, to guarantee a local minimum, all x must be reasonably far from this cone as quantified by (15). Of course the ultimate reduction in local minima from the (M−1 choose N) + 1 to (M choose N) bounds is dependent on the distribution of basis vectors in t-space. In general, it is difficult to quantify this reduction except in a few special cases.⁷ However, we will now proceed to empirically demonstrate that the overall reduction in local minima is substantial when the basis vectors are randomly distributed.

Table 1: Given 1000 trials where FOCUSS has converged to a suboptimal local minimum, we tabulate the percentage of times the local minimum is also a local minimum to SBL. M/N refers to the overcompleteness ratio of the dictionary used, with N fixed at 20. Results using other algorithms are similar.

M/N                   1.3    1.6    2.0    2.4    3.0
SBL Local Minimum %   4.9%   4.0%   3.2%   2.3%   1.6%

5 Empirical Comparisons To show that the potential reduction in local minima derived above translates into concrete results, we conducted a simulation study using randomized basis vectors distributed isometrically in t-space. Randomized dictionaries are of interest in signal processing and other disciplines [2, 7] and represent a viable benchmark for testing basis selection methods.
Moreover, we have performed analogous experiments with other dictionary types (such as pairs of orthobases) leading to similar results (see [9] for some examples). Our goal was to demonstrate that current LSM algorithms often converge to local minima that do not exist in the SBL cost function. To accomplish this, we repeated the following procedure for dictionaries of various sizes. First, we generate a random N × M Φ whose columns are each drawn uniformly from a unit sphere. Sparse weight vectors w0 are randomly generated with ∥w0∥0 = 7 (and uniformly distributed amplitudes on the nonzero components). The vector of target values is then computed as t = Φw0. The LSM algorithm is then presented with t and Φ and attempts to learn the minimum ℓ0-norm solutions. The experiment is repeated a sufficient number of times such that we collect 1000 examples where the LSM algorithm converges to a local minimum. In all these cases, we check if the condition stipulated by Theorem 2 applies, allowing us to determine whether the given solution is a local minimum to the SBL algorithm or not. The results are contained in Table 1 for the FOCUSS LSM algorithm. We note that the larger the overcompleteness ratio M/N, the larger the total number of LSM local minima (via the bounds presented earlier). However, there also appears to be a greater probability that SBL can avoid any given one. In many cases where we found that SBL was not locally minimized, we initialized the SBL algorithm in this location and observed whether or not it converged to the optimal solution. In roughly 50% of these cases, it escaped to find the maximally sparse solution. The remaining times, it did escape in accordance with theory; however, it converged to another local minimum. In contrast, when we initialize other LSM algorithms at an SBL local minimum, we always remain trapped, as expected.
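A small-scale version of this experimental protocol might look like the sketch below, using a FOCUSS-style reweighted minimum-norm iteration (the Gaussian-entropy LSM of [2, 5]); the problem sizes, iteration count, thresholds, and ridge term are illustrative assumptions, not the paper's exact setup.

```python
# Random unit-norm dictionary, ||w0||_0 = 7, t = Phi w0, then FOCUSS (p = 0):
# w <- diag(w^2) Phi' (Phi diag(w^2) Phi')^{-1} t, counting exact support recoveries.
import numpy as np

rng = np.random.default_rng(3)

def random_dictionary(N, M):
    Phi = rng.standard_normal((N, M))
    return Phi / np.linalg.norm(Phi, axis=0)      # columns on the unit sphere

def focuss(Phi, t, iters=50):
    N, M = Phi.shape
    w = Phi.T @ np.linalg.solve(Phi @ Phi.T, t)   # minimum-norm initialization
    for _ in range(iters):
        W2 = w ** 2
        G = (Phi * W2) @ Phi.T                    # Phi diag(w^2) Phi'
        w = W2 * (Phi.T @ np.linalg.solve(G + 1e-12 * np.eye(N), t))  # tiny ridge
    return w

N, M, k, trials = 20, 26, 7, 20
hits = 0
for _ in range(trials):
    Phi = random_dictionary(N, M)
    w0 = np.zeros(M)
    supp = rng.choice(M, k, replace=False)
    w0[supp] = rng.uniform(0.5, 1.5, k) * rng.choice([-1, 1], k)
    t = Phi @ w0
    w = focuss(Phi, t)
    if set(np.flatnonzero(np.abs(w) > 1e-4)) == set(supp):
        hits += 1

print(hits, "of", trials, "trials recovered the generating support")
```

Runs that miss the generating support have converged to one of the alternative basic feasible solutions, i.e., exactly the suboptimal local minima whose frequency Table 1 examines.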
6 Discussion In practice, we have consistently observed that SBL outperforms current LSM algorithms in finding maximally sparse solutions (e.g., see [9]). The results of this paper provide a very plausible explanation for this improved performance: conventional LSM procedures are very likely to converge to local minima that do not exist in the SBL landscape. However, it may still be unclear exactly why this happens. In conclusion, we give a brief explanation that provides insight into this issue. Consider the canonical FOCUSS LSM algorithm or the Figueiredo algorithm from [5] (with σ² fixed to zero, the Figueiredo algorithm is actually equivalent to the FOCUSS algorithm). These methods essentially solve the problem

min_w Σ_{i=1}^{M} log |wᵢ|,   s.t. t = Φw,   (20)

where the objective function is proportional to the Gaussian entropy measure. In contrast, we can show that, up to a scale factor, any minimum of L(γ) must also be a minimum of

min_γ Σ_{i=1}^{N} log λᵢ(γ),   s.t. γ ∈ Ωγ,   (21)

where λᵢ(γ) is the i-th eigenvalue of ΦΓΦᵀ and Ωγ is the convex set { γ : tᵀ( ΦΓΦᵀ )⁻¹ t ≤ 1, γ ≥ 0 }. In both instances, we are minimizing a Gaussian entropy measure over a convex constraint set. The crucial difference resides in the particular parameterization applied to this measure. In (20), we see that if any subset of |wᵢ|'s becomes significantly small (e.g., as we approach a basic feasible solution), we enter the basin of a local minimum because the associated log |wᵢ| terms become enormously negative; hence the one-to-one correspondence between basic feasible solutions and local minima of the LSM algorithms. In contrast, when working with (21), many of the γᵢ's may approach zero without becoming trapped, as long as ΦΓΦᵀ remains reasonably well-conditioned.
⁷For example, in the special case where t is proportional to a single column of Φ, we can show that the number of local minima reduces from (M−1 choose N) + 1 to 1, i.e., we are left with a single minimum.
In other words, since Φ is overcomplete, up to M −N of the γi’s can be zero while still maintaining a full set of nonzero eigenvalues to ΦΓΦT , so no term in the summation is driven towards minus infinity as occurred above. Thus, we can switch from one basic feasible solution to another in many instances while still reducing our objective function. It is in this respect that SBL approximates the minimization of the alternative objective posed by (14). References [1] S.S. Chen, D.L. Donoho, and M.A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1999. [2] B.D. Rao and K. Kreutz-Delgado, “An affine scaling methodology for best basis selection,” IEEE Transactions on Signal Processing, vol. 47, no. 1, pp. 187–200, January 1999. [3] R.M. Leahy and B.D. Jeffs, “On the design of maximally sparse beamforming arrays,” IEEE Transactions on Antennas and Propagation, vol. 39, no. 8, pp. 1178–1187, Aug. 1991. [4] I. F. Gorodnitsky and B. D. Rao, “Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm,” IEEE Transactions on Signal Processing, vol. 45, no. 3, pp. 600–616, March 1997. [5] M.A.T. Figueiredo, “Adaptive sparseness using Jeffreys prior,” Neural Information Processing Systems, vol. 14, pp. 697–704, 2002. [6] D.P. Wipf and B.D. Rao, “Sparse Bayesian learning for basis selection,” IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2153–2164, 2004. [7] D.L. Donoho and M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization,” Proc. National Academy of Sciences, vol. 100, no. 5, pp. 2197–2202, March 2003. [8] M.E. Tipping, “Sparse Bayesian learning and the relevance vector machine,” Journal of Machine Learning Research, vol. 1, pp. 211–244, 2001. [9] D.P. Wipf and B.D. Rao, “Some results on sparse Bayesian learning,” ECE Department Technical Report, University of California, San Diego, 2005.
|
2004
|
139
|
2,551
|
Inference, Attention, and Decision in a Bayesian Neural Architecture Angela J. Yu Peter Dayan Gatsby Computational Neuroscience Unit, UCL 17 Queen Square, London WC1N 3AR, United Kingdom. feraina@gatsby.ucl.ac.uk dayan@gatsby.ucl.ac.uk Abstract We study the synthesis of neural coding, selective attention and perceptual decision making. A hierarchical neural architecture is proposed, which implements Bayesian integration of noisy sensory input and topdown attentional priors, leading to sound perceptual discrimination. The model offers an explicit explanation for the experimentally observed modulation that prior information in one stimulus feature (location) can have on an independent feature (orientation). The network’s intermediate levels of representation instantiate known physiological properties of visual cortical neurons. The model also illustrates a possible reconciliation of cortical and neuromodulatory representations of uncertainty. 1 Introduction A constant stream of noisy and ambiguous sensory inputs bombards our brains, informing on-going inferential processes and directing perceptual decision-making. Neurophysiologists and psychologists have long studied inference and decision-making in isolation, as well as the careful attentional filtering that is necessary to optimize them. The recent focus on their interactions poses an important opportunity and challenge for computational models. In this paper, we study an attentional task which involves all three components, and thereby directly confront their interaction. We first discuss the background of the individual elements; then describe our model. The first element involves the representation and manipulation of uncertainty in sensory inputs and contextual information. There are two broad families of suggestions. One is microscopic, for which individual cortical neurons and populations either implicitly or explicitly represent the uncertainty. 
This spans a broad spectrum, from distributional codes that can also encode restricted aspects of uncertainty [1] to more exotic interpretations of codes as representing complex distributions [1, 2, 3, 4, 5]. The other family is macroscopic, with cholinergic (ACh) and noradrenergic (NE) neuromodulatory systems reporting computationally distinct forms of uncertainty to influence the way that information in differentially reliable cortical areas is integrated and learned [6, 7]. How microscopic and macroscopic families work together is hitherto largely unexplored. The second element is selective attention and top-down influences over sensory processing. Here, the key challenge is to couple the many ideas about the way that attention should, from a sound statistical viewpoint, modify sensory processing, to the measurable effects of attention on the neural substrate. For instance, one typical consequence of (visual) featural and spatial attention is an increase in the activities of neurons in cortical populations representing those features, which is equivalent to multiplying their tuning functions by a factor [8]. Under the sort of probabilistic representational scheme in which the population activity codes for uncertainty in the underlying variable, it is of obvious importance to understand how this multiplication changes the implied uncertainty, and what statistical characteristic of the attention licenses this change [9]. The third element is the coupling between sensory processing and perceptual decisions. Implementational and computational issues underlying binary decisions, especially in simple cases, have been extensively explored, with psychologists [11, 12], and neuroscientists [13, 14] converging on common statistical [10] ideas about drift-diffusion processes. 
In order to explore the interaction of these elements, we model an extensively studied attentional task (due to Posner [15]), in which probabilistic spatial cueing is used to manipulate attentional modulation of visual discrimination. We employ a hierarchical neural architecture in which top-down attentional priors are integrated with sequentially sampled sensory input in a sound Bayesian manner, using a logarithmic mapping between cortical neural activities and uncertainty [4]. In the model, the information provided by the cue is realized as a change in the prior distribution over the cued dimension (space). The effect of the prior is to eliminate inputs from spatial locations considered irrelevant for the task, thus improving discrimination in another dimension (orientation). In section 2, we introduce the Posner task and give a Bayesian description of the computations underlying successful performance. In section 3, we describe the probabilistic semantics of the layers, and their functional connections, in the hierarchical neural architecture. In section 4, we compare the perceptual performance of the network to psychophysics data, and the intermediate layers’ activities to the relevant physiological data. 2 Spatial Attention as Prior Information In the classic version of Posner’s task [15], a subject is presented with a cue that predicts the location of a subsequent target with a certain probability termed its validity. The cue is valid if it makes a correct prediction, and invalid otherwise. Subjects typically perform detection or discrimination on the target more rapidly and accurately on a valid-cue trial than an invalid one, reflecting cue-induced attentional modulation of visual processing and/or decision making [15]. This difference in reaction time or accuracy is often termed the validity effect [16], and depends on the cue validity [17]. 
We consider sensory stimuli with two feature dimensions: a periodic variable, orientation, φ = φ∗, about which decisions are to be made, and a linear variable, space, µ = µ∗, which is cued. The cue induces a top-down spatial prior, which we model as a mixture of a component sharply peaked at the cued location and a broader component capturing contextual and bottom-up saliency factors (including the possibility of invalidity). For simplicity, we use a Gaussian for the peaked component and a uniform distribution for the broader one, although more complex priors of a similar nature would not change the model behavior: p(µ) = γN(µ̂, ν²) + (1−γ)c. Given lower-layer activation patterns Xt ≡ {x1, ..., xt}, assumed to be iid samples (with Gaussian noise) of bell-shaped tuning responses to the true underlying stimulus values µ∗, φ∗,

fij(µ∗, φ∗) = Z exp( −(µi − µ∗)²/2σµ² + k cos(φj − φ∗) ),

the task is to infer a posterior distribution P(φ|Xt), involving the following steps:

p(xt|µ, φ) = ∏ij p(xij(t)|µ, φ)   (likelihood)
p(φ|xt) = ∫ p(µ, φ) p(xt|µ, φ) dµ   (prior-weighted marginalization)
p(φ|Xt) ∝ p(φ|x1, ..., xt−1) p(φ|xt)   (temporal accumulation)

Because the marginalization step is weighted by the priors, a valid cue results in the integration of more “signal” and less “noise” into the marginal posterior, whereas the opposite results from an invalid cue. To turn this on-line posterior into a decision φ̂, we use an extension of the Sequential Probability Ratio Test (SPRT [10]): observe x1, x2, ... until the first time that max P(φj|Xt) exceeds a fixed threshold q, then terminate the observation process and report φ̂ = argmax P(φj|Xt) as the estimate of φ for the current trial.

Layer V: r⁵ⱼ(t) = exp(r⁴ⱼ(t)) / Σk exp(r⁴ₖ(t))
Layer IV: r⁴ⱼ(t) = r⁴ⱼ(t−1) + r³ⱼ(t) + ct
Layer III: r³ⱼ(t) = log Σi exp(r²ᵢⱼ(t)) + bt
Layer II: r²ᵢⱼ(t) = r¹ᵢⱼ(t) + log P(µi) + at
Layer I: r¹ᵢⱼ(t) = log p(xt|µi, φj)

Figure 1: A Bayesian neural architecture. Layer I activities represent the log likelihood of the data given each possible setting of µi and φj. This gives a noisy version of the smooth bell-shaped tuning curve (shown on the left). In layer II, the log likelihood of each µi and φj is modulated by the prior information log P(µi), shown on the upper left. The prior in µ strongly suppresses the noisy input in the irrelevant part of the µ dimension, thus enabling improved inference based on the underlying tuning response fij. The layer III neurons represent the log marginal posterior of φ by integrating out the µ dimension of layer II activities. Layer IV neurons combine recurrent information and feedforward input from layer III to compute the log marginal posterior given all data so far observed. Layer V computes the cumulative posterior distribution of φ using a softmax operation. Due to the strong nonlinearity of softmax, its activity is much more peaked than in layers III and IV. Solid lines in the diagram represent excitatory connections, dashed lines inhibitory. Blue circles illustrate how the activities of one row of inputs in Layer I travel through the hierarchy to affect the final decision layer. Brown circles illustrate how one unit in the spatial prior layer comes into the integration process.

3 A Bayesian Neural Architecture The neural architecture implements the above computational steps exactly through a logarithmic transform, and has five layers (Fig 1). In layer I, activity of neuron ij, r¹ᵢⱼ(t), reports the log likelihood, log p(xt|µi, φj) (throughout, we discretize space and orientation).
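The accumulate-then-threshold decision rule just described can be sketched with a toy observation model; the discretized orientation grid, the wrapped-Gaussian likelihood, the noise level, and the threshold q below are illustrative stand-ins for the paper's layer-I population code, not its actual parameters.

```python
# Accumulate the posterior over a discretized phi by summing log likelihoods,
# normalize with a softmax (as in layer V), and stop as soon as
# max_j P(phi_j | X_t) exceeds the threshold q (the SPRT-style rule).
import numpy as np

rng = np.random.default_rng(4)
phis = np.linspace(0.0, np.pi, 8, endpoint=False)   # discretized orientation grid
phi_true = phis[3]
sigma, q = 0.4, 0.90

def log_likelihood(x, phis, sigma):
    # toy likelihood of one noisy orientation sample under each candidate phi_j
    d = np.angle(np.exp(1j * 2.0 * (x - phis))) / 2.0   # wrapped difference
    return -d**2 / (2.0 * sigma**2)

log_post = np.zeros(len(phis))                       # flat prior over phi
for t in range(1, 10001):
    x = phi_true + sigma * rng.standard_normal()     # iid sample x_t
    log_post += log_likelihood(x, phis, sigma)       # temporal accumulation
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                               # softmax normalization
    if post.max() > q:                               # threshold crossed: decide
        break

phi_hat = phis[np.argmax(post)]
print(t, float(phi_hat))    # "reaction time" (sample count) and decision
```

The number of samples taken before the threshold crossing plays the role of reaction time; lowering the noise or raising the prior weight on the correct region shortens it, which is the mechanism behind the validity effect simulated in Section 4.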
Layer II combines this log likelihood information with the prior, r2_ij(t) = r1_ij(t) + log P(µ_i) + a_t, to yield the joint log posterior up to an additive constant a_t that makes min_ij r2_ij(t) = 0. Layer III performs the marginalization r3_j(t) = log Σ_i exp(r2_ij(t)) + b_t, to give the marginal posterior in φ (up to a constant b_t that makes min_j r3_j(t) = 0). While this step ('log-of-sums') looks computationally formidable for neural hardware, it has been shown [4] that under certain conditions it can be well approximated by a (weighted) 'sum-of-logs', r3_j(t) ≈ Σ_i c_i r2_ij(t) + b_t, where the c_i are weights optimized to minimize the approximation error. Layer IV neurons combine recurrent information and feedforward input from layer III to compute the log marginal posterior given all data so far observed, r4_j(t) = r4_j(t−1) + r3_j(t) + c_t, up to a constant c_t.

[Figure 2: Validity effect and dependence on γ. (a) The distribution of reaction times for the invalid condition (γ = 0.5) has a greater mean and longer tail than the valid condition in model simulation results (top). Compare to similar results (bottom) from a Posner task in rats [18]. (b) The distribution of inferred φ̂ is more tightly clustered around the true φ* (red dashed line) in the valid case (blue) than in the invalid case (red); γ = 0.75. (c) The validity effect, in both reaction time (top) and error rate (bottom), increases with increasing γ. Parameters: {µ_i} = {−1.5, −1.4, ..., 1.5}, {φ_j} = {π/8, 2π/8, ..., 16π/8}, σ_µ = 0.1, σ_φ = π/16, q = 0.90, µ* = 0.5, γ ∈ {0.5, 0.75, 0.99}, ν = 0.05; 300 trials each of valid and invalid trials, 100 trials of each γ value.]
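The exact 'log-of-sums' of layer III is a log-sum-exp, and the cited 'sum-of-logs' approximation replaces it with a fitted linear read-out of layer II. The sketch below, using toy activity patterns of our own invention (not the fitting procedure of [4]), contrasts the two:

```python
import numpy as np

rng = np.random.default_rng(1)

def logsumexp(a, axis):
    """Numerically stable log(sum(exp(a))) along `axis`."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

# Toy layer II activity: smooth log-posterior bumps over a (mu, phi) grid.
n_mu, n_phi, n_samples = 20, 8, 200
mu = np.linspace(-1.0, 1.0, n_mu)
R2 = np.stack([-(mu[:, None] - rng.uniform(-0.5, 0.5)) ** 2 / 0.1
               + rng.normal(scale=0.1, size=(n_mu, n_phi))
               for _ in range(n_samples)])        # shape (sample, mu, phi)

# Exact layer III: 'log of sums' over the mu dimension.
r3_exact = logsumexp(R2, axis=1)                  # shape (sample, phi)

# Approximate layer III: weighted 'sum of logs', r3_j ~= sum_i c_i r2_ij + b,
# with the weights c_i and offset b fit by least squares over all (sample, j).
X = R2.transpose(0, 2, 1).reshape(-1, n_mu)
X1 = np.hstack([X, np.ones((X.shape[0], 1))])
y = r3_exact.reshape(-1)
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
rmse = float(np.sqrt(((y - X1 @ coef) ** 2).mean()))
print("rms sum-of-logs approximation error:", rmse)
```

The fitted linear read-out is what a single layer of weighted synapses can compute, which is the point of the approximation.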
Finally, layer V neurons perform a softmax operation to retrieve the exact marginal posterior, r5_j(t) = exp(r4_j(t)) / Σ_k exp(r4_k(t)) = P(φ_j|X_t), with the additive constants dropping out. Note that a pathway parallel to III-IV-V, consisting of neurons that only care about µ and not φ, can be constructed in exactly the same manner. Its corresponding layers would report log p(x_t, µ_i), log p(X_t, µ_i), and p(µ_i|X_t). An example of the activities at each layer of the network, along with the choice of prior p(µ) and tuning function f_ij, is shown in Fig 1.

4 Results

We first verify that the model indeed exhibits the cue-induced validity effect, i.e. shorter RT and greater accuracy for valid-cue trials than for invalid ones. "Reaction time" on a trial is the number of iid samples necessary to reach a decision, and "error rate" is the average angular distance between the estimated φ̂ and the true φ*. Figure 2 shows simulation results for 300 trials each of valid and invalid cue trials, for different values of γ, reflecting the model's belief as to cue validity. Reassuringly, the RT distribution for valid-cue trials is tighter and left-shifted compared to that for invalid-cue trials (Figure 2(a), top panel), as observed in experimental data [15, 18] (Fig 2(a), bottom panel); (b) shows that accuracy is also higher for valid-cue trials. Consistent with data from a human Posner task [17], (c) shows that the VE increases with increasing perceived cue validity, as parameterized by γ, in both reaction times and error rates (precluding a simple speed-error trade-off). Since we have an explicit model of not only the "behavioral output" but also the whole neural hierarchy, we can relate activities at various levels of representation to existing physiological data. Ample evidence indicates that spatial attention to one side of the visual field increases stimulus-induced activities in the corresponding part of the visual cortex [19, 20].
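The validity-effect simulation just described can be caricatured in a few lines. In this sketch (our own simplification, not the full model), hypothesis 0 is the true orientation and a single `snr` parameter stands in for the effective signal quality, which the model predicts is higher on valid-cue trials because the spatial prior suppresses noise from irrelevant space:

```python
import numpy as np

rng = np.random.default_rng(2)

def one_trial(snr, n_choices=8, q=0.9, max_t=500):
    """Accumulate per-orientation log likelihoods until max posterior > q.

    Returns (reaction time, chosen orientation index); index 0 is correct.
    """
    log_post = np.zeros(n_choices)
    for t in range(1, max_t + 1):
        x = rng.standard_normal(n_choices)
        x[0] += snr                          # evidence for the true orientation
        log_post += snr * x - snr ** 2 / 2   # Gaussian log-likelihood terms
        p = np.exp(log_post - log_post.max())
        p /= p.sum()
        if p.max() > q:                      # SPRT-style threshold crossing
            break
    return t, int(np.argmax(p))

valid = [one_trial(snr=0.8) for _ in range(200)]    # cue matches the stimulus
invalid = [one_trial(snr=0.4) for _ in range(200)]  # lower effective snr
rt_valid = float(np.mean([t for t, _ in valid]))
rt_invalid = float(np.mean([t for t, _ in invalid]))
print(f"mean RT: valid = {rt_valid:.1f}, invalid = {rt_invalid:.1f}")
```

Even this caricature reproduces the validity effect on reaction times: the threshold-crossing time is systematically shorter when the effective signal-to-noise ratio is higher.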
Fig 3(a) shows that our model qualitatively reproduces this effect; indeed the effect increases with γ, the perceived cue validity. Electrophysiological data also show that spatial attention has a multiplicative effect on orientation tuning responses in visual cortical neurons [8] (Fig 3(b)). We see a similar phenomenon in the layer IV neurons (Fig 3(c); layer III similar, data not shown). Fig 3(d) is a scatter-plot of ⟨log p(x_t, φ_j) + c_1⟩_t for the valid condition versus the invalid condition, for various values of γ, along with the slope fit to the experiment of Fig 3(b) (layer III similar, data not shown). The linear least-squares fits are good, and the slope increases with increasing confidence in the cued location (larger γ).

[Figure 3: Multiplicative gain modulation by spatial attention. (a) r2_ij activities, averaged over the half of layer II where the prior peaks, are greater for valid (blue, left) than invalid (red, right) conditions. (b) Experimentally observed multiplicative modulation of V4 orientation tunings by spatial attention [8]. (c) Similar multiplicative effect in layer IV in the model. (d) Linear fits to the scatter-plot of layer III activities for the valid versus invalid cue conditions show that the slope is greatest for large γ and smallest for small γ (magenta: γ = 0.99, blue: γ = 0.75, red: γ = 0.5, black: linear fit to the study in (b)). Simulation parameters are the same as in Fig 2. Error bars: standard errors of the mean.]

In the model, the slope depends not only on γ but also on the noise model, the discretization, and so on, so the comparison of Figure 3(d) should be interpreted loosely.
In valid cases, the effect of attention is to increase the certainty in the posterior marginal over φ, since the correct prior allows the relative suppression of noisy input from the irrelevant part of space. Were the posterior marginal exactly Gaussian, the increased certainty would translate into a decreased variance. For Gaussian probability distributions, logarithmic coding amounts to something close to a quadratic (adjusted for the circularity of orientation), with a curvature determined by the variance. Decreasing the variance increases the curvature, and therefore has a multiplicative effect on the activities (as in Figure 3). The approximate Gaussianity of the marginal posterior comes from the accumulation of many independent samples over time and space, and something like the central limit theorem. While it is difficult to show this multiplicative modulation rigorously, we can at least demonstrate it mathematically for the case where the spatial prior is very sharply peaked at its Gaussian mean µ̂. In this case,

(⟨log p_val(x(t), φ_j)⟩_t + c_1) / (⟨log p_inv(x(t), φ_j)⟩_t + c_2) ≈ R,

where c_1, c_2, and R are constants independent of φ_j and µ_i. Based on the peaked-prior assumption, p(µ) ≈ δ(µ − µ̂), we have p(x(t), φ) = ∫ p(x(t)|µ, φ) p(µ) p(φ) dµ ≈ p(x(t)|φ, µ̂) p(φ). We can expand log p(x(t)|µ̂, φ) and compute its average over time:

⟨log p(x(t)|µ̂, φ)⟩_t = C − (N / 2σ_n²) ⟨(f_ij(µ*, φ*) − f_ij(µ̂, φ))²⟩_ij .  (1)

Then, using the tuning function defined earlier, we can compare the joint probabilities given valid (val) and invalid (inv) cues:

⟨log p_val(x(t), φ)⟩_t / ⟨log p_inv(x(t), φ)⟩_t = (α_1 − β ⟨e^{−(µ_i−µ*)²/σ_µ²}⟩_i ⟨g(φ)⟩_j) / (α_2 − β ⟨e^{−((µ_i−µ*)² + (µ_i−µ̂)²)/2σ_µ²}⟩_i ⟨g(φ)⟩_j) ,  (2)

and therefore

(⟨log p_val(x_t, φ)⟩_t + c_1) / (⟨log p_inv(x_t, φ)⟩_t + c_2) ≈ e^{(µ*−µ̂)²/(4σ_µ²)} = R .  (3)

The derivation of a multiplicative effect on layer IV activities is very similar. Another aspect of the intermediate representation of interest is the way attention modifies the evidence accumulation process over time.
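The core of the quadratic-coding argument is easy to verify numerically: for two Gaussians with the same mean, the peak-aligned log densities differ by exactly a multiplicative constant, the ratio of the variances. The sketch below uses illustrative variances of our own choosing:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
mu = 0.0
s1, s2 = 0.2, 0.4   # e.g. valid cue -> smaller posterior variance (illustrative)

def log_gauss(x, mu, s):
    return -0.5 * np.log(2 * np.pi * s ** 2) - (x - mu) ** 2 / (2 * s ** 2)

# Subtract each curve's maximum, playing the role of the model's per-layer
# additive constants; what remains is a pure quadratic in (x - mu).
l1 = log_gauss(x, mu, s1); l1 -= l1.max()
l2 = log_gauss(x, mu, s2); l2 -= l2.max()

mask = np.abs(x - mu) > 0.05          # avoid the 0/0 point at the peak
ratio = l1[mask] / l2[mask]
print(ratio.min(), ratio.max())       # both equal (s2/s1)^2 = 4
```

The ratio is constant across x, so in log coding a change in variance looks like a uniform multiplicative gain on the activities, which is the qualitative signature seen in Figure 3.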
Fig 4 shows the effect of cueing on the activities of neuron r5_{j*}(t), or P(φ*|X_t), for all trials with correct responses. The mean activity trajectory is higher for the valid-cue case than for the invalid one: in this case, spatial attention mainly acts through increasing the rate of evidence accumulation after stimulus onset (steeper rise). This attentional effect is more pronounced when the system is more confident about its prior information ((a) γ = 0.5, (b) γ = 0.75, (c) γ = 0.99). Effectively, increasing γ for invalid-cue trials increases the input noise. Figure 4(d) shows the average traces for invalid-cue trials aligned to the stimulus onset, and (e) aligned to the decision threshold crossing.

[Figure 4: Accumulation of iid samples in orientation discrimination, and dependence on prior belief about stimulus location. (a-c) Average activity of neuron r5_{j*}, which represents P(φ*|X_t), saturates to 100% certainty much faster for valid-cue trials (blue) than invalid-cue trials (red). The difference is more drastic when γ is larger, i.e. when there is more prior confidence in the cued target location. (a) γ = 0.5, (b) γ = 0.75, (c) γ = 0.99. Cyan dashed line indicates stimulus onset. (d) The first 15 time steps (from stimulus onset) of the invalid-cue traces from (a-c), aligned to stimulus onset; the cyan line denotes stimulus onset. The differential rates of rise are apparent. (e) The last 8 time steps of the invalid traces from (a-c), aligned to decision threshold-crossing; there is no clear separation as a function of γ. (f) Multiplicative gain modulation of attention on V4 orientation tuning curves. Simulation parameters are the same as in Fig 2.]
These results bear remarkable similarities to the LIP neuronal activities recorded during monkey perceptual decision-making [13] (shown in (f)). In the stimulus-aligned case, the traces rise linearly at first and then tail off somewhat, and the rate of rise increases for lower (effective) noise. In the decision-aligned case, the traces rise steeply and together. All of these characteristics can also be seen in the experimental results in (f), where the input noise level is explicitly varied.

5 Discussion

We have presented a hierarchical neural architecture that implements optimal probabilistic integration of top-down information and sequentially observed data. We consider a class of attentional tasks for which top-down modulation of sensory processing can be conceptualized as changes in the prior distribution over implicit stimulus dimensions. We use the specific example of the Posner spatial cueing task to relate the characteristics of this neural architecture to the experimental literature. The network produces a reaction time distribution and error rates that qualitatively replicate experimental data. The way these measures depend on valid versus invalid cueing, and on the exact perceived validity of the cue, is similar to what is observed in attentional experiments. Moreover, the activities at various levels of the hierarchy resemble electrophysiologically recorded activities of visual cortical neurons during attentional modulation and perceptual discrimination, lending further credence to the particular encoding and computational mechanisms that we have proposed.
In particular, the intermediate layers demonstrate a multiplicative gain modulation by attention, as observed in primate V4 neurons [8]; and the temporal behavior of the final layer, representing the marginal posterior, qualitatively replicates the experimental observation that LIP neurons show a noise-dependent firing rate increase when aligned to stimulus onset, and a noise-independent rise when aligned to the decision [13]. Our results illustrate the important concept that a prior over one variable dimension (space) can dramatically alter inferential performance in a completely independent variable dimension (orientation). In this case, the spatial prior affects the marginal posterior over φ by altering the relative importance of the joint posterior terms in the marginalization process. This leads to the difference in performance between valid and invalid trials, a difference that increases with γ. This model elaborates on an earlier phenomenological model [9] by showing explicitly how marginalizing (in layer III) over activities biased by the prior (in layer II) produces the effect. This work has various theoretical and experimental implications. The model presents one possible reconciliation of cortical and neuromodulatory representations of uncertainty. The sensory-driven activities (layer I in this model) themselves encode bottom-up uncertainty, including sensory receptor noise and any processing noise that has occurred up until then. The top-down information, which specifies the Gaussian component of the spatial prior p(µ), involves two kinds of uncertainty. One determines the locus and spatial extent of visual attention; the other specifies the relative importance of this top-down bias compared to the bottom-up stimulus-driven input. The first is highly specific in modality and featural dimension, presumably originating from higher visual cortical areas (e.g. parietal cortex for spatial attention, inferotemporal cortex for complex featural attention).
The second is more generic and may affect different featural dimensions, and perhaps even different modalities, simultaneously, and is thus more appropriately signalled by a diffusely-projecting neuromodulator such as ACh. This characterization is also in keeping with our previous models of ACh [21, 7] and with experimental data showing that ACh selectively suppresses corticocortical transmission relative to bottom-up processing in primary sensory cortices [22]. The perceptual decision strategy employed in this model is a natural multi-dimensional extension of SPRT [10]: we monitor for the first time at which any one of the posterior values crosses a fixed decision threshold. Note that the distribution of reaction times is skewed to the right (Fig 2(a)), as is commonly observed in visual discrimination tasks [11]. For binary decision tasks modeled using continuous diffusion processes [10, 11, 12, 13, 14], this skew arises from the properties of the first-passage time distribution (the time at which a diffusion barrier is first breached, corresponding to a fixed threshold confidence level in the binary choice). Our multi-choice decision-making realization of visual discrimination, as an extension of SPRT, also retains this skewed first-passage time distribution. Given that SPRT is optimal for binary decisions (smallest average response time for a given error rate), and that the MAP estimate is optimal under 0-1 loss, we conjecture that our particular n-dimensional generalization of SPRT should be optimal for sequential decision-making under 0-1 loss. This is an area of active research. There are several important open issues. One is that of noise: our network performs exact Bayesian inference when activities are deterministic. The potentially deleterious effects of noise, particularly in log probability space, need to be explored. Another important question is how uncertainty in signal strength, including the absence of a signal, can be detected and encoded.
If the stimulus strength is unknown and can vary over time, then naive integration of bottom-up inputs ignoring the signal-to-noise ratio is no longer optimal. Based on a slightly different task involving sustained attention or vigilance [23], Brown et al [24] have made the interesting suggestion that one role for noradrenergic neuromodulation is to implement a change in the integration strategy when a stimulus is detected. We have also addressed this issue by ascribing to phasic norepinephrine a related but distinct role in signaling unexpected state uncertainty (in preparation). Acknowledgement We are grateful to Eric Brown, Jonathan Cohen, Phil Holmes, Peter Latham, and Iain Murray for helpful discussions. Funding was from the Gatsby Charitable Foundation. References [1] Zemel, R S, Dayan, P, & Pouget, A (1998). Probabilistic interpretation of population codes. Neural Comput 10: 403-30. [2] Sahani, M & Dayan, P (2003). Doubly distributional population codes: simultaneous representation of uncertainty and multiplicity. Neural Comput 15: 2255-79. [3] Barber, M J, Clark, J W, & Anderson, C H (2003). Neural representation of probabilistic information. Neural Comput 15: 1843-64 [4] Rao, R P (2004). Bayesian computation in recurrent neural circuits. Neural Comput 16: 1-38. [5] Weiss, Y & Fleet D J(2002). Velocity likelihoods in biological and machine vision. In Prob Models of the Brain: Perc and Neural Function. Cambridge, MA: MIT Press. [6] Dayan, P & Yu, A J (2002). Acetylcholine, uncertainty, and cortical inference. In Adv in Neural Info Process Systems 14. [7] Yu, A J & Dayan, P (2003). Expected and unexpected uncertainty: ACh and NE in the neocortex. In Adv in Neural Info Process Systems 15. [8] McAdams, C J & Maunsell, J H R (1999). Effects of attention on orientation-tuning functions of single neurons in Macaque cortical area V4. J. Neurosci 19: 431-41. [9] Dayan, P & Zemel R S (1999). Statistical models and sensory attention. In ICANN 1999. 
[10] Wald, A (1947). Sequential Analysis. New York: John Wiley & Sons, Inc. [11] Luce, R D (1986). Response Times: Their Role in Inferring Elementary Mental Organization. New York: Oxford Univ. Press. [12] Ratcliff, R (2001). Putting noise into neurophysiological models of simple decision making. Nat Neurosci 4: 336-7. [13] Gold, J I & Shadlen, M N (2002). Banburismus and the brain: decoding the relationship between sensory stimuli, decisions, and reward. Neuron 36: 299-308. [14] Bogacz, Brown, Moehlis, Holmes, & Cohen (2004). The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced choice tasks, in press. [15] Posner, M I (1980). Orienting of attention. Q J Exp Psychol 32: 3-25. [16] Phillips, J M, et al (2000). Cholinergic neurotransmission influences overt orientation of visuospatial attention in the rat. Psychopharm 150: 112-6. [17] Yu, A J et al (2004). Expected and unexpected uncertainties control allocation of attention in a novel attentional learning task. Soc Neurosci Abst 30: 176.17. [18] Bowman, E M, Brown, V, Kertzman, C, Schwarz, U, & Robinson, D L (2003). Covert orienting of attention in Macaques: I. effects of behavioral context. J Neurophys 70: 431-434. [19] Reynolds, J H & Chelazzi, L (2004). Attentional modulation of visual processing. Annu Rev Neurosci 27: 611-47. [20] Kastner, S & Ungerleider, L G (2000). Mechanisms of visual attention in the human cortex. Annu Rev Neurosci 23: 315-41. [21] Yu, A J & Dayan, P (2002). Acetylcholine in cortical inference. Neural Networks 15: 719-30. [22] Kimura, F, Fukuda, M, & Tsumoto, T (1999). Acetylcholine suppresses the spread of excitation in the visual cortex revealed by optical recording: possible differential effect depending on the source of input. Eur J Neurosci 11: 3597-609. [23] Rajkowski, J, Kubiak, P, & Aston-Jones, P (1994). Locus coeruleus activity in monkey: phasic and tonic changes are associated with altered vigilance. Synapse 4: 162-4.
[24] Brown, E et al (2004). Simple neural networks that optimize decisions. Int J Bifurcation and Chaos, in press.
The Power of Selective Memory: Self-Bounded Learning of Prediction Suffix Trees Ofer Dekel Shai Shalev-Shwartz Yoram Singer School of Computer Science & Engineering The Hebrew University, Jerusalem 91904, Israel {oferd,shais,singer}@cs.huji.ac.il Abstract Prediction suffix trees (PST) provide a popular and effective tool for tasks such as compression, classification, and language modeling. In this paper we take a decision theoretic view of PSTs for the task of sequence prediction. Generalizing the notion of margin to PSTs, we present an online PST learning algorithm and derive a loss bound for it. The depth of the PST generated by this algorithm scales linearly with the length of the input. We then describe a self-bounded enhancement of our learning algorithm which automatically grows a bounded-depth PST. We also prove an analogous mistake-bound for the self-bounded algorithm. The result is an efficient algorithm that neither relies on a-priori assumptions on the shape or maximal depth of the target PST nor does it require any parameters. To our knowledge, this is the first provably-correct PST learning algorithm which generates a bounded-depth PST while being competitive with any fixed PST determined in hindsight. 1 Introduction Prediction suffix trees are elegant, effective, and well studied models for tasks such as compression, temporal classification, and probabilistic modeling of sequences (see for instance [13, 11, 7, 10, 2]). Different scientific communities gave different names to variants of prediction suffix trees such as context tree weighting [13] and variable length Markov models [11, 2]. A PST receives an input sequence of symbols, one symbol at a time, and predicts the identity of the next symbol in the sequence based on the most recently observed symbols. Techniques for finding a good prediction tree include online Bayesian mixtures [13], tree growing based on PAC-learning [11], and tree pruning based on structural risk minimization [8]. 
All of these algorithms either assume an a-priori bound on the maximal number of previous symbols which may be used to extend predictions, or use a pre-defined template tree beyond which the learned tree cannot grow. Motivated by statistical modeling of biological sequences, Apostolico and Bejerano [1] showed that the bound on the maximal depth can be removed by devising a smart modification of Ron et al.'s algorithm [11] (and in fact of many other variants), yielding an algorithm with time and space requirements that are linear in the length of the input. However, when modeling very long sequences, both the a-priori bound and the linear-space modification might impose serious computational problems.

[Figure 1: An illustration of the prediction process induced by a PST. The context in this example is − − + + +.]

In this paper we describe a variant of prediction trees for which we are able to devise a learning algorithm that grows bounded-depth trees, while remaining competitive with any fixed prediction tree chosen in hindsight. The resulting time and space requirements of our algorithm are bounded and scale polynomially with the complexity of the best prediction tree. Thus, we are able to sidestep the pitfalls of previous algorithms. The setting we employ is slightly more general than context-based sequence modeling, as we assume that we are provided with both an input stream and an output stream. For concreteness, we assume that the input stream is a sequence of vectors x_1, x_2, ... (x_t ∈ R^n) and the output stream is a sequence of symbols y_1, y_2, ... over a finite alphabet Y. We denote a sub-sequence y_i, ..., y_j of the output stream by y_i^j and the set of all possible sequences by Y*. We denote the length of a sequence s by |s|. Our goal is to correctly predict each symbol in the output stream y_1, y_2, ....
On each time-step t we predict the symbol y_t based on an arbitrarily long context of previously observed output stream symbols, y_1^{t-1}, and based on the current input vector x_t. For simplicity, we focus on the binary prediction case where |Y| = 2, and for convenience we use Y = {−1, +1} (or {−, +} for short) as our output alphabet. Our algorithms and analysis can be adapted to larger output alphabets using ideas from [5]. The hypotheses we use are confidence-rated and are of the form h : X × Y* → R, where the sign of h is the predicted symbol and the magnitude of h is the confidence in this prediction. Each hypothesis is parameterized by a triplet (w, T, g), where w ∈ R^n, T is a suffix-closed subset of Y*, and g is a context function from T into R (T is suffix-closed if for all s ∈ T it holds that all of the suffixes of s are also in T). The prediction extended by a hypothesis h = (w, T, g) for the t'th symbol is

h(x_t, y_1^{t-1}) = w · x_t + Σ_{i : y_{t-i}^{t-1} ∈ T} 2^{-i/2} g(y_{t-i}^{t-1}) .  (1)

In words, the prediction is the sum of an inner product between the current input vector x_t and the weight vector w, plus the application of the function g to all the suffixes of the output stream observed thus far that also belong to T. Since T is a suffix-closed set, it can be described as a rooted tree whose nodes are the sequences constituting T. The children of a node s ∈ T are all the sequences σs ∈ T (σ ∈ Y). Following the terminology of [11], we use the term prediction suffix tree (PST) for T and refer to s ∈ T as a sequence and a node interchangeably. We denote the length of the longest sequence in T by depth(T). Given g, each node s ∈ T is associated with a value g(s). Note that in the prediction process, the contribution of each context y_{t-i}^{t-1} is multiplied by a factor which is exponentially decreasing in the length of y_{t-i}^{t-1}.
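Eq. (1) is straightforward to implement with the tree T stored as a suffix dictionary. The sketch below reproduces the worked example from the text (the g values for "+", "++", and "+++" come from the example; values at other nodes are set to zero purely for illustration):

```python
import numpy as np

def pst_predict(w, x_t, g, y_hist):
    """Confidence-rated PST prediction, Eq. (1):
    h(x_t, y_1^{t-1}) = w . x_t + sum over suffixes y_{t-i}^{t-1} in T
    of 2^{-i/2} g(suffix).  T is represented by the key set of dict g,
    which must be suffix-closed (so the scan can stop at the first miss)."""
    h = float(np.dot(w, x_t))
    for i in range(1, len(y_hist) + 1):
        s = y_hist[-i:]          # suffix of length i
        if s not in g:           # suffix-closedness: no longer suffix matches
            break
        h += 2.0 ** (-i / 2) * g[s]
    return h

# The worked example: T = {eps, -, +, +-, ++, -++, +++}, context y_1^5 = --+++,
# with g(+) = -1, g(++) = 4, g(+++) = 7 (other node values illustrative).
g = {"-": 0.0, "+": -1.0, "+-": 0.0, "++": 4.0, "-++": 0.0, "+++": 7.0}
w = np.zeros(3)                  # drop the input-vector term for this example
x = np.zeros(3)
h = pst_predict(w, x, g, "--+++")
print(h)   # 2^{-1/2}*(-1) + 2^{-1}*4 + 2^{-3/2}*7
```

Because T is suffix-closed, the loop can safely stop at the first suffix that is missing from the dictionary, so prediction costs only as much as the depth of the matched path.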
This type of demotion of long suffixes is common to most PST-based approaches [13, 7, 10] and reflects the a-priori assumption that statistical correlations tend to decrease as the time between events increases. An illustration of a PST with T = {ε, −, +, +−, ++, −++, +++}, with the associated prediction for y_6 given the context y_1^5 = −−+++, is shown in Fig. 1. The predicted value of y_6 in the example is sign(w · x_t + 2^{-1/2} × (−1) + 2^{-1} × 4 + 2^{-3/2} × 7). Given T and g, we define the extension of g to all strings over Y* by setting g(s) = 0 for s ∉ T. Using this extension, Eq. (1) can be simplified to

h(x_t, y_1^{t-1}) = w · x_t + Σ_{i=1}^{t-1} 2^{-i/2} g(y_{t-i}^{t-1}) .  (2)

We use the online learning loss-bound model to analyze our algorithms. In the online model, learning takes place in rounds. On each round, an instance x_t is presented to the online algorithm, which in return predicts the next output symbol. The predicted symbol, denoted ŷ_t, is defined to be the sign of h_t(x_t, y_1^{t-1}). Then, the correct symbol y_t is revealed and, with the new input-output pair (x_t, y_t) on hand, a new hypothesis h_{t+1} is generated which will be used to predict the next output symbol, y_{t+1}. In our setting, the hypotheses h_t we generate are of the form given by Eq. (2). Most previous PST learning algorithms employed probabilistic approaches to learning. In contrast, we use a decision-theoretic approach, adapting the notion of margin to our setting. In the context of PSTs, this approach was first suggested by Eskin in [6]. We define the margin attained by the hypothesis h_t to be y_t h_t(x_t, y_1^{t-1}). Whenever the current symbol y_t and the output of the hypothesis agree in their sign, the margin is positive. We would like our online algorithm to correctly predict the output stream y_1, ..., y_T with a sufficiently large margin of at least 1. This construction is common to many online and batch learning algorithms for classification [12, 4].
Specifically, we use the hinge loss as our margin-based loss function, which serves as a proxy for the prediction error. Formally, the hinge loss attained on round t is defined as

ℓ_t = max{0, 1 − y_t h_t(x_t, y_1^{t-1})} .

The hinge loss equals zero when the margin exceeds 1 and otherwise grows linearly as the margin gets smaller. The online algorithms discussed in this paper are designed to suffer small cumulative hinge loss. Our algorithms are analyzed by comparing their cumulative hinge losses and prediction errors with those of any fixed hypothesis h* = (w*, T*, g*), which can be chosen in hindsight, after observing the entire input and output streams. In deriving our loss and mistake bounds we take into account the complexity of h*. Informally, the larger T* and the bigger the coefficients g*(s), the more difficult it is to compete with h*. The squared norm of the context function g is defined as

∥g∥² = Σ_{s∈T} (g(s))² .  (3)

The complexity of a hypothesis h (and h* in particular) is defined as the sum of ∥w∥² and ∥g∥². Using the extension of g to Y*, we can evaluate ∥g∥² by summing over all s ∈ Y*. We present two online algorithms for learning large-margin PSTs. The first incrementally constructs a PST which grows linearly with the length of the input and output sequences, and can thus become arbitrarily large. While this construction is quite standard, and similar methods were employed by previous PST-learning algorithms, it provides the infrastructure for our second algorithm, which grows bounded-depth PSTs. We derive an explicit bound on the maximal depth of the PSTs generated by this algorithm. We prove that both algorithms are competitive with any fixed PST constructed in hindsight. To our knowledge, this is the first provably correct construction of a PST-learning algorithm whose space complexity does not depend on the length of the input-output sequences.
2 Learning PSTs of Unbounded Depth

Having described the online prediction paradigm and the form of hypotheses used, we are left with the task of defining the initial hypothesis h_1 and the hypothesis update rule. To facilitate our presentation, we assume that all of the instances presented to the online algorithm have a bounded Euclidean norm, namely ∥x_t∥ ≤ 1. First, we define the initial hypothesis to be h_1 ≡ 0. We do so by setting w_1 = (0, ..., 0), T_1 = {ε} and g_1(·) ≡ 0. As a consequence, the first prediction always incurs a unit loss. Next, we define the updates applied to the weight vector w_t and to the PST at the end of round t. The weight vector is updated by w_{t+1} = w_t + y_t τ_t x_t, where τ_t = ℓ_t / (∥x_t∥² + 3). Note that if the margin attained on this round is at least 1, then ℓ_t = 0 and thus w_{t+1} = w_t. This type of update is common to other online learning algorithms (e.g. [3]). We would like to note in passing that the operation w_t · x_t in Eq. (2) can be replaced with an inner product defined via a Mercer kernel. To see this, note that w_t can be rewritten explicitly as Σ_{i=1}^{t-1} y_i τ_i x_i and therefore w_t · x_t = Σ_i y_i τ_i x_i · x_t. Using a kernel operator K simply amounts to replacing the latter expression with Σ_i y_i τ_i K(x_i, x_t).

[Figure 2: The online algorithms for learning a PST. The code outside the boxes defines the base algorithm for learning unbounded-depth PSTs; including the boxed pseudocode (bracketed below) gives the self-bounded version.

  initialize: w_1 = (0, ..., 0), T_1 = {ε}, g_1(s) = 0 ∀s ∈ Y*, P_0 = 0
  for t = 1, 2, ... do
    receive an instance x_t s.t. ∥x_t∥ ≤ 1
    define: j = max{i : y_{t-i}^{t-1} ∈ T_t}
    calculate: h_t(x_t, y_1^{t-1}) = w_t · x_t + Σ_{i=1}^{j} 2^{-i/2} g_t(y_{t-i}^{t-1})
    predict: ŷ_t = sign(h_t(x_t, y_1^{t-1}))
    receive y_t and suffer loss: ℓ_t = max{0, 1 − y_t h_t(x_t, y_1^{t-1})}
    set: τ_t = ℓ_t / (∥x_t∥² + 3) and d_t = t − 1
    [modification required for the self-bounded version:
      if ℓ_t ≤ 1/2 then set τ_t = 0, P_t = P_{t-1}, d_t = 0, and continue to the next iteration
      else set d_t = max{ j, ⌈2 log_2(2τ_t) − 2 log_2(√(P_{t-1}² + τ_t ℓ_t) − P_{t-1})⌉ }
        and P_t = P_{t-1} + 2 τ_t 2^{-d_t/2} ]
    update weight vector: w_{t+1} = w_t + y_t τ_t x_t
    update tree: T_{t+1} = T_t ∪ {y_{t-i}^{t-1} : 1 ≤ i ≤ d_t}
      g_{t+1}(s) = g_t(s) + y_t 2^{-|s|/2} τ_t if s ∈ {y_{t-i}^{t-1} : 1 ≤ i ≤ d_t}, and g_{t+1}(s) = g_t(s) otherwise]

The update applied to the context function g_t also depends on the scaling factor τ_t. However, g_t is updated only on those strings which participated in the prediction of ŷ_t, namely strings of the form y_{t-i}^{t-1} for 1 ≤ i < t. Formally, for 1 ≤ i < t our update takes the form g_{t+1}(y_{t-i}^{t-1}) = g_t(y_{t-i}^{t-1}) + y_t 2^{-i/2} τ_t. For any other string s, g_{t+1}(s) = g_t(s). The pseudocode of our algorithm is given in Fig. 2. The following theorem states that the algorithm in Fig. 2 is 2-competitive with any fixed hypothesis h* for which ∥g*∥ is finite.

Theorem 1. Let x_1, ..., x_T be an input stream and let y_1, ..., y_T be an output stream, where every x_t ∈ R^n, ∥x_t∥ ≤ 1 and every y_t ∈ {−1, 1}. Let h* = (w*, T*, g*) be an arbitrary hypothesis such that ∥g*∥ < ∞ and which attains the loss values ℓ*_1, ..., ℓ*_T on the input-output streams. Let ℓ_1, ..., ℓ_T be the sequence of loss values attained by the unbounded-depth algorithm in Fig. 2 on the input-output streams. Then it holds that

Σ_{t=1}^{T} ℓ_t² ≤ 4 (∥w*∥² + ∥g*∥²) + 2 Σ_{t=1}^{T} (ℓ*_t)² .

In particular, the above bounds the number of prediction mistakes made by the algorithm.

Proof. For every t = 1, ..., T define Δ_t = ∥w_t − w*∥² − ∥w_{t+1} − w*∥² and

Δ̂_t = Σ_{s∈Y*} (g_t(s) − g*(s))² − Σ_{s∈Y*} (g_{t+1}(s) − g*(s))² .  (4)

Note that ∥g_t∥² is finite for any value of t, and ∥g*∥² is finite by our assumption; therefore Δ̂_t is finite and well-defined. We prove the theorem by devising upper and lower bounds on Σ_t (Δ_t + Δ̂_t), beginning with the upper bound. Σ_t Δ_t is a telescopic sum which collapses to ∥w_1 − w*∥² − ∥w_{T+1} − w*∥². Similarly,

Σ_{t=1}^{T} Δ̂_t = Σ_{s∈Y*} (g_1(s) − g*(s))² − Σ_{s∈Y*} (g_{T+1}(s) − g*(s))² .
(5) Omitting negative terms and using the facts that w_1 = (0, …, 0) and g_1(·) ≡ 0, we get,

  Σ_{t=1}^{T} (Δ_t + Δ̂_t) ≤ ∥w⋆∥² + Σ_{s∈Y*} (g⋆(s))² = ∥w⋆∥² + ∥g⋆∥².  (6)

Having proven an upper bound on Σ_t (Δ_t + Δ̂_t), we turn to the lower bound. First, Δ_t can be rewritten as Δ_t = ∥w_t − w⋆∥² − ∥(w_{t+1} − w_t) + (w_t − w⋆)∥², and by expansion of the right-hand term we get that Δ_t = −∥w_{t+1} − w_t∥² − 2(w_{t+1} − w_t) · (w_t − w⋆). Using the value of w_{t+1} as defined in the update rule of the algorithm (w_{t+1} = w_t + y_t τ_t x_t) gives,

  Δ_t = −τ_t² ∥x_t∥² − 2 y_t τ_t x_t · (w_t − w⋆).  (7)

Next, we use similar manipulations to rewrite Δ̂_t. Unifying the two sums that make up Δ̂_t in Eq. (4) and adding null terms of the form 0 = g_t(s) − g_t(s), we obtain,

  Δ̂_t = Σ_{s∈Y*} [ (g_t(s) − g⋆(s))² − ( (g_{t+1}(s) − g_t(s)) + (g_t(s) − g⋆(s)) )² ]
       = Σ_{s∈Y*} [ −(g_{t+1}(s) − g_t(s))² − 2 (g_{t+1}(s) − g_t(s)) (g_t(s) − g⋆(s)) ].

Let d_t = t − 1 as defined in Fig. 2. Using the fact that g_{t+1} differs from g_t only on strings of the form y_{t−i}^{t−1}, where g_{t+1}(y_{t−i}^{t−1}) = g_t(y_{t−i}^{t−1}) + y_t 2^{−i/2} τ_t, we can write Δ̂_t as,

  Δ̂_t = −Σ_{i=1}^{d_t} 2^{−i} τ_t² − 2 Σ_{i=1}^{d_t} y_t 2^{−i/2} τ_t ( g_t(y_{t−i}^{t−1}) − g⋆(y_{t−i}^{t−1}) ).  (8)

Summing Eqs. (7-8) gives,

  Δ_t + Δ̂_t = −τ_t² ( ∥x_t∥² + Σ_{i=1}^{d_t} 2^{−i} ) − 2 τ_t y_t ( w_t · x_t + Σ_{i=1}^{d_t} 2^{−i/2} g_t(y_{t−i}^{t−1}) ) + 2 τ_t y_t ( w⋆ · x_t + Σ_{i=1}^{d_t} 2^{−i/2} g⋆(y_{t−i}^{t−1}) ).  (9)

Using Σ_{i=1}^{d_t} 2^{−i} ≤ 1 with the definitions of h_t and h⋆ from Eq. (2), we get that,

  Δ_t + Δ̂_t ≥ −τ_t² ( ∥x_t∥² + 1 ) − 2 τ_t y_t h_t(x_t, y_1^{t−1}) + 2 τ_t y_t h⋆(x_t, y_1^{t−1}).  (10)

Denote the right-hand side of Eq. (10) by Γ_t and recall that the loss is defined as max{0, 1 − y_t h_t(x_t, y_1^{t−1})}. Therefore, if ℓ_t > 0 then −y_t h_t(x_t, y_1^{t−1}) = ℓ_t − 1. Multiplying both sides of this equality by τ_t gives −τ_t y_t h_t(x_t, y_1^{t−1}) = τ_t (ℓ_t − 1). Now note that this equality also holds when ℓ_t = 0, since then τ_t = 0 and both sides of the equality simply equal zero. Similarly, we have that y_t h⋆(x_t, y_1^{t−1}) ≥ 1 − ℓ⋆_t. Plugging these two inequalities into Eq.
(10) gives that,

  Γ_t ≥ −τ_t² ( ∥x_t∥² + 1 ) + 2 τ_t (ℓ_t − 1) + 2 τ_t (1 − ℓ⋆_t),

which in turn equals −τ_t² (∥x_t∥² + 1) + 2 τ_t ℓ_t − 2 τ_t ℓ⋆_t. The lower bound on Γ_t still holds if we subtract from it the non-negative term (2^{1/2} τ_t − 2^{−1/2} ℓ⋆_t)², yielding,

  Γ_t ≥ −τ_t² ( ∥x_t∥² + 1 ) + 2 τ_t ℓ_t − 2 τ_t ℓ⋆_t − ( 2 τ_t² − 2 τ_t ℓ⋆_t + (ℓ⋆_t)²/2 )
      = −τ_t² ( ∥x_t∥² + 3 ) + 2 τ_t ℓ_t − (ℓ⋆_t)²/2.

Using the definition of τ_t and using the assumption that ∥x_t∥² ≤ 1, we get,

  Γ_t ≥ −τ_t ℓ_t + 2 τ_t ℓ_t − (ℓ⋆_t)²/2 = ℓ_t²/(∥x_t∥² + 3) − (ℓ⋆_t)²/2 ≥ ℓ_t²/4 − (ℓ⋆_t)²/2.  (11)

Since Eq. (10) implies that Δ_t + Δ̂_t ≥ Γ_t, summing Δ_t + Δ̂_t over all values of t gives,

  Σ_{t=1}^{T} (Δ_t + Δ̂_t) ≥ (1/4) Σ_{t=1}^{T} ℓ_t² − (1/2) Σ_{t=1}^{T} (ℓ⋆_t)².

Combining the bound above with Eq. (6) gives the bound stated by the theorem. Finally, we obtain a mistake bound by noting that whenever a prediction mistake occurs, ℓ_t ≥ 1.

We would like to note that the algorithm for learning unbounded-depth PSTs constructs a sequence of PSTs, T_1, …, T_T, such that depth(T_t) may equal t. Furthermore, the number of new nodes added to the tree on round t is on the order of t, resulting in T_t having O(t²) nodes. However, PST implementation tricks in [1] can be used to reduce the space complexity of the algorithm from quadratic to linear in t.

3 Self-Bounded Learning of PSTs

The online learning algorithm presented in the previous section has one major drawback: the PSTs it generates can keep growing with each online round. We now describe a modification to the algorithm which casts a limit on the depth of the PST that is learned. Our technique does not rely on arbitrary assumptions on the structure of the tree (e.g. maximal tree depth), nor does it require any parameters. The algorithm determines the depth to which the PST should be updated automatically, and is therefore named the self-bounded algorithm for PST learning. The self-bounded algorithm is obtained from the original unbounded algorithm by adding the lines enclosed in boxes in Fig. 2.
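For concreteness, the full loop of Fig. 2 can be sketched in Python, with the boxed self-bounded lines as an option. The class, attribute, and helper names are ours, not the paper's; the context-function tree is stored sparsely as a dict keyed by suffix tuples, and the cap of d_t by the available history length is a practical guard we add, not part of the stated algorithm.

```python
import math
import numpy as np

class OnlinePST:
    """Sketch of the online PST learner of Fig. 2 (names ours).
    With self_bounded=True the boxed modification is applied as well."""

    def __init__(self, n_features, self_bounded=False):
        self.w = np.zeros(n_features)   # w_1 = (0, ..., 0)
        self.g = {}                     # g_1 = 0; absent suffix keys mean 0
        self.hist = []                  # y_1, ..., y_{t-1}
        self.P = 0.0                    # P_0 = 0
        self.self_bounded = self_bounded

    def margin(self, x):
        # h_t = w_t.x_t + sum_{i=1}^{j} 2^{-i/2} g_t(y_{t-i}^{t-1})
        m, i = float(self.w @ x), 1
        while i <= len(self.hist) and tuple(self.hist[-i:]) in self.g:
            m += 2.0 ** (-i / 2) * self.g[tuple(self.hist[-i:])]
            i += 1
        return m, i - 1                 # (margin, j)

    def step(self, x, y):
        m, j = self.margin(x)
        loss = max(0.0, 1.0 - y * m)
        tau = loss / (float(x @ x) + 3.0)
        d = len(self.hist)              # d_t = t - 1 (unbounded version)
        if self.self_bounded:
            if loss <= 0.5:             # tolerate small losses: no update at all
                self.hist.append(y)
                return loss
            root = math.sqrt(self.P ** 2 + tau * loss) - self.P
            d_raw = max(j, math.ceil(2 * math.log2(2 * tau) - 2 * math.log2(root)))
            self.P += 2 * tau * 2.0 ** (-d_raw / 2)
            d = min(d_raw, len(self.hist))  # practical guard: finite history
        if tau > 0.0:
            self.w += y * tau * x       # w_{t+1} = w_t + y_t tau_t x_t
            for i in range(1, d + 1):   # grow tree / update g along the context path
                s = tuple(self.hist[-i:])
                self.g[s] = self.g.get(s, 0.0) + y * 2.0 ** (-i / 2) * tau
        self.hist.append(y)
        return loss
```

Note that, as stated in the text, the very first round always incurs a unit loss because h_1 ≡ 0.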
A new variable d_t is calculated on every online iteration. On rounds where an update takes place, the algorithm updates the PST up to depth d_t, adding nodes if necessary. Below this depth, no nodes are added and the context function is not modified. The definition of d_t is slightly involved; however, it enables us to prove that we remain competitive with any fixed hypothesis (Thm. 2) while maintaining a bounded-depth PST (Thm. 3). A point worth noting is that the criterion for performing updates has also changed. Before, the online hypothesis was modified whenever ℓ_t > 0. Now, an update occurs only when ℓ_t > 1/2, tolerating small values of loss. Intuitively, this relaxed margin requirement is what enables us to avoid deepening the tree. The algorithm is allowed to predict with lower confidence and in exchange the PST can be kept small. The trade-off between PST size and confidence of prediction is adjusted automatically, extending ideas from [9]. While the following theorem provides a loss bound, this bound can be immediately used to bound the number of prediction mistakes made by the algorithm.

Theorem 2. Let x_1, …, x_T be an input stream and let y_1, …, y_T be an output stream, where every x_t ∈ R^n, ∥x_t∥ ≤ 1 and every y_t ∈ {−1, 1}. Let h⋆ = (w⋆, T⋆, g⋆) be an arbitrary hypothesis such that ∥g⋆∥ < ∞ and which attains the loss values ℓ⋆_1, …, ℓ⋆_T on the input-output streams. Let ℓ_1, …, ℓ_T be the sequence of loss values attained by the self-bounded algorithm in Fig. 2 on the input-output streams. Then the sum of squared losses attained on those rounds where ℓ_t > 1/2 is bounded by,

  Σ_{t: ℓ_t > 1/2} ℓ_t² ≤ ( (1 + √5) ∥g⋆∥ + 2 ∥w⋆∥ + ( 2 Σ_{t=1}^{T} (ℓ⋆_t)² )^{1/2} )².

Proof. We define Δ_t and Δ̂_t as in the proof of Thm. 1. First note that the inequality in Eq. (9) in the proof of Thm. 1 still holds. Using the fact that Σ_{i=1}^{d_t} 2^{−i} ≤ 1 with the definitions of h_t and h⋆ from Eq. (2), Eq.
(9) becomes,

  Δ_t + Δ̂_t ≥ −τ_t² ( ∥x_t∥² + 1 ) − 2 τ_t y_t h_t(x_t, y_1^{t−1}) + 2 τ_t y_t h⋆(x_t, y_1^{t−1}) − 2 τ_t y_t Σ_{i=d_t+1}^{t−1} 2^{−i/2} g⋆(y_{t−i}^{t−1}).  (12)

Using the Cauchy-Schwarz inequality we get that

  Σ_{i=d_t+1}^{t−1} 2^{−i/2} g⋆(y_{t−i}^{t−1}) ≤ ( Σ_{i=d_t+1}^{t−1} 2^{−i} )^{1/2} ( Σ_{i=d_t+1}^{t−1} g⋆(y_{t−i}^{t−1})² )^{1/2} ≤ 2^{−d_t/2} ∥g⋆∥.

Plugging the above into Eq. (12) and using the definition of Γ_t from the proof of Thm. 1 gives Δ_t + Δ̂_t ≥ Γ_t − 2 τ_t 2^{−d_t/2} ∥g⋆∥. Using the lower bound on Γ_t from Eq. (11) gives,

  Δ_t + Δ̂_t ≥ τ_t ℓ_t − (ℓ⋆_t)²/2 − 2 τ_t 2^{−d_t/2} ∥g⋆∥.  (13)

For every 1 ≤ t ≤ T, define L_t = Σ_{i=1}^{t} τ_i ℓ_i and P_t = Σ_{i=1}^{t} τ_i 2^{1−d_i/2}, and let P_0 = L_0 = 0. Summing Eq. (13) over t and comparing to the upper bound in Eq. (6) we get,

  L_T ≤ ∥g⋆∥² + ∥w⋆∥² + (1/2) Σ_{t=1}^{T} (ℓ⋆_t)² + ∥g⋆∥ P_T.  (14)

We now use an inductive argument to prove that P_t ≤ √L_t for all 0 ≤ t ≤ T. This inequality trivially holds for t = 0. Assume that P_{t−1}² ≤ L_{t−1}. Expanding P_t we get that

  P_t² = ( P_{t−1} + τ_t 2^{1−d_t/2} )² = P_{t−1}² + P_{t−1} 2^{2−d_t/2} τ_t + 2^{2−d_t} τ_t².  (15)

We therefore need to show that the right-hand side of Eq. (15) is at most L_t. The definition of d_t implies that 2^{−d_t/2} is at most ( (P_{t−1}² + τ_t ℓ_t)^{1/2} − P_{t−1} )/(2 τ_t). Plugging this fact into the right-hand side of Eq. (15) gives that P_t² cannot exceed P_{t−1}² + τ_t ℓ_t. Using the inductive assumption P_{t−1}² ≤ L_{t−1}, we get that P_t² ≤ L_{t−1} + τ_t ℓ_t = L_t and the inductive argument is proven. In particular, we have shown that P_T ≤ √L_T. Combining this inequality with Eq. (14) we get that

  (√L_T)² − ∥g⋆∥ √L_T − ( ∥g⋆∥² + ∥w⋆∥² + (1/2) Σ_{t=1}^{T} (ℓ⋆_t)² ) ≤ 0.

The above is a quadratic inequality in √L_T, from which it follows that √L_T can be at most as large as the positive root of this equation, namely,

  √L_T ≤ (1/2) ( ∥g⋆∥ + ( 5 ∥g⋆∥² + 4 ∥w⋆∥² + 2 Σ_{t=1}^{T} (ℓ⋆_t)² )^{1/2} ).

Using the fact that (a² + b²)^{1/2} ≤ a + b (for a, b ≥ 0), we get that,

  √L_T ≤ ((1 + √5)/2) ∥g⋆∥ + ∥w⋆∥ + ( (1/2) Σ_{t=1}^{T} (ℓ⋆_t)² )^{1/2}.  (16)

If ℓ_t ≤ 1/2 then τ_t ℓ_t = 0, and otherwise τ_t ℓ_t ≥ ℓ_t²/4.
Therefore, the sum of ℓ_t² over the rounds for which ℓ_t > 1/2 is at most 4 L_T, which yields the bound of the theorem. Note that if there exists a fixed hypothesis with ∥g⋆∥ < ∞ which attains a margin of 1 on the entire input sequence, then the bound of Thm. 2 reduces to a constant. Our next theorem states that the algorithm indeed produces bounded-depth PSTs. Its proof is omitted due to lack of space.

Theorem 3. Under the conditions of Thm. 2, let T_1, …, T_T be the sequence of PSTs generated by the algorithm in Fig. 2. Then, for all 1 ≤ t ≤ T,

  depth(T_t) ≤ 9 + 2 log₂( 2 ( ∥g⋆∥ + ∥w⋆∥ + ( (1/2) Σ_{t=1}^{T} (ℓ⋆_t)² )^{1/2} ) + 1 ).

The bound on tree depth given in Thm. 3 becomes particularly interesting when there exists some fixed hypothesis h⋆ for which Σ_t (ℓ⋆_t)² is finite and independent of the total length of the output sequence, denoted by T. In this case, Thm. 3 guarantees that the depth of the PST generated by the self-bounded algorithm is smaller than a constant which does not depend on T. We would also like to emphasize that our algorithm is competitive even with a PST which is deeper than the PST constructed by the algorithm. This is accomplished by allowing the algorithm's predictions to attain lower confidence than the predictions made by the fixed PST with which it is competing.

Acknowledgments  This work was supported by the Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778, and by Israeli Science Foundation grant number 522-04.

References

[1] G. Bejerano and A. Apostolico. Optimal amnesic probabilistic automata, or, how to learn and classify proteins in linear time and space. Journal of Computational Biology, 7(3/4):381-393, 2000.
[2] P. Bühlmann and A. J. Wyner. Variable length Markov chains. The Annals of Statistics, 27(2):480-513, 1999.
[3] K. Crammer, O. Dekel, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. In Advances in Neural Information Processing Systems 16, 2003.
[4] N.
Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[5] O. Dekel, J. Keshet, and Y. Singer. Large margin hierarchical classification. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[6] E. Eskin. Sparse Sequence Modeling with Applications to Computational Biology and Intrusion Detection. PhD thesis, Columbia University, 2002.
[7] D. P. Helmbold and R. E. Schapire. Predicting nearly as well as the best pruning of a decision tree. Machine Learning, 27(1):51-68, April 1997.
[8] M. Kearns and Y. Mansour. A fast, bottom-up decision tree pruning algorithm with near-optimal generalization. In Proceedings of the Fourteenth International Conference on Machine Learning, 1996.
[9] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64(1):48-75, 2002.
[10] F. C. Pereira and Y. Singer. An efficient extension to mixture techniques for prediction and decision trees. Machine Learning, 36(3):183-199, 1999.
[11] D. Ron, Y. Singer, and N. Tishby. The power of amnesia: learning probabilistic automata with variable memory length. Machine Learning, 25(2):117-150, 1996.
[12] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[13] F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens. The context tree weighting method: basic properties. IEEE Transactions on Information Theory, 41(3):653-664, 1995.
Modelling Uncertainty in the Game of Go David H. Stern Department of Physics Cambridge University dhs26@cam.ac.uk Thore Graepel Microsoft Research Cambridge, U.K. thoreg@microsoft.com David J. C. MacKay Department of Physics Cambridge University mackay@mrao.cam.ac.uk Abstract Go is an ancient oriental game whose complexity has defeated attempts to automate it. We suggest using probability in a Bayesian sense to model the uncertainty arising from the vast complexity of the game tree. We present a simple conditional Markov random field model for predicting the pointwise territory outcome of a game. The topology of the model reflects the spatial structure of the Go board. We describe a version of the Swendsen-Wang process for sampling from the model during learning and apply loopy belief propagation for rapid inference and prediction. The model is trained on several hundred records of professional games. Our experimental results indicate that the model successfully learns to predict territory despite its simplicity. 1 Introduction The game of Go originated in China over 4000 years ago. Its rules are simple (See www.gobase.org for an introduction). Two players, Black and White, take turns to place stones on the intersections of an N × N grid (usually N = 19 but smaller boards are in use as well). All the stones of each player are identical. Players place their stones in order to create territory by occupying or surrounding areas of the board. The player with the most territory at the end of the game is the winner. A stone is captured if it has been completely surrounded (in the horizontal and vertical directions) by stones of the opponent’s colour. Stones in a contiguous ‘chain’ have the common fate property: they are captured all together or not at all [1]. The game that emerges from these simple rules has a complexity that defeats attempts to apply minimax search. 
The best Go programs play only at the level of weak amateur Go players and Go is therefore considered to be a serious AI challenge not unlike Chess in the 1960s. There are two main reasons for this state of affairs: firstly, the high branching factor of Go (typically 200 to 300 potential moves per position) prevents the expansion of a game tree to any useful depth. Secondly, it is difficult to produce an evaluation function for Go positions. A Go stone has no intrinsic value; its value is determined by its relationships with other stones. Go players evaluate positions using visual pattern recognition and qualitative intuitions which are difficult to formalise. Most Go programs rely on a large amount of hand-tailored rules and expert knowledge [2]. Some machine learning techniques have been applied to Go with limited success. Schraudolph, Dayan and Sejnowski [3] trained a multi-layer perceptron to evaluate board positions by temporal difference learning. Enzenberger [4] improved on this by structuring the topologies of his neural networks according to the relationships between stones on the board. Graepel et al. [1] made use of the common fate property of chains to construct an efficient graph-based representation of the board. They trained a Support Vector Machine to use this representation to solve Go problems. Our starting point is the uncertainty about the future course of the game that arises from the vast complexity of the game tree. We propose to explicitly model this uncertainty using probability in a Bayesian sense. The Japanese have a word, aji, much used by Go players. Taken literally it means ‘taste’. Taste lingers, and likewise the influence of a Go stone lingers (even if it appears weak or dead) because of the uncertainty of the effect it may have in the future. We use a probabilistic model that takes the current board position and predicts for every intersection of the board if it will be Black or White territory. 
Given such a model the score of the game can be predicted and hence an evaluation function produced. The model is a conditional Markov random field [5] which incorporates the spatial structure of the Go board.

2 Models for Predicting Territory

Consider the Go board as an undirected graph G = (N, E) with N = N_x × N_y nodes n ∈ N representing vertices on the board and edges e ∈ E connecting vertically and horizontally neighbouring points. We can denote a position as the vector c ∈ {Black, White, Empty}^N with c_n = c(n), and similarly the final territory outcome of the game as s ∈ {+1, −1}^N with s_n = s(n). For convenience we score from the point of view of Black, so elements of s representing Black territory are valued +1 and elements representing White territory are valued −1. Go players will note that we are adopting the Chinese method of scoring empty as well as occupied intersections. The distribution we wish to model is P(s|c), that is, the distribution over final territory outcomes given the current position. Such a model would be useful for several reasons.

• Most importantly, the detailed outcomes provide us with a simple evaluation function for Go positions via the expected score, u(c) := ⟨Σ_i s_i⟩_{P(s|c)}. An alternative (and probably better) evaluation function is given by the probability of winning, which takes the form P(Black wins) = P(Σ_i s_i > komi), where komi refers to the winning threshold for Black.
• Connectivity of stones is vital because stones can draw strength from other stones. Connectivity could be measured by the correlation between nodes under the distribution P(s|c). This would allow us to segment the board into 'groups' of stones to reduce complexity.
• It would also be useful to observe cases where we have an anti-correlation between nodes in the territory prediction. Japanese refer to such cases as miai, in which only one of two desired results can be achieved at the expense of the other - a consequence of moving in turns.
• The fate of a group of Go stones could be estimated from the distribution P(s|c) by marginalising out the nodes not involved.

The way stones exert long range influence can be considered recursive. A stone influences its neighbours, who influence their neighbours, and so on. A simple model which exploits this idea is to consider the Go board itself as an undirected graphical model in the form of a Conditional Random Field (CRF) [5]. We factorize the distribution as

  P(s|c) = (1/Z(c, θ)) Π_{f∈F} ψ_f(s_f, c_f, θ_f) = (1/Z(c, θ)) exp( Σ_{f∈F} log ψ_f(s_f, c_f, θ_f) ).  (1)

The simplest form of this model has one factor for each pair of neighbouring nodes i, j, so ψ_f(s_f, c_f, θ_f) = ψ_f(s_i, s_j, c_i, c_j, θ_f).

Boltzmann5  For our first model we decompose the factors into 'coupling' terms and 'external field' terms as follows:

  P(s|c) = (1/Z(c, θ)) exp( Σ_{(i,j)∈F} { w(c_i, c_j) s_i s_j + h(c_i) s_i + h(c_j) s_j } ).  (2)

This gives a Boltzmann machine whose connections have the grid topology of the board. The couplings between territory-outcome nodes depend on the current board position local to those nodes, and the external field at each node is determined by the state of the board at that location. We assume that Go positions with their associated territory positions are symmetric with respect to colour reversal, so ψ_f(s_i, s_j, c_i, c_j, θ_f) = ψ_f(−s_i, −s_j, −c_i, −c_j, θ_f). Pairwise connections are also invariant to direction reversal, so ψ_f(s_i, s_j, c_i, c_j, θ_f) = ψ_f(s_j, s_i, c_j, c_i, θ_f). It follows that the model described in (2) can be specified by just five parameters:

• w_chains = w(Black, Black) = w(White, White),
• w_inter-chain = w(Black, White) = w(White, Black),
• w_chain-empty = w(Empty, White) = w(Empty, Black),
• w_empty = w(Empty, Empty),
• h_stones = h(Black) = −h(White), and h(Empty) is set to zero by symmetry.

We will refer to this model as Boltzmann5. This simple model is interesting because all these parameters are readily interpreted.
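As an illustration, the unnormalized log-potential of Eq. (2) can be evaluated directly on a board. This is a sketch, not the authors' code: the function name, the array-based data layout, and the choice of sorted colour pairs as dictionary keys are ours.

```python
import numpy as np

def boltzmann5_logpot(s, c, w, h):
    """Unnormalized log-potential of Eq. (2) for the Boltzmann5 model.
    s: (n, n) array of +/-1 territory labels; c: (n, n) array of 'B'/'W'/'E';
    w: dict mapping a sorted colour pair to a coupling; h: dict of colour biases."""
    total = 0.0
    n = s.shape[0]
    for y in range(n):
        for x in range(n):
            for dy, dx in ((1, 0), (0, 1)):  # each grid edge counted once
                y2, x2 = y + dy, x + dx
                if y2 < n and x2 < n:
                    key = tuple(sorted((c[y, x], c[y2, x2])))
                    total += (w[key] * s[y, x] * s[y2, x2]
                              + h[c[y, x]] * s[y, x] + h[c[y2, x2]] * s[y2, x2])
    return total
```

Note that, as in Eq. (2), the bias terms are summed per factor, so an interior vertex contributes its h term once per incident edge; the colour-reversal symmetry of the model (h(B) = −h(W), h(E) = 0) is then easy to check numerically.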
For example, we would expect w_chains to take on a large positive value since chains have common fate.

BoltzmannLiberties  A feature that has particular utility for evaluating Go positions is the number of liberties associated with a chain of stones. A liberty of a chain is an empty vertex adjacent to it. The number of liberties indicates a chain's safety because the opponent would have to occupy all the liberties to capture the chain. Our second model takes this information into account:

  P(s|c) = (1/Z(c, θ)) exp( Σ_{(i,j)∈F} w(c_i, c_j, s_i, s_j, l_i, l_j) ),  (3)

where l_i is element i of a vector l ∈ {1, 2, 3, 4-or-more}^N, the liberty count of each vertex on the Go board. A group with four or more liberties is considered relatively safe. Again we can apply symmetry arguments and end up with 78 parameters. We will refer to this model as BoltzmannLiberties.

We trained the two models using board positions from a database of 22,000 games between expert Go players (the GoGoD database, April 2003; URL: http://www.gogod.demon.co.uk).

[Figure 1: Comparing ordinary Gibbs with Swendsen-Wang sampling for Boltzmann5. Shown are the differences between the running averages and the exact marginals for each of the 361 nodes, plotted as a function of the number of whole-board samples. Panels: (a) Gibbs Sampling, (b) Swendsen-Wang.]

The territory outcomes of a subset of these games were determined using the Go program GnuGo to analyse their final positions. Each training example comprised a board position c with its associated territory outcome s. Training was performed by maximising the likelihood ln P(s′|c) using gradient descent. In order to calculate the likelihood it is necessary to perform inference to obtain the marginal expectations of the potentials.

3 Inference Methods

It is possible to perform exact inference on the model by variable elimination [6]. Eliminating nodes one diagonal at a time gave an efficient computation.
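Before turning to approximate inference, note that the liberty features l used by BoltzmannLiberties above can be computed with a standard chain flood-fill, exploiting the common-fate property of chains. This is a sketch (function name and return convention ours), capping counts at the paper's "4 or more" bucket:

```python
def liberty_counts(board):
    """Per-vertex liberty counts via chain flood-fill, capped at 4 ('4 or more').
    board: list of lists with 'B', 'W' or 'E'.  Returns a same-shape grid of
    counts; empty vertices get 0 (convention ours, they carry no chain)."""
    n = len(board)
    counts = [[0] * n for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            if board[y][x] == 'E' or seen[y][x]:
                continue
            colour, chain, libs, stack = board[y][x], [], set(), [(y, x)]
            seen[y][x] = True
            while stack:                       # flood-fill one chain
                cy, cx = stack.pop()
                chain.append((cy, cx))
                for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                    if 0 <= ny < n and 0 <= nx < n:
                        if board[ny][nx] == 'E':
                            libs.add((ny, nx))
                        elif board[ny][nx] == colour and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
            for cy, cx in chain:               # common fate: shared liberty count
                counts[cy][cx] = min(len(libs), 4)
    return counts
```

Every stone in a chain receives the same count, mirroring the fact that the whole chain lives or dies together.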
The cost of exact inference was still too high for general use, but it was used to compare other inference methods.

Sampling  The standard method for sampling from a Boltzmann machine is to use Gibbs sampling, where each node is updated one at a time, conditional on the others. However, Gibbs sampling mixes slowly for spin systems with strong correlations. A generalisation of the Swendsen-Wang process [7] alleviates this problem. The original Swendsen-Wang algorithm samples from a ferromagnetic Ising model with no external field by adding an additional set of 'bond' nodes d, one attached to each factor (edge) in the original graph. Each of these nodes can either be in the state 'bond' or 'no bond'. The new factor potentials ψ_f(s_f, c_f, d_f, θ_f) are chosen such that if a bond exists between a pair of spins then they are forced to be in the same state. Conditional on the bonds, each cluster has an equal probability of having all its spins in the 'up' state or all in the 'down' state. The algorithm samples from P(s|d, c, θ) and P(d|s, c, θ) in turn (flipping clusters and forming bonds, respectively). It can be generalised to models with arbitrary couplings and biases [7, 8]. The new factor potentials ψ_f(s_f, c_f, d_f, θ_f) have the following effect: if the coupling is positive, then when the d node is in the 'bond' state it forces the two spins to be in the same state; if the coupling is negative, the 'bond' state forces the two spins to be opposite. The probability of each cluster being in each state depends on the sum of the biases involved. Figure 1 shows that the mixing rate of the sampling process is improved by using Swendsen-Wang, allowing us to find accurate marginals for a single position in a couple of seconds. [Footnote: GnuGo is available at http://www.gnu.org/software/gnugo/gnugo.html]

Loopy Belief Propagation  In order to perform very rapid (approximate) inference we used the loopy belief propagation (BP) algorithm [9]; the results are examined in Section 4.
This algorithm is similar to an influence function [10], as often used by Go programmers to segment the board into Black and White territory, and for this reason is laid out below. For each board vertex j ∈ N, create a data structure called a node containing:

1. A(j), the set of nodes corresponding to the neighbours of vertex j,
2. a set of new messages m^new_ij(s_j) ∈ M^new, one for each i ∈ A(j),
3. a set of old messages m^old_ij(s_j) ∈ M^old, one for each i ∈ A(j),
4. a belief b_j(s_j).

  repeat
    for all j ∈ N do
      for all i ∈ A(j) do
        for all s_j ∈ {Black, White} do
          let variable SUM := 0
          for all s_i ∈ {Black, White} do
            SUM := SUM + ψ_(i,j)(s_i, s_j) Π_{q∈A(i)\j} m^old_qi(s_i)
          end for
          m^new_ij(s_j) := SUM
        end for
      end for
    end for
    for all messages m^new_xy(s_y) ∈ M^new do
      m^new_xy(s_y) := λ m^old_xy(s_y) + (1 − λ) m^new_xy(s_y)
    end for
  until completed I iterations (typically I = 50)

  Belief update:
  for all j ∈ N do
    for all s_j ∈ {Black, White} do
      b_j(s_j) := Π_{q∈A(j)} m^new_qj(s_j)
    end for
  end for

Here λ (typically 0.5) damps any oscillations. ψ_(i,j)(s_i, s_j) is the factor potential (see (1)) and in the case of Boltzmann5 takes the form ψ_(i,j)(s_i, s_j) = exp( w(c_i, c_j) s_i s_j + h(c_i) s_i + h(c_j) s_j ). Now the probability of each vertex being Black or White territory is found by normalising the beliefs at each node; for example, P(s_j = Black) = b_j(Black)/Z where Z = b_j(Black) + b_j(White). The accuracy of the loopy BP approximation appears to be improved by using it during the parameter learning stage in cases where it is to be used in evaluation.

4 Results for Territory Prediction

Some Learnt Parameters  Here are some parameters learnt for the Boltzmann5 model (2). This model was trained on 290 positions from expert Go games at move 80. Training was performed by maximum likelihood as described in Section 2.
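The message-passing scheme laid out above can be written compactly in Python. This is a minimal sketch with function and variable names of our own, taking a generic pairwise potential (such as the Boltzmann5 factor) as an argument, with messages normalised each sweep for numerical stability:

```python
import math

def loopy_bp(n, psi, iters=50, lam=0.5):
    """Damped loopy BP on an n-by-n grid with pairwise potentials (sketch).
    psi(i, j, si, sj) is the factor value for neighbouring vertices i, j and
    spins si, sj in {-1, +1}.  Returns a dict vertex -> P(s_j = +1)."""
    nodes = [(y, x) for y in range(n) for x in range(n)]

    def nbrs(v):
        y, x = v
        return [(y + dy, x + dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= y + dy < n and 0 <= x + dx < n]

    msg = {(i, j, s): 1.0 for j in nodes for i in nbrs(j) for s in (-1, 1)}
    for _ in range(iters):
        new = {}
        for j in nodes:
            for i in nbrs(j):
                for sj in (-1, 1):
                    # sum over s_i of psi * product of messages into i, except from j
                    new[i, j, sj] = sum(
                        psi(i, j, si, sj)
                        * math.prod(msg[q, i, si] for q in nbrs(i) if q != j)
                        for si in (-1, 1))
        for i, j in {(i, j) for (i, j, _) in new}:
            z = new[i, j, -1] + new[i, j, 1]   # normalise, then damp with lambda
            for s in (-1, 1):
                msg[i, j, s] = lam * msg[i, j, s] + (1 - lam) * new[i, j, s] / z
    belief = {}
    for j in nodes:
        b = {s: math.prod(msg[q, j, s] for q in nbrs(j)) for s in (-1, 1)}
        belief[j] = b[1] / (b[1] + b[-1])
    return belief
```

With a Boltzmann5-style potential biased toward one colour, every vertex belief should land on the corresponding side of 1/2.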
The small black and white squares at each vertex represent the average territory prediction at that vertex, from −1 (maximum white square) to +1 (maximum black square). • hstones = 0.265 • wempty = 0.427 • wchain−empty = 0.442 • wchains = 2.74 • winter−chain = 0.521 The values of these parameters can be interpreted. For example wchains corresponds to the correlation between the likely territory outcome of two adjacent vertices in a chain of connected stones. The high value of this parameter derives from the ‘common fate’ property of chains as described in Section 1. Interestingly, the value of the parameter wempty (corresponding to the coupling between territory predictions of neighbouring vertices in empty space) is 0.427 which is close to the critical coupling for an Ising model, 0.441. Territory Predictions Figure 2 gives examples of territory predictions generated by Boltzmann5. In comparison, Figure 3 shows the prediction of BoltzmannLiberties and a territory prediction from The Many Faces of Go [2]. Go players confirm that the territory predictions produced by the models are reasonable, even around loose groups of Black and White stones. Compare Figures 2 (a) and 3 (a); when liberty counts are included as features, the model can more confidently identify which of the two small chains competing in the bottom right of the board is dead. Comparing Figure 2 (a) and (b) Loopy BP appears to give over-confident predictions in the top right of the board where few stones are present. However, it is a good approximation where many stones are present (bottom left). Comparing Models and Inference Methods Figure 4 shows cross-entropies between model territory predictions and true final territory outcomes for a dataset of expert games. As we progress through a game, predictions become more accurate (not surprising) but the spread of the accuracy increases, possibly due to incorrect assessment of the life-and-death status of groups. 
Swendsen-Wang performs better than Loopy BP, which may suffer from its over-confidence. BoltzmannLiberties performs better than Boltzmann5 (when using Swendsen-Wang), the difference in performance increasing later in the game when liberty counts become more useful.

[Figure 3: Territory predictions from (a) BoltzmannLiberties (Exact) and (b) The Many Faces of Go. Diagram (a) is produced by exact inference (training was also by Loopy BP). Diagram (b) shows the territory predicted by The Many Faces of Go (MFG) [2]. MFG uses a rule-based expert system and its prediction for each vertex has three possible values: 'White', 'Black' or 'unknown/neutral'.]

5 Modelling Move Selection

In order to produce a Go playing program we are interested in modelling the selection of moves. A measure of performance of such a model is the likelihood it assigns to professional moves, as measured by

  Σ_games Σ_moves log P(move|model).  (4)

We can obtain a probability over moves by choosing a Gibbs distribution with the negative energy replaced by the evaluation function,

  P(move|model, w) = e^{β u(c′, w)} / Z(w),  (5)

where u(c′, w) is an evaluation function evaluated at the board position c′ resulting from a given move. The inverse temperature parameter β determines the degree to which the move made depends on its evaluation. The territory predictions from the models Boltzmann5 and BoltzmannLiberties can be combined with the evaluation function of Section 2 to produce position evaluators.

6 Conclusions

We have presented a probabilistic framework for modelling uncertainty in the game of Go. A simple model which incorporates the spatial structure of a board position can perform well at predicting the territory outcomes of Go games. The models described here could be improved by extracting more features from board positions and increasing the size of the factors (see (1)).
[Figure 4: Cross entropies (1/N) Σ_{n=1}^{N} [s′_n log s_n + (1 − s′_n) log(1 − s_n)] between actual and predicted territory outcomes, s′_n and s_n, for 327 Go positions. Sampling (Swendsen-Wang) is compared with Loopy BP (training and testing). Three board positions were analysed for each game (moves 20, 80 and 150). The Boltzmann5 (B5) and the BoltzmannLiberties (BLib) models are compared; the plots show cross entropy (0.0 to 1.5) at each of the three move numbers.]

Acknowledgements  We thank I. Murray for helpful discussions on sampling and T. Minka for general advice about probabilistic inference. This work was supported by a grant from Microsoft Research UK.

References

[1] Thore Graepel, Mike Goutrie, Marco Kruger, and Ralf Herbrich. Learning on graphs in the game of Go. In Proceedings of the International Conference on Artificial Neural Networks, ICANN 2001, 2001.
[2] David Fotland. Knowledge representation in the Many Faces of Go. URL: ftp://www.joy.ne.jp/welcome/igs/Go/computer/mfg.tex.Z, 1993.
[3] Nicol N. Schraudolph, Peter Dayan, and Terrence J. Sejnowski. Temporal difference learning of position evaluation in the game of Go. In Advances in Neural Information Processing Systems 6, pages 817-824, San Francisco, 1994. Morgan Kaufmann.
[4] Markus Enzenberger. The integration of a priori knowledge into a Go playing neural network. URL: http://www.markus-enzenberger.de/neurogo.html, 1996.
[5] John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. Int. Conf. on Machine Learning, 2001.
[6] Fabio Gagliardi Cozman. Generalizing variable elimination in Bayesian networks. In Proceedings of the IBERAMIA/SBIA 2000 Workshops, pages 27-32, 2000.
[7] R. H. Swendsen and J.-S. Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Physical Review Letters, 58:86-88, 1987.
[8] Robert G. Edwards and Alan D.
Sokal. Generalisation of the Fortuin-Kasteleyn-Swendsen-Wang representation and Monte Carlo algorithm. Physical Review Letters, 38(6), 1988.
[9] Yair Weiss. Belief propagation and revision in networks with loops. Technical report, AI Lab Memo, MIT, Cambridge, 1998.
[10] A. L. Zobrist. Feature Extractions and Representations for Pattern Recognition and the Game of Go. PhD thesis, Graduate School of the University of Wisconsin, 1970.
Spike Sorting: Bayesian Clustering of Non-Stationary Data

Aharon Bar-Hillel, Neural Computation Center, The Hebrew University of Jerusalem, aharonbh@cs.huji.ac.il
Adam Spiro, School of Computer Science and Engineering, The Hebrew University of Jerusalem, adams@cs.huji.ac.il
Eran Stark, Department of Physiology, The Hebrew University of Jerusalem, eranstark@md.huji.ac.il

Abstract

Spike sorting involves clustering spike trains recorded by a micro-electrode according to the source neuron. It is a complicated problem, requiring much human labor, partly due to the non-stationary nature of the data. We propose an automated technique for the clustering of non-stationary Gaussian sources in a Bayesian framework. At a first search stage, data is divided into short time frames and candidate descriptions of the data as a mixture of Gaussians are computed for each frame. At a second stage, transition probabilities between candidate mixtures are computed, and a globally optimal clustering is found as the MAP solution of the resulting probabilistic model. Transition probabilities are computed using local stationarity assumptions and are based on a Gaussian version of the Jensen-Shannon divergence. The method was applied to several recordings. The performance appeared almost indistinguishable from humans in a wide range of scenarios, including movement, merges, and splits of clusters.

1 Introduction

Neural spike activity is recorded with a micro-electrode which normally picks up the activity of multiple neurons. Spike sorting seeks a segmentation of the spike data such that each cluster contains all the spikes generated by a different neuron. Currently, this task is mostly done manually. It is a tedious task, requiring many hours of human labor for each recording session. Several algorithms have been proposed to help automate this process (see [7] for a review, and [9], [10]), and some tools have been implemented to assist in manual sorting [8].
However, the ability of suggested algorithms to replace the human worker has been quite limited. One of the main obstacles to a successful application is the non-stationary nature of the data [7]. The primary source of this non-stationarity is slight movements of the recording electrode. Slight drifts of the electrode’s location, which are almost inevitable, cause changes in the typical shapes of recorded spikes over time. Other sources of non-stationarity include variable background noise and changes in the characteristic spike generated by a certain neuron. The increasing usage of multiple electrode systems turns non-stationarity into an acute problem, as electrodes are placed in a single location for long durations. Using the first 2 PCA coefficients to represent the data (which preserves up to 93% of the variance in the original recordings [1]), a human can cluster spikes by visual inspection. When dividing the data into small enough time frames, cluster density can be approximated by a multivariate Gaussian with a general covariance matrix without losing much accuracy [7]. Problematic scenarios which can appear due to non-stationarity are exemplified in Section 4.2 and include: (1) Movements and considerable shape changes of the clusters over time, (2) Two clusters which are reasonably well-separated may move until they converge and become indistinguishable. A split of a cluster is possible in the same manner. Most spike sorting algorithms do not address the presented difficulties at all, as they assume full stationarity of the data. Some methods [4, 11] try to cope with the lack of stationarity by grouping data into many small clusters and identifying the clusters that can be combined to represent the activity of a single unit. In the second stage, [4] uses ISI information to understand which clusters cannot be combined, while [11] bases this decision on the density of points between clusters. 
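The 2-D PCA representation mentioned above can be sketched with a minimal numpy implementation; the waveforms below are synthetic stand-ins for aligned spike segments, and the function name is our own:

```python
import numpy as np

def pca_project(waveforms, n_components=2):
    """Project aligned spike waveforms onto their leading principal components."""
    centered = waveforms - waveforms.mean(axis=0)
    cov = centered.T @ centered / (len(waveforms) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    basis = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return centered @ basis

# synthetic stand-in: 200 aligned waveforms of 64 samples each
rng = np.random.default_rng(0)
waves = rng.normal(size=(200, 64)) * np.linspace(2.0, 0.5, 64)
coeffs = pca_project(waves)
print(coeffs.shape)  # (200, 2)
```

Each spike is thus summarised by two coefficients, which is the space in which all the clustering below operates.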
In [3] a semi-automated method is suggested, in which each time frame is clustered manually, and then the correspondence between clusters in consecutive time frames is established automatically. The correspondence is determined by a heuristic score, and the algorithm doesn’t handle merge or split scenarios. In this paper we suggest a new fully automated technique to solve the clustering problem for non-stationary Gaussian sources in a Bayesian framework. We divide the data into short time frames in which stationarity is a reasonable assumption. We then look for good mixture of Gaussians descriptions of the data in each time frame independently. Transition probabilities between local mixture solutions are introduced, and a globally optimal clustering solution is computed by finding the Maximum-A-Posteriori (MAP) solution of the resulting probabilistic model. The global optimization allows the algorithm to successfully disambiguate problematic time frames and exhibit close to human performance. We present the outline of the algorithm in Section 2. The transition probabilities are computed by optimizing the Jensen-Shannon divergence for Gaussians, as described in Section 3. Empirical results and validation are presented in Section 4. 2 Clustering using a chain of Gaussian mixtures Denote the observable spike data by $D = \{d\}$, where each spike $d \in \mathbb{R}^n$ is described by the vector of its PCA coefficients. We break the data into $T$ disjoint groups $\{D^t = \{d^t_i\}_{i=1}^{N_t}\}_{t=1}^{T}$. We assume that in each frame, the data can be well approximated by a mixture of Gaussians, where each Gaussian corresponds to a single neuron. Each Gaussian in the mixture may have a different covariance matrix. The number of components in the mixture is not known a priori, but is assumed to be within a certain range (we used 1-6). In the search stage, we use a standard EM (Expectation-Maximization) algorithm to find a set of $M^t$ candidate mixture descriptions for each time frame $t$. 
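The per-frame search for candidate mixtures can be sketched with an off-the-shelf EM implementation. This is a simplified stand-in for the paper's procedure (no cross-frame solution mixing, pruning, or background-model component), and the data are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def candidate_mixtures(frame_data, k_range=range(1, 7), n_restarts=3, seed=0):
    """Fit one candidate mixture per component count (the paper assumes 1-6
    clusters per frame), each with several random EM restarts."""
    return [GaussianMixture(n_components=k, covariance_type='full',
                            n_init=n_restarts, random_state=seed).fit(frame_data)
            for k in k_range]

# two well-separated synthetic clusters in the 2-D PCA plane
rng = np.random.default_rng(1)
frame = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(5, 1, (300, 2))])
cands = candidate_mixtures(frame)
best = min(cands, key=lambda g: g.bic(frame))  # BIC as a crude selection proxy
print(best.n_components)
```

In the paper the candidate list is kept rather than collapsed to a single winner; the BIC selection above is only a convenient way to inspect the fits.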
We build the set of candidates using a three step process. First, we run the EM algorithm with different number of clusters and different initial conditions. In a second step, we import to each time frame $t$ the best mixture solutions found in the neighboring time frames $[t-k, \ldots, t+k]$ (we used $k = 2$). These solutions are also adapted by using them as the initial conditions for the EM and running a low number of EM rounds. This mixing of solutions between time frames is repeated several times. Finally, the solution list in each time frame is pruned to remove similar solutions. Solutions which don’t comply with the assumption of well shaped Gaussians are also removed. In order to handle outliers, which are usually background spikes or non-spike events, each mixture candidate contains an additional ’background model’ Gaussian. This model’s parameters are set to mean $0$ and covariance $K \cdot \Sigma^t$, where $\Sigma^t$ is the covariance matrix of the data in frame $t$ and $K > 1$ is a constant. Only the weight of this model is allowed to change during the EM process. After the search stage, each time frame $t$ has a list of $M^t$ models $\{\Theta^t_i\}_{t=1,i=1}^{T,M^t}$. Each mixture model is described by a triplet $\Theta^t_i = \{\alpha^t_{i,l}, \mu^t_{i,l}, \Sigma^t_{i,l}\}_{l=1}^{K_{i,t}}$, denoting Gaussian mixture’s weights, means, and covariances respectively. Given these candidate models we define a discrete random vector $Z = \{z^t\}_{t=1}^{T}$ in which each component $z^t$ has a value range of $\{1, 2, \ldots, M^t\}$. ”$z^t = j$” has the semantics of ”at time frame $t$ the data is distributed according to the candidate mixture $\Theta^t_j$”. In addition we define for each spike $d^t_i$ a hidden discrete ’label’ random variable $l^t_i$. This label indicates which Gaussian in the local mixture hypothesis is the source of the spike. Denote by $L^t = \{l^t_i\}_{i=1}^{N_t}$ the vector of labels of time frame $t$, and by $L$ the vector of all the labels.
Figure 1: (A) A Bayesian network model of the data generation process. 
The network has an HMM structure, but unlike HMM it does not have fixed states and transition probabilities over time. The variables and the CPDs are explained in Section 2. (B) A Bayesian network representation of the relations between the data D and the hidden labels H (see Section 3.1). The visible labels L and the sampled data points are independent given the hidden labels.
We describe the probabilistic relations between D, L, and Z using a Bayesian network with the structure described in Figure 1A. Using the network structure and assuming i.i.d. samples the joint log probability decomposes into
$$\log P(z^1) + \sum_{t=2}^{T} \log P(z^t \mid z^{t-1}) + \sum_{t=1}^{T} \sum_{i=1}^{N_t} \left[ \log P(l^t_i \mid z^t) + \log P(d^t_i \mid l^t_i, z^t) \right] \quad (1)$$
We wish to maximize this log-likelihood over all possible choices of L, Z. Notice that by maximizing the probability of both data and labels we avoid the tendency to prefer mixtures with many Gaussians, which appears when maximizing the probability for the data alone. The conditional probability distributions (CPDs) of the points’ labels and the points themselves, given an assignment to Z, are given by
$$\log P(l^t_k = j \mid z^t = i) = \log \alpha^t_{i,j} \quad (2)$$
$$\log P(d^t_k \mid l^t_k = j, z^t = i) = -\tfrac{1}{2}\left[ n \log 2\pi + \log |\Sigma^t_{i,j}| + (d^t_k - \mu^t_{i,j})^{\top} (\Sigma^t_{i,j})^{-1} (d^t_k - \mu^t_{i,j}) \right]$$
The transition CPDs $P(z^t \mid z^{t-1})$ are described in Section 3. For the first frame’s prior we use a uniform CPD. The MAP solution for the model is found using the Viterbi algorithm. Labels are then unified using the correspondences established between the chosen mixtures in consecutive time frames. As a final adjustment step, we repeat the mixing process using only the mixtures of the found MAP solution. Using this set of new candidates, we calculate the final MAP solution in the same manner described above. 3 A statistical distance between mixtures The transition CPDs of the form $P(z^t \mid z^{t-1})$ are based on the assumption that the Gaussian sources’ distributions are approximately stationary in pairs of consecutive time frames. 
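As an aside, the MAP computation over the chain of candidate mixtures — Viterbi with backtracking — can be sketched generically; the node and transition log-scores below are hypothetical stand-ins for the terms of Eq. (1):

```python
import numpy as np

def viterbi_chain(node_logp, trans_logp):
    """MAP assignment z_1..z_T maximising Eq. (1)-style chain scores.

    node_logp[t][i]     : log score of frame t under candidate mixture i
                          (the data + label terms of Eq. (1))
    trans_logp[t][i, j] : log P(z_{t+1} = j | z_t = i)
    """
    delta = [np.asarray(node_logp[0], float)]
    back = []
    for t in range(1, len(node_logp)):
        scores = delta[-1][:, None] + np.asarray(trans_logp[t - 1], float)
        back.append(scores.argmax(axis=0))                 # best predecessor
        delta.append(scores.max(axis=0) + np.asarray(node_logp[t], float))
    path = [int(delta[-1].argmax())]
    for bp in reversed(back):                              # backtrack
        path.append(int(bp[path[-1]]))
    return path[::-1]

# two candidates per frame; transitions strongly favour keeping the same index
node = [[0.0, -1.0], [0.0, 0.0], [-1.0, 0.0]]
stay = np.array([[0.0, -5.0], [-5.0, 0.0]])
print(viterbi_chain(node, [stay, stay]))  # [0, 0, 0]
```

The sticky transitions make the middle (ambiguous) frame inherit the choice of its neighbours, which is exactly the disambiguation effect described above.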
Under this assumption, two mixture candidates estimated at consecutive time frames are viewed as two samples from a single unknown Gaussian mixture. We assume that each Gaussian component from either of the two mixtures arises from a single Gaussian component in the joint hidden mixture, and so the hidden mixture induces a partition of the set of visible components into clusters. Gaussian components in the same cluster are assumed to arise from the same hidden source. Our estimate of $p(z^t = j \mid z^{t-1} = i)$ is based on the probability of seeing two large samples with different empirical distributions ($\Theta^{t-1}_i$ and $\Theta^t_j$ respectively) under the assumption of such a single joint mixture. In Section 3.1, the estimation of the transition probability is formalized as an optimization of a Jensen-Shannon based score over the possible partitions of the Gaussian components set. If the family of allowed hidden mixture models is not further constrained, the optimization problem derived in Section 3.1 is trivially solved by choosing the most detailed partition (each visible Gaussian component is a singleton). This happens because a richer partition, which does not merge many Gaussians, gets a higher score. In Section 3.2 we suggest natural constraints on the family of allowed partitions in the two cases of constant and variable numbers of clusters through time, and present algorithms for both cases. 3.1 A Jensen-Shannon based transition score Assume that in two consecutive time frames we observed two labeled samples $(X^1, L^1), (X^2, L^2)$ of sizes $N^1, N^2$ with empirical distributions $\Theta^1, \Theta^2$ respectively. By ’empirical distribution’, or ’type’ in the notation of [2], we denote the ML parameters of the sample, for both the multinomial distribution of the mixture weights and the Gaussian distributions of the components. 
As stated above, we assume that the joint sample of size $N = N^1 + N^2$ is generated by a hidden Gaussian mixture $\Theta^H$ with $K^H$ components, and its components are determined by a partition of the set of all components in $\Theta^1, \Theta^2$. For convenience of notation, let us order this set of $K^1 + K^2$ Gaussians and refer to them (and to their parameters) using one index. We can define a function $R : \{1, \ldots, K^1 + K^2\} \rightarrow \{1, \ldots, K^H\}$ which matches each visible Gaussian component in $\Theta^1$ or $\Theta^2$ to its hidden source component in $\Theta^H$. Denote the labels of the sample points under the hidden mixture $H = \{h^j_i\}_{i=1}^{N^j}$, $j = 1, 2$. The values of these variables are given by $h^j_i = R(l^j_i)$, where $l^j_i$ is the label index in the set of all components. The probabilistic dependence between a data point, its visible label, and its hidden label is explained by the Bayesian network model in Figure 1B. We assume a data point is obtained by choosing a hidden label and then sample the point from the relevant hidden component. The visible label is then sampled based on the hidden label using a multinomial distribution with parameters $\Psi = \{\Psi_q\}_{q=1}^{K^1 + K^2}$, where $\Psi_q = P(l = q \mid h = R(q))$, i.e., the probability of the visible label $q$ given the hidden label $R(q)$ (since $H$ is deterministic given $L$, $P(l = q \mid h) = 0$ for $h \neq R(q)$). Denote this model, which is fully determined by $R$, $\Psi$, and $\Theta^H$, by $M^H$. We wish to estimate $P((X^1, L^1) \sim \Theta^1 \mid (X^2, L^2) \sim \Theta^2, M^H)$. We use ML approximations and arguments based on the method of types [2] to approximate this probability and optimize it with respect to $\Theta^H$ and $\Psi$. The obtained result is (the derivation is omitted)
$$P((X^1, L^1) \sim \Theta^1 \mid (X^2, L^2) \sim \Theta^2, M^H) \approx \max_R \exp\Big(-N \sum_{m=1}^{K^H} \alpha^H_m \sum_{\{q : R(q) = m\}} \Psi_q D_{kl}\big(G(x \mid \mu_q, \Sigma_q) \,\|\, G(x \mid \mu^H_m, \Sigma^H_m)\big)\Big) \quad (3)$$
where $G(x \mid \mu, \Sigma)$ denotes a Gaussian distribution with the parameters $\mu, \Sigma$ and the optimized $\Theta^H, \Psi$ appearing here are given as follows. 
Denote by $w_q$ ($q \in \{1, \ldots, K^1 + K^2\}$) the weight of model $q$ in a naive joint mixture of $\Theta^1, \Theta^2$, i.e., $w_q = \frac{N^j}{N} \alpha_q$ where $j = 1$ if component $q$ is part of $\Theta^1$ and the same for $j = 2$.
$$\alpha^H_m = \sum_{\{q : R(q) = m\}} w_q \,, \qquad \Psi_q = \frac{w_q}{\alpha^H_{R(q)}} \,, \qquad \mu^H_m = \sum_{\{q : R(q) = m\}} \Psi_q \mu_q \quad (4)$$
$$\Sigma^H_m = \sum_{\{q : R(q) = m\}} \Psi_q \big(\Sigma_q + (\mu_q - \mu^H_m)(\mu_q - \mu^H_m)^{\top}\big)$$
Notice that the parameters of a hidden Gaussian, $\mu^H_m$ and $\Sigma^H_m$, are just the mean and covariance of the mixture $\sum_{q : R(q) = m} \Psi_q G(x \mid \mu_q, \Sigma_q)$. The summation over $q$ in expression (3) can be interpreted as the Jensen-Shannon (JS) divergence between the components assigned to the hidden source $m$, under Gaussian assumptions. For a given parametric family, the JS divergence is a non-negative measurement which can be used to test whether several samples are derived from a single distribution from the family or from a mixture of different ones [6]. The JS divergence is computed for a mixture of $n$ empirical distributions $P_1, \ldots, P_n$ with mixture weights $\pi_1, \ldots, \pi_n$. In the Gaussian case, denote the mean and covariance of the component distributions by $\{\mu_i, \Sigma_i\}_{i=1}^{n}$. The mean and covariance of the mixture distribution $\mu^*, \Sigma^*$ are a function of the means and covariances of the components, with the formulae given in (4) for $\mu^H_m, \Sigma^H_m$. The Gaussian JS divergence is given by
$$JS^G_{\pi_1, \ldots, \pi_n}(P_1, \ldots, P_n) = \sum_{i=1}^{n} \pi_i D_{kl}\big(G(x \mid \mu_i, \Sigma_i), G(x \mid \mu^*, \Sigma^*)\big) \quad (5)$$
$$= H\big(G(x \mid \mu^*, \Sigma^*)\big) - \sum_{i=1}^{n} \pi_i H\big(G(x \mid \mu_i, \Sigma_i)\big) = \tfrac{1}{2}\Big(\log |\Sigma^*| - \sum_{i=1}^{n} \pi_i \log |\Sigma_i|\Big)$$
Using this identity in (3), and setting $\Theta^1 = \Theta^t_i$, $\Theta^2 = \Theta^{t-1}_j$, we finally get the following expression for the transition probability
$$\log P(z^t = i \mid z^{t-1} = j) = -N \cdot \max_R \sum_{m=1}^{K^H} \alpha^H_m \, JS^G_{\{\Psi_q : R(q) = m\}}\big(\{G(x \mid \mu_q, \Sigma_q) : R(q) = m\}\big) \quad (6)$$
3.2 Constrained optimization and algorithms Consider first the case in which a one-to-one correspondence is assumed between clusters in two consecutive frames, and hence the number of Gaussian components K is constant over all time frames. 
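Equations (4) and (5) reduce the Gaussian JS divergence to a closed form in the pooled-mixture moments. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def gaussian_js(means, covs, weights):
    """Gaussian Jensen-Shannon divergence, Eq. (5):
    JS = 0.5 * (log|Sigma*| - sum_i pi_i log|Sigma_i|),
    with (mu*, Sigma*) the pooled-mixture moments of Eq. (4)."""
    means, covs, weights = (np.asarray(a, float) for a in (means, covs, weights))
    mu_star = weights @ means
    diffs = means - mu_star
    sigma_star = np.einsum('i,ijk->jk', weights, covs) + \
        np.einsum('i,ij,ik->jk', weights, diffs, diffs)
    logdet = lambda s: np.linalg.slogdet(s)[1]
    return 0.5 * (logdet(sigma_star) -
                  weights @ np.array([logdet(s) for s in covs]))

I2 = np.eye(2)
print(gaussian_js([[0, 0], [0, 0]], [I2, I2], [0.5, 0.5]))  # 0.0
print(gaussian_js([[0, 0], [2, 0]], [I2, I2], [0.5, 0.5]))  # 0.5 * ln 2
```

Identical components give zero divergence; separating the means inflates the pooled covariance and hence the score, matching the use of JS as a same-source test.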
In this case, a mapping $R$ is allowed iff it maps to each hidden source $i$ a single Gaussian from mixture $\Theta^1$ and a single Gaussian from $\Theta^2$. Denoting the Gaussians matched to hidden $i$ by $R_1^{-1}(i), R_2^{-1}(i)$, the transition score (6) takes the form of $-N \cdot \max_R \sum_{i=1}^{K} S(R_1^{-1}(i), R_2^{-1}(i))$. Such an optimization of a pairwise matching score can be seen as a search for a maximal perfect matching in a weighted bipartite graph. The nodes of the graph are the Gaussian components of $\Theta^1, \Theta^2$ and the edges’ weights are given by the scores $S(a, b)$. The global optimum of this problem can be efficiently found using the Hungarian algorithm [5] in $O(n^3)$, which is unproblematic in our case. The one-to-one correspondence assumption is too strong for many data sets in the spike sorting application, as it ignores the phenomena of splits and merges of clusters. We wish to allow such phenomena, but nevertheless enforce strong (though not perfect) demands of correspondence between the Gaussians in two consecutive frames. In order to achieve such balance, we place the following constraints on the allowed partitions $R$: 1. Each cluster of $R$ should contain exactly one Gaussian from $\Theta^1$ or exactly one Gaussian from $\Theta^2$. Hence assignment of different Gaussians from the same mixture to the same hidden source is limited only to cases of a split or a merge. 2. The label entropy of the partition $R$ should satisfy
$$H(\alpha^H_1, \ldots, \alpha^H_{K^H}) \leq \frac{N^1}{N} H(\alpha^1_1, \ldots, \alpha^1_{K^1}) + \frac{N^2}{N} H(\alpha^2_1, \ldots, \alpha^2_{K^2}) \quad (7)$$
Intuitively, the second constraint limits the allowed partitions to ones which are not richer than the visible partition, i.e., do not have many more clusters. Note that the most detailed partition (the partition into singletons) has a label entropy given by the r.h.s. of inequality (7) plus $H(\frac{N^1}{N}, \frac{N^2}{N})$, which is one bit for $N^1 = N^2$. This extra bit is the price of using the concatenated ’rich’ mixture, so we look for mixtures which do not pay such an extra price. 
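The constant-K matching step above can be computed with an off-the-shelf Hungarian-algorithm implementation; here we phrase it as minimising a JS-based cost matrix, which is equivalent to maximising the score. The matrix entries are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical pairwise JS-based costs: entry [a, b] scores pairing component a
# of mixture Theta^1 with component b of Theta^2 (lower = more similar), so the
# maximal-score perfect matching becomes a minimal-cost assignment.
cost = np.array([[0.1, 2.0, 3.0],
                 [2.5, 0.2, 2.0],
                 [3.0, 1.5, 0.3]])
rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm, O(n^3)
print([(int(a), int(b)) for a, b in zip(rows, cols)])  # [(0, 0), (1, 1), (2, 2)]
```

For the handful of components per frame seen in practice, the cubic cost is negligible, as the text notes.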
The optimization for this family of $R$ does not seem to have an efficient global optimization technique, and thus we resort to a greedy procedure. Specifically, we use a bottom-up agglomerative algorithm. We start from the most detailed partition (each Gaussian is a singleton) and merge two clusters of the partition at each round. Only merges that comply with the first constraint are considered. At each round we look for a merge which incurs a minimal loss to the accumulated Jensen-Shannon score (6) and a maximal loss to the mixture label entropy. For two Gaussian clusters $(\alpha_1, \mu_1, \Sigma_1), (\alpha_2, \mu_2, \Sigma_2)$ these two quantities are given by
$$\Delta \log JS = -N(w_1 + w_2) \, JS^G_{\pi_1, \pi_2}\big(G(x \mid \mu_1, \Sigma_1), G(x \mid \mu_2, \Sigma_2)\big) \quad (8)$$
$$\Delta H = -N(w_1 + w_2) H(\pi_1, \pi_2)$$
where $\pi_1, \pi_2$ are $\frac{w_1}{w_1 + w_2}, \frac{w_2}{w_1 + w_2}$ and the $w_i$ are as in (4). We choose at each round the merge which minimizes the ratio between these two quantities. The algorithm terminates when the accumulated label entropy reduction is bigger than $H(\frac{N^1}{N}, \frac{N^2}{N})$ or when no allowed merges exist anymore. In the second case, it may happen that the partition $R$ found by the algorithm violates the constraint (7). We nevertheless compute the score based on the $R$ found, since this partition obeys the first constraint and usually is not far from satisfying the second. 4 Empirical results 4.1 Experimental design and data acquisition Neural data were acquired from the dorsal and ventral pre-motor (PMd, PMv) cortices of two Macaque monkeys performing a prehension (reaching and grasping) task. At the beginning of each trial, an object was presented in one of six locations. Following a delay period, a Go signal prompted the monkey to reach for, grasp, and hold the target object. A recording session typically lasted 2 hours during which monkeys completed 600 trials. 
During each session 16 independently-movable glass-plated tungsten micro-electrodes were inserted through the dura, 8 into each area. Signals from these electrodes were amplified (10K), bandpass filtered (5-6000 Hz), sampled (25 kHz), stored on disk (Alpha-Map 5.4, Alpha-Omega Eng.), and subjected to 3-stage preprocessing. (1) Line influences were cleaned by pulse-triggered averaging: the signal following a pulse was averaged over many pulses and subtracted from the original in an adaptive manner. (2) Spikes were detected by a modified second derivative algorithm (7 samples backwards and 11 forward), accentuating spiky features; segments that crossed an adaptive threshold were identified. Within each segment, a potential spike’s peak was defined as the time of the maximal derivative. If a sharper spike was not encountered within 1.2ms, 64 samples (10 before peak and 53 after) were registered. (3) Waveforms were re-aligned s.t. each started at the point of maximal fit with 2 library PCs (accounting, on average, for 82% and 11% of the variance, [1]). Aligned waveforms were projected onto the PCA basis to arrive at two coefficients.

Table 1: Match scores between manual and automatic clustering. The rows list the appearance frequencies of different $f_{1/2}$ scores.
f_{1/2} score    Number of frames (%)    Number of electrodes (%)
0.9-1.0          3386 (75%)              13 (30%)
0.8-0.9          860 (19%)               10 (23%)
0.7-0.8          243 (5%)                10 (23%)
0.6-0.7          55 (1%)                 11 (25%)

4.2 Results and validation
Figure 2: Frames 3,12,24,34, and 47 from a 68-frames data set [per-frame match scores below the images: 0.80, 0.77, 0.98, 0.95, 0.98]. Each frame contains 1000 spikes, plotted here (with random number assignments) according to their first two PCs. In this data one cluster moves constantly, another splits into distinguished clusters, and at the end two clusters are merged. The top and bottom rows show manual and automatic clustering solutions respectively. 
Notice that during the split process of the bottom left area some ambiguous time frames exist in which 1, 2, or 3 cluster descriptions are reasonable. This ambiguity can be resolved using global considerations of past and future time frames. By finding the MAP solution over all time frames, the algorithm manages such considerations. The numbers below the images show the $f_{1/2}$ score of the local match between the manual and the automatic clustering solutions (see text). We tested the algorithm using recordings of 44 electrodes containing a total of 4544 time frames. Spike trains were manually clustered by a skilled user in the environment of AlphaSort 4.0 (Alpha-Omega Eng.). The manual and automatic clustering results were compared using a combined measure of precision $P$ and recall $R$ scores, $f_{1/2} = \frac{2PR}{R + P}$. Figure 2 demonstrates the performance of the algorithm using a particularly non-stationary data set. Statistics on the match between manual and automated clustering are described in Table 1. In order to understand the score’s scale we note that random clustering (with the same label distribution as the manual clustering) gets an $f_{1/2}$ score of 0.5. The trivial clustering which assigns all the points to the same label gets mean scores of 0.73 and 0.67 for single frame matching and whole electrode matching respectively. The scores of single frames are much higher than the full electrode scores, since the problem is much harder in the latter case. A single wrong correspondence between two consecutive frames may reduce the electrode’s score dramatically, while being unnoticed by the single frame score. In most cases the algorithm gives reasonably evolving clustering, even when it disagrees with the manual solution. Examples can be seen at the authors’ web site1. Low matching scores between the manual and the automatic clustering may result from inherent ambiguity in the data. 
As a preliminary assessment of this hypothesis we obtained a second, independent, manual clustering for the data set for which we got the lowest match scores. The matching scores between manual and automatic clustering are presented in Figure 3A.
Figure 3: (A) Comparison of our automatic clustering with 2 independent manual clustering solutions for our worst matched data points [pairwise match scores: 0.68, 0.68, 0.62]. Note that there is also a low match between the humans, forming a nearly equilateral triangle. (B) Functional validation of clustering results: (1) At the beginning of a recording session, three clusters were identified. (2) 107 minutes later, some shifted their position. They were tracked continuously. (3) The directional tuning of the top left cluster (number 3) during the delay periods of the first 100 trials (dashed lines are 99% confidence limits). (4) Although the cluster’s position changed, its tuning curve’s characteristics during the last 100 trials were similar.
In some cases, validity of the automatic clustering can be assessed by checking functional properties associated with the underlying neurons. In Figure 3B we present such a validation for a successfully tracked cluster. References [1] Abeles M., Goldstein M.H. Multispike train analysis. Proc. IEEE 65, pp. 762-773, 1977. [2] Cover T., Thomas J. Elements of Information Theory. John Wiley and Sons, New York, 1991. [3] Emondi A.A., Rebrik S.P., Kurgansky A.V., Miller K.D. Tracking neurons recorded from tetrodes across time. J. of Neuroscience Methods, vol. 135:95-105, 2004. [4] Fee M., Mitra P., Kleinfeld D. Automatic sorting of multiple unit neuronal signals in the presence of anisotropic and non-Gaussian variability. J. of Neuroscience Methods, vol. 69:175-188, 1996. [5] Kuhn H.W. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, pp. 83-87, 1955. [6] Lehmann E.L. Testing Statistical Hypotheses. John Wiley and Sons, New York, 1959. 
[7] Lewicki M.S. A review of methods for spike sorting: the detection and classification of neural action potentials. Network: Computation in Neural Systems, 9(4):R53-R78, 1998. [8] Lewicki’s Bayesian spike sorter, sslib (ftp.etho.caltech.edu). [9] Penev P., Dimitrov A., Miller J. Characterization of and compensation for the non-stationarity of spike shapes during physiological recordings. Neurocomputing 38-40:1695-1701, 2001. [10] Shoham S., Fellows M.R., Normann R.A. Robust, automatic spike sorting using mixtures of multivariate t-distributions. J. of Neuroscience Methods, vol. 127(2):111-122, 2003. [11] Snider R.K., Bonds A.B. Classification of non-stationary neural signals. J. of Neuroscience Methods, vol. 84(1-2):155-166, 1998.
1 http://www.cs.huji.ac.il/~aharonbh, ~adams
Harmonising Chorales by Probabilistic Inference Moray Allan and Christopher K. I. Williams School of Informatics, University of Edinburgh Edinburgh EH1 2QL moray.allan@ed.ac.uk, c.k.i.williams@ed.ac.uk Abstract We describe how we used a data set of chorale harmonisations composed by Johann Sebastian Bach to train Hidden Markov Models. Using a probabilistic framework allows us to create a harmonisation system which learns from examples, and which can compose new harmonisations. We make a quantitative comparison of our system’s harmonisation performance against simpler models, and provide example harmonisations. 1 Introduction Chorale harmonisation is a traditional part of the theoretical education of Western classical musicians. Given a melody, the task is to create three further lines of music which will sound pleasant when played simultaneously with the original melody. A good chorale harmonisation will show an understanding of the basic ‘rules’ of harmonisation, which codify the aesthetic preferences of the style. Here we approach chorale harmonisation as a machine learning task, in a probabilistic framework. We use example harmonisations to build a model of harmonic processes. This model can then be used to compose novel harmonisations. Section 2 below gives an overview of the musical background to chorale harmonisation. Section 3 explains how we can create a harmonisation system using Hidden Markov Models. Section 4 examines the system’s performance quantitatively and provides example harmonisations generated by the system. In section 5 we compare our system to related work, and in section 6 we suggest some possible enhancements. 2 Musical Background Since the sixteenth century, the music of the Lutheran church had been centred on the ‘chorale’. Chorales were hymns, poetic words set to music: a famous early example is Martin Luther’s “Ein’ feste Burg ist unser Gott”. 
At first chorales had only relatively simple melodic lines, but soon composers began to arrange more complex music to accompany the original tunes. In the pieces by Bach which we use here, the chorale tune is taken generally unchanged in the highest voice, and three other musical parts are created alongside it, supporting it and each other. By the eighteenth century, a complex system of rules had developed, dictating what combinations of notes should be played at the same time or following previous notes. The added lines of music should not fit too easily with the melody, but should not clash with it too much either. Dissonance can improve the music, if it is resolved into a pleasant consonance.
Figure 1: Hidden state representations (a) for harmonisation, (b) for ornamentation.
The training and test chorales used here are divided into two sets: one for chorales in ‘major’ keys, and one for chorales in ‘minor’ keys. Major and minor keys are based around different sets of notes, and musical lines in major and minor keys behave differently. The representation we use to model harmonisations divides up chorales into discrete timesteps according to the regular beat underlying their musical rhythm. At each time-step we represent the notes in the various musical parts by counting how far apart they are in terms of all the possible ‘semitone’ notes. 3 Harmonisation Model 3.1 HMM for Harmonisation We construct a Hidden Markov model in which the visible states are melody notes and the hidden states are chords. A sequence of observed events makes up a melody line, and a sequence of hidden events makes up a possible harmonisation for a melody line. We denote the sequence of melody notes as $Y$ and the harmonic motion as $C$, with $y_t$ representing the melody at time $t$, and $c_t$ the harmonic state. Hidden Markov Models are generative models: here we model how a visible melody line is emitted by a hidden sequence of harmonies. 
This makes sense in musical terms, since we can view a chorale as having an underlying harmonic structure, and the individual notes of the melody line as chosen to be compatible with this harmonic state at each time step. We will create separate models for chorales in major and minor keys, since these groups have different harmonic structures. For our model we divide each chorale into time steps of a single beat, making the assumption that the harmonic state does not change during a beat. (Typically there are three or four beats in a bar.) We want to create a model which we can use to predict three further notes at each of these time steps, one for each of the three additional musical lines in the harmonisation. There are many possible hidden state representations from which to choose. Here we represent a choice of notes by a list of pitch intervals. By using intervals in this way we represent the relationship between the added notes and the melody at a given time step, without reference to the absolute pitch of the melody note. These interval sets alone would be harmonically ambiguous, so we disambiguate them using harmonic labels, which are included in the training data set. Adding harmonic labels means that our hidden symbols not only identify a particular chord, but also the harmonic function that the chord is serving. Figure 1(a) shows the representation used for some example notes. Here (an A major chord) the alto, tenor and bass notes are respectively 4, 9, and 16 semitones below the soprano melody. The harmonic label is ‘T’, labelling this as functionally a ‘tonic’ chord. Our representation of both melody and harmony distinguishes between a note which is continued from the previous beat and a repeated note. We make a first-order Markov assumption concerning the transition probabilities between the hidden states, which represent choices of chord on an individual beat: $P(c_t \mid c_{t-1}, c_{t-2}, \ldots, c_0) = P(c_t \mid c_{t-1})$. 
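The interval-plus-label hidden-state encoding can be illustrated with a small sketch; the helper function and the MIDI pitch numbers are our own, reproducing the Figure 1(a) example:

```python
def encode_chord(soprano, alto, tenor, bass, label):
    """Hidden-state encoding: semitone intervals of alto, tenor, and bass
    below the soprano, plus a harmonic-function label."""
    return (soprano - alto, soprano - tenor, soprano - bass, label)

# Figure 1(a)'s A major example: alto, tenor, bass lie 4, 9, and 16 semitones
# below the soprano; the MIDI pitch numbers here are an illustrative assumption.
print(encode_chord(69, 65, 60, 53, 'T'))  # (4, 9, 16, 'T')
```

Because only intervals are stored, transposing the whole chord up or down leaves the hidden symbol unchanged, which is the point of the representation.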
We make a similar assumption concerning emission probabilities to model how the observed event, a melody note, results from the hidden state, a chord: $P(y_t \mid c_t, \ldots, c_0, y_{t-1}, \ldots, y_0) = P(y_t \mid c_t)$. In the Hidden Markov Models used here, the ‘hidden’ states of chords and harmonic symbols are in fact visible in the data during training. This means that we can learn transition and emission probabilities directly from observations in our training data set of harmonisations. We use additive smoothing (adding 0.01 to each bin) to deal with zero counts in the training data. Using a Hidden Markov Model framework allows us to conduct efficient inference over our harmonisation choices. In this way our harmonisation system will ‘plan’ over an entire harmonisation rather than simply making immediate choices based on the local context. This means, for example, that we can hope to compose appropriate ‘cadences’ to bring our harmonisations to pleasant closes rather than finishing abruptly. Given a new melody line, we can use the Viterbi algorithm to find the most likely state sequence, and thus harmonisation, given our model. We can also provide alternative harmonisations by sampling from the posterior [see 1, p. 156], as explained below. 3.2 Sampling Alternative Harmonisations Using $\alpha_{t-1}(j)$, the probability of seeing the observed events of a sequence up to time $t-1$ and finishing in state $j$, we can calculate the probability of seeing the first $t-1$ events, finishing in state $j$, and then transitioning to state $k$ at the next step: $P(y_0, y_1, \ldots, y_{t-1}, c_{t-1} = j, c_t = k) = \alpha_{t-1}(j) P(c_t = k \mid c_{t-1} = j)$. We can use this to calculate $\rho_t(j \mid k)$, the probability that we are in state $j$ at time $t-1$ given the observed events up to time $t-1$, and given that we will be in state $k$ at time $t$:
$$\rho_t(j \mid k) = P(c_{t-1} = j \mid y_0, y_1, \ldots, y_{t-1}, c_t = k) = \frac{\alpha_{t-1}(j) P(c_t = k \mid c_{t-1} = j)}{\sum_l \alpha_{t-1}(l) P(c_t = k \mid c_{t-1} = l)}. 
To sample from $P(C \mid Y)$ we first choose the final state by sampling from its probability distribution according to the model:
$$P(c_T = j \mid y_0, y_1, \ldots, y_T) = \frac{\alpha_T(j)}{\sum_l \alpha_T(l)}.$$
Once we have chosen a value for the final state $c_T$, we can use the variables $\rho_t(j \mid k)$ to sample backwards through the sequence:
$$P(c_t = j \mid y_0, y_1, \ldots, y_T, c_{t+1}) = \rho_{t+1}(j \mid c_{t+1}).$$
3.3 HMM for Ornamentation The chorale harmonisations produced by the Hidden Markov Model described above harmonise the original melody according to beat-long time steps. Chorale harmonisations are not limited to this rhythmic form, so here we add a secondary ornamentation stage which can add passing notes to decorate these harmonisations.

Table 1: Comparison of predictive power achieved by different models of harmonic sequences on training and test data sets (nats).
                                  Training (maj)   Test (maj)   Training (min)   Test (min)
-(1/T) ln P(C|Y)                  2.56             4.90         2.66             5.02
-(1/T) Σ_t ln P(c_t|y_t)          3.00             3.22         3.52             4.33
-(1/T) Σ_t ln P(c_t|c_{t-1})      5.41             7.08         5.50             7.21
-(1/T) Σ_t ln P(c_t)              6.43             7.61         6.57             7.84

Generating a harmonisation and adding the ornamentation as a second stage greatly reduces the number of hidden states in the initial harmonisation model: if we went straight to fully-ornamented hidden states then the data available to us concerning each state would be extremely limited. Moreover, since the passing notes do not change the harmonic structure of a piece but only ornament it, adding these passing notes after first determining the harmonic structure for a chorale is a plausible compositional process. We conduct ornamentation by means of a second Hidden Markov Model. The notes added in this ornamentation stage generally smooth out the movement between notes in a line of music, so we set up the visible states in terms of how much the three harmonising musical lines rise or fall from one time-step to the next. 
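The forward filtering and backward sampling scheme of Section 3.2 can be sketched as follows. This is a generic log-space implementation with a uniform initial state, not the authors' code; the tiny two-state example is an assumption for illustration:

```python
import numpy as np

def sample_posterior_path(log_trans, log_emit, obs, rng):
    """Forward filtering, backward sampling from P(C|Y) (Section 3.2).

    log_trans[i, j] = log P(c_t = j | c_{t-1} = i)
    log_emit[j, y]  = log P(y_t = y | c_t = j)
    obs             = observed melody symbols y_0..y_T
    """
    K, T = log_trans.shape[0], len(obs)
    alpha = np.zeros((T, K))
    alpha[0] = -np.log(K) + log_emit[:, obs[0]]          # uniform prior on c_0
    for t in range(1, T):
        # alpha_t(k) = log sum_j exp(alpha_{t-1}(j) + log A[j, k]) + emission
        alpha[t] = log_emit[:, obs[t]] + \
            np.logaddexp.reduce(alpha[t - 1][:, None] + log_trans, axis=0)

    def draw(logp):
        p = np.exp(logp - np.logaddexp.reduce(logp))
        return int(rng.choice(len(p), p=p / p.sum()))

    path = [draw(alpha[-1])]                             # sample c_T first
    for t in range(T - 2, -1, -1):                       # then via rho_{t+1}(j|c_{t+1})
        path.append(draw(alpha[t] + log_trans[:, path[-1]]))
    return path[::-1]

# near-deterministic emissions recover the observations almost surely
log_trans = np.log(np.full((2, 2), 0.5))
log_emit = np.log(np.array([[1 - 1e-9, 1e-9], [1e-9, 1 - 1e-9]]))
rng = np.random.default_rng(0)
path = sample_posterior_path(log_trans, log_emit, [0, 1, 0], rng)
print(path)
```

Repeated calls with a fresh generator yield alternative state sequences in proportion to their posterior probability, which is how the system proposes alternative harmonisations.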
The hidden states describe ornamentation of this motion in terms of the movement made by each part during the time step, relative to its starting pitch. This relative motion is described at a time resolution four times as fine as the harmonic movement. On the first of the four quarter-beats we always leave notes as they were, so we have to make predictions only for the final three quarter-beats. Figure 1(b) shows an example of the representation used. In this example, the alto and tenor lines remain at the same pitch for the second quarter-beat as they were for the first, and rise by two semitones for the third and fourth quarter-beats, so are both represented as ‘0,0,2,2’, while the bass line does not change pitch at all, so is represented as ‘0,0,0,0’.

4 Results

Our training and test data are derived from chorale harmonisations by Johann Sebastian Bach.1 These provide a relatively large set of harmonisations by a single composer, and are long established as a standard reference among music theorists. There are 202 chorales in major keys of which 121 were used for training and 81 used for testing; and 180 chorales in minor keys (split 108/72). Using a probabilistic framework allows us to give quantitative answers to questions about the performance of the harmonisation system. There are many quantities we could compute, but here we will look at how high a probability the model assigns to Bach's own harmonisations given the respective melody lines. We calculate average negative log probabilities per symbol, which describe how predictable the symbols are under the model. These quantities provide sample estimates of cross-entropy. Whereas verbal descriptions of harmonisation performance are unavoidably vague and hard to compare, these figures allow our model's performance to be directly compared with that of any future probabilistic harmonisation system.
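These per-symbol averages are straightforward to compute; a minimal helper, where the `prob` callable is an illustrative stand-in for whichever model is being scored:

```python
import math

def avg_neg_log_prob(sequence, prob):
    """Average negative log probability per symbol in nats:
    -(1/T) * sum_t ln P(symbol_t). `prob(t, symbol)` returns the model's
    probability for the symbol at position t (e.g. a unigram, bigram, or
    posterior probability, depending on the model being evaluated)."""
    T = len(sequence)
    return -sum(math.log(prob(t, s)) for t, s in enumerate(sequence)) / T
```

For the independent-chord baseline, for instance, `prob` would ignore `t` and return the smoothed relative frequency of the chord.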
Table 1 shows the average negative log probability per symbol of Bach's chord symbol sequences given their respective melodic symbol sequences, −(1/T) ln P(C|Y), on training and test data sets of chorales in major and minor keys. As a comparison we give analogous negative log probabilities for a model predicting chord states from their respective melody notes, −(1/T) Σ ln P(ct|yt), for a simple Markov chain between the chord states, −(1/T) Σ ln P(ct|ct−1), and for a model which assumes that the chord states are independently drawn, −(1/T) Σ ln P(ct). The Hidden Markov Model here has 5046 hidden chord states and 58 visible melody states. The Hidden Markov Model finds a better fit to the training data than the simpler models: to choose a good chord for a particular beat we need to take into account both the melody note on that beat and the surrounding chords. Even the simplest model of the data, which assumes that each chord is drawn independently, performs worse on the test data than the training data, showing that we are suffering from sparse data. There are many chords, chord to melody note emissions, and especially chord to chord transitions, that are seen in the test data but never occur in the training data. The models' performance with unseen data could be improved by using a more sophisticated smoothing method, for example taking into account the overall relative frequencies of harmonic symbols when assigning probabilities to unseen chord transitions.

1We used a computer-readable edition of Bach's chorales downloaded from ftp://i11ftp.ira.uka.de/pub/neuro/dominik/midifiles/bach.zip

Figure 2: Most likely harmonisation under our model of chorale K4, BWV 48
Figure 3: Most likely harmonisation under our model of chorale K389, BWV 438
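One possible realisation of such a smoothing scheme (illustrative only; the system in the text uses simple additive smoothing) is to interpolate the bigram estimate with each chord's overall relative frequency, so that unseen transitions receive probability proportional to how common the target chord is:

```python
from collections import Counter

def interpolated_transitions(chord_seqs, lam=0.9):
    """Interpolated transition estimate:
    P(cur|prev) = lam * count(prev,cur)/count(prev as context)
                + (1 - lam) * count(cur)/N.
    `chord_seqs` is a list of chord-symbol sequences; `lam` weights the
    bigram term against the unigram background."""
    uni, bi, ctx = Counter(), Counter(), Counter()
    for seq in chord_seqs:
        uni.update(seq)
        ctx.update(seq[:-1])            # contexts: every chord with a successor
        bi.update(zip(seq, seq[1:]))
    n = sum(uni.values())

    def p(prev, cur):
        bigram = bi[(prev, cur)] / ctx[prev] if ctx[prev] else 0.0
        return lam * bigram + (1 - lam) * uni[cur] / n
    return p
```

For any seen context, the returned probabilities sum to one over the training vocabulary, and unseen transitions get nonzero mass weighted by the target chord's overall frequency.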
However, this lower performance with unseen test data is not a problem for the task we approach here, of generating new harmonisations, as long as we can learn a large enough vocabulary of events from the training data to be able to find good harmonisations for new chorale melodies. Figures 2 and 3 show the most likely harmonisations under our model for two short chorales. The system has generated reasonable harmonisations. We can see, for example, passages of parallel and contrary motion between the different parts. There is an appropriate harmonic movement through the harmonisations, and they come to plausible cadences. The generated harmonisations suffer somewhat from not taking into account the flow of the individual musical lines which we add. There are large jumps, especially in the bass line, more often than is desirable – the bass line suffers most since it has the greatest variance with respect to the soprano melody. This excessive jumping also feeds through to reduce the performance of the ornamentation stage, creating visible states which are unseen in the training data. The model structure means that the most likely harmonisation leaves these states unornamented. Nevertheless, where ornamentation has been added it fits with its context and enhances the harmonisations. The authors will publish further example harmonisations, including MIDI files, online at http://www.tardis.ed.ac.uk/~moray/harmony/.

5 Relationship to previous work

Even while Bach was still composing chorales, music theorists were catching up with musical practice by writing treatises to explain and to teach harmonisation. Two famous examples, Rameau's Treatise on Harmony [2] and the Gradus ad Parnassum by Fux [3], show how musical style was systematised and formalised into sets of rules.
The traditional formulation of harmonisation technique in terms of rules suggests that we might create an automatic harmonisation system by finding as many rules as we can and encoding them as a consistent set of constraints. Pachet and Roy [4] provide a good overview of constraint-based harmonisation systems. For example, one early system [5] takes rules from Fux and assigns penalties according to the seriousness of each rule being broken. This system then conducts a modified best-first search to produce harmonisations. Using standard constraint-satisfaction techniques for harmonisation is problematic, since the space and time needs of the solver tend to rise extremely quickly with the length of the piece. Several systems have applied genetic programming techniques to harmonisation, for example McIntyre [6]. These are similar to the constraint-based systems described above, but instead of using hard constraints they encode their rules as a fitness function, and try to optimise that function by evolutionary techniques. Phon-Amnuaisuk and Wiggins [7] are reserved in their assessment of genetic programming for harmonisation. They make a direct comparison with an ordinary constraint-based system, and conclude that the performance of each system is related to the amount of knowledge encoded in it rather than the particular technique it uses. In their comparison the ordinary constraint-based system actually performs much better, and they argue that this is because it possesses implicit control knowledge which the system based on the genetic algorithm lacks. Even if they can be made more efficient, these rule-based systems do not perform the full task of our harmonisation system. They take a large set of rules written by a human and attempt to find a valid solution, whereas our system learns its rules from examples. Hild et al. [8] use neural networks to harmonise chorales.
Like the Hidden Markov Models in our system, these neural networks are trained using example harmonisations. However, while two of their three subtasks use only neural networks trained on example harmonisations, their second subtask, where chords are chosen to instantiate more general harmonies, includes constraint satisfaction. Rules written by a human penalise undesirable combinations of notes, so that they will be filtered out when the best chord is chosen from all those compatible with the harmony already decided. In contrast, our model learns all its harmonic ‘rules’ from its training data. Ponsford et al. [9] use n-gram Markov models to generate harmonic structures. Unlike in chorale harmonisation, there is no predetermined tune with which the harmonies need to fit. The data set they use is a selection of 84 saraband dances, by 15 different seventeenth-century French composers. An automatically annotated corpus is used to train Markov models using contexts of different lengths, and the weighted sum of the probabilities assigned by these models is used to predict harmonic movement. Ponsford et al. create new pieces first by random generation from their models, and secondly by selecting those randomly-generated pieces which match a given template. Using templates gives better results, but the great majority of randomly-generated pieces will not match the template and so will have to be discarded. Using a Hidden Markov Model rather than simple n-grams allows this kind of template to be included in the model as the visible state of the system: the chorale tunes in our system can be thought of as complex templates for harmonisations. Ponsford et al. note that even with their longest context length, the cadences are poor. In our system the ‘planning’ ability of Hidden Markov Models, using the combination of chords and harmonic labels encoded in the hidden states, produces cadences which bring the chorale tunes to harmonic closure.
This paper stems from work described in the first author's MSc thesis [10] carried out in 2002. We have recently become aware that similar work has been carried out independently in Japan by a team led by Prof S. Sagayama [11, 12]. To our knowledge this work has been published only in Japanese2. The basic frameworks are similar, but there are several differences. First, their system only describes the harmonisation in terms of the harmonic label (e.g. T for tonic) and does not fully specify the voicing of the three harmony lines or ornamentation. Secondly, they do not give a quantitative evaluation of the harmonisations produced as in our Table 1. Thirdly, in [12] a Markov model on blocks of chord sequences rather than on individual chords is explored.

6 Discussion

Using the framework of probabilistic inference allows us to perform efficient inference to generate new chorale harmonisations, avoiding the computational scaling problems suffered by constraint-based harmonisation systems. We described above neural network and genetic algorithm techniques which were less compute-intensive than straightforward constraint satisfaction, but the harmonisation systems using these techniques retain a preprogrammed knowledge base, whereas our model is able to learn its harmonisation constraints from training data. Different forms of graphical model would allow us to take into account more of the dependencies in harmonisation. For example, we could use a higher-order Markov structure, although this by itself would be likely to greatly increase the problems already seen here with sparse data. An alternative might be to use an Autoregressive Hidden Markov Model [13], which models the transitions between visible states as well as the hidden state transitions modelled by an ordinary Hidden Markov Model. Not all of Bach's chorale harmonisations are in the same style. Some of his harmonisations are intentionally complex, and others intentionally simple.
We could improve our harmonisations by modelling this stylistic variation, either manually annotating training chorales according to their style or by training a mixture of HMMs. As we only wish to model the hidden harmonic state given the melody, rather than construct a full generative model of the data, Conditional Random Fields (CRFs) [14] provide a related but alternative framework. However, note that training such models (e.g. using iterative scaling methods) is more difficult than the simple counting methods that can be applied to the HMM case. On the other hand the use of the CRF framework would have some advantages, in that additional features could be incorporated. For example, we might be able to make better predictions by taking into account the current time step's position within its musical bar. Music theory recognises a hierarchy of stressed beats within a bar, and harmonic movement should correlate with these stresses. The ornamentation process especially might benefit from a feature-based approach. Our system described above only considers chords as sets of intervals, and thus does not have a notion of the key of a piece (other than major or minor). However, voices have a preferred range and thus the notes that should be used do depend on the key, so the key signature could also be used as a feature in a CRF. Taking into account the natural range of each voice would prevent the bass line from descending too low and keep the three parts closer together. In general more interesting harmonies result when musical lines are closer together and their movements are more constrained. Another dimension that could be explored with CRFs would be to take into account the words of the chorales, since Bach's own harmonisations are affected by the properties of the texts as well as of the melodies.

2We thank Yoshinori Shiga for explaining this work to us.
Acknowledgments MA gratefully acknowledges support through a research studentship from Microsoft Research Ltd. References [1] R. Durbin, S. R. Eddy, A. Krogh, and G. Mitchison. Biological sequence analysis. Cambridge University Press, 1998. [2] J.-P. Rameau. Trait´e de l’Harmonie reduite `a ses principes naturels. Paris, 1722. [3] J. J. Fux. Gradus ad Parnassum. Vienna, 1725. [4] F. Pachet and P. Roy. Musical harmonization with constraints: A survey. Constraints, 6(1): 7–19, 2001. [5] B. Schottstaedt. Automatic species counterpoint. Technical report, Stanford University CCRMA, 1989. [6] R. A. McIntyre. Bach in a box: The evolution of four-part baroque harmony using the genetic algorithm. In Proceedings of the IEEE Conference on Evolutionary Computation, 1994. [7] S. Phon-Amnuaisuk and G. A. Wiggins. The four-part harmonisation problem: a comparison between genetic algorithms and a rule-based system. In Proceedings of the AISB’99 Symposium on Musical Creativity, 1999. [8] H. Hild, J. Feulner, and W. Menzel. HARMONET: A neural net for harmonizing chorales in the style of J.S. Bach. In R.P. Lippman, J.E. Moody, and D.S. Touretzky, editors, Advances in Neural Information Processing 4, pages 267–274. Morgan Kaufmann, 1992. [9] D. Ponsford, G. Wiggins, and C. Mellish. Statistical learning of harmonic movement. Journal of New Music Research, 1999. [10] M. M. Allan. Harmonising Chorales in the Style of Johann Sebastian Bach. Master’s thesis, School of Informatics, University of Edinburgh, 2002. [11] T. Kawakami. Hidden Markov Model for Automatic Harmonization of Given Melodies. Master’s thesis, School of Information Science, JAIST, 2000. In Japanese. [12] K. Sugawara, T. Nishimoto, and S. Sagayama. Automatic harmonization for melodies based on HMMs including note-chain probability. Technical Report 2003-MUS-53, Acoustic Society of Japan, December 2003. In Japanese. [13] P. C. Woodland. 
Hidden Markov Models using vector linear prediction and discriminative output distributions. In Proc ICASSP, volume I, pages 509–512, 1992. [14] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional Random Fields: probabilistic models for segmenting and labeling sequence data. In Proc ICML, pages 282–289, 2001.
Nearly Tight Bounds for the Continuum-Armed Bandit Problem

Robert Kleinberg∗

Abstract

In the multi-armed bandit problem, an online algorithm must choose from a set of strategies in a sequence of n trials so as to minimize the total cost of the chosen strategies. While nearly tight upper and lower bounds are known in the case when the strategy set is finite, much less is known when there is an infinite strategy set. Here we consider the case when the set of strategies is a subset of R^d, and the cost functions are continuous. In the d = 1 case, we improve on the best-known upper and lower bounds, closing the gap to a sublogarithmic factor. We also consider the case where d > 1 and the cost functions are convex, adapting a recent online convex optimization algorithm of Zinkevich to the sparser feedback model of the multi-armed bandit problem.

1 Introduction

In an online decision problem, an algorithm must choose from among a set of strategies in each of n consecutive trials so as to minimize the total cost of the chosen strategies. The costs of strategies are specified by a real-valued function which is defined on the entire strategy set and which varies over time in a manner initially unknown to the algorithm. The archetypical online decision problems are the best expert problem, in which the entire cost function is revealed to the algorithm as feedback at the end of each trial, and the multi-armed bandit problem, in which the feedback reveals only the cost of the chosen strategy. The names of the two problems are derived from the metaphors of combining expert advice (in the case of the best expert problem) and learning to play the best slot machine in a casino (in the case of the multi-armed bandit problem). The applications of online decision problems are too numerous to be listed here.
In addition to occupying a central position in online learning theory, algorithms for such problems have been applied in numerous other areas of computer science, such as paging and caching [6, 14], data structures [7], routing [4, 5], wireless networks [19], and online auction mechanisms [8, 15]. Algorithms for online decision problems are also applied in a broad range of fields outside computer science, including statistics (sequential design of experiments [18]), economics (pricing [20]), game theory (adaptive game playing [13]), and medical decision making (optimal design of clinical trials [10]). Multi-armed bandit problems have been studied quite thoroughly in the case of a finite strategy set, and the performance of the optimal algorithm (as a function of n) is known up to a constant factor [3, 18]. In contrast, much less is known in the case of an infinite strategy set. In this paper, we consider multi-armed bandit problems with a continuum of strategies, parameterized by one or more real numbers. In other words, we are studying online learning problems in which the learner designates a strategy in each time step by specifying a d-tuple of real numbers (x1, . . . , xd); the cost function is then evaluated at (x1, . . . , xd) and this number is reported to the algorithm as feedback. Recent progress on such problems has been spurred by the discovery of new algorithms (e.g. [4, 9, 16, 21]) as well as compelling applications. Two such applications are online auction mechanism design [8, 15], in which the strategy space is an interval of feasible prices, and online oblivious routing [5], in which the strategy space is a flow polytope.

∗M.I.T. CSAIL, Cambridge, MA 02139. Email: rdk@csail.mit.edu. Supported by a Fannie and John Hertz Foundation Fellowship.
Algorithms for online decision problems are often evaluated in terms of their regret, defined as the difference in expected cost between the sequence of strategies chosen by the algorithm and the best fixed (i.e. not time-varying) strategy. While tight upper and lower bounds on the regret of algorithms for the K-armed bandit problem have been known for many years [3, 18], our knowledge of such bounds for continuum-armed bandit problems is much less satisfactory. For a one-dimensional strategy space, the first algorithm with sublinear regret appeared in [1], while the first polynomial lower bound on regret appeared in [15]. For Lipschitz-continuous cost functions (the case introduced in [1]), the best known upper and lower bounds for this problem are currently O(n^{3/4}) and Ω(n^{1/2}), respectively [1, 15], leaving as an open question the problem of determining tight bounds for the regret as a function of n. Here, we solve this open problem by sharpening the upper and lower bounds to O(n^{2/3} log^{1/3}(n)) and Ω(n^{2/3}), respectively, closing the gap to a sublogarithmic factor. Note that this requires improving the best known algorithm as well as the lower bound technique. Recently, and independently, Eric Cope [11] considered a class of cost functions obeying a more restrictive condition on the shape of the function near its optimum, and for such functions he obtained a sharper bound on regret than the bound proved here for uniformly locally Lipschitz cost functions. Cope requires that each cost function C achieves its optimum at a unique point θ, and that there exist constants K0 > 0 and p ≥ 1 such that for all x, |C(x) − C(θ)| ≥ K0 ∥x − θ∥^p. For this class of cost functions — which is probably broad enough to capture most cases of practical interest — he proves that the regret of the optimal continuum-armed bandit algorithm is O(n^{−1/2}), and that this bound is tight.
For a d-dimensional strategy space, any multi-armed bandit algorithm must suffer regret depending exponentially on d unless the cost functions are further constrained. (This is demonstrated by a simple counterexample in which the cost function is identically zero in all but one orthant of R^d, takes a negative value somewhere in that orthant, and does not vary over time.) For the best-expert problem, algorithms whose regret is polynomial in d and sublinear in n are known for the case of cost functions which are constrained to be linear [16] or convex [21]. In the case of linear cost functions, the relevant algorithm has been adapted to the multi-armed bandit setting in [4, 9]. Here we adapt the online convex programming algorithm of [21] to the continuum-armed bandit setting, obtaining the first known algorithm for this problem to achieve regret depending polynomially on d and sublinearly on n. A remarkably similar algorithm was discovered independently and simultaneously by Flaxman, Kalai, and McMahan [12]. Their algorithm and analysis are superior to ours, requiring fewer smoothness assumptions on the cost functions and producing a tighter upper bound on regret.

2 Terminology and Conventions

We will assume that a strategy set S ⊆ R^d is given, and that it is a compact subset of R^d. Time steps will be denoted by the numbers {1, 2, . . . , n}. For each t ∈ {1, 2, . . . , n} a cost function Ct : S → R is given. These cost functions must satisfy a continuity property based on the following definition. A function f is uniformly locally Lipschitz with constant L (0 ≤ L < ∞), exponent α (0 < α ≤ 1), and restriction δ (δ > 0) if it is the case that for all u, u′ ∈ S with ∥u − u′∥ ≤ δ, |f(u) − f(u′)| ≤ L ∥u − u′∥^α. (Here, ∥·∥ denotes the Euclidean norm on R^d.) The class of all such functions f will be denoted by ulL(α, L, δ). We will consider two models which may govern the cost functions.
The first of these is identical with the continuum-armed bandit problem considered in [1], except that [1] formulates the problem in terms of maximizing reward rather than minimizing cost. The second model concerns a sequence of cost functions chosen by an oblivious adversary.

Random: The functions C1, . . . , Cn are independent, identically distributed random samples from a probability distribution on functions C : S → R. The expected cost function C̄ : S → R is defined by C̄(u) = E(C(u)), where C is a random sample from this distribution. This function C̄ is required to belong to ulL(α, L, δ) for some specified α, L, δ. In addition, we assume there exist positive constants ζ, s0 such that if C is a random sample from the given distribution on cost functions, then E(e^{sC(u)}) ≤ e^{ζ^2 s^2 / 2} for all |s| ≤ s0, u ∈ S. The “best strategy” u∗ is defined to be any element of arg min_{u∈S} C̄(u). (This set is non-empty, by the compactness of S.)

Adversarial: The functions C1, . . . , Cn are a fixed sequence of functions in ulL(α, L, δ), taking values in [0, 1]. The “best strategy” u∗ is defined to be any element of arg min_{u∈S} Σ_{t=1}^{n} Ct(u). (Again, this set is non-empty by compactness.)

A multi-armed bandit algorithm is a rule for deciding which strategy to play at time t, given the outcomes of the first t − 1 trials. More formally, a deterministic multi-armed bandit algorithm U is a sequence of functions U1, U2, . . . such that Ut : (S × R)^{t−1} → S. The interpretation is that Ut(u1, x1, u2, x2, . . . , ut−1, xt−1) defines the strategy to be chosen at time t if the algorithm's first t − 1 choices were u1, . . . , ut−1 respectively, and their costs were x1, . . . , xt−1 respectively. A randomized multi-armed bandit algorithm is a probability distribution over deterministic multi-armed bandit algorithms. (If the cost functions are random, we will assume their randomness is independent of the algorithm's random choices.)
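The uniformly locally Lipschitz condition of Section 2 can be spot-checked numerically for a candidate function; a rough sketch on S = [0, 1]^d (the sampling scheme and tolerance are our own choices, and a passing check is of course only evidence, not a proof):

```python
import numpy as np

def check_ull(f, L, alpha, delta, d=1, n_pairs=2000, seed=0):
    """Spot-check membership of f in ulL(alpha, L, delta) on S = [0,1]^d
    by sampling pairs u, u' with ||u - u'|| <= delta and testing
    |f(u) - f(u')| <= L * ||u - u'||^alpha (up to a small tolerance)."""
    rng = np.random.default_rng(seed)
    for _ in range(n_pairs):
        u = rng.random(d)
        v = np.clip(u + rng.uniform(-delta, delta, d), 0.0, 1.0)
        dist = np.linalg.norm(u - v)
        if 0 < dist <= delta and abs(f(u) - f(v)) > L * dist ** alpha + 1e-9:
            return False
    return True
```

For example, f(u) = sqrt(u) on [0, 1] passes with exponent α = 1/2 but fails with α = 1, since its slope is unbounded near 0.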
For a randomized multi-armed bandit algorithm, the n-step regret Rn is the expected difference in total cost between the algorithm's chosen strategies u1, u2, . . . , un and the best strategy u∗, i.e.

Rn = E[ Σ_{t=1}^{n} (Ct(ut) − Ct(u∗)) ].

Here, the expectation is over the algorithm's random choices and (in the random-costs model) the randomness of the cost functions.

3 Algorithms for the one-parameter case (d = 1)

The continuum-bandit algorithm presented in [1] is based on computing an estimate Ĉ of the expected cost function C̄ which converges almost surely to C̄ as n → ∞. This estimate is obtained by devoting a small fraction of the time steps (tending to zero as n → ∞) to sampling the random cost functions at an approximately equally-spaced sequence of “design points” in the strategy set, and combining these samples using a kernel estimator. When the algorithm is not sampling a design point, it chooses a strategy which minimizes expected cost according to the current estimate Ĉ. The convergence of Ĉ to C̄ ensures that the average cost in these “exploitation steps” converges to the minimum value of C̄. A drawback of this approach is its emphasis on estimating the entire function C̄. Since the algorithm's goal is to minimize cost, its estimate of C̄ need only be accurate for strategies where C̄ is near its minimum. Elsewhere a crude estimate of C̄ would have sufficed, since such strategies may safely be ignored by the algorithm. The algorithm in [1] thus uses its sampling steps inefficiently, focusing too much attention on portions of the strategy interval where an accurate estimate of C̄ is unnecessary. We adopt a different approach which eliminates this inefficiency and also leads to a much simpler algorithm. First we discretize the strategy space by constraining the algorithm to choose strategies only from a fixed, finite set of K equally spaced design points {1/K, 2/K, . . . , 1}.
(For simplicity, we are assuming here and for the rest of this section that S = [0, 1].) This reduces the continuum-armed bandit problem to a finite-armed bandit problem, and we may apply one of the standard algorithms for such problems. Our continuum-armed bandit algorithm is shown in Figure 1. The outer loop uses a standard doubling technique to transform a non-uniform algorithm to a uniform one. The inner loop requires a subroutine MAB which should implement a finite-armed bandit algorithm appropriate for the cost model under consideration. For example, MAB could be the algorithm UCB1 of [2] in the random case, or the algorithm Exp3 of [3] in the adversarial case. The semantics of MAB are as follows: it is initialized with a finite set of strategies; subsequently it recommends strategies in this set, waits to learn the feedback score for its recommendation, and updates its recommendation when the feedback is received. The analysis of this algorithm will ensure that its choices have low regret relative to the best design point. The Lipschitz regularity of C̄ guarantees that the best design point performs nearly as well, on average, as the best strategy in S.

ALGORITHM CAB1
  T ← 1
  while T ≤ n
      K ← (T / log T)^{1/(2α+1)}
      Initialize MAB with strategy set {1/K, 2/K, . . . , 1}.
      for t = T, T + 1, . . . , min(2T − 1, n)
          Get strategy ut from MAB.
          Play ut and discover Ct(ut).
          Feed 1 − Ct(ut) back to MAB.
      end
      T ← 2T
  end

Figure 1: Algorithm for the one-parameter continuum-armed bandit problem

Theorem 3.1. In both the random and adversarial models, the regret of algorithm CAB1 is O(n^{(α+1)/(2α+1)} log^{α/(2α+1)}(n)).

Proof Sketch. Let q = α/(2α+1), so that the regret bound is O(n^{1−q} log^q(n)). It suffices to prove that the regret in the inner loop is O(T^{1−q} log^q(T)); if so, then we may sum this bound over all iterations of the inner loop to get a geometric progression with constant ratio, whose largest term is O(n^{1−q} log^q(n)).
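As a concrete illustration of Figure 1, here is a sketch in Python using Exp3 as the MAB subroutine for the adversarial model. The exploration rate γ and the rounding of K to an integer are our own choices, not specified in the figure:

```python
import math
import random

class Exp3:
    """Exp3 [Auer et al.] as the MAB subroutine; expects rewards in [0, 1]."""
    def __init__(self, k, gamma):
        self.k, self.gamma, self.w = k, gamma, [1.0] * k

    def select(self):
        s = sum(self.w)
        self.p = [(1 - self.gamma) * wi / s + self.gamma / self.k for wi in self.w]
        self.arm = random.choices(range(self.k), weights=self.p)[0]
        return self.arm

    def update(self, reward):
        # importance-weighted exponential update for the arm just played
        self.w[self.arm] *= math.exp(self.gamma * reward / (self.k * self.p[self.arm]))

def cab1(cost, n, alpha):
    """CAB1 on S = [0, 1]: doubling epochs; each epoch discretises S into
    roughly (T / log T)^(1/(2*alpha+1)) design points and runs Exp3 on them.
    `cost(t, u)` must return C_t(u) in [0, 1]; returns the total cost incurred."""
    T, total = 1, 0.0
    while T <= n:
        K = max(1, round((T / max(math.log(T), 1.0)) ** (1.0 / (2 * alpha + 1))))
        mab = Exp3(K, gamma=min(1.0, math.sqrt(K * math.log(K + 1.0) / T)))
        for t in range(T, min(2 * T - 1, n) + 1):
            u = (mab.select() + 1) / K      # design points 1/K, 2/K, ..., 1
            c = cost(t, u)
            total += c
            mab.update(1.0 - c)             # feed back reward 1 - C_t(u_t)
        T *= 2
    return total
```

For the random model one would swap Exp3 for a UCB1-style subroutine, exactly as the text describes.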
So from now on assume that T is fixed and that K is defined as in Figure 1, and for simplicity renumber the T steps in this iteration of the inner loop so that the first is step 1 and the last is step T. Let u∗ be the best strategy in S, and let u′ be the element of {1/K, 2/K, . . . , 1} nearest to u∗. Then |u′ − u∗| < 1/K, so using the fact that C̄ ∈ ulL(α, L, δ) (or that (1/T) Σ_{t=1}^{T} Ct ∈ ulL(α, L, δ) in the adversarial case) we obtain

E[ Σ_{t=1}^{T} (Ct(u′) − Ct(u∗)) ] ≤ LT K^{−α} = O(T^{1−q} log^q(T)).

It remains to show that E[ Σ_{t=1}^{T} (Ct(ut) − Ct(u′)) ] = O(T^{1−q} log^q(T)). For the adversarial model, this follows directly from Corollary 4.2 in [3], which asserts that the regret of Exp3 is O(√(TK log K)). For the random model, a separate argument is required. (The upper bound for the adversarial model doesn't directly imply an upper bound for the random model, since the cost functions are required to take values in [0, 1] in the adversarial model but not in the random model.) For u ∈ {1/K, 2/K, . . . , 1} let ∆(u) = C̄(u) − C̄(u′). Let ∆ = √(K log(T)/T), and partition the set {1/K, 2/K, . . . , 1} into two subsets A, B according to whether ∆(u) < ∆ or ∆(u) ≥ ∆. The time steps in which the algorithm chooses strategies in A contribute at most O(T∆) = O(T^{1−q} log^q(T)) to the regret. For each strategy u ∈ B, one may prove that, with high probability, u is played only O(log(T)/∆(u)^2) times. (This parallels the corresponding proof in [2] and is omitted here. Our hypothesis on the moment generating function of the random variable C(u) is strong enough to imply the exponential tail inequality required in that proof.) This implies that the time steps in which the algorithm chooses strategies in B contribute at most O(K log(T)/∆) = O(T^{1−q} log^q(T)) to the regret, which completes the proof.

4 Lower bounds for the one-parameter case

There are many reasons to expect that Algorithm CAB1 is an inefficient algorithm for the continuum-armed bandit problem.
Chief among these is the fact that it treats the strategies {1/K, 2/K, . . . , 1} as an unordered set, ignoring the fact that experiments which sample the cost of one strategy j/K are (at least weakly) predictive of the costs of nearby strategies. In this section we prove that, contrary to this intuition, CAB1 is in fact quite close to the optimal algorithm. Specifically, in the regret bound of Theorem 3.1, the exponent of (α+1)/(2α+1) is the best possible: for any β < (α+1)/(2α+1), no algorithm can achieve regret O(n^β). This lower bound applies to both the randomized and adversarial models. The lower bound relies on a function f : [0, 1] → [0, 1] defined as the sum of a nested family of “bump functions.” Let B be a C^∞ bump function defined on the real line, satisfying 0 ≤ B(x) ≤ 1 for all x, B(x) = 0 if x ≤ 0 or x ≥ 1, and B(x) = 1 if x ∈ [1/3, 2/3]. For an interval [a, b], let B_{[a,b]} denote the bump function B((x − a)/(b − a)), i.e. the function B rescaled and shifted so that its support is [a, b] instead of [0, 1]. Define a random nested sequence of intervals [0, 1] = [a_0, b_0] ⊃ [a_1, b_1] ⊃ . . . as follows: for k > 0, the middle third of [a_{k−1}, b_{k−1}] is subdivided into intervals of width w_k = 3^{−k!}, and [a_k, b_k] is one of these subintervals chosen uniformly at random. Now let

f(x) = 1/3 + (3^{α−1} − 1/3) Σ_{k=1}^{∞} w_k^α B_{[a_k,b_k]}(x).

Finally, define a probability distribution on functions C : [0, 1] → [0, 1] by the following rule: sample λ uniformly at random from the open interval (0, 1) and put C(x) = λ^{f(x)}. The relevant technical properties of this construction are summarized in the following lemma.

Lemma 4.1. Let {u∗} = ∩_{k=1}^{∞} [a_k, b_k]. The function f(x) belongs to ulL(α, L, δ) for some constants L, δ, it takes values in [1/3, 2/3], and it is uniquely maximized at u∗. For each λ ∈ (0, 1), the function C(x) = λ^{f(x)} belongs to ulL(α, L, δ) for some constants L, δ, and is uniquely minimized at u∗. The same two properties are satisfied by the function C̄(x) = E_{λ∈(0,1)}[λ^{f(x)}] = (1 + f(x))^{−1}.
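The construction can be sketched numerically. In the sketch below, `depth` truncates the infinite sum, and the particular smooth step used to build the bump B is a standard choice of ours, not taken from the paper:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def _g(y):          # exp(-1/y) for y > 0, else 0: the standard C-infinity glue
    return math.exp(-1.0 / y) if y > 0 else 0.0

def _step(y):       # smooth step: 0 for y <= 0, 1 for y >= 1
    return _g(y) / (_g(y) + _g(1.0 - y))

def bump(x, a, b):  # B_[a,b]: 0 outside [a, b], 1 on its middle third
    return _step(3 * (x - a) / (b - a)) * _step(3 * (b - x) / (b - a))

def sample_f(alpha, depth=3):
    """Draw nested intervals [a_k, b_k] of width w_k = 3^(-k!) inside the
    middle third of the previous interval, and return the corresponding
    truncated sum f along with the innermost interval."""
    a, b = 0.0, 1.0
    bumps = []
    for k in range(1, depth + 1):
        w = 3.0 ** (-math.factorial(k))
        lo, hi = a + (b - a) / 3, b - (b - a) / 3      # middle third
        i = int(rng.integers(round((hi - lo) / w)))    # uniform subinterval
        a, b = lo + i * w, lo + (i + 1) * w
        bumps.append((a, b, w))
    coef = 3.0 ** (alpha - 1.0) - 1.0 / 3.0
    def f(x):
        return 1.0 / 3.0 + coef * sum(w ** alpha * bump(x, ai, bi)
                                      for ai, bi, w in bumps)
    return f, (a, b)
```

With α = 1 the truncated f equals 1/3 away from the nested intervals and climbs to its unique peak over the innermost interval, staying inside [1/3, 2/3] as Lemma 4.1 requires.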
Theorem 4.2. For any randomized multi-armed bandit algorithm, there exists a probability distribution on cost functions such that for all β < (α+1)/(2α+1), the algorithm's regret {R_n}_{n=1}^∞ in the random model satisfies lim sup_{n→∞} R_n / n^β = ∞. The same lower bound applies in the adversarial model.

Proof sketch. The idea is to prove, using the probabilistic method, that there exists a nested sequence of intervals [0, 1] = [a_0, b_0] ⊃ [a_1, b_1] ⊃ ..., such that if we use these intervals to define a probability distribution on cost functions C(x) as above, then R_n/n^β diverges as n runs through the sequence n_1, n_2, n_3, ... defined by n_k = ⌈(1/k)(w_{k−1}/w_k) w_k^{−2α}⌉. Assume that intervals [a_0, b_0] ⊃ ... ⊃ [a_{k−1}, b_{k−1}] have already been specified. Subdivide [a_{k−1}, b_{k−1}] into subintervals of width w_k, and suppose [a_k, b_k] is chosen uniformly at random from this set of subintervals. For any u, u′ ∈ [a_{k−1}, b_{k−1}], the Kullback-Leibler distance KL(C(u)∥C(u′)) between the cost distributions at u and u′ is O(w_k^{2α}), and it is equal to zero unless at least one of u, u′ lies in [a_k, b_k]. This means, roughly speaking, that the algorithm must sample strategies in [a_k, b_k] at least w_k^{−2α} times before being able to identify [a_k, b_k] with constant probability. But [a_k, b_k] could be any one of w_{k−1}/w_k possible subintervals, and we don't have enough time to play w_k^{−2α} trials in even a constant fraction of these subintervals before reaching time n_k. Therefore, with constant probability, a constant fraction of the strategies chosen up to time n_k are not located in [a_k, b_k], and each of them contributes Ω(w_k^α) to the regret. This means the expected regret at time n_k is Ω(n_k w_k^α). From this, we obtain the stated lower bound using the fact that n_k w_k^α = n_k^{(α+1)/(2α+1) − o(1)}.
Although this proof sketch rests on a much more complicated construction than the lower bound proof for the finite-armed bandit problem given by Auer et al. in [3], one may follow essentially the same series of steps as in their proof to make the sketch given above into a rigorous proof. The only significant technical difference is that we are working with continuous-valued rather than discrete-valued random variables, which necessitates using the differential Kullback-Leibler distance (defined by the formula KL(P∥Q) = ∫ log(p(x)/q(x)) dp(x), for probability distributions P, Q with density functions p, q) rather than working with the discrete Kullback-Leibler distance as in [3].

5 An online convex optimization algorithm

We turn now to continuum-armed bandit problems with a strategy space of dimension d > 1. As mentioned in the introduction, for any randomized multi-armed bandit algorithm there is a cost function C (with any desired degree of smoothness and boundedness) such that the algorithm's regret is Ω(2^d) when faced with the input sequence C_1 = C_2 = ... = C_n = C. As a counterpoint to this negative result, we seek interesting classes of cost functions which admit a continuum-armed bandit algorithm whose regret is polynomial in d (and, as always, sublinear in n). A natural candidate is the class of convex, smooth functions on a closed, bounded, convex strategy set S ⊂ R^d, since this is the most general class of functions for which the corresponding best-expert problem is known to admit an efficient algorithm, namely Zinkevich's greedy projection algorithm [21]. Greedy projection is initialized with a sequence of learning rates η_1 > η_2 > .... It selects an arbitrary initial strategy u_1 ∈ S and updates its strategy in each subsequent time step t according to the rule u_{t+1} = P(u_t − η_t ∇C_t(u_t)), where ∇C_t(u_t) is the gradient of C_t at u_t and P : R^d → S is the projection operator which maps each point of R^d to the nearest point of S.
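Zinkevich's update is a one-liner once a projection oracle for S is available. The sketch below uses the unit box as S (so projection is coordinatewise clipping) and assumes full gradient feedback; the function names are ours.

```python
import numpy as np

def project_box(u, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^d is coordinatewise clipping.
    return np.clip(u, lo, hi)

def greedy_projection(grads, u0, etas):
    """Run u_{t+1} = P(u_t - eta_t * grad C_t(u_t)); return the final iterate.
    grads: one gradient callable per time step; etas: the learning rates."""
    u = np.asarray(u0, dtype=float)
    for grad, eta in zip(grads, etas):
        u = project_box(u - eta * grad(u))
    return u
```

For a fixed cost C(u) = ||u − c||², whose gradient is 2(u − c), the iterates contract geometrically toward c, which makes a convenient sanity check.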
(Here, distance is measured according to the Euclidean norm.) Note that greedy projection is nearly a multi-armed bandit algorithm: if the algorithm's feedback when sampling strategy u_t were the vector ∇C_t(u_t) rather than the number C_t(u_t), it would have all the information required to run greedy projection. To adapt this algorithm to the multi-armed bandit setting, we use the following idea: group the timeline into phases of d + 1 consecutive steps, with a cost function C_φ for each phase φ defined by averaging the cost functions at each time step of φ. In each phase use trials at d + 1 affinely independent points of S, located at or near u_t, to estimate the gradient ∇C_φ(u_t).² To describe the algorithm, it helps to assume that the convex set S is in isotropic position in R^d. (If not, we may bring it into isotropic position by an affine transformation of the coordinate system. This does not increase the regret by a factor of more than d².) The algorithm, which we will call simulated greedy projection, works as follows. It is initialized with a sequence of "learning rates" η_1, η_2, ... and "frame sizes" ν_1, ν_2, .... At the beginning of a phase φ, we assume the algorithm has determined a basepoint strategy u_φ. (An arbitrary u_φ may be used in the first phase.) The algorithm chooses a set of d + 1 affinely independent points {x_0 = u_φ, x_1, x_2, ..., x_d} with the property that for any y ∈ S, the difference y − x_0 may be expressed as a linear combination of the vectors {x_i − x_0 : 1 ≤ i ≤ d} using coefficients in [−2, 2]. (Such a set is called an approximate barycentric spanner, and may be computed efficiently using an algorithm specified in [4].) We then choose a random bijection σ mapping the time steps in phase φ into the set {0, 1, ..., d}, and in step t we sample the strategy y_t = u_φ + ν_φ(x_{σ(t)} − u_φ). At the end of the phase we let B_φ denote the unique affine function whose values at the points y_t are equal to the costs observed during the phase at those points.
The basepoint for the next phase φ′ is determined according to Zinkevich's update rule u_{φ′} = P(u_φ − η_φ ∇B_φ(u_φ)).³

Theorem 5.1. Assume that S is in isotropic position and that the cost functions satisfy |C_t(x)| ≤ 1 for all x ∈ S, 1 ≤ t ≤ n, and that in addition the Hessian matrix of C_t(x) at each point x ∈ S has Frobenius norm bounded above by a constant. If η_k = k^{−3/4} and ν_k = k^{−1/4}, then the regret of the simulated greedy projection algorithm is O(d³ n^{3/4}).

Proof sketch. In each phase φ, let Y_φ = {y_0, ..., y_d} be the set of points which were sampled, and define the following four functions: C_φ, the average of the cost functions in phase φ; Λ_φ, the linearization of C_φ at u_φ, defined by the formula Λ_φ(x) = ∇C_φ(u_φ) · (x − u_φ) + C_φ(u_φ); L_φ, the unique affine function which agrees with C_φ at each point of Y_φ; and B_φ, the affine function computed by the algorithm at the end of phase φ. The algorithm is simply running greedy projection with respect to the simulated cost functions B_φ, and it consequently satisfies a low-regret bound with respect to those functions. The expected value of B_φ(u) is L_φ(u) for every u. (Proof: both are affine functions, and they agree on every point of Y_φ.) Hence we obtain a low-regret bound with respect to L_φ.

² Flaxman, Kalai, and McMahan [12], with characteristic elegance, supply an algorithm which counterintuitively obtains an unbiased estimate of the approximate gradient using only a single sample. Thus they avoid grouping the timeline into phases and improve the algorithm's convergence time by a factor of d.

³ Readers familiar with Kiefer-Wolfowitz stochastic approximation [17] will note the similarity with our algorithm. The random bijection σ (which is unnecessary in the Kiefer-Wolfowitz algorithm) is used here to defend against the oblivious adversary.
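The "simulated gradient" at the heart of each phase is just the gradient of the affine function interpolating the d + 1 observed (point, cost) pairs. A minimal sketch (function name ours):

```python
import numpy as np

def affine_fit_gradient(points, values):
    """Fit the unique affine function g.x + b through d + 1 affinely
    independent points and return its gradient g."""
    P = np.asarray(points, dtype=float)           # shape (d + 1, d)
    v = np.asarray(values, dtype=float)           # shape (d + 1,)
    A = np.hstack([P, np.ones((P.shape[0], 1))])  # last column picks up b
    sol = np.linalg.solve(A, v)
    return sol[:-1]
```

For instance, sampling the plane B(x, y) = 2x + 3y + 1 at (0, 0), (1, 0), (0, 1) gives values (1, 3, 4), and the fitted gradient is (2, 3).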
To transfer this over to a low-regret bound for the original problem, we need to bound several additional terms: the regret experienced because the algorithm was using u_φ + ν_φ(x_{σ(t)} − u_φ) instead of u_φ, the difference between L_φ(u*) and Λ_φ(u*), and the difference between Λ_φ(u*) and C_φ(u*). In each case, the desired upper bound can be inferred from properties of barycentric spanners, or from the convexity of C_φ and the bounds on its first and second derivatives.

References
[1] R. AGRAWAL. The continuum-armed bandit problem. SIAM J. Control and Optimization, 33:1926-1951, 1995.
[2] P. AUER, N. CESA-BIANCHI, AND P. FISCHER. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235-256, 2002.
[3] P. AUER, N. CESA-BIANCHI, Y. FREUND, AND R. SCHAPIRE. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of FOCS 1995.
[4] B. AWERBUCH AND R. KLEINBERG. Near-Optimal Adaptive Routing: Shortest Paths and Geometric Generalizations. In Proceedings of STOC 2004.
[5] N. BANSAL, A. BLUM, S. CHAWLA, AND A. MEYERSON. Online oblivious routing. In Proceedings of SPAA 2003: 44-49.
[6] A. BLUM, C. BURCH, AND A. KALAI. Finely-competitive paging. In Proceedings of FOCS 1999.
[7] A. BLUM, S. CHAWLA, AND A. KALAI. Static Optimality and Dynamic Search-Optimality in Lists and Trees. Algorithmica 36(3): 249-260 (2003).
[8] A. BLUM, V. KUMAR, A. RUDRA, AND F. WU. Online learning in online auctions. In Proceedings of SODA 2003.
[9] A. BLUM AND H. B. MCMAHAN. Online geometric optimization in the bandit setting against an adaptive adversary. In Proceedings of COLT 2004.
[10] D. BERRY AND L. PEARSON. Optimal Designs for Two-Stage Clinical Trials with Dichotomous Responses. Statistics in Medicine 4:487-508, 1985.
[11] E. COPE. Regret and Convergence Bounds for Immediate-Reward Reinforcement Learning with Continuous Action Spaces. Preprint, 2004.
[12] A. FLAXMAN, A. KALAI, AND H. B. MCMAHAN.
Online Convex Optimization in the Bandit Setting: Gradient Descent Without a Gradient. To appear in Proceedings of SODA 2005.
[13] Y. FREUND AND R. SCHAPIRE. Adaptive Game Playing Using Multiplicative Weights. Games and Economic Behavior 29:79-103, 1999.
[14] R. GRAMACY, M. WARMUTH, S. BRANDT, AND I. ARI. Adaptive Caching by Refetching. In Advances in Neural Information Processing Systems 15, 2003.
[15] R. KLEINBERG AND T. LEIGHTON. The Value of Knowing a Demand Curve: Bounds on Regret for On-Line Posted-Price Auctions. In Proceedings of FOCS 2003.
[16] A. KALAI AND S. VEMPALA. Efficient algorithms for the online decision problem. In Proceedings of COLT 2003.
[17] J. KIEFER AND J. WOLFOWITZ. Stochastic Estimation of the Maximum of a Regression Function. Annals of Mathematical Statistics 23:462-466, 1952.
[18] T. L. LAI AND H. ROBBINS. Asymptotically efficient adaptive allocation rules. Adv. in Appl. Math. 6:4-22, 1985.
[19] C. MONTELEONI AND T. JAAKKOLA. Online Learning of Non-stationary Sequences. In Advances in Neural Information Processing Systems 16, 2004.
[20] M. ROTHSCHILD. A Two-Armed Bandit Theory of Market Pricing. Journal of Economic Theory 9:185-202, 1974.
[21] M. ZINKEVICH. Online Convex Programming and Generalized Infinitesimal Gradient Ascent. In Proceedings of ICML 2003, 928-936.
| 2004 | 144 | 2,557 |
Mistake Bounds for Maximum Entropy Discrimination

Philip M. Long, Center for Computational Learning Systems, Columbia University, plong@cs.columbia.edu
Xinyu Wu, Department of Computer Science, National University of Singapore, wuxy@comp.nus.edu.sg

Abstract

We establish a mistake bound for an ensemble method for classification based on maximizing the entropy of voting weights subject to margin constraints. The bound is the same as a general bound proved for the Weighted Majority Algorithm, and similar to bounds for other variants of Winnow. We prove a more refined bound that leads to a nearly optimal algorithm for learning disjunctions, again, based on the maximum entropy principle. We describe a simplification of the on-line maximum entropy method in which, after each iteration, the margin constraints are replaced with a single linear inequality. The simplified algorithm, which takes a similar form to Winnow, achieves the same mistake bounds.

1 Introduction

In this paper, we analyze a maximum-entropy procedure for ensemble learning in the on-line learning model. In this model, learning proceeds in trials. During the t-th trial, the algorithm (1) receives x_t ∈ {0, 1}^n (interpreted in this work as a vector of base classifier predictions), (2) predicts a class ŷ_t ∈ {0, 1}, and (3) discovers the correct class y_t. During trial t, the algorithm has access only to information from previous trials. The first algorithm we will analyze for this problem was proposed by Jaakkola, Meila and Jebara [14]. The algorithm, at each trial t, makes its prediction by taking a weighted vote over the predictions of the base classifiers. The weight vector p_t is the probability distribution over the n base classifiers that maximizes the entropy, subject to the constraint that p_t correctly classifies all patterns seen in previous trials with a given margin γ.
That is, it maximizes the entropy of p_t subject to the constraints that p_t · x_s ≥ 1/2 + γ whenever y_s = 1 for s < t, and p_t · x_s ≤ 1/2 − γ whenever y_s = 0 for s < t. We show that, if there is a weighting p*, determined with benefit of hindsight, that achieves margin γ on all trials, then this on-line maximum entropy procedure makes at most (ln n)/(2γ²) mistakes. Littlestone [19] proved the same bound for the Weighted Majority Algorithm [21], and a similar bound for the Balanced Winnow Algorithm [19]. The original Winnow algorithm was designed to solve the problem of learning a hidden disjunction of a small number k out of a possible n boolean variables. When this problem is reduced to our general setting in the most natural way, the resulting bound is Θ(k² log n), whereas Littlestone proved a bound of ek ln n for Winnow. We prove more refined bounds for a wider family of maximum-entropy algorithms, which use thresholds different than 1/2 (as proposed in [14]) and class-sensitive margins. A mistake bound of ek ln n for learning disjunctions is a consequence of this more refined analysis. The optimization needed at each round can be cast as minimizing a convex function subject to convex constraints, and thus can be solved in polynomial time [25]. However, the same mistake bounds hold for a similar, albeit linear-time, algorithm. This algorithm, after each trial, replaces all constraints from previous trials with a single linear inequality. (This is analogous to the modification of SVMs leading to the ROMMA algorithm [18].) The resulting update is similar in form to Winnow. Littlestone [19] analyzed some variants of Winnow by showing that mistakes cause a reduction in the relative entropy between the learning algorithm's weight vector and that of the target function.
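For comparison with the maximum-entropy updates analyzed below, here is a minimal Weighted Majority sketch in the same trial protocol (base classifiers vote, and voters that predicted the wrong class are demoted multiplicatively); the implementation details are ours.

```python
def weighted_majority(trials, n, beta=0.5):
    """Run Weighted Majority over n base classifiers and count mistakes.
    Each trial is a pair (x, y) with x in {0,1}^n the base predictions."""
    w = [1.0] * n
    mistakes = 0
    for x, y in trials:
        vote1 = sum(wi for wi, xi in zip(w, x) if xi == 1)
        vote0 = sum(wi for wi, xi in zip(w, x) if xi == 0)
        yhat = 1 if vote1 >= vote0 else 0
        if yhat != y:
            mistakes += 1
        # demote every base classifier that predicted the wrong class
        w = [wi * beta if xi != y else wi for wi, xi in zip(w, x)]
    return mistakes
```

If some base classifier is never wrong, the mistake count is O(log n): each mistake multiplies the total weight by at most (1 + β)/2 while the perfect voter's weight stays at 1.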
Kivinen and Warmuth [16] showed that an algorithm related to Winnow trades optimally, in a sense, between accommodating the information from new data and keeping the relative entropy between the new and old weight vectors small. Blum [4] identified a correspondence between Winnow and a different application of the maximum entropy principle, in which the algorithm seeks to maximize the average entropy of the conditional distribution over the class designations (the y_t's) subject to constraints arising from the examples, as proposed in [2]. Our proofs have a similar structure to the analysis of ROMMA [18]. Our problems fall within the general framework analyzed by Gordon [11]; while Gordon's results expose interesting relationships among learning algorithms, applying them did not appear to be the most direct route to solving our concrete problem, nor did they appear likely to result in the most easily understood proofs. As in related analyses like mistake bounds for the perceptron algorithm [22], Winnow [19] and the Weighted Majority Algorithm [19], our bound holds for any sequence of (x_t, y_t) pairs satisfying the separation condition; in particular no independence assumptions are needed. Langford, Seeger and Megiddo [17] performed a related analysis, incomparable in strength, using independence assumptions. Other related papers include [3, 20, 5, 15, 26, 13, 8, 27, 7]. The proofs of our main results do not contain any calculation; they combine simple geometric arguments with established information theory. The proof of the main result proceeds roughly as follows. If there is a mistake on trial t, it is corrected with a large margin by p_{t+1}. Thus p_{t+1} must assign a significantly different probability to the voters predicting 1 on trial t than p_t does. Applying a bound known as Pinsker's inequality, this means that the relative entropy between p_{t+1} and p_t is large.
Next, we exploit the fact that the constraints satisfied by p_t, and therefore by p_{t+1}, are convex to show that moving from p_t to p_{t+1} must take you away from the uniform distribution, thus decreasing the entropy. The theorem then follows from the fact that the entropy can only be reduced by a total of ln n. The refinement leading to an ek ln n bound for disjunctions arises from the observation that Pinsker's inequality can be strengthened when the probabilities being compared are small. The analysis of this paper lends support to a view of Winnow as a fast, incremental approximation to the maximum entropy discrimination approach, and suggests a variant of Winnow that corresponds more closely to the inductive bias of maximum entropy.

2 Preliminaries

Let n be the number of base classifiers. To avoid clutter, for the rest of the paper, "probability distribution" should be understood to mean "probability distribution over {1, ..., n}."

2.1 Margins

For u ∈ [0, 1], define σ(u) to be 1 if u ≥ 1/2, and 0 otherwise. For a feature vector x ∈ {0, 1}^n and a class designation y ∈ {0, 1}, say that a probability distribution p is correct with margin γ if σ(p · x) = y and |p · x − 1/2| ≥ γ. If x and y were encountered in a trial of a learning algorithm, we say that p is correct with margin γ on that trial.

2.2 Entropy, relative entropy, and variation

Recall that, for probability distributions p = (p_1, ..., p_n) and q = (q_1, ..., q_n),

• the entropy of p, denoted by H(p), is defined by Σ_{i=1}^n p_i ln(1/p_i),

• the relative entropy between p and q, denoted by D(p||q), is defined by Σ_{i=1}^n p_i ln(p_i/q_i), and

• the variation distance between p and q, denoted by V(p, q), is defined to be the maximum difference between the probabilities that they assign to any set:

V(p, q) = max_{x∈{0,1}^n} (p · x − q · x) = (1/2) Σ_{i=1}^n |p_i − q_i|. (1)

Relative entropy and variation distance are related by Pinsker's inequality.

Lemma 1 ([23]) For all p and q, D(p||q) ≥ 2V(p, q)².
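The three quantities above, and Pinsker's inequality relating two of them, are easy to check numerically (a small sketch; function names ours):

```python
import math

def entropy(p):
    # H(p) = sum_i p_i ln(1 / p_i), with the convention 0 ln(1/0) = 0
    return sum(pi * math.log(1.0 / pi) for pi in p if pi > 0)

def rel_entropy(p, q):
    # D(p||q) = sum_i p_i ln(p_i / q_i)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def variation(p, q):
    # V(p, q) = (1/2) sum_i |p_i - q_i|
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))
```

For example, with p = (0.7, 0.2, 0.1) and q uniform over three outcomes, D(p||q) ≈ 0.297 and 2V(p, q)² ≈ 0.269, consistent with Lemma 1; one can also confirm the identity D(p||u) = ln n − H(p) used in the proof of Theorem 3 below.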
2.3 Information geometry

Relative entropy obeys something like the Pythagorean Theorem.

Lemma 2 ([9]) Suppose q is a probability distribution, C is a convex set of probability distributions, and r is the element of C that minimizes D(r||q). Then for any p ∈ C, D(p||q) ≥ D(p||r) + D(r||q). If C can be defined by a system of linear equations, then D(p||q) = D(p||r) + D(r||q).

3 Maximum Entropy with Margin

In this section, we will analyze the algorithm OME_γ ("on-line maximum entropy") that at the t-th trial

• chooses p_t to maximize the entropy H(p_t), subject to the constraint that it is correct with margin γ on all pairs (x_s, y_s) seen in the past (with s < t),

• predicts 1 if and only if p_t · x_t ≥ 1/2.

In our analysis, we will assume that there is always a feasible p_t. The following is our main result.

Theorem 3 If there is a fixed probability distribution p* that is correct with margin γ on all trials, OME_γ makes at most (ln n)/(2γ²) mistakes.

Proof: We will show that a mistake causes the entropy of the hypothesis to drop by at least 2γ². Since the constraints only become more restrictive, the entropy never increases, and so the fact that the entropy lies between 0 and ln n will complete the proof. Suppose trial t was a mistake. The definition of p_{t+1} ensures that p_{t+1} · x_t is on the correct side of 1/2 by at least γ. But p_t · x_t was on the wrong side of 1/2. Thus |p_{t+1} · x_t − p_t · x_t| ≥ γ. Either p_{t+1} · x_t − p_t · x_t ≥ γ, or the bitwise complement c(x_t) of x_t satisfies p_{t+1} · c(x_t) − p_t · c(x_t) ≥ γ. Thus V(p_{t+1}, p_t) ≥ γ. Therefore, Pinsker's Inequality (Lemma 1) implies that

D(p_{t+1}||p_t) ≥ 2γ². (2)

Let C_t be the set of all probability distributions that satisfy the constraints in effect when p_t was chosen, and let u = (1/n, ..., 1/n). Since p_{t+1} is in C_t (it must satisfy the constraints that p_t did), Lemma 2 implies D(p_{t+1}||u) ≥ D(p_{t+1}||p_t) + D(p_t||u), and thus D(p_{t+1}||u) − D(p_t||u) ≥ D(p_{t+1}||p_t), which, since D(p||u) = (ln n) − H(p) for all p, implies H(p_t) − H(p_{t+1}) ≥ D(p_{t+1}||p_t).
Applying (2), we get H(p_t) − H(p_{t+1}) ≥ 2γ². As described above, this completes the proof.

Because H(p_t) is always at least H(p*), the same analysis leads to a mistake bound of (ln n − H(p*))/(2γ²). Further, a nearly identical proof establishes the following (details are omitted from this abstract).

Theorem 4 Suppose OME_γ is modified so that p_1 is set to be something other than the uniform distribution, and each p_t minimizes D(p_t||p_1) subject to the same constraints. If there is a fixed p* that is correct with margin γ on all trials, the modified algorithm makes at most D(p*||p_1)/(2γ²) mistakes.

4 Maximum Entropy for Learning Disjunctions

In this section, we show how the maximum entropy principle can be used to efficiently learn disjunctions. For a threshold b, define σ_b(x) to be 1 if x ≥ b and 0 otherwise. For a feature vector x ∈ {0, 1}^n and a class designation y ∈ {0, 1}, say that p is correct at threshold b with margin γ if σ_b(p · x) = y and |p · x − b| ≥ γ. The algorithm OME_{b,γ+,γ−} analyzed in this section, on the t-th trial,

• chooses p_t to maximize the entropy H(p_t), subject to the constraint that it is correct at threshold b with margin γ+ on all pairs (x_s, y_s) with y_s = 1 seen in the past (with s < t), and correct at threshold b with margin γ− on all such pairs (x_s, y_s) with y_s = 0, then

• predicts 1 if and only if p_t · x_t ≥ b.

Note that the algorithm OME_γ considered in Section 3 can also be called OME_{1/2,γ,γ}. For p, q ∈ [0, 1], define d(p||q) = D((p, 1−p)||(q, 1−q)), often called "entropic loss."

Lemma 5 If there is an x ∈ {0, 1}^n such that p · x = p̄ and q · x = q̄, then D(p||q) ≥ d(p̄||q̄).

Proof: Application of Lagrange multipliers, together with the fact that D is convex [6], implies that D(p||q) is minimized, subject to the constraints that p · x = p̄ and q · x = q̄, when (1) p_i is the same for all i with x_i = 1, (2) q_i is the same for all i with x_i = 1, (3) p_i is the same for all i with x_i = 0, and (4) q_i is the same for all i with x_i = 0.
The above four properties, together with the constraints, are enough to uniquely specify p and q. Evaluating D(p||q) in this case gives the result.

Theorem 6 Suppose there is a probability distribution p* that is correct at threshold b, with a margin γ+ on all trials t with y_t = 1, and with margin γ− on all trials with y_t = 0. Then OME_{b,γ+,γ−} makes at most (ln n)/min{d(b+γ+||b), d(b−γ−||b)} mistakes.

Proof: The outline of the proof is similar to the proof of Theorem 3. We will show that mistakes cause the entropy of the algorithm's hypothesis to decrease. Arguing as in the proof of Theorem 3, H(p_{t+1}) ≤ H(p_t) − D(p_{t+1}||p_t). Lemma 5 then implies that

H(p_{t+1}) ≤ H(p_t) − d(p_{t+1} · x_t||p_t · x_t). (3)

If there was a mistake on trial t for which y_t = 1, then p_t · x_t ≤ b and p_{t+1} · x_t ≥ b + γ+. Thus in this case d(p_{t+1} · x_t||p_t · x_t) ≥ d(b + γ+||b). Similarly, if there was a mistake on trial t for which y_t = 0, then d(p_{t+1} · x_t||p_t · x_t) ≥ d(b − γ−||b). Once again, these two bounds on d(p_{t+1} · x_t||p_t · x_t), together with (3) and the fact that the entropy is between 0 and ln n, complete the proof.

The analysis of Theorem 6 can also be used to prove bounds for the case in which mistakes of different types have different costs, as considered in [12]. Theorem 6 improves on Theorem 3 even in the case in which γ+ = γ− and b = 1/2. For example, if γ = 1/4, Theorem 6 gives a bound of 7.65 ln n, where Theorem 3 gives an 8 ln n bound. Next, we apply Theorem 6 to analyze the problem of learning disjunctions.

Corollary 7 If there are k of the n features such that each y_t is the disjunction of those features in x_t, then algorithm OME_{1/(ek), 1/k−1/(ek), 1/(ek)} makes at most ek ln n mistakes.

Proof Sketch: If the target weight vector p* assigns equal weight to each of the variables in the disjunction, when y = 1, the weight of variables evaluating to 1 is at least 1/k, and when y = 0, it is 0. So the hypothesis of Theorem 6 is satisfied when b = 1/(ek), γ+ = 1/k − b and γ− = b.
Plugging into Theorem 6, simplifying and overapproximating completes the proof.

To get a more readable, but weaker, variant of Theorem 6, we will use the following bound, implicit in the analysis of Angluin and Valiant [1] (see Theorem 1.1 of [10] for a more explicit proof, and [24] for a closely related bound). It improves on Pinsker's inequality (Lemma 1) when n = 2, p is small, and q is close to p.

Lemma 8 ([1]) If 0 ≤ p ≤ 2q, then d(p||q) ≥ (p−q)²/(3q).

The following is a direct consequence of Lemma 8 and Theorem 6. Note that in the case of disjunctions, it leads to a weaker 6k ln n bound.

Theorem 9 If there is a probability distribution p* that is correct at threshold b with a margin γ on all trials, then OME_{b,γ,γ} makes at most (3b ln n)/γ² mistakes.

5 Relaxed on-line maximum entropy algorithms

Let us refer to the halfspace of probability distributions that satisfy the constraint of trial t as T_t, and to the associated separating hyperplane as J_t. Recall that C_t is the set of feasible solutions to all the constraints in effect when p_t is chosen. So p_{t+1} maximizes entropy subject to membership in C_{t+1} = T_t ∩ C_t.

Figure 1: In ROME, the constraints C_t in effect before the t-th round are replaced by the halfspace S_t.

Our proofs only used the following facts about the OME algorithm: (a) p_{t+1} ∈ T_t, (b) p_t is the maximum entropy member of C_t, and (c) p_{t+1} ∈ C_t. Suppose A_t is the set of weight vectors with entropy at least that of p_t. Let H_t be the hyperplane tangent to A_t at p_t. Finally, let S_t be the halfspace with boundary H_t containing p_{t+1}. (See Figure 1.) Then (a), (b) and (c) hold if C_t is replaced with S_t. (The least obvious is (b), which follows since H_t is tangent to A_t at p_t, and the entropy function is strictly concave.) Also, as previously observed by Littlestone [19], the algorithm might just as well not respond to trials in which there is not a mistake. Let us refer to an algorithm that does both of these as a Relaxed On-line Maximum Entropy (ROME) algorithm.
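The entropic loss d(p||q) and the lower bound of Lemma 8 above can likewise be checked numerically on a grid (a small sketch; the function name is ours):

```python
import math

def entropic_loss(p, q):
    """Binary relative entropy d(p||q) = D((p, 1-p) || (q, 1-q)),
    with the usual convention 0 ln 0 = 0."""
    out = 0.0
    if p > 0:
        out += p * math.log(p / q)
    if p < 1:
        out += (1 - p) * math.log((1 - p) / (1 - q))
    return out
```

Sweeping p over [0, min(1, 2q)] for several values of q confirms d(p||q) ≥ (p−q)²/(3q) on every grid point, in line with Lemma 8.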
A similar observation regarding an on-line SVM algorithm led to the simple ROMMA algorithm [18]. In that case, it was possible to obtain a simple closed-form expression for the new weight vector. Matters are only slightly more complicated here.

Proposition 10 If trial t is a mistake, and q maximizes entropy subject to membership in S_t ∩ T_t, then it is on the separating hyperplane for T_t.

Proof: Because q and p both satisfy S_t, any convex combination of the two satisfies S_t. Thus, if q were on the interior of T_t, we could find a probability distribution with higher entropy that still satisfies both S_t and T_t by taking a tiny step from q toward p. This would contradict the assumption that q is the maximum entropy member of S_t ∩ T_t.

This implies that the next hypothesis of a ROME algorithm is either on J_t (the separating hyperplane of T_t) only, or on both J_t and H_t (the separating hyperplane of S_t). The following theorem will enable us to obtain a formula in either case.

Lemma 11 ([9] (Theorem 3.1)) Suppose q is a probability distribution, and C is a set defined by linear constraints as follows: for an m × n real matrix A and an m-dimensional column vector b, C = {r : Ar = b}. Then if r is the member of C minimizing D(r||q), there are scalar constants Z, c_1, ..., c_m such that for all i ∈ {1, ..., n}, r_i = exp(Σ_{j=1}^m c_j a_{j,i}) q_i / Z.

If the next hypothesis p_{t+1} of a ROME algorithm is on H_t, then by Lemma 2, it and all other members of H_t satisfy D(p_{t+1}||u) = D(p_{t+1}||p_t) + D(p_t||u). Thus, in this case, p_{t+1} also minimizes D(q||p_t) from among the members q of H_t ∩ J_t. Thus, Lemma 11 implies that p_{t+1,i}/p_{t,i} is the same for all i with x_i = 1, and the same for all i with x_i = 0. This implies that, for ROME_{b,γ+,γ−}, if there was a mistake on a trial t,

p_{t+1,i} = (b+γ+) p_{t,i} / (p_t · x_t)            if x_{t,i} = 1 and y_t = 1,
p_{t+1,i} = (1−(b+γ+)) p_{t,i} / (1−(p_t · x_t))    if x_{t,i} = 0 and y_t = 1,
p_{t+1,i} = (b−γ−) p_{t,i} / (p_t · x_t)            if x_{t,i} = 1 and y_t = 0,
p_{t+1,i} = (1−(b−γ−)) p_{t,i} / (1−(p_t · x_t))    if x_{t,i} = 0 and y_t = 0.   (4)
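Update (4) simply rescales the weights of the active and inactive features so that the vote lands exactly on the margin; a minimal sketch (the function name and argument names are ours):

```python
def rome_update(p, x, y, b, gp, gm):
    """Apply the multiplicative ROME-style update after a mistake on (x, y).
    The new vote p'.x equals b + gp when y = 1 and b - gm when y = 0."""
    target = b + gp if y == 1 else b - gm
    px = sum(pi for pi, xi in zip(p, x) if xi == 1)  # current vote p.x
    return [target * pi / px if xi == 1 else (1 - target) * pi / (1 - px)
            for pi, xi in zip(p, x)]
```

Starting from the uniform distribution over four voters with x = (1, 1, 0, 0), y = 1, b = 1/2 and γ+ = 0.1, the update returns (0.3, 0.3, 0.2, 0.2): a proper distribution whose vote on x is exactly 0.6.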
Note that this updates the weights multiplicatively, like Winnow and Weighted Majority. If p_{t+1} is not on the separating hyperplane for S_t, then it must maximize entropy subject to membership in T_t alone, and therefore subject to membership in J_t. In this case, Lemma 11 implies

p_{t+1,i} = (b+γ+) / |{j : x_{t,j} = 1}|        if x_{t,i} = 1 and y_t = 1,
p_{t+1,i} = (1−(b+γ+)) / |{j : x_{t,j} = 0}|    if x_{t,i} = 0 and y_t = 1,
p_{t+1,i} = (b−γ−) / |{j : x_{t,j} = 1}|        if x_{t,i} = 1 and y_t = 0,
p_{t+1,i} = (1−(b−γ−)) / |{j : x_{t,j} = 0}|    if x_{t,i} = 0 and y_t = 0.   (5)

If this is the case, then p_{t+1} defined as in (5) should be a member of S_t. How to test for membership in S_t? Evaluating the gradient of H at p_t, and simplifying a bit, we can see that

S_t = { q : Σ_{i=1}^n q_i ln(1/p_{t,i}) ≤ H(p_t) }.

Summing up, a way to implement a ROME algorithm with the same mistake bound as the corresponding OME algorithm is to

• try defining p_{t+1} as in (5), and check whether the resulting p_{t+1} ∈ S_t; if so, use it, and

• if not, then define p_{t+1} as in (4) instead.

Acknowledgements

We are grateful to Tony Jebara and Tong Zhang for helpful conversations, and to an anonymous referee for suggesting a simplification of the proof of Theorem 3.

References
[1] D. Angluin and L. Valiant. Fast probabilistic algorithms for Hamiltonian circuits and matchings. Journal of Computer and System Sciences, 18(2):155–193, 1979.
[2] A. L. Berger, S. Della Pietra, and V. J. Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71, 1996.
[3] D. Blackwell. An analog of the minimax theorem for vector payoffs. Pac. J. Math., 6:1–8, 1956.
[4] A. Blum, 2002. http://www-2.cs.cmu.edu/~avrim/ML02/lect0418.txt.
[5] N. Cesa-Bianchi, A. Krogh, and M. Warmuth. Bounds on approximate steepest descent for likelihood maximization in exponential families. IEEE Transactions on Information Theory, 40(4):1215–1218, 1994.
[6] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 1991.
[7] K. Crammer, O. Dekel, S. Shalev-Shwartz, and Y. Singer.
Online passive-aggressive algorithms. NIPS, 2003.
[8] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. In COLT, pages 99–115, 2001.
[9] I. Csiszár. I-divergence geometry of probability distributions and minimization problems. Annals of Probability, 3:146–158, 1975.
[10] D. P. Dubhashi and A. Panconesi. Concentration of measure for the analysis of randomized algorithms, 1998. Monograph.
[11] G. J. Gordon. Regret bounds for prediction problems. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 29–40. ACM Press, New York, NY, 1999.
[12] D. P. Helmbold, N. Littlestone, and P. M. Long. On-line learning with linear loss constraints. Information and Computation, 161(2):140–171, 2000.
[13] M. Herbster and M. K. Warmuth. Tracking the best linear predictor. Journal of Machine Learning Research, 1:281–309, 2001.
[14] T. Jaakkola, M. Meila, and T. Jebara. Maximum entropy discrimination. NIPS, 1999.
[15] J. Kivinen and M. Warmuth. Boosting as entropy projection. COLT, 1999.
[16] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1–63, 1997.
[17] J. Langford, M. Seeger, and N. Megiddo. An improved predictive accuracy bound for averaging classifiers. ICML, pages 290–297, 2001.
[18] Y. Li and P. M. Long. The relaxed online maximum margin algorithm. Machine Learning, 46(1-3):361–387, 2002.
[19] N. Littlestone. Mistake Bounds and Logarithmic Linear-threshold Learning Algorithms. PhD thesis, UC Santa Cruz, 1989.
[20] N. Littlestone, P. M. Long, and M. K. Warmuth. On-line learning of linear functions. Computational Complexity, 5:1–23, 1995. Preliminary version in STOC'91.
[21] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108:212–261, 1994.
[22] A. B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, pages 615–622, 1962.
[23] M. S. Pinsker. Information and Information Stability of Random Variables and Processes. Holden-Day, 1964. [24] F. Topsoe. Some inequalities for information divergence and related measures of discrimination. IEEE Trans. Inform. Theory, 46(4):1602–1609, 2001. [25] P. Vaidya. A new algorithm for minimizing convex functions over convex sets. FOCS, pages 338–343, 1989. [26] T. Zhang. Regularized winnow methods. NIPS, pages 703–709, 2000. [27] T. Zhang. A sequential approximation bound for some sample-dependent convex optimization problems with applications in learning. COLT, pages 65–81, 2001.
|
2004
|
145
|
2,558
|
A harmonic excitation state-space approach to blind separation of speech

Rasmus Kongsgaard Olsson and Lars Kai Hansen
Informatics and Mathematical Modelling, Technical University of Denmark, 2800 Lyngby, Denmark
rko,lkh@imm.dtu.dk

Abstract

We discuss an identification framework for noisy speech mixtures. A block-based generative model is formulated that explicitly incorporates the time-varying harmonic plus noise (H+N) model for a number of latent sources observed through noisy convolutive mixtures. All parameters, including the pitches of the source signals, the amplitudes and phases of the sources, the mixing filters and the noise statistics, are estimated by maximum likelihood, using an EM algorithm. Exact averaging over the hidden sources is obtained using the Kalman smoother. We show that pitch estimation and source separation can be performed simultaneously. The pitch estimates are compared to laryngograph (EGG) measurements. Artificial and real room mixtures are used to demonstrate the viability of the approach. Intelligible speech signals are re-synthesized from the estimated H+N models.

1 Introduction

Our aim is to understand the properties of mixtures of speech signals within a generative statistical framework. We consider convolutive mixtures, i.e.,

$$x_t = \sum_{k=0}^{L-1} A_k s_{t-k} + n_t, \qquad (1)$$

where the elements of the source signal vector $s_t$, i.e., the $d_s$ statistically independent source signals, are convolved with the corresponding elements of the filter matrix $A_k$. The multichannel sensor signal $x_t$ is furthermore degraded by additive Gaussian white noise. It is well known that separation of the source signals based on second-order statistics is infeasible in general. Consider the second-order statistic

$$\langle x_t x_{t'}^\top \rangle = \sum_{k,k'=0}^{L-1} A_k \langle s_{t-k} s_{t'-k'}^\top \rangle A_{k'}^\top + R, \qquad (2)$$

where $R$ is the (diagonal) noise covariance matrix.
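As a concrete illustration of the mixture model (1), a noisy convolutive mixture can be simulated in a few lines. This is a minimal sketch; the function name `convolutive_mix` and the chosen dimensions are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def convolutive_mix(S, A, noise_std=0.01):
    """Noisy convolutive mixture x_t = sum_k A_k s_{t-k} + n_t, as in eq. (1).

    S : (d_s, T) array of source signals
    A : (L, d_x, d_s) array of mixing filter matrices A_k
    """
    L, d_x, d_s = A.shape
    T = S.shape[1]
    X = np.zeros((d_x, T))
    for t in range(T):
        for k in range(L):
            if t - k >= 0:
                X[:, t] += A[k] @ S[:, t - k]
    # additive Gaussian white sensor noise n_t
    return X + noise_std * rng.standard_normal((d_x, T))

S = rng.standard_normal((2, 200))          # two white example sources
A = rng.standard_normal((5, 2, 2)) * 0.5   # 2x2 FIR filters of length L = 5
X = convolutive_mix(S, A)
print(X.shape)  # (2, 200)
```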
If the sources can be assumed stationary white noise, the source covariance matrix can be assumed proportional to the unit matrix without loss of generality, and we see that the statistic is symmetric to a common rotation of all mixing matrices, $A_k \to A_k U$. This rotational invariance means that the acquired statistic is not informative enough to identify the mixing matrix, hence the source time series. However, if we consider stationary sources with known, non-trivial autocorrelations $\langle s_t s_{t'}^\top \rangle = G(t - t')$, and we are given access to measurements involving multiple values of $G(t - t')$, the rotational degrees of freedom are constrained and we will be able to recover the mixing matrices up to a choice of sign and scale of each source time series. Extending this argument by the observation that the mixing model (1) is invariant to filtering of a given column of the convolutive filter, provided that the inverse filter is applied to the corresponding source signal, we see that it is infeasible to identify the mixing matrices if these arbitrary inverse filters can be chosen so that they 'whiten' the sources; see also [1]. For non-stationary sources, on the other hand, the autocorrelation functions vary through time and it is not possible to choose a single common whitening filter for each source. This means that the mixing matrices may be identifiable from multiple estimates of the second-order correlation statistic (2) for non-stationary sources. An analysis in terms of the number of free parameters vs. the number of linear conditions is provided in [1] and [2]. Also in [2], the constraining effect of source non-stationarity was exploited by the simultaneous diagonalization of multiple estimates of the source power spectrum. In [3] we formulated a generative probabilistic model of this process and proved that it could estimate sources and mixing matrices in noisy mixtures. Blind source separation based on state-space models has been studied, e.g., in [4] and [5].
The approach is especially useful for including prior knowledge about the source signals and for handling noisy mixtures. One example of considerable practical importance is the case of speech mixtures. For speech mixtures, the generative model based on white-noise excitation may be improved using more realistic priors. Speech models based on sinusoidal excitation have been quite popular in speech modelling since [6]. This approach assumes that the speech signal is a time-varying mixture of a harmonic signal and a noise signal (H+N model). A recent application of this model for pitch estimation can be found in [7]. Also [8] and [9] exploit the harmonic structure of certain classes of signals for enhancement purposes. A related application is the BSS algorithm of [10], which uses the cross-correlation of the amplitude in different frequency bands. The state-space model naturally leads to maximum-likelihood estimation using the EM algorithm, e.g. [11], [12]. The EM algorithm has been used in related models: [13] and [14]. In this work we generalize our previous work on state-space models for blind source separation to include harmonic excitation, and demonstrate that it is possible to perform simultaneous un-mixing and pitch tracking.

2 The model

The assumption of time-variant source statistics helps identify parameters that would otherwise not be unique within the model. In the following, the measured signals are segmented into frames, in which they are assumed stationary. The mixing filters and observation noise covariance matrix are assumed stationary across all frames. The colored noise (AR) process that was used in [3] to model the sources is augmented to include a periodic excitation signal that is also time-varying. The specific choice of periodic basis function, i.e. the sinusoid, is motivated by the fact that the phase is linearizable, facilitating one-step optimization.
In frame $n$, source $i$ is represented by

$$s^n_{i,t} = \sum_{t'=1}^{p} f^n_{i,t'} s^n_{i,t-t'} + \sum_{k=1}^{K} \alpha^n_{i,k} \sin(\omega^n_{0,i} k t + \beta^n_i) + v^n_{i,t} = \sum_{t'=1}^{p} f^n_{i,t'} s^n_{i,t-t'} + \sum_{k=1}^{K} \left[ c^n_{i,2k-1} \sin(\omega^n_{0,i} k t) + c^n_{i,2k} \cos(\omega^n_{0,i} k t) \right] + v^n_{i,t} \qquad (3)$$

where $n \in \{1, 2, \ldots, N\}$ and $i \in \{1, 2, \ldots, d_s\}$. The innovation noise $v^n_{i,t}$ is i.i.d. Gaussian. Clearly, (3) represents an H+N model. The fundamental frequency $\omega^n_{0,i}$ enters the estimation problem in an inherently non-linear manner. In order to benefit from well-established estimation theory, the above recursion is fitted into the framework of Gaussian linear models; see [15]. The Kalman filter model is an instance of this model. The augmented state space is constructed by including a history of past samples for each source. Source vector $i$ in frame $n$ is defined as $s^n_{i,t} = [\, s^n_{i,t} \;\; s^n_{i,t-1} \;\; \ldots \;\; s^n_{i,t-p+1} \,]^\top$. All $s^n_{i,t}$'s are stacked in the total source vector $\bar{s}^n_t = [\, (s^n_{1,t})^\top \;\; (s^n_{2,t})^\top \;\; \ldots \;\; (s^n_{d_s,t})^\top \,]^\top$. The resulting state-space model is

$$\bar{s}^n_t = F^n \bar{s}^n_{t-1} + C^n u^n_t + \bar{v}^n_t, \qquad x^n_t = A \bar{s}^n_t + n^n_t,$$

where $\bar{v}_t \sim \mathcal{N}(0, Q)$, $n_t \sim \mathcal{N}(0, R)$ and $\bar{s}^n_1 \sim \mathcal{N}(\mu^n, \Sigma^n)$. The combined harmonics input vector is defined as $u^n_t = [\, (u^n_{1,t})^\top \;\; (u^n_{2,t})^\top \;\; \ldots \;\; (u^n_{d_s,t})^\top \,]^\top$, where the harmonics corresponding to source $i$ in frame $n$ are

$$u^n_{i,t} = [\, \sin(\omega^n_{0,i} t) \;\; \cos(\omega^n_{0,i} t) \;\; \ldots \;\; \sin(K\omega^n_{0,i} t) \;\; \cos(K\omega^n_{0,i} t) \,]^\top.$$

It is apparent that the matrix multiplication by $A$ constitutes a convolutive mixing of the sources, where the $d_x \times d_s$ channel filters are

$$A = \begin{bmatrix} a^\top_{11} & a^\top_{12} & \cdots & a^\top_{1 d_s} \\ a^\top_{21} & a^\top_{22} & \cdots & a^\top_{2 d_s} \\ \vdots & \vdots & \ddots & \vdots \\ a^\top_{d_x 1} & a^\top_{d_x 2} & \cdots & a^\top_{d_x d_s} \end{bmatrix}.$$

In order to implement the H+N source model, the parameter matrices are constrained as follows:

$$F^n = \mathrm{diag}(F^n_1, F^n_2, \ldots, F^n_{d_s}), \qquad F^n_i = \begin{bmatrix} f^n_{i,1} & f^n_{i,2} & \cdots & f^n_{i,p-1} & f^n_{i,p} \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix},$$

$$Q^n = \mathrm{diag}(Q^n_1, Q^n_2, \ldots, Q^n_{d_s}), \qquad (Q^n_i)_{jj'} = \begin{cases} q^n_i & j = j' = 1 \\ 0 & j \neq 1 \vee j' \neq 1 \end{cases},$$

$$C^n = \mathrm{diag}(C^n_1, C^n_2, \ldots, C^n_{d_s}), \qquad C^n_i = \begin{bmatrix} c^n_{i,1} & c^n_{i,2} & \cdots & c^n_{i,2K} \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}.$$

3 Learning

Having described the convolutive mixing problem in the general framework of linear Gaussian models, more specifically the Kalman filter model, optimal inference of the sources is obtained by the Kalman smoother. However, since the problem at hand is effectively blind, we also need to estimate the parameters. Along the lines of, e.g., [15], we invoke an EM approach. The log-likelihood is bounded from below: $\mathcal{L}(\theta) \geq \mathcal{F}(\theta, \hat{p}) \equiv \mathcal{J}(\theta, \hat{p}) - \mathcal{R}(\hat{p})$, with the definitions $\mathcal{J}(\theta, \hat{p}) \equiv \int dS\, \hat{p}(S) \log p(X, S|\theta)$ and $\mathcal{R}(\hat{p}) \equiv \int dS\, \hat{p}(S) \log \hat{p}(S)$. In accordance with standard EM theory, $\mathcal{J}(\theta, \hat{p})$ is optimized wrt. $\theta$ in the M-step. The E-step infers the relevant moments of the marginal posterior, $\hat{p} = p(S|X, \theta)$. For the Gaussian model the means are also source MAP estimates. The combined E and M steps are guaranteed not to decrease $\mathcal{L}(\theta)$.

3.1 E-step

The forward-backward recursions which comprise the Kalman smoother are employed in the E-step to infer moments of the source posterior, $p(S|X, \theta)$, i.e. the joint posterior of the sources conditioned on all observations. The relevant second-order statistics of this distribution in segment $n$ are the marginal posterior mean, $\hat{\bar{s}}^n_t \equiv \langle \bar{s}^n_t \rangle$, and autocorrelation, $M^n_{i,t} \equiv \langle s^n_{i,t} (s^n_{i,t})^\top \rangle \equiv [\, m^n_{i,1,t} \;\; m^n_{i,2,t} \;\; \ldots \;\; m^n_{i,L,t} \,]^\top$, along with the marginal lag-one covariance, $M^{1,n}_{i,t} \equiv \langle s^n_{i,t} (s^n_{i,t-1})^\top \rangle \equiv [\, m^{1,n}_{i,1,t} \;\; m^{1,n}_{i,2,t} \;\; \ldots \;\; m^{1,n}_{i,L,t} \,]^\top$. In particular, $m^n_{i,t}$ is the first element of $m^n_{i,1,t}$. All averages are performed over $p(S|X, \theta)$. The forward recursion also yields the log-likelihood, $\mathcal{L}(\theta)$.
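The block-structured state-space matrices above are straightforward to construct in code. The sketch below (helper names `companion` and `harmonic_input` and all dimensions are illustrative assumptions, not the authors' implementation) builds a companion-form transition block $F^n_i$ per source and the harmonic input vector $u^n_{i,t}$:

```python
import numpy as np
from scipy.linalg import block_diag

def companion(f):
    """Companion-form transition block F_i for AR coefficients f (length p)."""
    p = len(f)
    F = np.zeros((p, p))
    F[0, :] = f                 # first row holds the AR coefficients
    F[1:, :-1] = np.eye(p - 1)  # sub-diagonal shifts the state history
    return F

def harmonic_input(omega0, K, t):
    """u_{i,t} = [sin(w0 t), cos(w0 t), ..., sin(K w0 t), cos(K w0 t)]."""
    ks = np.arange(1, K + 1)
    return np.ravel(np.column_stack([np.sin(ks * omega0 * t),
                                     np.cos(ks * omega0 * t)]))

# two example sources with AR order p = 3
F1 = companion([0.5, 0.2, 0.1])
F2 = companion([0.9, -0.1, 0.0])
F = block_diag(F1, F2)             # block-diagonal total transition matrix
u = harmonic_input(0.3, K=4, t=7)  # 2K harmonic inputs for one source
print(F.shape, u.shape)  # (6, 6) (8,)
```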
3.2 M-step

The M-step utility function $\mathcal{J}(\theta, \hat{p})$ is defined as

$$\mathcal{J}(\theta, \hat{p}) = -\frac{1}{2} \sum_{n=1}^{N} \Big[ \sum_{i=1}^{d_s} \log\det \Sigma^n_i + (\tau - 1) \sum_{i=1}^{d_s} \log q^n_i + \tau \log\det R + \sum_{i=1}^{d_s} \big\langle (s^n_{i,1} - \mu^n_i)^\top (\Sigma^n_i)^{-1} (s^n_{i,1} - \mu^n_i) \big\rangle + \sum_{t=2}^{\tau} \sum_{i=1}^{d_s} \Big\langle \frac{1}{q^n_i} \big( s^n_{i,t} - (d^n_i)^\top z^n_{i,t} \big)^2 \Big\rangle + \sum_{t=1}^{\tau} \big\langle (x^n_t - A\bar{s}^n_t)^\top R^{-1} (x^n_t - A\bar{s}^n_t) \big\rangle \Big],$$

where $\langle\cdot\rangle$ signifies averaging over the source posterior from the previous E-step, $p(S|X, \theta)$, and $\tau$ is the frame length. The linear source parameters are grouped as $d^n_i \equiv [\, (f^n_i)^\top \;\; (c^n_i)^\top \,]^\top$ and $z^n_{i,t} \equiv [\, (s^n_{i,t-1})^\top \;\; (u^n_{i,t})^\top \,]^\top$, where $f^n_i \equiv [\, f_{i,1} \;\; f_{i,2} \;\; \ldots \;\; f_{i,p} \,]^\top$ and $c^n_i \equiv [\, c_{i,1} \;\; c_{i,2} \;\; \ldots \;\; c_{i,2K} \,]^\top$. Optimization of $\mathcal{J}(\theta, \hat{p})$ wrt. $\theta$ is straightforward (except for the $\omega^n_{0,i}$'s). Relatively minor changes are introduced to the estimators of, e.g., [12] in order to respect the special constrained format of the parameter matrices and to allow for an external input to the model. More details on the estimators for the correlated source model are given in [3]. It is in general difficult to maximize $\mathcal{J}(\theta, \hat{p})$ wrt. $\omega^n_{0,i}$, since several local maxima exist, e.g. at multiples of $\omega^n_{0,i}$; see e.g. [6]. This problem is addressed by narrowing the search range based on prior knowledge of the domain, e.g. that the pitch of speech lies in the range 50-400 Hz. A candidate estimate for $\omega^n_{0,i}$ is obtained by computing the autocorrelation function of $s^n_{i,t} - (f^n_i)^\top s^n_{i,t-1}$. A grid search is performed in the vicinity of the candidate. For each point in the grid we optimize $d^n_i$:

$$d^n_{i,\mathrm{new}} = \left[ \sum_{t=2}^{\tau} \begin{pmatrix} M^n_{i,t-1} & \hat{s}^n_{i,t-1} (u^n_{i,t})^\top \\ u^n_{i,t} (\hat{s}^n_{i,t-1})^\top & u^n_{i,t} (u^n_{i,t})^\top \end{pmatrix} \right]^{-1} \sum_{t=2}^{\tau} \begin{pmatrix} m^n_{i,t,t-1} \\ \hat{s}^n_{i,t} u^n_{i,t} \end{pmatrix} \qquad (4)$$

At each step of the EM algorithm, the parameters are normalized by enforcing $\|A_i\| = 1$, that is, enforcing a unit norm on the filter coefficients related to source $i$.

Figure 1: Amplitude spectrograms of the frequency range 0-4000 Hz; from left to right: the true sources, the estimated sources and the re-synthesized sources.

4 Experiment I: BSS and pitch tracking in a noisy artificial mixture

The performance of a pitch detector can be evaluated using electro-laryngograph (EGG) recordings, which are obtained from electrodes placed on the neck; see [7]. In the following experiment, speech signals from the TIMIT [16] corpus are used for which the EGG signals were measured, kindly provided by the 'festvox' project (http://festvox.org). Two male speech signals ($F_s$ = 16 kHz) were mixed through known mixing filters and degraded by additive white noise (SNR $\sim$ 20 dB), constructing two observation signals. The pitches of the speech signals were overlapping. The filter coefficients (of $2 \times 2 = 4$ FIR filter impulse responses) were:

$$A = \begin{bmatrix} [\,1.00 \;\; 0.35 \;\; -0.20 \;\; 0.00 \;\; 0.00\,] & [\,0.00 \;\; 0.00 \;\; -0.50 \;\; -0.30 \;\; 0.20\,] \\ [\,0.00 \;\; 0.00 \;\; 0.70 \;\; -0.20 \;\; 0.15\,] & [\,1.30 \;\; 0.60 \;\; 0.30 \;\; 0.00 \;\; 0.00\,] \end{bmatrix}$$

The signals were segmented into frames, $\tau = 320 \sim 20$ ms, and the order of the AR process was set to $p = 1$. The number of harmonics was limited to $K = 40$. The pitch grid search involved 30 re-estimations of $d^n_i$. Figure 1 shows the spectrograms of approximately 1 second of 1) the original sources, 2) the MAP source estimates and 3) the re-synthesized sources (from the estimated model parameters). It is seen that the sources were well separated. Also, the re-synthesizations are almost indistinguishable from the source estimates. In figure 2, the estimated pitches of both speech signals are shown along with the pitch of the EGG measurements.¹ The voiced sections of the speech were manually preselected; this step is easily automated. The estimated pitches do follow the 'true' pitches as provided by the EGG.

Figure 2: The estimated (dashed) and EGG-provided (solid) pitches as a function of time. The speech mixtures were artificially mixed from TIMIT utterances and white noise was added.
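The autocorrelation-based pitch candidate used to seed the grid search can be sketched as follows. The helper `pitch_candidate`, the synthetic residual, and the exact peak-picking rule are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pitch_candidate(residual, fs, fmin=50.0, fmax=400.0):
    """Candidate fundamental frequency from the autocorrelation peak,
    restricted to the speech pitch range (here 50-400 Hz)."""
    n = len(residual)
    # autocorrelation at non-negative lags
    r = np.correlate(residual, residual, mode='full')[n - 1:]
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + np.argmax(r[lag_min:lag_max + 1])
    return fs / lag

fs = 16000.0
t = np.arange(2048) / fs
residual = np.sin(2 * np.pi * 120.0 * t)   # synthetic 120 Hz residual
f0 = pitch_candidate(residual, fs)
print(round(f0, 1))
```

A full grid search would then re-estimate $d^n_i$ via (4) at frequencies near this candidate and keep the best-scoring one.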
The smoothness of the estimates is a further indication of the viability of the approach, as the pitch estimates are frame-local.

5 Experiment II: BSS and pitch tracking in a real mixture

The algorithm was further evaluated on real room recordings that were also used in [17].² Two male speakers synchronously count in English and Spanish ($F_s$ = 16 kHz). The mixtures were degraded with noise (SNR $\sim$ 20 dB). The filter length, the frame length, the order of the AR process and the number of harmonics were set to $L = 25$, $\tau = 320$, $p = 1$ and $K = 40$, respectively. Figure 3 shows the MAP source estimates and the re-synthesized sources. Features of speech such as amplitude modulation are clearly evident in the estimates and re-synthesizations.³ A listening test confirms: 1) the separation of the sources and 2) the good quality of the synthesized sources, reconfirming the applicability of the H+N model. Figure 4 displays the estimated pitches of the sources, where the voiced sections were manually preselected. Although the 'true' pitch is unavailable in this experiment, the smoothness of the frame-local pitch estimates is further support for the approach.

¹The EGG data are themselves noisy measurements of the hypothesized 'truth'. Bandpass filtering was used for preprocessing.
²The mixtures were obtained from http://inc2.ucsd.edu/˜tewon/ica_cnl.html.
³Note that the 'English' counter lowers the pitch throughout the sentence.

Figure 3: Spectrograms of the estimated (left) and re-synthesized sources (right) extracted from the 'one two ...' and 'uno dos ...' mixtures, source 1 and 2, respectively.

6 Conclusion

It was shown that prior knowledge on speech signals, and quasi-periodic signals in general, can be integrated into a linear non-stationary state-space model. As a result, the simultaneous separation of the speech sources and estimation of their pitches could be achieved.
It was demonstrated that the method could cope with noisy artificially mixed signals and real room mixtures. Future research concerns more realistic mixtures in terms of reverberation time and inclusion of further domain knowledge. It should be noted that the approach is computationally intensive; we are also investigating means for approximate inference and parameter estimation that would allow real-time implementation.

Acknowledgement

This work is supported by the Danish 'Oticon Fonden'.

References

[1] E. Weinstein, M. Feder and A. V. Oppenheim, Multi-channel signal separation by decorrelation, IEEE Trans. on speech and audio processing, vol. 1, no. 4, pp. 405-413, 1993.
[2] Parra, L., Spence, C., Convolutive blind separation of non-stationary sources, IEEE Trans. on speech and audio processing, vol. 5, pp. 320-327, 2000.
[3] Olsson, R. K., Hansen, L. K., Probabilistic blind deconvolution of non-stationary sources, Proc. EUSIPCO, 2004, accepted. Olsson, R. K., Hansen, L. K., Estimating the number of sources in a noisy convolutive mixture using BIC, International conference on independent component analysis, 2004, accepted. Preprints may be obtained from http://www.imm.dtu.dk/˜rko/research.htm.
[4] Gharbi, A. B. A., Salam, F., Blind separation of independent sources in linear dynamical media, NOLTA, Hawaii, 1993. http://www.egr.msu.edu/bsr/papers/blind_separation/nolta93.pdf

Figure 4: Pitch tracking in 'one two ...'/'uno dos ...' mixtures.

[5] Zhang, L., Cichocki, A., Blind deconvolution of dynamical systems: a state space approach, Journal of signal processing, vol. 4, no. 2, pp. 111-130, 2000.
[6] McAulay, R. J., Quatieri, T. F., Speech analysis/synthesis based on a sinusoidal representation, IEEE Trans. on acoustics, speech and signal processing, vol. 34, no. 4, pp. 744-754, 1986.
[7] Parra, L., Jain, U., Approximate Kalman filtering for the harmonic plus noise model,
IEEE Workshop on applications of signal processing to audio and acoustics, pp. 75-78, 2001.
[8] Nakatani, T., Miyoshi, M., and Kinoshita, K., One microphone blind dereverberation based on quasi-periodicity of speech signals, Advances in Neural Information Processing Systems 16 (to appear), MIT Press, 2004.
[9] Hu, G., Wang, D., Monaural speech segregation based on pitch tracking and amplitude modulation, IEEE Trans. neural networks, in press, 2004.
[10] Anemüller, J., Kollmeier, B., Convolutive blind source separation of speech signals based on amplitude modulation decorrelation, Journal of the Acoustical Society of America, vol. 108, pp. 2630, 2000.
[11] Dempster, A. P., Laird, N. M., and Rubin, D. B., Maximum likelihood from incomplete data via the EM algorithm, Journal of the Royal Statistical Society, vol. 39, pp. 1-38, 1977.
[12] Shumway, R. H., Stoffer, D. S., An approach to time series smoothing and forecasting using the EM algorithm, Journal of time series analysis, vol. 3, pp. 253-264, 1982.
[13] Moulines, E., Cardoso, J. F., Gassiat, E., Maximum likelihood for blind separation and deconvolution of noisy signals using mixture models, ICASSP, vol. 5, pp. 3617-3620, 1997.
[14] Cardoso, J. F., Snoussi, H., Delabrouille, J., Patanchon, G., Blind separation of noisy Gaussian stationary sources. Application to cosmic microwave background imaging, Proc. EUSIPCO, pp. 561-564, 2002.
[15] Roweis, S., Ghahramani, Z., A unifying review of linear Gaussian models, Neural Computation, vol. 11, pp. 305-345, 1999.
[16] Center for Speech Technology Research, University of Edinburgh, http://www.cstr.ed.ac.uk/
[17] Lee, T.-W., Bell, A. J., Orglmeister, R., Blind source separation of real world signals, Proc. IEEE international conference on neural networks, pp. 2129-2135, 1997.
|
2004
|
146
|
2,559
|
Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection

Koji Tsuda*†, Gunnar Rätsch*‡ and Manfred K. Warmuth§
*Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
†AIST CBRC, 2-43 Aomi, Koto-ku, Tokyo, 135-0064, Japan
‡Fraunhofer FIRST, Kekuléstr. 7, 12489 Berlin, Germany
§University of California at Santa Cruz
{koji.tsuda,gunnar.raetsch}@tuebingen.mpg.de, manfred@cse.ucsc.edu

Abstract

We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated with the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: on-line learning with a simple square loss and finding a symmetric positive definite matrix subject to symmetric linear constraints. The updates generalize the Exponentiated Gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the analysis of each algorithm generalizes to the non-diagonal case. We apply both new algorithms, called the Matrix Exponentiated Gradient (MEG) update and DefiniteBoost, to learn a kernel matrix from distance measurements.

1 Introduction

Most learning algorithms have been developed to learn a vector of parameters from data. However, an increasing number of papers are now dealing with more structured parameters. More specifically, when learning a similarity or a distance function among objects, the parameters are defined as a symmetric positive definite matrix that serves as a kernel (e.g. [14, 11, 13]).
Learning is typically formulated as a parameter updating procedure to optimize a loss function. The gradient descent update [6] is one of the most commonly used algorithms, but it is not appropriate when the parameters form a positive definite matrix, because the updated parameter is not necessarily positive definite. Xing et al. [14] solved this problem by always correcting the updated matrix to be positive. However no bound has been proven for this update-and-correction approach. In this paper, we introduce the Matrix Exponentiated Gradient update which works as follows: First, the matrix logarithm of the current parameter matrix is computed. Then a step is taken in the direction of the steepest descent. Finally, the parameter matrix is updated to the exponential of the modified log-matrix. Our update preserves symmetry and positive definiteness because the matrix exponential maps any symmetric matrix to a positive definite matrix. Bregman divergences play a central role in the motivation and the analysis of on-line learning algorithms [5]. A learning problem is essentially defined by a loss function, and a divergence that measures the discrepancy between parameters. More precisely, the updates are motivated by minimizing the sum of the loss function and the Bregman divergence, where the loss function is multiplied by a positive learning rate. Different divergences lead to radically different updates [6]. For example, the gradient descent is derived from the squared Euclidean distance, and the exponentiated gradient from the Kullback-Leibler divergence. We use the von Neumann divergence (also called quantum relative entropy) for measuring the discrepancy between two positive definite matrices [8]. We derive a new Matrix Exponentiated Gradient update from this divergence (which is a Bregman divergence for positive definite matrices). Finally we prove relative loss bounds using the von Neumann divergence as a measure of progress. 
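For concreteness, the normalized von Neumann divergence $\mathrm{tr}(\widetilde{W} \log \widetilde{W} - \widetilde{W} \log W)$ used as the measure of progress can be computed numerically. This is a minimal sketch with illustrative random trace-one matrices; the helper names are not from the paper:

```python
import numpy as np
from scipy.linalg import logm

def von_neumann_div(Wt, W):
    """Normalized von Neumann divergence tr(Wt log Wt - Wt log W)
    between two symmetric positive definite matrices of trace one."""
    return np.trace(Wt @ logm(Wt) - Wt @ logm(W)).real

rng = np.random.default_rng(1)

def random_density(d):
    """A well-conditioned random symmetric PD matrix with trace one."""
    A = rng.standard_normal((d, d))
    W = A @ A.T + d * np.eye(d)
    return W / np.trace(W)

W1, W2 = random_density(4), random_density(4)
d12 = von_neumann_div(W1, W2)
# the divergence is non-negative and zero iff the arguments coincide
print(d12 >= 0, np.isclose(von_neumann_div(W1, W1), 0.0))
```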
Also the following related key problem has received a lot of attention recently [14, 11, 13]: find a symmetric positive definite matrix that satisfies a number of symmetric linear inequality constraints. The new DefiniteBoost algorithm greedily chooses the most violated constraint and performs an approximated Bregman projection. In the diagonal case, we recover AdaBoost [9]. We also show how the convergence proof of AdaBoost generalizes to the non-diagonal case.

2 von Neumann Divergence or Quantum Relative Entropy

If $F$ is a real convex differentiable function on the parameter domain (symmetric $d \times d$ positive definite matrices) and $f(W) := \nabla F(W)$, then the Bregman divergence between two parameters $\widetilde{W}$ and $W$ is defined as

$$\Delta_F(\widetilde{W}, W) = F(\widetilde{W}) - F(W) - \mathrm{tr}[(\widetilde{W} - W) f(W)].$$

When choosing $F(W) = \mathrm{tr}(W \log W - W)$, then $f(W) = \log W$ and the corresponding Bregman divergence becomes the von Neumann divergence [8]:

$$\Delta_F(\widetilde{W}, W) = \mathrm{tr}(\widetilde{W} \log \widetilde{W} - \widetilde{W} \log W - \widetilde{W} + W). \qquad (1)$$

In this paper, we are primarily interested in the normalized case (when $\mathrm{tr}(W) = 1$). In this case, the symmetric positive definite matrices are related to density matrices commonly used in Statistical Physics, and the divergence simplifies to $\Delta_F(\widetilde{W}, W) = \mathrm{tr}(\widetilde{W} \log \widetilde{W} - \widetilde{W} \log W)$. If $W = \sum_i \lambda_i v_i v_i^\top$ is our notation for the eigenvalue decomposition, then we can rewrite the normalized divergence as

$$\Delta_F(\widetilde{W}, W) = \sum_i \tilde{\lambda}_i \ln \tilde{\lambda}_i - \sum_{i,j} \tilde{\lambda}_i \ln \lambda_j \, (\tilde{v}_i^\top v_j)^2.$$

So this divergence quantifies the difference in the eigenvalues as well as the eigenvectors.

3 On-line Learning

In this section, we present a natural extension of the Exponentiated Gradient (EG) update [6] to an update for symmetric positive definite matrices. At the $t$-th trial, the algorithm receives a symmetric instance matrix $X_t \in \mathbb{R}^{d \times d}$. It then produces a prediction $\hat{y}_t = \mathrm{tr}(W_t X_t)$ based on the algorithm's current symmetric positive definite parameter matrix $W_t$.
Finally it incurs, for instance,¹ a quadratic loss $(\hat{y}_t - y_t)^2$, and updates its parameter matrix $W_t$. In the update we aim to solve the following problem:

$$W_{t+1} = \mathrm{argmin}_W \; \Delta_F(W, W_t) + \eta (\mathrm{tr}(W X_t) - y_t)^2, \qquad (2)$$

where the convex function $F$ defines the Bregman divergence. Setting the derivative with respect to $W$ to zero, we have

$$f(W_{t+1}) - f(W_t) + \eta \nabla [(\mathrm{tr}(W_{t+1} X_t) - y_t)^2] = 0. \qquad (3)$$

The update rule is derived by solving (3) with respect to $W_{t+1}$, but it is not solvable in closed form. A common way to avoid this problem is to approximate $\mathrm{tr}(W_{t+1} X_t)$ by $\mathrm{tr}(W_t X_t)$ [5]. Then we have the following update: $W_{t+1} = f^{-1}(f(W_t) - 2\eta(\hat{y}_t - y_t) X_t)$. In our case, $F(W) = \mathrm{tr}(W \log W - W)$ and thus $f(W) = \log W$ and $f^{-1}(W) = \exp W$. We also augment (2) with the constraint $\mathrm{tr}(W) = 1$, leading to the following Matrix Exponentiated Gradient (MEG) update:

$$W_{t+1} = \frac{1}{Z_t} \exp(\log W_t - 2\eta(\hat{y}_t - y_t) X_t), \qquad (4)$$

where the normalization factor $Z_t$ is $\mathrm{tr}[\exp(\log W_t - 2\eta(\hat{y}_t - y_t) X_t)]$. Note that in the above update, the exponent $\log W_t - 2\eta(\hat{y}_t - y_t) X_t$ is an arbitrary symmetric matrix, and the matrix exponential converts this matrix back into a symmetric positive definite matrix. A numerically stable version of the MEG update is given in Section 3.2.

¹For the sake of simplicity, we use the simple quadratic loss: $L_t(W) = (\mathrm{tr}(X_t W) - y_t)^2$. For the general update, the gradient $\nabla L_t(W_t)$ is exponentiated in the update (4) and this gradient must be symmetric. Following [5], more general loss functions (based on Bregman divergences) are amenable to our techniques.
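A direct, unoptimized implementation of the MEG update (4) might look as follows; the function name and learning rate are illustrative, and `scipy.linalg.expm`/`logm` stand in for the eigendecomposition-based routine of Section 3.2:

```python
import numpy as np
from scipy.linalg import expm, logm

def meg_update(W, X, y, eta=0.1):
    """One Matrix Exponentiated Gradient step, eq. (4):
    W' = exp(log W - 2*eta*(yhat - y)*X) / Z, with yhat = tr(W X)."""
    yhat = np.trace(W @ X)
    M = expm(logm(W) - 2 * eta * (yhat - y) * X)
    return (M / np.trace(M)).real   # normalize the trace to one

d = 3
W = np.eye(d) / d                   # uniform initial density matrix
rng = np.random.default_rng(2)
A = rng.standard_normal((d, d))
X = (A + A.T) / 2                   # symmetric instance matrix
W = meg_update(W, X, y=0.5)
# trace one and positive definiteness are preserved by construction
print(np.isclose(np.trace(W), 1.0), (np.linalg.eigvalsh(W) > 0).all())
```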
3.1 Relative Loss Bounds

We now begin with the definitions needed for the relative loss bounds. Let $S = (X_1, y_1), \ldots, (X_T, y_T)$ denote a sequence of examples, where the instance matrices $X_t \in \mathbb{R}^{d \times d}$ are symmetric and the labels $y_t \in \mathbb{R}$. For any symmetric positive semi-definite matrix $U$ with $\mathrm{tr}(U) = 1$, define its total loss as $L_U(S) = \sum_{t=1}^T (\mathrm{tr}(U X_t) - y_t)^2$. The total loss of the on-line algorithm is $L_{MEG}(S) = \sum_{t=1}^T (\mathrm{tr}(W_t X_t) - y_t)^2$. We prove a bound on the relative loss $L_{MEG}(S) - L_U(S)$ that holds for any $U$. The proof generalizes a similar bound for the Exponentiated Gradient update (Lemmas 5.8 and 5.9 of [6]). The relative loss bound is derived in two steps: Lemma 3.1 bounds the relative loss for an individual trial and Lemma 3.2 for a whole sequence (proofs are given in the full paper).

Lemma 3.1 Let $W_t$ be any symmetric positive definite matrix. Let $X_t$ be any symmetric matrix whose smallest and largest eigenvalues satisfy $\lambda_{\max} - \lambda_{\min} \leq r$. Assume $W_{t+1}$ is produced from $W_t$ by the MEG update and let $U$ be any symmetric positive semi-definite matrix. Then for any constants $a$ and $b$ such that $0 < a \leq 2b/(2 + r^2 b)$ and any learning rate $\eta = 2b/(2 + r^2 b)$, we have

$$a(y_t - \mathrm{tr}(W_t X_t))^2 - b(y_t - \mathrm{tr}(U X_t))^2 \leq \Delta(U, W_t) - \Delta(U, W_{t+1}). \qquad (5)$$

In the proof, we use the Golden-Thompson inequality [3], i.e., $\mathrm{tr}[\exp(A + B)] \leq \mathrm{tr}[\exp(A)\exp(B)]$ for symmetric matrices $A$ and $B$. We also needed to prove the following generalization of Jensen's inequality to matrices: $\exp(\rho_1 A + \rho_2 (I - A)) \leq \exp(\rho_1) A + \exp(\rho_2)(I - A)$ for finite $\rho_1, \rho_2 \in \mathbb{R}$ and any symmetric matrix $A$ with $0 < A \leq I$. These two key inequalities will also be essential for the analysis of DefiniteBoost in the next section.

Lemma 3.2 Let $W_1$ and $U$ be arbitrary symmetric positive definite initial and comparison matrices, respectively. Then for any $c$ such that $\eta = 2c/(r^2(2 + c))$,

$$L_{MEG}(S) \leq \left(1 + \frac{c}{2}\right) L_U(S) + \left(\frac{1}{2} + \frac{1}{c}\right) r^2 \Delta(U, W_1). \qquad (6)$$

Proof. For maximum tightness of (5), $a$ should be chosen as $a = \eta = 2b/(2 + r^2 b)$. Let $b = c/r^2$, and thus $a = 2c/(r^2(2 + c))$. Then (5) is rewritten as

$$\frac{2c}{2 + c}(y_t - \mathrm{tr}(W_t X_t))^2 - c(y_t - \mathrm{tr}(U X_t))^2 \leq r^2(\Delta(U, W_t) - \Delta(U, W_{t+1})).$$

Adding the bounds for $t = 1, \ldots, T$, we get

$$\frac{2c}{2 + c} L_{MEG}(S) - c L_U(S) \leq r^2(\Delta(U, W_1) - \Delta(U, W_{T+1})) \leq r^2 \Delta(U, W_1),$$

which is equivalent to (6).

Assuming $L_U(S) \leq \ell_{\max}$ and $\Delta(U, W_1) \leq d_{\max}$, the bound (6) is tightest when $c = r\sqrt{2 d_{\max}/\ell_{\max}}$. Then we have $L_{MEG}(S) - L_U(S) \leq r\sqrt{2\ell_{\max} d_{\max}} + \frac{r^2}{2} \Delta(U, W_1)$.
3.2 Numerically stable MEG update

The MEG update is numerically unstable when the eigenvalues of $W_t$ are around zero. However, we can "unwrap" $W_{t+1}$ as follows:

$$W_{t+1} = \frac{1}{\tilde{Z}_t} \exp\Big(c_t I + \log W_1 - 2\eta \sum_{s=1}^{t} (\hat{y}_s - y_s) X_s\Big), \qquad (7)$$

where the constant $\tilde{Z}_t$ normalizes the trace of $W_{t+1}$ to one. As long as the eigenvalues of $W_1$ are not too small, the computation of $\log W_1$ is stable. Note that the update is independent of the choice of $c_t \in \mathbb{R}$. We incrementally maintain an eigenvalue decomposition of the matrix in the exponent ($O(n^3)$ per iteration):

$$V_t \Lambda_t V_t^\top = c_t I + \log W_1 - 2\eta \sum_{s=1}^{t} (\hat{y}_s - y_s) X_s,$$

where the constant $c_t$ is chosen so that the maximum eigenvalue of the above is zero. Now $W_{t+1} = V_t \exp(\Lambda_t) V_t^\top / \mathrm{tr}(\exp(\Lambda_t))$.

4 Bregman Projection and DefiniteBoost

In this section, we address the following Bregman projection problem²:

$$W^* = \mathrm{argmin}_W \; \Delta_F(W, W_1), \qquad \mathrm{tr}(W) = 1, \quad \mathrm{tr}(W C_j) \leq 0 \;\; \text{for } j = 1, \ldots, n, \qquad (8)$$

where the symmetric positive definite matrix $W_1$ of trace one is the initial parameter matrix, and $C_1, \ldots, C_n$ are arbitrary symmetric matrices. Prior knowledge about $W$ is encoded in the constraints, and the matrix closest to $W_1$ is chosen among the matrices satisfying all constraints. Tsuda and Noble [13] employed this approach for learning a kernel matrix among graph nodes, and this method can potentially be applied to learn a kernel matrix in other settings (e.g. [14, 11]). The problem (8) is a projection of $W_1$ onto the intersection of convex regions defined by the constraints. It is well known that the Bregman projection onto the intersection of convex regions can be solved by sequential projections onto each region [1]. In the original papers only asymptotic convergence was shown. More recently a connection [4, 7] was made to the AdaBoost algorithm, which has an improved convergence analysis [2, 9]. We generalize the latter algorithm and its analysis to symmetric positive definite matrices and call the new algorithm DefiniteBoost.
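The stable variant (7) — keeping the running exponent and shifting its largest eigenvalue to zero before exponentiating — can be sketched like this. The function name, the use of a fresh `eigh` per step instead of an incremental decomposition, and the example dimensions are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import logm

def stable_meg_step(G, X, y, eta=0.1):
    """Numerically stable MEG step, eq. (7): G holds the running exponent
    log W_1 - 2*eta*sum_s (yhat_s - y_s) X_s; the shift c_t puts the
    largest eigenvalue of the exponent at zero before exponentiation."""
    lam, V = np.linalg.eigh(G)
    lam = lam - lam.max()           # choose c_t so the max eigenvalue is 0
    W = (V * np.exp(lam)) @ V.T     # exp of the shifted exponent
    W = W / np.trace(W)             # current parameter matrix W_t
    yhat = np.trace(W @ X)          # prediction on this trial
    return G - 2 * eta * (yhat - y) * X, W

d = 3
G = logm(np.eye(d) / d).real        # exponent for W_1 = I/d
rng = np.random.default_rng(3)
for _ in range(5):
    A = rng.standard_normal((d, d))
    X = (A + A.T) / 2
    G, W = stable_meg_step(G, X, y=0.0)
print(np.isclose(np.trace(W), 1.0))
```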
As in the original setting, only approximate projections (Figure 1) are required to show fast convergence.

²Note that if $\eta$ is large, then the on-line update (2) becomes a Bregman projection subject to a single equality constraint $\mathrm{tr}(W X_t) = y_t$.

Figure 1: In (exact) Bregman projections, the intersection of convex sets (i.e., two lines here) is found by iterating projections onto each set. We project only approximately, so the projected point does not satisfy the current constraint. Nevertheless, global convergence to the optimal solution is guaranteed via our proofs.

Before presenting the algorithm, let us derive the dual problem of (8) by means of Lagrange multipliers $\gamma$:

$$\gamma^* = \mathrm{argmin}_\gamma \; \log \mathrm{tr}\, \exp\Big(\log W_1 - \sum_{j=1}^{n} \gamma_j C_j\Big), \qquad \gamma_j \geq 0. \qquad (9)$$

See [13] for a detailed derivation of the dual problem. When (8) is feasible, the optimal solution is described as $W^* = \frac{1}{Z(\gamma^*)} \exp(\log W_1 - \sum_{j=1}^n \gamma^*_j C_j)$, where $Z(\gamma^*) = \mathrm{tr}[\exp(\log W_1 - \sum_{j=1}^n \gamma^*_j C_j)]$.

4.1 Exact Bregman Projections

First, let us present the exact Bregman projection algorithm to solve (8). We start from the initial parameter $W_1$. At the $t$-th step, the most unsatisfied constraint is chosen: $j_t = \mathrm{argmax}_{j=1,\ldots,n} \, \mathrm{tr}(W_t C_j)$. Let us use $C_t$ as the short notation for $C_{j_t}$. Then, the following Bregman projection with respect to the chosen constraint is solved:

$$W_{t+1} = \mathrm{argmin}_W \; \Delta(W, W_t), \qquad \mathrm{tr}(W) = 1, \quad \mathrm{tr}(W C_t) \leq 0. \qquad (10)$$

By means of a Lagrange multiplier $\alpha$, the dual problem is described as

$$\alpha_t = \mathrm{argmin}_\alpha \; \mathrm{tr}[\exp(\log W_t - \alpha C_t)], \qquad \alpha \geq 0. \qquad (11)$$

Using the solution of the dual problem, $W_t$ is updated as

$$W_{t+1} = \frac{1}{Z_t(\alpha_t)} \exp(\log W_t - \alpha_t C_t), \qquad (12)$$

where the normalization factor is $Z_t(\alpha_t) = \mathrm{tr}[\exp(\log W_t - \alpha_t C_t)]$. Note that we can use the same numerically stable update as in the previous section.

4.2 Approximate Bregman Projections

The solution of (11) cannot be obtained in closed form.
However, one can use the following approximate solution:
$$\alpha_t = \frac{1}{\lambda^{\max}_t - \lambda^{\min}_t} \log \frac{1 + r_t/\lambda^{\max}_t}{1 + r_t/\lambda^{\min}_t}, \qquad (13)$$
when the eigenvalues of $C_t$ lie in the interval $[\lambda^{\min}_t, \lambda^{\max}_t]$ and $r_t = \mathrm{tr}(W_t C_t)$. Since the most unsatisfied constraint is chosen, $r_t \ge 0$ and thus $\alpha_t \ge 0$. Although the projection is done only approximately,³ the convergence of the dual objective (9) can be shown using the following upper bound.

³The approximate Bregman projection (with $\alpha_t$ as in (13)) can also be motivated as an on-line algorithm based on an entropic loss and learning rate one (following Section 3 and [4]).

Theorem 4.1 The dual objective (9) is bounded as
$$\mathrm{tr}\Big[\exp\Big(\log W_1 - \sum_{j=1}^{n} \gamma_j C_j\Big)\Big] \le \prod_{t=1}^{T} \rho(r_t), \qquad (14)$$
where
$$\rho(r_t) = \Big(1 - \frac{r_t}{\lambda^{\max}_t}\Big)^{\frac{\lambda^{\max}_t}{\lambda^{\max}_t - \lambda^{\min}_t}} \Big(1 - \frac{r_t}{\lambda^{\min}_t}\Big)^{\frac{-\lambda^{\min}_t}{\lambda^{\max}_t - \lambda^{\min}_t}}.$$
The dual objective is monotonically decreasing, because $\rho(r_t) \le 1$. Also, since $r_t$ corresponds to the maximum value among all constraint violations $\{r_j\}_{j=1}^{n}$, we have $\rho(r_t) = 1$ only if $r_t = 0$. Thus the dual objective continues to decrease until all constraints are satisfied.

4.3 Relation to Boosting

When all matrices are diagonal, DefiniteBoost degenerates to AdaBoost [9]: Let $\{x_i, y_i\}_{i=1}^{d}$ be the training samples, where $x_i \in \mathbb{R}^m$ and $y_i \in \{-1, 1\}$. Let $h_1(x), \ldots, h_n(x) \in [-1, 1]$ be the weak hypotheses. For the $j$-th hypothesis $h_j(x)$, let us define $C_j = \mathrm{diag}(y_1 h_j(x_1), \ldots, y_d h_j(x_d))$. Since $|y h_j(x)| \le 1$, $\lambda^{\max/\min}_t = \pm 1$ for any $t$. Setting $W_1 = I/d$, the dual objective (14) is rewritten as
$$\frac{1}{d} \sum_{i=1}^{d} \exp\Big(-y_i \sum_{j=1}^{n} \gamma_j h_j(x_i)\Big),$$
which is equivalent to the exponential loss function used in AdaBoost. Since $C_j$ and $W_1$ are diagonal, the matrix $W_t$ stays diagonal after the update. If $w_{ti} = [W_t]_{ii}$, the updating formula (12) becomes the AdaBoost update: $w_{t+1,i} = w_{ti} \exp(-\alpha_t y_i h_t(x_i)) / Z_t(\alpha_t)$. The approximate solution of $\alpha_t$ (13) becomes $\alpha_t = \frac{1}{2} \log \frac{1 + r_t}{1 - r_t}$, where $r_t$ is the weighted training error of the $t$-th hypothesis, i.e., $r_t = \sum_{i=1}^{d} w_{ti} y_i h_t(x_i)$.
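The degeneration to AdaBoost can be checked with a few lines of arithmetic. This is our own illustrative sketch (the weights and margins below are made up): for diagonal matrices the eigenvalue bounds are $\lambda^{\max} = 1$ and $\lambda^{\min} = -1$, and the approximate step size (13) collapses to the familiar AdaBoost step.

```python
import math

def alpha_approx(r, lam_max, lam_min):
    """Approximate projection step size, Eq. (13)."""
    return math.log((1.0 + r / lam_max) / (1.0 + r / lam_min)) / (lam_max - lam_min)

r = 0.4                                   # r_t = sum_i w_ti * y_i * h_t(x_i)
a_definite = alpha_approx(r, 1.0, -1.0)   # diagonal case: eigenvalues in [-1, 1]
a_adaboost = 0.5 * math.log((1.0 + r) / (1.0 - r))

# The diagonal update (12) is then exactly the AdaBoost reweighting.
w = [0.25, 0.25, 0.25, 0.25]              # W_t = diag(w), trace one
margins = [1.0, 1.0, -1.0, 1.0]           # y_i * h_t(x_i); example 2 is wrong
unnorm = [wi * math.exp(-a_adaboost * m) for wi, m in zip(w, margins)]
Z = sum(unnorm)
w_next = [u / Z for u in unnorm]          # remains a probability vector
```

The misclassified example gains weight while the trace-one (probability-vector) constraint is preserved, exactly as in the scalar AdaBoost update.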
5 Experiments on Learning Kernels

In this section, our technique is applied to learning a kernel matrix from a set of distance measurements. This application is not on-line per se, but it nevertheless shows that the theoretical bounds can be reasonably tight on natural data. When $K$ is a $d \times d$ kernel matrix among $d$ objects, the entry $K_{ij}$ characterizes the similarity between objects $i$ and $j$. In the feature space, $K_{ij}$ corresponds to the inner product between objects $i$ and $j$, and thus the Euclidean distance can be computed from the entries of the kernel matrix [10]. In some cases, the kernel matrix is not given explicitly, but only a set of distance measurements is available. The data are represented either as (i) quantitative distance values (e.g., the distance between $i$ and $j$ is 0.75), or (ii) qualitative evaluations (e.g., the distance between $i$ and $j$ is small) [14, 13]. Our task is to obtain a positive definite kernel matrix which fits the given distance data well.

On-line kernel learning In the first experiment, we consider the on-line learning scenario in which only one distance example is shown to the learner at each time step. The distance example at time $t$ is described as $\{a_t, b_t, y_t\}$, which indicates that the squared Euclidean distance between objects $a_t$ and $b_t$ is $y_t$. Let us define a time-developing sequence of kernel matrices as $\{W_t\}_{t=1}^{T}$, and the corresponding points in the feature space as $\{x_{ti}\}_{i=1}^{d}$ (i.e., $[W_t]_{ab} = x_{ta}^\top x_{tb}$). Then the total loss incurred by this sequence is
$$\sum_{t=1}^{T} \big(\|x_{t a_t} - x_{t b_t}\|^2 - y_t\big)^2 = \sum_{t=1}^{T} \big(\mathrm{tr}(W_t X_t) - y_t\big)^2,$$
where $X_t$ is a symmetric matrix whose $(a_t, a_t)$ and $(b_t, b_t)$ elements are 0.5, whose $(a_t, b_t)$ and $(b_t, a_t)$ elements are $-0.5$, and whose other elements are all zero.

Figure 2: Numerical results of on-line learning. (Left) Total loss against the number of iterations; the dashed line shows the loss bound. (Right) Classification error of the nearest neighbor classifier using the learned kernel; the dashed line shows the error of the target kernel.

We consider a controlled experiment in which the distance examples are created from a known target kernel matrix. We used a $52 \times 52$ kernel matrix among gyrB proteins of bacteria ($d = 52$). This dataset contains three bacteria species (see [12] for details). Each distance example is created by randomly choosing one element of the target kernel. The initial parameter was set as $W_1 = I/d$. When the comparison matrix $U$ is set to the target matrix, $L_U(S) = 0$ and $\ell_{\max} = 0$, because all the distance examples are derived from the target matrix. Therefore we chose the learning rate $\eta = 2$, which minimizes the relative loss bound of Lemma 3.2. The total loss of the kernel matrix sequence obtained by the matrix exponential update is shown in Figure 2 (left). In the plot, we have also shown the relative loss bound. The bound gives a reasonably tight performance guarantee; it is about twice the actual total loss. To evaluate the learned kernel matrix, the prediction accuracy of bacteria species by the nearest neighbor classifier is calculated (Figure 2, right), where the 52 proteins are randomly divided into 50% training and 50% testing data. The value shown in the plot is the test error averaged over 10 different divisions. It took a large number of iterations ($\approx 2 \times 10^5$) for the error rate to converge to the level of the target kernel. In practice one can often increase the learning rate for faster convergence, but here we chose the small rate suggested by our analysis in order to check the tightness of the bound.

Kernel learning by Bregman projection Next, let us consider a batch learning scenario in which we have a set of qualitative distance evaluations (i.e., inequality constraints).
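The on-line kernel learning experiment above can be reenacted in miniature. This is our own sketch, not the paper's setup: a small random trace-one target kernel stands in for the gyrB data, the targets are taken as y_t = tr(W_target X_t) so that the examples are noise-free, and the numerically stable update of Section 3.2 is used throughout.

```python
import numpy as np

def stable_exp_density(S):
    """exp(S)/tr(exp(S)), computed stably via eigendecomposition."""
    S = (S + S.T) / 2.0
    lam, V = np.linalg.eigh(S)
    e = np.exp(lam - lam.max())
    W = (V * e) @ V.T
    return W / np.trace(W)

def x_instance(d, a, b):
    """X_t with (a,a) = (b,b) = 0.5 and (a,b) = (b,a) = -0.5."""
    X = np.zeros((d, d))
    X[a, a] = X[b, b] = 0.5
    X[a, b] = X[b, a] = -0.5
    return X

rng = np.random.default_rng(0)
d, eta, T = 6, 2.0, 500
G = rng.standard_normal((d, d))
W_target = G @ G.T
W_target /= np.trace(W_target)             # trace-one target kernel

S = np.log(1.0 / d) * np.eye(d)            # exponent of the start W1 = I/d
W = stable_exp_density(S)
losses = []
for t in range(T):
    a, b = rng.choice(d, size=2, replace=False)
    Xt = x_instance(d, a, b)
    y = np.trace(W_target @ Xt)            # distance example from the target
    y_hat = np.trace(W @ Xt)
    losses.append((y_hat - y) ** 2)
    S = S - 2.0 * eta * (y_hat - y) * Xt   # running sum in the exponent, Eq. (7)
    W = stable_exp_density(S)
```

On this noise-free toy problem the per-step loss shrinks as $W_t$ approaches the target, mirroring (at a much smaller scale) the behaviour in Figure 2 (left).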
Given $n$ pairs of similar objects $\{a_j, b_j\}_{j=1}^{n}$, the inequality constraints are constructed as $\|x_{a_j} - x_{b_j}\|^2 \le \gamma$, $j = 1, \ldots, n$, where $\gamma$ is a predetermined constant. If $X_j$ is defined as in the previous section and $C_j = X_j - \gamma I$, the inequalities can be rewritten as $\mathrm{tr}(WC_j) \le 0$, $j = 1, \ldots, n$. The largest and smallest eigenvalues of any $C_j$ are $1 - \gamma$ and $-\gamma$, respectively. As in the previous section, distance examples are generated from the target kernel matrix between gyrB proteins. Setting $\gamma = 0.2/d$, we collected all object pairs whose distance in the feature space is less than $\gamma$, yielding 980 inequalities ($n = 980$). Figure 3 (left) shows the convergence of the dual objective function, as proven in Theorem 4.1. Convergence is much faster than in the previous experiment because, in the batch setting, one can choose the most unsatisfied constraint and optimize the step size as well. Figure 3 (right) shows the classification error of the nearest neighbor classifier. In contrast to the previous experiment, the error rate is higher than that of the target kernel matrix, because a substantial amount of information is lost in the conversion to inequality constraints.

Figure 3: Numerical results of Bregman projection. (Left) Convergence of the dual objective function. (Right) Classification error of the nearest neighbor classifier using the learned kernel.

6 Conclusion

We motivated and analyzed a new update for symmetric positive definite matrices using the von Neumann divergence. We showed that the standard bounds for on-line learning and boosting generalize to the case where the parameters are a symmetric positive definite matrix (of trace one) instead of a probability vector. As in quantum physics, the eigenvalues act as probabilities.

Acknowledgment We would like to thank B. Schölkopf, M. Kawanabe, J. Liao and W.S.
Noble for fruitful discussions. M.W. was supported by NSF grant CCR 9821087 and UC Discovery grant LSIT02-10110. K.T. and G.R. gratefully acknowledge partial support from the PASCAL Network of Excellence (EU #506778). Part of this work was done while all three authors were visiting the National ICT Australia in Canberra. References [1] L.M. Bregman. Finding the common point of convex sets by the method of successive projections. Dokl. Akad. Nauk SSSR, 165:487–490, 1965. [2] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997. [3] S. Golden. Lower bounds for the Helmholtz function. Phys. Rev., 137:B1127–B1128, 1965. [4] J. Kivinen and M. K. Warmuth. Boosting as entropy projection. In Proc. 12th Annu. Conference on Comput. Learning Theory, pages 134–144. ACM Press, New York, NY, 1999. [5] J. Kivinen and M. K. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45(3):301–329, 2001. [6] J. Kivinen and M.K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997. [7] J. Lafferty. Additive models, boosting, and inference for generalized divergences. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 125–133, New York, NY, 1999. ACM Press. [8] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000. [9] R.E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297–336, 1999. [10] B. Sch¨olkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002. [11] I.W. Tsang and J.T. Kwok. Distance metric learning with kernels. In Proceedings of the International Conference on Artificial Neural Networks (ICANN’03), pages 126–129, 2003. [12] K. Tsuda, S. Akaho, and K. Asai. 
The em algorithm for kernel matrix completion with auxiliary data. Journal of Machine Learning Research, 4:67–81, May 2003. [13] K. Tsuda and W.S. Noble. Learning kernels from biological networks by maximizing entropy. Bioinformatics, 2004. to appear. [14] E.P. Xing, A.Y. Ng, M.I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In S. Thrun S. Becker and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 505–512. MIT Press, Cambridge, MA, 2003.
Blind one-microphone speech separation: A spectral learning approach Francis R. Bach Computer Science University of California Berkeley, CA 94720 fbach@cs.berkeley.edu Michael I. Jordan Computer Science and Statistics University of California Berkeley, CA 94720 jordan@cs.berkeley.edu Abstract We present an algorithm to perform blind, one-microphone speech separation. Our algorithm separates mixtures of speech without modeling individual speakers. Instead, we formulate the problem of speech separation as a problem in segmenting the spectrogram of the signal into two or more disjoint sets. We build feature sets for our segmenter using classical cues from speech psychophysics. We then combine these features into parameterized affinity matrices. We also take advantage of the fact that we can generate training examples for segmentation by artificially superposing separately-recorded signals. Thus the parameters of the affinity matrices can be tuned using recent work on learning spectral clustering [1]. This yields an adaptive, speech-specific segmentation algorithm that can successfully separate one-microphone speech mixtures. 1 Introduction The problem of recovering signals from linear mixtures, with only partial knowledge of the mixing process and the signals—a problem often referred to as blind source separation— is a central problem in signal processing. It has applications in many fields, including speech processing, network tomography and biomedical imaging [2]. When the problem is over-determined, i.e., when there are no more signals to estimate (the sources) than signals that are observed (the sensors), generic assumptions such as statistical independence of the sources can be used in order to demix successfully [2]. Many interesting applications, however, involve under-determined problems (more sources than sensors), where more specific assumptions must be made in order to demix. 
In problems involving at least two sensors, progress has been made by appealing to sparsity assumptions [3, 4]. However, the most extreme case, in which there is only one sensor and two or more sources, is a much harder and still-open problem for complex signals such as speech. In this setting, simple generic statistical assumptions do not suffice. One approach to the problem involves a return to the spirit of classical engineering methods such as matched filters, and estimating specific models for specific sources—e.g., specific speakers in the case of speech [5, 6]. While such an approach is reasonable, it departs significantly from the desideratum of “blindness.” In this paper we present an algorithm that is a blind separation algorithm—our algorithm separates speech mixtures from a single microphone without requiring models of specific speakers. Our approach involves a “discriminative” approach to the problem of speech separation. That is, rather than building a complex model of speech, we instead focus directly on the task of separation and optimize parameters that determine separation performance. We work within a time-frequency representation (a spectrogram), and exploit the sparsity of speech signals in this representation. That is, although two speakers might speak simultaneously, there is relatively little overlap in the time-frequency plane if the speakers are different [5, 4]. We thus formulate speech separation as a problem in segmentation in the time-frequency plane. In principle, we could appeal to classical segmentation methods from vision (see, e.g. [7]) to solve this two-dimensional segmentation problem. Speech segments are, however, very different from visual segments, reflecting very different underlying physics. Thus we must design features for segmenting speech from first principles. It also proves essential to combine knowledge-based feature design with learning methods. 
In particular, we exploit the fact that in speech we can generate “training examples” by artificially superposing two separately-recorded signals. Making use of our earlier work on learning methods for spectral clustering [1], we use the training data to optimize the parameters of a spectral clustering algorithm. This yields an adaptive, “discriminative” segmentation algorithm that is optimized to separate speech signals. We highlight one other aspect of the problem here—the major computational challenge involved in applying spectral methods to speech separation. Indeed, four seconds of speech sampled at 5.5 KHz yields 22,000 samples and thus we need to manipulate affinity matrices of dimension at least 22, 000 × 22, 000. Thus a major part of our effort has involved the design of numerical approximation schemes that exploit the different time scales present in speech signals. The paper is structured as follows. Section 2 provides a review of basic methodology. In Section 3 we describe our approach to feature design based on known cues for speech separation [8, 9]. Section 4 shows how parameterized affinity matrices based on these cues can be optimized in the spectral clustering setting. We describe our experimental results in Section 5 and present our conclusions in Section 6. 2 Speech separation as spectrogram segmentation In this section, we first review the relevant properties of speech signals in the timefrequency representation and describe how our training sets are constructed. 2.1 Spectrogram The spectrogram is a two-dimensional (time and frequency) redundant representation of a one-dimensional signal [10]. Let f[t], t = 0, . . . , T −1 be a signal in RT . The spectrogram is defined through windowed Fourier transforms and is commonly referred to as a short-time Fourier transform or as Gabor analysis [10]. 
The value $(Uf)_{mn}$ of the spectrogram at time window $n$ and frequency $m$ is defined as
$$(Uf)_{mn} = \frac{1}{\sqrt{M}} \sum_{t=0}^{T-1} f[t]\, w[t - na]\, e^{i 2\pi m t / M},$$
where $w$ is a window of length $T$ with small support of length $c$. We assume that the number of samples $T$ is an integer multiple of $a$ and $c$. There are then $N = T/a$ different windows of length $c$. The spectrogram is thus an $N \times M$ image which provides a redundant time-frequency representation of time signals¹ (see Figure 1).

¹In our simulations, the sampling frequency is $f_0 = 5.5$ kHz and we use a Hanning window of length $c = 216$ (i.e., 43.2 ms). The spacing between windows is $a = 54$ (i.e., 10.8 ms). We use a 512-point FFT ($M = 512$). For a speech sample of length 4 s, we have $T = 22{,}000$ samples and then $N = 407$, which makes $\approx 2 \times 10^5$ spectrogram pixels.

Figure 1: Spectrogram of speech; (left) single speaker, (right) two simultaneous speakers. The gray intensity is proportional to the magnitude of the spectrogram.

Inversion Our speech separation framework is based on the segmentation of the spectrogram of a signal $f[t]$ into $S \ge 2$ disjoint subsets $A_i$, $i = 1, \ldots, S$, of $[0, N-1] \times [0, M-1]$. This leads to $S$ spectrograms $U_i$ such that $(U_i)_{mn} = U_{mn}$ if $(m, n) \in A_i$ and zero otherwise; note that the phase is kept the same as that of the original mixed signal. We now need to find $S$ speech signals $f_i[t]$ such that each $U_i$ is the spectrogram of $f_i$. In general there is no exact solution (because the representation is redundant), and a classical technique is to find the minimum $L_2$ norm approximation, i.e., find $f_i$ such that $\|U_i - Uf_i\|^2$ is minimal [10]. The solution of this minimization problem involves the pseudo-inverse of the linear operator $U$ [10] and is equal to $f_i = (U^*U)^{-1}U^*U_i$. By our choice of window (Hanning), $U^*U$ is proportional to the identity matrix, so the solution can simply be obtained by applying the adjoint operator $U^*$.
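The windowed transform and its adjoint-based inversion can be sketched with numpy, using the window length, hop and FFT size from footnote 1. This is our own sketch, not the authors' code; the per-sample normalisation below makes the inversion exact wherever the summed squared window is nonzero (for a Hann window at hop c/4 that sum is constant in the interior, which is the U*U ∝ I property used in the text).

```python
import numpy as np

def stft(f, c=216, a=54, M=512):
    """Spectrogram (Uf)[n, m]: Hann window of length c, hop a, M-point FFT."""
    w = np.hanning(c)
    N = (len(f) - c) // a + 1
    frames = np.stack([f[n * a:n * a + c] * w for n in range(N)])
    return np.fft.fft(frames, n=M, axis=1) / np.sqrt(M)

def istft(U, c=216, a=54, length=None):
    """Minimum-norm inversion via the adjoint: overlap-add of windowed
    inverse FFTs, normalised per sample by the summed squared window."""
    w = np.hanning(c)
    N, M = U.shape
    length = length if length is not None else (N - 1) * a + c
    frames = np.fft.ifft(U, axis=1).real[:, :c] * np.sqrt(M)
    f = np.zeros(length)
    norm = np.zeros(length)
    for n in range(N):
        f[n * a:n * a + c] += frames[n] * w
        norm[n * a:n * a + c] += w ** 2
    return f / np.maximum(norm, 1e-12)
```

A random signal round-trips exactly away from the edges, since each interior sample $t$ satisfies $\sum_n w[t - na]^2 > 0$.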
Normalization and subsampling There are several ways of normalizing a speech signal. In this paper, we chose to rescale all speech signals as follows: for each time window $n$, we compute the total energy $e_n = \sum_m |(Uf)_{mn}|^2$ and its 20-point moving average. The signals are normalized so that the 80th percentile of those values is equal to one. In order to reduce the number of spectrogram samples to consider, for a given prenormalized speech signal, we threshold coefficients whose magnitudes are less than a value chosen so that the distortion is inaudible.

2.2 Generating training samples

Our approach is based on a learning algorithm that optimizes a segmentation criterion. The training examples that we provide to this algorithm are obtained by mixing separately-normalized speech signals. That is, given two volume-normalized speech signals $f_1, f_2$ of the same duration, with spectrograms $U_1$ and $U_2$, we build a training sample as $U^{\mathrm{train}} = U_1 + U_2$, with a segmentation given by $z = \arg\min\{U_1, U_2\}$. In order to obtain better training partitions (and in particular to be more robust to the choice of normalization), we also search over all $\alpha \in [0, 1]$ such that the least-squares reconstruction error of the waveform obtained from segmenting/reconstructing using $z = \arg\min\{\alpha U_1, (1 - \alpha) U_2\}$ is minimized. An example of such a partition is shown in Figure 2 (left).

3 Features and grouping cues for speech separation

In this section we describe our approach to the design of features for the spectral segmentation. We base our design on classical cues suggested by studies of perceptual grouping [11]. Our basic representation is a "feature map," a two-dimensional representation that has the same layout as the spectrogram. Each of these cues is associated with a specific time scale, which we refer to as "small" (less than 5 frames), "medium" (10 to 20 frames), and "large" (across all frames).
(These scales will be of particular relevance to the design of numerical approximation methods in Section 4.3). Any given feature is not sufficient for separating by itself; rather, it is the combination of several features that makes our approach successful. 3.1 Non-harmonic cues The following non-harmonic cues have counterparts in visual scenes and for these cues we are able to borrow from feature design techniques used in image segmentation [7]. Continuity Two time-frequency points are likely to belong to the same segment if they are close in time or frequency; we thus use time and frequency directly as features. This cue acts at a small time scale. Common fate cues Elements that exhibit the same time variation are likely to belong to the same source. This takes several particular forms. The first is simply common offset and common onset. We thus build an offset map and an onset map, with elements that are zero when no variation occurs, and are large when there is a sharp decrease or increase (with respect to time) for that particular time-frequency point. The onset and offset maps are built using oriented energy filters as used in vision (with one vertical orientation). These are obtained by convolving the spectrogram with derivatives of Gaussian windows [7]. Another form of the common fate cue is frequency co-modulation, the situation in which frequency components of a single source tend to move in sync. To capture this cue we simply use oriented filter outputs for a set of orientation angles (8 in our simulations). Those features act mainly at a medium time scale. 3.2 Harmonic cues This is the major cue for voiced speech [12, 9, 8], and it acts at all time scales (small, medium and large): voiced speech is locally periodic and the local period is usually referred to as the pitch. Pitch estimation In order to use harmonic information, we need to estimate potentially several pitches. 
We have developed a simple pattern matching framework for doing this that we present in Appendix A. If S pitches are sought, the output that we obtain from the pitch extractor is, for each time frame n, the S pitches ωn1, . . . , ωnS, as well as the strength ynms of the s-th pitch for each frequency m. Timbre The pitch extraction algorithm presented in Appendix A also outputs the spectral envelope of the signal [12]. This can be used to design an additional feature related to timbre which helps integrate information regarding speaker identification across time. Timbre can be loosely defined as the set of properties of a voiced speech signal once the pitch has been factored out [8]. We add the spectral envelope as a feature (reducing its dimensionality using principal component analysis). Building feature maps from pitch information We build a set of features from the pitch information. Given a time-frequency point (m, n), let s(m, n) = arg maxs ynms (P m′ ynm′s)1/2 denote the highest energy pitch, and define the features ωns(m,n), ynms(m,n), P m′ ynm′s(m,n), ynms(m,n) P m′ ynm′s(m,n) and ynms(m,n) (P m′ ynm′s(m,n)))1/2 . We use a partial normalization with the square root to avoid including very low energy signals, while allowing a significant difference between the local amplitude of the speakers. Those features all come with some form of energy level and all features involving pitch values ω should take this energy into account when the affinity matrix is built in Section 4. Indeed, the values of the harmonic features have no meaning when no energy in that pitch is present. 4 Spectral clustering and affinity matrices Given the features described in the previous section, we now show how to build affinity (i.e., similarity) matrices that can be used to define a spectral segmenter. In particular, our approach builds parameterized affinity matrices, and uses a learning algorithm to adjust these parameters. 
4.1 Spectral clustering Given P data points to partition into S ⩾2 disjoint groups, spectral clustering methods use an affinity matrix W, symmetric of size P × P, that encodes topological knowledge about the problem. Once W is available, it is normalized and its first S (P-dimensional) eigenvectors are computed. Then, forming a P × S matrix with these eigenvectors as columns, we cluster the P rows of this matrix as points in RS using K-means (or a weighted version thereof). These clusters define the final partition [7, 1]. We prefer spectral clustering methods over other clustering algorithms such as K-means or mixtures of Gaussians estimated by the EM algorithm because we do not have any reason to expect the segments of interest in our problem to form convex shapes in the feature representation. 4.2 Parameterized affinity matrices The success of spectral methods for clustering depends heavily on the construction of the affinity matrix W. In [1], we have shown how learning can play a role in optimizing over affinity matrices. Our algorithm assumes that fully partitioned datasets are available, and uses these datasets as training data for optimizing the parameters of affinity matrices. As we have discussed in Section 2.2, such training data are easily obtained in the speech separation setting. It remains for us to describe how we parameterize the affinity matrices. From each of the features defined in Section 3, we define a basis affinity matrix Wj = Wj(βj), where βj is a (vector) parameter. We restrict ourselves to affinity matrices whose elements are between zero and one, and with unit diagonal. We distinguish between harmonic and non-harmonic features. For non-harmonic features, we use a radial basis function to define affinities. Thus, if fa is the value of the feature for data point a, we use a basis affinity matrix defined as Wab = exp(−||fa −fb||β), where β > 1. 
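The pipeline of Section 4.1 can be sketched end-to-end on a toy problem. This is our own minimal sketch (numpy only, with a tiny hand-rolled K-means and a deterministic initialisation); it is not the weighted variant used in the paper.

```python
import numpy as np

def spectral_segment(W, S=2, iters=20):
    """Normalise the affinity, take the top S eigenvectors of
    D^{-1/2} W D^{-1/2}, and run K-means on the (row-normalised) rows."""
    d = W.sum(axis=1)
    inv_sqrt_d = 1.0 / np.sqrt(d)
    L = W * inv_sqrt_d[:, None] * inv_sqrt_d[None, :]
    lam, V = np.linalg.eigh(L)             # eigenvalues in ascending order
    E = V[:, -S:]                          # rows = spectral embeddings
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    centers = E[np.linspace(0, len(E) - 1, S).astype(int)]  # deterministic seed
    for _ in range(iters):                 # plain K-means on the rows
        dist = ((E[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        z = np.argmin(dist, axis=1)
        for s in range(S):
            if np.any(z == s):
                centers[s] = E[z == s].mean(axis=0)
    return z

# Two well-separated blobs of a scalar feature -> RBF affinity -> 2 segments.
feats = np.concatenate([np.zeros(10), 5.0 * np.ones(10)])
W = np.exp(-np.abs(feats[:, None] - feats[None, :]))
z = spectral_segment(W, S=2)
```

With a nearly block-diagonal affinity, the rows of the top eigenvectors collapse onto one direction per block, so K-means recovers the two groups.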
For an harmonic feature, on the other hand, we need to take into account the strength of the feature: if fa is the value of the feature for data point a, with strength ya, we use Wab = exp(−|g(ya, yb) + β3|β4||fa −fb||β2), where g(u, v) = (ueβ5u + veβ5v)/(eβ5u + eβ5v) ranges from the minimum of u and v for β5 = −∞to their maximum for β5 = +∞. Given m basis matrices, we use the following parameterization of W: W = PK k=1 γkW αk1 1 × · · · × W αkm m , where the products are taken pointwise. Intuitively, if we consider the values of affinity as soft boolean variables, taking the product of two affinity matrices is equivalent to considering the conjunction of two matrices, while taking the sum can be seen as their disjunction: our final affinity matrix can thus be seen as a disjunctive normal form. For our application to speech separation, we consider a sum of K = 3 matrices, one matrix for each time scale. This has the advantage of allowing different approximation schemes for each of the time scales, an issue we address in the following section. 4.3 Approximations of affinity matrices The affinity matrices that we consider are huge, of size at least 50,000 by 50,000. Thus a significant part of our effort has involved finding computationally efficient approximations of affinity matrices. Let us assume that the time-frequency plane is vectorized by stacking one time frame after the other. In this representation, the time scale of a basis affinity matrix W exerts an effect on the degree of “bandedness” of W. The matrix W is said band-diagonal with bandwidth B, if for all i, j, |i−j| ⩾B ⇒Wij = 0. On a small time scale, W has a small bandwidth; for a medium time scale, the band is larger but still small compared to the total size of the matrix, while for large scale effects, the matrix W has no band structure. Note that the bandwidth B can be controlled by the coefficient of the radial basis function involving the time feature n. 
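The sum-of-pointwise-products parameterization can be written down directly. This is our own sketch with hypothetical basis matrices (random symmetric matrices with entries in [0, 1] and unit diagonals, as the text assumes); the function name is ours.

```python
import numpy as np

def combined_affinity(basis, gammas, alphas):
    """W = sum_k gamma_k * (W_1^{a_k1} * ... * W_m^{a_km}), with powers and
    products taken pointwise: a 'disjunction' of pointwise 'conjunctions'."""
    W = np.zeros_like(basis[0])
    for gamma_k, alpha_k in zip(gammas, alphas):
        prod = np.ones_like(basis[0])
        for Wj, a in zip(basis, alpha_k):
            prod = prod * Wj ** a
        W = W + gamma_k * prod
    return W

rng = np.random.default_rng(0)
P, m = 8, 3
basis = []
for _ in range(m):
    Wj = rng.uniform(0.0, 1.0, size=(P, P))
    Wj = (Wj + Wj.T) / 2.0
    np.fill_diagonal(Wj, 1.0)              # entries in [0, 1], unit diagonal
    basis.append(Wj)
gammas = [0.5, 0.5]                        # K = 2 terms, summing to one
alphas = rng.uniform(0.5, 2.0, size=(2, m))
W = combined_affinity(basis, gammas, alphas)
```

Since each basis entry lies in [0, 1] and the exponents are positive, every pointwise product stays in [0, 1], and the gamma-weighted sum keeps W a valid affinity with unit diagonal when the gammas sum to one.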
For each of these three cases, we have designed a particular way of approximating the matrix, while ensuring that in each case the time and space requirements are linear in the number of time frames.

Small scale If the bandwidth $B$ is very small, we use a simple direct sparse approximation. The complexity of such an approximation grows linearly in the number of time frames.

Medium and large scale We use a low-rank approximation of the matrix $W$, similar in spirit to the algorithm of [13]. Assume that the index set $\{1, \ldots, P\}$ is partitioned randomly into $I$ and $J$, and let $A = W(I, I)$ and $B = W(J, I)$; then $W(I, J) = B^\top$ (by symmetry), and we approximate $C = W(J, J)$ by a linear combination of the columns in $I$, i.e., $\hat{C} = BE$, where $E \in \mathbb{R}^{|I| \times |J|}$. The matrix $E$ is chosen so that when the linear combination defined by $E$ is applied to the columns in $I$, the error is minimal, which leads to an approximation of $W(J, J)$ by $B(A^2 + \lambda I)^{-1} A B^\top$. If $G$ is the dimension of $I$, then the complexity of finding the approximation is $O(G^3 + G^2 P)$, and the complexity of a matrix-vector product with the low-rank approximation is $O(G^2 P)$. The storage requirement is $O(GP)$. For large bandwidths, we use a constant $G$, i.e., we make the assumption that the rank required to encode a speaker is independent of the duration of the signals. For mid-range interactions, we need an approximation whose rank grows with time, but whose complexity does not grow quadratically with time. This is done by using the banded structure of $A$ and $W$. If $\rho$ is the proportion of retained indices, then the complexity of storage and matrix-vector multiplication is $O(P\rho^3 B)$.

5 Experiments

We have trained our segmenter using data from four different speakers, with speech signals of duration 3 seconds. There were 28 parameters to estimate using our spectral learning algorithm. For testing, we used mixes from five speakers different from those in the training set.
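The medium/large-scale low-rank construction of Section 4.3 can be checked numerically on a synthetic matrix that is exactly low rank. This is our own sketch: a Gaussian factor stands in for a real affinity matrix, and for a rank-r PSD matrix with |I| ≥ r retained indices the approximation $B(A^2 + \lambda I)^{-1} A B^\top$ is essentially exact for small $\lambda$.

```python
import numpy as np

rng = np.random.default_rng(0)
P, r, lam = 60, 5, 1e-10
Q = rng.standard_normal((P, r))
W = Q @ Q.T                                # exactly rank-r PSD stand-in for W

idx = rng.permutation(P)
I, J = idx[:10], idx[10:]                  # G = |I| = 10 retained indices
A = W[np.ix_(I, I)]                        # A = W(I, I)
B = W[np.ix_(J, I)]                        # B = W(J, I)
# Ridge fit of the columns in I: E = (A^2 + lam I)^{-1} A B^T, C_hat = B E.
E = np.linalg.solve(A @ A + lam * np.eye(len(I)), A @ B.T)
C_hat = B @ E
err = np.abs(C_hat - W[np.ix_(J, J)]).max()
```

Because the columns of $B^\top$ lie in the range of $A$ for an exactly rank-r matrix, the regularised solve reproduces $W(J, J)$ up to terms of order $\lambda$, at $O(G^3 + G^2 P)$ cost rather than $O(P^2)$.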
In Figure 2, for two speakers from the testing set, we show on the left an example of the segmentation obtained when the two speech signals are known in advance (computed as described in Section 2.2), and on the right the segmentation output by our algorithm. Although some components of the "black" speaker are missing, the segmentation performance is good enough to obtain audible signals of reasonable quality. The speech samples for this example can be downloaded from www.cs.berkeley.edu/~fbach/speech/. On this web site, there are additional examples of speech separation, with various speakers, in French and in English.

Figure 2: (Left) Optimal segmentation for the spectrogram in Figure 1 (right), where the two speakers are "black" and "grey"; this segmentation is obtained from the known separated signals. (Right) The blind segmentation obtained with our algorithm.

An important point is that our method does not require knowing the speakers in advance in order to demix successfully; rather, it only requires that the two speakers have distinct and sufficiently separated pitches most of the time (another, less crucial, condition is that one pitch is not too close to twice the other). As mentioned earlier, there was a major computational challenge in applying spectral methods to single-microphone speech separation. Using the techniques described in Section 4.3, the separation algorithm has linear running-time and memory requirements; coded in Matlab and C, it takes 30 minutes to separate 4 seconds of speech on a 1.8 GHz processor with 1 GB of RAM.

6 Conclusions

We have presented an algorithm to perform blind source separation of speech signals from a single microphone. To do so, we have combined knowledge of physical and psychophysical properties of speech with learning methods.
The former provide parameterized affinity matrices for spectral clustering, and the latter make use of our ability to generate segmented training data. The result is an optimized segmenter for spectrograms of speech mixtures. We have successfully demixed speech signals from two speakers using this approach. Our work thus far has been limited to the setting of ideal acoustics and equal-strength mixing of two speakers. There are several obvious extensions that warrant investigation. First, the mixing conditions should be weakened to allow some form of delay or echo. Second, there are multiple applications where speech has to be separated from non-stationary noise; we believe that our method can be extended to this situation. Third, our framework is based on segmentation of the spectrogram and, as such, distortions are inevitable since this is a "lossy" formulation [6, 4]. We are currently working on postprocessing methods that remove some of those distortions. Finally, while the running time and memory requirements of our algorithm are linear in the duration of the signal to be separated, the resource requirements remain a concern. We are currently working on further numerical techniques that we believe will bring our method significantly closer to real-time.

Appendix A. Pitch estimation

Pitch estimation for one pitch. In this paragraph, we assume that we are given one time slice s of the spectrogram magnitude, s ∈ R^M. The goal is to find a specific pattern that matches s. Since the speech signals are real, the spectrogram is symmetric and we can consider only M/2 samples. If the signal is exactly periodic, then the spectrogram magnitude for that time frame is exactly a superposition of bumps at multiples of the fundamental frequency. The patterns we consider thus have the following parameters: a "bump" function u ↦ b(u), a pitch ω ∈ [0, M/2], and a sequence of harmonic strengths x_1, ..., x_H at frequencies ω_1 = ω, ..., ω_H = Hω, where H is the largest acceptable harmonic multiple, i.e., H = ⌊M/(2ω)⌋. The pattern s̃ = s̃(x, b, ω) is then built as a weighted sum of bumps. By pattern matching, we mean finding the pattern s̃ closest to s in the L2-norm sense. We impose a constraint on the harmonic strengths (x_h), namely that they are samples at hω of a function g with small second-derivative norm ∫_0^{M/2} |g″(ω)|² dω. The function g can be seen as the envelope of the signal and is related to the "timbre" of the speaker [8]. The explicit consideration of the envelope and its smoothness is necessary for two reasons: (a) it provides a timbre feature helpful for separation, and (b) it helps avoid pitch halving, a traditional problem of pitch extractors [12]. Given b and ω, we minimize with respect to x the objective ‖s − s̃(x)‖² + λ ∫_0^{M/2} |g″(ω)|² dω, where x_h = g(hω). Since s̃(x) is a linear function of x, this is a spline smoothing problem, and the solution can be obtained in closed form with complexity O(H³) [14]. We then search over b and ω, knowing that the harmonic strengths x can be found in closed form. We use exhaustive search on a grid for ω, while we take only a few bump shapes. The main reason for several bump shapes is to account for the only approximate periodicity of voiced speech. For further details and extensions, see [15].

Pitch estimation for several pitches. If we are to estimate S pitches, we estimate them recursively, by removing the estimated harmonic signals. In this paper, we assume that the number of speakers, and hence the maximum number of pitches, is known. Note, however, that since all our pitch features are always used together with their strengths, our separation method is relatively robust to situations where we look for too many pitches.

Acknowledgments

We wish to acknowledge support from a grant from Intel Corporation, and a graduate fellowship to Francis Bach from Microsoft Research.

References

[1] F. R. Bach and M. I. Jordan. Learning spectral clustering.
In NIPS 16, 2004.
[2] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001.
[3] M. Zibulevsky, P. Kisilev, Y. Y. Zeevi, and B. A. Pearlmutter. Blind source separation via multinode sparse representation. In NIPS 14, 2002.
[4] O. Yilmaz and S. Rickard. Blind separation of speech mixtures via time-frequency masking. IEEE Trans. Sig. Proc., 52(7):1830–1847, 2004.
[5] S. T. Roweis. One microphone source separation. In NIPS 13, 2001.
[6] G.-J. Jang and T.-W. Lee. A probabilistic approach to single channel source separation. In NIPS 15, 2003.
[7] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE PAMI, 22(8):888–905, 2000.
[8] A. S. Bregman. Auditory Scene Analysis: The Perceptual Organization of Sound. MIT Press, 1990.
[9] G. J. Brown and M. P. Cooke. Computational auditory scene analysis. Computer Speech and Language, 8:297–333, 1994.
[10] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1998.
[11] M. Cooke and D. P. W. Ellis. The auditory organization of speech and other sources in listeners and computational models. Speech Communication, 35(3-4):141–177, 2001.
[12] B. Gold and N. Morgan. Speech and Audio Signal Processing: Processing and Perception of Speech and Music. Wiley Press, 1999.
[13] S. Belongie, C. Fowlkes, F. Chung, and J. Malik. Spectral partitioning with indefinite kernels using the Nyström extension. In ECCV, 2002.
[14] G. Wahba. Spline Models for Observational Data. SIAM, 1990.
[15] F. R. Bach and M. I. Jordan. Discriminative training of hidden Markov models for multiple pitch tracking. In ICASSP, 2005.
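The closed-form harmonic-strength fit described in Appendix A is, in essence, a regularized least-squares problem. Below is a minimal numerical sketch, not the authors' implementation: it assumes a Gaussian bump shape, works on a discrete frequency grid, and substitutes a simple second-difference penalty on the strengths for the spline penalty on g; all function and variable names are our own.

```python
import numpy as np

def fit_harmonic_strengths(s, omega, lam=1e-2, width=1.0):
    """Fit strengths x_h so that sum_h x_h * b(f - h*omega) matches the
    magnitude slice s, with a second-difference smoothness penalty on the
    strengths (a stand-in for the spline penalty on the envelope g)."""
    M2 = len(s)                       # number of usable frequency bins (M/2)
    H = int(M2 // omega)              # largest harmonic multiple, floor(M/(2*omega))
    f = np.arange(M2)
    # Design matrix: column h holds a Gaussian bump centred at (h+1)*omega.
    B = np.stack([np.exp(-0.5 * ((f - (h + 1) * omega) / width) ** 2)
                  for h in range(H)], axis=1)
    # Second-difference penalty matrix Omega = D^T D.
    D = np.diff(np.eye(H), n=2, axis=0)
    Omega = D.T @ D
    # Closed-form regularized least squares: (B^T B + lam*Omega) x = B^T s.
    x = np.linalg.solve(B.T @ B + lam * Omega, B.T @ s)
    return x, B @ x

# Synthetic check: a slice built from a known pitch is recovered well.
omega_true = 12.5
f = np.arange(128)
s = sum(a * np.exp(-0.5 * (f - h * omega_true) ** 2)
        for h, a in zip(range(1, 9), [1.0, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]))
x, s_hat = fit_harmonic_strengths(s, omega_true)
print(np.linalg.norm(s - s_hat) / np.linalg.norm(s))  # small relative residual
```

With λ = 0 this reduces to ordinary least squares; the penalty is what discourages the alternating strength patterns symptomatic of pitch halving, consistent with the rationale given in the appendix.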
2004
Resolving Perceptual Aliasing In The Presence Of Noisy Sensors∗

Ronen I. Brafman & Guy Shani
Department of Computer Science, Ben-Gurion University, Beer-Sheva 84105, Israel
{brafman, shanigu}@cs.bgu.ac.il

Abstract

Agents learning to act in a partially observable domain may need to overcome the problem of perceptual aliasing – i.e., different states that appear similar but require different responses. This problem is exacerbated when the agent's sensors are noisy, i.e., sensors may produce different observations in the same state. We show that many well-known reinforcement learning methods designed to deal with perceptual aliasing, such as Utile Suffix Memory, finite-size history windows, eligibility traces, and memory bits, do not handle noisy sensors well. We suggest a new algorithm, Noisy Utile Suffix Memory (NUSM), based on USM, that uses a weighted classification of observed trajectories. We compare NUSM to the above methods and show it to be more robust to noise.

1 Introduction

Consider an agent situated in a partially observable domain: it executes an action; the action may change the state of the world; this change is reflected, in turn, by the agent's sensors; the action may have some associated cost, and the new state may have some associated reward or penalty. Thus, the agent's interaction with this environment is characterized by a sequence of action-observation-reward steps, known as instances [7]. In this paper we are interested in agents with imperfect and noisy sensors that learn to act in such environments without any prior information about the underlying set of world states and the world's dynamics, only information about their sensors' capabilities. This is a known variant of reinforcement learning (RL) in partially observable domains [1]. As the agent works with observations, rather than states, two possible problems arise. The agent may observe too much data, which requires computationally intensive filtering – a problem we do not discuss.
Or, sensors may supply insufficient data to identify the current state of the world based on the current observation. This leads to a phenomenon known as perceptual aliasing [2], where the same observation is obtained in distinct states requiring different actions. For example, in Figure 1(a) the states marked with X are perceptually aliased. Various RL techniques were developed to handle this problem.

∗Partially supported by the Lynn and William Frankel Center for Computer Sciences.

Figure 1: Two maze domains. If only the wall configuration is sensed, states marked with the same letter (X or Y) are perceptually aliased.

The problem of resolving perceptual aliasing is exacerbated when the agent's sensors are not deterministic, for example, if walls can sometimes be reported where none exist, or if a wall sometimes goes undetected. The performance of existing techniques for handling RL domains with perceptually aliased states, such as finite-size history windows, eligibility traces, and internal memory, quickly degrades as the level of noise increases. In this paper, we introduce the Noisy Utile Suffix Memory (NUSM) algorithm, which builds on McCallum's Utile Suffix Memory (USM) algorithm [7]. We compare the performance of NUSM to USM and other existing methods on the two standard maze domains above, and show that it is more robust to noise.

2 Background

We briefly review a number of algorithms for resolving the problem of perceptual aliasing. We assume familiarity with basic RL techniques, in particular Q-learning, SARSA, and eligibility traces (see Sutton and Barto [12] for more details). The simplest way to handle perceptual aliasing is to ignore it by using the observation space as the state space, defining a memory-less policy. This approach works sometimes, although it generally results in poor performance. Jaakkola et al.
[4] suggest stochastic memory-less policies, and Williams and Singh [13] implemented an online version of their algorithm and showed that it approximates an optimal solution with 50% accuracy. Eligibility traces can be viewed as a type of short-term memory, as they update the recently visited state-action couplings. Therefore, they can be used to augment the ability to handle partial observability. Loch and Singh [6] explore problems where a memory-less optimal policy exists and demonstrated that Sarsa(λ) can learn an optimal policy for such domains. Finite-size history methods are a natural extension to memory-less policies. Instead of identifying a state with the last observation, we can use the last k observations. The size of the history window (k) is a fixed parameter, identical for all observation sequences. Loch and Singh [6] show that Sarsa(λ) using a fixed history window can learn a good policy in domains where short-term-memory optimal policies exist. An arbitrarily predefined history window cannot generally solve perceptual aliasing: an agent cannot be expected to know in advance how long it should remember the action-observation-reward trajectories. Usually, some areas of the state space require the agent to remember more, while in other locations a reactive policy is sufficient. A better solution is to learn online the history length needed to decide on the best action in the current location. McCallum [7] handles those issues extensively in his dissertation. We review McCallum's Utile Suffix Memory (USM) algorithm in Section 3. Another possible approach to handle perceptual aliasing is to augment the observations with some internal memory [10]. The agent's state s is composed of both the current observation o and the internal memory state m. The agent's actions are enhanced with actions that modify the memory state by flipping one of the bits.
The agent uses some standard learning mechanism to learn the proper action, including actions that change the memory state. The algorithm is modified so that no cost or decay is applied to the actions that modify the memory. This approach is better than using a finite or variable history length because meaningful events can occur arbitrarily far in the past. Keeping all the possible trajectories from the event onwards until the outcome is observed might cost too much, and McCallum's techniques are unable to group those trajectories together to deduce the proper action. Peshkin et al. [10] showed that the memory-bits approach converges, but did not show it to be superior to any other algorithm. Other approaches include the use of finite-state automata (FSA) [8] (which can be viewed as a special case of the memory-bits approach); the use of neural networks for internal memory [5, 3]; and constructing and solving a POMDP model [7, 2]. The newly emerging technique of Predictive State Representations (PSRs) [11] may also provide a good way to learn a model of the environment online. We note that most researchers tested their algorithms on environments with very little noise, and did not examine the effect of noise on their performance.

3 Utile Suffix Memory

Instance-based state identification [7] resolves perceptual aliasing with variable-length short-term memory. An instance is a tuple T_t = ⟨T_{t−1}, a_{t−1}, o_t, r_t⟩ — the individual observed raw experience. Algorithms of this family keep all the observed raw data (sequences of instances), and use it to identify matching subsequences. They assume that if the suffixes of two sequences are similar, both were likely generated in the same world state. Utile Suffix Memory creates a tree structure, based on the well-known suffix trees for string operations. This tree maintains the raw experiences and identifies matching suffixes. The root of the tree is an unlabelled node, holding all available instances.
Each immediate child of the root is labelled with one of the observations encountered during the test. A node holds all the instances T_t = ⟨T_{t−1}, a_{t−1}, o_t, r_t⟩ whose final observation o_t matches the observation in the node's label. At the next level, instances are split based on the last action of the instance, a_{t−1}. Then we split again based on the next-to-last observation o_{t−1}, and so on. All nodes act as buckets, grouping together instances that have matching history suffixes of a certain length. Leaves take the role of states, holding Q-values and updating them. The deeper a leaf is in the tree, the more history the instances in this leaf share. The tree is built online during the test run. To add a new instance to the tree, we examine its percept and follow the path to the child node labeled by that percept. We then look at the action before this percept and move to the node labeled by that action, then branch on the percept prior to that action, and so forth, until a leaf is reached. Identifying the proper depth for a certain leaf is a major issue, and we shall present a number of improvements to McCallum's methods. Leaves should be split if their descendants show a statistical difference in the expected future discounted reward associated with the same action. We split instances in a node if knowing where the agent came from helps predict future discounted rewards. Thus, the tree must keep what McCallum calls fringes, i.e., subtrees below the "official" leaves. For better performance, McCallum did not compare the nodes in the fringes to their siblings, only to their ancestor "official" leaf. He also did not compare values from all actions executed from the fringe, only from the action that has the highest Q-value in the leaf (the policy action of that leaf).
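The bucketing scheme just described can be sketched in a few lines. This is an illustrative toy, not the paper's code (the data layout and all names are our own): an instance is a tuple (previous instance, action, observation, reward), and insertion walks the alternating observation/action labels of the suffix, depositing the instance in every node along the path.

```python
class Node:
    def __init__(self):
        self.children = {}   # label (observation or action) -> Node
        self.instances = []  # instances whose suffix matches this path

def suffix(instance):
    """Yield labels o_t, a_{t-1}, o_{t-1}, a_{t-2}, ... walking back in time."""
    prev, action, obs, _ = instance
    yield obs
    while prev is not None:
        yield action
        prev, action, obs, _ = prev
        yield obs

def insert(root, instance, max_depth):
    """Deposit the instance in every node on its suffix path; the deepest
    node reached plays the role of the leaf."""
    node = root
    node.instances.append(instance)
    for depth, label in enumerate(suffix(instance)):
        if depth >= max_depth:
            break
        node = node.children.setdefault(label, Node())
        node.instances.append(instance)
    return node

# Tiny demo: two instances ending in the same observation share a child.
root = Node()
t1 = (None, None, "X", 0.0)          # first percept "X"
t2 = (t1, "up", "X", 1.0)            # moved "up", observed "X" again
insert(root, t1, max_depth=3)
insert(root, t2, max_depth=3)
assert set(root.children) == {"X"}
assert len(root.children["X"].instances) == 2
```

In a full implementation the nodes would also hold Q-values and fringe subtrees, as described above.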
To compare the populations of expected discounted future rewards from the two nodes (the fringe and the "official" leaf), he used the Kolmogorov-Smirnov (KS) test — a non-parametric statistical test used to find whether two populations were generated by the same distribution. If the test reported a difference between the expected discounted future rewards after executing the policy action, the leaf was split, the fringe node became the new leaf, and the tree was expanded to create deeper fringes. Figure 2 presents an example of a possible USM tree, without fringe nodes.

Figure 2: A possible USM suffix tree generated by the maze in Figure 1(a). Below is a sequence of instances demonstrating how some instances are clustered into the tree leaves.

Instead of comparing the fringe node to its ancestor "official" leaf, we found it computationally feasible to compare the siblings of the fringe, avoiding the problem that the same instance appears in both distributions. McCallum compared only the expected discounted future rewards from executing the policy action, whereas we compare all the values following all actions executed after any of the instances in the fringe. McCallum used the KS test, whereas we choose the more robust randomization test [14], which works well with small sets of instances. McCallum also considered only fringe nodes of a certain depth, given as a parameter to the algorithm, whereas we create fringe nodes as deep as possible, until the number of instances in a node diminishes below some threshold (we use a value of 10 in our experiments). The expected future discounted reward of instance T_i is defined by:

Q(T_i) = r_i + γ U(L(T_{i+1}))   (1)

where L(T_i) is the leaf associated with instance T_i and U(s) = max_a Q(s, a).
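The two-sample randomization test used for the split decision can be sketched as follows. This is a generic permutation test on the difference of means; the pairing with fringe reward populations, and all names, are our own illustration, not the paper's code.

```python
import random

def randomization_test(a, b, num_shuffles=10000, seed=0):
    """Two-sample randomization (permutation) test: estimate how often a
    random relabelling of the pooled values produces a difference in means
    at least as large as the observed one. A small p-value suggests the two
    reward populations differ, i.e. the leaf should be split."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(num_shuffles):
        rng.shuffle(pooled)
        a_perm, b_perm = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(a_perm) / len(a_perm) - sum(b_perm) / len(b_perm))
        if diff >= observed:
            count += 1
    return count / num_shuffles

# Clearly different reward populations give a small p-value...
p_split = randomization_test([0.9, 1.0, 1.1, 0.95], [0.1, 0.0, 0.2, 0.05])
# ...while similar ones do not.
p_keep = randomization_test([0.9, 1.0, 1.1, 0.95], [1.0, 0.9, 1.05, 1.1])
print(p_split, p_keep)
```

A leaf would then be split when the returned p-value falls below a chosen significance threshold; unlike the KS test, this works reasonably even for the handful of instances typical of a young fringe node.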
After inserting new instances into the tree, we update the Q-values in the leaves using:

R(s, a) = Σ_{T_i ∈ T(s,a)} r_i / |T(s, a)|   (2)

Pr(s′|s, a) = |{T_i ∈ T(s, a) : L(T_{i+1}) = s′}| / |T(s, a)|   (3)

Q(s, a) = R(s, a) + γ Σ_{s′} Pr(s′|s, a) U(s′)   (4)

We use s and s′ to denote the leaves of the tree, since in an optimal tree configuration for a problem the leaves of the tree define the states of the underlying MDP. The above equations therefore correspond to a single step of the value iteration algorithm used in MDPs. Once the Q-values have been updated, the agent chooses the next action to perform based on the Q-values in the leaf corresponding to the current instance T_t:

a_{t+1} = argmax_a Q(L(T_t), a)   (5)

McCallum uses the fringes of the tree for a smart exploration strategy. In our implementation we use a simple ϵ-greedy technique for exploration.

4 Adding Noise to Utile Suffix Memory

There are two types of noise in perception: the system makes different observations at the same location (false negatives), or it makes identical observations at different locations (false positives). USM handles false positives by differentiating identical current observations using the agent's history: knowing that the agent came from different locations helps it realize that it is in two different locations, even though the observations look the same. USM does not handle false negative perceptions well. When the agent is in the same state but receives different observations, it is unable to learn from the noisy observation and thus wastes much of the available information. Our Noisy Utile Suffix Memory (NUSM) is designed to overcome this weakness. It is reasonable to assume that the agent has some sensor model defining pr(o|s) — the probability that the agent will observe o in world state s. We can use the sensor model to augment USM with the ability to learn from noisy instances. In our experiments we assume the agent has n boolean sensors, each with an accuracy probability p_i.
A single observation is composed of n output values o = ⟨ω_1, ..., ω_n⟩, where ω_i ∈ {0, 1}. The probability that an observation o = ⟨ω^o_1, ..., ω^o_n⟩ came from an actual world state s = ⟨ω^s_1, ..., ω^s_n⟩ is therefore:

pr(o|s) = ∏_{i=1}^{n} Δ_i   (6)

Δ_i = p_i if ω^o_i = ω^s_i, and (1 − p_i) if ω^o_i ≠ ω^s_i   (7)

USM inserts each instance into a single path, ending at one leaf. Using any sensor model we can insert the new instance T_t = ⟨T_{t−1}, a_{t−1}, o_t, r_t⟩ into several paths with different weights. When inserting T_t with weight w into an action node at depth k (whose children are labeled by observations), we insert the instance into every child node c with weight w · pr(o_{t−k−1}|label(c)). When inserting T_t with weight w into an observation node at depth k (whose children are labeled by actions), we insert the instance only into the child c labeled by a_{t−k−1}, with the same weight w. The weight of instance T_t in node s is stored with the instance as w_s(T_t). We can now rewrite Equation 2 and Equation 3:

R(s, a) = Σ_{T_i ∈ T(s,a)} r_i · w_s(T_i) / Σ_{T_i ∈ T(s,a)} w_s(T_i)   (8)

Pr(s′|s, a) = Σ_{T_i ∈ T(s,a), L(T_{i+1}) = s′} w_s(T_i) / Σ_{T_i ∈ T(s,a)} w_s(T_i)   (9)

The noisy instances are used only for updating the Q-values in the leaves. The test for splitting is still calculated using identical sequences only. The tree structure of USM and NUSM is hence identical, but NUSM learns a better policy. Our experiments indicate that also using the noisy sequences for deciding when to split leaves provides a slight gain in collected rewards, but the constructed tree is much larger, resulting in a considerable performance hit. NUSM learns noisy sequences better: when a state corresponding to a noisy sequence is observed, even though the noise in it might make it rare, NUSM can still use data from real sequences to decide which action is useful for this state.
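As an illustration, the sensor model (6)-(7) and the weighted reward average (8) can be written down directly. This is a toy sketch with our own names and data layout; Equation (9) would be the analogous weighted ratio over successor leaves.

```python
# Sensor model (6)-(7) and weighted reward (8), for n boolean sensors
# each correct with probability p[i]. Data layout and names are our own.

def obs_likelihood(o, s, p):
    """pr(o | s): multiply the per-sensor factors Delta_i of Eq. (7)."""
    prob = 1.0
    for o_i, s_i, p_i in zip(o, s, p):
        prob *= p_i if o_i == s_i else (1.0 - p_i)
    return prob

def weighted_reward(instances):
    """Eq. (8): weighted mean of immediate rewards, where each (r, w) pair
    is an instance's reward and its insertion weight w_s(T_i) in this node."""
    total_w = sum(w for _, w in instances)
    return sum(r * w for r, w in instances) / total_w

p = [0.9, 0.9, 0.9, 0.9]           # four wall sensors, accuracy alpha = 0.9
true_state = (1, 0, 0, 1)          # a hypothetical wall configuration
exact = obs_likelihood((1, 0, 0, 1), true_state, p)     # alpha^4
one_flip = obs_likelihood((1, 1, 0, 1), true_state, p)  # alpha^3 * (1 - alpha)
r_bar = weighted_reward([(1.0, 0.8), (0.0, 0.2)])
print(exact, one_flip, r_bar)
```

Note how quickly the likelihood of a fully correct reading decays with the number of sensors, which is exactly why discarding noisy instances wastes so much information.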
5 Experimental Results

To test our algorithms we used two maze worlds, shown in Figure 1(a) and Figure 1(b), identical to the worlds McCallum used to demonstrate the performance of the USM algorithm. In both cases some of the world states are perceptually aliased and the algorithm should learn to identify the real world states. The agent in our experiments has four sensors allowing it to sense an immediate wall above, below, to the left, and to the right of its current location. Sensors have a boolean output that has a probability α of being accurate. The probability of all sensors providing the correct output is α⁴. In both mazes there is a single location that grants the agent a reward of 10. Upon receiving that reward, the agent is transported randomly to one of the perceptually aliased states of the maze. If the agent bumps into a wall, it pays a cost (a negative reward) of 1. For every move the agent pays a cost of 0.1. We compared the performance of applying finite-size history windows to Q-learning and Sarsa, eligibility traces, memory bits, USM, and NUSM on the two worlds. In the tables below, QL2 denotes Q-learning with a history window of size 2, and S2 denotes Sarsa with a window size of 2. S(λ) denotes the Sarsa(λ) algorithm. Adding the superscript 1 denotes adding 1 bit of internal memory. For example, S(λ)¹₂ denotes Sarsa(λ) with a history window of 2 and 1 internal memory bit. The columns and rows marked σ² present the average variance over methods (bottom row) and α values (rightmost column). In the NUSM column, in brackets, is the improvement NUSM gains over USM.
As we are interested only in the effect of noisy sensors, the maze examples we use do not demonstrate the advantages of the various algorithms; USM's ability to automatically compute the needed trajectory length in different locations and the internal-memory ability to remember events that occurred arbitrarily far in the past are unneeded, since our examples require the agent to look only at the last 2 instances in every perceptually aliased location. We ran each algorithm for 50000 steps, learning a policy as explained above, and calculated the average reward over the last 5000 iterations only, to factor out differences in convergence time. We ran experiments with varying values of α (the accuracy of the sensors) ranging from 1.00 (sensor output is without noise) to 0.85 (overall output accuracy of 0.522). Reported results are averaged over 10 different executions of each algorithm. We also ran experiments for Sarsa and Q-learning with only the immediate observation, which yielded poor results as expected, and for history windows of size 3 and 4, which resulted in lower performance than a history window of size 2 for all algorithms (and were therefore removed from the tables). Additional memory bits did not improve performance either. In our Sarsa(λ) implementation we used λ = 0.9.
α     QL2   S2    S(λ)₁ S(λ)₂ S(λ)₃ S(λ)¹₁ S(λ)¹₂ USM   NUSM         σ²
1.00  1.51  1.54  0.32  1.53  0.98  1.27   1.53   1.56  1.57 (+0%)   0.033
0.99  1.46  1.47  0.33  1.47  0.98  1.34   1.45   1.49  1.54 (+3%)   0.016
0.98  1.42  1.42  0.32  1.36  0.83  1.41   1.40   1.42  1.44 (+1%)   0.008
0.97  1.36  1.38  0.41  1.31  0.64  1.35   1.35   1.38  1.43 (+4%)   0.007
0.96  1.28  1.26  0.38  1.29  0.58  1.24   1.27   1.35  1.40 (+4%)   0.014
0.95  1.24  1.23  0.43  1.21  0.55  1.26   1.22   1.30  1.35 (+4%)   0.009
0.94  1.11  1.10  0.47  1.16  0.45  0.89   1.12   1.18  1.29 (+9%)   0.025
0.93  1.05  1.03  0.42  1.13  0.43  0.94   1.07   1.16  1.29 (+11%)  0.023
0.92  0.96  0.88  0.47  1.10  0.33  0.92   1.04   1.12  1.20 (+7%)   0.014
0.91  0.88  0.82  0.47  1.06  0.29  0.74   0.94   0.96  1.12 (+17%)  0.022
0.90  0.83  0.73  0.46  1.02  0.22  0.77   0.92   0.99  1.07 (+8%)   0.013
0.89  0.74  0.60  0.48  0.95  0.23  0.80   0.87   0.93  1.04 (+12%)  0.015
0.88  0.61  0.59  0.42  0.90  0.14  0.66   0.83   0.84  1.01 (+20%)  0.013
0.87  0.58  0.50  0.48  0.85  0.12  0.63   0.78   0.71  0.98 (+38%)  0.011
0.86  0.40  0.37  0.46  0.79  0.07  0.55   0.76   0.57  0.92 (+61%)  0.021
0.85  0.45  0.35  0.45  0.75  0.05  0.47   0.68   0.47  0.87 (+85%)  0.018
σ²    0.01  0.015 0.004 0.004 0.022 0.061  0.005  0.02  0.003

Table 1: Average reward as a function of sensor accuracy, for the maze in Figure 1(a).
α     QL2   S2    S(λ)₁ S(λ)₂ S(λ)₃ S(λ)¹₁ S(λ)¹₂ USM   NUSM         σ²
1.00  1.42  1.46  0.23  1.53  1.49  1.47   1.54   1.75  1.72 (-2%)   0.004
0.99  1.40  1.41  0.24  1.44  1.34  1.24   1.43   1.57  1.61 (+3%)   0.027
0.98  1.33  1.35  0.25  1.35  1.24  0.94   1.34   1.43  1.46 (+2%)   0.034
0.97  1.30  1.29  0.26  1.22  1.15  0.90   1.21   1.40  1.40 (0%)    0.032
0.96  1.26  1.25  0.24  1.06  1.06  0.43   1.12   1.28  1.31 (+2%)   0.015
0.95  1.19  1.16  0.21  1.00  0.90  0.33   1.03   1.23  1.26 (+2%)   0.015
0.94  1.09  1.05  0.14  0.93  0.85  0.30   0.93   1.09  1.14 (+5%)   0.011
0.93  1.05  0.94  0.12  0.82  0.76  0.39   0.77   1.09  1.09 (+0%)   0.018
0.92  0.94  0.84  0.12  0.72  0.64  0.30   0.66   1.02  1.03 (+1%)   0.011
0.91  0.85  0.80  0.09  0.77  0.47  0.24   0.48   0.93  0.96 (+3%)   0.013
0.90  0.69  0.72  0.10  0.65  0.42  0.23   0.48   0.87  0.91 (+5%)   0.013
0.89  0.64  0.58  0.06  0.58  0.24  0.24   0.40   0.81  0.92 (+14%)  0.010
0.87  0.44  0.42  0.06  0.47  0.16  0.16   0.31   0.72  0.82 (+14%)  0.012
0.86  0.30  0.21  0.04  0.26  0.09  0.13   0.20   0.68  0.81 (+19%)  0.015
0.85  0.08  0.10  0.01  0.29  0.09  0.18   0.16   0.61  0.75 (+23%)  0.008
σ²    0.008 0.01  0.003 0.009 0.006 0.071  0.016  0.011 0.006

Table 2: Average reward as a function of sensor accuracy, for the maze in Figure 1(b).

Figure 3: Convergence rates for the maze in Figure 1(a) when sensor accuracy is 0.9.

As we can see, when sensor output is only slightly noisy, all algorithms perform reasonably; NUSM performs best, but the differences are minor. This is because when sensors supply perfect output, resolving the perceptual aliasing results in a fully observable environment which Q-learning and Sarsa can solve optimally. When noise increases, NUSM's ability to use similar suffixes of trajectories results in a noticeable performance gain over the other algorithms. The only algorithm that competes with NUSM is Sarsa(λ) with a history window of 2.
The ability of Sarsa(λ) to perform well in partially observable domains has been noted by Loch and Singh [6]¹, but we note here that the performance of Sarsa(λ) relies heavily on the proper definition of the required history window size. When the history window differs slightly from the required size, the performance hit is substantial, as we can see in the two adjacent columns. NUSM is more expensive computationally than USM and takes longer to converge. In Figure 3, we can see that it still converges reasonably fast. Moreover, each NUSM iteration takes about 5.6 milliseconds, whereas a USM iteration takes 3.1 milliseconds with the same accuracy of α = 0.85 and a similar number of nodes (10,229 for NUSM and 10,349 for USM, including fringe nodes), making NUSM reasonable for online learning.

¹Loch and Singh also recommend the use of replacing traces, but we found that accumulating traces produced better performance.

Finally, both USM and NUSM attempt to disambiguate the perceptual aliasing and create a fully observable MDP. Yet, it is better to model the world directly as partially observable using a Partially Observable Markov Decision Process (POMDP). POMDP policies explicitly address the problem of incomplete knowledge, taking into account the ability of certain actions to reduce uncertainty without immediately generating useful rewards. Nikovski [9] used McCallum's Nearest Sequence Memory (NSM), a predecessor of USM, to generate and solve a POMDP from the observed instances. He, however, considered environments with little noise. We implemented his algorithms and obtained poor results in the presence of noise in our domains, probably due to the use of NSM for state identification.

6 Conclusions

We showed that some RL algorithms that resolve perceptual aliasing, including finite-size history windows, memory bits, and USM, provide poor results in the presence of noisy sensors.
We provided some insight into why McCallum's USM algorithm does not handle noisy input from the agent's sensors well, and introduced NUSM — an extension of USM that learns from noisy sequences and better handles environments where sensors provide noisy output. As noise increases, NUSM works better than the other algorithms used for handling domains with perceptual aliasing.

References

[1] A. R. Cassandra, L. P. Kaelbling, and M. L. Littman. Acting optimally in partially observable stochastic domains. In AAAI'94, pages 1023–1028, 1994.
[2] L. Chrisman. Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In AAAI'92, pages 183–188, 1992.
[3] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[4] T. Jaakkola, S. P. Singh, and M. I. Jordan. Reinforcement learning algorithm for partially observable Markov decision problems. In NIPS'95, volume 7, pages 345–352, 1995.
[5] L.-J. Lin and T. M. Mitchell. Memory approaches to reinforcement learning in non-Markovian domains. Technical Report CMU-CS-92-138, 1992.
[6] J. Loch and S. Singh. Using eligibility traces to find the best memoryless policy in partially observable Markov decision processes. In ICML'98, pages 323–331, 1998.
[7] A. K. McCallum. Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, University of Rochester, 1996.
[8] N. Meuleau, L. Peshkin, K. Kim, and L. P. Kaelbling. Learning finite-state controllers for partially observable environments. In UAI'99, pages 427–436, 1999.
[9] D. Nikovski. State-Aggregation Algorithms for Learning Probabilistic Models for Robot Control. PhD thesis, Carnegie Mellon University, 2002.
[10] L. Peshkin, N. Meuleau, and L. P. Kaelbling. Learning policies with external memory. In ICML'99, pages 307–314, 1999.
[11] S. Singh, M. L. Littman, and R. S. Sutton. Predictive representations of state. In NIPS 2001, pages 1555–1561, December 2001.
[12] R. S. Sutton and A. G. Barto.
Reinforcement Learning: An Introduction. MIT Press, 1998.
[13] J. K. Williams and S. Singh. Experimental results on learning stochastic memoryless policies for partially observable Markov decision processes. In NIPS, 1998.
[14] A. Yeh. More accurate tests for the statistical significance of result differences. In 18th Int. Conf. on Computational Linguistics, pages 947–953, 2000.
2004
Exponential Family Harmoniums with an Application to Information Retrieval Max Welling & Michal Rosen-Zvi Information and Computer Science University of California Irvine CA 92697-3425 USA welling@ics.uci.edu Geoffrey Hinton Department of Computer Science University of Toronto Toronto, 290G M5S 3G4, Canada hinton@cs.toronto.edu Abstract Directed graphical models with one layer of observed random variables and one or more layers of hidden random variables have been the dominant modelling paradigm in many research fields. Although this approach has met with considerable success, the causal semantics of these models can make it difficult to infer the posterior distribution over the hidden variables. In this paper we propose an alternative two-layer model based on exponential family distributions and the semantics of undirected models. Inference in these “exponential family harmoniums” is fast while learning is performed by minimizing contrastive divergence. A member of this family is then studied as an alternative probabilistic model for latent semantic indexing. In experiments it is shown that they perform well on document retrieval tasks and provide an elegant solution to searching with keywords. 1 Introduction Graphical models have become the basic framework for generative approaches to probabilistic modelling. In particular models with latent variables have proven to be a powerful way to capture hidden structure in the data. In this paper we study the important subclass of models with one layer of observed units and one layer of hidden units. Two-layer models can be subdivided into various categories depending on a number of characteristics. An important property in that respect is given by the semantics of the graphical model: either directed (Bayes net) or undirected (Markov random field). 
Most two-layer models fall in the first category or are approximations derived from it: mixtures of Gaussians (MoG), probabilistic PCA (pPCA), factor analysis (FA), independent components analysis (ICA), sigmoid belief networks (SBN), latent trait models, latent Dirichlet allocation (LDA, otherwise known as multinomial PCA, or mPCA) [1], exponential family PCA (ePCA), probabilistic latent semantic indexing (pLSI) [6], non-negative matrix factorization (NMF), and more recently the multiple multiplicative factor model (MMF) [8]. Directed models enjoy important advantages such as easy (ancestral) sampling and easy handling of unobserved attributes under certain conditions. Moreover, the semantics of directed models dictates marginal independence of the latent variables, which is a suitable modelling assumption for many datasets. However, it should also be noted that directed models come with an important disadvantage: inference of the posterior distribution of the latent variables given the observations (which is, for instance, needed within the context of the EM algorithm) is typically intractable resulting in approximate or slow iterative procedures. For important applications, such as latent semantic indexing (LSI), this drawback may have serious consequences since we would like to swiftly search for documents that are similar in the latent topic space. A type of two-layer model that has not enjoyed much attention is the undirected analogue of the above described family of models. It was first introduced in [10] where it was named “harmonium”. Later papers have studied the harmonium under various names (the “combination machine” in [4] and the “restricted Boltzmann machine” in [5]) and turned it into a practical method by introducing efficient learning algorithms. Harmoniums have only been considered in the context of discrete binary variables (in both hidden and observed layers), and more recently in the Gaussian case [7]. 
The first contribution of this paper is to extend harmoniums into the exponential family which will make them much more widely applicable. Harmoniums also enjoy a number of important advantages which are rather orthogonal to the properties of directed models. Firstly, their product structure has the ability to produce distributions with very sharp boundaries. Unlike mixture models, adding a new expert may decrease or increase the variance of the distribution, which may be a major advantage in high dimensions. Secondly, unlike directed models, inference in these models is very fast, due to the fact that the latent variables are conditionally independent given the observations. Thirdly, the latent variables of harmoniums produce distributed representations of the input. This is much more efficient than the “grandmother-cell” representation associated with mixture models where each observation is generated by a single latent variable. Their most important disadvantage is the presence of a global normalization factor which complicates both the evaluation of probabilities of input vectors1 and learning free parameters from examples. The second objective of this paper is to show that the introduction of contrastive divergence has greatly improved the efficiency of learning and paved the way for large scale applications. Whether a directed two-layer model or a harmonium is more appropriate for a particular application is an interesting question that will depend on many factors such as prior (conditional) independence assumptions and/or computational issues such as efficiency of inference. To expose the fact that harmoniums can be viable alternatives to directed models we introduce an entirely new probabilistic extension of latent semantic analysis (LSI) [3] and show its usefulness in various applications. 
We do not want to claim superiority of harmoniums over their directed cousins, but rather that harmoniums enjoy rather different advantages that deserve more attention and that may one day be combined with the advantages of directed models.

2 Extending Harmoniums into the Exponential Family

Let x_i, i = 1...M_x, be the set of observed random variables and h_j, j = 1...M_h, be the set of hidden (latent) variables. Both x and h can take values in either the continuous or the discrete domain. In the latter case, each variable has states a = 1...D. To construct an exponential family harmonium (EFH) we first choose M_x independent distributions p_i(x_i) for the observed variables and M_h independent distributions p_j(h_j) for the hidden variables from the exponential family and combine them multiplicatively,

$$p(\{x_i\}) = \prod_{i=1}^{M_x} r_i(x_i)\, \exp\Big[\sum_a \theta_{ia} f_{ia}(x_i) - A_i(\{\theta_{ia}\})\Big] \quad (1)$$

$$p(\{h_j\}) = \prod_{j=1}^{M_h} s_j(h_j)\, \exp\Big[\sum_b \lambda_{jb} g_{jb}(h_j) - B_j(\{\lambda_{jb}\})\Big] \quad (2)$$

where $\{f_{ia}(x_i), g_{jb}(h_j)\}$ are the sufficient statistics for the models (otherwise known as features), $\{\theta_{ia}, \lambda_{jb}\}$ the canonical parameters of the models and $\{A_i, B_j\}$ the log-partition functions (or log-normalization factors). In the following we will consider $\log(r_i(x_i))$ and $\log(s_j(h_j))$ as additional features multiplied by a constant. Next, we couple the random variables in the log-domain by the introduction of a quadratic interaction term,

$$p(\{x_i, h_j\}) \propto \exp\Big[\sum_{ia} \theta_{ia} f_{ia}(x_i) + \sum_{jb} \lambda_{jb} g_{jb}(h_j) + \sum_{ijab} W_{ia}^{jb}\, f_{ia}(x_i)\, g_{jb}(h_j)\Big] \quad (3)$$

Note that we did not write the log-partition function for this joint model in order to indicate our inability to compute it in general. For some combinations of exponential family distributions it may be necessary to restrict the domain of $W_{ia}^{jb}$ in order to maintain normalizability of the joint probability distribution (e.g. $W_{ia}^{jb} \leq 0$ or $W_{ia}^{jb} \geq 0$).

Footnote 1: However, it is easy to compute these probabilities up to a constant so it is possible to compare probabilities of data-points.
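For intuition, the joint model of Eqn. 3 can be instantiated for a tiny all-binary harmonium, where the log-partition function is still computable by brute-force enumeration; it is this $2^{M_x+M_h}$-term sum that becomes intractable at realistic sizes. A minimal sketch (the sizes and random parameters are illustrative assumptions, with one feature per binary unit so that f(x) = x and g(h) = h):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
Mx, Mh = 4, 3                  # small enough to enumerate all 2^(Mx+Mh) states
theta = rng.normal(size=Mx)    # canonical parameters of the observed units
lam = rng.normal(size=Mh)      # canonical parameters of the hidden units
W = rng.normal(size=(Mx, Mh))  # quadratic coupling W^{jb}_{ia}

def log_unnorm(x, h):
    # exponent of Eqn. 3 for binary units with f(x) = x and g(h) = h
    return theta @ x + lam @ h + x @ W @ h

# brute-force log-partition function: log-sum over every joint state
states = [np.array(s) for s in itertools.product([0.0, 1.0], repeat=Mx + Mh)]
logZ = np.log(sum(np.exp(log_unnorm(s[:Mx], s[Mx:])) for s in states))
```

With logZ in hand, the normalized joint probabilities sum to one, which is exactly the quantity the general EFH cannot afford to compute.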
Although we could also have mutually coupled the observed variables (and/or the hidden variables) using similar interaction terms, we refrain from doing so in order to keep the learning and inference procedures efficient. Consequently, by this construction the conditional probability distributions are a product of independent distributions in the exponential family with shifted parameters,

$$p(\{x_i\}|\{h_j\}) = \prod_{i=1}^{M_x} \exp\Big[\sum_a \hat\theta_{ia} f_{ia}(x_i) - A_i(\{\hat\theta_{ia}\})\Big], \qquad \hat\theta_{ia} = \theta_{ia} + \sum_{jb} W_{ia}^{jb}\, g_{jb}(h_j) \quad (4)$$

$$p(\{h_j\}|\{x_i\}) = \prod_{j=1}^{M_h} \exp\Big[\sum_b \hat\lambda_{jb} g_{jb}(h_j) - B_j(\{\hat\lambda_{jb}\})\Big], \qquad \hat\lambda_{jb} = \lambda_{jb} + \sum_{ia} W_{ia}^{jb}\, f_{ia}(x_i) \quad (5)$$

Finally, using the identity $\sum_y \exp\big[\sum_a \theta_a f_a(y)\big] = \exp A(\{\theta_a\})$, we can also compute the marginal distributions of the observed and latent variables,

$$p(\{x_i\}) \propto \exp\Big[\sum_{ia} \theta_{ia} f_{ia}(x_i) + \sum_j B_j\big(\{\lambda_{jb} + \sum_{ia} W_{ia}^{jb} f_{ia}(x_i)\}\big)\Big] \quad (6)$$

$$p(\{h_j\}) \propto \exp\Big[\sum_{jb} \lambda_{jb} g_{jb}(h_j) + \sum_i A_i\big(\{\theta_{ia} + \sum_{jb} W_{ia}^{jb} g_{jb}(h_j)\}\big)\Big] \quad (7)$$

Note that 1) we can only compute the marginal distributions up to the normalization constant and 2) in accordance with the semantics of undirected models, there is no marginal independence between the variables (but rather conditional independence).

2.1 Training EF-Harmoniums using Contrastive Divergence

Let $\tilde p(\{x_i\})$ denote the data distribution (or the empirical distribution in case we observe a finite dataset), and p the model distribution. Under the maximum likelihood objective the learning rules for the EFH are conceptually simple (Footnote 2),

$$\delta\theta_{ia} \propto \langle f_{ia}(x_i)\rangle_{\tilde p} - \langle f_{ia}(x_i)\rangle_p, \qquad \delta\lambda_{jb} \propto \langle B'_{jb}(\hat\lambda_{jb})\rangle_{\tilde p} - \langle B'_{jb}(\hat\lambda_{jb})\rangle_p \quad (8)$$

Footnote 2: These learning rules are derived by taking derivatives of the log-likelihood objective using Eqn. 6.
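To make Eqns. 4, 5 and 8 concrete, here is a sketch of contrastive divergence training for the all-binary special case (the restricted Boltzmann machine of [5]), where the shifted-parameter conditionals reduce to logistic sigmoids and one truncated Gibbs step supplies the negative-phase statistics. The toy data, sizes, and learning rate are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(X, W, theta, lam, lr=0.1):
    # positive phase: shifted-parameter conditional of Eqn. 5
    ph = sigmoid(lam + X @ W)                      # p(h_j = 1 | x)
    h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden units
    # one Gibbs step back through the observed layer (Eqn. 4)
    px = sigmoid(theta + h @ W.T)                  # p(x_i = 1 | h)
    Xneg = (rng.random(px.shape) < px).astype(float)
    phneg = sigmoid(lam + Xneg @ W)
    # contrastive divergence approximation to the rules of Eqn. 8
    n = X.shape[0]
    W += lr * (X.T @ ph - Xneg.T @ phneg) / n
    theta += lr * (X - Xneg).mean(axis=0)
    lam += lr * (ph - phneg).mean(axis=0)
    return W, theta, lam

# toy data: two complementary binary prototype patterns
X = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 50, dtype=float)
W = 0.01 * rng.normal(size=(6, 2))
theta = np.zeros(6)
lam = np.zeros(2)
for _ in range(500):
    W, theta, lam = cd1_step(X, W, theta, lam)
```

After training, a mean-field reconstruction through the hidden layer should track the prototypes, illustrating how the truncated chains stand in for equilibrium samples.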
Their simplicity is somewhat deceptive, however, since the averages ⟨·⟩p are intractable to compute analytically and Markov chain sampling or mean field calculations are typically wheeled out to approximate them. Both have difficulties: mean field can only represent one mode of the distribution and MCMC schemes are slow and suffer from high variance in their estimates. In the case of binary harmoniums (restricted BMs) it was shown in [5] that contrastive divergence has the potential to greatly improve on the efficiency and reduce the variance of the estimates needed in the learning rules. The idea is that instead of running the Gibbs sampler to its equilibrium distribution we initialize Gibbs samplers on each data-vector and run them for only one (or a few) steps in parallel. Averages ⟨·⟩p in the learning rules Eqns.8,9 are now replaced by averages ⟨·⟩pCD where pCD is the distribution of samples that resulted from the truncated Gibbs chains. This idea is readily generalized to EFHs. Due to space limitations we refer to [5] for more details on contrastive divergence learning3. Deterministic learning rules can also be derived straightforwardly by generalizing the results described in [12] to the exponential family. 3 A Harmonium Model for Latent Semantic Indexing To illustrate the new possibilities that have opened up by extending harmoniums to the exponential family we will next describe a novel model for latent semantic indexing (LSI). This will represent the undirected counterpart of pLSI [6] and LDA [1]. One of the major drawbacks of LSI is that inherently discrete data (word counts) are being modelled with variables in the continuous domain. The power of LSI on the other hand is that it provides an efficient mapping of the input data into a lower dimensional (continuous) latent space that has the effect of de-noising the input data and inferring semantic relationships among words. 
To stay faithful to this idea and to construct a probabilistic model on the correct (discrete) domain, we propose the following EFH with continuous latent topic variables, h_j, and discrete word-count variables, x_{ia},

$$p(\{h_j\}|\{x_{ia}\}) = \prod_{j=1}^{M_h} N_{h_j}\Big[\sum_{ia} W_{ia}^{j}\, x_{ia},\ 1\Big] \quad (10)$$

$$p(\{x_{ia}\}|\{h_j\}) = \prod_{i=1}^{M_x} S_{\{x_{ia}\}}\Big[\alpha_{ia} + \sum_j h_j W_{ia}^{j}\Big] \quad (11)$$

Note that $\{x_{ia}\}$ represent indicator variables satisfying $\sum_a x_{ia} = 1\ \forall i$, where $x_{ia} = 1$ means that word "i" in the vocabulary was observed "a" times. $N_h[\mu, \sigma]$ denotes a normal distribution with mean μ and standard deviation σ, and $S_{\{x_a\}}[\gamma_a] \propto \exp(\sum_{a=1}^{D} \gamma_a x_a)$ is the softmax function defining a probability distribution over x. Using Eqn. 6 we can easily deduce the marginal distribution of the input variables,

$$p(\{x_{ia}\}) \propto \exp\Big[\sum_{ia} \alpha_{ia} x_{ia} + \frac{1}{2}\sum_j \Big(\sum_{ia} W_{ia}^{j} x_{ia}\Big)^2\Big] \quad (12)$$

We observe that the role of the components $W_{ia}^{j}$ is that of templates or prototypes: input vectors $x_{ia}$ with large inner products $\sum_{ia} W_{ia}^{j} x_{ia}\ \forall j$ will have high probability under this model. Just like pLSI and LDA can be considered as natural generalizations of factor analysis (which underlies LSI) into the class of directed models on the discrete domain, the above model can be considered the natural generalization of factor analysis into the class of undirected models on the discrete domain. This idea is supported by the result that the same model with Gaussian units in both hidden and observed layers is in fact equivalent to factor analysis [7].

Footnote 3: Non-believers in contrastive divergence are invited to simply run the Gibbs sampler to equilibrium before they do an update of the parameters. They will find that due to the special bipartite structure of EFHs, learning is still more efficient than for general Boltzmann machines.

3.1 Identifiability

From the form of the marginal distribution Eqn. 12 we can derive a number of transformations of the parameters that will leave the distribution invariant.
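Returning to the conditionals in Eqns. 10-11, a small numerical sketch with a hypothetical vocabulary of Mx words, D count bins, and Mh topics (all sizes and parameter values are illustrative assumptions): the latent topics are Gaussian with mean given by a linear projection of the count indicators, and each word's count distribution is an independent softmax.

```python
import numpy as np

rng = np.random.default_rng(2)
Mx, D, Mh = 5, 4, 3                      # words, count bins, latent topics
W = 0.1 * rng.normal(size=(Mh, Mx, D))   # components W^j_{ia}
alpha = 0.1 * rng.normal(size=(Mx, D))   # offsets alpha_{ia}

def latent_mean(x):
    # Eqn. 10: h_j | x is Gaussian with mean sum_{ia} W^j_{ia} x_{ia} and std 1
    return np.tensordot(W, x, axes=([1, 2], [0, 1]))

def p_x_given_h(h):
    # Eqn. 11: independent softmax over the D count bins of each word i
    g = alpha + np.tensordot(h, W, axes=(0, 0))   # shape (Mx, D)
    g -= g.max(axis=1, keepdims=True)             # numerical stability
    e = np.exp(g)
    return e / e.sum(axis=1, keepdims=True)

# an indicator document: each word i observed in exactly one count bin
x = np.zeros((Mx, D))
x[np.arange(Mx), rng.integers(0, D, size=Mx)] = 1.0
h = latent_mean(x)    # the document's latent topic representation
```

Because the Gaussian conditional has unit variance, the mean in Eqn. 10 doubles as the latent representation used later for retrieval.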
First we note that the components $W_{ia}^{j}$ can be rotated and mirrored arbitrarily in latent space (Footnote 4): $W_{ia}^{j} \to \sum_k U_{jk} W_{ia}^{k}$ with $U^T U = I$. Secondly, we note that the observed variables $x_{ia}$ satisfy a constraint, $\sum_a x_{ia} = 1\ \forall i$. This results in a combined shift invariance for the components $W_{ia}^{j}$ and the offsets $\alpha_{ia}$. Taken together, this results in the following set of transformations,

$$W_{ia}^{j} \to \sum_k U_{jk}\big(W_{ia}^{k} + V_{i}^{k}\big), \qquad \alpha_{ia} \to (\alpha_{ia} + \beta_i) - \sum_j \Big(\sum_l V_{l}^{j}\Big) W_{ia}^{j} \quad (13)$$

where $U^T U = I$. Although these transformations leave the marginal distribution over the observable variables invariant, they do change the latent representation and as such may have an impact on retrieval performance (if we use a fixed similarity measure between topic representations of documents). To fix the spurious degrees of freedom we have chosen to impose conditions on the representations in latent space: $h_j^n = \sum_{ia} W_{ia}^{j} x_{ia}^n$. First, we center the latent representations, which has the effect of minimizing the "activity" of the latent variables and moving as much log-probability as possible to the constant component $\alpha_{ia}$. Next we align the axes in latent space with the eigen-directions of the latent covariance matrix. This has the effect of approximately decorrelating the marginal latent activities. This follows because the marginal distribution in latent space can be approximated by $p(\{h_j\}) \approx \frac{1}{N}\sum_n \prod_j N_{h_j}\big[\sum_{ia} W_{ia}^{j} x_{ia}^n,\ 1\big]$, where we have used Eqn. 10 and replaced $p(\{x_{ia}\})$ by its empirical distribution. Denoting by μ and $\Sigma = U^T \Lambda U$ the sample mean and sample covariance of $\{h_j^n\}$, it is not hard to show that the following transformation will have the desired effect (Footnote 5):

$$W_{ia}^{j} \to \sum_k U_{jk}\Big(W_{ia}^{k} - \frac{1}{M_x}\mu_k\Big), \qquad \alpha_{ia} \to \alpha_{ia} + \sum_j \mu_j W_{ia}^{j} \quad (14)$$

One could go one step further than the de-correlation process described above by introducing covariances Σ in the conditional Gaussian distribution of the latent variables, Eqn. 10.
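The centering and decorrelation of Eqn. 14 amounts, at the level of the representations, to subtracting the sample mean of the $h^n$ and rotating onto the eigenbasis of their sample covariance. A sketch operating directly on the latent vectors (random data stands in for real projections $h_j^n = \sum_{ia} W_{ia}^j x_{ia}^n$; the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical raw latent representations h^n (N documents x 3 topics),
# deliberately correlated and off-center
H = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3)) + 2.0

mu = H.mean(axis=0)                          # sample mean
Sigma = np.cov(H, rowvar=False, bias=True)   # sample covariance, Sigma = U^T Lambda U
evals, V = np.linalg.eigh(Sigma)             # Sigma = V diag(evals) V^T, i.e. U = V^T

# gauged representations: centered, then aligned with the eigen-directions,
# i.e. each row h is replaced by U (h - mu)
H_gauged = (H - mu) @ V
```

After the transformation the latent activities have zero mean and a diagonal sample covariance, which is exactly the condition the paper imposes to fix the spurious rotation and shift freedoms.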
This would not result in a more general model because the effect of this on the marginal distribution over the observed variables is given by $W_{ia}^{j} \to \sum_k K_{jk} W_{ia}^{k}$ with $K K^T = \Sigma$. However, the extra freedom can be used to define axes in latent space for which the projected data become approximately independent and have the same scale in all directions.

Footnote 4: Technically we call this the Euclidean group of transformations.

Footnote 5: Some spurious degrees of freedom remain, since shifts $\beta_i$ and shifts $V_i^j$ that satisfy $\sum_i V_i^j = 0$ will not affect the projection into latent space. One could decide to fix the remaining degrees of freedom by, for example, requiring that components are as small as possible in L2 norm (subject to the constraint $\sum_i V_i^j = 0$), leading to the further shifts $W_{ia}^{j} \to W_{ia}^{j} - \frac{1}{D}\sum_a W_{ia}^{j} + \frac{1}{D M_x}\sum_{ia} W_{ia}^{j}$ and $\alpha_{ia} \to \alpha_{ia} - \frac{1}{D}\sum_a \alpha_{ia}$.

Figure 1: Precision-recall curves when the query was (a) entire documents, (b) 1 keyword, (c) 2 keywords, for the EFH with and without 10 MF iterations, LSI, TF-IDF weighted words, and random guessing. PR curves with more keywords looked very similar to (c). A marker at position k (counted from the left along a curve) indicates that $2^{k-1}$ documents were retrieved.

4 Experiments

Newsgroups: We have used the reduced version of the "20newsgroups" dataset prepared for MATLAB by Roweis (Footnote 6). Documents are presented as 100-dimensional binary occurrence vectors and tagged as a member of 1 out of 4 domains.
Documents contain approximately 4% of the words, averaged across the 16242 postings. An EFH model with 10 latent variables was trained on 12000 training cases using stochastic gradient descent on mini-batches of 1000 randomly chosen documents (training time approximately 1 hour on a 2GHz PC). A momentum term was added to speed up convergence. To test the quality of the trained model we mapped the remaining 4242 query documents into latent space using $h_j = \sum_{ia} W_{ia}^{j} x_{ia}$, where $\{W_{ia}^{j}, \alpha_{ia}\}$ were "gauged" as in Eqn. 14. Precision-recall curves were computed by comparing training and query documents using the usual "cosine coefficient" (cosine of the angle between documents) and reporting success when the retrieved document was in the same domain as the query (results averaged over all queries). In figure 1a we compare the results with LSI (also 10 dimensions) [3], where we preprocessed the data in the standard way (x → log(1 + x) and entropy weighting of the words), and to similarity in word space using TF-IDF weighting of the words. In figure 1b,c we show PR curves when only 1 or 2 keywords were provided, corresponding to randomly observed words in the query document. The EFH model allows a principled way to deal with unobserved entries by inferring them using the model (in all other methods we insert 0 for the unobserved entries, which corresponds to ignoring them). We have used a few iterations of mean field to achieve that: $\hat{x}_{ia} \to \exp\big[\alpha_{ia} + \sum_{jb}\big(\sum_k W_{ia}^{k} W_{jb}^{k}\big)\hat{x}_{jb}\big]/\gamma_i$, where $\gamma_i$ is a normalization constant and the $\hat{x}_{ia}$ represent probabilities: $\hat{x}_{ia} \in [0, 1]$, $\sum_{a=1}^{D} \hat{x}_{ia} = 1\ \forall i$. We note that this is still highly efficient and achieves a significant improvement in performance. In all cases we find that without any preprocessing or weighting EFH still outperforms the other methods except when large numbers of documents were retrieved.
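The retrieval protocol above (rank training documents by the cosine coefficient with a query's latent representation, and count a success when the domains match) can be sketched as follows; the tiny vectors and labels are made-up illustrations, not the newsgroups data:

```python
import numpy as np

def cosine_scores(train, query):
    # cosine coefficient between the query and every training representation
    num = train @ query
    den = np.linalg.norm(train, axis=1) * np.linalg.norm(query) + 1e-12
    return num / den

def precision_recall(scores, labels, query_label, k):
    top = np.argsort(-scores)[:k]             # the k highest-scoring documents
    hits = np.sum(labels[top] == query_label) # retrieved documents in the query's domain
    return hits / k, hits / np.sum(labels == query_label)

# toy 2-D latent representations with domain labels
train = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
labels = np.array([0, 1, 0])
scores = cosine_scores(train, np.array([1.0, 0.2]))
p, r = precision_recall(scores, labels, query_label=0, k=1)
```

Sweeping k (the paper uses k = 2^(k-1) markers) traces out one precision-recall curve per query; averaging over queries gives curves like those in Figure 1.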
In the next experiment we compared the performance of EFH, LSI and LDA by training models on a random subset of 15430 documents with 5 and 10 latent dimensions (this was found to be close to optimal for LDA). The EFH and LSI models were trained as in the previous experiment, while the training and testing details for LDA (Footnote 7) can be found in [9]. For the remaining test documents we clamped a varying number of observed words and asked the models to predict the remaining observed words in the documents by computing the probabilities for all words in the vocabulary to be present and ranking them (see the previous paragraph for details). By comparing the list of the R remaining observed words in the document with the top-R ranked inferred words, we computed the fraction of correctly predicted words. The results are shown in figure 2a as a function of the number of clamped words.

Footnote 6: http://www.cs.toronto.edu/~roweis/data.html

Footnote 7: The approximate inference procedure was implemented using Gibbs sampling.

Figure 2: (a) Fraction of observed words that was correctly predicted by EFH, LSI and LDA using 5 and 10 latent variables when we vary the number of keywords (observed words that were "clamped"), (b) latent 3-D representations of newsgroups data, (c) fraction of documents retrieved by EFH on the NIPS dataset which was also retrieved by the TF-IDF method.
To provide anecdotal evidence that EFH can infer semantic relationships we clamped the words ’drive’ ’driver’ and ’car’ which resulted in: ’car’ ’drive’ ’engine’ ’dealer’ ’honda’ ’bmw’ ’driver’ ’oil’ as the most probable words in the documents. Also, clamping ’pc’ ’driver’ and ’program’ resulted in: ’windows’ ’card’ ’dos’ ’graphics’ ’software’ ’pc’ ’program’ ’files’. NIPS Conference Papers: Next we trained a model with 5 latent dimensions on the NIPS dataset8 which has a large vocabulary size (13649 words) and contains 1740 documents of which 1557 were used for training and 183 for testing. Count values were redistributed in 12 bins. The array W contains therefore 5 · 13649 · 12 = 818940 parameters. Training was completed in the order of a few days. Due to the lack of document labels it is hard to assess the quality of the trained model. We choose to compare performance on document retrieval with the “golden standard”: cosine similarity in TF-IDF weighted word space. In figure 2c we depict the fraction of documents retrieved by EFH that was also retrieved by TF-IDF as we vary the number of retrieved documents. This correlation is indeed very high but note that EFH computes similarity in a 5-D space while TF-IDF computes similarity in a 13649-D space. 5 Discussion The main point of this paper was to show that there is a flexible family of 2-layer probabilistic models that represents a viable alternative to 2-layer causal (directed) models. These models enjoy very different properties and can be trained efficiently using contrastive divergence. As an example we have studied an EFH alternative for latent semantic indexing where we have found that the EFH has a number of favorable properties: fast inference allowing fast document retrieval and a principled approach to retrieval with keywords. 
These were preliminary investigations and it is likely that domain-specific adjustments, such as a more intelligent choice of features or parameterization, could further improve performance. Previous examples of EFH include the original harmonium [10], Gaussian variants thereof [7], and the PoT model [13], which couples a gamma distribution with the covariance of a normal distribution. Some exponential family extensions of general Boltzmann machines were proposed in [2], [14], but they do not have the bipartite structure that we study here. While the components of the Gaussian-multinomial EFH act as prototypes or templates for highly probable input vectors, the components of the PoT act as constraints (i.e. input vectors with large inner product have low probability). This can be traced back to the shape of the non-linearity B in Eqn. 6. Although by construction B must be convex (it is the log-partition function), for large input values it can be either positive (prototypes, e.g. B(x) = x^2) or negative (constraints, e.g. B(x) = −log(1 + x)). It has proven difficult to jointly model both prototypes and constraints in this formalism except for the fully Gaussian case [11]. A future challenge is therefore to start the modelling process with the desired non-linearity and to subsequently introduce auxiliary variables to facilitate inference and learning.

Footnote 8: Obtained from http://www.cs.toronto.edu/~roweis/data.html.

References

[1] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[2] C. K. I. Williams. Continuous valued Boltzmann machines. Technical report, 1993.
[3] S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391–407, 1990.
[4] Y. Freund and D. Haussler. Unsupervised learning of distributions of binary vectors using 2-layer networks.
In Advances in Neural Information Processing Systems, volume 4, pages 912–919, 1992.
[5] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[6] T. Hofmann. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI'99, Stockholm, 1999.
[7] T. K. Marks and J. R. Movellan. Diffusion networks, products of experts, and factor analysis. Technical Report UCSD MPLab TR 2001.02, University of California San Diego, 2001.
[8] B. Marlin and R. Zemel. The multiple multiplicative factor model for collaborative filtering. In Proceedings of the 21st International Conference on Machine Learning, volume 21, 2004.
[9] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. The author-topic model for authors and documents. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, volume 20, 2004.
[10] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. McGraw-Hill, New York, 1986.
[11] M. Welling, F. Agakov, and C. K. I. Williams. Extreme components analysis. In Advances in Neural Information Processing Systems, volume 16, Vancouver, Canada, 2003.
[12] M. Welling and G. E. Hinton. A new learning algorithm for mean field Boltzmann machines. In Proceedings of the International Conference on Artificial Neural Networks, Madrid, Spain, 2001.
[13] M. Welling, G. E. Hinton, and S. Osindero. Learning sparse topographic representations with products of student-t distributions. In Advances in Neural Information Processing Systems, volume 15, Vancouver, Canada, 2002.
[14] R. Zemel, C. Williams, and M. Mozer. Lending direction to neural networks. Neural Networks, 8(4):503–512, 1995.
2004
Kernels for Multi–task Learning Charles A. Micchelli Department of Mathematics and Statistics State University of New York, The University at Albany 1400 Washington Avenue, Albany, NY, 12222, USA Massimiliano Pontil Department of Computer Sciences University College London Gower Street, London WC1E 6BT, England, UK Abstract This paper provides a foundation for multi–task learning using reproducing kernel Hilbert spaces of vector–valued functions. In this setting, the kernel is a matrix–valued function. Some explicit examples will be described which go beyond our earlier results in [7]. In particular, we characterize classes of matrix– valued kernels which are linear and are of the dot product or the translation invariant type. We discuss how these kernels can be used to model relations between the tasks and present linear multi–task learning algorithms. Finally, we present a novel proof of the representer theorem for a minimizer of a regularization functional which is based on the notion of minimal norm interpolation. 1 Introduction This paper addresses the problem of learning a vector–valued function f : X →Y, where X is a set and Y a Hilbert space. We focus on linear spaces of such functions that admit a reproducing kernel, see [7]. This study is valuable from a variety of perspectives. Our main motivation is the practical problem of multi–task learning where we wish to learn many related regression or classification functions simultaneously, see eg [3, 5, 6]. For instance, image understanding requires the estimation of multiple binary classifiers simultaneously, where each classifier is used to detect a specific object. Specific examples include locating a car from a pool of possibly similar objects, which may include cars, buses, motorbikes, faces, people, etc. Some of these objects or tasks may share common features so it would be useful to relate their classifier parameters. 
Other examples include multi–modal human computer interface which requires the modeling of both, say, speech and vision, or tumor prediction in bioinformatics from multiple micro–array datasets. Moreover, the spaces of vector–valued functions described in this paper may be useful for learning continuous transformations. In this case, X is a space of parameters and Y a Hilbert space of functions. For example, in face animation X represents pose and expression of a face and Y a space of functions IR2 →IR, although in practice one considers discrete images in which case f(x) is a finite dimensional vector whose components are associated to the image pixels. Other problems such as image morphing, can be formulated as vector–valued learning. When Y is an n−dimensional Euclidean space, one straightforward approach in learning a vector–valued function f = (f1, . . . , fn) consists in separately representing each component of f by a linear space of smooth functions and then learn these components independently, for example by minimizing some regularized error functional. This approach does not capture relations between components of f (which are associated to tasks or pixels in the examples above) and should not be the method of choice when these relations occur. In this paper we investigate how kernels can be used for representing vector–valued functions. We proposed to do this by using a matrix–valued kernel K : X × X →IRn×n that reflects the interaction amongst the components of f. This paper provides a foundation for this approach. For example, in the case of support vector machines (SVM’s) [10], appropriate choices of the matrix–valued kernel implement a trade–off between large margin of each per–task SVM and large margin of combinations of these SVM’s, eg their average. The paper is organized as follows. 
In section 2 we formalize the above observations and show that reproducing kernel Hilbert spaces (RKHS) of vector-valued functions admit a kernel with values which are bounded linear operators on the output space Y, and we characterize the form of some of these operators in section 3. Finally, in section 4 we provide a novel proof for the representer theorem which is based on the notion of minimal norm interpolation and present linear multi-task learning algorithms.

2 RKHS of vector-valued functions

Let Y be a real Hilbert space with inner product (·, ·), X a set, and H a linear space of functions on X with values in Y. We assume that H is also a Hilbert space with inner product ⟨·, ·⟩. We present two methods to enhance standard RKHS to vector-valued functions.

2.1 Matrix-valued kernels based on Aronszajn

The first approach extends the scalar case, Y = IR, in [2].

Definition 1 We say that H is a reproducing kernel Hilbert space (RKHS) of functions f : X → Y when, for any y ∈ Y and x ∈ X, the linear functional which maps f ∈ H to (y, f(x)) is continuous on H.

We conclude from the Riesz Lemma (see, e.g., [1]) that, for every x ∈ X and y ∈ Y, there is a linear operator $K_x : Y \to H$ such that

$$(y, f(x)) = \langle K_x y, f\rangle. \quad (2.1)$$

For every x, t ∈ X we also introduce the linear operator K(x, t) : Y → Y defined, for every y ∈ Y, by

$$K(x, t)y := (K_t y)(x). \quad (2.2)$$

In the proposition below we state the main properties of the function K. To this end, we let L(Y) be the set of all bounded linear operators from Y into itself and, for every A ∈ L(Y), we denote by $A^*$ its adjoint. We also use $L_+(Y)$ to denote the cone of positive semidefinite bounded linear operators, i.e. $A \in L_+(Y)$ provided that, for every y ∈ Y, (y, Ay) ≥ 0. When this inequality is strict for all y ≠ 0 we say A is positive definite. We also denote by $IN_m$ the set of positive integers up to and including m. Finally, we say that H is normal provided there does not exist (x, y) ∈ X × (Y\{0}) such that the linear functional (y, f(x)) = 0 for all f ∈ H.
Proposition 1 If K(x, t) is defined, for every x, t ∈ X, by equation (2.2) and $K_x$ is given by equation (2.1), then the kernel K satisfies, for every x, t ∈ X, the following properties:

(a) For every y, z ∈ Y, we have that $(y, K(x, t)z) = \langle K_t z, K_x y\rangle$.

(b) K(x, t) ∈ L(Y), $K(x, t) = K(t, x)^*$, and $K(x, x) \in L_+(Y)$. Moreover, K(x, x) is positive definite for all x ∈ X if and only if H is normal.

(c) For any m ∈ IN, $\{x_j : j \in IN_m\} \subseteq X$, $\{y_j : j \in IN_m\} \subseteq Y$ we have that

$$\sum_{j,\ell\in IN_m} (y_j, K(x_j, x_\ell)\, y_\ell) \geq 0. \quad (2.3)$$

PROOF. We prove (a) by merely choosing $f = K_t z$ in equation (2.1) to obtain that

$$\langle K_x y, K_t z\rangle = (y, (K_t z)(x)) = (y, K(x, t)z). \quad (2.4)$$

Consequently, from this equation, we conclude that K(x, t) admits an algebraic adjoint K(t, x) defined everywhere on Y and, so, the uniform boundedness principle, see, eg, [1, p. 48], implies that K(x, t) ∈ L(Y) and $K(x, t) = K(t, x)^*$. Moreover, choosing t = x in (a) proves that $K(x, x) \in L_+(Y)$. As for the positive definiteness of K(x, x), merely use equation (2.1) and property (a). These remarks prove (b). As for (c), we again use property (a) to obtain that

$$\sum_{j,\ell\in IN_m} (y_j, K(x_j, x_\ell)\, y_\ell) = \sum_{j,\ell\in IN_m} \langle K_{x_j} y_j, K_{x_\ell} y_\ell\rangle = \Big\|\sum_{j\in IN_m} K_{x_j} y_j\Big\|^2 \geq 0.$$

This completes the proof.

For simplicity, we say that K : X × X → L(Y) is a matrix-valued kernel (or simply a kernel if no confusion will arise) if it satisfies properties (b) and (c). So far we have seen that if H is a RKHS of vector-valued functions, there exists a kernel. In the spirit of the Moore-Aronszajn theorem for RKHS of scalar functions [2], it can be shown that if K : X × X → L(Y) is a kernel then there exists a unique (up to an isometry) RKHS of functions from X to Y which admits K as the reproducing kernel. The proof parallels the scalar case. Given a vector-valued function f : X → Y we associate to it a scalar-valued function F : X × Y → IR defined by

$$F(x, \lambda) := (\lambda, f(x)), \quad x \in X,\ \lambda \in Y. \quad (2.5)$$

We let $H_1$ be the linear space of all such functions. Thus, $H_1$ consists of functions which are linear in their second variable.
We make $H_1$ into a Hilbert space by choosing ‖F‖ = ‖f‖. It then follows that $H_1$ is a RKHS with reproducing scalar-valued kernel defined, for all (x, y), (t, z) ∈ X × Y, by the formula

$$K_1((x, y), (t, z)) := (y, K(x, t)z). \quad (2.6)$$

2.2 Feature map

The second approach uses the notion of a feature map, see e.g. [9]. A feature map is a function Φ : X × Y → W where W is a Hilbert space. A feature map representation of a kernel K has the property that, for every x, t ∈ X and y, z ∈ Y, there holds the equation (Φ(x, y), Φ(t, z)) = (y, K(x, t)z). From equation (2.4) we conclude that every kernel admits a feature map representation (a Mercer type theorem) with W = H. With additional hypotheses on H and Y this representation can take a familiar form,

$$K_{\ell q}(x, t) = \sum_{r\in IN} \Phi_r^\ell(x)\, \Phi_r^q(t), \quad \ell, q \in IN. \quad (2.7)$$

Much more importantly, we may begin with a feature map $\Phi(x, \lambda) = ((\Phi^\ell(x), \lambda) : \ell \in IN)$ where λ ∈ W, this being the space of square-summable sequences on IN. We wish to learn a function f : X → Y where Y = W and $f = (f_\ell : \ell \in IN)$ with $f_\ell = (w, \Phi^\ell) := \sum_{r\in IN} w_r \Phi_r^\ell$ for each ℓ ∈ IN, where w ∈ W. We choose ‖f‖ = ‖w‖ and conclude that the space of all such functions is a Hilbert space of functions from X to Y with kernel (2.7). These remarks connect feature maps to kernels and vice versa. Note that a kernel may have many feature maps which represent it, and a feature map representation for a kernel may not be the appropriate way to write it for numerical computations.

3 Kernel construction

In this section we characterize a wide variety of kernels which are potentially useful for applications.

3.1 Linear kernels

A first natural question concerning RKHS of vector-valued functions is: if X is IR^d, what is the form of linear kernels? In the scalar case a linear kernel is a quadratic form, namely K(x, t) = (x, Qt), where Q is a d × d positive semidefinite matrix.
We claim that for Y = IRn any linear matrix–valued kernel K = (Kℓq : ℓ, q ∈INn) has the form Kℓq(x, t) = (Bℓx, Bqt), x, t ∈IRd (3.8) where the Bℓ are p × d matrices for some p ∈IN. To see that such a K is a kernel, simply note that K is in the Mercer form (2.7) for Φℓ(x) = Bℓx. On the other hand, since any linear kernel has a Mercer representation with linear features, we conclude that all linear kernels have the form (3.8). A special case is provided by choosing p = d and the Bℓ to be diagonal matrices. We note that the theory presented in section 2 can be naturally extended to the case where each component of the vector–valued function has a different input domain. This situation is important in multi–task learning, see eg [5]. To this end, we specify sets Xℓ, ℓ∈INn, functions gℓ : Xℓ → IR, and note that multi–task learning can be placed in the above framework by defining the input space X := X1 × X2 × · · · × Xn. We are interested in vector–valued functions f : X → IRn whose coordinates are given by fℓ(x) = gℓ(Pℓx), where x = (xℓ : xℓ ∈Xℓ, ℓ∈INn) and Pℓ : X → Xℓ is the projection operator defined, for every x ∈X, by Pℓ(x) = xℓ, ℓ∈INn. For ℓ, q ∈INn, we suppose kernel functions Cℓq : Xℓ × Xq → IR are given such that the matrix–valued kernel whose elements are defined as Kℓq(x, t) := Cℓq(Pℓx, Pqt), ℓ, q ∈INn, satisfies properties (b) and (c) of Proposition 1. An example of this construction is provided again by linear functions. Specifically, we choose Xℓ = IRdℓ, where dℓ ∈IN, and Cℓq(xℓ, tq) = (Qℓxℓ, Qqtq), xℓ ∈Xℓ, tq ∈Xq, where the Qℓ are p × dℓ matrices. In this case, the matrix–valued kernel K = (Kℓq : ℓ, q ∈INn) is given by Kℓq(x, t) = (QℓPℓx, QqPqt) (3.9) which is of the form in equation (3.8) for Bℓ = QℓPℓ, ℓ∈INn.

3.2 Combinations of kernels

The results in this section are based on a lemma by Schur which states that the elementwise product of two positive semidefinite matrices is also positive semidefinite, see [2, p. 358].
This result implies that, when Y is finite dimensional, the elementwise product of two matrix–valued kernels is also a matrix–valued kernel. Indeed, in view of the discussion at the end of section 2.2 we immediately conclude that the following two lemmas hold.

Lemma 1 If Y = IRn and K1 and K2 are matrix–valued kernels then their elementwise product is a matrix–valued kernel.

This result allows us, for example, to enhance the linear kernel (3.8) to a polynomial kernel. In particular, if r is a positive integer, we define, for every ℓ, q ∈INn, Kℓq(x, t) := (Bℓx, Bqt)^r and conclude that K = (Kℓq : ℓ, q ∈INn) is a kernel.

Lemma 2 If G : IRd × IRd → IR is a kernel and zℓ : X → IRd a vector–valued function, for ℓ∈INn, then the matrix–valued function K : X × X → IRn×n whose elements are defined, for every x, t ∈X, by Kℓq(x, t) = G(zℓ(x), zq(t)) is a matrix–valued kernel.

This lemma confirms, as a special case, that if zℓ(x) = Bℓx with Bℓ a p × d matrix, ℓ∈INn, and G : IRd × IRd → IR is a scalar–valued kernel, then the function (3.8) is a matrix–valued kernel. When G is chosen to be a Gaussian kernel, we conclude that Kℓq(x, t) = exp(−σ∥Bℓx − Bqt∥²) is a matrix–valued kernel. In the scalar case it is well–known that a nonnegative combination of kernels is a kernel. The next proposition extends this result to matrix–valued kernels.

Proposition 2 If Kj, j ∈INs, s ∈IN, are scalar–valued kernels and Aj ∈L+(Y) then the function K = ∑_{j∈INs} Aj Kj (3.10) is a matrix–valued kernel.

PROOF. For any x, t ∈X and c, d ∈Y we have that (c, K(x, t)d) = ∑_{j∈INs} (c, Aj d) Kj(x, t), and so the proposition follows from the Schur lemma.

Other results of this type can be found in [7]. The formula (3.10) can be used to generate a wide variety of matrix–valued kernels which have the flexibility needed for learning. For example, we obtain polynomial matrix–valued kernels by setting X = IRd and Kj(x, t) = (x, t)^j, where x, t ∈IRd.
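Both the Schur lemma and the polynomial construction above can be checked numerically. The following sketch is our own addition (the matrices are randomly generated): it first verifies the Hadamard-product property on random PSD matrices, then checks that the block Gram matrix of the polynomial kernel Kℓq(x, t) = (Bℓx, Bqt)^r, being the elementwise r-th power of the linear kernel's Gram matrix, stays positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(k):
    A = rng.standard_normal((k, k))
    return A @ A.T                              # Gram matrix, hence PSD

# Schur's lemma: the elementwise (Hadamard) product of PSD matrices is PSD
for _ in range(50):
    P, Q = random_psd(5), random_psd(5)
    assert np.linalg.eigvalsh(P * Q).min() > -1e-9

# polynomial kernel K_lq(x, t) = (B_l x, B_q t)^r: its block Gram matrix is the
# elementwise r-th power of the linear kernel's Gram matrix, so it is PSD too
d, n, p, r = 4, 3, 2, 3
B = [rng.standard_normal((p, d)) for _ in range(n)]   # hypothetical B_l matrices
xs = [rng.standard_normal(d) for _ in range(5)]
G_lin = np.block([[np.array([[(B[l] @ a) @ (B[q] @ b) for q in range(n)]
                             for l in range(n)]) for b in xs] for a in xs])
assert np.linalg.eigvalsh(G_lin).min() > -1e-8
assert np.linalg.eigvalsh(G_lin ** r).min() > -1e-8
```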
We remark that, generally, the kernel in equation (3.10) cannot be reduced to a diagonal kernel. An interesting case of Proposition 2 is provided by low rank kernels, which may be useful in situations where the components of f are linearly related, that is, for every f ∈H and x ∈X, f(x) lies in a linear subspace M ⊆Y. In this case, it is desirable to use a kernel which has the same property, that f(x) ∈M, x ∈X, for all f ∈H. We can ensure this by an appropriate choice of the matrices Aj. For example, if M = span({bj : j ∈INs}) we may choose Aj = bj b∗_j. Matrix–valued Gaussian mixtures are obtained by choosing X = IRd, Y = IRn, {σj : j ∈INs} ⊂IR+, and Kj(x, t) = exp(−σj∥x − t∥²). Specifically, K(x, t) = ∑_{j∈INs} Aj e^{−σj∥x−t∥²} is a kernel on X × X for any {Aj : j ∈INs} ⊆L+(IRn).

4 Regularization and minimal norm interpolation

Let V : Ym × IR+ → IR be a prescribed function and consider the problem of minimizing the functional E(f) := V((f(xj) : j ∈INm), ∥f∥²) (4.11) over all functions f ∈H. A special case is covered by functionals of the form E(f) := ∑_{j∈INm} Q(yj, f(xj)) + γ∥f∥² (4.12) where γ is a positive parameter and Q : Y × Y → IR+ is some prescribed loss function, eg the square loss. Within this general setting we provide a “representer theorem” for any function which minimizes the functional in equation (4.11). This result is well-known in the scalar case. Our proof technique uses the idea of minimal norm interpolation, a central notion in function estimation and interpolation.

Lemma 3 If y ∈{(f(xj) : j ∈INm) : f ∈H} ⊂IRm, the minimizer of the problem min{∥f∥² : f(xj) = yj, j ∈INm} (4.13) is unique and admits the form ˆf = ∑_{j∈INm} Kxj cj.

We refer to [7] for a proof. This approach achieves both simplicity and generality. For example, it can be extended to normed linear spaces, see [8]. Our next result establishes that any local minimizer¹ indeed has the same form as in Lemma 3. This result improves upon [9], where it is proven only for a global minimizer.
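Lemma 3 can be explored numerically. The sketch below is our own addition (the separable kernel K(x, t) = e^{−∥x−t∥²} A is an invented example): it solves the block linear system G c = y for the coefficients of ˆf = ∑_{j} K_{x_j} c_j and confirms that the expansion interpolates the data, with squared RKHS norm given by the quadratic form cᵀGc:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, d = 5, 2, 3
A = np.array([[2.0, 0.5], [0.5, 1.0]])             # fixed PSD output matrix

def K(x, t):
    """Separable matrix-valued kernel: scalar Gaussian times a PSD matrix A."""
    return A * np.exp(-np.sum((x - t) ** 2))

xs = rng.standard_normal((m, d))
ys = rng.standard_normal((m, n))

G = np.block([[K(a, b) for b in xs] for a in xs])  # mn x mn block Gram matrix
c = np.linalg.solve(G, ys.reshape(-1)).reshape(m, n)

def fhat(x):
    return sum(K(x, xj) @ cj for xj, cj in zip(xs, c))

# the kernel expansion interpolates the data exactly
assert all(np.allclose(fhat(xj), yj) for xj, yj in zip(xs, ys))
# its squared RKHS norm is the (nonnegative) quadratic form c^T G c
norm2 = c.reshape(-1) @ G @ c.reshape(-1)
assert norm2 >= 0
```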
Theorem 1 If for every y ∈Ym the function h : IR+ → IR+ defined for t ∈IR+ by h(t) := V(y, t) is strictly increasing, and f0 ∈H is a local minimum of E, then f0 = ∑_{j∈INm} Kxj cj for some {cj : j ∈INm} ⊆Y.

Proof: If g is any function in H such that g(xj) = 0, j ∈INm, and t is a real number such that |t|∥g∥ ≤ ϵ, for ϵ > 0, then V(y0, ∥f0∥²) ≤ V(y0, ∥f0 + tg∥²). Consequently, we have that ∥f0∥² ≤ ∥f0 + tg∥², from which it follows that (f0, g) = 0. Thus, f0 satisfies ∥f0∥ = min{∥f∥ : f(xj) = f0(xj), j ∈INm, f ∈H} and the result follows from Lemma 3.

4.1 Linear regularization

We comment on regularization for linear multi–task learning and therefore consider minimizing the functional R0(w) := ∑_{j∈INm} ∑_{ℓ∈INn} Q(yjℓ, (w, Bℓxj)) + γ∥w∥² (4.14) for w ∈IRp. We set uℓ = B∗_ℓ w, u = (uℓ : ℓ∈INn), and observe that the above functional is related to the functional R1(u) := ∑_{j∈INm} ∑_{ℓ∈INn} Q(yjℓ, (uℓ, xj)) + γJ(u) (4.15) where we have defined the minimum norm functional J(u) := min{∥w∥² : w ∈IRp, B∗_ℓ w = uℓ, ℓ∈INn}. (4.16)

(¹ A function f0 ∈H is a local minimum for E provided that there is a positive number ϵ such that whenever f ∈H satisfies ∥f0 − f∥ ≤ ϵ then E(f0) ≤ E(f).)

Specifically, we have min{R0(w) : w ∈IRp} = min{R1((Bℓw : ℓ∈INn)) : w ∈IRp}. The optimal solution ˆw of problem (4.16) is given by ˆw = ∑_{ℓ∈INn} Bℓ cℓ, where the vectors {cℓ : ℓ∈INn} ⊆IRd satisfy the linear equations ∑_{k∈INn} B∗_ℓ Bk ck = uℓ, ℓ∈INn, and J(u) = ∑_{ℓ,q∈INn} (uℓ, ˜B^{−1}_{ℓq} uq), provided the d×d block matrix ˜B = (B∗_ℓ Bq : ℓ, q ∈INn) is nonsingular. We note that this analysis can be extended to the case of different inputs across the tasks by replacing xj in equations (4.14) and (4.15) by xj,ℓ ∈IRdℓ and the matrix Bℓ by QℓPℓ; see section 3.1 for the definition of these quantities. As a special example we choose Bℓ to be the (n + 1)d × d matrix whose d × d blocks are all zero except for the 1st and (ℓ + 1)-th block, which are equal to c^{−1}Id and Id respectively, where c > 0 and Id is the d-dimensional identity matrix.
With this choice, the matrix–valued kernel K in equation (3.8) reduces to Kℓq(x, t) = (1/c² + δℓq)(x, t), ℓ, q ∈INn, x, t ∈IRd. (4.17) Moreover, in this case the minimum in (4.16) is given by

J(u) = c²/(n + c²) ∑_{ℓ∈INn} ∥uℓ∥² + n/(n + c²) ∑_{ℓ∈INn} ∥uℓ − (1/n) ∑_{q∈INn} uq∥². (4.18)

The model of minimizing (4.14) was proposed in [6] in the context of support vector machines (SVM's) for this special choice of matrices. The derivation presented here improves upon it. The regularizer (4.18) forces a trade–off between a desirably small size for the per–task parameters and closeness of each of these parameters to their average. This trade-off is controlled by the coupling parameter c. If c is small the task parameters are related (close to their average), whereas a large value of c means the tasks are learned independently. For SVM's, Q is the hinge loss function defined by Q(a, b) := max(0, 1 − ab), a, b ∈IR. In this case the above regularizer trades off a large margin for each per–task SVM with closeness of each SVM to the average SVM. Numerical experiments showing the good performance of the multi–task SVM compared to both independent per–task SVM's (ie, c = ∞ in equation (4.17)) and previous multi–task learning methods are also discussed in [6]. The analysis above can be used to derive other linear kernels. This can be done either by introducing the matrices Bℓ as in the previous example, or by modifying the functional on the right hand side of equation (4.15). For example, we choose an n × n symmetric matrix A all of whose entries are in the unit interval, and consider the regularizer

J(u) := (1/2) ∑_{ℓ,q∈INn} ∥uℓ − uq∥² Aℓq = ∑_{ℓ,q∈INn} (uℓ, uq) Lℓq (4.19)

where L = D − A with Dℓq = δℓq ∑_{h∈INn} Aℓh. The matrix A could be the weight matrix of a graph with n vertices and L the graph Laplacian, see eg [4]. The equation Aℓq = 0 means that tasks ℓ and q are not related, whereas Aℓq = 1 means a strong relation.
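Both closed forms above lend themselves to a direct numerical check. In this sketch of ours (the data u and the graph weights are invented), the coupling regularizer (4.18) is compared against the minimal-norm definition (4.16), and the Laplacian identity in (4.19) is verified:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, c = 3, 2, 1.7
u = rng.standard_normal((n, d))

# B_l: (n+1)d x d, all blocks zero except the 1st (= Id/c) and (l+1)-th (= Id)
I = np.eye(d)
B = []
for l in range(n):
    blocks = [I / c] + [np.zeros((d, d))] * n
    blocks[l + 1] = I
    B.append(np.vstack(blocks))

M = np.vstack([Bl.T for Bl in B])            # stacked constraints B_l^T w = u_l
w = np.linalg.pinv(M) @ u.reshape(-1)        # minimal-norm solution of (4.16)
J_direct = w @ w

ubar = u.mean(axis=0)
J_closed = (c**2 / (n + c**2)) * np.sum(u**2) \
         + (n / (n + c**2)) * np.sum((u - ubar) ** 2)       # equation (4.18)
assert abs(J_direct - J_closed) < 1e-9

# identity (4.19): (1/2) sum_{l,q} A_lq ||u_l - u_q||^2 = sum_{l,q} L_lq (u_l, u_q)
Adj = np.array([[0.0, 1.0, 0.2], [1.0, 0.0, 0.0], [0.2, 0.0, 0.0]])
L = np.diag(Adj.sum(axis=1)) - Adj           # graph Laplacian L = D - A
lhs = 0.5 * sum(Adj[l, q] * np.sum((u[l] - u[q]) ** 2)
                for l in range(n) for q in range(n))
rhs = sum(L[l, q] * (u[l] @ u[q]) for l in range(n) for q in range(n))
assert abs(lhs - rhs) < 1e-9
```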
In order to derive the matrix–valued kernel we note that (4.19) can be written as (u, ˜L u), where ˜L is the n × n block matrix whose (ℓ, q) block is the d × d matrix Id Lℓq. Thus, we define w = ˜L^{1/2} u so that we have uℓ = Pℓ ˜L^{−1/2} w (here ˜L^{−1} is the pseudoinverse), where Pℓ is the projection matrix from IRdn to IRd. Consequently, the feature map in equation (2.7) is given by Φℓ = Bℓ = ˜L^{−1/2} P∗_ℓ and we conclude that Kℓq(x, t) = (x, Pℓ ˜L^{−1} P∗_q t). Finally, as discussed in section 3.2, one can form polynomials or non-linear functions of the above linear kernels. By Theorem 1, the minimizer of (4.12) is still a linear combination of the kernel at the given data examples.

5 Conclusions and future directions

We have described reproducing kernel Hilbert spaces of vector–valued functions and discussed their use in multi–task learning. We have provided a wide class of matrix–valued kernels which should prove useful in applications. In the future it would be valuable to study learning methods, using convex optimization or Monte Carlo integration, for choosing the matrix–valued kernel. This problem seems more challenging than its scalar counterpart due to the possibly large dimension of the output space. Another important problem is to study error bounds for learning in these spaces. Such analysis can clarify the role played by the spectra of the matrix–valued kernel. Finally, it would be interesting to link the choice of matrix–valued kernels to the notion of relatedness between tasks discussed in [5].

Acknowledgments

This work was partially supported by EPSRC Grant GR/T18707/01 and NSF Grant No. ITR-0312113. We are grateful to Zhongying Chen, Head of the Department of Scientific Computation at Zhongshan University, for providing both of us with the opportunity to complete this work in a scientifically stimulating and friendly environment. We also wish to thank Andrea Caponnetto, Sayan Mukherjee and Tomaso Poggio for useful discussions.

References

[1] N.I. Akhiezer and I.M.
Glazman. Theory of linear operators in Hilbert spaces, volume I. Dover reprint, 1993.
[2] N. Aronszajn. Theory of reproducing kernels. Trans. AMS, 68:337–404, 1950.
[3] J. Baxter. A model for inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.
[4] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[5] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. Proc. of the 16-th Annual Conference on Learning Theory (COLT'03), 2003.
[6] T. Evgeniou and M. Pontil. Regularized multitask learning. Proc. of the 17-th SIGKDD Conf. on Knowledge Discovery and Data Mining, 2004.
[7] C.A. Micchelli and M. Pontil. On learning vector-valued functions. Neural Computation, 2004 (to appear).
[8] C.A. Micchelli and M. Pontil. A function representation for learning in Banach spaces. Proc. of the 17-th Annual Conf. on Learning Theory (COLT'04), 2004.
[9] B. Schölkopf, R. Herbrich, and A.J. Smola. A generalized representer theorem. Proc. of the 14-th Annual Conf. on Computational Learning Theory (COLT'01), 2001.
[10] V.N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
| 2004 | 150 | 2,564 |
Variational minimax estimation of discrete distributions under KL loss

Liam Paninski
Gatsby Computational Neuroscience Unit, University College London
liam@gatsby.ucl.ac.uk  http://www.gatsby.ucl.ac.uk/∼liam

Abstract

We develop a family of upper and lower bounds on the worst-case expected KL loss for estimating a discrete distribution on a finite number m of points, given N i.i.d. samples. Our upper bounds are approximation-theoretic, similar to recent bounds for estimating discrete entropy; the lower bounds are Bayesian, based on averages of the KL loss under Dirichlet distributions. The upper bounds are convex in their parameters and thus can be minimized by descent methods to provide estimators with low worst-case error; the lower bounds are indexed by a one-dimensional parameter and are thus easily maximized. Asymptotic analysis of the bounds demonstrates the uniform KL-consistency of a wide class of estimators as c = N/m → ∞ (no matter how slowly), and shows that no estimator is consistent for c bounded (in contrast to entropy estimation). Moreover, the bounds are asymptotically tight as c → 0 or ∞, and are shown numerically to be tight within a factor of two for all c. Finally, in the sparse-data limit c → 0, we find that the Dirichlet-Bayes (add-constant) estimator with parameter scaling like −c log(c) optimizes both the upper and lower bounds, suggesting an optimal choice of the “add-constant” parameter in this regime.

Introduction

The estimation of discrete distributions given finite data — “histogram smoothing” — is a canonical problem in statistics and is of fundamental importance in applications to language modeling, informatics, and safari organization (1–3). In particular, estimation of discrete distributions under Kullback-Leibler (KL) loss is of basic interest in the coding community, in the context of two-step universal codes (4, 5).
The problem has received significant attention from a variety of statistical viewpoints (see, e.g., (6) and references therein); in this work, we will focus on the “minimax” approach, that is, on developing estimators which work well even in the worst case, with the performance of an estimator measured by the average KL loss. The recent work of (7) and (8) has answered many of the important asymptotic questions in the heavily-sampled limit, where the number of data samples, N, is much larger than the number of support points, m, of the unknown distribution; in particular, the optimal (minimax) error rate has been identified in closed form in the case that m is fixed and N → ∞, and a simple estimator that asymptotically achieves this optimum has been described. Our goal here is to analyze further the opposite case, when N/m is bounded or even small (the sparse data case). It will turn out that the estimators which are asymptotically optimal as N/m → ∞ are far from optimal in this sparse data case, which may be considered more important for applications to modeling of large dictionaries. Much of our approach is influenced by the similarities to the entropy estimation problem (9–11), where the sparse data regime is also important for applications and of independent mathematical interest: how do we decide how much probability to assign to bins for which no samples, or very few samples, are observed? We will emphasize the similarities (and important differences) between these two problems throughout.

Upper bounds

The basic idea is to find a simple upper bound on the worst-case expected loss, and then to minimize this upper bound over some tractable class of possible estimators; the resulting optimized estimator will then be guaranteed to possess good worst-case properties. Clearly we want this upper bound to be as tight as possible, and the space of allowed estimators to be as large as possible, while still allowing easy minimization.
The approach taken here is to develop bounds which are convex in the estimator, and to allow the estimators to range over a large convex space; this implies that the minimization problem is tractable by descent methods, since no non-global local minima exist. We begin by defining the class of estimators we will be minimizing over: ˆp of the form ˆp_i = g(n_i) / ∑_{i=1}^m g(n_i), with n_i defined as the number of samples observed in bin i and the constants g_j ≡ g(j) taking values in the (N + 1)-dimensional convex space g_j ≥ 0; note that normalization of the estimated distribution is automatically enforced. The “add-constant” estimators, g_j = (j + α)/(N + mα), α > 0, are an important special case (7). After some rearrangement, the expected KL loss for these estimators satisfies

E_p⃗ (L(p⃗, ˆp)) = E_p⃗ [∑_{i=1}^m p_i log(p_i/ˆp_i)]
= ∑_i [−H(p_i) + ∑_{j=0}^N (−log g_j) p_i B_{N,j}(p_i)] + E_p⃗ [log(∑_{k=1}^m g(n_k))]
≤ ∑_i [−H(p_i) + ∑_j (−log g_j) p_i B_{N,j}(p_i)] + E_p⃗ [−1 + ∑_k g(n_k)]
= ∑_i f(p_i);

we have abbreviated by p⃗ the true underlying distribution, the entropy function H(t) = −t log t, the binomial functions B_{N,j}(t) = \binom{N}{j} t^j (1 − t)^{N−j}, and

f(t) = −H(t) − t + ∑_j (g_j − t log g_j) B_{N,j}(t).

Equality holds iff ∑_k g(n_k) is constant almost surely (as is the case, e.g., for any add-constant estimator). We have two distinct simple bounds on the above: first, the obvious

∑_{i=1}^m f(p_i) ≤ m max_{0≤t≤1} f(t),

which generalizes the bound considered in (7) (where a similar bound was derived asymptotically as N → ∞ for m fixed, and applied only to the add-constant estimators), or

∑_i f(p_i) ≤ m max_{0≤t≤1/m} f(t) + max_{1/m≤t≤1} f(t)/t,

which follows easily from ∑_i p_i = 1; see (11) for a proof. The above maxima are always achieved, by the compactness of the intervals and the continuity of the binomial and entropy functions. Again, the key point is that these bounds are uniform over all possible underlying p (that is, they bound the worst-case error). Why two bounds?
The first is nearly tight for N >> m (it is actually asymptotically possible to replace m with m − 1 in this limit, due to the fact that the p_i must sum to one; see (7, 8)), but grows linearly with m and thus cannot be tight for m comparable to or larger than N. In particular, the optimizer doesn't depend on m, only N (and hence the bound can't help but behave linearly in m). The second bound is much more useful (and, as we show below, tight) in the data-sparse regime N << m. The resulting minimization problems have a polynomial approximation flavor: we are trying to find an optimal set of weights g_j such that the sum in the definition of f(t) (a polynomial in t) will be as close to H(t) + t as possible. In this sense our approach is nearly identical to that recently followed for bounding the bias in the entropy estimation case (11, 12). There are three key differences, however: the term penalizing the variance in the entropy case is missing here, the approximation only has to be good from above, not from below as well (both making the problem easier), and the approximation is nonlinear, instead of linear, in g_j (making the problem harder). Indeed, we will see below that the entropy estimation problem is qualitatively easier than the estimation of the full distribution, despite the entropic form of the KL loss.

Smooth minimization algorithm

In the next subsections, we develop methods for minimizing these bounds as a function of g_j (that is, for choosing estimators with good worst-case properties). The first key point is that the bounds involve maxima over a collection of convex functions in g_j, and hence the bounds are convex in g_j; since the coefficients g_j take values in a convex set, no non-global local minima exist, and the global minimum can be found by simple descent procedures.
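The per-bin decomposition and the first worst-case bound above can be verified by brute force on a tiny example. This sketch is our own addition (the particular p, N, m, α are arbitrary choices); note that for an add-constant estimator the inequality E_p⃗ L ≤ ∑_i f(p_i) holds with equality:

```python
import numpy as np
from math import comb

m, N, alpha = 3, 5, 0.5
p = np.array([0.6, 0.3, 0.1])                       # an arbitrary true distribution
gs = np.array([(j + alpha) / (N + m * alpha) for j in range(N + 1)])  # add-alpha

def B(t, j):                                        # binomial B_{N,j}(t)
    return comb(N, j) * t**j * (1.0 - t)**(N - j)

def f(t):
    """f(t) = -H(t) - t + sum_j (g_j - t log g_j) B_{N,j}(t)."""
    return (t * np.log(t) - t
            + sum((gs[j] - t * np.log(gs[j])) * B(t, j) for j in range(N + 1)))

exact = 0.0                                         # exact expected KL loss
for n0 in range(N + 1):
    for n1 in range(N + 1 - n0):
        ns = np.array([n0, n1, N - n0 - n1])
        prob = comb(N, n0) * comb(N - n0, n1) * np.prod(p ** ns)
        exact += prob * np.sum(p * np.log(p / gs[ns]))  # phat_i = g(n_i), sums to 1

per_bin = sum(f(pi) for pi in p)
assert abs(per_bin - exact) < 1e-10                 # equality for add-constant g
ts = np.linspace(1e-6, 1 - 1e-6, 2001)
assert m * max(f(t) for t in ts) >= per_bin         # first worst-case upper bound
```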
One complicating factor is that the bounds are nondifferentiable in g_j: while methods for direct minimization of this type of L∞ error exist (13), they require that we track the location in t of the maximal error; since this argmax can jump discontinuously as a function of g_j, this interior maximization loop can be time-consuming. A more efficient solution is given by approximating this nondifferentiable objective function by smooth functions which retain the convexity of the original objective. We employ a Laplace approximation (albeit in a different direction than usual): use the fact that

max_{t∈A} h(t) = lim_{q→∞} (1/q) log ∫_{t∈A} e^{q h(t)} dt

for continuous h(t) and compact A; thus, letting h(t) = f(t), we can minimize

U_q({g_j}) ≡ ∫_0^1 e^{q f(t)} dt, or V_q({g_j}) ≡ log(∫_0^{1/m} e^{q m f(t)} dt) + log(∫_{1/m}^1 e^{q f(t)/t} dt),

for q increasing; these new objective functions are smooth, with easily-computable gradients, and are still convex, since f(t) is convex in g_j, convex functions are preserved under convex, increasing maps (i.e., the exponential), and sums of convex functions are convex. (In fact, since U_q is strictly convex in g for any q, the minima are unique, which to our knowledge is not necessarily the case for the original minimax problem.) It is easy to show that any limit point of the sequence of minimizers of the above problems will minimize the original problem; applying conjugate gradient descent for each q, with the previous minimizer as the seed for the minimization at the next largest q, worked well in practice.

Initialization; connection to Laplace estimator

It is now useful to look for suitable starting points for the minimization. For example, for the first bound, approximate the maximum by an integral, that is, find g_j to minimize

m ∫_0^1 dt [−H(t) − t + ∑_j (g_j − t log g_j) B_{N,j}(t)].

(Note that this can be thought of as the limit of the above U_q minimization problem as q → 0, as can be seen by expanding the exponential.)
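A minimal version of this smoothed minimization can be sketched in a few lines. The following is our own illustration, not the paper's implementation: grid quadrature and plain backtracking gradient descent stand in for the conjugate gradient scheme described in the text:

```python
import numpy as np
from math import comb

N, q = 5, 10.0
ts = np.linspace(1e-4, 1 - 1e-4, 400)
Bmat = np.array([[comb(N, j) * t**j * (1 - t)**(N - j) for j in range(N + 1)]
                 for t in ts])                      # B_{N,j}(t) on the grid
dt = ts[1] - ts[0]

def fvals(g):                                       # f(t) for weights g
    return ts * np.log(ts) - ts + Bmat @ g - ts * (Bmat @ np.log(g))

def U(g):                                           # U_q({g_j}) by quadrature
    return np.sum(np.exp(q * fvals(g))) * dt

def gradU(g):                                       # dU/dg_j = q int (1 - t/g_j) e^{qf} B dt
    w = np.exp(q * fvals(g))
    return q * dt * np.sum(w[:, None] * (1 - ts[:, None] / g) * Bmat, axis=0)

g = (np.arange(N + 1) + 1.0) / (N + 2.0)            # Laplace (add-one) start
U0 = U(g)
for _ in range(15):                                 # backtracking descent steps
    direction, lr = gradU(g), 1.0
    while lr > 1e-12:
        g_new = np.clip(g - lr * direction, 1e-8, None)
        if U(g_new) < U(g):
            g = g_new
            break
        lr *= 0.5
assert U(g) < U0                                    # the smooth objective decreased
```

Since U_q is convex in g, the decrease observed here is representative: any descent method converges to the global minimum.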
The g_j that minimizes this approximation to the upper bound is trivially derived as

g_j = ∫_0^1 t B_{N,j}(t) dt / ∫_0^1 B_{N,j}(t) dt = β(j + 2, N − j + 1)/β(j + 1, N − j + 1) = (j + 1)/(N + 2),

with β(a, b) = ∫_0^1 t^{a−1}(1 − t)^{b−1} dt defined as usual. The resulting estimator ˆp agrees exactly with “Laplace's estimator,” the add-α estimator with α = 1. Note, though, that to derive this g_j we completely ignore the first two terms (−H(t) − t) in the upper bound, and the resulting estimator can therefore be expected to be suboptimal (in particular, the g_j will be chosen too large, since −H(t) − t is strictly decreasing for t < 1). Indeed, we find that add-α estimators with α < 1 provide a much better starting point for the optimization, as expected given (7, 8). (Of course, for N/m large enough an asymptotically optimal estimator is given by the perturbed add-constant estimator of (8), and none of this numerical optimization is necessary.) In the limit as c = N/m → 0, we will see below that a better initialization point is the add-α estimator with parameter α ≈ H(c) = −c log c.

Fixed-point algorithm

On examining the gradient of the above problems with respect to g_j, a fixed-point algorithm may be derived. We have, for example, that

∂U/∂g_j = ∫_0^1 dt (1 − t/g_j) e^{q f(t)} B_{N,j}(t);

thus, analogously to the q → 0 case above, a simple update is given by

g_j^1 = ∫_0^1 t e^{q f^0(t)} B_{N,j}(t) dt / ∫_0^1 e^{q f^0(t)} B_{N,j}(t) dt,

which effectively corresponds to taking the mean of the binomial function B_{N,j}, weighted by the “importance” term e^{q f(t)}, which in turn is controlled by the proximity of t to the maximum of f^0(t) for q large. While this is an attractive strategy, conjugate gradient descent proved to be a more stable algorithm in our hands.

Lower bounds

Once we have found an estimator with good worst-case error, we want to compare its performance to some well-defined optimum.
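The beta-function identity behind Laplace's estimator is easy to confirm; this standalone check is our own addition (β is computed from the gamma function):

```python
from math import gamma

def beta(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)   # B(a,b) = Gamma(a)Gamma(b)/Gamma(a+b)

N = 10
for j in range(N + 1):
    g_j = beta(j + 2, N - j + 1) / beta(j + 1, N - j + 1)
    assert abs(g_j - (j + 1) / (N + 2)) < 1e-12   # g_j = (j+1)/(N+2)

# the resulting add-one ("Laplace") estimate for some example counts
counts = [3, 0, 1]
phat = [(n + 1) / (sum(counts) + len(counts)) for n in counts]
assert abs(sum(phat) - 1.0) < 1e-12
```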
To do this, we obtain lower bounds on the worst-case performance of any estimator (not just the class of ˆp we minimized over in the last section). Once again, we will derive a family of bounds indexed by some parameter α, and then optimize over α. Our lower bounds are based on the well-known fact that, for any proper prior distribution, the average (Bayesian) loss is less than or equal to the maximum (worst-case) loss. The most convenient class of priors to use here are the Dirichlet priors. Thus we will compute the average KL error under any Dirichlet distribution (interesting in its own right), then maximize over the possible Dirichlet priors (that is, find the “least favorable” Dirichlet prior) to obtain the tightest lower bound on the worst-case error; importantly, the resulting bounds will be nonasymptotic (that is, valid for all N and m). This approach therefore generalizes the asymptotic lower bound used in (7), who examined the KL loss under the special case of the uniform Dirichlet prior. See also (4) for a direct application of this idea to bound the average code length, and (14), who derived a lower bound on the average KL loss, again in the uniform Dirichlet case. We compute the Bayes error as follows. First, it is well-known (e.g., (9, 14)) that the KL-Bayes estimate of p⃗ given count data n⃗ (under any prior, not just the Dirichlet) is the posterior mean (interestingly, the KL loss shares this property with the squared error); for the Dirichlet prior with parameter α⃗, this conditional mean has the particularly simple form

E_{Dir(α⃗|n⃗)} p⃗ = (α⃗ + n⃗) / ∑_i (α_i + n_i),

with Dir(α⃗|n⃗) denoting the Dir(α⃗) density conditioned on data n⃗.
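The conjugacy Dir(α⃗|n⃗) = Dir(α⃗ + n⃗) and the posterior-mean formula above can be checked by simulation; this sketch is ours (the counts are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = np.array([0.5, 0.5, 0.5])            # symmetric Dirichlet parameter
nvec = np.array([4, 1, 0])                   # observed counts

# Dir(alpha | n) = Dir(alpha + n); its mean should match the closed form
samples = rng.dirichlet(alpha + nvec, size=200_000)
closed = (alpha + nvec) / (alpha.sum() + nvec.sum())
assert np.max(np.abs(samples.mean(axis=0) - closed)) < 5e-3
```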
Second, it is straightforward to show (14) that the conditional average KL error, given this estimate, has an appealing form: the entropy at the conditional mean minus the conditional mean entropy (one can easily check the strict positivity of this average error via the concavity of the vector entropy function H(p⃗) = −∑_i p_i log p_i). Thus we can write the average loss as

E_{Dir(α⃗)} [H((α⃗ + n⃗)/(∑_i α_i + n_i)) − E_{Dir(α⃗|n⃗)} H(p⃗)] = ∑_i E_{Dir(α⃗)} [H((α_i + n_i)/(N + ∑_i α_i)) − E_{Dir(α⃗+n⃗)} H(p_i)],

where the inner averages over p⃗ are under the Dirichlet distribution and the outer averages over n⃗ and n_i are under the corresponding Dirichlet-multinomial or Dirichlet-binomial mixtures (i.e., multinomials whose parameter p⃗ is itself Dirichlet distributed); we have used linearity of the expectation, ∑_i n_i = N, and Dir(α⃗|n⃗) = Dir(α⃗ + n⃗). Evaluating the right-hand side of the above, in turn, requires the formula

−E_{Dir(α⃗)} H(p_i) = (α_i/∑_i α_i) (ψ(α_i + 1) − ψ(1 + ∑_i α_i)),

with ψ(t) = (d/dt) log Γ(t); recall that ψ(t + 1) = ψ(t) + 1/t. All of the above may thus be easily computed numerically for any N, m, and α⃗; to simplify, however, we will restrict α⃗ to be constant, α⃗ = (α, α, . . . , α). This symmetrizes the above formulae; we can replace the outer sum with multiplication by m, and substitute ∑_i α_i = mα. Finally, abbreviating K = N + mα, we have that the worst-case error is bounded below by

(m/K) ∑_{j=0}^N p_{α,m,N}(j) (j + α) (−log((j + α)/K) + ψ(j + α) + 1/(j + α) − ψ(K) − 1/K), (1)

with p_{α,m,N}(j) the beta-binomial distribution

p_{α,m,N}(j) = \binom{N}{j} Γ(mα) Γ(j + α) Γ(K − (j + α)) / (Γ(K) Γ(α) Γ(mα − α)).

This lower bound is valid for all N, m, and α, and can be optimized numerically in the (scalar) parameter α in a straightforward manner.

Asymptotic analysis

In this section, we aim to understand some of the implications of the rather complicated expressions above, by analyzing them in some simplifying limits. Due to space constraints, we can only sketch the proof of each of the following statements.
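Equation (1) above is straightforward to evaluate numerically. The sketch below is our own (including a small digamma routine, to stay dependency-free); it checks that the beta-binomial weights sum to one and scans α for the tightest bound in a sparse regime:

```python
import numpy as np
from math import lgamma, log, exp

def digamma(x):
    """psi(x) via psi(x) = psi(x+1) - 1/x and an asymptotic series for x >= 6."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def lower_bound(N, m, alpha):
    """Bayes (hence worst-case) lower bound of equation (1) under Dir(alpha)."""
    K = N + m * alpha
    total, psum = 0.0, 0.0
    for j in range(N + 1):
        lp = (lgamma(N + 1) - lgamma(j + 1) - lgamma(N - j + 1)    # log C(N, j)
              + lgamma(m * alpha) + lgamma(j + alpha) + lgamma(K - j - alpha)
              - lgamma(K) - lgamma(alpha) - lgamma(m * alpha - alpha))
        pj = exp(lp)                             # beta-binomial p_{alpha,m,N}(j)
        psum += pj
        total += pj * (j + alpha) * (-log((j + alpha) / K) + digamma(j + alpha)
                                     + 1.0 / (j + alpha) - digamma(K) - 1.0 / K)
    assert abs(psum - 1.0) < 1e-8                # the weights form a distribution
    return (m / K) * total

N, m = 100, 1000                                 # sparse regime, c = N/m = 0.1
alphas = np.linspace(0.01, 1.0, 100)
vals = [lower_bound(N, m, a) for a in alphas]
assert max(vals) > 0.0                           # a nontrivial lower bound
best_alpha = float(alphas[int(np.argmax(vals))])
```

The α maximizing the scan plays the role of the "least-favorable" Dirichlet parameter discussed below.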
Proposition 1. Any add-α estimator, α > 0, is uniformly KL-consistent if N/m → ∞.

This is a simple generalization of a result of (7), who proved consistency for the special case of m fixed and N → ∞; the main point here is that N/m is allowed to tend to infinity arbitrarily slowly. The result follows on utilizing our first upper bound (the main difference between our analysis and that of (7) is that our bound holds for all m, N, whereas (7) focuses on the asymptotic case) and noting that max_{0≤t≤1} f(t) = O(1/N) for f(t) defined by any add-constant estimator; hence our upper bound is uniformly O(m/N). To obtain the O(1/N) bound, we plug in the add-constant g_j = (j + α)/N:

f(t) = α/N + t (log t − ∑_j log((j + α)/N) B_{N,j}(t)).

For t fixed, an application of the delta method implies that the sum looks like log(t + α/N) − (1 − t)/(2Nt); an expansion of the logarithm, in turn, implies that the right-hand side converges to (1/2N)(1 − t), for any fixed α > 0. On a 1/N scale, on the other hand, we have

N f(t/N) = α + t (log t − ∑_j log(j + α) B_{N,j}(t/N)),

which can be uniformly bounded above. In fact, as demonstrated by (7), the binomial sum on the right-hand side converges to the corresponding Poisson sum; interestingly, a similar Poisson sum plays a key role in the analysis of the entropy estimation case in (12). A converse follows easily from the lower bounds developed above:

Proposition 2. No estimator is uniformly KL-consistent if lim sup N/m < ∞.

Of course, it is intuitively clear that we need many more than m samples to estimate a distribution on m bins; our contribution here is a quantitative asymptotic lower bound on the error in the data-sparse regime. (A simpler but slightly weaker asymptotic bound may be developed from the lower bound given in (14).) Once again, we contrast with the entropy estimation case, where consistent estimators do exist in this regime (12). We let N, m → ∞, N/m → c, 0 < c < ∞.
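The claimed O(1/N) behaviour of max_t f(t) can be seen numerically; this small check is our own addition (grid maximization, arbitrary α, and the looseness of the constant 3 below is our hedge, not a statement from the paper):

```python
import numpy as np
from math import comb

def max_Nf(N, alpha, grid=2000):
    """N * max_t f(t) for the add-constant g_j = (j + alpha)/N."""
    ts = np.linspace(1e-6, 1 - 1e-6, grid)
    logg = np.array([np.log((j + alpha) / N) for j in range(N + 1)])
    Bmat = np.array([[comb(N, j) * t**j * (1 - t)**(N - j) for j in range(N + 1)]
                     for t in ts])
    f = alpha / N + ts * (np.log(ts) - Bmat @ logg)
    return N * f.max()

vals = [max_Nf(N, 0.5) for N in (20, 40, 80)]
assert all(0.0 < v < 3.0 for v in vals)   # N * max_t f(t) stays bounded as N grows
```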
The beta-binomial distribution has mean N/m and converges to a non-degenerate limit, which we'll denote p_{α,c}, in this regime. Using Fatou's lemma and ψ(t) = log(t) − 1/(2t) + O(t^{−2}), t → ∞, we obtain the asymptotic lower bound

(1/(c + α)) ∑_{j=0}^∞ p_{α,c}(j) (α + j) (−log(α + j) + ψ(α + j) + 1/(α + j)) > 0.

Also interestingly, it is easy to see that our lower bound behaves as ((m − 1)/2N)(1 + o(1)) as N/m → ∞ for any fixed positive α (since in this case ∑_{j=0}^k p_{α,m,N}(j) → 0 for any fixed finite k). Thus, comparing to the upper bound on the minimax error in (8), we have the somewhat surprising fact that:

Figure 1: Illustration of bounds and asymptotic results. N = 100, m varying. a. Numerically- and theoretically-obtained optimal (least-favorable) α, as a function of c = N/m; note close agreement. b. Numerical lower bounds and theoretical approximations; note the log-linear growth as c → 0. The j = 0 approximation is obtained by retaining only the j = 0 term of the sum in the lower bound (1); this approximation turns out to be sufficiently accurate in the c → 0 limit, while the (m − 1)/2N approximation is tight as c → ∞. c. Ratio comparison of upper to lower bounds. Dashed curve is the ratio obtained by plugging the asymptotically optimal estimator due to Braess-Sauer (8) into our upper bound; solid-dotted curve, the numerically least-favorable Dirichlet estimator; black solid curve, the optimized estimator. Note that curves for optimized and Braess-Sauer estimators are in constant proportion, since bounds are independent of m for c large enough. Most importantly, note that optimized bounds are everywhere tight within a factor of 2, and asymptotically tight as c → ∞ or c → 0.

Proposition 3. Any fixed-α Dirichlet prior is asymptotically least-favorable as N/m → ∞.
This generalizes Theorem 2 of (7) (and in fact, an alternate proof can be constructed on close examination of Krichevsky's proof of that result). Finally, we examine the optimizers of the bounds in the data-sparse limit, c = N/m → 0.

Proposition 4. The least-favorable Dirichlet parameter is given by H(c) as c → 0; the corresponding Bayes estimator also asymptotically minimizes the upper bound (and hence the bounds are asymptotically tight in this limit). The maximal and average errors grow as −log(c)(1 + o(1)), c → 0.

This is our most important asymptotic result. It suggests a simple and interesting rule of thumb for estimating distributions in this data-sparse limit: use the add-α estimator with α = H(c). When the data are very sparse (c sufficiently small) this estimator is optimal; see Fig. 1 for an illustration. The proof, which is longer than those of the above results but still fairly straightforward, has been omitted due to space constraints.

Discussion

We have omitted a detailed discussion of the form of the estimators which numerically minimize the upper bounds developed here; these estimators were empirically found to be perturbed add-constant estimators, with g_j growing linearly for large j but perturbed downward in the approximate range j < 10. Interestingly, in the heavily-sampled limit N >> m, the minimizing estimator provided by (8) again turns out to be a perturbed add-constant estimator. Further details will be provided elsewhere. We note an interesting connection to the results of (9), who find that 1/m scaling of the add-constant parameter α is empirically optimal for an entropy estimation application with large m. This 1/m scaling bears some resemblance to the optimal H(c) scaling that we find here, at least on a logarithmic scale (Fig. 1a); however, it is easy to see that the extra −log(c) term included here is useful.
As argued in (3), it is a good idea, in the data-sparse limit N ≪ m, to assign substantial probability mass to bins which have not seen any data samples. Since the total probability assigned to these bins by any add-α estimator scales in this limit as P(unseen) = mα/(N + mα), it is clear that the choice α ∼ 1/m decays too quickly. Finally, we note an important direction for future research: the upper bounds developed here turn out to be least tight in the range N ≈ m, when the optimum in the bound occurs near t = 1/m; in this case, our bounds can be loose by roughly a factor of two (exactly the degree of looseness we found in Fig. 1c). Thus it would be quite worthwhile to explore upper bounds which are tight in this N ≈ m range.

Acknowledgements: We thank Z. Ghahramani and D. Mackay for helpful conversations; LP is supported by an International Research Fellowship from the Royal Society.

References
1. D. Mackay, L. Peto, Natural Language Engineering 1, 289 (1995).
2. N. Friedman, Y. Singer, NIPS (1998).
3. A. Orlitsky, N. Santhanam, J. Zhang, Science 302, 427 (2003).
4. T. Cover, IEEE Transactions on Information Theory 18, 216 (1972).
5. R. Krichevsky, V. Trofimov, IEEE Transactions on Information Theory 27, 199 (1981).
6. D. Braess, H. Dette, Sankhya 66, 707 (2004).
7. R. Krichevsky, IEEE Transactions on Information Theory 44, 296 (1998).
8. D. Braess, T. Sauer, Journal of Approximation Theory 128, 187 (2004).
9. T. Schurmann, P. Grassberger, Chaos 6, 414 (1996).
10. I. Nemenman, F. Shafee, W. Bialek, NIPS 14 (2002).
11. L. Paninski, Neural Computation 15, 1191 (2003).
12. L. Paninski, IEEE Transactions on Information Theory 50, 2200 (2004).
13. G. Watson, Approximation theory and numerical methods (Wiley, Boston, 1980).
14. D. Braess, J. Forster, T. Sauer, H. Simon, Algorithmic Learning Theory 13, 380 (2002).
2004
Online Bounds for Bayesian Algorithms Sham M. Kakade Computer and Information Science Department University of Pennsylvania Andrew Y. Ng Computer Science Department Stanford University Abstract We present a competitive analysis of Bayesian learning algorithms in the online learning setting and show that many simple Bayesian algorithms (such as Gaussian linear regression and Bayesian logistic regression) perform favorably when compared, in retrospect, to the single best model in the model class. The analysis does not assume that the Bayesian algorithms’ modeling assumptions are “correct,” and our bounds hold even if the data is adversarially chosen. For Gaussian linear regression (using logloss), our error bounds are comparable to the best bounds in the online learning literature, and we also provide a lower bound showing that Gaussian linear regression is optimal in a certain worst case sense. We also give bounds for some widely used maximum a posteriori (MAP) estimation algorithms, including regularized logistic regression. 1 Introduction The last decade has seen significant progress in online learning algorithms that perform well even in adversarial settings (e.g. the “expert” algorithms of Cesa-Bianchi et al. (1997)). In the online learning framework, one makes minimal assumptions on the data presented to the learner, and the goal is to obtain good (relative) performance on arbitrary sequences. In statistics, this philosophy has been espoused by Dawid (1984) in the prequential approach. We study the performance of Bayesian algorithms in this adversarial setting, in which the process generating the data is not restricted to come from the prior—data sequences may be arbitrary. Our motivation is similar to that given in the online learning literature and the MDL literature (see Grunwald, 2005) —namely, that models are often chosen to balance realism with computational tractability, and often assumptions made by the Bayesian are not truly believed to hold (e.g. 
i.i.d. assumptions). Our goal is to study the performance of Bayesian algorithms in the worst-case, where all modeling assumptions may be violated. We consider the widely used class of generalized linear models—focusing on Gaussian linear regression and logistic regression—and provide relative performance bounds (comparing to the best model in our model class) when the cost function is the logloss. Though the regression problem has been studied in a competitive framework and, indeed, many ingenious algorithms have been devised for it (e.g., Foster, 1991; Vovk, 2001; Azoury and Warmuth, 2001) , our goal here is to study how the more widely used, and often simpler, Bayesian algorithms fare. Our bounds for linear regression are comparable to the best bounds in the literature (though we use the logloss as opposed to the square loss). The competitive approach to regression started with Foster (1991), who provided competitive bounds for a variant of the ridge regression algorithm (under the square loss). Vovk (2001) presents many competitive algorithms and provides bounds for linear regression (under the square loss) with an algorithm that differs slightly from the Bayesian one. Azoury and Warmuth (2001) rederive Vovk’s bound with a different analysis based on Bregman distances. Our work differs from these in that we consider Bayesian Gaussian linear regression, while previous work typically used more complex, cleverly devised algorithms which are either variants of a MAP procedure (as in Vovk, 2001) , or that involve other steps such as “clipping” predictions (as in Azoury and Warmuth, 2001) . These distinctions are discussed in more detail in Section 3.1. We should also note that when the loss function is the logloss, multiplicative weights algorithms are sometimes identical to Bayes rule with particular choices of the parameters (see Freund and Schapire, 1999) . 
Furthermore, Bayesian algorithms have been used in some online learning settings, such as the sleeping experts setting of Freund et al. (1997) and the online boolean prediction setting of Cesa-Bianchi et al. (1998). Ng and Jordan (2001) also analyzed an online Bayesian algorithm but assumed that the data generation process was not too different from the model prior. To our knowledge, there have been no studies of Bayesian generalized linear models in an adversarial online learning setting (though many variants have been considered as discussed above). We also examine maximum a posteriori (MAP) algorithms for both Gaussian linear regression (i.e., ridge regression) and for (regularized) logistic regression. These algorithms are often used in practice, particularly in logistic regression where Bayesian model averaging is computationally expensive, but the MAP algorithm requires only solving a convex problem. As expected, MAP algorithms are somewhat less competitive than full Bayesian model averaging, though not unreasonably so.

2 Bayesian Model Averaging

We now consider the Bayesian model averaging (BMA) algorithm and give a bound on its worst-case online loss. We start with some preliminaries. Let x ∈ Rn denote the inputs of a learning problem and y ∈ R the outputs. Consider a model from the generalized linear model family (see McCullagh and Nelder, 1989), that can be written p(y|x, θ) = p(y|θT x), where θ ∈ Rn are the parameters of our model (θT denotes the transpose of θ). Note that the predicted distribution of y depends only on θT x, which is linear in θ. For example, in the case of Gaussian linear regression, we have
$$p(y|x,\theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(\theta^T x - y)^2}{2\sigma^2}\right), \qquad (1)$$
where σ² is a fixed, known constant that is not a parameter of our model. In logistic regression, we would have
$$\log p(y|x,\theta) = y \log \frac{1}{1+\exp(-\theta^T x)} + (1-y) \log\left(1 - \frac{1}{1+\exp(-\theta^T x)}\right), \qquad (2)$$
where we assume y ∈ {0, 1}. Let S = {(x(1), y(1)), (x(2), y(2)), . . .
, (x(T ), y(T ))} be an arbitrary sequence of examples, possibly chosen by an adversary. We also use St to denote the subsequence consisting of only the first t examples. We assume throughout this paper that ||x(t)|| ≤ 1 (where || · || denotes the L2 norm). Assume that we are going to use a Bayesian algorithm to make our online predictions. Specifically, assume that we have a Gaussian prior on the parameters: p(θ) = N(θ; ⃗0, ν²In), where In is the n-by-n identity matrix, N(·; µ, Σ) is the Gaussian density with mean µ and covariance Σ, and ν² > 0 is some fixed constant governing the prior variance. Also, let
$$p_t(\theta) = p(\theta|S_t) = \frac{\left(\prod_{i=1}^{t} p(y^{(i)}|x^{(i)},\theta)\right) p(\theta)}{\int \left(\prod_{i=1}^{t} p(y^{(i)}|x^{(i)},\theta)\right) p(\theta)\, d\theta}$$
be the posterior distribution over θ given the first t training examples. We have that p0(θ) = p(θ) is just the prior distribution. On iteration t, we are given the input x(t), and our algorithm makes a prediction using the posterior distribution over the outputs:
$$p(y|x^{(t)}, S_{t-1}) = \int p(y|x^{(t)},\theta)\, p(\theta|S_{t-1})\, d\theta.$$
We are then given the true label y(t), and we suffer logloss −log p(y(t)|x(t), St−1). We define the cumulative loss of the BMA algorithm after T rounds to be
$$L_{BMA}(S) = \sum_{t=1}^{T} -\log p(y^{(t)}|x^{(t)}, S_{t-1}).$$
Importantly, note that even though the algorithm we consider is a Bayesian one, our theoretical results do not assume that the data comes from any particular probabilistic model. In particular, the data may be chosen by an adversary. We are interested in comparing against the loss of any “expert” that uses some fixed parameters θ ∈ Rn. Define ℓθ(t) = −log p(y(t)|x(t), θ), and let
$$L_{\theta}(S) = \sum_{t=1}^{T} \ell_{\theta}(t) = \sum_{t=1}^{T} -\log p(y^{(t)}|x^{(t)},\theta).$$
Sometimes, we also wish to compare against distributions over experts. Given a distribution Q over θ, define ℓQ(t) = ∫ −Q(θ) log p(y(t)|x(t), θ) dθ, and
$$L_Q(S) = \sum_{t=1}^{T} \ell_Q(t) = \int Q(\theta) L_{\theta}(S)\, d\theta.$$
This is the expected logloss incurred by a procedure that first samples some θ ∼ Q and then uses this θ for all its predictions.
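The gap between these two orders of averaging over θ can be checked numerically. A toy sketch (our own, with made-up values, assuming the Gaussian linear regression model of Equation 1 with n = 1): by Jensen's inequality, the logloss of the averaged prediction is never larger than the averaged logloss.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0          # fixed noise variance of the model in Eq. (1)
x, y = 0.7, 1.3       # a single (input, label) pair, n = 1

def log_lik(theta):
    """log p(y | x, theta) under the Gaussian linear regression model."""
    return -0.5 * np.log(2 * np.pi * sigma2) - (theta * x - y) ** 2 / (2 * sigma2)

thetas = rng.normal(1.0, 0.5, size=100_000)         # draws theta ~ Q
loss_Q = -log_lik(thetas).mean()                    # expectation of the logloss
loss_avg = -np.log(np.exp(log_lik(thetas)).mean())  # logloss of the averaged prediction
# Jensen's inequality: -log E[p] <= E[-log p], so loss_avg <= loss_Q
```

BMA performs the second kind of averaging, under the posterior pt−1 rather than a fixed Q.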
Here, the expectation is over the random θ, not over the sequence of examples. Note that the expectation is of the logloss, which is a different type of averaging than in BMA, which had the expectation and the log in the reverse order.

2.1 A Useful Variational Bound

The following lemma provides a worst-case bound on the loss incurred by Bayesian algorithms and will be useful for deriving our main result in the next section. A result very similar to this (for finite model classes) is given by Freund et al. (1997). For completeness, we prove the result here in its full generality, though our proof is similar to theirs. As usual, define $KL(q\|p) = \int q(\theta) \log \frac{q(\theta)}{p(\theta)}\, d\theta$.

Lemma 2.1: Let Q be any distribution over θ. Then for all sequences S,
$$L_{BMA}(S) \le L_Q(S) + KL(Q\|p_0).$$
Proof: Let Y = {y(1), . . . , y(T )} and X = {x(1), . . . , x(T )}. The chain rule of conditional probabilities implies that LBMA(S) = −log p(Y |X) and Lθ(S) = −log p(Y |X, θ). So
$$L_{BMA}(S) - L_Q(S) = -\log p(Y|X) + \int Q(\theta) \log p(Y|X,\theta)\, d\theta = \int Q(\theta) \log \frac{p(Y|X,\theta)}{p(Y|X)}\, d\theta.$$
By Bayes rule, we have that $p_T(\theta) = \frac{p(Y|X,\theta)\, p_0(\theta)}{p(Y|X)}$. Continuing,
$$= \int Q(\theta) \log \frac{p_T(\theta)}{p_0(\theta)}\, d\theta = \int Q(\theta) \log \frac{Q(\theta)}{p_0(\theta)}\, d\theta - \int Q(\theta) \log \frac{Q(\theta)}{p_T(\theta)}\, d\theta = KL(Q\|p_0) - KL(Q\|p_T).$$
Together with the fact that KL(Q||pT) ≥ 0, this proves the lemma. □

2.2 An Upper Bound for Generalized Linear Models

For the theorem that we shortly present, we need one new definition. Let fy(z) = −log p(y|θT x = z). Thus, fy(t)(θT x(t)) = ℓθ(t). Note that for linear regression (as defined in Equation 1), we have that for all y,
$$|f''_y(z)| = \frac{1}{\sigma^2}, \qquad (3)$$
and for logistic regression (as defined in Equation 2), we have that for y ∈ {0, 1}, |f″y(z)| ≤ 1.

Theorem 2.2: Suppose fy(z) is continuously differentiable. Let S be a sequence such that ||x(t)|| ≤ 1 and such that for some constant c, |f″y(t)(z)| ≤ c (for all z).
Then for all θ∗,
$$L_{BMA}(S) \le L_{\theta^*}(S) + \frac{1}{2\nu^2}\|\theta^*\|^2 + \frac{n}{2}\log\left(1 + \frac{Tc\nu^2}{n}\right). \qquad (4)$$
The ||θ∗||²/2ν² term can be interpreted as a penalty term from our prior. The log term is how fast our loss could grow in comparison to the best θ∗. Importantly, this extra loss is only logarithmic in T in this adversarial setting. This bound is almost identical to those provided by Vovk (2001), Azoury and Warmuth (2001), and Foster (1991) for the linear regression case (under the square loss); the only difference is that in their bounds, the last term is multiplied by an upper bound on y(t). In contrast, we require no bound on y(t) in the Gaussian linear regression case due to the fact that we deal with the logloss (also recall |f″y(z)| = 1/σ² for all y).

Proof: We use Lemma 2.1 with Q(θ) = N(θ; θ∗, ϵ²In) being a normal distribution with mean θ∗ and covariance ϵ²In. Here, ϵ² is a variational parameter that we later tune to get the tightest possible bound. Letting H(Q) = (n/2) log(2πeϵ²) be the entropy of Q, we have
$$KL(Q\|p_0) = \int Q(\theta) \log\left(\frac{1}{(2\pi)^{n/2}|\nu^2 I_n|^{1/2}} \exp\left(-\frac{1}{2\nu^2}\theta^T\theta\right)\right)^{-1} d\theta - H(Q)$$
$$= n\log\nu + \frac{1}{2\nu^2}\int Q(\theta)\,\theta^T\theta\, d\theta - \frac{n}{2} - n\log\epsilon = n\log\nu + \frac{1}{2\nu^2}\left(\|\theta^*\|^2 + n\epsilon^2\right) - \frac{n}{2} - n\log\epsilon. \qquad (5)$$
To prove the result, we also need to relate the error of LQ to that of Lθ∗. By taking a Taylor expansion of fy (assume y ∈ S), we have that
$$f_y(z) = f_y(z^*) + f'_y(z^*)(z - z^*) + \frac{f''_y(\xi(z))(z - z^*)^2}{2},$$
for some appropriate function ξ. Thus, if z is a random variable with mean z∗, we have
$$E_z[f_y(z)] = f_y(z^*) + f'_y(z^*)\cdot 0 + E_z\!\left[\frac{f''_y(\xi(z))(z-z^*)^2}{2}\right] \le f_y(z^*) + c\, E_z\!\left[\frac{(z-z^*)^2}{2}\right] = f_y(z^*) + \frac{c}{2}\mathrm{Var}(z).$$
Consider a single example (x, y). We can apply the argument above with z∗ = θ∗T x and z = θT x, where θ ∼ Q. Note that E[z] = z∗ since Q has mean θ∗. Also, Var(θT x) = xT(ϵ²In)x = ||x||²ϵ² ≤ ϵ² (because we previously assumed that ||x|| ≤ 1).
Thus, we have
$$E_{\theta\sim Q}[f_y(\theta^T x)] \le f_y(\theta^{*T} x) + \frac{c\epsilon^2}{2}.$$
Since ℓQ(t) = Eθ∼Q[fy(t)(θT x(t))] and ℓθ∗(t) = fy(t)(θ∗T x(t)), we can sum both sides from t = 1 to T to obtain
$$L_Q(S) \le L_{\theta^*}(S) + \frac{Tc}{2}\epsilon^2.$$
Putting this together with Lemma 2.1 and Equation 5, we find that
$$L_{BMA}(S) \le L_{\theta^*}(S) + \frac{Tc}{2}\epsilon^2 + n\log\nu + \frac{1}{2\nu^2}\left(\|\theta^*\|^2 + n\epsilon^2\right) - \frac{n}{2} - n\log\epsilon.$$
Finally, by choosing ϵ² = nν²/(n + Tcν²) and simplifying, Theorem 2.2 follows. □

2.3 A Lower Bound for Gaussian Linear Regression

The following lower bound shows that, for linear regression, no other prediction scheme is better than Bayes in the worst case (when our penalty term is ||θ∗||²). Here, we compare to an arbitrary predictive distribution q(y|x(t), St−1) for prediction at time t, which suffers an instant loss ℓq(t) = −log q(y(t)|x(t), St−1). In the theorem, ⌊·⌋ denotes the floor function.

Theorem 2.3: Let Lθ∗(S) be the loss under the Gaussian linear regression model using the parameter θ∗, and let ν² = σ² = 1. For any set of predictive distributions q(y|x(t), St−1), there exists an S with ||x(t)|| ≤ 1 such that
$$\sum_{t=1}^{T} \ell_q(t) \ge \inf_{\theta^*}\left(L_{\theta^*}(S) + \frac{1}{2}\|\theta^*\|^2\right) + \frac{n}{2}\log\left(1 + \frac{T}{n}\right).$$
Proof: (sketch) If n = 1 and if S is such that x(t) = 1, one can show the equality
$$L_{BMA}(S) = \inf_{\theta^*}\left(L_{\theta^*}(S) + \frac{1}{2}\|\theta^*\|^2\right) + \frac{1}{2}\log(1 + T).$$
Let Y = {y(1), . . . , y(T )} and X = {1, . . . , 1}. By the chain rule of conditional probabilities, LBMA(S) = −log p(Y |X) (where p is the Gaussian linear regression model), and q’s loss is $\sum_{t=1}^{T} \ell_q(t) = -\log q(Y|X)$. For any predictive distribution q that differs from p, there must exist some sequence S such that −log q(Y |X) is greater than −log p(Y |X) (since probabilities are normalized). Such a sequence proves the result for n = 1. The modification for n dimensions follows: S is broken into ⌊T/n⌋ subsequences where in every subsequence only one dimension has x(t)_k = 1 (and the other dimensions are set to 0). The result follows due to the additivity of the losses on these subsequences.
□

3 MAP Estimation

We now present bounds for MAP algorithms for both Gaussian linear regression (i.e., ridge regression) and logistic regression. These algorithms use the maximum θ̂t−1 of pt−1(θ) to form their predictive distribution p(y|x(t), θ̂t−1) at time t, as opposed to BMA’s predictive distribution of p(y|x(t), St−1). As expected, these bounds are weaker than for BMA, though perhaps not unreasonably so.

3.1 The Square Loss and Ridge Regression

Before we provide the MAP bound, let us first present the form of the posteriors and the predictions for Gaussian linear regression. Define
$$A_t = \frac{1}{\nu^2} I_n + \frac{1}{\sigma^2}\sum_{i=1}^{t} x^{(i)} x^{(i)T}, \qquad b_t = \sum_{i=1}^{t} x^{(i)} y^{(i)}.$$
We now have that
$$p_t(\theta) = p(\theta|S_t) = N(\theta;\, \hat\theta_t,\, \hat\Sigma_t), \qquad (6)$$
where θ̂t = At⁻¹bt and Σ̂t = At⁻¹. Also, the predictions at time t + 1 are given by
$$p(y^{(t+1)}|x^{(t+1)}, S_t) = N(y^{(t+1)};\, \hat y_{t+1},\, s^2_{t+1}), \qquad (7)$$
where ŷt+1 = θ̂tT x(t+1) and s²t+1 = x(t+1)T Σ̂t x(t+1) + σ². In contrast, the prediction of a fixed expert using parameter θ∗ would be
$$p(y^{(t)}|x^{(t)}, \theta^*) = N(y^{(t)};\, y^*_t,\, \sigma^2), \qquad (8)$$
where y∗t = θ∗T x(t). Now the BMA loss is:
$$L_{BMA}(S) = \sum_{t=1}^{T}\left[\frac{1}{2s_t^2}\left(y^{(t)} - \hat\theta_{t-1}^T x^{(t)}\right)^2 + \log\sqrt{2\pi s_t^2}\right] \qquad (9)$$
Importantly, note how Bayes is adaptively weighting the squared term with the inverse variances 1/s²t (which depend on the current observation x(t)). The logloss of using a fixed expert θ∗ is just:
$$L_{\theta^*}(S) = \sum_{t=1}^{T}\left[\frac{1}{2\sigma^2}\left(y^{(t)} - \theta^{*T} x^{(t)}\right)^2 + \log\sqrt{2\pi\sigma^2}\right] \qquad (10)$$
The MAP procedure (referred to as ridge regression) uses p(y|x(t), θ̂t−1), which has a fixed variance. Hence, the MAP loss is essentially the square loss, and we define it as such:
$$\tilde L_{MAP}(S) = \frac{1}{2}\sum_{t=1}^{T}\left(y^{(t)} - \hat\theta_{t-1}^T x^{(t)}\right)^2, \qquad \tilde L_{\theta^*}(S) = \frac{1}{2}\sum_{t=1}^{T}\left(y^{(t)} - \theta^{*T} x^{(t)}\right)^2, \qquad (11)$$
where θ̂t is the MAP estimate (see Equation 6).

Corollary 3.1: Let γ² = σ² + ν².
For all S such that ||x(t)|| ≤ 1 and for all θ∗, we have
$$\tilde L_{MAP}(S) \le \frac{\gamma^2}{\sigma^2}\, \tilde L_{\theta^*}(S) + \frac{\gamma^2}{2\nu^2}\|\theta^*\|^2 + \frac{\gamma^2 n}{2}\log\left(1 + \frac{T\nu^2}{\sigma^2 n}\right).$$
Proof: Using Equations (9, 10) and Theorem 2.2, we have
$$\sum_{t=1}^{T} \frac{1}{2s_t^2}\left(y^{(t)} - \hat\theta_{t-1}^T x^{(t)}\right)^2 \le \sum_{t=1}^{T} \frac{1}{2\sigma^2}\left(y^{(t)} - \theta^{*T} x^{(t)}\right)^2 + \frac{1}{2\nu^2}\|\theta^*\|^2 + \frac{n}{2}\log\left(1 + \frac{Tc\nu^2}{n}\right) + \sum_{t=1}^{T} \log\frac{\sqrt{2\pi\sigma^2}}{\sqrt{2\pi s_t^2}}.$$
Equations (6, 7) imply that σ² ≤ s²t ≤ σ² + ν². Using this, the result follows by noting that the last term is negative and by multiplying both sides of the equation by σ² + ν². □

We might have hoped that MAP were more competitive, in that the leading coefficient in front of the L̃θ∗(S) term in the bound would be 1 (similar to Theorem 2.2) rather than γ²/σ². Crudely, the reason that MAP is not as effective as BMA is that MAP does not take into account the uncertainty in its predictions—thus the squared terms cannot be reweighted to take variance into account (compare Equations 9 and 11). Some previous (non-Bayesian) algorithms did in fact have bounds with this coefficient being unity. Vovk (2001) provides such an algorithm, though this algorithm differs from MAP in that its predictions at time t are a nonlinear function of x(t) (it uses At instead of At−1 at time t). Foster (1991) provides a bound with this coefficient being 1 under more restrictive assumptions. Azoury and Warmuth (2001) also provide a bound with a coefficient of 1 by using a MAP procedure with “clipping.” (Their algorithm thresholds the prediction ŷt = θ̂t−1T x(t) if it is larger than some upper bound. Note that we do not assume any upper bound on y(t).) As the following lower bound shows, it is not possible for the MAP linear regression algorithm to have a coefficient of 1 for L̃θ∗(S) with a reasonable regret bound. A similar lower bound is in Vovk (2001), but it does not apply to our setting, where we have the additional constraint ||x(t)|| ≤ 1.

Theorem 3.2: Let γ² = σ² + ν².
There exists a sequence S with ||x(t)|| ≤ 1 such that
$$\tilde L_{MAP}(S) \ge \inf_{\theta^*}\left(\tilde L_{\theta^*}(S) + \frac{1}{2}\|\theta^*\|^2\right) + \Omega(T).$$
Proof: (sketch) Let S be a length T + 1 sequence, with n = 1, where for the first T steps, x(t) = 1/√T and y(t) = 1, and at T + 1, x(T+1) = 1 and y(T+1) = 0. Here, one can show that infθ∗(L̃θ∗(S) + ½||θ∗||²) = T/4 and L̃MAP(S) ≥ 3T/8, and the result follows. □

3.2 Logistic Regression

MAP estimation is often used for regularized logistic regression, since it requires only solving a convex program (while BMA has to deal with a high-dimensional integral over θ that is intractable to compute exactly). Letting θ̂t−1 be the maximum of the posterior pt−1(θ), define $L_{MAP}(S) = \sum_{t=1}^{T} -\log p(y^{(t)}|x^{(t)}, \hat\theta_{t-1})$. As with the square loss case, the bound we present is multiplicatively worse (by a factor of 4).

Theorem 3.3: In the logistic regression model with ν ≤ 0.5, we have that for all sequences S such that ||x(t)|| ≤ 1 and y(t) ∈ {0, 1}, and for all θ∗,
$$L_{MAP}(S) \le 4\, L_{\theta^*}(S) + \frac{2}{\nu^2}\|\theta^*\|^2 + 2n\log\left(1 + \frac{T\nu^2}{n}\right).$$
Proof: (sketch) Assume n = 1 (the general case is analogous). The proof consists of showing that ℓθ̂t−1(t) = −log p(y(t)|x(t), θ̂t−1) ≤ 4ℓBMA(t). Without loss of generality, assume y(t) = 1 and x(t) ≥ 0, and for convenience, we just write x instead of x(t). Now the BMA prediction is ∫ p(1|θ, x) pt−1(θ) dθ, and ℓBMA(t) is the negative log of this. Note that θ = ∞ gives probability 1 for y(t) = 1 (and this setting of θ minimizes the loss at time t). Since we do not have a closed form solution of the posterior pt−1, let us work with another distribution q(θ) in lieu of pt−1(θ) that satisfies certain properties. Define pq = ∫ p(1|θ, x) q(θ) dθ, which can be viewed as the prediction using q rather than the posterior. We choose q to be the rectification of the Gaussian N(θ; θ̂t−1, ν²In), such that there is positive probability only for θ ≥ θ̂t−1 (and the distribution is renormalized). With this choice, we first show that the loss of q, −log pq, is less than or equal to ℓBMA(t).
Then we complete the proof by showing that ℓˆθt−1(t) ≤−4 log pq, since −log pq ≤ℓBMA(t). Consider the q which maximizes pq subject to the following constraints: let q(θ) have its maximum at ˆθt−1; let q(θ) = 0 if θ < ˆθt−1 (intuitively, mass to the left of ˆθt−1 is just making the pq smaller); and impose the constraint that −(log q(θ))′′ ≥1/ν2. We now argue that for such a q, −log pq ≤ℓBMA(t). First note that due to the Gaussian prior p0, it is straightforward to show that −(log pt−1)′′(θ) ≥ 1 ν2 (the prior imposes some minimum curvature). Now if this posterior pt−1 were rectified (with support only for θ ≥ˆθt−1) and renormalized, then such a modified distribution clearly satisfies the aforementioned constraints, and it has loss less than the loss of pt−1 itself (since the rectification only increases the prediction). Hence, the maximizer, q, of pq subject to the constraints has loss less than that of pt−1, i.e. −log pq ≤ℓBMA(t). We now show that such a maximal q is the (renormalized) rectification of the Gaussian N(θ; ˆθt−1, ν2In), such that there is positive probability only for θ > ˆθt−1. Assume some other q2 satisfied these constraints and maximized pq. It cannot be that q2(ˆθt−1) < q(ˆθt−1), else one can show q2 would not be normalized (since with q2(ˆθt−1) < q(ˆθt−1), the curvature constraint imposes that this q2 cannot cross q). It also cannot be that q2(ˆθt−1) > q(ˆθt−1). To see this, note that normalization and curvature imply that q2 must cross pt only once. Now a sufficiently slight perturbation of this crossing point to the left, by shifting more mass from the left to the right side of the crossing point, would not violate the curvature constraint and would result in a new distribution with larger pq, contradicting the maximality of q2. Hence, we have that q2(ˆθt−1) = q(ˆθt−1). This, along with the curvature constraint and normalization, imply that the rectified Gaussian, q, is the unique solution. 
To complete the proof, we show ℓθ̂t−1(t) = −log p(1|x, θ̂t−1) ≤ −4 log pq. We consider two cases, θ̂t−1 < 0 and θ̂t−1 ≥ 0. We start with the case θ̂t−1 < 0. Using the boundedness of the derivative |∂ log p(1|x, θ)/∂θ| < 1 and the fact that q only has support for θ ≥ θ̂t−1, we have
$$p_q = \int \exp(\log p(1|x,\theta))\, q(\theta)\, d\theta \le \int \exp\left(\log p(1|x,\hat\theta_{t-1}) + \theta - \hat\theta_{t-1}\right) q(\theta)\, d\theta \le 1.6\, p(1|x,\hat\theta_{t-1}),$$
where we have used that $\int \exp(\theta - \hat\theta_{t-1})\, q(\theta)\, d\theta < 1.6$ (which can be verified numerically using the definition of q with ν ≤ 0.5). Now observe that for θ̂t−1 ≤ 0, we have the lower bound −log p(1|x, θ̂t−1) ≥ log 2. Hence, −log pq ≥ −log p(1|x, θ̂t−1) − log 1.6 ≥ (−log p(1|x, θ̂t−1))(1 − log 1.6/log 2) ≥ 0.3 ℓθ̂t−1(t), which shows ℓθ̂t−1(t) ≤ −4 log pq. Now for the case θ̂t−1 ≥ 0. Let σ be the sigmoid function, so p(1|x, θ) = σ(θx) and pq = ∫ σ(xθ) q(θ) dθ. Since the sigmoid is concave for θ > 0 and, for this case, q only has support from positive θ, we have that pq ≤ σ(x ∫ θ q(θ) dθ). Using the definition of q, we then have that pq ≤ σ(x(θ̂t−1 + ν)) ≤ σ(θ̂t−1 + ν), where the last inequality follows from θ̂t−1 + ν > 0 and x ≤ 1. Using properties of σ, one can show |(log σ)′(z)| < −log σ(z) (for all z). Hence, for all θ ≥ θ̂t−1, |(log σ)′(θ)| < −log σ(θ) ≤ −log σ(θ̂t−1). Using this derivative condition along with the previous bound on pq, we have that −log pq ≥ −log σ(θ̂t−1 + ν) ≥ (−log σ(θ̂t−1))(1 − ν) = ℓθ̂t−1(t)(1 − ν), which shows that ℓθ̂t−1(t) ≤ −4 log pq (since ν ≤ 0.5). This proves the claim when θ̂t−1 ≥ 0. □

Acknowledgments. We thank Dean Foster for numerous helpful discussions. This work was supported by the Department of the Interior/DARPA under contract NBCHD030010.

References
Azoury, K. S. and Warmuth, M. (2001). Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3).
Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D., Schapire, R., and Warmuth, M. (1997). How to use expert advice. J. ACM, 44.
Cesa-Bianchi, N., Helmbold, D., and Panizza, S.
(1998). On Bayes methods for on-line boolean prediction. Algorithmica, 22.
Dawid, A. (1984). Statistical theory: The prequential approach. J. Royal Statistical Society.
Foster, D. P. (1991). Prediction in the worst case. Annals of Statistics, 19.
Freund, Y. and Schapire, R. (1999). Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103.
Freund, Y., Schapire, R., Singer, Y., and Warmuth, M. (1997). Using and combining predictors that specialize. In STOC.
Grunwald, P. (2005). A tutorial introduction to the minimum description length principle.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models (2nd ed.). Chapman and Hall.
Ng, A. Y. and Jordan, M. (2001). Convergence rates of the voting Gibbs classifier, with application to Bayesian feature selection. In Proceedings of the 18th Int’l Conference on Machine Learning.
Vovk, V. (2001). Competitive on-line statistics. International Statistical Review, 69.
Analysis of a greedy active learning strategy Sanjoy Dasgupta∗ University of California, San Diego dasgupta@cs.ucsd.edu Abstract We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels. 1 Introduction An increasingly common phenomenon in classification tasks is that unlabeled data is abundant, whereas labels are considerably harder to come by. Genome sequencing projects, for instance, are producing vast numbers of peptide sequences, but reliably labeling even one of these with structural information requires time and close attention. This distinction between labeled and unlabeled data is not captured in standard models like the PAC framework, and has motivated the field of active learning, in which the learner is able to ask for the labels of specific points, but is charged for each label. These query points are typically chosen from an unlabeled data set, a practice called pool-based learning [10]. There has also been some work on creating query points synthetically, including a rich body of theoretical results [1, 2], but this approach suffers from two problems: first, from a practical viewpoint, the queries thus produced can be quite unnatural and therefore bewildering for a human to classify [3]; second, since these queries are not picked from the underlying data distribution, they might have limited value in terms of generalization. In this paper, we focus on pool-based learning. We are interested in active learning with generalization guarantees. Suppose the hypothesis class has VC dimension d and we want a classifier whose error rate on distribution P over the joint (input, label) space, is less than ϵ > 0. 
The theory tells us that in a supervised setting, we need some m = m(ϵ, d) labeled points drawn from P (for a fixed level of confidence, which we will henceforth ignore). Can we get away with substantially fewer than m labels if we are given unlabeled points from P and are able to adaptively choose which points to label? How much fewer, and what querying strategies should we follow? Here is a toy example illustrating the potential of active learning. Suppose the data lie on the real line, and the classifiers are simple thresholding functions, H = {hw : w ∈R}: hw(x) = 1(x ≥w). VC theory tells us that if the underlying distribution P can be classified perfectly by some hypothesis in H (called the realizable case), then it is enough to draw m = O(1/ϵ) random labeled examples from P, and to return any classifier consistent with them. But suppose we instead draw m unlabeled samples from P. If we lay these points down on the line, their hidden labels are a sequence of 0’s followed by a sequence of 1’s, and the goal is to discover the point w at which the transition occurs. This can be accomplished with a simple binary search which asks for just log m labels. Thus active learning gives us an exponential improvement in the number of labels needed: by adaptively querying log m labels, we can automatically infer the rest of them. Generalized binary search? So far we have only looked at an extremely simple learning problem. For more complicated hypothesis classes H, is a sort of a generalized binary search possible? What would the search space look like? For supervised learning, in the realizable case, the usual bounds specify a sample complexity of (very roughly) m ≈d/ϵ labeled points if the target error rate is ϵ. So let’s pick this many unlabeled points, and then try to find a hypothesis consistent with all the hidden labels by adaptively querying just a few of them. 
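The 1-d binary search described above is worth making concrete, since it is the ideal case against which the rest of the paper is measured. A minimal sketch (our own; `label` stands in for the charged query oracle):

```python
def active_threshold(xs, label):
    """Identify the first positive point among the sorted pool xs by binary
    search over label queries. Returns (index, #queries); index == len(xs)
    means every point is labeled 0."""
    lo, hi = 0, len(xs)          # invariant: the first 1 (if any) lies in [lo, hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if label(xs[mid]) == 1:
            hi = mid             # transition is at mid or earlier
        else:
            lo = mid + 1         # transition is strictly after mid
    return lo, queries

# m = 1000 unlabeled points; a hidden threshold w defines the labels.
xs = list(range(1000))
w = 617.5
idx, queries = active_threshold(xs, lambda x: int(x >= w))
# queries is about log2(m), i.e. roughly 10, versus m = 1000 passive labels
```

Once the transition index is found, the labels of all m points are inferred, which is exactly the exponential saving claimed above.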
We know via Sauer’s lemma that H can classify these m points (considered jointly) in at most O(md) different ways – in effect, the size of H is reduced to O(md). This finite set is the effective hypothesis class bH. (In the 1-d example, bH has size m + 1, corresponding to the intervals into which the points xi split the real line.) The most we can possibly learn about the target hypothesis, even if all labels are revealed, is to narrow it down to one of these regions. Is it possible to pick among these O(md) possibilities using o(m) labels? If binary search were possible, just O(d log m) labels would be needed. Unfortunately, we cannot hope for a generic positive result of this kind. The toy example above is a 1-d linear separator. We show that for d ≥2, the situation is very different: Pick any collection of m (unlabeled) points on the unit sphere in Rd, for d ≥2, and assume their hidden labels correspond perfectly to some linear separator. Then there are target hypotheses in bH which cannot be identified without querying all the labels. (What if the active learner is not required to identify exactly the right hypothesis, but something close to it? This and other little variations don’t help much.) Therefore, even in the most benign situations, we cannot expect that every target hypothesis will be identifiable using o(m) labels. To put it differently, in the worst case over target hypotheses, active learning gives no improvement in sample complexity. But hopefully, on average (with respect to some distribution over target hypotheses), the number of labels needed is small. For instance, when d = 2 in the bad case above, a target hypothesis chosen uniformly at random from bH can be identified by querying just O(log m) labels in expectation. This motivates the main model of this paper. An average-case model We will count the expected number of labels queried when the target hypothesis is chosen from some distribution π over bH. 
This can be interpreted as a Bayesian setting, but it is more accurate to think of π merely as a device for averaging query counts, which has no bearing on the final generalization bound. A natural choice is to make π uniform over bH. Most existing active learning schemes work with H rather than bH; but bH reflects the underlying combinatorial structure of the problem, and it can’t hurt to deal with it directly. Often π can be chosen to mask the structure of bH; for instance, if H is the set of linear separators, then bH is a set of convex regions of H, and π can be made proportional to the volume of each region. This makes the problem continuous rather than combinatorial. What is the expected number of labels needed to identify a target hypothesis chosen from π? In this average-case setting, is it always possible to get away with o(m) labels, where m is the sample complexity of the supervised learning problem as defined above? We show that the answer, once again, is sadly no. Thus the benefit of active learning is really a function of the specific hypothesis class and the particular pool of unlabeled data. Depending on these, the expected number of labels needed lies in the following range (within constants):

ideal case: d log m (perfect binary search)
worst case: m (all labels, or randomly chosen queries)

Notice the exponential gap between the top and bottom of this range. Is there some simple querying strategy which always achieves close to the minimum (expected) number of labels, whatever this minimum number might be? Our main result is that this property holds for a variant of a popular greedy scheme: always ask for the label which most evenly divides the current effective version space weighted by π. This doesn’t necessarily minimize the number of queries, just as a greedy decision tree algorithm need not produce trees of minimum size.
However: When π is uniform over bH, the expected number of labels needed by this greedy strategy is at most O(ln |bH|) times that of any other strategy. We also give a bound for arbitrary π, and show corresponding lower bounds in both the uniform and non-uniform cases. Variants of this greedy scheme underlie many active learning heuristics, and are often described as optimal in the literature. This is the first rigorous validation of the scheme in a general setting. The performance guarantee is significant: recall log |bH| = O(d log m), the minimum number of queries possible. 2 Preliminaries Let X be the input space, Y = {0, 1} the space of labels, and P an unknown underlying distribution over X × Y. We want to select a hypothesis (a function X → Y) from some class H of VC dimension d < ∞, which will accurately predict labels of points in X. We will assume that the problem is realizable, that is, there is some hypothesis in H which gives a correct prediction on every point. Suppose that points (x1, y1), . . . , (xm, ym) are drawn randomly from P. Standard bounds give us a function m(ϵ, d) such that if we want a hypothesis of error ≤ ϵ (on P, modulo some fixed confidence level), and if m ≥ m(ϵ, d), then we need only pick a hypothesis h ∈ H consistent with these labeled points [9]. Now suppose just the pool of unlabeled data x1, . . . , xm is available. The possible labelings of these points form a subset of {0, 1}^m, the effective hypothesis class bH ∼= {(h(x1), . . . , h(xm)) : h ∈ H}. Sauer’s lemma [9] tells us |bH| = O(m^d). We want to pick the unique h ∈ bH which is consistent with all the hidden labels, by querying just a few of them. Any deterministic search strategy can be represented as a binary tree whose internal nodes are queries (“what is xi’s label?”), and whose leaves are elements of bH. We can also accommodate randomization – for instance, to allow a random choice of query point – by letting internal nodes of the tree be random coin flips.
Our main result, Theorem 3, is unaffected by this generalization. Figure 1: To identify target hypotheses like L3, we need to see all the labels. 3 Some bad news Claim 1 Let H be the hypothesis class of linear separators in R^2. For any set of m distinct data points on the perimeter of the unit circle, there are always some target hypotheses in bH which cannot be identified without querying all m labels. Proof. To see this, consider the following realizable labelings (Figure 1): • Labeling L0: all points are negative. • Labeling Li (1 ≤ i ≤ m): all points are negative except xi. It is impossible to distinguish these cases without seeing all the labels.¹ Remark. To rephrase this example in terms of learning a linear separator with error ≤ ϵ, suppose the input distribution P(X) is a density over the perimeter of the unit circle. No matter what this density is, there are always target hypotheses in H which force us to ask for Ω(1/ϵ) labels: no improvement over the sample complexity of supervised learning. In this example, the bad target hypotheses have a large imbalance in probability mass between their positive and negative regions. By adding an extra dimension and an extra point, exactly the same example can be modified to make the bad hypotheses balanced. Let’s return to the original 2-d case. Some hypotheses must lie at depth m in any query tree; but what about the rest? Well, suppose for convenience that x1, . . . , xm are in clockwise order around the unit circle. Then bH = {hij : 1 ≤ i ≠ j ≤ m} ∪ {h0, h1}, where hij labels xi · · · xj−1 positive (if j < i it wraps around) and the remaining points negative, and h0, h1 are everywhere negative/positive. It is possible to construct a query tree in which each hij lies at depth ≤ 2(m/|j − i| + log |j − i|). Thus, if the target hypothesis is chosen uniformly from bH, the expected number of labels queried is at most (1/(m(m−1)+2)) · [2m + Σ_{i≠j} 2(m/|j−i| + log |j−i|)] = O(log m).
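The bound above is easy to evaluate numerically. The following sketch is our own check, not from the paper; it groups ordered pairs by distance d = |j − i| and takes base-2 logarithms as an assumption.

```python
# Evaluate (1/(m(m-1)+2)) * [2m + sum_{i != j} 2*(m/|j-i| + log|j-i|)]
# by grouping ordered pairs (i, j) by distance d = |j - i|; there are
# 2(m - d) such pairs for each d. The result grows roughly like log m.
import math

def avg_depth_bound(m):
    s = 2.0 * m  # the 2m term covers the two constant labelings h0, h1
    for d in range(1, m):
        pairs = 2 * (m - d)  # ordered pairs (i, j) with |j - i| = d
        s += pairs * 2.0 * (m / d + math.log2(d))
    return s / (m * (m - 1) + 2)

for m in [16, 64, 256, 1024]:
    print(m, round(avg_depth_bound(m), 1))
```

Even at m = 1024 the average depth stays in the tens, far below the worst-case depth m of the unlucky targets.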
This is why we place our hopes in an average-case analysis. ¹What if the final hypothesis – considered as a point in {0, 1}^m – doesn’t have to be exactly right, but within Hamming distance k of the correct one? Then a similar example forces Ω(m/k) queries. 4 Main result Let π be any distribution over bH; we will analyze search strategies according to the number of labels they require, averaged over target hypotheses drawn from π. In terms of query trees, this is the average depth of a leaf chosen according to π. Specifically, let T be any tree whose leaves include the support of π. The quality of this tree is Q(T, π) = Σ_{h∈bH} π(h) · (# labels needed for h) = Σ_{h∈bH} π(h) · leaf-depth(h). Is there always a tree of average depth o(m)? The answer, once again, is sadly no. Claim 2 Pick any d ≥ 2 and any m ≥ 2d. There is an input space X of size m and a hypothesis class H of VC dimension d, defined on domain X, with the following property: if π is chosen to be uniform over H = bH, then any query tree T has Q(T, π) ≥ m/8. Proof. Let X consist of any m points x1, . . . , xm, and let H consist of all hypotheses h : X → {0, 1} which are positive on exactly d inputs. In order to identify a particular element h ∈ H, any querying method must discover exactly the d points xi on which h is nonzero. By construction, the order in which queries are asked is irrelevant – it might as well be x1, x2, . . .. The rest is a simple probability calculation. In our average-case model, we have seen one example in which intelligent querying results in an exponential improvement in the number of labels required, and one in which it is no help at all. Is there some generic scheme which always comes close to minimizing the number of queries, whatever the minimum number might be? Here’s a natural candidate: Greedy strategy. Let S ⊆ bH be the current version space. For each unlabeled xi, let S_i^+ be the hypotheses which label xi positive and S_i^− the ones which label it negative.
Pick the xi for which these sets are most nearly equal in π-mass, that is, for which min{π(S_i^+), π(S_i^−)} is largest. We show this is almost as good at minimizing queries as any other strategy. Theorem 3 Let π be any distribution over bH. Suppose that the optimal query tree requires Q∗ labels in expectation, for target hypotheses chosen according to π. Then the expected number of labels needed by the greedy strategy is at most 4Q∗ ln(1/min_h π(h)). For the case of uniform π, the approximation ratio is thus at most 4 ln |bH|. We also show almost-matching lower bounds in both the uniform and non-uniform cases. 5 Analysis of the greedy active learner 5.1 Lower bounds on the greedy scheme The greedy approach is not optimal because it doesn’t take into account the way in which a query reshapes the search space – specifically, the effect of a query on the quality of other queries. For instance, bH might consist of several dense clusters, each of which permits rapid binary search. However, the version space must first be whittled down to one of these subregions, and this process, though ultimately optimal, might initially be slower at shrinking the hypothesis space than more shortsighted alternatives. A concrete example of this type gives rise to the following lower bound. Claim 4 For any n ≥ 16 which is a power of two, there is a concept class bH_n of size n such that: under uniform π, the optimal tree has average height at most q_n = Θ(log n), but the greedy active learning strategy produces a tree of average height Ω(q_n · log n / log log n). For non-uniform π, the greedy scheme can deviate more substantially from optimality. Claim 5 For any n ≥ 2, there is a hypothesis class bH with 2n + 1 elements and a distribution π over bH, such that: (a) π ranges in value from 1/2 to 1/2^{n+1}; (b) the optimal tree has average depth less than 3; (c) the greedy tree has average depth at least n/2. Proofs of these lower bounds appear in the full paper, available at the author’s website.
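For concreteness, the greedy rule of Section 4 can be written down directly when bH is small enough to enumerate. In the sketch below, hypotheses are tuples of labels for the m pool points; the 1-d threshold class and the uniform π are illustrative choices of ours, not the paper’s constructions.

```python
# Minimal sketch of the greedy strategy: at each step, query the point
# whose split of the current version space is most balanced in pi-mass,
# i.e. maximizes min{pi(S_i^+), pi(S_i^-)}, then keep only hypotheses
# consistent with the revealed label.

def greedy_query_count(bH, pi, target):
    m = len(target)
    S = list(bH)            # current version space
    queried = set()
    n_queries = 0
    while len(S) > 1:
        best_i, best_val = None, -1.0
        for i in range(m):
            if i in queried:
                continue
            pos = sum(pi[h] for h in S if h[i] == 1)
            neg = sum(pi[h] for h in S if h[i] == 0)
            if min(pos, neg) > best_val:
                best_i, best_val = i, min(pos, neg)
        queried.add(best_i)
        n_queries += 1
        S = [h for h in S if h[best_i] == target[best_i]]  # reveal the label
    return n_queries

# Toy pool: bH = all thresholds on 7 points (8 hypotheses), uniform pi.
m = 7
bH = [tuple(1 if j >= t else 0 for j in range(m)) for t in range(m + 1)]
pi = {h: 1.0 / len(bH) for h in bH}
print(greedy_query_count(bH, pi, target=bH[3]))  # 3 = log2(8) queries
```

On this balanced toy class the greedy learner happens to realize perfect binary search, needing 3 queries for every target; the lower bounds above show it can be far from optimal on other classes.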
5.2 Upper bound Overview. The lower bounds on the quality of a greedy learner are sobering, but things cannot get too much worse than this. Here’s the basic argument for uniform π: we show that if the optimal tree T∗ requires Q∗ queries in expectation, then some query must (again in expectation) “cut off” a chunk of bH of π-mass Ω(1/Q∗). Therefore, the root query of the greedy tree TG is at least this good (cf. Johnson’s set cover analysis [8]). Things get trickier when we try to show that the rest of TG is also good, because although T∗ uses just Q∗ queries on average, it may need many more queries for certain hypotheses. Subtrees of TG could correspond to version spaces for which more than Q∗ queries are needed, and the roots of these subtrees might not cut down the version space much... For a worst-case model, a proof of approximate optimality is known in a related context [6]; as we saw in Claim 1, that model is trivial in our situation. The average-case model, and especially the use of arbitrary weights π, require more care. Details. For want of space, we only discuss some issues that arise in proving the main theorem, and leave the actual proof to the full paper. The key concept we have to define is the quality of a query, and it turns out that we need this to be monotonically decreasing, that is, it should only go down as active learning proceeds and the version space shrinks. This rules out some natural entropy-based notions. Suppose we are down to some version space S ⊆ bH, and a possible next query is xj. If S+ is the subset of S which labels xj positive, and S− are the ones that label it negative, then on average the probability mass (measured by π) eliminated by xj is (π(S+)/π(S)) · π(S−) + (π(S−)/π(S)) · π(S+) = 2π(S+)π(S−)/π(S). We say xj shrinks (S, π) by this much, with the understanding that this is in expectation. Shrinkage is easily seen to have the monotonicity property we need.
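The monotonicity is easy to check numerically. The sketch below (a toy 1-d threshold class of our own choosing, not from the paper) computes the shrinkage 2π(S+)π(S−)/π(S) of each query and verifies that it never increases when the version space is cut down by revealing a label.

```python
# Sketch: shrinkage of query x_i on version space S is 2*pos*neg/(pos+neg),
# where pos and neg are the pi-masses of the two sides of the split.
# Since 2ab/(a+b) is increasing in each argument, shrinkage can only drop
# as S shrinks -- the monotonicity behind Lemma 6 below.

def shrinkage(S, pi, i):
    pos = sum(pi[h] for h in S if h[i] == 1)
    neg = sum(pi[h] for h in S if h[i] == 0)
    mass = pos + neg
    return 2 * pos * neg / mass if mass > 0 else 0.0

m = 7
bH = [tuple(1 if j >= t else 0 for j in range(m)) for t in range(m + 1)]
pi = {h: 1.0 / len(bH) for h in bH}

for i in range(m):
    full = shrinkage(bH, pi, i)
    # S ranges over version spaces obtained by revealing one label.
    for j in range(m):
        for y in (0, 1):
            S = [h for h in bH if h[j] == y]
            assert shrinkage(S, pi, i) <= full + 1e-12
print("monotonicity holds on this toy class")
```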
Lemma 6 If xj shrinks (bH, π) by ∆, then it shrinks (S, π) by at most ∆ for any S ⊆ bH. We would expect that if the optimal tree is short, there must be at least one query which shrinks (bH, π) considerably. More concretely, the definition of shrinkage seems to suggest that if all queries provide shrinkage at most ∆, and the current version space has mass π(S), then at least about π(S)/∆ more queries are needed. This isn’t entirely true, because of a second effect: if |S| = 2, then we need just one query, regardless of π(S). Roughly speaking, when there are lots of hypotheses with significant mass left in S, the first effect dominates; thereafter the second takes over. To smoothly incorporate both effects, we use the notion of collision probability. For a distribution ν over support Z, this is CP(ν) = Σ_{z∈Z} ν(z)², the chance that two random draws from ν are identical. Lemma 7 Suppose every query shrinks (bH, π) by at most ∆ > 0. Pick any S ⊆ bH, and any query tree T whose leaves include S. If πS is the restriction of π to S (that is, πS(h) = π(h)/π(S) for h ∈ S), then Q(T, πS) ≥ (1 − CP(πS)) · π(S)/∆. Corollary 8 Pick any S ⊆ bH and any tree T whose leaves include all of S. Then there must exist a query which shrinks (S, πS) by at least (1 − CP(πS))/Q(T, πS). So if the current version space S ⊆ bH is such that πS has small collision probability, some query must split off a sizeable chunk of S. This can form the basis of a proof by induction. But what if CP(πS) is large, say greater than 1/2? In this case, the mass of some particular hypothesis h0 ∈ S exceeds that of all the others combined, and S could shrink by just an insignificant amount during the subsequent greedy query, or even during the next few iterations of greedy queries. It turns out, however, that within roughly the number of iterations that the optimal tree needs for target h0, the greedy procedure will either reject h0 or identify it as the target.
If it is rejected, then by that time S will have shrunk considerably. By combining the two cases for CP(πS), we get the following lemma, which is proved in the full paper and yields our main theorem as an immediate consequence. Lemma 9 Let T∗ denote any particular query tree for π, and let T be the greedily constructed query tree. For any S ⊆ bH which corresponds to a subtree TS of T, Q(TS, πS) ≤ 4 Q(T∗, πS) ln(π(S)/min_{h∈S} π(h)). 6 Related work and promising directions Rather than attempting to summarize the wide range of proposed active learning methods, for instance [5, 7, 10, 13, 14], we will discuss three basic techniques upon which they rely. Greedy search. This is the technique we have abstracted and rigorously validated in this paper. It is the foundation of most of the schemes cited above. Algorithmically, the main problem is that the query selection rule is not immediately tractable, so approximations are necessary. For linear separators, bH consists of convex sets, and if π is chosen to be proportional to volume, query selection involves estimating volumes of convex regions, which is tractable but (using present techniques) inconvenient. Tong and Koller [13] investigate margin-based approximations which are efficiently computable using SVM technology. Opportunistic priors. This is a trick in which the learner takes a look at the unlabeled data and then places bets on hypotheses. A uniform bet over all of bH leads to standard generalization bounds. But if the algorithm places more weight on certain hypotheses (for instance, those with large margin), then its final error bound is excellent if it guessed right, and worse-than-usual if it guessed wrong. This technique is not specific to active learning, and has been analyzed elsewhere (eg. [12]). One interesting line of work investigates a flexible family of priors specified by pairwise similarities between data points, eg. [14]. Bayesian assumptions.
In our analysis, although π can be seen as some sort of prior belief, there is no assumption that nature shares this belief; in particular, the generalization bound does not depend on it. A Bayesian assumption has an immediate benefit for active learning: if at any stage the remaining version space (weighted by prior π) is largely in agreement on the unlabeled data, it is legitimate to stop and output one of these remaining hypotheses [7]. In a non-Bayesian setting this is not legitimate. When the hypothesis class consists of probabilistic classifiers, the Bayesian assumption has also been used in another way: to approximate the greedy selection rule using the MAP estimate instead of an expensive summation over the posterior (eg. [11]). In terms of theoretical results, another work which considers the tradeoff between labels and generalization error is [7], in which a greedy scheme, realized using sampling, is analyzed in a Bayesian setting. The authors show that it is possible to achieve an exponential improvement in the number of labels needed to learn linear separators, when both data and target hypothesis are chosen uniformly from the unit sphere. It is an intriguing question whether this holds for more general data distributions. Other directions. We have looked at the case where the acceptable error rate is fixed and the goal is to minimize the number of queries. What about fixing the number of queries and asking for the best (average) error rate possible? In other words, the query tree has a fixed depth, and each leaf is annotated with its remaining version space S ⊆ bH. Treating each element of S as a point in {0, 1}^m (its predictions on the pool of data), the error at this leaf depends on the Hamming diameter of S. What is a good querying strategy for producing low-diameter leaves? The most widely-used classifiers are perhaps linear separators. Existing active learning schemes ignore the rich algebraic structure of bH, an arrangement of hyperplanes [4].
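As a rough illustration of the volume-proportional π discussed under “Greedy search”, the sketch below (our own illustration; the pool, the sampling scheme, and all numbers are assumptions) estimates a distribution over bH for 2-d linear separators by sampling random halfplanes and recording the labelings they induce on a pool of points on the unit circle.

```python
# Illustrative sketch: estimate pi over bH for 2-d linear separators by
# sampling random hypotheses (a random normal direction and offset) and
# counting how often each labeling of the pool is induced. Each labeling's
# empirical frequency plays the role of pi(h).

import math, random

random.seed(0)
m = 8
# Unlabeled pool: m points on the unit circle.
pool = [(math.cos(2 * math.pi * i / m), math.sin(2 * math.pi * i / m))
        for i in range(m)]

counts = {}
trials = 20000
for _ in range(trials):
    theta = random.uniform(0, 2 * math.pi)  # random normal direction
    b = random.uniform(-1.5, 1.5)           # random offset
    w = (math.cos(theta), math.sin(theta))
    labeling = tuple(1 if w[0] * x + w[1] * y >= b else 0 for (x, y) in pool)
    counts[labeling] = counts.get(labeling, 0) + 1

pi = {h: n / trials for h, n in counts.items()}
# bH for m points on a circle has at most m(m-1) + 2 elements (Section 3).
print(len(pi), "labelings observed; bound:", m * (m - 1) + 2)
```

The number of distinct labelings observed stays within the m(m − 1) + 2 count from Section 3, since every halfplane marks a contiguous arc of the circle positive.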
Acknowledgements. I am very grateful to the anonymous NIPS reviewers for their careful and detailed feedback.

References
[1] D. Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988.
[2] D. Angluin. Queries revisited. Proceedings of the Twelfth International Conference on Algorithmic Learning Theory, pages 12–31, 2001.
[3] E.B. Baum and K. Lang. Query learning can work poorly when a human oracle is used. International Joint Conference on Neural Networks, 1992.
[4] A. Bjorner, M. Las Vergnas, B. Sturmfels, N. White, and G. Ziegler. Oriented matroids. Cambridge University Press, 1999.
[5] D. Cohn, Z. Ghahramani, and M. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129–145, 1996.
[6] S. Dasgupta, P.M. Long, and W.S. Lee. A theoretical analysis of query selection for collaborative filtering. Machine Learning, 51:283–298, 2003.
[7] Y. Freund, S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28:133–168, 1997.
[8] D.S. Johnson. Approximation algorithms for combinatorial problems. Journal of Computer and System Sciences, 9:256–278, 1974.
[9] M.J. Kearns and U.V. Vazirani. An introduction to computational learning theory. MIT Press, 1993.
[10] A. McCallum and K. Nigam. Employing EM and pool-based active learning for text classification. Fifteenth International Conference on Machine Learning, 1998.
[11] N. Roy and A. McCallum. Toward optimal active learning through sampling of error reduction. Twentieth International Conference on Machine Learning, 2003.
[12] J. Shawe-Taylor, P. Bartlett, R. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 1998.
[13] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, 2001.
[14] X. Zhu, J. Lafferty, and Z. Ghahramani. Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions. ICML workshop, 2003.
A Cost-Shaping LP for Bellman Error Minimization with Performance Guarantees

Daniela Pucci de Farias, Mechanical Engineering, Massachusetts Institute of Technology
Benjamin Van Roy, Management Science and Engineering and Electrical Engineering, Stanford University

Abstract We introduce a new algorithm based on linear programming that approximates the differential value function of an average-cost Markov decision process via a linear combination of pre-selected basis functions. The algorithm carries out a form of cost shaping and minimizes a version of Bellman error. We establish an error bound that scales gracefully with the number of states without imposing the (strong) Lyapunov condition required by its counterpart in [6]. We propose a path-following method that automates selection of important algorithm parameters which represent counterparts to the “state-relevance weights” studied in [6]. 1 Introduction Over the past few years, there has been a growing interest in linear programming (LP) approaches to approximate dynamic programming (DP). These approaches offer algorithms for computing weights to fit a linear combination of pre-selected basis functions to a dynamic programming value function. A control policy that is “greedy” with respect to the resulting approximation is then used to make real-time decisions. Empirically, LP approaches appear to generate effective control policies for high-dimensional dynamic programs [1, 6, 11, 15, 16]. At the same time, the strength and clarity of theoretical results about such algorithms have overtaken counterparts available for alternatives such as approximate value iteration, approximate policy iteration, and temporal-difference methods.
As an example, a result in [6] implies that, for a discrete-time finite-state Markov decision process (MDP), if the span of the basis functions contains the constant function and comes within a distance of ϵ of the dynamic programming value function then the approximation generated by a certain LP will come within a distance of O(ϵ). Here, the coefficient of the O(ϵ) term depends on the discount factor and the metric used for measuring distance, but not on the choice of basis functions. On the other hand, the strongest results available for approximate value iteration and approximate policy iteration only promise O(ϵ) error under additional requirements on iterates generated in the course of executing the algorithms [3, 13]. In fact, it has been shown that, even when ϵ = 0, approximate value iteration can generate a diverging sequence of approximations [2, 5, 10, 14]. In this paper, we propose a new LP for approximating optimal policies. We work with a formulation involving average cost optimization of a possibly infinite-state MDP. The fact that we work with this more sophisticated formulation is itself a contribution to the literature on LP approaches to approximate DP, which have been studied for the most part in finite-state discounted-cost settings. But we view as our primary contributions the proposed algorithms and theoretical results, which strengthen in important ways previous results on LP approaches and unify certain ideas in the approximate DP literature. In particular, highlights of our contributions include: 1. Relaxed Lyapunov Function dependence. Results in [6] suggest that – in order for the LP approach presented there to scale gracefully to large problems – a certain linear combination of the basis functions must be a “Lyapunov function,” satisfying a certain strong Lyapunov condition. The method and results in our current paper eliminate this requirement. 
Further, the error bound is strengthened because it alleviates an undesirable dependence on the Lyapunov function that appears in [6] even when the Lyapunov condition is satisfied. 2. Restart Distribution Selection. Applying the LP studied in [6] requires manual selection of a set of parameters called state-relevance weights. That paper illustrated the importance of a good choice and provided intuition on how one might go about making the choice. The LP in the current paper does not explicitly make use of state-relevance weights, but rather, an analog which we call a restart distribution, and we propose an automated method for finding a desirable restart distribution. 3. Relation to Bellman-Error Minimization. An alternative approach for approximate DP aims at minimizing “Bellman error” (this idea was first suggested in [16]). Methods proposed for this (e.g., [4, 12]) involve stochastic steepest descent of a complex nonlinear function. There are no results indicating whether a global minimum will be reached or guaranteeing that a local minimum attained will exhibit desirable behavior. In this paper, we explain how the LP we propose can be thought of as a method for minimizing a version of Bellman error. The important differences here are that our method involves solving a linear – rather than a nonlinear (and nonconvex) – program and that there are performance guarantees that can be made for the outcome. The next section introduces the problem formulation we will be working with. Section 3 presents the LP approximation algorithm and an error bound. In Section 4, we propose a method for computing a desirable reset distribution. The LP approximation algorithm works with a perturbed version of the MDP. Errors introduced by this perturbation are studied in Section 5. A closing section discusses relations to our prior work on LP approaches to approximate DP [6, 8]. 
2 Problem Formulation and Perturbation Via Restart Consider an MDP with a countable state space S and a finite set of actions A available at each state. Under a control policy u : S → A, the system dynamics are defined by a transition probability matrix P_u ∈ ℜ^{|S|×|S|}, where for policies u and ū and states x and y, (P_u)_{xy} = (P_ū)_{xy} if u(x) = ū(x). We will assume that, under each policy u, the system has a unique invariant distribution, given by π_u(x) = lim_{t→∞}(P_u^t)_{yx}, for all x, y ∈ S. A cost g(x, a) is associated with each state-action pair (x, a). For shorthand, given any policy u, we let g_u(x) = g(x, u(x)). We consider the problem of computing a policy that minimizes the average cost λ_u = π_u^T g_u. Let λ∗ = min_u λ_u and define the differential value function h∗(x) = min_u lim_{T→∞} E^u_x[Σ_{t=0}^T (g_u(x_t) − λ∗)]. Here, the superscript u of the expectation operator denotes the control policy and the subscript x denotes conditioning on x_0 = x. It is easy to show that there exists a policy u that simultaneously minimizes the expectation for every x. Further, a policy u∗ is optimal if and only if u∗(x) ∈ arg min_{a∈A}(g(x, a) + Σ_y p_a(x, y) h∗(y)) for all x ∈ S, writing p_a(x, y) for the probability of moving from x to y under action a. While in principle h∗ can be computed exactly by dynamic programming algorithms, this is often infeasible due to the curse of dimensionality. We consider approximating h∗ using a linear combination Σ_{k=1}^K r_k φ_k of fixed basis functions φ_1, . . . , φ_K : S → ℜ. In this paper, we propose and analyze an algorithm for computing weights r ∈ ℜ^K to approximate: h∗(x) ≈ Σ_{k=1}^K φ_k(x) r_k. It is useful to define a matrix Φ ∈ ℜ^{|S|×K} so that our approximation to h∗ can be written as Φr. The algorithm we will propose operates on a perturbed version of the MDP. The nature of the perturbation is influenced by two parameters: a restart probability (1 − α) ∈ [0, 1] and a restart distribution c over the state space. We refer to the new system as an (α, c)-perturbed MDP.
It evolves similarly to the original MDP, except that at each time, the state process restarts with probability 1 − α; in this event, the next state is sampled randomly according to c. Hence, the perturbed MDP has the same state space, action space, and cost function as the original one, but the transition matrices under each policy u are given by P_{α,u} = αP_u + (1 − α) e c^T, where e denotes the vector of ones. We define some notation that will streamline our discussion and analysis of perturbed MDPs. Let π_{α,u}(x) = lim_{t→∞}(P_{α,u}^t)_{yx}, λ_{α,u} = π_{α,u}^T g_u, and λ∗_α = min_u λ_{α,u}; let h∗_α be the differential value function for the (α, c)-perturbed MDP, and let u∗_α be a policy satisfying u∗_α(x) ∈ arg min_{a∈A}(g(x, a) + Σ_y p_{α,a}(x, y) h∗_α(y)) for all x ∈ S. Finally, we will make use of dynamic programming operators T_{α,u}h = g_u + P_{α,u}h and T_α h = min_u T_{α,u}h. 3 The New LP We now propose a new LP that approximates the differential value function of an (α, c)-perturbed MDP. This LP takes as input several pieces of problem data: 1. MDP parameters: g(x, a) and (P_u)_{xy} for all x, y ∈ S, a ∈ A, u : S → A. 2. Perturbation parameters: α ∈ [0, 1] and c : S → [0, 1] with Σ_x c(x) = 1. 3. Basis functions: Φ = [φ_1 · · · φ_K] ∈ ℜ^{|S|×K}. 4. Slack function and penalty: ψ : S → [1, ∞) and η > 0. We have defined all these terms except for the slack function and penalty, which we will explain after defining the LP. The LP optimizes decision variables r ∈ ℜ^K and s_1, s_2 ∈ ℜ according to

  minimize    s_1 + η s_2                                   (1)
  subject to  T_α Φr − Φr + s_1 1 + s_2 ψ ≥ 0,
              s_2 ≥ 0.

It is easy to see that this LP is feasible. Further, if η is sufficiently large, the objective is bounded. We assume that this is the case and denote an optimal solution by (r̃, s̃_1, s̃_2). Though the first |S| constraints are nonlinear, each involves a minimization over actions and therefore can be decomposed into |A| constraints. This results in a total of |S| × |A| + 1 constraints, which is unmanageable if the state space is large.
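The perturbation itself is easy to write down for a small finite MDP. The following sketch (the 3-state chain, the costs, and the restart distribution are made-up toy data, not from the paper) forms P_{α,u} = αP_u + (1 − α) e c^T for one policy and computes its invariant distribution by power iteration.

```python
# Sketch of the (alpha, c)-perturbation of Section 2: mix a policy's
# transition matrix with a restart to distribution c, then find the
# invariant distribution pi_{alpha,u} and average cost lambda_{alpha,u}.

def perturb(P, alpha, c):
    n = len(P)
    return [[alpha * P[x][y] + (1 - alpha) * c[y] for y in range(n)]
            for x in range(n)]

def stationary(P, iters=2000):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[x] * P[x][y] for x in range(n)) for y in range(n)]
    return pi

P_u = [[0.9, 0.1, 0.0],     # toy transition matrix under some policy u
       [0.0, 0.9, 0.1],
       [0.1, 0.0, 0.9]]
c = [1.0, 0.0, 0.0]         # restart always in state 0
g_u = [1.0, 0.0, 2.0]       # one-stage costs under this policy

P_au = perturb(P_u, alpha=0.9, c=c)
pi_au = stationary(P_au)
lam_au = sum(p * g for p, g in zip(pi_au, g_u))  # lambda_{alpha,u}
print([round(p, 3) for p in pi_au], round(lam_au, 3))
```

Power iteration converges here because the perturbed chain is irreducible and aperiodic; each row of P_{α,u} still sums to one since c is a probability distribution.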
We expect, however, that the solution to this LP can be approximated closely and efficiently through use of constraint sampling techniques along the lines discussed in [7]. We now offer an interpretation of the LP. The constraint T_α Φr − Φr − λ∗_α 1 ≥ 0 is satisfied if and only if Φr = h∗_α + κ1 for some κ ∈ ℜ. Terms (s_1 + λ∗_α)1 and s_2 ψ can be viewed as cost shaping. In particular, they effectively transform the costs g(x, a) to g(x, a) + s_1 + λ∗_α + s_2 ψ(x), so that the constraint T_α Φr − Φr − λ∗_α 1 ≥ 0 can be met. The LP can alternatively be viewed as an efficient method for minimizing a form of Bellman error, as we now explain. Suppose that s_2 = 0. Then, minimization of s_1 corresponds to minimization of ∥min(T_α Φr − Φr − λ∗_α 1, 0)∥_∞, which can be viewed as a measure of (one-sided) Bellman error. Measuring error with respect to the maximum norm is problematic, however, when the state space is large. In the extreme case, when there is an infinite number of states and an unbounded cost function, such errors are typically infinite and therefore do not provide a meaningful objective for optimization. This shortcoming is addressed by the slack term s_2 ψ. To understand its role, consider constraining s_1 to be −λ∗_α and minimizing s_2. This corresponds to minimization of ∥min(T_α Φr − Φr − λ∗_α 1, 0)∥_{∞,1/ψ}, where the norm is defined by ∥h∥_{∞,1/ψ} = max_x |h(x)|/ψ(x). This term can be viewed as a measure of Bellman error with respect to a weighted maximum norm, with weights 1/ψ(x). One important factor that distinguishes our LP from other approaches to Bellman error minimization [4, 12, 16] is a theoretical performance guarantee, which we now develop. For any r, let u_{α,r}(x) ∈ arg min_{a∈A}(g(x, a) + (P_{α,a} Φr)(x)), let π_{α,r} = π_{α,u_{α,r}}, and let λ_{α,r} = π_{α,r}^T g_{u_{α,r}}.
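To illustrate the Bellman-error view just described, here is a toy sketch (the 2-state, 2-action MDP, the grid search, and all numbers are our own assumptions; a real implementation would solve the LP directly). With s_2 fixed at zero, the smallest feasible s_1 for a given r is max_x (Φr − T_αΦr)(x), the one-sided Bellman error up to the constant λ∗_α offset, and we scan r for the value minimizing it.

```python
# Toy sketch of the Bellman-error interpretation of LP (1) with s2 = 0:
# s1(r) = max_x (Phi r - T_alpha Phi r)(x) is the smallest s1 satisfying
# T_alpha(Phi r) - Phi r + s1*1 >= 0; we minimize s1(r) over a grid of r.

alpha, c = 0.9, [0.5, 0.5]
g = {0: [1.0, 4.0], 1: [0.0, 3.0]}   # g[a][x]: cost of action a in state x
P = {0: [[0.8, 0.2], [0.3, 0.7]],    # P[a][x][y]: transitions under action a
     1: [[0.2, 0.8], [0.6, 0.4]]}
phi = [0.0, 1.0]                     # single basis function

def T_alpha(h):
    # Bellman operator of the (alpha, c)-perturbed MDP: min over actions.
    out = []
    for x in range(2):
        vals = []
        for a in (0, 1):
            nxt = sum((alpha * P[a][x][y] + (1 - alpha) * c[y]) * h[y]
                      for y in range(2))
            vals.append(g[a][x] + nxt)
        out.append(min(vals))
    return out

def s1_of(r):
    h = [r * p for p in phi]
    Th = T_alpha(h)
    return max(h[x] - Th[x] for x in range(2))

best_r = min((s1_of(k / 100.0), k / 100.0) for k in range(-500, 501))[1]
print("r minimizing the one-sided Bellman error:", best_r)
```

In the full LP the scan over r is unnecessary (the problem is linear in (r, s_1, s_2)), and the slack term s_2 ψ reweights the error by 1/ψ as described above.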
The following theorem establishes that the difference between the average cost λ_{α,r̃} associated with an optimal solution (r̃, s̃_1, s̃_2) to the LP and the optimal average cost λ∗_α is proportional to the minimal error that can be attained given the choice of basis functions. A proof of this theorem is provided in the appendix of a version of this paper available at http://www.stanford.edu/~bvr/psfiles/LPnips04.pdf. Theorem 3.1. If η ≥ (2 − α) π_{α,u∗_α}^T ψ then

  λ_{α,r̃} − λ∗_α ≤ ((1 + β) η max(θ, 1) / (1 − α)) · min_{r∈ℜ^K} ∥h∗_α − Φr∥_{∞,1/ψ},

where

  β = max_u ∥P_{α,u}∥_{∞,1/ψ} ≡ max_u sup_h ∥P_{α,u} h∥_{∞,1/ψ} / ∥h∥_{∞,1/ψ},
  θ = π_{α,r̃}^T (T_α Φr̃ − Φr̃ + s̃_1 1 + s̃_2 ψ) / c^T (T_α Φr̃ − Φr̃ + s̃_1 1 + s̃_2 ψ).

The bound suggests that the slack function ψ should be chosen so that the basis functions can offer a reasonably sized approximation error ∥h∗_α − Φr∥_{∞,1/ψ}. At the same time, this choice affects the sizes of η and β. The theorem requires that the penalty η be at least (2 − α) π_{α,u∗_α}^T ψ. The term π_{α,u∗_α}^T ψ is the steady-state expectation of the slack function under an optimal policy. Note that β ≤ max_u ∥P_{α,u} ψ∥_{∞,1/ψ} = max_{u,x} (P_{α,u} ψ)(x)/ψ(x), which is the maximal factor by which the expectation of ψ can increase over a single time period. When dealing with specific classes of problems it is often possible to select ψ so that the norm ∥h∗_α − Φr∥_{∞,1/ψ} as well as the terms max_u ∥P_{α,u}∥_{∞,1/ψ} and π_{α,u∗_α}^T ψ scale gracefully with the number of states and/or state variables. This issue will be addressed further in a forthcoming full-length version of this paper. It may sometimes be difficult to verify that any particular value of η dominates (2 − α) π_{α,u∗_α}^T ψ. One approach to selecting η is to perform a line search over possible values of η, solving an LP in each case, and choosing the value of η that results in the best-performing control policy. A simple line search algorithm solves the LP successively for η = 1, 2, 4, 8, . . ., until the optimal solution is such that s̃_2 = 0.
It is easy to show that the LP is unbounded for all η < 1, and that there is a finite threshold η̄ = inf{η : s̃_2 = 0} such that for each η ≥ η̄, the solution is identical and s̃_2 = 0. This search process delivers a policy that is at least as good as a policy generated by the LP for some η ∈ [(2 − α) π_{α,u∗_α}^T ψ, 2(2 − α) π_{α,u∗_α}^T ψ], and the upper bound of Theorem 3.1 would hold with η replaced by 2(2 − α) π_{α,u∗_α}^T ψ. We have discussed all but two terms involved in the bound: θ and 1/(1 − α). Note that if c = π_{α,r̃}, then θ = 1. In the next section, we discuss an approach that aims at choosing c to be close enough to π_{α,r̃} so that θ is approximately 1. In Section 5, we discuss how the reset probability 1 − α should be chosen in order to ensure that policies for the perturbed MDP offer similar performance when applied to the original MDP. This choice determines the magnitude of 1/(1 − α). 4 Fixed Points and Path Following The coefficient θ would be equal to 1 if c were equal to π_{α,r̃}. We cannot simply choose c to be equal to π_{α,r̃}, since π_{α,r̃} depends on r̃, an outcome of the LP which depends on c. Rather, arriving at a distribution c such that c = π_{α,r̃} is a fixed point problem. In this section, we explore a path-following algorithm for approximating such a fixed point [9], with the aim of arriving at a value of θ that is close to one. Consider solving a sequence indexed by i = 1, . . . , M of (α_i, c_i)-perturbed MDPs. Let r̃_i denote the weight vector associated with an optimal solution to the LP (1) with perturbation parameters (α_i, c_i). Let α_1 = 0 and α_{i+1} = α_i + δ for i ≥ 1, where δ is a small positive step size. For any initial choice of c_1, we have c_1 = π_{α_1,r̃_1}, since the system resets in every time period. For i ≥ 1, let c_{i+1} = π_{α_i,r̃_i}. One might hope that the change in c_i is gradual, and therefore, c_i ≈ π_{α_i,r̃_i} for each i. We cannot yet offer rigorous theoretical support for the proposed path-following algorithm.
However, we will present promising results from a simple computational experiment. This experiment involves a problem with continuous state and action spaces. Though our main result, Theorem 3.1, applies to problems with countable state spaces and finite action spaces, there is no reason why the LP cannot be applied to broader classes of problems such as the one we now describe. Consider a scalar state process x_{t+1} = x_t + a_t + w_t, driven by scalar actions a_t and a sequence w_t of i.i.d. zero-mean unit-variance normal random variables. Consider a cost function g(x, a) = (x − 2)² + a². We aim at approximating the differential value function using a single basis function φ(x) = x². Hence, (Φr)(x) = r x², with r ∈ ℜ. We will use a slack function ψ(x) = 1 + x² and penalty η = 5. The special structure of this problem allows for exact solution of the LP (1) as well as exact computation of the parameter θ, though we will not explain here how this is done. Figure 1 plots θ versus α, as α is increased from 0 to 0.99, with c initially set to a zero-mean normal distribution with variance 4. The three curves represent results from using three different step sizes δ ∈ {0.01, 0.005, 0.0025}. Note that in all cases θ is very close to 1. Smaller values of δ resulted in curves closer to 1: the lowest curve corresponds to δ = 0.01 and the highest curve corresponds to δ = 0.0025.

Figure 1: Evolution of θ with δ ∈ {0.01, 0.005, 0.0025}.

5 The Impact of Perturbation

Some simple algebra will show that for any policy u,

λ_{α,u} − λ_u = (1 − α) Σ_{t=0}^∞ α^t (c^T P_u^t g_u − π_u^T g_u).

When the state space is finite, |c^T P_u^t g_u − π_u^T g_u| decays at a geometric rate. This is also true in many practical contexts involving infinite state spaces. One might think of

m_u = Σ_{t=0}^∞ (c^T P_u^t g_u − π_u^T g_u)

as the mixing time of the policy u if the initial state is drawn according to the restart distribution c. This mixing time is finite if the differences c^T P_u^t g_u − π_u^T g_u converge geometrically.
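The identity relating the perturbed and unperturbed average costs is easy to verify numerically. A self-contained sketch on a toy 2-state chain (all numbers illustrative), truncating the sums at a horizon where the geometric terms are negligible:

```python
# Check lambda_{alpha,u} - lambda_u = (1 - alpha) sum_t alpha^t (c^T P^t g - pi^T g)
# on a toy 2-state chain under a fixed policy u.

P = [[0.9, 0.1],
     [0.2, 0.8]]      # transition matrix P_u
g = [1.0, 3.0]        # per-state cost g_u
c = [0.5, 0.5]        # restart distribution
alpha = 0.95
T = 5000              # truncation horizon (alpha^T is negligible)

def propagate(dist):
    """One step of distribution propagation: dist^T P."""
    return [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

# Stationary distribution pi by power iteration; lambda_u = pi^T g_u.
pi = c[:]
for _ in range(10000):
    pi = propagate(pi)
lam = pi[0] * g[0] + pi[1] * g[1]

# Perturbed average cost lambda_{alpha,u} = (1 - alpha) sum_t alpha^t c^T P^t g
# (the state restarts from c with probability 1 - alpha each period), and the
# right-hand side of the identity, accumulated together.
dist, lam_alpha, rhs = c[:], 0.0, 0.0
for t in range(T):
    ct_g = dist[0] * g[0] + dist[1] * g[1]
    lam_alpha += (1 - alpha) * alpha ** t * ct_g
    rhs += (1 - alpha) * alpha ** t * (ct_g - lam)
    dist = propagate(dist)

print(abs((lam_alpha - lam) - rhs) < 1e-9)  # the two sides agree
```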
Further, we have |λ_{α,u} − λ_u| = m_u (1 − α), and coming back to the LP, this implies that

λ_{u_{α,˜r}} − λ_{u*} ≤ λ_{α,˜r} − λ_{α,u*_α} + (1 − α)(m_{u_{α,˜r}} + max(m_{u*}, m_{u*_α})).

Combined with the bound of Theorem 3.1, this offers a performance bound for the policy u_{α,˜r} applied to the original MDP. Note that when c = π_{α,˜r}, in the spirit discussed in Section 4, we have m_{u_{α,˜r}} = 0. For simplicity, we will assume in the rest of this section that m_{u_{α,˜r}} = 0 and m_{u*} ≥ m_{u*_α}, so that

λ_{u_{α,˜r}} − λ_{u*} ≤ λ_{α,˜r} − λ_{α,u*_α} + (1 − α) m_{u*}.

Let us turn to discuss how α should be chosen. This choice must strike a balance between two factors: the coefficient of 1/(1 − α) in the bound of Theorem 3.1 and the loss of (1 − α) m_{u*} associated with the perturbation. One approach is to fix some ϵ > 0 that we are willing to accept as an absolute performance loss, and then choose α so that (1 − α) m_{u*} ≤ ϵ. Then, we would have 1/(1 − α) ≥ m_{u*}/ϵ. Note that the term 1/(1 − α) multiplying the right-hand side of the bound can then be thought of as a constant multiple of the mixing time of u*. An important open question is whether it is possible to design an approximate DP algorithm and establish for that algorithm an error bound that does not depend on the mixing time in this way.

6 Relation to Prior Work

In closing, it is worth discussing how our new algorithm and results relate to our prior work on LP approaches to approximate DP [6, 8]. If we remove the slack function by setting s₂ to zero and let s₁ = −(1 − α) c^T Φr, our LP (1) becomes

maximize   c^T Φr                                      (2)
subject to min_u (g_u + α P_u Φr) − Φr ≥ 0,

which is precisely the LP considered in [6] for approximating the optimal cost-to-go function in a discounted MDP with discount factor α. Let ˆr be an optimal solution to (2). For any function V : S ↦ ℜ₊, let β_V = α ‖max_u P_u V‖_{∞,1/V}. We call V a Lyapunov function if β_V < 1. The following result can be established using an analysis entirely analogous to that carried out in [6]:

Theorem 6.1. Suppose β_{Φv} < 1 and Φv′ = 1 for some v, v′ ∈ ℜ^K.
Then

λ_{α,ˆr} − λ*_α ≤ (2 θ c^T Φv / (1 − β_{Φv})) min_{r∈ℜ^K} ‖h*_α − Φr‖_{∞,1/Φv}.

A comparison of Theorems 3.1 and 6.1 reveals benefits afforded by the slack function. We consider the situation where ψ = Φv, which makes the bounds directly comparable. An immediate observation is that, even though ψ and Φv play analogous roles in the bounds, ψ is not required to be a Lyapunov function. In this sense, Theorem 3.1 is stronger than Theorem 6.1. Moreover, if η = π_{α,u*_α}^T ψ, we have

η / (1 − α) = c^T (I − α P_{u*_α})^{−1} ψ ≤ max_u c^T (I − α P_u)^{−1} Φv ≤ c^T Φv / (1 − β_{Φv}).

Hence, the last term, which appears in the bound of Theorem 6.1, grows with the largest mixing time among all policies, whereas the first term, which appears in the bound of Theorem 3.1, only depends on the mixing time of an optimal policy. As discussed in [6], an appropriate choice of c, there referred to as the state-relevance weights, can be important for the error bound of Theorem 6.1 to scale well with the number of states. In [8], it is argued that some form of weighting of states in terms of a metric of relevance should continue to be important when considering average-cost problems. An LP-based algorithm is also presented in [8], but the results are far weaker than the ones we have presented in this paper, and we suspect that the LP-based algorithm of [8] will not scale well to high-dimensional problems. Some guidance is offered in [6] regarding how c might be chosen. However, this is ultimately left as a manual task. An important contribution of this paper is the path-following algorithm proposed in Section 4, which aims at automating an effective choice of c.

Acknowledgments

This research was supported in part by the NSF under CAREER Grant ECS9985229 and by the ONR under grant MURI N00014-00-1-0637.

References

[1] D. Adelman, “A Price-Directed Approach to Stochastic Inventory/Routing,” preprint, 2002, to appear in Operations Research. [2] L. C.
Baird, “Residual Algorithms: Reinforcement Learning with Function Approximation,” ICML, 1995. [3] D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, Athena Scientific, Belmont, MA, 1996. [4] D. P. Bertsekas, Dynamic Programming and Optimal Control, second edition, Athena Scientific, Belmont, MA, 2001. [5] J. A. Boyan and A. W. Moore, “Generalization in Reinforcement Learning: Safely Approximating the Value Function,” NIPS, 1995. [6] D. P. de Farias and B. Van Roy, “The Linear Programming Approach to Approximate Dynamic Programming,” Operations Research, Vol. 51, No. 6, November-December 2003, pp. 850-865. Preliminary version appeared in NIPS, 2001. [7] D. P. de Farias and B. Van Roy, “On Constraint Sampling in the Linear Programming Approach to Approximate Dynamic Programming,” Mathematics of Operations Research, Vol. 29, No. 3, 2004, pp. 462–478. [8] D. P. de Farias and B. Van Roy, “Approximate Linear Programming for Average-Cost Dynamic Programming,” NIPS, 2003. [9] C. B. Garcia and W. I. Zangwill, Pathways to Solutions, Fixed Points, and Equilibria, Prentice-Hall, Englewood Cliffs, NJ, 1981. [10] G. J. Gordon, “Stable Function Approximation in Dynamic Programming,” ICML, 1995. [11] C. Guestrin, D. Koller, R. Parr, and S. Venkataraman, “Efficient Solution Algorithms for Factored MDPs,” Journal of Artificial Intelligence Research, Volume 19, 2003, pp. 399-468. Preliminary version appeared in NIPS, 2001. [12] M. E. Harmon, L. C. Baird, and A. H. Klopf, “Advantage Updating Applied to a Differential Game,” NIPS, 1995. [13] R. Munos, “Error Bounds for Approximate Policy Iteration,” ICML, 2003. [14] J. N. Tsitsiklis and B. Van Roy, “Feature-Based Methods for Large Scale Dynamic Programming,” Machine Learning, Vol. 22, 1996, pp. 59-94. [15] D. Schuurmans and R. Patrascu, “Direct Value Approximation for Factored MDPs,” NIPS, 2001. [16] P. J. Schweitzer and A.
Seidmann, “Generalized Polynomial Approximation in Markovian Decision Processes,” Journal of Mathematical Analysis and Applications, Vol. 110, 1985, pp. 568-582.
|
2004
|
154
|
2,568
|
Validity estimates for loopy Belief Propagation on binary real-world networks Joris Mooij Dept. of Biophysics, Inst. for Neuroscience, Radboud Univ. Nijmegen 6525 EZ Nijmegen, the Netherlands j.mooij@science.ru.nl Hilbert J. Kappen Dept. of Biophysics, Inst. for Neuroscience, Radboud Univ. Nijmegen 6525 EZ Nijmegen, the Netherlands b.kappen@science.ru.nl Abstract We introduce a computationally efficient method to estimate the validity of the BP method as a function of graph topology, the connectivity strength, frustration and network size. We present numerical results that demonstrate the correctness of our estimates for the uniform random model and for a real-world network (“C. Elegans”). Although the method is restricted to pair-wise interactions, no local evidence (zero “biases”) and binary variables, we believe that its predictions correctly capture the limitations of BP for inference and MAP estimation on arbitrary graphical models. Using this approach, we find that BP always performs better than MF. Especially for large networks with broad degree distributions (such as scale-free networks) BP turns out to significantly outperform MF. 1 Introduction Loopy Belief Propagation (BP) [1] and its generalizations (such as the Cluster Variation Method [2]) are powerful methods for inference and optimization. As is well-known, BP is exact on trees, but also yields surprisingly good results for many other graphs that arise in real-world applications [3, 4]. On the other hand, for densely connected graphs with high interaction strengths the results can be quite bad or BP can simply fail to converge. Despite the fact that BP is often used in applications nowadays, a good theoretical understanding of its convergence properties and the quality of the approximation is still lacking (except for the very special case of graphs with a single loop [5]). 
In this article we attempt to answer the question in what way the quality of the BP results depends on the topology of the underlying graph (looking at structural properties such as short cycles and large “hubs”) and on the interaction potentials (i.e. strength and frustration). We do this for the special but interesting case of binary networks with symmetric pairwise potentials (i.e. Boltzmann machines) without local evidence. This has the practical advantage that analytical calculations are feasible; furthermore, we believe that adding local evidence will only serve to extend the domain of convergence, implying this to be the worst-case scenario. We compare the results with those of the variational mean-field (MF) method. Real-world graphs are often far from uniformly random and possess structure such as clustering and power-law degree distributions [6]. Since we expect these structural features to arise in many applications of BP, we focus in this article on graphs modeling this kind of features. In particular, we consider Erdős-Rényi uniform random graphs [7], Barabási-Albert “scale-free” graphs [8], and the neural network of a widely studied worm, the Caenorhabditis elegans. This paper is organized as follows. In the next section we describe the class of graphical models under investigation and explain our method to efficiently estimate the validity of BP and MF. In section 3 we give a qualitative discussion of how the connectivity strength and frustration generally govern the model behavior and discuss the relevant regimes of the model parameters. We show for uniform random graphs that our validity estimates are in very good agreement with the real behavior of the BP algorithm. In section 4 we study the influence of graph topology. Thanks to the numerical efficiency of our estimation method we are able to study very large (N ∼ 10000) networks, for which it would not be feasible to simply run BP and look what happens.
We also try our method on the neural network of the worm C. Elegans and find almost perfect agreement of our predictions with observed BP behavior. We conclude that BP is always better than MF and that the difference is particularly striking for the case of large networks with broad degree distributions such as scale-free graphs.

2 Model, paramagnetic solution and stability analysis

Let G = (V, B) be an undirected labelled graph without self-connections, defined by a set of nodes V = {1, . . . , N} and a set of links B ⊆ {(i, j) | 1 ≤ i < j ≤ N}. The adjacency matrix corresponding to G is denoted M and defined as follows: M_ij := 1 if (ij) ∈ B or (ji) ∈ B, and 0 otherwise. We denote the set of neighbors of node i ∈ V by N_i := {j ∈ V | (ij) ∈ B} and its degree by d_i := #(N_i). We define the average degree d := (1/N) Σ_{i∈V} d_i and the maximum degree ∆ := max_{i∈V} d_i. To each node i we associate a binary random variable x_i taking values in {−1, +1}. Let W be a symmetric N × N matrix defining the strength of the links between the nodes. The probability distribution over configurations x = (x_1, . . . , x_N) is given by

P(x) := (1/Z) ∏_{(ij)∈B} e^{W_ij x_i x_j} = (1/Z) ∏_{i,j∈V} e^{(1/2) M_ij W_ij x_i x_j}    (1)

with Z a normalization constant. We will take the weight matrix W to be random, with i.i.d. entries {W_ij}_{1≤i<j≤N} distributed according to the Gaussian law with mean J_0 and variance J². For this model, instead of using the single-node and pair-wise beliefs b_i(x_i) resp. b_ij(x_i, x_j), it turns out to be more convenient to use the (equivalent) quantities m := {m_i}_{i∈V} and ξ := {ξ_ij}_{(ij)∈B}, defined by:

m_i := b_i(+1) − b_i(−1);
ξ_ij := b_ij(+1, +1) − b_ij(+1, −1) − b_ij(−1, +1) + b_ij(−1, −1).

We will use these throughout this paper. We call the m_i magnetizations; note that the expectation values E x_i vanish because of the symmetry in the probability distribution (1).
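The symmetry claim can be checked by brute force on a small instance: flipping all spins x → −x leaves (1) unchanged, so every E x_i is exactly zero. A minimal sketch, with arbitrary illustrative parameters:

```python
import itertools
import math
import random

# Brute-force check that the pairwise model P(x) ∝ exp(sum W_ij x_i x_j)
# with no local fields has zero magnetizations: we enumerate all 2^N
# configurations of a small fully connected random instance.

N = 6
rng = random.Random(1)
J0, J = 0.3, 0.5
W = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        W[i][j] = W[j][i] = rng.gauss(J0, J)  # W_ij ~ Normal(J0, J^2)

def unnorm(x):
    """Unnormalized probability of a configuration x in {-1, +1}^N."""
    return math.exp(sum(W[i][j] * x[i] * x[j]
                        for i in range(N) for j in range(i + 1, N)))

configs = list(itertools.product([-1, +1], repeat=N))
Z = sum(unnorm(x) for x in configs)
mags = [sum(unnorm(x) * x[i] for x in configs) / Z for i in range(N)]
print(max(abs(m) for m in mags))  # zero up to floating-point round-off
```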
As is well-known [2, 9], fixed points of BP correspond to stationary points of the Bethe free energy, which is in this case given by

F_Be(m, ξ) := − Σ_{(ij)∈B} W_ij ξ_ij + Σ_{i=1}^N (1 − d_i) Σ_{x_i=±1} η((1 + m_i x_i)/2)
              + Σ_{(ij)∈B} Σ_{x_i,x_j=±1} η((1 + m_i x_i + m_j x_j + x_i x_j ξ_ij)/4)

with η(x) := x log x. Note that with this parameterization all normalization and overlap constraints (i.e. Σ_{x_j} b_ij(x_i, x_j) = b_i(x_i)) are satisfied by construction [10]. We can minimize the Bethe free energy analytically by setting its derivatives to zero; one then immediately sees that a possible solution of the resulting equations is the paramagnetic¹ solution: m_i = 0 and ξ_ij = tanh W_ij (for (ij) ∈ B). For this solution to be a minimum (instead of a saddle point or maximum), the Hessian of F_Be at that point should be positive-definite. This condition turns out to be equivalent to the following Bethe stability matrix

(A_Be)_ij := δ_ij (1 + Σ_{k∈N_i} ξ_ik² / (1 − ξ_ik²)) − M_ij ξ_ij / (1 − ξ_ij²)    (with ξ_ij = tanh W_ij)    (2)

being positive-definite. Whether this is the case obviously depends on the values of the weights W_ij and the adjacency matrix M. Since for zero weights (W = 0) the stability matrix is just the identity matrix, the paramagnetic solution is a minimum of the Bethe free energy for small values of the weights W_ij. The question of what “small” exactly means in terms of J and J_0, and how this relates to the graph topology, will be taken on in the next two sections. First we discuss the situation for the mean-field variational method. The mean-field free energy F_MF(m) only depends on m; we can set its derivatives to zero, which again yields the paramagnetic solution m = 0. The corresponding stability matrix (equal to the Hessian) is given by (A_MF)_ij := δ_ij − W_ij M_ij and should be positive-definite for the paramagnetic solution to be stable. One can prove [11] that A_Be is positive-definite whenever A_MF is positive-definite.
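A positive-definiteness check of both stability matrices is straightforward to sketch. The example below (a ring graph with uniform coupling, chosen purely for illustration) builds A_Be from (2) and A_MF, tests positive-definiteness via an attempted Cholesky factorization, and exhibits a coupling strength at which A_Be is positive-definite while A_MF is not, consistent with the one-way implication quoted from [11].

```python
import math

def cholesky_pd(A, eps=1e-12):
    """Return True iff the symmetric matrix A is positive-definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= eps:
                    return False  # factorization fails: not PD
                L[i][i] = math.sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    return True

def bethe_stability(M, W):
    """Bethe stability matrix (2), with xi_ij = tanh(W_ij)."""
    n = len(M)
    xi = [[math.tanh(W[i][j]) for j in range(n)] for i in range(n)]
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0 + sum(M[i][k] * xi[i][k] ** 2 / (1 - xi[i][k] ** 2)
                            for k in range(n))
        for j in range(n):
            if j != i and M[i][j]:
                A[i][j] = -xi[i][j] / (1 - xi[i][j] ** 2)
    return A

def mf_stability(M, W):
    """Mean-field stability matrix: delta_ij - W_ij M_ij."""
    n = len(M)
    return [[(1.0 if i == j else 0.0) - M[i][j] * W[i][j]
             for j in range(n)] for i in range(n)]

def ring(n, w):
    """Cycle graph on n nodes with uniform coupling w (illustrative only)."""
    M = [[0] * n for _ in range(n)]
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        j = (i + 1) % n
        M[i][j] = M[j][i] = 1
        W[i][j] = W[j][i] = w
    return M, W

# Weak coupling: both PD. Stronger coupling: only the Bethe matrix stays PD.
print(cholesky_pd(bethe_stability(*ring(8, 0.6))),
      cholesky_pd(mf_stability(*ring(8, 0.6))))
```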
Since the exact magnetizations are zero, we conclude that the Bethe approximation is better than the mean-field approximation for all possible choices of the weights W. As we will see later on, this difference can become quite large for large networks.

3 Weight dependence

The behavior of the graphical model depends critically on the parameters J_0 and J. Taking the graph topology to be uniformly random (see also subsection 4.1), we recover the model known in the statistical physics community as the Viana-Bray model [12], which has been thoroughly studied and is quite well-understood. In the limit N → ∞, there are different relevant regimes (“phases”) for the parameters J and J_0 to be distinguished (cf. Fig. 1):

• The paramagnetic phase, where the magnetizations all vanish (m = 0), valid for J and J_0 both small.
• The ferromagnetic phase, where two configurations (characterized by all magnetizations being either positive or negative) each get half of the probability mass. This is the phase occurring for large J_0.
• The spin-glass phase, where the probability mass is distributed over exponentially (in N) many different configurations. This phase occurs for frustrated weights, i.e. for large J.

¹Throughout this article, we will use terminology from statistical physics if there is no good corresponding terminology available in the field of machine learning.

Figure 1: (a) BP convergence behavior; (b) stability of the m = 0 solution. Empirical regime boundaries for the ER graph model with N = 100 and d = 20, averaged over three instances; expectation values are shown as thick black lines, standard deviations are indicated by the gray areas. Regions in (a): convergence to m = 0, convergence to ferromagnetic solutions, no convergence; regions in (b): m = 0 stable (paramagnetic phase), m = 0 stable (spin-glass phase), m = 0 unstable (ferromagnetic phase), marginal instability. See the main text for additional explanation. The exact location of the boundary between the spin-glass and ferromagnetic phase in the right-hand plot (indicated by the dashed line) was not calculated. The red dash-dotted line shows the stability boundary for MF.

Consider now the right-hand plot in Fig. 1. Here we have plotted the different regimes concerning the stability of the paramagnetic solution of the Bethe approximation.² We find that the m = 0 solution is indeed stable for J and J_0 small and becomes unstable at some point when J_0 increases. This signals the paramagnetic-ferromagnetic phase transition. The location is in good agreement with the known phase boundary found for the N → ∞ limit by advanced statistical physics methods, as we show in more detail in [11]. For comparison we have also plotted the stability boundary for MF (the red dash-dotted line). Clearly, the mean-field approximation breaks down much earlier than the Bethe approximation and is unable to capture the phase transitions occurring for large connectivity strengths. The boundary between the spin-glass phase and the paramagnetic phase is more subtle. What happens is that the Bethe stability matrix becomes marginally stable at some point when we increase J, i.e. the minimum eigenvalue of A_Be approaches zero (in the limit N → ∞). This means that the Bethe free energy becomes very flat at that point. If we go on increasing J, the m = 0 solution becomes stable again (in other words, the minimum eigenvalue of the stability matrix A_Be becomes positive again). We interpret the marginal instability as signalling the onset of the spin-glass phase. Indeed it coincides with the known phase boundary for the Viana-Bray model [11, 12]. We observe a similar marginal instability for other graph topologies. Now consider the left-hand plot, Fig. 1(a). It shows the convergence behavior of the BP algorithm, which was determined by running BP with a fixed number of maximum iterations and slight damping.
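BP runs of this kind can be sketched compactly for this model class. With zero local fields, the messages reduce to cavity fields h_{i→j} with update h_{i→j} ← Σ_{k∈N_i\j} atanh(tanh(W_ik) tanh(h_{k→i})). A minimal, illustrative implementation with random initialization and slight damping (the graph and parameter values are not those of the figure):

```python
import math
import random

def run_bp(edges, W, n_iter=1000, damping=0.1, tol=1e-9, seed=0):
    """Damped loopy BP on a binary pairwise model with zero local fields.

    Messages are cavity fields h[(i, j)] from node i to neighbor j.
    Returns (converged, h).
    """
    rng = random.Random(seed)
    neighbors = {}
    for i, j in edges:
        neighbors.setdefault(i, []).append(j)
        neighbors.setdefault(j, []).append(i)
    h = {(i, j): rng.uniform(-1, 1)
         for i in neighbors for j in neighbors[i]}
    for _ in range(n_iter):
        delta = 0.0
        for (i, j), old in list(h.items()):
            new = sum(math.atanh(math.tanh(W[frozenset((k, i))])
                                 * math.tanh(h[(k, i)]))
                      for k in neighbors[i] if k != j)
            h[(i, j)] = (1 - damping) * new + damping * old
            delta = max(delta, abs(h[(i, j)] - old))
        if delta < tol:
            return True, h
    return False, h

# 4-cycle with weak uniform couplings: BP converges to the paramagnetic
# fixed point h = 0, i.e. m = 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
W = {frozenset(e): 0.2 for e in edges}
conv, h = run_bp(edges, W)
print(conv, max(abs(v) for v in h.values()) < 1e-6)
```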
The messages were initialized randomly. We find different regimes, separated by the boundaries shown in the plot. For small J and J_0, BP converges to m = 0. For J_0 large enough, BP converges to one of the two ferromagnetic solutions (which one is determined by the random initial conditions). For large J, BP does not converge within 1000 iterations, indicating a complex probability distribution. The boundaries coincide within statistical precision with those in the right-hand plot, which were obtained by the stability analysis.

²Although in Fig. 1 we show only one particular graph topology, the general appearance of these plots does not differ much for other graph topologies, especially for large N. The scale of the plots mostly depends on the network size N and the average degree d, as we will show in the next section.

The computation time necessary for producing a plot such as Fig. 1(a), showing the convergence behavior of BP, quickly increases with increasing N. The computation time needed for the stability analysis (Fig. 1(b)), which amounts to calculating the minimal eigenvalue of the N × N stability matrix, is much less, allowing us to investigate the behavior of BP for large networks.

Figure 2: Critical values for Bethe and MF for different graph topologies (■: ER, △: BA) in the dense limit with d = 0.1N as a function of network size. Note that the y-axis is rescaled by √d.

4 Graph topology

In this section we will concentrate on the frustrated case, more precisely on the case J_0 = 0 (i.e. the y-axis in the regime diagrams), and study the location of the Bethe marginal instability and of the MF instability for various graph topologies as a function of network size N and average degree d. We will denote by J_c^Be the critical value of J at which the Bethe paramagnetic solution becomes marginally unstable, and we will refer to this as the Bethe critical value.
The critical value of J where the MF solution becomes unstable will be denoted as J_c^MF and referred to as the MF critical value. In studying the influence of graph topology for large networks, we have to distinguish two cases, which we call the dense and sparse limits. In the dense limit, we let N → ∞ and scale the average degree as d = cN for some fixed constant c. In this limit, we find that the influence of the graph topology is almost negligible. For all graph topologies that we have considered, we find the following asymptotic behavior for the critical values:

J_c^Be ∝ 1/√d,    J_c^MF ∝ 1/(2√d).

The constant of proportionality is approximately 1. These results are illustrated in Fig. 2 for two different graph topologies that will be discussed in more detail below. In the sparse limit, we let N → ∞ but keep d fixed. In that case the resulting critical values show significant dependence on the graph topology, as we will see.

4.1 Uniform random graphs (ER)

The first and most elementary random graph model we will consider was introduced and studied by Erdős and Rényi [7]. The ensemble, which we denote as ER(N, p), consists of the graphs with N nodes; links are added between each pair of nodes independently with probability p. The resulting graphs have a degree distribution that is approximately Poisson for large N, and the expected average degree is E d = p(N − 1). As was mentioned before, the resulting graphical model is known in the statistical physics literature as the Viana-Bray model (with zero “external field”). Fig. 3 shows the results for the sparse limit, where p is chosen such that the expected average degree is fixed to d = 10. The Bethe critical value J_c^Be appears to be independent of network size and is slightly larger than 1/√d.

Figure 3: Critical values for Bethe and MF for Erdős-Rényi uniform random graphs with average degree d = 10 (the reference lines 1/√d and 1/(2√∆) are also plotted).
The MF critical value J_c^MF does depend on network size (it looks to be proportional to 1/√∆ instead of 1/√d); in fact it can be proven that it converges very slowly to 0 as N → ∞ [11], implying that the MF approximation breaks down for very large ER networks in the sparse limit. Although this is an interesting result, one could say that for all practical purposes the MF critical value J_c^MF is nearly independent of network size N for uniform random graphs.

4.2 Scale-free graphs (BA)

A phenomenon often observed in real-world networks is that the degree distribution behaves like a power law, i.e. the number of nodes with degree δ is proportional to δ^{−α} for some α > 0. These graphs are also known as “scale-free” graphs. The first random graph model exhibiting this behavior is from Barabási and Albert [8]. We will consider a slightly different model, which we will denote by BA(N, m). It is defined as a stochastic process, yielding graphs with more and more nodes as time goes on. At t = 0 one starts with the graph consisting of m nodes and no links. At each time step, one node is added; it is connected with m different already existing nodes, attaching preferentially to nodes with higher degree (“rich get richer”). More specifically, we take the probability to connect to a node of degree δ to be proportional to δ + 1. The degree distribution turns out to have a power-law dependence for N → ∞, with exponent α = 3. In Fig. 4 we illustrate some BA graphs. The difference between the maximum degree ∆ and the average degree d is rather large: whereas the average degree d converges to 2m, the maximum degree ∆ is known to scale as √N. Fig. 5 shows the results of the stability analysis for BA graphs with average degree d = 10. Note that the y-axis is rescaled by √∆ to show that the MF critical value J_c^MF is proportional to 1/√∆. The Bethe critical values are seen to have a scaling behavior that lies somewhere between 1/√d and 1/√∆.
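The BA(N, m) process just described is simple to sketch: each new node draws m distinct targets with probability proportional to degree + 1 (parameter values below are illustrative).

```python
import random

def ba_graph(N, m, seed=0):
    """BA(N, m): start with m isolated nodes; each new node attaches to m
    distinct existing nodes with probability proportional to (degree + 1)."""
    rng = random.Random(seed)
    deg = [0] * m
    edges = set()
    for new in range(m, N):
        targets = set()
        while len(targets) < m:
            # roulette-wheel sample over existing nodes, weight deg + 1
            weights = [deg[i] + 1 for i in range(new)]
            u = rng.random() * sum(weights)
            acc, pick = 0.0, 0
            for i, w in enumerate(weights):
                acc += w
                if u <= acc:
                    pick = i
                    break
            targets.add(pick)
        deg.append(0)
        for t in targets:
            edges.add((t, new))
            deg[t] += 1
            deg[new] += 1
    return edges, deg

edges, deg = ba_graph(500, 5, seed=42)
d_avg = sum(deg) / len(deg)
# "Rich get richer": average degree near 2m, maximum degree far above it.
print(len(edges), round(d_avg, 2), max(deg))
```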
Compared to the situation for uniform ER graphs, BP now even more significantly outperforms MF. The relatively low sensitivity to the maximum degree ∆ that BP exhibits here can be understood intuitively, since BA graphs resemble forests of sparsely interconnected stars of high degree, on which BP is exact.

4.3 C. Elegans

We have also applied our stability analysis to the neural network of the worm C. Elegans, which is publicly available on http://elegans.swmed.edu/. This graph has N = 202 and d = 19.4. We have calculated the ferromagnetic (J = 0) transition and the spin-glass (J_0 = 0) transition. We also calculated the critical value of J where BP stops converging, and the value of J where BP does not find the paramagnetic solution anymore. The results are shown in Table 1. Note the very good agreement between the Bethe critical value and the critical J where BP stops finding the m = 0 solution. These results show the accuracy of our method of estimating BP validity on real-world networks.

Table 1: Critical values and BP boundaries for the C. Elegans network.

                           Spin-glass         Ferromagnetic
MF critical value          0.0927 ± 0.0023    0.0387
Bethe critical value       0.197 ± 0.016      0.0406
BP m = 0 boundary          0.194 ± 0.014      0.0400
BP convergence boundary    0.209 ± 0.027      > 1

5 Conclusions

We have introduced a computationally efficient method to estimate the validity of BP as a function of graph topology, the connectivity strength, frustration and network size.
Using this approach, we have found that:

• For any graph, the Bethe approximation is valid for a larger set of connectivity strengths W_ij than the mean-field approximation.
• For uniform random graphs, the quality of both the MF approximation and the Bethe approximation is determined by the average degree of the network (J_c ∝ 1/√d for the spin-glass transition) and is nearly independent of network size.
• For scale-free networks, the validity of the MF approximation scales very poorly with network size due to the increase of the maximal degree (“rich get richer”). In contrast, the validity of the BP approximation scales very well with network size. This is in agreement with our intuition that these networks resemble a forest of high-degree stars (“hubs”) that are sparsely interconnected, and with the fact that BP is exact on stars.
• In the limit in which the graph size N → ∞ and the average degree d scales proportionally to N, the influence of the graph-topological details on the location of the spin-glass transition (at J ∝ 1/√d) diminishes and becomes largely irrelevant.

Figure 4: Barabási-Albert graphs for N = 20, with m = 1, m = 2 and m = 3.

Figure 5: Critical values for Bethe and MF for BA scale-free random graphs with average degree d = 10, together with the reference line 1/√d. Note that the y-axis is rescaled by √∆.

Acknowledgments

The research reported here is part of the Interactive Collaborative Information Systems (ICIS) project, supported by the Dutch Ministry of Economic Affairs, grant BSIK03024.

References

[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988. [2] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems, volume 13, pages 689–695, 2001. [3] K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference: an empirical study. In Proc. of the Conf.
on Uncertainty in AI, pages 467–475, 1999. [4] B. Frey and D. MacKay. A revolution: Belief propagation in graphs with cycles. In Advances in Neural Information Processing Systems, volume 10, pages 479–485, 1997. [5] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neur. Comp., 12:1–41, 2000. [6] R. Albert and A.-L. Barabási. Statistical mechanics of complex networks. Rev. Mod. Phys., 74:47–97, 2002. [7] P. Erdős and A. Rényi. On random graphs I. Publ. Math. Debrecen, 6:290–291, 1959. [8] A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science, 286:509–512, 1999. [9] T. Heskes. Stable fixed points of loopy belief propagation are local minima of the Bethe free energy. In Advances in Neural Information Processing Systems, volume 15, pages 343–350, 2003. [10] M. Welling and Y.W. Teh. Belief optimization for binary networks: a stable alternative to loopy belief propagation. In Proc. of the Conf. on Uncertainty in AI, volume 17, 2001. [11] J.M. Mooij and H.J. Kappen. Spin-glass phase transitions on real-world graphs. Preprint, cond-mat/0408378, 2004. [12] L. Viana and A. Bray. Phase diagrams for dilute spin glasses. J. Phys. C: Solid State Phys., 18:3037–3051, 1985.
|
2004
|
155
|
2,569
|
Bayesian inference in spiking neurons Sophie Deneve∗ Gatsby Computational Neuroscience Unit University College London London, UK WC1N 3AR sdeneve@gatsby.ucl.ac.uk Abstract We propose a new interpretation of spiking neurons as Bayesian integrators accumulating evidence over time about events in the external world or the body, and communicating to other neurons their certainties about these events. In this model, spikes signal the occurrence of new information, i.e. what cannot be predicted from the past activity. As a result, firing statistics are close to Poisson, albeit providing a deterministic representation of probabilities. We proceed to develop a theory of Bayesian inference in spiking neural networks, recurrent interactions implementing a variant of belief propagation. Many perceptual and motor tasks performed by the central nervous system are probabilistic, and can be described in a Bayesian framework [4, 3]. A few important but hidden properties, such as direction of motion, or appropriate motor commands, are inferred from many noisy, local and ambiguous sensory cues. These evidences are combined with priors about the sensory world and body. Importantly, because most of these inferences should lead to quick and irreversible decisions in a perpetually changing world, noisy cues have to be integrated on-line, but in a way that takes into account unpredictable events, such as a sudden change in motion direction or the appearance of a new stimulus. This raises the question of how this temporal integration can be performed at the neural level. It has been proposed that single neurons in sensory cortices represent and compute the log probability that a sensory variable takes on a certain value (eg Is visual motion in the neuron’s preferred direction?) [9, 7]. 
Alternatively, to avoid normalization issues and provide an appropriate signal for decision making, neurons could represent the log probability ratio of a particular hypothesis (eg is motion more likely to be towards the right than towards the left?) [7, 6]. Log probabilities are convenient here, since under some assumptions, independent noisy cues simply combine linearly. Moreover, there is physiological evidence for the neural representation of log probabilities and log probability ratios [9, 6, 7]. However, these models assume that neurons represent probabilities in their firing rates. We argue that it is important to study how probabilistic information is encoded in spikes. Indeed, it seems spurious to marry the idea of an exquisite on-line integration of noisy cues with an underlying rate code that requires averaging over large populations of noisy neurons and long periods of time. In particular, most natural tasks require this integration to take place on the time scale of inter-spike intervals. Spikes signal events more efficiently than analog quantities. In addition, a neural theory of inference with spikes will bring us closer to the physiological level and generate more easily testable predictions.

∗Institute of Cognitive Science, 69645 Bron, France

Thus, we propose a new theory of neural processing in which spike trains provide a deterministic, online representation of a log-probability ratio. Spikes signal events, eg that the log-probability ratio has exceeded what could be predicted from previous spikes. This form of coding was loosely inspired by the idea of ”energy landscape” coding proposed by Hinton and Brown [2]. However, contrary to [2] and other theories using rate-based representations of probabilities, this model is self-consistent and does not require different models for encoding and decoding: as output spikes provide new, unpredictable, temporally independent evidence, they can be used directly as an input to other Bayesian neurons.
We further show that these neurons can be used as building blocks in a theory of approximate Bayesian inference in recurrent spiking networks. Connections between neurons implement an underlying Bayesian network, consisting of coupled hidden Markov models. Propagation of spikes is a form of belief propagation in this underlying graphical model. Our theory provides computational explanations of some general physiological properties of cortical neurons, such as spike frequency adaptation, Poisson statistics of spike trains, the existence of strong local inhibition in cortical columns, and the maintenance of a tight balance between excitation and inhibition. Finally, we discuss the implications of this model for the debate about temporal versus rate-based neural coding.

1 Spikes and log posterior odds

1.1 Synaptic integration seen as inference in a hidden Markov chain

We propose that each neuron codes for an underlying "hidden" binary variable, $x_t$, whose state evolves over time. We assume that $x_t$ depends only on the state at the previous time step, $x_{t-dt}$, and is conditionally independent of other past states. The state $x_t$ can switch from 0 to 1 with a constant rate $r_{on} = \lim_{dt \to 0} \frac{1}{dt} P(x_t = 1 \mid x_{t-dt} = 0)$, and from 1 to 0 with a constant rate $r_{off}$. For example, these transition rates could represent how often motion in a preferred direction appears in the receptive field and how long it is likely to stay there. The neuron infers the state of its hidden variable from $N$ noisy synaptic inputs, considered to be observations of the hidden state. In this initial version of the model, we assume that these inputs are conditionally independent homogeneous Poisson processes, synapse $i$ emitting a spike between time $t$ and $t + dt$ ($s^i_t = 1$) with constant probability $q^i_{on}\,dt$ if $x_t = 1$, and another constant probability $q^i_{off}\,dt$ if $x_t = 0$. The synaptic spikes are assumed to be otherwise independent of previous synaptic spikes, previous states and spikes at other synapses.
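A minimal simulation of this generative model can make the setup concrete. This is an illustrative sketch, not the paper's code: the transition rates, synaptic rates, duration and synapse count below are hypothetical values chosen for demonstration.

```python
import numpy as np

# Sketch of the generative model: a binary hidden state x_t switching at
# rates r_on/r_off, observed through N conditionally independent Poisson
# synapses firing at rate q_on when x_t = 1 and q_off when x_t = 0.
# All parameter values are illustrative, not the paper's.
def simulate(T=3.0, dt=1e-3, r_on=5.0, r_off=5.0,
             q_on=20.0, q_off=5.0, n_syn=10, rng=None):
    rng = rng or np.random.default_rng(0)
    steps = int(T / dt)
    x = np.zeros(steps, dtype=int)
    s = np.zeros((steps, n_syn), dtype=int)
    for t in range(1, steps):
        p_switch = (r_on if x[t-1] == 0 else r_off) * dt
        x[t] = 1 - x[t-1] if rng.random() < p_switch else x[t-1]
        q = q_on if x[t] == 1 else q_off
        s[t] = rng.random(n_syn) < q * dt   # spike with probability q*dt
    return x, s
```

The synapses are noticeably more active whenever the hidden state is ON, which is precisely the statistical structure the neuron must invert.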
The resulting generative model is a hidden Markov chain (figure 1-A). However, rather than estimating the state of its hidden variable and communicating this estimate to other neurons (for example by emitting a spike when sensory evidence for $x_t = 1$ goes above a threshold), the neuron reports and communicates its certainty that the current state is 1. This certainty takes the form of the log of the ratio of the probability that the hidden state is 1 and the probability that the state is 0, given all the synaptic inputs received so far: $L_t = \log \frac{P(x_t = 1 \mid s_{0 \to t})}{P(x_t = 0 \mid s_{0 \to t})}$. We use $s_{0 \to t}$ as a shorthand notation for the $N$ synaptic inputs received at present and in the past. We will refer to it as the log odds ratio. Thanks to the conditional independencies assumed in the generative model, we can compute this log odds ratio iteratively. Taking the limit as $dt$ goes to zero, we get the following differential equation: $\dot L = r_{on}(1 + e^{-L}) - r_{off}(1 + e^{L}) + \sum_i w_i \delta(s^i_t - 1) - \theta$

Figure 1: A. Generative model for the synaptic input. B. Schematic representation of log odds ratio encoding and decoding. The dashed circle represents both eventual downstream elements and the self-prediction taking place inside the model neuron. A spike is fired only when $L_t$ exceeds $G_t$. C. One example trial, where the state switches from 0 to 1 (shaded area) and back to 0. Plain: $L_t$, dotted: $G_t$. Black stripes at the top: corresponding spike train. D. Mean log odds ratio (dark line) and mean output firing rate (clear line). E. Output spike raster plot (1 line per trial) and ISI distribution for the neuron shown in C. and D. Clear line: ISI distribution for a Poisson neuron with the same rate.
$w_i$, the synaptic weight, describes how informative synapse $i$ is about the state of the hidden variable: $w_i = \log \frac{q^i_{on}}{q^i_{off}}$. Each synaptic spike ($s^i_t = 1$) gives an impulse to the log odds ratio, which is positive if this synapse is more active when the hidden state is 1 (i.e., it increases the neuron's confidence that the state is 1), and negative if this synapse is more active when $x_t = 0$ (i.e., it decreases the neuron's confidence that the state is 1). The bias, $\theta$, is determined by how informative it is not to receive any spike: $\theta = \sum_i q^i_{on} - q^i_{off}$. By convention, we will consider that the "bias" is positive or zero (if not, we simply need to invert the status of the state $x$).

1.2 Generation of output spikes

The spike train should convey a sparse representation of $L_t$, so that each spike reports new information about the state $x_t$ that is not redundant with that reported by other, preceding spikes. This proposition is based on three arguments: First, spikes, being metabolically expensive, should be kept to a minimum. Second, spikes conveying redundant information would require a decoding of the entire spike train, whereas independent spikes can be taken into account individually. And finally, we seek a self-consistent model, with the spiking output having a similar semantics to its spiking input. To maximize the independence of the spikes (conditioned on $x_t$), we propose that the neuron fires only when the difference between its log odds ratio $L_t$ and a prediction $G_t$ of this log odds ratio based on the output spikes emitted so far reaches a certain threshold. Indeed, supposing that downstream elements predict $L_t$ as best as they can, the neuron only needs to fire when it expects that prediction to be too inaccurate (figure 1-B). In practice, this will happen when the neuron receives new evidence for $x_t = 1$. $G_t$ should thereby follow the same dynamics as $L_t$ when spikes are not received.
The equations for $G_t$ and the output $O_t$ ($O_t = 1$ when an output spike is fired) are given by:

$\dot G = r_{on}(1 + e^{-G}) - r_{off}(1 + e^{G}) + g_o\,\delta(O_t - 1)$  (1)

$O_t = 1$ when $L_t > G_t + \frac{g_o}{2}$, $0$ otherwise.  (2)

Here $g_o$, a positive constant, is the only free parameter, the other parameters being constrained by the statistics of the synaptic input.

1.3 Results

Figure 1-C plots a typical trial, showing the behavior of $L$, $G$ and $O$ before, during and after presentation of the stimulus. As random synaptic inputs are integrated, $L$ fluctuates and eventually exceeds $G + g_o/2$, leading to an output spike. Immediately after a spike, $G$ jumps to $G + g_o$, which prevents (except in very rare cases) a second spike from immediately following the first. Thus, this "jump" implements a relative refractory period. However, $G$ decays as it tends to converge back to its stable level $g_{stable} = \log \frac{r_{on}}{r_{off}}$. Thus $L$ eventually exceeds $G$ again, leading to a new spike. This threshold crossing happens more often during stimulation ($x_t = 1$), as the net synaptic input creates a higher overall level of certainty, $L_t$.

Mean log odds ratio and output firing rate

The mean firing rate $\bar O_t$ of the Bayesian neuron during presentation of its preferred stimulus (i.e., when $x_t$ switches from 0 to 1 and back to 0) is plotted in figure 1-D, together with the mean log posterior ratio $\bar L_t$, both averaged over trials. Not surprisingly, the log posterior ratio reflects the leaky integration of synaptic evidence, with an effective time constant that depends on the transition probabilities $r_{on}$, $r_{off}$. If the state is very stable ($r_{on} = r_{off} \approx 0$), synaptic evidence is integrated over almost infinite time periods, the mean log posterior ratio tending to either increase or decrease linearly with time. In the example in figure 1-D, the state is less stable, so "old" synaptic evidence is discounted and $L_t$ saturates. In contrast, the mean output firing rate $\bar O_t$ tracks the state $x_t$ almost perfectly.
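Equations (1)-(2) can be put together with the log odds dynamics in a minimal simulation of the full neuron. This is a sketch using simple Euler integration; all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Sketch of the Bayesian neuron: integrate L (log odds) and G (prediction),
# emit a spike whenever L > G + g_o/2, and bump G by g_o after each spike.
# Parameters are hypothetical demonstration values.
def bayesian_neuron(spikes, dt=1e-3, r_on=2.0, r_off=2.0,
                    q_on=20.0, q_off=5.0, g_o=1.0):
    n_steps, n_syn = spikes.shape
    w = np.log(q_on / q_off)               # per-synapse evidence weight
    theta = n_syn * (q_on - q_off)         # bias from receiving no spike
    L = np.zeros(n_steps)
    G = np.zeros(n_steps)
    O = np.zeros(n_steps, dtype=int)
    f = lambda v: r_on * (1 + np.exp(-v)) - r_off * (1 + np.exp(v))
    for t in range(1, n_steps):
        L[t] = L[t-1] + dt * (f(L[t-1]) - theta) + w * spikes[t].sum()
        G[t] = G[t-1] + dt * f(G[t-1])
        if L[t] > G[t] + g_o / 2:          # threshold crossing -> output spike
            O[t] = 1
            G[t] += g_o                    # post-spike jump: relative refractoriness
    return L, G, O
```

With no synaptic input, $L$ decays and no output spikes are fired; with strong input, $L$ repeatedly overtakes the prediction $G$ and the neuron spikes.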
This is because, as a form of predictive coding, the output spikes reflect the new synaptic evidence, $I_t = \sum_i w_i \delta(s^i_t - 1) - \theta$, rather than the log posterior ratio itself. In particular, the mean output firing rate is a rectified linear function of the mean input: $\bar O = \frac{1}{g_o} \bar I = \left[\sum_i w_i q^i_{on(off)} - \theta\right]^+$.

Analogy with a leaky integrate and fire neuron

We can get an interesting insight into the computation performed by this neuron by linearizing $L$ and $G$ around their mean levels over trials. Here we reduce the analysis to prolonged, statistically stable periods when the state is constant (either ON or OFF). In this case, the mean level of certainty $\bar L$ and its output prediction $\bar G$ are also constant over time. We make the rough approximation that the post-spike jump, $g_o$, and the input fluctuations are small compared to the mean level of certainty $\bar L$. Rewriting $V_t = L_t - G_t + \frac{g_o}{2}$ as the "membrane potential" of the Bayesian neuron: $\dot V = -k_{\bar L} V + I_t - \Delta g_o - g_o O_t$, where $k_{\bar L} = r_{on} e^{-\bar L} + r_{off} e^{\bar L}$, the "leak" of the membrane potential, depends on the overall level of certainty. $\Delta g_o$ is positive and a monotonically increasing function of $g_o$.

Figure 2: A. Bayesian causal network for $y_t$ (tiger), $x^1_t$ (stripes) and $x^2_t$ (paws). B. A feedforward network computing the log posterior for $x^1_t$. C. A recurrent network computing the log posterior odds for all variables. D. Log odds ratio in a simulated trial with the network in C (see text). Thick line: $L^{x^2}_t$, thin line: $L^{x^1}_t$, dash-dotted: $L^{x^1}_t$ without inhibition. Insert: $L^{x^2}_t$ averaged over trials, showing the effect of feedback.

The linearized Bayesian neuron thus acts in its stable regime as a leaky integrate and fire (LIF) neuron.
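A toy illustration of this stable LIF regime: integrate $dV = (-kV + J)\,dt$ with zero-mean noisy input $J$, a firing threshold $g_o$ and reset to 0. The leak, threshold and noise amplitude below are hypothetical values, not derived from the model's parameters.

```python
import numpy as np

# Toy LIF sketch of the linearized membrane-potential dynamics: the neuron is
# driven to threshold purely by input fluctuations (illustrative constants).
def lif(n_steps=5000, dt=1e-3, k=5.0, g_o=1.0, sigma=20.0, rng=None):
    rng = rng or np.random.default_rng(1)
    V = 0.0
    spike_times = []
    for t in range(n_steps):
        J = sigma * rng.standard_normal() / np.sqrt(dt)  # zero-mean white noise
        V += dt * (-k * V + J)
        if V >= g_o:
            spike_times.append(t * dt)
            V = 0.0                                      # post-spike reset
    return np.array(spike_times)
```

Because the mean drive is zero, threshold crossings are caused by the random walk of $V$ alone, which is the mechanism the text invokes to explain the near-Poisson output statistics.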
The membrane potential $V_t$ integrates its input, $J_t = I_t - \Delta g_o$, with a leak $k_{\bar L}$. The neuron fires when its membrane potential reaches a constant threshold $g_o$. After each spike, $V_t$ is reset to 0. Interestingly, for an appropriately chosen compression factor $g_o$, the mean input to the linearized neuron $\bar J = \bar I - \Delta g_o \approx 0$.¹ This means that the membrane potential is purely driven to its threshold by input fluctuations, i.e., by a random walk in membrane potential. As a consequence, the neuron's firing will be memoryless, and close to a Poisson process. In particular, we found a Fano factor close to 1 and a quasi-exponential ISI distribution (figure 1-E) over the entire range of parameters tested. Indeed, LIF neurons with balanced inputs have been proposed as a model to reproduce the statistics of real cortical neurons [8]. This balance is implemented in our model by the neuron's effective self-inhibition, even when the synaptic input itself is not balanced.

Decoding

As we said previously, downstream elements could predict the log odds ratio $L_t$ by computing $G_t$ from the output spikes (Eq. 1, fig. 1-B). Of course, this requires an estimate of the transition probabilities $r_{on}$, $r_{off}$, which could be learned from the observed spike trains. However, we show next that explicit decoding is not necessary to perform Bayesian inference in spiking networks. Intuitively, this is because the quantity that our model neurons receive and transmit, i.e., new information, is exactly what probabilistic inference algorithms propagate between connected statistical elements.

¹Even if $g_o$ is not chosen optimally, the influence of the drift $\bar J$ is usually negligible compared to the large fluctuations in membrane potential.

2 Bayesian inference in cortical networks

The model neurons, having the same input and output semantics, can be used as building blocks to implement more complex generative models consisting of coupled Markov chains. Consider, for example, the model in figure 2-A.
Here, a "parent" variable $x^1_t$ (the presence of a tiger) can cause the state of $n$ other "children" variables ($[x^k_t]_{k=2\ldots n}$), of whom two are represented (the presence of stripes, $x^2_t$, and motion, $x^3_t$). The "children" variables are Bayesian neurons identical to those described previously. The resulting Bayesian network consists of $n + 1$ coupled hidden Markov chains. Inference in this architecture corresponds to computing the log posterior odds ratio for the tiger, $x^1_t$, and the log posterior of observing stripes or motion, ($[x^k_t]_{k=2\ldots n}$), given the synaptic inputs received by the entire network so far, i.e., $s^2_{0\to t}, \ldots, s^n_{0\to t}$. Unfortunately, inference and learning in this network (and in general in coupled Markov chains) require very expensive computations, and cannot be performed by simply propagating messages over time and among the variable nodes. In particular, the state of a child variable $x^k_t$ depends on $x^k_{t-dt}$, $s^k_t$, $x^1_t$ and the state of all other children at the previous time step, $[x^j_{t-dt}]_{2 \le j \le n,\, j \ne k}$. In contrast, our network can only implement pairwise interactions, a connection between two spiking neurons implementing the conditional probability linking the two corresponding binary variables. Thus, we need to assume additional conditional independencies between the nodes in the generative model, so that their joint probability can be pairwise factorized: $p(x_t, x_{t-dt}) = \frac{1}{Z} \prod_{ij} \phi(x^i_t, x^j_t) \prod_i \phi(x^i_t, x^i_{t-dt})$. In words, it means that variables bias each other's probabilities, but do not influence each other's dynamics, i.e., they do not affect each other's transition probabilities. For example, a tiger does not affect the probability that stripes appear or disappear, but increases their probability of being present.

Naive implementation

In this restricted case, marginal posterior probabilities can be computed iteratively by propagating beliefs in time and between the variables, or, in our model, by propagating spikes in a neural network.
This is because the probability of a variable $x^k_t$ can be directly updated by the conditional probability of observing the synaptic input to another connected neuron, $s^l_t$, e.g., $p(s^l_t \mid x^k_t) = \sum_{x^l_t} p(s^l_t \mid x^l_t)\, p(x^l_t \mid x^k_t)$, marginalizing out the hidden state $x^l_t$. Of course, rather than using $s^l_t$, we use $O^l_t$, the output of the Bayesian neuron coding for $x^l_t$. As we said previously, this output directly represents the new synaptic evidence received by neuron $l$. The resulting equation is identical to the one derived previously for Poisson input: $\dot L^{x^k} = f_k(L^{x^k}) + \sum_l w_{lk}\,\delta(O^l_t - 1) - \theta_k$, where $f_k(x) = r^k_{on}(1 + e^{-x}) - r^k_{off}(1 + e^{x})$. As previously, $w_{lk}$, the synaptic weight, describes how informative it is for neuron $k$ to receive a spike from neuron (or synapse) $l$, $w_{lk} = \log \frac{P(O^l_t = 1 \mid x^k_t = 1)}{P(O^l_t = 1 \mid x^k_t = 0)}$, while $\theta_k$ is how informative it is not to receive a spike, $\theta_k = \frac{1}{dt} \sum_l \left[P(O^l_t = 1 \mid x^k_t = 1) - P(O^l_t = 1 \mid x^k_t = 0)\right]$. This shows that our model is self-consistent. Except at the first stage of processing (e.g., in the retina), all inputs are proposed to come from other Bayesian neurons.

Results

We implemented these update rules in a spiking neural network (figure 2-B) representing the generative model in figure 2-A, with 100 possible children for $x^1_t$. We first consider the case where there are no feedback connections, meaning that $w_{1k} = 0$ for all $k$. In this case the network computes the probability of a tiger at time $t$, integrating multiple sensory cues such as the presence of stripes or motion in the visual scene. In the example trials plotted in figure 2-D, we fixed the states of $x^1_t$ and $x^2_t$: the tiger and the stripes are present in the shaded temporal window, and absent outside of it. We then sample the states of the other children (i.e., is there motion or not?) and the corresponding "observed" synaptic inputs, $s^k_{0 \to t_{max}}$, from the generative model. Once this synaptic input has been generated, it is used as an input to the network in figure 2-B.
What is plotted is the log odds ratio for the tiger, $L^{x^1}_t$, and the stripes, $L^{x^2}_t$, as a function of time. As we can see, the stripes receive very noisy synaptic input and can only provide weak evidence that they are present. However, the tiger neuron is able to combine inputs from its 100 children and reach a much higher certainty. Unfortunately, Bayesian inference in this feedforward network is incomplete: the presence of a tiger affects the probability of stripes, not only the other way round. To implement this, we also need feedback connections, $w_{1k}$. The network with feedforward and feedback processing fails miserably, its activity exploding, as illustrated in figure 2-D.

Balanced excitation/inhibition

This failure is due to the presence of loops, whereby a spike from neuron $k$ increases the certainty of neuron $l$ (and its probability of firing) by $w_{kl}$, and a spike from neuron $l$ increases in turn the certainty of neuron $k$ by $w_{lk}$. These loops result in spikes reverberating through the network, ad infinitum, without reporting new information, a phenomenon akin to loopy belief propagation [10]. To avoid overcounting of evidence, we thus have to discount the reverberated "old evidence" from the synaptic input: $\dot L^{x^k} = f_k(L^{x^k}) + \sum_l w_{lk}\,\delta(O^l_t - 1) - \sum_l w_{lk} w_{kl}\,\delta(O^k_{t-dt} - 1) - \theta_k$. We implemented this discounting using inhibitory neurons recurrently connected to each excitatory neuron (figure 2-C). The inhibitory neurons are used to predict the redundant feedback a Bayesian neuron will receive and subtract this prediction so that, once again, only new information is taken into account and communicated to other neurons. Each excitatory loop is compensated by an inhibitory loop, resulting in a balance between excitation and inhibition at the level of each neuron within the network. The result on one trial is plotted in figure 2-D. The "tiger" log odds ratio is almost indistinguishable from the feedforward case, and is not plotted.
The "stripes" log odds ratio increases during presentation of the tiger due to the feedback. In other words, the stripes neuron can take into account not only its own synaptic input, but also the synaptic input to the other children neurons (such as evidence for motion), thanks to the presence of a common source (the tiger). Over many trials, we found that the statistics of the Bayesian neurons were still Poisson, and their output firing is still a rectified linear function of the input firing rates in a stable statistical regime, i.e., $\bar O^k = \left[\sum_l w_{lk} \bar O^l\right]^+$.

Discussion

We started from an interpretation of synaptic integration in single neurons as a form of inference in a hidden Markov chain. We derived a model of spiking neurons and their interactions able to compute the marginal posterior probabilities of sensory and motor variables given the evidence received in the entire network. In this view, the brain implements an underlying Bayesian network in an interconnected neural architecture, with conditional probabilities represented by synaptic weights. The model makes a rich set of predictions for the general properties of neuronal and synaptic dynamics, such as a time constant that depends on the overall level of inputs, specific forms of frequency-dependent spike and synaptic adaptation (not shown here), and micro-balanced excitation and inhibition. However, it is still restricted to probabilistic computations involving binary variables. In related work, similar ideas are applied to population encoding of log probability distributions for analog variables (Zemel, Huys and Dayan, submitted to NIPS 2004). Despite non-linear processing at the single-neuron level, the emerging picture is relatively simple: the neuron acts as a leaky integrate and fire neuron driven by noise. The output firing rate is a rectified weighted sum of the input firing rates, while the firing statistics are Poisson.
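This closing rate-level description can be illustrated directly: with hypothetical weights and an external drive standing in for first-stage (e.g. retinal) input, the rectified-linear rate relation has a simple fixed point.

```python
import numpy as np

# Fixed point of the stable-regime rate relation O_k = [sum_l w_lk O_l]^+.
# The weight matrix and external drive are hypothetical demonstration values.
W = np.array([[0.0, 0.3, 0.0],
              [0.0, 0.0, 0.4],
              [0.0, 0.0, 0.0]])      # W[l, k]: weight from neuron l to neuron k
ext = np.array([10.0, 0.0, 0.0])     # external (first-stage) drive, neuron 0 only
rates = np.zeros(3)
for _ in range(100):                 # fixed-point iteration of the rate equation
    rates = np.maximum(ext + W.T @ rates, 0.0)
```

For this feedforward chain the iteration converges in a few steps: neuron 0 fires at the external rate, and each downstream neuron at the rectified weighted sum of its inputs.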
However, these output spike trains are a deterministic function of the input spike trains. Spikes report fluctuations in the level of certainty that could not be predicted either from the stability of the stimulus (contribution from $G_t$) or from the loops in the network (contribution from the inhibitory neurons). Thus firing will be, by definition, unpredictable. This last observation leads us to suggest that the irregular firing and Poisson statistics observed in cortical neurons [1] arise as a direct consequence of the random fluctuations in the sensory inputs and the instability of the real world, and are not due to unreliable or "chaotic" neural processing. Finally, it is crucial for the biological realism of the model to find adaptive neural dynamics and synaptic plasticity able to learn the parameters of the internal model and the conditional probabilities, and we are currently exploring these issues. Fortunately, the required learning rules are local and unsupervised. According to our preliminary work, the synaptic weights and bias depend on the joint probability of presynaptic/postsynaptic spikes and can be learned with the spike-time-dependent plasticity observed in hippocampus and cortex [5]. Meanwhile, the transition probabilities simply correspond to how often the neuron switches between an active and an inactive state.

Acknowledgments

We thank Peter Dayan, Peter Latham, Zoubin Ghahramani, Jean Laurens and Jacques Droulez for helpful discussions and comments. This work was supported by a Dorothy Hodgkin fellowship from the Royal Society and the BIBA European Project.

References

[1] K. H. Britten, M. N. Shadlen, W. T. Newsome, and J. A. Movshon. The analysis of visual motion: A comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12:4745–4765, 1992.
[2] G. Hinton and A. Brown. Spiking Boltzmann machines. In S. Solla, T. Leen, and K. Muller, editors, Neural Information Processing Systems, volume 12, pages 122–8.
MIT Press, Cambridge, MA, 2000.
[3] D. Knill and W. Richards. Perception as Bayesian inference. Cambridge University Press, Cambridge, MA, 1996.
[4] K. Kording and D. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427:244–7, 2004.
[5] H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature, 382:807–19, 1996.
[6] M. Mazurek, J. Roitman, J. Ditterich, and M. Shadlen. A role for neural integrators in perceptual decision making. Cerebral Cortex, 13(11):1257–69, 2003.
[7] R. Rao. Bayesian computation in recurrent neural circuits. Neural Computation, 16(1):1–38, 2003.
[8] M. Shadlen and W. Newsome. Noise, neural codes and cortical organization. Current Opinion in Neurobiology, 4:569–579, 1994.
[9] Y. Weiss and D. Fleet. Velocity likelihood in biological and machine vision. In R. Rao, B. Olshausen, and M. Lewicki, editors, Probabilistic Models of the Brain: Perception and Neural Function, pages 77–96. MIT Press, Cambridge, MA, 2002.
[10] Y. Weiss and W. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, 13:2173–200, 2001.
2004
Computing regularization paths for learning multiple kernels

Francis R. Bach & Romain Thibaux
Computer Science, University of California, Berkeley, CA 94720
{fbach,thibaux}@cs.berkeley.edu

Michael I. Jordan
Computer Science and Statistics, University of California, Berkeley, CA 94720
jordan@cs.berkeley.edu

Abstract

Learning a sparse conic combination of kernel functions or kernel matrices for classification or regression can be achieved via regularization by a block 1-norm [1]. In this paper, we present an algorithm that computes the entire regularization path for these problems. The path is obtained by using numerical continuation techniques, and involves a running time complexity that is a constant times the complexity of solving the problem for one value of the regularization parameter. Working in the setting of kernel linear regression and kernel logistic regression, we show empirically that the effect of the block 1-norm regularization differs notably from the (non-block) 1-norm regularization commonly used for variable selection, and that the regularization path is of particular value in the block case.

1 Introduction

Kernel methods provide efficient tools for nonlinear learning problems such as classification or regression. Given a learning problem, two major tasks faced by practitioners are to find an appropriate kernel and to understand how regularization affects the solution and its performance. This paper addresses both of these issues within the supervised learning setting by combining three themes from recent statistical machine learning research, namely multiple kernel learning [2, 3, 1], computation of regularization paths [4, 5], and the use of path following methods [6]. The problem of learning the kernel from data has recently received substantial attention, and several formulations have been proposed that involve optimization over the conic structure of the space of kernels [2, 1, 3].
In this paper we follow the specific formulation of [1], who showed that learning a conic combination of basis kernels is equivalent to regularizing the original supervised learning problem by a weighted block 1-norm (see Section 2.2 for further details). Thus, by solving a single convex optimization problem, the coefficients of the conic combination of kernels and the values of the parameters (the dual variables) are obtained. Given the basis kernels and their coefficients, there is one free parameter remaining—the regularization parameter. Kernel methods are nonparametric methods, and thus regularization plays a crucial role in their behavior. In order to understand a nonparametric method, in particular complex nonparametric methods such as those considered in this paper, it is useful to be able to consider the entire path of regularization, that is, the set of solutions for all values of the regularization parameter [7, 4]. Moreover, if it is relatively cheap computationally to compute this path, then it may be of practical value to compute the path as standard practice in fitting a model. This would seem particularly advisable in cases in which performance can display local minima along the regularization path. In such cases, standard local search methods may yield unnecessarily poor performance. For least-squares regression with a 1-norm penalty or for the support vector machine, there exist efficient computational techniques to explore the regularization path [4, 5]. These techniques exploit the fact that for these problems the path is piecewise linear. In this paper we consider the extension of these techniques to the multiple kernel learning problem. As we will show (in Section 3), in this setting the path is no longer piecewise linear. It is, however, piecewise smooth, and we are able to follow it by using numerical continuation techniques [8, 6]. 
To do this in a computationally efficient way, we invoke logarithmic barrier techniques analogous to those used in interior point methods for convex optimization (see Section 3.3). As we shall see, the complexity of our algorithms essentially depends on the number of "kinks" in the path, i.e., the number of discontinuity points of the derivative. Our experiments suggest that the number of those kinks is always less than a small constant times the number of basis kernels. The empirical complexity of our algorithm is thus a constant times the complexity of solving the problem using interior point methods for one value of the regularization parameter (see Section 3.4 for details). In Section 4, we present simulation experiments for classification and regression problems, using a large set of basis kernels based on the most widely used kernels (linear, polynomial, Gaussian). In particular, we show empirically that the number of kernels in the conic combination is not a monotonic function of the amount of regularization. This contrasts with the simpler non-block 1-norm case for variable selection (i.e., blocks of size one [4]), where the number of variables is usually monotonic (or nearly so). Thus the need to compute full regularization paths is particularly acute in our more complex (block 1-norm regularization) case.

2 Block 1-norm regularization

In this section we review the block 1-norm regularization framework of [1] as it applies to differentiable loss functions. To provide necessary background we begin with a short review of classical 2-norm regularization.

2.1 Classical 2-norm regularization

Primal formulation

We consider the general regularized learning optimization problem [7], where the data $x_i$, $i = 1, \ldots, n$, belong to the input space $\mathcal{X}$, and $y_i$, $i = 1, \ldots, n$, are the responses (lying either in $\{-1, 1\}$ for classification or $\mathbb{R}$ for regression). We map the data into a feature space $\mathcal{F}$ through $x \mapsto \Phi(x)$.
The kernel associated with this feature map is denoted $k(x, y) = \Phi(x)^\top \Phi(y)$. The optimization problem is the following¹:

$\min_{w \in \mathbb{R}^p} \sum_{i=1}^n \ell(y_i, w^\top \Phi(x_i)) + \frac{\lambda}{2} \|w\|^2$,  (1)

where $\lambda > 0$ is a regularization parameter and $\|w\|$ is the 2-norm of $w$, defined as $\|w\| = (w^\top w)^{1/2}$. The loss function $\ell$ is any function from $\mathbb{R} \times \mathbb{R}$ to $\mathbb{R}$. In this paper, we focus on loss functions that are strictly convex and twice continuously differentiable in their second argument. Let $\psi_i(v)$, $v \in \mathbb{R}$, be the Fenchel conjugate [9] of the convex function $\varphi_i(u) = \ell(y_i, u)$, defined as $\psi_i(v) = \max_{u \in \mathbb{R}} (vu - \varphi_i(u))$. Since we have assumed that $\ell$ is strictly convex and differentiable, the maximum defining $\psi_i(v)$ is attained at a unique point equal to $\psi'_i(v)$ (possibly equal to $+\infty$ or $-\infty$). The function $\psi_i(v)$ is then strictly convex and twice differentiable in its domain. In particular, we have the following examples in mind: for least-squares regression, we have $\varphi_i(u) = \frac{1}{2}(y_i - u)^2$ and $\psi_i(v) = \frac{1}{2}v^2 + v y_i$, while for logistic regression, we have $\varphi_i(u) = \log(1 + \exp(-y_i u))$, where $y_i \in \{-1, 1\}$, and $\psi_i(v) = (1 + v y_i)\log(1 + v y_i) - v y_i \log(-v y_i)$ if $v y_i \in (-1, 0)$, $+\infty$ otherwise.

¹We omit the intercept as it can be included by adding the constant variable equal to 1 to each feature vector $\Phi(x_i)$.

Dual formulation and optimality conditions

The Lagrangian for problem (1) is $\mathcal{L}(w, u, \alpha) = \sum_i \varphi_i(u_i) + \frac{\lambda}{2}\|w\|^2 - \lambda \sum_i \alpha_i (u_i - w^\top \Phi(x_i))$ and is minimized with respect to $u$ and $w$, with $w = -\sum_i \alpha_i \Phi(x_i)$. The dual problem is then

$\max_{\alpha \in \mathbb{R}^n} -\sum_i \psi_i(\lambda \alpha_i) - \frac{\lambda}{2} \alpha^\top K \alpha$,  (2)

where $K \in \mathbb{R}^{n \times n}$ is the kernel matrix of the points, i.e., $K_{ab} = k(x_a, x_b)$. The optimality condition for the dual variable $\alpha$ is then:

$\forall i,\; (K\alpha)_i + \psi'_i(\lambda \alpha_i) = 0$  (3)

2.2 Block 1-norm regularization

In this paper, we map the input space $\mathcal{X}$ to $m$ different feature spaces $\mathcal{F}_1, \ldots, \mathcal{F}_m$, through $m$ feature maps $\Phi_1(x), \ldots, \Phi_m(x)$. We now have $m$ different variables $w_j \in \mathcal{F}_j$, $j = 1, \ldots, m$. We use the notation $\Phi(x) = (\Phi_1(x), \ldots, \Phi_m(x))$ and $w = (w_1, \ldots, w_m)$, and from now on, we use the implicit convention that the index $i$ ranges over data points (from 1 to $n$), while the index $j$ ranges over kernels/feature spaces (from 1 to $m$). Let $d_j$, $j = 1, \ldots, m$, be weights associated with each kernel. We will see in Section 4 how these should be linked to the rank of the kernel matrices. Following [1], we consider the following problem with weighted block 1-norm regularization² (where $\|w_j\| = (w_j^\top w_j)^{1/2}$ still denotes the 2-norm of $w_j$):

$\min_{w \in \mathcal{F}_1 \times \cdots \times \mathcal{F}_m} \sum_i \varphi_i(w^\top \Phi(x_i)) + \lambda \sum_j d_j \|w_j\|$.  (4)

The problem (4) is a convex problem, but not a differentiable one. In order to derive optimality conditions, we can reformulate it with conic constraints and derive the following dual problem (we omit details for brevity) [9, 1]:

$\max_\alpha -\sum_i \psi_i(\lambda \alpha_i)$ such that $\forall j,\; \alpha^\top K_j \alpha \leqslant d_j^2$  (5)

where $K_j$ is the kernel matrix associated with kernel $k_j$, i.e., defined as $(K_j)_{ab} = k_j(x_a, x_b)$. From the KKT conditions for problem Eq. (5), we obtain that the dual variable $\alpha$ is optimal if and only if there exists $\eta \in \mathbb{R}^m$ such that $\eta \geqslant 0$ and

$\forall i,\; (\sum_j \eta_j K_j \alpha)_i + \psi'_i(\lambda \alpha_i) = 0$  (6)

$\forall j,\; \alpha^\top K_j \alpha \leqslant d_j^2,\; \eta_j \geqslant 0,\; \eta_j (d_j^2 - \alpha^\top K_j \alpha) = 0$.

We can go back and forth between the optimal $w$ and $\alpha$ by $w = -\lambda\, \mathrm{Diag}(\eta) \sum_i \alpha_i \Phi(x_i)$ or $\alpha_i = \frac{1}{\lambda} \varphi'_i(w^\top \Phi(x_i))$.

²In [1], the square of the block 1-norm was used. However, when the entire regularization path is sought, it is easy to show that the two problems are equivalent. The advantage of the current formulation is that when the blocks are of size one the problem reduces to classical 1-norm regularization [4].

Figure 1: (Left) Geometric interpretation of the dual problem in Eq. (5) for linear regression; see text for details. (Right) Predictor-corrector algorithm.

We see that the solution of Eq. (5) can be obtained by using only the kernel matrices $K_j$ (i.e., this is indeed a kernel machine) and that the optimal solution of the block 1-norm problem in Eq.
(5), with optimality conditions in Eq. (6), is the solution of the regular 2-norm problem in Eq. (2) with kernel K = Σ_j η_j K_j. Thus, with this formulation, we learn the coefficients of the conic combination of kernels as well as the dual variables α [1]. As shown in [1], the conic combination is sparse, i.e., many of the coefficients η_j are equal to zero.

2.3 Geometric interpretation of the dual problem

Each function ψ_i is strictly convex, with a strict minimum at β_i defined by ψ′_i(β_i) = 0 (for least-squares regression we have β_i = −y_i, and for logistic regression we have β_i = −y_i/2). The negated dual objective Σ_i ψ_i(λα_i) is thus a metric between α and β/λ (for least-squares regression this is simply the squared distance, while for logistic regression it is an entropy distance). Therefore, the dual problem aims to minimize a metric between α and the target β/λ, under the constraint that α belongs to an intersection of m ellipsoids {α ∈ R^n : α^⊤K_jα ⩽ d_j²}. When computing the regularization path from λ = +∞ to λ = 0, the target goes from 0 to ∞ in the direction β (see Figure 1). The geometric interpretation immediately implies that as long as (1/λ²) β^⊤K_jβ ⩽ d_j² for all j, the active set is empty, the optimal α is equal to β/λ, and the optimal w is equal to 0. We thus initialize the path-following technique with λ = max_j (β^⊤K_jβ/d_j²)^{1/2} and α = β/λ.

3 Building the regularization path

In this section, the goal is to vary λ from +∞ (full regularization) to 0 (no regularization) and obtain a representation of the path of solutions (α(λ), η(λ)). We will essentially approximate the path by a piecewise linear function of σ = log(λ).

3.1 Active set method

For the dual formulation Eq. (5)–Eq. (6), if the set of active kernels J(α) is known, i.e., the set of kernels such that α^⊤K_jα = d_j², then the optimality conditions become

∀j ∈ J,  α^⊤K_jα = d_j²    (7)
∀i,  (Σ_{j∈J} η_j K_j α)_i + ψ′_i(λα_i) = 0,

and they are valid as long as ∀j ∉ J, α^⊤K_jα ⩽ d_j² and ∀j ∈ J, η_j ⩾ 0.
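As a quick numerical sanity check (not part of the original paper; the synthetic data, unit weights d_j = 1, and RBF-free random kernels are our assumptions), the least-squares specializations above can be verified directly: condition (3) becomes (Kα)_i + λα_i + y_i = 0, and the initialization α = β/λ with λ = max_j (β^⊤K_jβ/d_j²)^{1/2} is feasible and touches exactly one ellipsoid:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 15, 4
y = rng.standard_normal(n)
# m random PSD "kernel" matrices standing in for the K_j
Ks = [(lambda A: A @ A.T / n)(rng.standard_normal((n, n))) for _ in range(m)]
d = np.ones(m)                               # unit weights d_j

# Single-kernel case: condition (3) reads (K alpha)_i + lam*alpha_i + y_i = 0
# for least squares, i.e. alpha = -(K + lam*I)^{-1} y in closed form.
lam = 0.5
alpha = np.linalg.solve(Ks[0] + lam * np.eye(n), -y)
resid = Ks[0] @ alpha + lam * alpha + y
print(np.max(np.abs(resid)))                 # ~ 0: alpha satisfies Eq. (3)

# Multiple-kernel initialization of Section 2.3: beta_i = -y_i,
# lambda_0 = max_j sqrt(beta' K_j beta) / d_j, alpha_0 = beta / lambda_0.
beta = -y
lam0 = max(np.sqrt(beta @ K @ beta) / dj for K, dj in zip(Ks, d))
alpha0 = beta / lam0
vals = np.array([alpha0 @ K @ alpha0 for K in Ks])
print(vals.max())                            # = 1.0 = d_j^2: first active constraint
```

All m ellipsoid constraints hold at the start of the path, with the maximizing kernel exactly on its boundary.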
The path is thus piecewise smooth, with “kinks” at each point where the active set J changes. On each of the smooth sections, only those kernels with index belonging to J are used to define α and η, through Eq. (7). When all blocks have size one, or equivalently when all kernel matrices have rank one, the path is provably linear in 1/λ between kinks [4] and is thus easy to follow. However, when the kernel matrices have higher rank, this is not the case and additional numerical techniques are needed, which we now present. In the regularized formulation we present in Section 3.3, the optimal η is a function of α, and therefore we only have to follow the optimal α as a function of σ = log(λ).

3.2 Following a smooth path using numerical continuation techniques

In this section, we provide a brief review of path following, focusing on predictor-corrector methods [8]. We assume that the function α(σ) ∈ R^d is defined implicitly by J(α, σ) = 0, where J is C^∞ from R^{d+1} to R^d and σ is a real variable. Starting from a point (α_0, σ_0) such that J(α_0, σ_0) = 0, by the implicit function theorem the solution is well defined and C^∞ if the differential ∂J/∂α ∈ R^{d×d} is invertible. The derivative at σ_0 is then equal to

dα/dσ(σ_0) = −(∂J/∂α(α_0, σ_0))^{−1} ∂J/∂σ(α_0, σ_0).

In order to follow the curve α(σ), the most effective numerical method is the predictor-corrector method, which works as follows (see Figure 1):

• Predictor step: from (α_0, σ_0), predict where α(σ_0 + h) should be using the first-order expansion, i.e., take σ_1 = σ_0 + h, α_1 = α_0 + h · dα/dσ(σ_0) (note that h can be chosen positive or negative, depending on the direction we want to follow).

• Corrector steps: (α_1, σ_1) might not satisfy J(α_1, σ_1) = 0, i.e., the tangent prediction might (and generally will) leave the curve α(σ). In order to return to the curve, Newton’s method is used to solve the nonlinear system of equations (in α) J(α, σ_1) = 0, starting from α = α_1.
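The predictor-corrector scheme just described can be sketched on a curve with a known closed form, so the result is checkable. Here we follow the single-kernel least-squares dual α(σ) = −(K + e^σ I)^{−1} y, i.e., J(α, σ) = (K + e^σ I)α + y; this toy choice of J is ours, not the paper’s ∂F/∂α, and the fixed step size is only for illustration:

```python
import numpy as np

def follow_path(K, y, sigma0, sigma1, h=-0.1, tol=1e-10):
    """Predictor-corrector along the curve J(alpha, sigma) = 0 with
    J = (K + e^sigma I) alpha + y (single-kernel least-squares dual)."""
    n = len(y)
    I = np.eye(n)
    alpha = np.linalg.solve(K + np.exp(sigma0) * I, -y)   # start on the curve
    sigma = sigma0
    while sigma > sigma1 + 1e-12:
        lam = np.exp(sigma)
        # predictor: d(alpha)/d(sigma) = -(dJ/dalpha)^{-1} dJ/dsigma, dJ/dsigma = lam*alpha
        dalpha = np.linalg.solve(K + lam * I, -lam * alpha)
        step = max(h, sigma1 - sigma)                     # h < 0; do not overshoot
        sigma += step
        alpha = alpha + step * dalpha
        lam = np.exp(sigma)
        for _ in range(50):                               # corrector: Newton on J(., sigma) = 0
            Jval = (K + lam * I) @ alpha + y
            if np.max(np.abs(Jval)) < tol:
                break
            alpha -= np.linalg.solve(K + lam * I, Jval)
    return alpha

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 10))
K = A @ A.T / 10
y = rng.standard_normal(10)
alpha = follow_path(K, y, sigma0=1.0, sigma1=-1.0)
exact = np.linalg.solve(K + np.exp(-1.0) * np.eye(10), -y)
print(np.max(np.abs(alpha - exact)))   # ~ 0: agrees with the closed form
```

Because J is linear in α for fixed σ here, each corrector converges in a single Newton step; for the nonlinear J of Section 3.3 several steps may be needed.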
If h is small enough, then the Newton steps will converge quadratically to a solution α_2 of J(α, σ_1) = 0 [8]. Methods that do only one of the two steps are not as efficient: doing only predictor steps is not stable and the algorithm leaves the path very quickly, whereas doing only corrector steps (with increasing σ) is essentially equivalent to seeding the optimizer for a given σ with the solution for a previous σ, which is very inefficient in sections where the path is close to linear. Predictor-corrector methods approximate the path by a sequence of points on that path, which can be joined to provide a piecewise linear approximation.

At first glance, in order to follow the piecewise smooth path, all that is needed is to follow each piece and detect when the active set changes, i.e., when ∃j ∉ J, α^⊤K_jα = d_j² or ∃j ∈ J, η_j = 0. However, this approach can be tricky numerically [8]. We instead prefer to use a numerical regularization technique that will (a) make the entire path smooth, (b) make sure that the Newton steps are globally convergent, and (c) still enable us to use only a subset of the kernels to define the path locally.

3.3 Numerical regularization

We borrow a classical regularization method from interior point methods, in which a constrained problem is made unconstrained by using a convex log-barrier [9]. In the dual formulation, we solve the following problem (note that we now use a min-problem and we have divided by λ², which leaves the problem unchanged), where µ is a fixed small constant:

min_α F(α, λ)  where  F(α, λ) = Σ_i (1/λ²) ψ_i(λα_i) − (µ/(2λ)) Σ_j log(d_j² − α^⊤K_jα).    (8)

For fixed λ, α ↦ F(α, λ) is C^∞ and strictly convex on its domain {α : ∀j, α^⊤K_jα < d_j²}, and thus the global minimum is uniquely defined by ∂F/∂α = 0. If we define η_j(α) = µ/(d_j² − α^⊤K_jα), then we have ∂F/∂α_i = (1/λ) ψ′_i(λα_i) + (1/λ) Σ_j η_j(α)(K_jα)_i, and thus the optimality condition for the problem with the log-barrier is exactly equivalent to the one in Eq. (6).
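A minimal sketch (ours, not the paper’s implementation) of solving the barrier problem (8) for the least-squares loss by Newton’s method; the synthetic data, the pure Newton iteration with feasibility-only backtracking, and the parameter values are all assumptions made for the example:

```python
import numpy as np

def barrier_solve(Ks, y, lam, d, mu=1e-3, tol=1e-10, iters=100):
    """Newton's method on F(alpha, lam) of Eq. (8) for least squares, where
    psi_i(v) = v^2/2 + v*y_i. Returns the minimizer alpha and the implied
    kernel weights eta_j(alpha) = mu / (d_j^2 - alpha' K_j alpha)."""
    n = len(y)
    alpha = np.zeros(n)                     # strictly feasible start: slacks = d_j^2
    for _ in range(iters):
        slacks = np.array([d[j]**2 - alpha @ K @ alpha for j, K in enumerate(Ks)])
        eta = mu / slacks                   # eta_j(alpha), cf. Section 3.3
        g = alpha + y / lam                 # gradient of the data term
        H = np.eye(n)                       # its Hessian (identity for least squares)
        for K, e, s in zip(Ks, eta, slacks):
            Ka = K @ alpha
            g = g + (e / lam) * Ka          # barrier gradient: (1/lam) eta_j K_j alpha
            H = H + (e / lam) * K + (2 * mu / (lam * s**2)) * np.outer(Ka, Ka)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(H, -g)
        t = 1.0                             # backtrack only to stay strictly feasible
        while any(d[j]**2 - (alpha + t*step) @ K @ (alpha + t*step) <= 0
                  for j, K in enumerate(Ks)):
            t *= 0.5
        alpha = alpha + t * step
    return alpha, eta

rng = np.random.default_rng(3)
n, m = 12, 3
Ks = [(lambda A: A @ A.T / n)(rng.standard_normal((n, n))) for _ in range(m)]
y = 0.1 * rng.standard_normal(n)
alpha, eta = barrier_solve(Ks, y, lam=1.0, d=np.ones(m))
gap = [e * (1.0 - alpha @ K @ alpha) for K, e in zip(Ks, eta)]
print(gap)   # each entry equals mu = 1e-3 (up to round-off): the per-constraint duality gap
```

With µ small, the objective is nearly quadratic and the corrector converges in a handful of iterations; the recovered η are strictly positive, as the text notes.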
But now, instead of having η_j(d_j² − α^⊤K_jα) = 0 (which would define an optimal solution of the numerically unregularized problem), we have η_j(d_j² − α^⊤K_jα) = µ. Any dual-feasible variables η and α (not necessarily linked through a functional relationship) define primal-dual variables, and the quantity η_j(d_j² − α^⊤K_jα) is exactly the duality gap [9], i.e., the difference between the primal and dual objectives. Thus the parameter µ holds fixed the duality gap we are willing to pay. In simulations, we used µ = 10⁻³.

Figure 2: Examples of the variation of η (vertical axis) against −log(λ) (horizontal axis) along the regularization path, for linear regression (left) and logistic regression (right).

We can apply the techniques of Section 3.2 to follow the path for a fixed µ, for the variables α only, since η is now a function of α. The corrector steps are not only Newton steps for solving a system of nonlinear equations; they are also Newton-Raphson steps to minimize a strictly convex function, and are thus globally convergent [9].

3.4 Path following algorithm

Our path following algorithm is simply a succession of predictor-corrector steps, described in Section 3.2, with J(α, σ) = ∂F/∂α(α, σ) defined in Section 3.3, where σ = log(λ). The initialization presented in Section 2.3 is used. In Figure 2, we show simple examples of the values of the kernel weights η along the path for a toy problem with a small number of kernels, for kernel linear regression and kernel logistic regression. It is worth noting that the weights are not even approximately monotonic functions of λ; also, the behavior of those weights as λ approaches zero (or σ grows unbounded) is very specific: they become constant for linear regression and they grow to infinity for logistic regression. In Section 4, we show (a) why these behaviors occur and (b) what the consequences are regarding the performance of the multiple kernel learning problem.
In the remainder of this section, we review some important algorithmic issues³.

Step size selection  A major issue in path-following methods is the choice of the step h: if h is too big, the predictor will end up very far from the path and many Newton steps will have to be performed, while if h is too small, progress is too slow. We chose a simple adaptive scheme where at each predictor step we select the biggest h such that the predictor step stays in the domain |J(α, σ)| ⩽ ε. The precision parameter ε is itself adapted at each iteration: if the number of corrector steps at the previous iteration is greater than 8, then ε is decreased, whereas if this number is less than 4, it is increased.

Running time complexity  Between each pair of kinks, the path is smooth, so there is a bounded number of steps [8, 9]. Each of those steps has complexity O(n³ + mn²). We have observed empirically that the overall number of those steps is O(m), so the total empirical complexity is O(mn³ + m²n²). The complexity of solving the optimization problem in Eq. (5) using an interior point method for a single value of the regularization parameter is O(mn³) [2]; thus, if m ⩽ n, the empirical complexity of our algorithm, which yields the entire regularization path, is a constant times the complexity of obtaining only one point on the path using an interior point method. This makes intuitive sense, as both methods follow a path: by varying µ in the case of the interior point method, and by varying λ in our case. The difference is that every point along our path is meaningful, not just the destination.

³A Matlab implementation can be downloaded from www.cs.berkeley.edu/˜fbach .
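The ε-adaptation heuristic above can be written as a small helper; the halving/doubling factors are our assumption, as the text only states when ε is decreased or increased:

```python
def adapt_precision(eps, corrector_steps, shrink=0.5, grow=2.0):
    """Adapt the predictor precision eps from the previous corrector-step
    count: tighten it after a hard correction (> 8 Newton steps), loosen it
    after an easy one (< 4). The factors of 2 are illustrative."""
    if corrector_steps > 8:
        return eps * shrink
    if corrector_steps < 4:
        return eps * grow
    return eps

print(adapt_precision(1e-2, 10), adapt_precision(1e-2, 2), adapt_precision(1e-2, 5))
# 0.005 0.02 0.01
```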
Figure 3: Varying the weights (d_j): (left) classification on the Liver dataset, (right) regression on the Boston dataset; for each dataset, two different values of γ, (left) γ = 0 and (right) γ = 1. (Top) training set accuracy in bold, testing set accuracy dashed (error rate for classification, mean square error for regression, against −log(λ)); (bottom) number of kernels in the conic combination.

Efficient implementation  Because of our numerical regularization, none of the η_j’s are equal to zero (in fact, each η_j is lower bounded by µ/d_j²). We would thus have to use all kernels when computing the various derivatives. We circumvent this by truncating to zero those η_j that are close to their lower bound: we then use only the kernels that are numerically present in the combination.

Second-order predictor step  The implicit function theorem also allows computing higher-order derivatives of the path. By using a second-order approximation of the path, we can significantly reduce the number of predictor-corrector steps required.

4 Simulations

We have performed simulations on the Boston dataset (regression, 13 variables, 506 data points) and the Liver dataset (classification, 6 variables, 345 data points) from the UCI repository, with the following kernels: a linear kernel on all variables, linear kernels on single variables, polynomial kernels (with 4 different orders), Gaussian kernels on all variables (with 7 different kernel widths), Gaussian kernels on subsets of variables (also with 7 different kernel widths), and the identity matrix. This makes 110 kernels for the Boston dataset and 54 for the Liver dataset.
All kernel matrices were normalized to unit trace. Intuitively, the regularization weight d_j for kernel K_j should be an increasing function of the rank of K_j, i.e., we should penalize feature spaces of higher dimension more heavily. In order to explore the effect of d_j on performance, we set d_j as follows: we compute the number p_j of eigenvalues of K_j that are greater than 1/(2n) (remember that, because of the unit-trace constraint, these n eigenvalues sum to 1), and we take d_j = p_j^γ. If γ = 0, then all d_j’s are equal to one, and when γ increases, kernel matrices of high rank, such as the identity matrix, have relatively higher weights, noting that a higher weight implies a heavier regularization. In Figure 3, for the Boston and Liver datasets, we plot the number of kernels in the conic combination as well as the training and testing errors, for γ = 0 and γ = 1. We can make the following simple observations:

Number of kernels  The number of kernels present in the sparse conic combination is a non-monotonic function of the regularization parameter. When the blocks are one-dimensional, a situation equivalent to variable selection with a 1-norm penalty, this number is usually a nearly monotonic function of the regularization parameter [4].

Local minima  Validation set performance may exhibit local minima, and thus algorithms based on hill-climbing might perform poorly by being trapped in a local minimum, whereas our approach, which computes the entire path, avoids this.

Behavior for small λ  For all values of γ, as λ goes to zero, the number of kernels remains the same, the training error goes to zero, and the testing error remains constant. What changes with γ is the value of λ at which this behavior appears; in particular, for small values of γ, it happens before the testing error goes back up, leading to an unusual validation performance curve (a usual cross-validation curve would diverge to large values as the regularization parameter goes to zero).
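The weight rule d_j = p_j^γ can be sketched directly; the two example kernels below (the identity and a rank-one matrix) are our own illustrations of the flat-spectrum and low-rank extremes:

```python
import numpy as np

def kernel_weights(Ks, gamma):
    """Weights d_j = p_j^gamma, where p_j counts the eigenvalues of the
    trace-normalized kernel K_j that exceed 1/(2n)."""
    ds = []
    for K in Ks:
        Kn = K / np.trace(K)                 # unit-trace normalization
        n = Kn.shape[0]
        p = int(np.sum(np.linalg.eigvalsh(Kn) > 1.0 / (2 * n)))
        ds.append(p ** gamma)
    return ds

n = 8
identity = np.eye(n)                          # flat spectrum: all n eigenvalues = 1/n
v = np.arange(1, n + 1, dtype=float)
rank_one = np.outer(v, v)                     # single nonzero eigenvalue
print(kernel_weights([identity, rank_one], gamma=1.0))   # [8.0, 1.0]
```

With γ = 0 both weights collapse to 1, which is exactly the setting where the identity kernel is under-penalized.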
It is thus crucial to use weights d_j that grow with the “size” of the kernel, rather than constant weights. This behavior can be confirmed by a detailed analysis of the optimality conditions, which shows that if one of the kernels has a flat spectrum (such as the identity matrix), then, as λ goes to zero, α tends to a limit, while η tends to a limit for linear regression and grows to infinity as log(1/λ) for logistic regression. Also, once in that limiting regime, the training error goes to zero quickly, while the testing error remains constant.

5 Conclusion

We have presented an algorithm to compute entire regularization paths for the problem of multiple kernel learning. Empirical results using this algorithm have provided us with insight into the effect of regularization for such problems. In particular, we showed that the behavior of block 1-norm regularization differs notably from traditional (non-block) 1-norm regularization. As presented, the empirical results suggest that our algorithm scales quadratically in the number of kernels, but cubically in the number of data points. Indeed, the main computational burden (for both predictor and corrector steps) is the inversion of a Hessian. In order to make the computation of entire paths efficient for problems involving a large number of data points, we are currently investigating inverse Hessian updating, a technique commonly used in quasi-Newton methods [10].

Acknowledgments

We wish to acknowledge support from NSF grant 0412995, a grant from Intel Corporation, and a graduate fellowship to Francis Bach from Microsoft Research.

References

[1] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In ICML, 2004.
[2] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. JMLR, 5:27–72, 2004.
[3] C. S. Ong, A. J. Smola, and R. C. Williamson. Hyperkernels. In NIPS 15, 2003.
[4] B.
Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Ann. Stat., 32(2):407–499, 2004.
[5] T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu. The entire regularization path for the support vector machine. In NIPS 17, 2005.
[6] A. Corduneanu and T. Jaakkola. Continuation methods for mixing heterogeneous sources. In UAI, 2002.
[7] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, 2001.
[8] E. L. Allgower and K. Georg. Continuation and path following. Acta Numer., 2:1–64, 1993.
[9] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2003.
[10] J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects. Springer, 2003.
Saliency-Driven Image Acuity Modulation on a Reconfigurable Silicon Array of Spiking Neurons R. Jacob Vogelstein1, Udayan Mallik2, Eugenio Culurciello3, Gert Cauwenberghs2 and Ralph Etienne-Cummings2 1Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 2Dept. of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD 3Dept. of Electrical Engineering, Yale University, New Haven, CT {jvogelst,umallik1,gert,retienne}@jhu.edu, eugenio.culurciello@yale.edu Abstract We have constructed a system that uses an array of 9,600 spiking silicon neurons, a fast microcontroller, and digital memory, to implement a reconfigurable network of integrate-and-fire neurons. The system is designed for rapid prototyping of spiking neural networks that require high-throughput communication with external address-event hardware. Arbitrary network topologies can be implemented by selectively routing address-events to specific internal or external targets according to a memory-based projective field mapping. The utility and versatility of the system is demonstrated by configuring it as a three-stage network that accepts input from an address-event imager, detects salient regions of the image, and performs spatial acuity modulation around a high-resolution fovea that is centered on the location of highest salience. 1 Introduction The goal of neuromorphic engineering is to design large-scale sensory information processing systems that emulate the brain. In many biological neural systems, the information received by a sensory organ passes through multiple stages of neural computations before a judgment is made. A convenient way to study this functionality is to design separate chips for each stage of processing and connect them with a fast data bus. 
However, it is not always advisable to fabricate a new chip to test a hypothesis regarding a particular neural computation, and software models of spiking neural networks cannot typically execute or communicate with external devices in real-time. Therefore, we have designed specialized hardware that implements a reconfigurable array of spiking neurons for rapid prototyping of large-scale neural networks.

Neuromorphic sensors can generate up to millions of spikes per second (see, e.g., [1]), so a proper communication protocol is required for multi-chip systems. “Address-Event Representation” (AER) was developed for this purpose over a decade ago and has since become the common “language” of neuromorphic chips [2–7]. The central idea of AER is to use time-multiplexing to emulate extensive connectivity between neurons. Although it was originally proposed to implement a one-to-one connection topology, AER has been extended to allow convergent and divergent connectivity [5, 8, 9], and has even been used for functions in addition to inter-chip communication [10–12].

Figure 1: (a) Block diagram of the IFAT system. Incoming and outgoing address-events are communicated through the digital I/O port (DIO), with handshaking executed by the microcontroller (MCU). The digital-to-analog converter (DAC) is controlled by the MCU and provides the synaptic driving potential (‘E’ in Figure 2) to the integrate-and-fire neurons (I&F), according to the synapse parameters stored in memory (RAM). Modified from [18]. (b) Printed circuit board integrating all components of the IFAT system.
Within our hardware array, all inter-neuron communication is performed using AER; the absence of hardwired connections is the feature that allows for reconfigurability. A few examples of AER-based reconfigurable neural array transceivers can be found in the literature [8, 9], but our Integrate-and-Fire Array Transceiver system (IFAT) differs in its size and flexibility. With four custom aVLSI chips [13] operating in parallel and 128 MB of digital RAM, the system contains 9,600 neurons and up to 4,194,304 synapses. Because it was designed from the start for generality and biological realism, every silicon neuron implements a discrete-time version of the classical biological “membrane equation” [14], a simple conductance-like model of neural function that allows for emulating an unlimited number of synapse types by dynamically varying two parameters [13]. By using a memory-based projective field mapping to route incoming address-events to different target neurons, the system can implement arbitrarily complex network topologies, limited only by the capacity of the RAM. To demonstrate the functionality of the IFAT, we designed a three-stage feed-forward model of salience-based attention and implemented it entirely on the reconfigurable array. The model is based on a biologically-plausible architecture that has been used to explain human visual search strategies [15, 16]. Unlike previous hardware implementations (e.g. [17]), we use a multi-chip system and perform all computations with spiking neurons. The network accepts spikes from an address-event imager as inputs, computes spatial derivatives of light intensity as a measure of local information content, identifies regions of high salience, and foveates a location of interest by reducing the resolution in the surrounding areas. These capabilities are useful for smart, low-bandwidth, wide-angle surveillance networks.
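The memory-based projective field mapping can be sketched in software as a lookup table that expands each incoming address-event into a list of synaptic deliveries. The table layout, field names, and addresses below are invented for illustration and do not reflect the actual 32-bit RAM format:

```python
# Software sketch of memory-based event routing: each incoming address-event
# indexes a table of synapse records (target neuron, driving potential E,
# weight q), which are replayed until a reserved stop code is reached.
SENTINEL = None   # stands in for the reserved code word in the data field

routing_table = {
    # (chip_id, presynaptic address) -> list of synapse records
    ("OR", 7): [
        {"target": 12, "E": 5.0, "q": 3},   # excitatory synapse (high E)
        {"target": 13, "E": 0.0, "q": 1},   # inhibitory synapse (low E)
        SENTINEL,
    ],
}

def route_event(cid, addr):
    """Expand one incoming spike into its list of postsynaptic deliveries."""
    deliveries = []
    for record in routing_table.get((cid, addr), [SENTINEL]):
        if record is SENTINEL:              # stop code terminates the fan-out
            break
        deliveries.append((record["target"], record["E"], record["q"]))
    return deliveries

print(route_event("OR", 7))   # two deliveries for presynaptic neuron 7
```

Recurrent connectivity corresponds to table entries whose `chip_id` points back at the array itself, exactly as the CID mechanism described below does in hardware.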
2 Hardware

From the perspective of an external device, the IFAT system (Figure 1) operates as an AER transceiver, both receiving and transmitting spikes over a bidirectional address-event (AE) bus. Internally, incoming events are routed to any number of integrate-and-fire (I&F) neurons according to a look-up table stored in RAM. When the inputs are sufficient to cause a neuron to spike, the output is either directed to other internal neurons (for recurrent networks) or to an external device via the AE bus. The following two sections describe the system and the silicon neurons in more detail.

Figure 2: Silicon neuron. The “general-purpose” synapse is shown inside the dashed box [13], with the event generation circuitry shown at right [9].

2.1 The IFAT system

A block diagram of the IFAT system and its physical implementation are shown in Figure 1. The primary components are a 100 MHz FPGA (Xilinx XC2S100PQ208), 128 MB of non-volatile SRAM (TI bq4017), a high-speed DAC (TI TLC7524CN), a 68-pin digital I/O interface (DIO), and 4 custom aVLSI chips that implement a total of 9,600 I&F neurons. The FPGA controls access to both an internal and an external AE bus, and communicates address-events between the I&F neurons and external devices in bit-parallel using a four-phase asynchronous handshaking scheme. The 128 MB of RAM is arranged as a 4 MB × 32 array. Each 32-bit entry contains a complete description of a single synapse, specifying the postsynaptic target, the synaptic equilibrium potential, and the synaptic weight. The weight field can be further subdivided into three parts, corresponding to three ways in which biological neurons can control the synaptic weight (w) [14, p.
91]:

w = n p q,    (1)

where n is the number of quantal neurotransmitter sites, p is the probability of neurotransmitter release per site, and q is a measure of the postsynaptic effect of the neurotransmitter. In the IFAT system, the FPGA can implement p with a simple pseudo-random number algorithm, it can control n by sending multiple outgoing spikes for each incoming spike, and it sends the value of q to the I&F neuron chips (see Section 2.2). Instead of hardwired connections between neurons, the IFAT implements “virtual connections” by serially routing incoming events to their appropriate targets at a rate of up to 1,000,000 events per second. When the IFAT receives an AE from an external device, the FPGA observes the address, appends some “chip identifier” (CID) bits, and stores the resulting binary number as a base address. It then adds additional offset bits to form a complete 22-bit RAM address, which it uses to look up a set of synaptic parameters. After configuring q and instructing the DAC to produce the analog synaptic equilibrium potential, the FPGA activates a target neuron by placing its address on the internal AE bus and initiating asynchronous handshaking with the appropriate I&F chip. It then increments the offset by one and repeats the process for the next synapse, stopping when it sees a reserved code word in the data field. Recurrent connections can be implemented simply by appending a different CID to events generated by the on-board I&F neurons, while connections to external devices are achieved by specifying the appropriate CID for the postsynaptic target. With this infrastructure, arbitrary patterns of connectivity can be implemented, limited only by the memory’s capacity.

Figure 3: (a) Data collected from one neuron during operation of the chip. The lower trace illustrates the membrane potential (Vm) of a single neuron in the array as a series of events are sent at times marked at the bottom of the figure. The synaptic equilibrium potential (E) and synaptic weight (W) are drawn in the top two traces. Figure from [13]. (b) Integrate-and-fire chip micrograph. The linear-feedback shift register (LFSR) implements a pseudo-random element for resolving arbitration conflicts. Modified from [13].

2.2 Integrate-and-Fire Neurons

As described above, the IFAT system includes four custom aVLSI chips [13] that contain a total of 9,600 integrate-and-fire neurons. All the neurons are identical, and each implements a simple conductance-like model of a single “general purpose” synapse using a switched-capacitor architecture (Figure 2). The synapses have two internal parameters that can be dynamically modulated for each incoming event: the synaptic equilibrium potential (E) and the synaptic weight (W0–W2). Values for both parameters are stored in RAM; the 3-bit q is used by the FPGA to selectively enable binary-sized capacitors C0–C2, while E is converted to an analog value by the DAC. By varying these parameters, it is possible to emulate a large number of different kinds of synapses impinging on the same cell. An example of one neuron in operation is shown in Figure 3a. A micrograph of the integrate-and-fire chip is shown in Figure 3b. Incoming address-events are decoded and sent to the appropriate neuron in the 60 × 40 array. When a neuron’s membrane potential exceeds an externally-provided threshold voltage, it requests service from the peripheral arbitration circuitry. After the request is acknowledged, the neuron is reset and its address is placed on the IFAT system’s internal AER bus.
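A toy software model (ours, not the chip’s circuit equations) tying together the weight decomposition w = npq and the conductance-like membrane update, in which each delivered event moves the membrane a fixed fraction of the way toward the driving potential E; the fraction, threshold, and reset values are illustrative:

```python
import random

def step(vm, E, w, v_thresh=1.0, v_reset=0.0):
    """Discrete-time conductance-like update: one delivered event moves the
    membrane a fraction w of the way toward the driving potential E
    (charge sharing between the enabled weight capacitors and C_m).
    Returns (new membrane voltage, fired?)."""
    vm = vm + w * (E - vm)
    return (v_reset, True) if vm > v_thresh else (vm, False)

def deliver(vm, E, q, n, p, rng):
    """Weight decomposition w = n*p*q (Eq. 1): n candidate releases, each
    transmitted with probability p, each depositing postsynaptic effect q."""
    spikes = 0
    for _ in range(n):
        if rng.random() < p:
            vm, fired = step(vm, E, q)
            spikes += fired
    return vm, spikes

# A deterministic excitatory train (p = 1): E > vm drives vm toward threshold.
vm, total = 0.0, 0
rng = random.Random(0)
for _ in range(12):
    vm, s = deliver(vm, E=1.5, q=0.2, n=1, p=1.0, rng=rng)
    total += s
print(total)   # 2 spikes: vm crosses threshold every 5 events, then resets
```

An inhibitory event (E below vm) pulls the membrane down instead, which is how the same synapse circuit emulates both synapse types.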
Conflicts between simultaneously active neurons are resolved by a novel arbitration scheme that includes a pseudo-random element on-chip [19].

3 Experimental Design and Results

To demonstrate the functionality of the IFAT system, we designed and implemented a three-stage network for salience-based foveation [16] of an address-event imager. This work is motivated by the fact that wide-angle image sensors in a monitoring sensor network extract a large quantity of data from the environment, most of which is irrelevant. Because bandwidth is limited and data transmission is energy-intensive, it is desirable to reduce the amount of information sent over the communication channel. Therefore, if a particular region of the visual field can be identified as having high salience, that part of the image can be selectively transmitted with high resolution and the surrounding scene can be compressed.

Figure 4: (a) Test image. (b) Output from the Octopus Retina.

The input to the first stage of the network is a stream of address-events generated by an asynchronous imager called the “Octopus Retina” (OR) [1, 20]. The OR contains a 60 × 80 array of light-sensitive “neurons” that each represent local light intensity as a spike rate. In other words, pixels that receive a lot of light spike more frequently than those that receive a little light. For these experiments, we collected 100,000 events from the OR over the course of about one second while it was viewing a grayscale picture mounted on a white background. The test image and OR output are shown in Figure 4.

To identify candidate regions of salience, the first stage of the network is configured to compute local changes in contrast. Every 2 × 2 block of pixels in the OR corresponds to four neurons on the IFAT that respond to light-to-dark or dark-to-light transitions in the rightward or downward direction (Figure 5a).
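A toy version (ours) of this first-stage computation, operating on mean firing rates rather than on the actual excitatory/inhibitory spike trains: four rectified channels respond to dark-to-light and light-to-dark transitions in the rightward and downward directions:

```python
import numpy as np

def contrast_responses(rates):
    """For each pixel, four rectified responses to intensity transitions in
    the rightward and downward directions, computed from OR firing rates."""
    right = np.diff(rates, axis=1)          # rate(x+1, y) - rate(x, y)
    down = np.diff(rates, axis=0)           # rate(x, y+1) - rate(x, y)
    return {
        "dark_to_light_right": np.maximum(right, 0.0),
        "light_to_dark_right": np.maximum(-right, 0.0),
        "dark_to_light_down": np.maximum(down, 0.0),
        "light_to_dark_down": np.maximum(-down, 0.0),
    }

rates = np.zeros((4, 4))
rates[:, 2:] = 1.0                          # dark left half, bright right half
resp = contrast_responses(rates)
print(resp["dark_to_light_right"].sum())    # 4.0: only this channel responds
```

A uniform-brightness region produces zero output in all four channels, mirroring the cancellation of excitatory and inhibitory inputs described in the text.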
Each IFAT cell computes local changes in contrast due to a receptive field (RF) that spans four OR pixels in either the horizontal or vertical dimension, with two of its inputs being excitatory and the other two being inhibitory. When a given IFAT cell’s RF falls on a region of visual space with uniform brightness, all of the OR pixels projecting to that cell will have the same mean firing rate, so the excitatory and inhibitory inputs will cancel. However, if a cell’s excitatory inputs are exposed to high light intensity and its inhibitory inputs are exposed to low light intensity, the cell will receive more excitatory input than inhibitory input and will generate an output spike train with spike frequency proportional to the contrast. The output from the 4,800 IFAT neurons in the first stage of the network in response to the OR input is shown in Figure 5b.

Figure 5: (a) Stage 1 network for computing local changes in contrast. Squares in the center represent OR pixels. Circles represent IFAT neurons. Excitatory synapses are represented by triangles, and inhibitory synapses as circles. Only four IFAT neurons with non-overlapping receptive fields are shown for clarity. (b) Output of stage 1, as implemented on the IFAT, with Figure 4b from the OR as input.

The second stage of processing is designed to pool inputs from neighboring contrast-sensitive cells to identify locations of high salience. Our underlying assumption is that regions of interest will contain more detail than their surroundings, producing a large output from the first stage. Blocks of 8 × 8 IFAT cells from the first stage project to single cells in the second stage, and each 8 × 8 region overlaps the next by 4 neurons (Figure 6a). Therefore, every IFAT cell in the second stage has an 8 × 8 RF. Although it is not necessary to normalize the firing rates of the first and second stages, because every second-stage IFAT cell receives 64 inputs, we reduce the strength of the synaptic connections between the two stages to conserve bandwidth. The output from the 300 IFAT neurons in the second stage of the network in response to the output from the first-stage IFAT neurons is shown in Figure 6b.

The final stage of processing modulates the spatial acuity of the original image to reduce the resolution outside the region of highest salience. This is achieved by a foveation network that pools inputs from neighboring pixels using overlapping Gaussian kernels (Figure 7a) [18]. The shape of the kernel functions is implemented by varying the synaptic weight and synaptic equilibrium potential between OR neurons and IFAT cells in the third stage: within every pooled block, the strongest connections originate from the center pixels and the weakest connections come from the outermost pixels. Instead of physically moving the OR imager to center the fovea on the region of interest, we relocate the fovea by performing simple manipulations in the address domain. First, the address space of incoming events is enlarged beyond the range provided by the OR, and the fovea is centered within this virtual visual field (Figure 7a). Then, the row and column address of the second-stage IFAT neuron with the largest output is subtracted from the address of the center of the fovea, and the result is stored as a constant offset. This offset is then added to the addresses of all incoming events from the OR, shifting the OR image in the virtual visual field so that the fovea is positioned over the region of highest salience. The output from the 1,650 IFAT neurons in the third-stage network is shown in Figure 7b. With a 32 × 32 pixel high-resolution fovea, the network allows for a 66% reduction in the number of address-events required to reconstruct the image.

4 Conclusion

We have demonstrated a multi-chip neuromorphic system for performing saliency-based spatial acuity modulation.
An asynchronous imager provides the input and communicates with a reconfigurable array of spiking silicon neurons using address-events. The resulting output is useful for efficient spatial and temporal bandwidth allocation in low-power vision sensors for wide-angle video surveillance. Future work will concentrate on extending the functionality of the multi-chip system to perform stereo processing on address-event data from two imagers.
Figure 6: (a) Stage 2 network for computing local changes in contrast. Blocks of 8 × 8 IFAT neurons from stage 1 (shown as regions alternately shaded white and gray) project to single IFAT neurons in stage 2 (not shown). Blocks are shown as non-overlapping for clarity. (b) Output of stage 2, as implemented on the IFAT, with Figure 5b from stage 1 as input.
Acknowledgments
This work was partially funded by NSF Awards #0120369, #9896362, and IIS-0209289; ONR Award #N00014-99-1-0612; and a DARPA/ONR MURI #N00014-95-1-0409. Additionally, RJV is supported by an NSF Graduate Research Fellowship.
References
[1] E. Culurciello, R. Etienne-Cummings, and K. A. Boahen, “A biomorphic digital image sensor,” IEEE J. Solid-State Circuits, vol. 38, no. 2, 2003.
[2] M. Sivilotti, Wiring considerations in analog VLSI systems, with application to field-programmable networks. PhD thesis, California Institute of Technology, Pasadena, CA, 1991.
[3] M. Mahowald, An analog VLSI system for stereoscopic vision. Boston, MA: Kluwer Academic Publishers, 1994.
[4] J. Lazzaro, J. Wawrzynek, M. Mahowald, M. Sivilotti, and D. Gillespie, “Silicon auditory processors as computer peripherals,” IEEE Trans. Neural Networks, vol. 4, no. 3, pp. 523–528, 1993.
[5] K. A. Boahen, “Point-to-point connectivity between neuromorphic chips using address events,” IEEE Trans. Circuits & Systems II, vol. 47, no. 5, pp. 416–434, 2000.
[6] C. M. Higgins and C. Koch, “Multi-chip neuromorphic motion processing,” in Proc. 20th Anniversary Conference on Advanced Research in VLSI (D.
S. Wills and S. P. DeWeerth, eds.), (Los Alamitos, CA), pp. 309–323, IEEE Computer Society, 1999.
[7] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, “Orientation-selective aVLSI spiking neurons,” in Advances in Neural Information Processing Systems 14 (T. G. Dietterich, S. Becker, and Z. Ghahramani, eds.), Cambridge, MA: MIT Press, 2002.
[8] G. Indiveri, A. M. Whatley, and J. Kramer, “A reconfigurable neuromorphic VLSI multi-chip system applied to visual motion computation,” in Proc. MicroNeuro’99, Apr. 1999.
[9] D. H. Goldberg, G. Cauwenberghs, and A. G. Andreou, “Probabilistic synaptic weighting in a reconfigurable network of VLSI integrate-and-fire neurons,” Neural Networks, vol. 14, no. 6-7, pp. 781–793, 2001.
Figure 7: (a) Stage 3 foveation network. The 32 × 32 pixel high-resolution fovea (center) is surrounded by lower-resolution areas where 2 × 2, 4 × 4, and 8 × 8 blocks of OR neurons (shown as non-overlapping for clarity) project to single IFAT cells. The address space for inputs to the foveation network is 128 × 128 [18]. (b) Output of stage 3, as implemented on the IFAT, with the fovea centered on the location with the maximum firing rate in Figure 6b, from stage 2. Peripheral pixels that receive no input are not shown.
[10] S. R. Deiss, R. J. Douglas, and A. M. Whatley, “A pulse-coded communications infrastructure for neuromorphic systems,” in Pulsed Neural Networks (W. Maass and C. M. Bishop, eds.), pp. 157–178, Cambridge, MA: MIT Press, 1999.
[11] M. Mahowald and R. Douglas, “A silicon neuron,” Nature, vol. 354, pp. 515–518, 1991.
[12] R. J. Vogelstein, F. Tenore, R. Philipp, M. S. Adlerstein, D. H. Goldberg, and G. Cauwenberghs, “Spike timing-dependent plasticity in the address domain,” in Advances in Neural Information Processing Systems 15 (S. Becker, S. Thrun, and K. Obermayer, eds.), Cambridge, MA: MIT Press, 2003.
[13] R. J. Vogelstein, U. Mallik, and G.
Cauwenberghs, “Silicon spike-based synaptic array and address-event transceiver,” in Proc. ISCAS’04, vol. 5, (Vancouver, BC), pp. 385–388, 2004.
[14] C. Koch, Biophysics of Computation: Information Processing in Single Neurons. New York, NY: Oxford University Press, 1999.
[15] C. Koch and S. Ullman, “Shifts in selective visual attention: towards the underlying neural circuitry,” Human Neurobiology, vol. 4, pp. 219–227, 1985.
[16] L. Itti, E. Niebur, and C. Koch, “A model of saliency-based fast visual attention for rapid scene analysis,” IEEE Trans. Pattern Analysis & Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
[17] T. Horiuchi, T. Morris, C. Koch, and S. P. DeWeerth, “Analog VLSI circuits for attention-based, visual tracking,” in Advances in Neural Information Processing Systems 9, pp. 706–712, Cambridge, MA: MIT Press, 1997.
[18] R. J. Vogelstein, U. Mallik, E. Culurciello, G. Cauwenberghs, and R. Etienne-Cummings, “Spatial acuity modulation of an address-event imager,” in ICECS’04, 2004.
[19] R. J. Vogelstein, U. Mallik, and G. Cauwenberghs, “Reconfigurable silicon array of spiking neurons,” IEEE Trans. Neural Networks, 2005. (Submitted).
[20] E. Culurciello, R. Etienne-Cummings, and K. Boahen, “Second generation of high dynamic range, arbitrated digital imager,” in Proc. ISCAS’04, vol. 4, (Vancouver, BC), pp. 828–831, 2004.
| 2004 | 158 | 2,572 |
Common-Frame Model for Object Recognition
Pierre Moreels, Pietro Perona
California Institute of Technology, Pasadena, CA 91125, USA
pmoreels,perona@vision.caltech.edu
Abstract
A generative probabilistic model for objects in images is presented. An object consists of a constellation of features. Feature appearance and pose are modeled probabilistically. Scene images are generated by drawing a set of objects from a given database, with random clutter sprinkled on the remaining image surface. Occlusion is allowed. We study the case where features from the same object share a common reference frame. Moreover, parameters for shape and appearance densities are shared across features. This is to be contrasted with previous work on probabilistic ‘constellation’ models, where features depend on each other and each feature and model have different pose and appearance statistics [1, 2]. These two differences allow us to build models containing hundreds of features, as well as to train each model from a single example. Our model may also be thought of as a probabilistic revisitation of Lowe’s model [3, 4]. We propose an efficient entropy-minimization inference algorithm that constructs the best interpretation of a scene as a collection of objects and clutter. We test our ideas with experiments on two image databases. We compare with Lowe’s algorithm and demonstrate better performance, in particular in the presence of large amounts of background clutter.
1 Introduction
There is broad agreement in the machine vision literature that objects and object categories should be represented as collections of features or parts with distinctive appearance and mutual position [1, 2, 4, 5, 6, 7, 8, 9]. A number of ideas for efficient detection algorithms (find instances of a given object category, e.g.
faces) have been proposed by virtually all the cited authors; far fewer exist for recognition (list all objects and their pose in a given image), where matching would ideally take logarithmic time with respect to the number of available models [3, 4]. Learning the parameters characterizing feature shape or appearance is still a difficult area, with most authors opting for heavy human intervention (typically segmentation and alignment of the training examples, although [1, 2, 3] train without supervision) and very large training sets for object categories (typically on the order of 10^3–10^4 examples, although [10] recently demonstrated learning categories from 1–10 examples). This work is based on two complementary efforts: the deterministic recognition system proposed by Lowe [3, 4], and the probabilistic constellation models by Perona and collaborators [1, 2]. The first line of work has three attractive characteristics: objects are represented with hundreds of features, thus increasing robustness; models are learned from a single training example; last but not least, recognition is efficient with databases of hundreds of objects. The drawback of Lowe’s approach is that both modeling decisions and algorithms rely on heuristics, whose design and performance may be far from optimal in some circumstances.
Figure 1: Diagram of our recognition model showing the database, the test image, and two competing hypotheses. To avoid a cluttered diagram, only one partial hypothesis is displayed for each hypothesis. The predicted positions of models according to the hypotheses are overlaid on the test image.
Conversely, the second line of work is based on principled probabilistic object models which yield principled and, in some respects, optimal algorithms for learning and recognition/detection. Unfortunately, the large number of parameters employed in each model limits in practice the number of features being used and requires many training examples.
By recasting Lowe’s model and algorithms in probabilistic terms, we hope to combine the advantages of both methods. Besides, in this paper we choose to focus on individual objects as in [3, 4] rather than on categories as in [1, 2]. In [11] we presented a model aimed at the same problem of individual object recognition. A major difference with the work described here lies in the probabilistic treatment of hypotheses, which allows us here to use hypothesis likelihood directly as a guide for the search, instead of the arbitrary admissible heuristic required by A*.
2 Probabilistic framework and notations
Each model object is represented as a collection of features. Features are informative parts extracted from images by an interest point operator. Each model is the set of features extracted from one training image of a given object, although this could be generalized to features from many images of the same object. Models are indexed by k and denoted by m_k, while indices i and j are used respectively for features extracted from the test image and from model images: f_i denotes the i-th test feature, while f^k_j denotes the j-th feature from the k-th model. The features extracted from model images (training set) form the database. A feature detected in a test image can be a consequence of the presence of a model object in the image, in which case it should be associated to a feature from the database. In the alternative, this feature is attributed to a clutter (or background) detection. The geometric information associated to each feature contains position information (x and y coordinates, denoted by the vector x), orientation (denoted by θ) and scale (denoted by σ). It is denoted by X_i = (x_i, θ_i, σ_i) for test feature f_i and X^k_j = (x^k_j, θ^k_j, σ^k_j) for model feature f^k_j. This geometric information is measured relative to the standard reference frame of the image in which the feature has been detected.
All features extracted from the same image share the same reference frame. The appearance information associated to a feature is a descriptor characterizing the local image appearance near this feature. The measured appearance information is denoted by A_i for test feature f_i and A^k_j for model feature f^k_j. In our experiments, features are detected at multiple scales at the extrema of difference-of-Gaussians filtered versions of the image [4, 12]. The SIFT descriptor [4] is then used to characterize the local texture about keypoints. A partial hypothesis h explains the observations made in a fraction of the test image. It combines a model image m_h and a corresponding set of pose parameters X_h. X_h encodes position, rotation, and scale (this can easily be extended to affine transformations). We assume independence between partial hypotheses. This requires in particular independence between models. Although reasonable, this approximation is not always true (e.g. a keyboard is likely to be detected close to a computer screen). This allows us to search in parallel for multiple objects in a test image. A hypothesis H is the combination of several partial hypotheses, such that it explains completely the observations made in the test image. A special notation H_0 or h_0 denotes any (partial) hypothesis that states that no model object is present in a given fraction of the test image, and that features that could have been detected there are due to clutter. Our objective is to find which model objects are present in the test scene, given the observations made in the test scene and the information that is present in the database. In probabilistic terms, we look for hypotheses H for which the likelihood ratio LR(H) = P(H | {f_i}, {f^k_j}) / P(H_0 | {f_i}, {f^k_j}) > 1. This ratio characterizes how well the models and poses specified by H explain the observations, as opposed to them being generated by clutter.
Using Bayes’ rule and after simplifications,

LR(H) = P(H | {f_i}, {f^k_j}) / P(H_0 | {f_i}, {f^k_j}) = [P({f_i} | {f^k_j}, H) · P(H)] / [P({f_i} | {f^k_j}, H_0) · P(H_0)]   (1)

where we used P({f^k_j} | H) = P({f^k_j}), since the database observations do not depend on the current hypothesis. A key assumption of this work is that once the pose parameters of the objects (and thus their reference frames) are known, the geometric configurations and appearances of the test features are independent from each other. We also assume independence between features associated to models and features associated to clutter detections, as well as independence between separate clutter detections. Therefore, P({f_i} | {f^k_j}, H) = ∏_i P(f_i | {f^k_j}, H). These assumptions of independence are also made in [13], and are underlying in [4]. Assignment vectors v represent matches between features from the test scene, and model features or clutter. The dimension of each assignment vector is the number of test features n_test. Its i-th component v(i) = (k, j) denotes that the test feature f_i is matched to f_{v(i)} = f^k_j, the j-th feature from model m_k. v(i) = (0, 0) denotes the case where f_i is attributed to clutter. The set V_H of assignment vectors compatible with a hypothesis H are those that assign test features only to models present in H (and to clutter). In particular, the only assignment vector compatible with h_0 is v_0 such that ∀i, v_0(i) = (0, 0). We obtain

LR(H) = [P(H) / P(H_0)] · Σ_{v ∈ V_H} ∏_{h ∈ H} P(v | {f^k_j}, m_h, X_h) · ∏_{i | f_i ∈ h} [P(f_i | f_{v(i)}, m_h, X_h) / P(f_i | h_0)]   (2)

P(H) is a prior on hypotheses; we assume it is constant. The term P(v | {f^k_j}, m_h, X_h) is discussed in Section 3.1; we now explore the other terms.
• P(f_i | f_{v(i)}, m_h, X_h): f_i and f_{v(i)} are believed to be one and the same feature. Differences measured between them are noise due to the imaging system as well as distortions caused by viewpoint or lighting condition changes. This noise probability p_n encodes differences in appearance of the descriptors, but also in geometry, i.e.
position, scale, and orientation. Assuming independence between appearance information and geometry information,

p_n(f_i | f^k_j, m_h, X_h) = p_{n,A}(A_i | A_{v(i)}, m_h, X_h) · p_{n,X}(X_i | X_{v(i)}, m_h, X_h)   (3)

Figure 2: Snapshots from the iterative matching process. Two competing hypotheses are displayed (top and bottom rows). a) Each assignment vector contains one assignment, suggesting a transformation (red box). b) End of the iterative process. The correct hypothesis is supported by numerous matches and high belief, while the wrong hypothesis has only weak support from few matches and low belief.
The error in geometry is measured by comparing the values observed in the test image with the predicted values that would be observed if the model features were transformed according to the parameters X_h. Let us denote by X_h(x_{v(i)}), X_h(σ_{v(i)}), X_h(θ_{v(i)}) these predicted values; the geometry part of the noise probability can then be decomposed into

p_{n,X}(X_i | X_{v(i)}, h) = p_{n,x}(x_i, X_h(x_{v(i)})) · p_{n,σ}(σ_i, X_h(σ_{v(i)})) · p_{n,θ}(θ_i, X_h(θ_{v(i)}))   (4)

• P(f_i | h_0) is a density on the appearance and position of clutter detections, denoted by p_bg(f_i). We can decompose this density as well into an appearance term and a geometry term:

p_bg(f_i) = p_{bg,A}(A_i) · p_{bg,X}(X_i) = p_{bg,A}(A_i) · p_{bg,x}(x_i) · p_{bg,σ}(σ_i) · p_{bg,θ}(θ_i)   (5)

p_{bg,A}, p_{bg,x}, p_{bg,σ}, and p_{bg,θ} are densities that characterize, for clutter detections, appearance, position, scale and rotation respectively. For lack of space, and since it is not the main focus of this paper, we will not go into the details of how the “foreground density” p_n and the “background density” p_bg are learned. The main assumption is that these densities are shared across features, instead of having one set of parameters for each feature as in [1, 2]. This results in an important decrease in the number of parameters to be learned, at a slight cost in model expressiveness.
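Under these independence assumptions, the likelihood ratio of Eq. (2) reduces, for a fixed assignment vector, to a product of per-feature ratios p_n/p_bg, which is most stable to compute in log space. A minimal sketch, where the density functions `p_n` and `p_bg` are caller-supplied placeholders rather than the paper's learned densities:

```python
import math

def log_likelihood_ratio(matches, p_n, p_bg, log_prior_ratio=0.0):
    """Log of the hypothesis likelihood ratio: prior term plus one
    log(p_n / p_bg) term per test feature explained by a model. Features
    assigned to clutter contribute nothing, because the same clutter
    density appears in the null hypothesis h0 and cancels."""
    lr = log_prior_ratio
    for f_test, f_model in matches:
        if f_model is not None:  # feature explained by a model, not clutter
            lr += math.log(p_n(f_test, f_model)) - math.log(p_bg(f_test))
    return lr
```

A positive result corresponds to LR(H) > 1, i.e. the hypothesis explains the matched features better than clutter does.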
3 Search for the best interpretation of the test image
The building block of the recognition process is a question, comparing a feature from a database model with a feature of the test image. A question selects a feature from the database, and tries to identify if and where this feature appears in the test image.
3.1 Assignment vectors compatible with hypotheses
For a given hypothesis H, the set of possible assignment vectors V_H is too large for explicit exploration. Indeed, each potential match can either be accepted or rejected, which creates a combinatorial explosion. Hence, we approximate the summation in (2) by its largest term. In particular, each assignment vector v and each model referenced in v imply a set of pose parameters X_v (extracted e.g. with least-squares fitting). Therefore, the term P(v | {f^k_j}, m_h, X_h) from (2) will be significant only when X_v ≈ X_h, i.e. when the pose implied by the assignment vector agrees with the pose specified by the partial hypothesis. We consider only the assignment vectors v for which X_v ≈ X_h. P(v_H | {f^k_j}, h) is assumed to be close to 1. Eq. (2) becomes

LR(H) ≈ [P(H) / P(H_0)] · ∏_{h ∈ H} ∏_{i | f_i ∈ h} [P(f_i | f_{v_h(i)}, m_h, X_h) / P(f_i | h_0)]   (6)

Our recognition system proceeds by asking questions sequentially and adding matches to assignment vectors. It is therefore natural to define, for a given hypothesis H, its corresponding assignment vector v_H, and t ≤ n_test, the belief in v_H by

B_0(v_H) = 1,   B_t(v_H) = [p_n(f_t | f_{v(t)}, m_{h_t}, X_{h_t}) / p_bg(f_t | h_0)] · B_{t−1}(v_H)   (7)

The geometric part of the belief (cf. (3)–(5)) characterizes how close the pose X_v implied by the assignments is to the pose X_h specified by the hypothesis. The appearance component of the belief characterizes the quality of the appearance match for the pairs (f_i, f_{v(i)}).
3.2 Entropy-based optimization
Our goal is to quickly find the hypothesis that best explains the observations, i.e. the hypothesis (models + poses) that has the highest likelihood ratio.
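The belief recursion of Eq. (7) costs a single multiplication per examined question; a sketch, assuming the scalar density values have already been evaluated for the feature in question:

```python
def update_belief(belief, p_n_value, p_bg_value):
    """One step of Eq. (7): multiply the running belief by the
    foreground/background density ratio for the newly examined feature."""
    return belief * (p_n_value / p_bg_value)

belief = 1.0                                   # B0(vH) = 1
for pn_v, pbg_v in [(0.9, 0.1), (0.6, 0.2)]:   # two accepted matches
    belief = update_belief(belief, pn_v, pbg_v)
```

In practice an implementation would accumulate log-ratios instead, to avoid overflow after many matches.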
We compute such a hypothesis incrementally by asking questions sequentially. Each time a question is asked we update the beliefs. We stop the process and declare a detection (i.e. that a given model is present in the image) as soon as the belief of a corresponding hypothesis exceeds a given confidence threshold. The speed with which we reach such a conclusion depends on cleverly choosing the next question. A greedy strategy says that the best next question is the one that takes us closest to a detection decision. We do so by considering the entropy of the vector of beliefs (the vector may be normalized to 1 so that each belief is in fact a probability): the lower the entropy, the closer we are to a detection. Therefore we study the following heuristic: the most informative next question is the one that minimizes the expectation of the entropy of our beliefs. We call this strategy ‘minimum expected entropy’ (MEE). This idea is due to Geman et al. [14]. Calculating the MEE question is, unfortunately, a complex and expensive calculation in itself. In Monte-Carlo simulations of a simplified version of our problem, we noticed that the MEE strategy tends to ask questions that relate to the maximum-belief hypothesis. Therefore we approximate the MEE strategy with a simple heuristic: the next question consists of attempting to match one feature of the highest-belief model; specifically, the feature with the best appearance match to a feature in the test image.
3.3 Search for the best hypotheses
In an initialization step, a geometric hash table [3, 6, 7] is created by discretizing the space of possible transformations. Note that we add partial hypotheses to a hypothesis only one at a time, which allows us to discretize only the space of partial hypotheses (models + poses), instead of discretizing the space of combinations of partial hypotheses. Questions to be examined are created by pairing database features to the test features closest in terms of appearance.
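The approximated MEE heuristic then reduces next-question selection to two greedy lookups. A sketch with hypothetical, dictionary-based data structures (not the paper's implementation):

```python
def next_question(hypotheses, candidates):
    """Greedy approximation of the MEE strategy: take the hypothesis with
    the highest belief, then its candidate assignment with the smallest
    appearance distance. `hypotheses` maps hypothesis ids to beliefs;
    `candidates` maps each id to a list of (appearance_dist, match)."""
    best_h = max(hypotheses, key=hypotheses.get)
    dist, match = min(candidates[best_h])  # tuples compare by distance first
    return best_h, match
```

A real system would also remove the chosen candidate from the pool so each question is asked at most once.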
Note that since features encode location, orientation and scale, any single assignment between a test feature and a model feature contains enough information to characterize a similarity transformation. It is therefore natural to restrict the set of possible transformations to similarities, and to insert each candidate assignment in the corresponding geometric hash table entry. This forms a pool of candidate assignments. The set of hypotheses is initialized to the centers of the hash table entries, and their beliefs are set to 1. The motivation for this initialization step is to examine, for each partial hypothesis, only a small number of candidate matches. A partial hypothesis corresponds to a hash table entry; we consider only the candidate assignments that fall into this same entry. Each iteration proceeds as follows. The hypothesis H that currently has the highest likelihood ratio is selected. If the geometric hash table entry corresponding to the current partial hypothesis h contains candidate assignments that have not been examined yet, one of them, (f_i, f^{m_h}_j), is picked (currently, the best appearance match) and the probabilities p_bg(f_i) and p_n(f_i | f^{m_h}_j, m_h, X_h) are computed. As mentioned in Section 3.1, only the best assignment vector is explored: if p_n(f_i | f^{m_h}_j, m_h, X_h) > p_bg(f_i), the match is accepted and inserted in the hypothesis. In the alternative, f_i is considered a clutter detection and f^{m_h}_j is a missed detection. The belief B(v_H) and the likelihood ratio LR(H) are updated using (7).
Figure 3: Results from our algorithm in various situations (viewpoint change can be seen in Fig. 6). Each row shows the best hypothesis in terms of belief. a) Occlusion. b) Change of scale.
Figure 4: ROC curves for both experiments. The performance improvement from our probabilistic formulation is particularly significant when a low false alarm rate is desired. The threshold used is the repeatability rate defined in [15].
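Because each feature carries position, orientation, and scale, a single assignment pins down a 2D similarity transform that can then index the geometric hash table. A sketch with our own parameterization (the paper does not spell out its exact formulas):

```python
import numpy as np

def similarity_from_match(x_test, theta_test, sigma_test,
                          x_model, theta_model, sigma_model):
    """Recover (scale, rotation, translation) of the similarity that maps
    the model feature onto the test feature: scale from the sigma ratio,
    rotation from the orientation difference, translation as the residual
    after rotating and scaling the model position."""
    scale = sigma_test / sigma_model
    rot = theta_test - theta_model
    c, s = np.cos(rot), np.sin(rot)
    R = np.array([[c, -s], [s, c]])
    trans = np.asarray(x_test, float) - scale * R @ np.asarray(x_model, float)
    return scale, rot, trans
```

Discretizing (log scale, rotation, translation) then yields the hash table bin into which the candidate assignment is inserted.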
After adding an assignment to a hypothesis, the frame parameters X_h are recomputed using least-squares optimization, based on all assignments currently associated to this hypothesis. This parameter estimation step provides a progressive refinement of the model pose parameters as assignments are added. Fig. 2 illustrates this process. The exploration of a partial hypothesis ends when no more candidate matches are available in the hash table entry. We then proceed with the next best partial hypothesis. The search ends when all test scene features have been matched or assigned to clutter.
4 Experimental results
4.1 Experimental setting
We tested our algorithm on two sets of images, containing respectively 49 and 161 model images, and 101 and 51 test images (sets PM-gadgets-03 and JP-3Dobjects-04, available from http://www.vision.caltech.edu/html-files/archive.html). Each model image contained a single object. Test images contained from zero (negative examples) to five objects, for a total of 178 objects in the first set and 79 objects in the second set. A large fraction of each test image consists of background. The images were taken with no special precautions regarding lighting conditions or viewing angle. The first set contains common kitchen items and objects of everyday use. The second set (Ponce Lab, UIUC) includes office pictures. The objects were always moved between model images and test images. The images of model objects used in the learning stage were downsampled to fit in a 500 × 500 pixel box; the test images were downsampled to 800 × 800 pixels. With these settings, the number of features generated by the feature detector was on the order of 1,000 per training image and 2,000–4,000 per test image.
Figure 5: Behavior induced by clutter detections. A ground truth model was created by cutting a rectangle from the test image and adding noise. The recognition process is therefore expected to find a perfect match.
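The least-squares pose refinement described above is a linear problem for a 2D similarity. A sketch using point correspondences only (the paper's fit may also exploit orientation and scale):

```python
import numpy as np

def fit_similarity(model_pts, test_pts):
    """Solve [u; v] = [a -b; b a][x; y] + [tx; ty] for (a, b, tx, ty) in
    the least-squares sense over all current correspondences. The scale is
    hypot(a, b) and the rotation is atan2(b, a)."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(model_pts, test_pts):
        A.append([x, -y, 1.0, 0.0]); rhs.append(u)   # equation for u
        A.append([y,  x, 0.0, 1.0]); rhs.append(v)   # equation for v
    params, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(rhs), rcond=None)
    return params  # (a, b, tx, ty)
```

Each new accepted assignment simply appends two rows to the system, so the refit stays cheap as the hypothesis grows.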
The two rows show the best and second-best models found by each algorithm (estimated frame position shown by the red box; features that found a match are shown in yellow).
4.2 Results
Our probabilistic method was compared against Lowe’s voting approach on both sets of images. We implemented Lowe’s algorithm following the details provided in [3, 4]. Direct comparison of our approach to ‘constellation’ models [1, 2] is not possible, as those require many training samples for each class in order to learn shape parameters, while our method learns from single examples. Recognition time for our unoptimized implementations was 10 seconds for Lowe’s algorithm and 25 seconds for our probabilistic method on a 2.8 GHz PC; both implementations used approximately 200 MB of memory. Both methods yielded similar detection rates for simple scenes. In challenging situations with multiple objects or textured clutter, our method performs a more systematic check on geometric consistency by updating likelihoods every time a match is added. Hypotheses starting with wrong matches due to clutter do not find further supporting matches, and are easily discarded by a threshold based on the number of matches. Conversely, Lowe’s algorithm checks geometric consistency as a last step of the recognition process, but needs to allow for a large slop in the transformation parameters. Spurious matches induced by clutter detections may still be accepted, thus leading to the acceptance of incorrect hypotheses. An example of this behavior is displayed in Fig. 5: the test image consists of a picture of concrete. A rectangular patch was extracted from this image, noise was added to this patch, and it was inserted in the database as a new model. With our algorithm, the best hypothesis found the correct match with the patch of concrete; its best contender does not succeed in collecting more than one correspondence and is discarded.
In Lowe’s case, other models manage to accumulate a high number of correspondences induced by texture matches among clutter detections. Although the first correspondence concerns the correct model, it contains wrong matches. Moreover, the model displayed in the second row leads to a false alarm supported by many matches. Fig. 4 displays receiver operating characteristic (ROC) curves for both test sets, obtained for our probabilistic system and Lowe’s method. Both curves confirm that our probabilistic interpretation leads to fewer false alarms than Lowe’s method for the same detection rate.
5 Conclusion
We have proposed an object recognition method that combines the benefits of a set of rich features with those of a probabilistic model of feature positions and appearance. The use of a large number of features brings robustness with respect to occlusions and clutter. The probabilistic model verifies the validity of candidate hypotheses in terms of appearance and geometric configuration. Our system improves upon a state-of-the-art recognition method based on strict feature matching. In particular, the rate of false alarms in the presence of textured backgrounds generating strong erroneous matches is lower. This is a strong advantage in real-world situations, where a “clean” background is not always available.
Figure 6: Sample scenes and training objects from the two sets of images. Recognized frame poses are overlaid in red.
References
[1] M. Weber, M. Welling and P. Perona, “Unsupervised Learning of Models for Recognition”, Proc. Europ. Conf. Comp. Vis., 2000.
[2] R. Fergus, P. Perona, A. Zisserman, “Object Class Recognition by Unsupervised Scale-invariant Learning”, IEEE Conf. on Comp. Vis. and Patt. Recog., 2003.
[3] D.G. Lowe, “Object Recognition from Local Scale-invariant Features”, ICCV, 1999.
[4] D.G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints”, Int. J. Comp. Vis., 60(2), pp. 91-110, 2004.
[5] G. Carneiro and A.
Jepson, “Flexible Spatial Models for Grouping Local Image Features”, IEEE Conf. on Comp. Vis. and Patt. Recog., 2004.
[6] I. Rigoutsos and R. Hummel, “A Bayesian Approach to Model Matching with Geometric Hashing”, CVIU, 62(1), pp. 11-26, 1995.
[7] W.E.L. Grimson and D.P. Huttenlocher, “On the Sensitivity of Geometric Hashing”, ICCV, 1990.
[8] H. Rowley, S. Baluja, T. Kanade, “Neural Network-based Face Detection”, IEEE Trans. Patt. Anal. Mach. Int., 20(1), pp. 23-38, 1998.
[9] P. Viola and M. Jones, “Rapid Object Detection Using a Boosted Cascade of Simple Features”, Proc. IEEE Conf. Comp. Vis. Patt. Recog., 2001.
[10] L. Fei-Fei, R. Fergus, P. Perona, “Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories”, CVPR, 2004.
[11] P. Moreels, M. Maire, P. Perona, “Recognition by Probabilistic Hypothesis Construction”, Proc. 8th Europ. Conf. Comp. Vision, Prague, Czech Republic, pp. 55-68, 2004.
[12] T. Lindeberg, “Scale-space Theory: a Basic Tool for Analysing Structures at Different Scales”, J. Appl. Stat., 21(2), pp. 225-270, 1994.
[13] A.R. Pope and D.G. Lowe, “Probabilistic Models of Appearance for 3-D Object Recognition”, Int. J. Comp. Vis., 40(2), pp. 149-167, 2000.
[14] D. Geman and B. Jedynak, “An Active Testing Model for Tracking Roads in Satellite Images”, IEEE Trans. Patt. Anal. Mach. Int., 18(1), pp. 1-14, 1996.
[15] C. Schmid, R. Mohr, C. Bauckhage, “Comparing and Evaluating Interest Points”, Proc. of 6th Int. Conf. Comp. Vis., Bombay, India, 1998.
| 2004 | 159 | 2,573 |
Distributed Occlusion Reasoning for Tracking with Nonparametric Belief Propagation
Erik B. Sudderth, Michael I. Mandel, William T. Freeman, and Alan S. Willsky
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
esuddert@mit.edu, mim@alum.mit.edu, billf@mit.edu, willsky@mit.edu
Abstract
We describe a three-dimensional geometric hand model suitable for visual tracking applications. The kinematic constraints implied by the model’s joints have a probabilistic structure which is well described by a graphical model. Inference in this model is complicated by the hand’s many degrees of freedom, as well as multimodal likelihoods caused by ambiguous image measurements. We use nonparametric belief propagation (NBP) to develop a tracking algorithm which exploits the graph’s structure to control complexity, while avoiding costly discretization. While kinematic constraints naturally have a local structure, self-occlusions created by the imaging process lead to complex interdependencies in color and edge-based likelihood functions. However, we show that local structure may be recovered by introducing binary hidden variables describing the occlusion state of each pixel. We augment the NBP algorithm to infer these occlusion variables in a distributed fashion, and then analytically marginalize over them to produce hand position estimates which properly account for occlusion events. We provide simulations showing that NBP may be used to refine inaccurate model initializations, as well as track hand motion through extended image sequences.
1 Introduction
Accurate visual detection and tracking of three-dimensional articulated objects is a challenging problem with applications in human-computer interfaces, motion capture, and scene understanding [1]. In this paper, we develop a probabilistic method for tracking a geometric hand model from monocular image sequences.
Because articulated hand models have many (roughly 26) degrees of freedom, exact representation of the posterior distribution over model configurations is intractable. Trackers based on extended and unscented Kalman filters [2, 3] have difficulties with the multimodal uncertainties produced by ambiguous image evidence. This has motivated many researchers to consider nonparametric representations, including particle filters [4, 5] and deterministic multiscale discretizations [6]. However, the hand’s high dimensionality can cause these trackers to suffer catastrophic failures, requiring the use of models which limit the hand’s motion [4] or sophisticated prior models of hand configurations and dynamics [5, 6]. An alternative way to address the high dimensionality of articulated tracking problems is to describe the posterior distribution’s statistical structure using a graphical model.
Figure 1: Projected edges (left block) and silhouettes (right block) for a configuration of the 3D structural hand model matching the given image. To aid visualization, the model is also projected following rotations by 35◦ (center) and 70◦ (right) about the vertical axis.
Graphical models have been used to track view-based human body representations [7], contour models of restricted hand configurations [8], view-based 2.5D “cardboard” models of hands and people [9], and a full 3D kinematic human body model [10]. Because the variables in these graphical models are continuous, and discretization is intractable for three-dimensional models, most traditional graphical inference algorithms are inapplicable. Instead, these trackers are based on recently proposed extensions of particle filters to general graphs: mean field Monte Carlo in [9], and nonparametric belief propagation (NBP) [11, 12] in [10]. In this paper, we show that NBP may be used to track a three-dimensional geometric model of the hand.
To derive a graphical model for the tracking problem, we consider a redundant local representation in which each hand component is described by its own three–dimensional position and orientation. We show that the model’s kinematic constraints, including self–intersection constraints not captured by joint angle representations, take a simple form in this local representation. We also provide a local decomposition of the likelihood function which properly handles occlusion in a distributed fashion, a significant improvement over our earlier tracking results [13]. We conclude with simulations demonstrating our algorithm’s robustness to occlusions.

2 Geometric Hand Modeling

Structurally, the hand is composed of sixteen approximately rigid components: three phalanges or links for each finger and thumb, as well as the palm [1]. As proposed by [2, 3], we model each rigid body by one or more truncated quadrics (ellipsoids, cones, and cylinders) of fixed size. These geometric primitives are well matched to the true geometry of the hand, allow tracking from arbitrary orientations (in contrast to 2.5D “cardboard” models [5, 9]), and permit efficient computation of projected boundaries and silhouettes [3]. Figure 1 shows the edges and silhouettes corresponding to a sample hand model configuration. Note that only a coarse model of the hand’s geometry is necessary for tracking.

2.1 Kinematic Representation and Constraints

The kinematic constraints between different hand model components are well described by revolute joints [1]. Figure 2(a) shows a graph describing this kinematic structure, in which nodes correspond to rigid bodies and edges to joints. The two joints connecting the phalanges of each finger and thumb have a single rotational degree of freedom, while the joints connecting the base of each finger to the palm have two degrees of freedom (corresponding to grasping and spreading motions).
These twenty angles, combined with the palm’s global position and orientation, provide 26 degrees of freedom. Forward kinematic transformations may be used to determine the finger positions corresponding to a given set of joint angles. While most model–based hand trackers use this joint angle parameterization, we instead explore a redundant representation in which the ith rigid body is described by its position qi and orientation ri (a unit quaternion). Let xi = (qi, ri) denote this local description of each component, and x = {x1, . . . , x16} the overall hand configuration. Clearly, there are dependencies among the elements of x implied by the kinematic constraints.

Figure 2: Graphs describing the hand model’s constraints. (a) Kinematic constraints (EK) derived from revolute joints. (b) Structural constraints (ES) preventing 3D component intersections. (c) Dynamics relating two consecutive time steps. (d) Occlusion consistency constraints (EO).

Let EK be the set of all pairs of rigid bodies which are connected by joints, or equivalently the edges in the kinematic graph of Fig. 2(a). For each joint (i, j) ∈ EK, define an indicator function ψK_ij(xi, xj) which is equal to one if the pair (xi, xj) are valid rigid body configurations associated with some setting of the angles of joint (i, j), and zero otherwise. Viewing the component configurations xi as random variables, the following prior explicitly enforces all constraints implied by the original joint angle representation:

pK(x) ∝ ∏_{(i,j)∈EK} ψK_ij(xi, xj)    (1)

Equation (1) shows that pK(x) is an undirected graphical model, whose Markov structure is described by the graph representing the hand’s kinematic structure (Fig. 2(a)).

2.2 Structural and Temporal Constraints

In reality, the hand’s joint angles are coupled because different fingers can never occupy the same physical volume.
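A toy numerical sketch may make the indicator-product structure of eq. (1) concrete. The `valid_joint` test below is a hypothetical stand-in for the real joint-angle validity check, which the paper does not spell out:

```python
def kinematic_prior(x, edges, valid_joint):
    """Unnormalized kinematic prior pK(x) of eq. (1): a product of
    0/1 indicator potentials psi^K_ij over the joint edges EK."""
    p = 1.0
    for (i, j) in edges:
        p *= valid_joint(x[i], x[j])
    return p

# Hypothetical 1D stand-in: neighboring components may be at most 1.0 apart.
valid = lambda xi, xj: 1.0 if abs(xi - xj) <= 1.0 else 0.0
edges = [(0, 1), (1, 2)]
print(kinematic_prior([0.0, 0.5, 1.2], edges, valid))  # 1.0: valid configuration
print(kinematic_prior([0.0, 2.0, 2.5], edges, valid))  # 0.0: joint (0, 1) violated
```

Because every potential is an indicator, the prior simply restricts the redundant representation to the configurations reachable by the original joint angles.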
This non-intersection constraint is complex in a joint angle parameterization, but simple in our local representation: the position and orientation of every pair of rigid bodies must be such that their component quadric surfaces do not intersect. We approximate this ideal constraint in two ways. First, we only explicitly constrain those pairs of rigid bodies which are most likely to intersect, corresponding to the edges ES of the graph in Fig. 2(b). Furthermore, because the relative orientations of each finger’s quadrics are implicitly constrained by the kinematic prior pK(x), we may detect most intersections based on the distance between object centroids. The structural prior is then given by

pS(x) ∝ ∏_{(i,j)∈ES} ψS_ij(xi, xj),   where ψS_ij(xi, xj) = 1 if ||qi − qj|| > δij, and 0 otherwise    (2)

where δij is determined from the quadrics composing rigid bodies i and j. Empirically, we find that this constraint helps prevent different fingers from tracking the same image data.

In order to track hand motion, we must model the hand’s dynamics. Let x^t_i denote the position and orientation of the ith hand component at time t, and x^t = {x^t_1, . . . , x^t_16}. For each component at time t, our dynamical model adds a Gaussian potential connecting it to the corresponding component at the previous time step (see Fig. 2(c)):

pT(x^t | x^{t−1}) = ∏_{i=1}^{16} N(x^t_i − x^{t−1}_i; 0, Λi)    (3)

Although this temporal model is factorized, the kinematic constraints at the following time step implicitly couple the corresponding random walks. These dynamics can be justified as the maximum entropy model given observations of the nodes’ marginal variances Λi.

3 Observation Model

Skin colored pixels have predictable statistics, which we model using a histogram distribution pskin estimated from training patches [14]. Images without people were used to create a histogram model pbkgd of non–skin pixels. Let Ω(x) denote the silhouette of projected hand configuration x.
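Before turning to the observation model, the structural and temporal potentials of eqs. (2) and (3) can be sketched directly; the centroid threshold and variances below are illustrative values, not the paper's:

```python
import numpy as np

def psi_structural(q_i, q_j, delta_ij):
    # psi^S_ij of eq. (2): 1 if centroids are separated by more than delta_ij
    return 1.0 if np.linalg.norm(q_i - q_j) > delta_ij else 0.0

def p_temporal(x_t, x_prev, var):
    # eq. (3): independent Gaussian random-walk potentials on each component
    d = x_t - x_prev
    return float(np.prod(np.exp(-0.5 * d * d / var) / np.sqrt(2 * np.pi * var)))

q1, q2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
print(psi_structural(q1, q2, delta_ij=1.0))  # 1.0: far enough apart
print(psi_structural(q1, q2, delta_ij=3.0))  # 0.0: would intersect
```

The hard 0/1 structural potential is what later makes the structural messages non-integrable, motivating the analytic approximation of Sec. 4.3.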
Then, assuming pixels Υ are independent, an image y has likelihood

pC(y | x) = ∏_{u∈Ω(x)} pskin(u) ∏_{v∈Υ\Ω(x)} pbkgd(v) ∝ ∏_{u∈Ω(x)} pskin(u)/pbkgd(u)    (4)

The final expression neglects the proportionality constant ∏_{v∈Υ} pbkgd(v), which is independent of x, and thereby limits computation to the silhouette region [8].

3.1 Distributed Occlusion Reasoning

In configurations where there is no self–occlusion, pC(y | x) decomposes as a product of local likelihood terms involving the projections Ω(xi) of individual hand components [13]. To allow a similar decomposition (and hence distributed inference) when there is occlusion, we augment the configuration xi of each node with a set of binary hidden variables zi = {zi(u)}_{u∈Υ}. Letting zi(u) = 0 if pixel u in the projection of rigid body i is occluded by any other body, and 1 otherwise, the color likelihood (eq. (4)) may be rewritten as

pC(y | x, z) = ∏_{i=1}^{16} ∏_{u∈Ω(xi)} (pskin(u)/pbkgd(u))^{zi(u)} = ∏_{i=1}^{16} pC(y | xi, zi)    (5)

Assuming they are set consistently with the hand configuration x, the hidden occlusion variables z ensure that the likelihood of each pixel in Ω(x) is counted exactly once. We may enforce consistency of the occlusion variables using the following function:

η(xj, zi(u); xi) = 0 if xj occludes xi, u ∈ Ω(xj), and zi(u) = 1; and 1 otherwise    (6)

Note that because our rigid bodies are convex and nonintersecting, they can never take mutually occluding configurations. The constraint η(xj, zi(u); xi) is zero precisely when pixel u in the projection of xi should be occluded by xj, but zi(u) is in the unoccluded state. The following potential encodes all of the occlusion relationships between nodes i and j:

ψO_ij(xi, zi, xj, zj) = ∏_{u∈Υ} η(xj, zi(u); xi) η(xi, zj(u); xj)    (7)

Figure 3: Factor graph showing p(y | xi, zi), and the occlusion constraints placed on xi by xj, xk. Dashed lines denote weak dependencies. The plate is replicated once per pixel.
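A minimal sketch of the locally decomposed color likelihood of eq. (5), in log space; the histogram values are made up for illustration:

```python
import math

def log_color_likelihood(silhouette, z, p_skin, p_bkgd):
    # eq. (5) in log space: each visible pixel (z[u] = 1) contributes the
    # skin/background log-ratio; occluded pixels (z[u] = 0) contribute nothing
    return sum(z[u] * (math.log(p_skin[u]) - math.log(p_bkgd[u]))
               for u in silhouette)

p_skin = {0: 0.8, 1: 0.8}   # illustrative histogram lookups for two pixels
p_bkgd = {0: 0.2, 1: 0.2}
z = {0: 1, 1: 0}            # pixel 1 occluded by another body
print(log_color_likelihood([0, 1], z, p_skin, p_bkgd))  # log 4, about 1.386
```

The occlusion mask z is exactly what lets each rigid body evaluate its own term independently, so the sixteen factors in eq. (5) can be computed in a distributed fashion.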
These occlusion constraints exist between all pairs of nodes. As with the structural prior, we enforce only those pairs EO (see Fig. 2(d)) most prone to occlusion:

pO(x, z) ∝ ∏_{(i,j)∈EO} ψO_ij(xi, zi, xj, zj)    (8)

Figure 3 shows a factor graph for the occlusion relationships between xi and its neighbors, as well as the observation potential pC(y | xi, zi). The occlusion potential η(xj, zi(u); xi) has a very weak dependence on xi, depending only on whether xi is behind xj relative to the camera.

3.2 Modeling Edge Filter Responses

Edges provide another important hand tracking cue. Using boundaries labeled in training images, we estimated a histogram pon of the response of a derivative of Gaussian filter steered to the edge’s orientation [8, 10]. A similar histogram poff was estimated for filter outputs at randomly chosen locations. Let Π(x) denote the oriented edges in the projection of model configuration x. Then, again assuming pixel independence, image y has edge likelihood

pE(y | x, z) ∝ ∏_{u∈Π(x)} pon(u)/poff(u) = ∏_{i=1}^{16} ∏_{u∈Π(xi)} (pon(u)/poff(u))^{zi(u)} = ∏_{i=1}^{16} pE(y | xi, zi)    (9)

where we have used the same occlusion variables z to allow a local decomposition.

4 Nonparametric Belief Propagation

Over the previous sections, we have shown that a redundant, local representation of the geometric hand model’s configuration x^t allows p(x^t | y^t), the posterior distribution of the hand model at time t given image observations y^t, to be written as

p(x^t | y^t) ∝ Σ_{z^t} pK(x^t) pS(x^t) pO(x^t, z^t) [∏_{i=1}^{16} pC(y^t | x^t_i, z^t_i) pE(y^t | x^t_i, z^t_i)]    (10)

The summation marginalizes over the hidden occlusion variables z^t, which were needed to locally decompose the edge and color likelihoods. When τ video frames are observed, the overall posterior distribution is given by

p(x | y) ∝ ∏_{t=1}^{τ} p(x^t | y^t) pT(x^t | x^{t−1})    (11)

Excluding the potentials involving occlusion variables, which we discuss in detail in Sec. 4.2, eq.
(11) is an example of a pairwise Markov random field:

p(x | y) ∝ ∏_{(i,j)∈E} ψ_ij(xi, xj) ∏_{i∈V} ψ_i(xi, y)    (12)

Hand tracking can thus be posed as inference in a graphical model, a problem we propose to solve using belief propagation (BP) [15]. At each BP iteration, some node i ∈ V calculates a message m_ij(xj) to be sent to a neighbor j ∈ Γ(i) ≜ {j | (i, j) ∈ E}:

m^n_ij(xj) ∝ ∫_{xi} ψ_ji(xj, xi) ψ_i(xi, y) ∏_{k∈Γ(i)\j} m^{n−1}_ki(xi) dxi    (13)

At any iteration, each node can produce an approximation p̂(xi | y) to the marginal distribution p(xi | y) by combining the incoming messages with the local observation:

p̂^n(xi | y) ∝ ψ_i(xi, y) ∏_{j∈Γ(i)} m^n_ji(xi)    (14)

For tree–structured graphs, the beliefs p̂^n(xi | y) will converge to the true marginals p(xi | y). On graphs with cycles, BP is approximate but often highly accurate [15].

4.1 Nonparametric Representations

For the hand tracking problem, the rigid body configurations xi are six–dimensional continuous variables, making accurate discretization intractable. Instead, we employ nonparametric, particle–based approximations to these messages using the nonparametric belief propagation (NBP) algorithm [11, 12]. In NBP, each message is represented using either a sample–based density estimate (a mixture of Gaussians) or an analytic function. Both types of messages are needed for hand tracking, as we discuss below. Each NBP message update involves two stages: sampling from the estimated marginal, followed by Monte Carlo approximation of the outgoing message. For the general form of these updates, see [11]; the following sections focus on the details of the hand tracking implementation. The hand tracking application is complicated by the fact that the orientation component ri of xi = (qi, ri) is an element of the rotation group SO(3). Following [10], we represent orientations as unit quaternions, and use a linearized approximation when constructing density estimates, projecting samples back to the unit sphere as necessary.
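Although the hand tracker itself uses nonparametric (Gaussian-mixture) messages over continuous poses, the BP updates of eqs. (13) and (14) are easiest to see in a tiny discrete example:

```python
import numpy as np

def bp_message(psi_ji, phi_i, incoming):
    # eq. (13), discrete analogue: m_ij(x_j) = sum_{x_i} psi(x_j, x_i)
    # * phi_i(x_i) * product of messages into i from nodes other than j
    prod = phi_i.copy()
    for m in incoming:
        prod = prod * m
    msg = psi_ji @ prod
    return msg / msg.sum()

def belief(phi_i, incoming):
    # eq. (14): belief = local evidence times all incoming messages
    b = phi_i.copy()
    for m in incoming:
        b = b * m
    return b / b.sum()

# Two-state toy edge i -> j
psi = np.array([[0.9, 0.1],    # compatibility psi(x_j, x_i)
                [0.1, 0.9]])
phi_i = np.array([0.7, 0.3])   # local evidence at node i
m = bp_message(psi, phi_i, incoming=[])
print(m)  # [0.66 0.34] -- the message carries i's preference to j
```

NBP replaces the exact sum with sampling from the estimated marginal followed by Monte Carlo approximation of the outgoing message, since the continuous six-dimensional integrals cannot be evaluated in closed form.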
This linearized approximation is most appropriate for densities with tightly concentrated rotational components.

4.2 Marginal Computation

BP’s estimate of the belief p̂(xi | y) is equal to the product of the incoming messages from neighboring nodes with the local observation potential (see eq. (14)). NBP approximates this product using importance sampling, as detailed in [13] for cases where there is no self–occlusion. First, M samples are drawn from the product of the incoming kinematic and temporal messages, which are Gaussian mixtures. We use a recently proposed multiscale Gibbs sampler [16] to efficiently draw accurate (albeit approximate) samples, while avoiding the exponential cost associated with direct sampling (a product of d M–Gaussian mixtures contains M^d Gaussians). Following normalization of the rotational component, each sample is assigned a weight equal to the product of the color and edge likelihoods with any structural messages. Finally, the computationally efficient “rule of thumb” heuristic [17] is used to set the bandwidth of Gaussian kernels placed around each sample. To derive BP updates for the occlusion masks zi, we first cluster (xi, zi) for each hand component so that p(x^t, z^t | y^t) has a pairwise form (as in eq. (12)). In principle, NBP could manage occlusion constraints by sampling candidate occlusion masks zi along with rigid body configurations xi. However, due to the exponentially large number of possible occlusion masks, we employ a more efficient analytic approximation. Consider the BP message sent from xj to (zi, xi), calculated by applying eq. (13) to the occlusion potential ∏_u η(xj, zi(u); xi). We assume that p̂(xj | y) is well separated from any candidate xi, a situation typically ensured by the kinematic and structural constraints. The occlusion constraint’s weak dependence on xi (see Fig. 3) then separates the message computation into two cases.
If xi lies in front of typical xj configurations, the BP message μ_{j,i(u)}(zi(u)) is uninformative. If xi is occluded, the message approximately equals

μ_{j,i(u)}(zi(u) = 0) = 1        μ_{j,i(u)}(zi(u) = 1) = 1 − Pr[u ∈ Ω(xj)]    (15)

where we have neglected correlations among pixel occlusion states, and where the probability is computed with respect to p̂(xj | y). By taking the product of these messages μ_{k,i(u)}(zi(u)) from all potential occluders xk and normalizing, we may determine an approximation to the marginal occlusion probability νi(u) ≜ Pr[zi(u) = 0]. Because the color likelihood pC(y | xi, zi) factorizes across pixels u, the BP approximation to pC(y | xi) may be written in terms of these marginal occlusion probabilities:

pC(y | xi) ∝ ∏_{u∈Ω(xi)} [νi(u) + (1 − νi(u)) pskin(u)/pbkgd(u)]    (16)

Intuitively, this equation downweights the color evidence at pixel u as the probability of that pixel’s occlusion increases. The edge likelihood pE(y | xi) averages over zi similarly. The NBP estimate of p̂(xi | y) is determined by sampling configurations of xi as before, and reweighting them using these occlusion–sensitive likelihood functions.

4.3 Message Propagation

To derive the propagation rule for non–occlusion edges, as suggested by [18] we rewrite the message update equation (13) in terms of the marginal distribution p̂(xi | y):

m^n_ij(xj) = α ∫_{xi} ψ_ji(xj, xi) [p̂^{n−1}(xi | y) / m^{n−1}_ji(xi)] dxi    (17)

Our explicit use of the current marginal estimate p̂^{n−1}(xi | y) helps focus the Monte Carlo approximation on the most important regions of the state space. Note that messages sent along kinematic, structural, and temporal edges depend only on the belief p̂(xi | y) following marginalization over occlusion variables zi. Details and pseudocode for the message propagation step are provided in [13]. For kinematic constraints, we sample uniformly among permissible joint angles, and then use forward kinematics to propagate samples from p̂^{n−1}(xi | y) / m^{n−1}_ji(xi) to hypothesized configurations of xj. Following [12], temporal messages are determined by adjusting the bandwidths of the current marginal estimate p̂(xi | y) to match the temporal covariance Λi. Because structural potentials (eq. (2)) equal one for all state configurations outside some ball, the ideal structural messages are not finitely integrable. We therefore approximate the structural message m_ij(xj) as an analytic function equal to the weights of all kernels in p̂(xi | y) outside a ball centered at qj, the position of xj.

Figure 4: Refinement of a coarse initialization following one and two NBP iterations, both without (left) and with (right) occlusion reasoning. Each plot shows the projection of the five most significant modes of the estimated marginal distributions. Note the difference in middle finger estimates.

5 Simulations

We now present a set of computational examples which investigate the performance of our distributed occlusion reasoning; see [13] for additional simulations. In Fig. 4, we use NBP to refine a coarse, user–supplied initialization into an accurate estimate of the hand’s configuration in a single image. When occlusion constraints are neglected, the NBP estimates associate the ring and middle fingers with the same image pixels, and miss the true middle finger location. With proper occlusion reasoning, however, the correct hand configuration is identified. Using M = 200 particles, our Matlab implementation requires about one minute for each NBP iteration (an update of all messages in the graph). Video sequences demonstrating the NBP hand tracker are available at http://ssg.mit.edu/nbp/. Selected frames from two of these sequences are shown in Fig. 5, in which filtered estimates are computed by a single “forward” sequence of temporal message updates. The initial frame was approximately initialized manually.
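Stepping back to the occlusion-marginalized likelihood of eq. (16), a short sketch shows the downweighting behavior directly; the histogram values are illustrative:

```python
def occlusion_aware_color_likelihood(silhouette, nu, p_skin, p_bkgd):
    # eq. (16): mix each pixel's skin/background ratio with 1 according
    # to its marginal occlusion probability nu(u)
    like = 1.0
    for u in silhouette:
        like *= nu[u] + (1.0 - nu[u]) * p_skin[u] / p_bkgd[u]
    return like

p_skin, p_bkgd = {0: 0.8}, {0: 0.2}
print(occlusion_aware_color_likelihood([0], {0: 0.0}, p_skin, p_bkgd))  # 4.0: fully visible
print(occlusion_aware_color_likelihood([0], {0: 1.0}, p_skin, p_bkgd))  # 1.0: fully occluded
```

As the occlusion probability ν(u) rises toward 1, the pixel's factor smoothly approaches 1 and stops influencing the configuration's weight.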
The first sequence shows successful tracking through a partial occlusion of the ring finger by the middle finger, while the second shows a grasping motion in which the fingers occlude each other. For both of these sequences, rough tracking (not shown) is possible without occlusion reasoning, since all fingers are the same color and the background is unambiguous. However, we find that stability improves when occlusion reasoning is used to properly discount obscured edges and silhouettes.

Figure 5: Four frames from two different video sequences: a hand rotation containing finger occlusion (top), and a grasping motion (bottom). We show the projections of NBP’s marginal estimates.

6 Discussion

Sigal et al. [10] developed a three–dimensional NBP person tracker which models the conditional distribution of each linkage’s location, given its neighbor, via a Gaussian mixture estimated from training data. In contrast, we have shown that an NBP tracker may be built around the local structure of the true kinematic constraints. Conceptually, this has the advantage of providing a clearly specified, globally consistent generative model whose properties can be analyzed. Practically, our formulation avoids the need to explicitly approximate kinematic constraints, and allows us to build a functional tracker without the need for precise, labelled training data. We have described the graphical structure underlying a kinematic model of the hand, and used this model to build a tracking algorithm using nonparametric BP. By appropriately augmenting the model’s state, we are able to perform occlusion reasoning in a distributed fashion. The modular state representation and robust, local computations of NBP offer a solution particularly well suited to visual tracking of articulated objects.

Acknowledgments

The authors thank C.
Mario Christoudias and Michael Siracusa for their help with video data collection, and Michael Black, Alexander Ihler, Michael Isard, and Leonid Sigal for helpful conversations. This research was supported in part by DARPA Contract No. NBCHD030010.

References

[1] Y. Wu and T. S. Huang. Hand modeling, analysis, and recognition. IEEE Signal Proc. Mag., pages 51–60, May 2001.
[2] J. M. Rehg and T. Kanade. DigitEyes: Vision–based hand tracking for human–computer interaction. In Proc. IEEE Workshop on Non–Rigid and Articulated Objects, 1994.
[3] B. Stenger, P. R. S. Mendonca, and R. Cipolla. Model–based 3D tracking of an articulated hand. In CVPR, volume 2, pages 310–315, 2001.
[4] J. MacCormick and M. Isard. Partitioned sampling, articulated objects, and interface–quality hand tracking. In ECCV, volume 2, pages 3–19, 2000.
[5] Y. Wu, J. Y. Lin, and T. S. Huang. Capturing natural hand articulation. In ICCV, 2001.
[6] B. Stenger, A. Thayananthan, P. H. S. Torr, and R. Cipolla. Filtering using a tree–based estimator. In ICCV, pages 1063–1070, 2003.
[7] D. Ramanan and D. A. Forsyth. Finding and tracking people from the bottom up. In CVPR, volume 2, pages 467–474, 2003.
[8] J. M. Coughlan and S. J. Ferreira. Finding deformable shapes using loopy belief propagation. In ECCV, volume 3, pages 453–468, 2002.
[9] Y. Wu, G. Hua, and T. Yu. Tracking articulated body by dynamic Markov network. In ICCV, pages 1094–1101, 2003.
[10] L. Sigal, M. Isard, B. H. Sigelman, and M. J. Black. Attractive people: Assembling loose–limbed models using nonparametric belief propagation. In NIPS, 2003.
[11] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In CVPR, volume 1, pages 605–612, 2003.
[12] M. Isard. PAMPAS: Real–valued graphical models for computer vision. In CVPR, volume 1, pages 613–620, 2003.
[13] E. B. Sudderth, M. I. Mandel, W. T. Freeman, and A. S. Willsky. Visual hand tracking using nonparametric belief propagation. MIT LIDS TR2603, May 2004. Presented at CVPR Workshop on Generative Model Based Vision, June 2004. http://ssg.mit.edu/nbp/.
[14] M. J. Jones and J. M. Rehg. Statistical color models with application to skin detection. IJCV, 46(1):81–96, 2002.
[15] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. Technical Report 2004-040, MERL, May 2004.
[16] A. T. Ihler, E. B. Sudderth, W. T. Freeman, and A. S. Willsky. Efficient multiscale sampling from products of Gaussian mixtures. In NIPS, 2003.
[17] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall, 1986.
[18] D. Koller, U. Lerner, and D. Angelov. A general algorithm for approximate inference and its application to hybrid Bayes nets. In UAI 15, pages 324–333, 1999.
| 2004 | 16 |
| 2,574 |
Brain Inspired Reinforcement Learning
François Rivest*, Yoshua Bengio
Département d’informatique et de recherche opérationnelle, Université de Montréal
CP 6128 succ. Centre Ville, Montréal, QC H3C 3J7, Canada
francois.rivest@mail.mcgill.ca, bengioy@iro.umontreal.ca
John Kalaska
Département de physiologie, Université de Montréal
kalaskaj@physio.umontreal.ca

Abstract

Successful application of reinforcement learning algorithms often involves considerable hand-crafting of the necessary non-linear features to reduce the complexity of the value functions and hence to promote convergence of the algorithm. In contrast, the human brain readily and autonomously finds the complex features when provided with sufficient training. Recent work in machine learning and neurophysiology has demonstrated the role of the basal ganglia and the frontal cortex in mammalian reinforcement learning. This paper develops and explores new reinforcement learning algorithms inspired by neurological evidence that provides potential new approaches to the feature construction problem. The algorithms are compared and evaluated on the Acrobot task.

1 Introduction

Reinforcement learning algorithms often face the problem of finding useful complex non-linear features [1]. Reinforcement learning with non-linear function approximators like backpropagation networks attempts to address this problem, but in many cases has been demonstrated to be non-convergent [2]. The major challenge faced by these algorithms is that they must learn a value function instead of learning the policy, motivating an interest in algorithms directly modifying the policy [3]. In parallel, recent work in neurophysiology shows that the basal ganglia can be modeled by an actor-critic version of temporal difference (TD) learning [4][5][6], a well-known reinforcement learning algorithm. However, the basal ganglia do not, by themselves, solve the problem of finding complex features.
But the frontal cortex, which is known to play an important role in planning and decision-making, is tightly linked with the basal ganglia. The nature of their interaction is still poorly understood, and is generating a growing interest in neurophysiology.

* URL: http://www.iro.umontreal.ca/~rivestfr

This paper presents new algorithms based on current neurophysiological evidence about brain functional organization. It tries to devise biologically plausible algorithms that may help overcome existing difficulties in machine reinforcement learning. The algorithms are tested and compared on the Acrobot task. They are also compared to TD using standard backpropagation as function approximator.

2 Biological Background

The mammalian brain has multiple learning subsystems. Major learning components include the neocortex, the hippocampal formation (explicit memory storage system), the cerebellum (adaptive control system) and the basal ganglia (reinforcement learning, also known as instrumental conditioning). The cortex can be argued to be equipotent, meaning that, given the same input, any region can learn to perform the same computation. Nevertheless, the frontal lobe differs by receiving a particularly prominent innervation of a specific type of neurotransmitter, namely dopamine. The large frontal lobe in primates, and especially in humans, distinguishes them from lower mammals. Other regions of the cortex have been modeled using unsupervised learning methods such as ICA [7], but models of learning in the frontal cortex are only beginning to emerge. The frontal dopaminergic input arises in a part of the basal ganglia called the ventral tegmental area (VTA) and the substantia nigra (SN). The signal generated by dopaminergic (DA) neurons resembles the effective reinforcement signal of temporal difference (TD) learning algorithms [5][8]. Another important part of the basal ganglia is the striatum. This structure is made of two parts, the matriosome and the striosome.
Both receive input from the cortex (mostly frontal) and from the DA neurons, but the striosome projects principally to DA neurons in VTA and SN. The striosome is hypothesized to act as a reward predictor, allowing the DA signal to compute the difference between the expected and received reward. The matriosome projects back to the frontal lobe (for example, to the motor cortex). Its hypothesized role is therefore in action selection [4][5][6]. Although there have been several attempts to model the interactions between the frontal cortex and basal ganglia, little work has been done on learning in the frontal cortex. In [9], an adaptive learning system based on the cerebellum and the basal ganglia is proposed. In [10], a reinforcement learning model of the hippocampus is presented. In this paper, we do not attempt to model neurophysiological data per se, but rather to develop, from current neurophysiological knowledge, new and efficient biologically plausible reinforcement learning algorithms.

3 The Model

All models developed here follow the architecture depicted in Figure 1. The first layer (I) is the input layer, where activation represents the current state. The second layer, the hidden layer (H), is responsible for finding the non-linear features necessary to solve the task. Learning in this layer will vary from model to model. Both the input and the hidden layer feed the parallel actor-critic layers (A and V) which are the computational analogs of the striatal matriosome and striosome, respectively. They represent a linear actor-critic implementation of TD. The neurological literature reports an uplink from V and the reward to DA neurons which sends back the effective reinforcement signal e (dashed lines) to A, V and H. The A action units usually feed into the motor cortex, which controls muscle activation. Here, A’s are considered to represent the possible actions. The basal ganglia receive input mainly from the frontal cortex and the dopaminergic signal (e).
They also receive some input from parietal cortex (which, as opposed to the frontal cortex, does not receive DA input, and hence, may be unsupervised). H will represent frontal cortex when given e and non-frontal cortex when not. The weights W, v and U correspond to weights into the layers A, V and H respectively (e is not weighted).

Figure 1: Architecture of the models.

Let xt be the vector of the input layer activations based on the state of the environment at time t. Let f be the sigmoidal activation function of hidden units in H. Then yt = [f(u1 xt), …, f(un xt)]^T is the vector of activations of the hidden layer at time t, where ui is a row of the weight matrix U. Let zt = [xt^T yt^T]^T be the state description formed by the layers I and H at time t.

3.1 Actor-critic

The actor-critic model of the basal ganglia developed here is derived from [4]. It is very similar to the basal ganglia model in [5] which has been used to simulate neurophysiological data recorded while monkeys were learning a task [6]. All units are linear weighted sums of activity from the previous layers. The actor units behave under a winner-take-all rule. The winner’s activity settles to 1, and the others to 0. The initial weights are all equal and non-negative in order to obtain an initial optimistic policy. Beginning with an overestimate of the expected reward leads every action to be negatively corrected, one after the other, until the best one remains. This usually favors exploration. Then V(zt) = v^T zt. Let bt = W zt be the vector of activation of the actor layer before the winner-take-all processing. Let at = argmax_i(bt,i) be the winning action index at time t, and let the vector ct be the activation of the layer A after the winner-take-all processing, such that ct,a = 1 if a = at, and 0 otherwise.

3.1.1 Formal description

TD learns a function V of the state that should converge to the expected total discounted reward.
In order to do so, it updates V such that

V(zt−1) → E[rt + γ V(zt)]

where rt is the reward at time t and γ the discount factor. A simple way to achieve that is to transform the problem into an optimization problem where the goal is to minimize

E = E[(V(zt−1) − rt − γ V(zt))^2]

It is also useful at this point to introduce the TD effective reinforcement signal, equivalent to the dopaminergic signal [5]:

et = rt + γ V(zt) − V(zt−1)

Thus E = et^2 (taking the sample cost for a stochastic gradient). A learning rule for the weights v of V can then be devised by finding the gradient of E with respect to the weights v. Here, V is the weighted sum of the activity of I and H. Thus, the gradient is given by

∂E/∂v = 2 et [γ zt − zt−1]

Adding a learning rate and negating the gradient for minimization gives the update:

Δv = α et [zt−1 − γ zt]

Developing a learning rule for the actor units and their weights W using a cost function is a bit more complex. One approach is to use the tri-Hebbian rule

ΔW = α et ct−1 zt−1^T

Remark that only the row vector of weights of the winning action is modified. This rule was first introduced, but not simulated, in [4]. It associates the error e to the last selected action. If the reward is higher than expected (e > 0), then the action units activated by the previous state should be reinforced. Conversely, if it is less than expected (e < 0), then the winning actor unit activity should be reduced for that state. This is exactly what this tri-Hebbian rule does.

3.1.2 Biological justification

[4] presented the first description of an actor-critic architecture based on data from the basal ganglia that resembles the one here. The major difference is that the V update rule did not use the complete gradient information. A similar version was also developed in [5], but with little mathematical justification for the update rule.
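To make the actor-critic concrete, here is a minimal sketch of the forward pass and the two update rules derived above; layer sizes, learning rate, and discount factor are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def forward(x_t, U, W, v):
    # Figure 1 pipeline: sigmoid hidden layer H, state z_t = [x_t; y_t],
    # linear critic V(z_t) = v^T z_t, and a winner-take-all actor layer A
    y = 1.0 / (1.0 + np.exp(-U @ x_t))
    z = np.concatenate([x_t, y])
    b = W @ z
    c = np.zeros(len(b))
    c[np.argmax(b)] = 1.0
    return z, v @ z, c

def td_update(v, W, r_t, z_t, z_prev, c_prev, alpha=0.1, gamma=0.9):
    e = r_t + gamma * (v @ z_t) - (v @ z_prev)    # effective reinforcement e_t
    v = v + alpha * e * (z_prev - gamma * z_t)    # critic update (delta v)
    W = W + alpha * e * np.outer(c_prev, z_prev)  # tri-Hebbian actor update
    return v, W, e

# One step with an unexpected reward: with V = 0 everywhere, e_t equals the
# reward, and only the winning action's weight row moves.
v, W = np.zeros(4), np.zeros((3, 4))
z_prev = z_t = np.ones(4)
c_prev = np.array([1.0, 0.0, 0.0])
v, W, e = td_update(v, W, r_t=1.0, z_t=z_t, z_prev=z_prev, c_prev=c_prev)
print(e, W[0, 0], W[1, 0])  # 1.0 0.1 0.0
```

Note how the outer product with c_{t−1} restricts the actor update to the row of the action that actually won, exactly as the tri-Hebbian rule prescribes.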
The model presented here is simpler and the critic update rule is basically the same, but justified neurologically. Our model also has a more realistic actor update rule consistent with neurological knowledge of plasticity in the corticostriatal synapses [11] (H to V weights). The main purpose of the model presented in [5] was to simulate dopaminergic activity, for which V is the most important factor, and in this respect, it was very successful [6].

3.2 Hidden Layer

Because the reinforcement learning layer is linear, the hidden layer must learn the necessary non-linearity to solve the task. The rules below are attempts at neurologically plausible learning rules for the cortex, assuming it has no clear supervision signal other than the DA signal for the frontal cortex. All hidden units’ weight vectors are initialized randomly and scaled to norm 1 after each update.

• Fixed random. This is the baseline model to which the other algorithms will be compared. The hidden layer is composed of randomly generated hidden units that are not trained.

• ICA. In [7], the visual cortex was modeled by an ICA learning rule. If the non-frontal cortex is equipotent, then any region of the cortex could be successfully modeled using such a generic rule. The idea of combining unsupervised learning with reinforcement learning has already proven useful [1], but the unsupervised features were trained prior to the reinforcement training. On the other hand, [12] has shown that different systems of this sort could learn concurrently. Here, the ICA rule from [13] will be used as the hidden layer. This means that the hidden units are learning to reproduce the independent source signals at the origin of the observed mixed signal.

• Adaptive ICA (e-ICA). If H represents the frontal cortex, then an interesting variation of ICA is to multiply its update term by the DA signal e. The size of e may act as an adaptive learning rate whose source is the reinforcement learning system critic.
Also, if the reward is less than expected (e < 0), the features learned by the ICA unit may be more counterproductive than helpful, and e pushes the learning away from those features.

• e-gradient method: Another possible approach is to base the update rule on the derivative of the objective function E applied to the hidden layer weights U, while constraining the update rule to only use information available locally. Let f′ be the derivative of f; then the gradient of E with respect to U is approximated by:

∂E/∂u_i = 2 e_t [γ v_i f′(u_i x_t) x_t − v_i f′(u_i x_{t−1}) x_{t−1}]

Negating the gradient for minimization, adding a learning rate and removing the non-local weight information gives the weight update rule:

Δu_i = α e_t [f′(u_i x_{t−1}) x_{t−1} − γ f′(u_i x_t) x_t]

Using the values of the weights v would lead to a rule that uses non-local information. The cortex is unlikely to have this, and might consider all the weights in v to be equal to some constant. To avoid neurons all moving uniformly in the same direction, we encourage the units on the hidden layer to minimize their covariance. This can be achieved by adding an inhibitory neuron. Let q_t be the average activity of the hidden units at time t, i.e., the inhibitory neuron activity, and let q̄_t be the moving exponential average of q_t. Since

Var[q_t] = (1/n²) Σ_{i,j} cov(y_{i,t}, y_{j,t}) ≅ TimeAverage[(q_t − q̄_t)²]

and ignoring f's non-linearity, the gradient of Var[q_t] with respect to the weights U is approximated by:

∂Var[q_t]/∂u_i = 2 (q_t − q̄_t) x_t

Combined with the previous equation, this results in a new update rule:

Δu_i = α e_t [f′(u_i x_{t−1}) x_{t−1} − γ f′(u_i x_t) x_t] − α (q_t − q̄_t) x_t

When allowing the discount factor to be different on the hidden layer, we found that γ = 0 gave much better results (e-gradient(0)).

4 Simulations & Results

All models of section 3 were run on the Acrobot task [8].
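The e-gradient rule with the inhibitory covariance term described above can be sketched as follows (a minimal sketch, not the authors' code; the choice f = tanh, the exponential-average decay constant, and all names are our assumptions):

```python
import numpy as np

def e_gradient_update(U, x_prev, x_now, e, q_bar, gamma=0.0, alpha=0.1, decay=0.9):
    """e-gradient update for hidden weights U (one row per hidden unit).

    Uses f = tanh, so f' = 1 - tanh^2.  With gamma = 0 this corresponds to
    the e-gradient(0) variant reported to work best.
    """
    fprime = lambda s: 1.0 - np.tanh(s) ** 2

    # delta_u_i = alpha * e * [f'(u_i x_{t-1}) x_{t-1} - gamma f'(u_i x_t) x_t]
    g_prev = fprime(U @ x_prev)[:, None] * x_prev[None, :]
    g_now = fprime(U @ x_now)[:, None] * x_now[None, :]
    dU = alpha * e * (g_prev - gamma * g_now)

    # Inhibitory-neuron term: reduce Var[q_t], where q_t is the mean activity
    q = np.tanh(U @ x_now).mean()
    q_bar = decay * q_bar + (1.0 - decay) * q   # moving exponential average
    dU -= alpha * (q - q_bar) * x_now[None, :]

    U = U + dU
    # rescale each hidden weight vector to norm 1 after the update, as in the paper
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    return U, q_bar
```

The renormalization keeps every hidden unit's weight vector on the unit sphere after each step.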
4.1 The task: Acrobot

This task consists of a two-link pendulum with torque on the middle joint. The goal is to bring the tip of the second pole to a totally upright position. The input was coded using 12 equidistant radial basis functions for each angle and 13 equidistant radial basis functions for each angular velocity, for a total of 50 nonnegative inputs. This somewhat simulates the input from joint-angle receptors. A reward of 1 was given only when the final state was reached (in all other cases, the reward of an action was 0). Only 3 actions were available (3 actor units): either −1, 0 or 1 unit of torque. The details can be found in [8]. 50 networks with different random initializations were run for all models for 100 episodes (an episode is the sequence of steps the network performs to achieve the goal from the start position). Episodes were limited to 10000 steps. A number of learning rate values were tried for each model (actor-critic layer learning rate, and hidden layer learning rate). The selected parameters were the ones for which the average number of steps per episode plus its standard deviation was the lowest. All hidden layer models got a learning rate of 0.1.

4.2 Results

Figure 2 displays the learning curves of every model evaluated. Three variables were compared: overall learning performance (in number of steps to success per episode), final performance (number of steps on the last episode), and early learning performance (number of steps for the first episode).

Figure 2: Learning curves of the models (Baseline, ICA, e-ICA, e-Gradient(0)).

Figure 3: Average number of steps per episode with 95% confidence interval.
4.2.1 Space under the learning curve

Figure 3 shows the average steps per episode for each model in decreasing order. All models needed fewer steps on average than the baseline (which has no training at the hidden layer). In order to assess the performance of the models, an ANOVA analysis of the average number of steps per episode over the 100 episodes was performed. Scheffé post-hoc analysis revealed that the performance of every model was significantly different from every other, except for e-gradient and e-ICA (which are not significantly different from each other).

4.2.2 Final performance

ANOVA analysis was also used to determine the final performance of the models, by comparing the number of steps on the last episode. Scheffé test results showed that all but e-ICA are significantly better than the baseline. Figure 4 shows the results on the last episode in increasing order. The curved lines on top show the homogeneous subsets.

Figure 4: Number of steps on the last episode with 95% confidence interval.

Figure 5: Number of steps on the first episode with 95% confidence interval.

4.2.3 Early learning

Figure 2 shows that the models also differed in their initial learning. To assess how different those curves are, an ANOVA was run on the number of steps on the very first episode. Under this measure, e-gradient(0) and e-ICA were significantly faster than the baseline and ICA was significantly slower (Figure 5). It makes sense for ICA to be slower at the beginning, since it first has to stabilize for the RL system to be able to learn from its input. Until the ICA has stabilized, the RL system has moving inputs, and hence cannot learn effectively.
Interestingly, e-ICA was protected against this effect, having a start-up significantly faster than the baseline. This implies that the e signal could control the ICA learning to move synergistically with the reinforcement learning system.

4.3 External comparison

Acrobot was also run using standard backpropagation with TD and an ε-greedy policy. In this setup, a neural network of 50 inputs, 50 hidden sigmoidal units, and 1 linear output was used as function approximator for V. The network had cross-connections and its weights were initialized as in section 3, such that both architectures closely matched in terms of power. In this method, the RHS of the TD equation is used as a constant target value for the LHS. A single gradient step was applied to minimize the squared error after the result of each action. Although not different from the baseline on the first episode, it was significantly worse on overall and final performance, unable to improve consistently. This is a common problem when using backprop networks in RL without handcrafting the necessary complex features. We also tried SARSA (using one network per action), but results were worse than TD. The best result we found in the literature on the exact same task is from [8]. They used SARSA(λ) with a linear combination of tiles. Tile coding discretized the input space into small hyper-cubes, and few overlapping tilings were used. From available reports, their first trial could be slower than e-gradient(0), but they could reach better final performance after more than 100 episodes, with a final average of 75 steps (after 500 episodes). On the other hand, their function had about 75000 weights, while all our models used 2900 weights.

5 Discussion

In this paper we explored a new family of biologically plausible reinforcement learning algorithms inspired by models of the basal ganglia and the cortex.
They use a linear actor-critic model of the basal ganglia and were extended with a variety of unsupervised and partially supervised learning algorithms inspired by brain structures. The results showed that pure unsupervised learning slowed down learning and that a simple quasi-local rule at the hidden layer greatly improved performance. Results also demonstrated the advantage of such a simple system over the use of function approximators such as backpropagation. Empirical results indicate a strong potential for some of the combinations presented here. It remains to test them on further tasks, and to compare them to more reinforcement learning algorithms. Possible loops from the actor units to the hidden layer are also to be considered.

Acknowledgments

This research was supported by a New Emerging Team grant to John Kalaska and Yoshua Bengio from the CIHR. We thank Doina Precup for helpful discussions.

References

[1] Foster, D. & Dayan, P. (2002) Structure in the space of value functions. Machine Learning 49(2):325-346.
[2] Tsitsiklis, J.N. & Van Roy, B. (1996) Feature-based methods for large scale dynamic programming. Machine Learning 22:59-94.
[3] Sutton, R.S., McAllester, D., Singh, S. & Mansour, Y. (2000) Policy gradient methods for reinforcement learning with function approximation. Advances in Neural Information Processing Systems 12, pp. 1057-1063. MIT Press.
[4] Barto, A.G. (1995) Adaptive critics and the basal ganglia. In Models of Information Processing in the Basal Ganglia, pp. 215-232. Cambridge, MA: MIT Press.
[5] Suri, R.E. & Schultz, W. (1999) A neural network model with dopamine-like reinforcement signal that learns a spatial delayed response task. Neuroscience 91(3):871-890.
[6] Suri, R.E. & Schultz, W. (2001) Temporal difference model reproduces anticipatory neural activity. Neural Computation 13:841-862.
[7] Doi, E., Inui, T., Lee, T.-W., Wachtler, T. & Sejnowski, T.J.
(2003) Spatiochromatic receptive field properties derived from information-theoretic analysis of cone mosaic responses to natural scenes. Neural Computation 15:397-417.
[8] Sutton, R.S. & Barto, A.G. (1998) Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
[9] Doya, K. (1999) What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural Networks 12:961-974.
[10] Foster, D.J., Morris, R.G.M. & Dayan, P. (2000) A model of hippocampally dependent navigation, using the temporal difference learning rule. Hippocampus 10:1-16.
[11] Wickens, J. & Kötter, R. (1995) Cellular models of reinforcement. In Models of Information Processing in the Basal Ganglia, pp. 187-214. Cambridge, MA: MIT Press.
[12] Whiteson, S. & Stone, P. (2003) Concurrent layered learning. In Proceedings of the 2nd International Joint Conference on Autonomous Agents & Multi-agent Systems.
[13] Amari, S.-I. (1999) Natural gradient learning for over- and under-complete bases in ICA. Neural Computation 11:1875-1883.
| 2004 | 160 | 2,575 |
Semi-supervised Learning via Gaussian Processes

Neil D. Lawrence, Department of Computer Science, University of Sheffield, Sheffield, S1 4DP, U.K. neil@dcs.shef.ac.uk
Michael I. Jordan, Computer Science and Statistics, University of California, Berkeley, CA 94720, U.S.A. jordan@cs.berkeley.edu

Abstract

We present a probabilistic approach to learning a Gaussian Process classifier in the presence of unlabeled data. Our approach involves a “null category noise model” (NCNM) inspired by ordered categorical noise models. The noise model reflects an assumption that the data density is lower between the class-conditional densities. We illustrate our approach on a toy problem and present comparative results for the semi-supervised classification of handwritten digits.

1 Introduction

The traditional machine learning classification problem involves a set of input vectors X = [x1 . . . xN]T and associated labels y = [y1 . . . yN]T, yn ∈ {−1, 1}. The goal is to find a mapping between the inputs and the labels that yields high predictive accuracy. It is natural to consider whether such predictive performance can be improved via “semi-supervised learning,” in which a combination of labeled data and unlabeled data are available. Probabilistic approaches to classification either estimate the class-conditional densities or attempt to model p (yn|xn) directly. In the latter case, if we fail to make any assumptions about the underlying distribution of input data, the unlabeled data will not affect our predictions. Thus, most attempts to make use of unlabeled data within a probabilistic framework focus on incorporating a model of p (xn): for example, by treating it as a mixture, Σ_{yn} p (xn|yn) p (yn), and inferring p (yn|xn) (e.g., [5]), or by building kernels based on p (xn) (e.g., [8]).
These approaches can be unwieldy, however, in that the complexities of the input distribution are typically of little interest when performing classification, so that much of the effort spent modelling p (xn) may be wasted. An alternative is to make weaker assumptions regarding p (xn) that are of particular relevance to classification. In particular, the cluster assumption asserts that the data density should be reduced in the vicinity of a decision boundary (e.g., [2]). Such a qualitative assumption is readily implemented within the context of nonprobabilistic kernel-based classifiers. In the current paper we take up the challenge of showing how it can be achieved within a (nonparametric) probabilistic framework.

Figure 1: The ordered categorical noise model. The plot shows p (yn|fn) for different values of yn. Here we have assumed three categories.

Our approach involves a notion of a “null category region,” a region which acts to exclude unlabeled data points. Such a region is analogous to the traditional notion of a “margin” and indeed our approach is similar in spirit to the transductive SVM [10], which seeks to maximize the margin by allocating labels to the unlabeled data. A major difference, however, is that our approach maintains and updates the process variance (not merely the process mean) and, as we will see, this variance turns out to interact in a significant way with the null category concept. The structure of the paper is as follows. We introduce the basic probabilistic framework in Section 2 and discuss the effect of the null category in Section 3. Section 4 discusses posterior process updates and prediction. We present comparative experimental results in Section 5 and present our conclusions in Section 6.

2 Probabilistic Model

In addition to the input vector xn and the label yn, our model includes a latent process variable fn, such that the probability of class membership decomposes as p (yn|xn) = ∫ p (yn|fn) p (fn|xn) dfn.
We first focus on the noise model, p (yn|fn), deferring the discussion of an appropriate process model, p (fn|xn), to later.

2.1 Ordered categorical models

We introduce a novel noise model which we have termed a null category noise model, as it derives from the general class of ordered categorical models [1]. In the specific context of binary classification, our focus in this paper, we consider an ordered categorical model containing three categories (see also [9], who makes use of a similar noise model in a discussion of Bayesian interpretations of the SVM):

p (yn|fn) = φ(−(fn + w/2)) for yn = −1; φ(fn + w/2) − φ(fn − w/2) for yn = 0; φ(fn − w/2) for yn = 1,

where φ(x) = ∫_{−∞}^{x} N(z|0, 1) dz is the cumulative Gaussian distribution function and w is a parameter giving the width of category yn = 0 (see Figure 1). We can also express this model in an equivalent and simpler form by replacing the cumulative Gaussian distribution by a Heaviside step function H(·) and adding independent Gaussian noise to the process model:

p (yn|fn) = H(−(fn + 1/2)) for yn = −1; H(fn + 1/2) − H(fn − 1/2) for yn = 0; H(fn − 1/2) for yn = 1,

where we have standardized the width parameter to 1, by assuming that the overall scale is also handled by the process model.

Figure 2: Graphical representation of the null category model. The fully-shaded nodes are always observed, whereas the lightly-shaded node is observed when zn = 0.

To use this model in an unlabeled setting we introduce a further variable, zn, which is one if a data point is unlabeled and zero otherwise. We first impose

p (zn = 1|yn = 0) = 0; (1)

in other words, a data point cannot be from the category yn = 0 and be unlabeled. We assign probabilities of missing labels to the other classes: p (zn = 1|yn = 1) = γ+ and p (zn = 1|yn = −1) = γ−. We see from the graphical representation in Figure 2 that zn is d-separated from xn. Thus when yn is observed, the posterior process is updated by using p (yn|fn).
On the other hand, when the data point is unlabeled the posterior process must be updated by p (zn|fn), which is easily computed as:

p (zn = 1|fn) = Σ_{yn} p (yn|fn) p (zn = 1|yn).

The “effective likelihood function” for a single data point, L (fn), therefore takes one of three forms:

L (fn) = H(−(fn + 1/2)) for yn = −1, zn = 0; γ− H(−(fn + 1/2)) + γ+ H(fn − 1/2) for zn = 1; H(fn − 1/2) for yn = 1, zn = 0.

The constraint imposed by (1) implies that an unlabeled data point never comes from the class yn = 0. Since yn = 0 lies between the labeled classes, this is equivalent to a hard assumption that no data comes from the region around the decision boundary. We can also soften this hard assumption if so desired by injection of noise into the process model. If we also assume that our labeled data only comes from the classes yn = 1 and yn = −1, we will never obtain any evidence for data with yn = 0; for this reason we refer to this category as the null category and the overall model as a null category noise model (NCNM).

3 Process Model and Effect of the Null Category

We work within the Gaussian process framework and assume

p (fn|xn) = N (fn|µ (xn) , ς (xn)),

where the mean µ (xn) and the variance ς (xn) are functions of the input space.

Figure 3: Two situations of interest. Diagrams show the prior distribution over fn (long dashes), the effective likelihood function from the noise model when zn = 1 (short dashes) and a schematic of the resulting posterior over fn (solid line). Left: The posterior is bimodal and has a larger variance than the prior. Right: The posterior has one dominant mode and a lower variance than the prior. In both cases the process is pushed away from the null category.

A natural consideration in this setting is the effect of our likelihood function on the distribution over fn from incorporating a new data point.
First we note that if yn ∈ {−1, 1} the effect of the likelihood will be similar to that incurred in binary classification, in that the posterior will be a convolution of the step function and a Gaussian distribution. This is comforting, as when a data point is labeled the model will act in a similar manner to a standard binary classification model. Consider now the case when the data point is unlabeled. The effect will depend on the mean and variance of p (fn|xn). If this Gaussian has little mass in the null category region, the posterior will be similar to the prior. However, if the Gaussian has significant mass in the null category region, the outcome may be loosely described in two ways:

1. If p (fn|xn) “spans the likelihood,” Figure 3 (Left), then the mass of the posterior can be apportioned to either side of the null category region, leading to a bimodal posterior. The variance of the posterior will be greater than the variance of the prior, a consequence of the fact that the effective likelihood function is not log-concave (as can be easily verified).

2. If p (fn|xn) is “rectified by the likelihood,” Figure 3 (Right), then the mass of the posterior will be pushed into one side of the null category and the variance of the posterior will be smaller than the variance of the prior.

Note that for all situations when a portion of the mass of the prior distribution falls within the null category region, it is pushed out to one side or both sides. The intuition behind the two situations is that in case 1, it is not clear what label the data point has; however, it is clear that it shouldn’t be where it currently is (in the null category). The result is that the process variance increases. In case 2 the data point is being assigned a label and the decision boundary is pushed to one side of the point so that it is classified according to the assigned label.
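The effective likelihood of the NCNM can be written down directly (a minimal numerical sketch of the Heaviside form; the function names are ours, not the paper's):

```python
import numpy as np

def heaviside(x):
    """Heaviside step function H(x)."""
    return (np.asarray(x) >= 0).astype(float)

def effective_likelihood(f, y, z, gamma_plus=0.5, gamma_minus=0.5):
    """Effective likelihood L(f_n) of the null category noise model
    (Heaviside form, null-category width standardized to 1)."""
    f = np.asarray(f, dtype=float)
    if z == 1:                                   # unlabeled data point
        return (gamma_minus * heaviside(-(f + 0.5))
                + gamma_plus * heaviside(f - 0.5))
    if y == -1:
        return heaviside(-(f + 0.5))
    if y == 1:
        return heaviside(f - 0.5)
    raise ValueError("labeled points never carry y = 0")
```

An unlabeled point assigns zero likelihood to any f inside the null category region (−1/2, 1/2), which is exactly what pushes the process away from the decision boundary.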
4 Posterior Inference and Prediction

Broadly speaking, the effects discussed above are independent of the process model: the effective likelihood will always force the latent function away from the null category. To implement our model, however, we must choose a process model and an inference method. The nature of the noise model means that it is unlikely that we will find a non-trivial process model for which inference (in terms of marginalizing fn) will be tractable. We therefore turn to approximations which are inspired by “assumed density filtering” (ADF) methods; see, e.g., [3]. The idea in ADF is to approximate the (generally non-Gaussian) posterior with a Gaussian by matching the moments between the approximation and the true posterior. ADF has also been extended to allow each approximation to be revisited and improved as the posterior distribution evolves [7]. Recall from Section 3 that the noise model is not log-concave. When the variance of the process increases, the best Gaussian approximation to our noise model can have negative variance. This situation is discussed in [7], where various suggestions are given to cope with the issue. In our implementation we followed the simplest suggestion: we set a negative variance to zero. One important advantage of the Gaussian process framework is that hyperparameters in the covariance function (i.e., the kernel function) can be optimized by type-II maximum likelihood. In practice, however, if the process variance is maximized in an unconstrained manner, the effective width of the null category can be driven to zero, yielding a model that is equivalent to a standard binary classification noise model (see footnote 2). To prevent this from happening we regularize with an L1 penalty on the process variances (this is equivalent to placing an exponential prior on those parameters).
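The moment-matching step and the negative-variance fix can be illustrated on a single one-dimensional site (a sketch under our own simplifications: the tilted moments are computed by numerical integration on a grid, rather than with the closed-form cumulative-Gaussian expressions used in practice):

```python
import numpy as np

def adf_update(mu, var, likelihood, grid_half_width=10.0, n=20001):
    """Moment-match a Gaussian against prior-times-likelihood.

    Returns the mean and variance of the renormalized tilted distribution
    p(f) proportional to N(f|mu, var) * likelihood(f).  A negative variance
    produced elsewhere by a non-log-concave site would be clamped to zero,
    the simplest of the fixes suggested in [7].
    """
    f = np.linspace(mu - grid_half_width, mu + grid_half_width, n)
    df = f[1] - f[0]
    tilted = np.exp(-0.5 * (f - mu) ** 2 / var) * likelihood(f)
    Z = tilted.sum() * df
    new_mu = (f * tilted).sum() * df / Z
    new_var = ((f - new_mu) ** 2 * tilted).sum() * df / Z
    return new_mu, max(new_var, 0.0)
```

A labeled y = 1 site (likelihood H(f − 1/2)) rectifies the Gaussian and shrinks its variance, while an unlabeled site with mass on both sides of the null category yields a bimodal tilted distribution whose variance exceeds the prior's, matching the two cases of Figure 3.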
4.1 Prediction with the NCNM

Once the parameters of the process model have been learned, we wish to make predictions about a new test-point x∗ via the marginal distribution p (y∗|x∗). For the NCNM an issue arises here: this distribution will have a non-zero probability of y∗ = 0, a label that does not exist in either our labeled or unlabeled data. This is where the role of z becomes essential. The new point also has z∗ = 1, so in reality the probability that a data point is from the positive class is given by

p (y∗|x∗, z∗) ∝ p (z∗|y∗) p (y∗|x∗) . (2)

The constraint that p (z∗|y∗ = 0) = 0 causes the predictions to be correctly normalized. So for the distribution to be correctly normalized for a test data point we must assume that we have observed z∗ = 1. An interesting consequence is that observing x∗ will have an effect on the process model. This is contrary to the standard Gaussian process setup (see, e.g., [11]) in which the predictive distribution depends only on the labeled training data and the location of the test point x∗. In the NCNM the entire process model p (f∗|x∗) should be updated after the observation of x∗. This is not a particular disadvantage of our approach; rather, it is an inevitable consequence of any method that allows unlabeled data to affect the location of the decision boundary—a consequence that our framework makes explicit. In our experiments, however, we disregard such considerations and make (possibly suboptimal) predictions of the class labels according to (2).

[Footnote 2: Recall, as discussed in Section 1, that we fix the width of the null category to unity: changes in the scale of the process model are equivalent to changing this width.]

5 Experiments

Sparse representations of the data set are essential for speeding up the process of learning. We made use of the informative vector machine (IVM) approach [6] to
greedily select an active set according to information-theoretic criteria.

[Footnote 3: The informative vector machine is an approximation to a full Gaussian Process which is competitive with the support vector machine in terms of speed and accuracy.]

Figure 4: Results from the toy problem. There are 400 points, which are labeled with probability 0.1. Labelled data-points are shown as circles and crosses. Data-points in the active set are shown as large dots. All other data-points are shown as small dots. Left: Learning on the labeled data only with the IVM algorithm. All labeled points are used in the active set. Right: Learning on the labeled and unlabeled data with the NCNM. There are 100 points in the active set. In both plots decision boundaries are shown as a solid line; dotted lines represent contours within 0.5 of the decision boundary (for the NCNM this is the edge of the null category).

The IVM also enables efficient learning of kernel hyperparameters, and we made use of this feature in all of our experiments. In all our experiments we used a kernel of the form

knm = θ2 exp(−θ1 (xn − xm)T (xn − xm)) + θ3 δnm,

where δnm is the Kronecker delta function. The IVM algorithm selects an active set, and the parameters of the kernel were learned by performing type-II maximum likelihood over the active set. Since active set selection causes the marginalized likelihood to fluctuate, it cannot be used to monitor convergence; we therefore simply iterated fifteen times between active set selection and kernel parameter optimisation. The parameters of the noise model, {γ+, γ−}, can also be optimized, but note that if we constrain γ+ = γ− = γ then the likelihood is maximized by setting γ to the proportion of the training set that is unlabeled. We first considered an illustrative toy problem to demonstrate the capabilities of our model. We generated two-dimensional data in which two class-conditional densities interlock.
There were 400 points in the original data set. Each point was labeled with probability 0.1, leading to 37 labeled points. First a standard IVM classifier was trained on the labeled data only (Figure 4, Left). We then used the null category approach to train a classifier that incorporates the unlabeled data. As shown in Figure 4 (Right), the resulting decision boundary finds a region of low data density and more accurately reflects the underlying data distribution.

5.1 High-dimensional example

To explore the capabilities of the model when the data set is of a much higher dimensionality, we considered the USPS data set of handwritten digits. The task chosen was to separate the digit 3 from 5. To investigate performance across a range of different operating conditions, we varied the proportion of unlabeled data between 0.2 and 1.25 × 10−2.

[Footnote 4: The data set contains 658 examples of 5s and 556 examples of 3s.]

Figure 5: Area under the ROC curve plotted against probability of a point being labeled. Mean and standard errors are shown for the IVM (solid line), the NCNM (dotted line), the SVM (dash-dot line) and the transductive SVM (dashed line).

We compared four classifiers: a standard IVM trained on the labeled data only, a support vector machine (SVM) trained on the labeled data only, the NCNM trained on the combined labeled-unlabeled data, and an implementation of the transductive SVM trained on the combined labeled-unlabeled data. The SVM and transductive SVM used the SVMlight software [4]. For the SVM, the kernel inverse width hyperparameter θ1 was set to the value learned by the IVM. For the transductive SVM it was set to the higher of the two values learned by the IVM and the NCNM. For the SVM-based models we set θ2 = 1 and θ3 = 0; the margin error cost, C, was left at the SVMlight default setting.
The quality of the resulting classifiers was evaluated by computing the area under the ROC curve for a previously unseen test data set. Each run was completed ten times with different random seeds. The results are summarized in Figure 5. The results show that below a label probability of 2.5 × 10−2 both the SVM and transductive SVM outperform the NCNM. In this region the estimate θ1 provided by the NCNM was sometimes very low, leading to occasional very poor results (note the large error bar). Above 2.5 × 10−2 a clear improvement is obtained for the NCNM over the other models. It is of interest to contrast this result with an analogous experiment on discriminating twos vs. threes in [8], where p (xn) was used to derive a kernel. No improvement was found in this case, which [8] attributed to the difficulties of modelling p (xn) in high dimensions. These difficulties appear to be diminished for the NCNM, presumably because it never explicitly models p (xn). We would not want to read too much into the comparison between the transductive SVM and the NCNM since an exhaustive exploration of the regularisation parameter C was not undertaken. Similar comments also apply to the regularisation of the process variances for the NCNM. However, these preliminary results appear encouraging for the NCNM. Code for recreating all our experiments is available at http://www.dcs.shef.ac.uk/~neil/ncnm.

[Footnote 5: Initially we set the value to that learned by the NCNM, but performance was improved by selecting it to be the higher of the two.]

6 Discussion

We have presented an approach to learning a classifier in the presence of unlabeled data which incorporates the natural assumption that the data density between classes should be low. Our approach implements this qualitative assumption within a probabilistic framework without explicit, expensive and possibly counterproductive modeling of the class-conditional densities.
Our approach is similar in spirit to the transductive SVM, but with the major difference that in the SVM the process variance is discarded. In the NCNM, the process variance is a key part of data point selection; in particular, Figure 3 illustrated how inclusion of some data points actually increases the posterior process variance. Discarding process variance has advantages and disadvantages—an advantage is that it leads to an optimisation problem that is naturally sparse, while a disadvantage is that it prevents optimisation of kernel parameters via type-II maximum likelihood. In Section 4.1 we discussed how test data points affect the location of our decision boundary. An important desideratum would be that the location of the decision boundary should converge as the amount of test data goes to infinity. One direction for further research would be to investigate whether or not this is the case.

Acknowledgments

This work was supported under EPSRC Grant No. GR/R84801/01 and a grant from the National Science Foundation.

References

[1] A. Agresti. Categorical Data Analysis. John Wiley and Sons, 2002.
[2] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In Advances in Neural Information Processing Systems, Cambridge, MA, 2002. MIT Press.
[3] L. Csató. Gaussian Processes — Iterative Sparse Approximations. PhD thesis, Aston University, 2002.
[4] T. Joachims. Making large-scale SVM learning practical. In Advances in Kernel Methods: Support Vector Learning, Cambridge, MA, 1998. MIT Press.
[5] N. D. Lawrence and B. Schölkopf. Estimating a kernel Fisher discriminant in the presence of label noise. In Proceedings of the International Conference in Machine Learning, San Francisco, CA, 2001. Morgan Kaufmann.
[6] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: The informative vector machine. In Advances in Neural Information Processing Systems, Cambridge, MA, 2003. MIT Press.
[7] T. P. Minka.
A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[8] M. Seeger. Covariance kernels from Bayesian generative models. In Advances in Neural Information Processing Systems, Cambridge, MA, 2002. MIT Press.
[9] P. Sollich. Probabilistic interpretation and Bayesian methods for support vector machines. In Proceedings 1999 International Conference on Artificial Neural Networks, ICANN'99, pages 91–96, 1999.
[10] V. N. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
[11] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In Learning in Graphical Models, Cambridge, MA, 1999. MIT Press.
| 2004 | 161 | 2,576 |
Parallel Support Vector Machines: The Cascade SVM

Hans Peter Graf, Eric Cosatto, Leon Bottou, Igor Durdanovic, Vladimir Vapnik
NEC Laboratories, 4 Independence Way, Princeton, NJ 08540
{hpg, cosatto, leonb, igord, vlad}@nec-labs.com

Abstract

We describe an algorithm for support vector machines (SVM) that can be parallelized efficiently and scales to very large problems with hundreds of thousands of training vectors. Instead of analyzing the whole training set in one optimization step, the data are split into subsets and optimized separately with multiple SVMs. The partial results are combined and filtered again in a 'Cascade' of SVMs, until the global optimum is reached. The Cascade SVM can be spread over multiple processors with minimal communication overhead and requires far less memory, since the kernel matrices are much smaller than for a regular SVM. Convergence to the global optimum is guaranteed with multiple passes through the Cascade, but already a single pass provides good generalization. A single pass is 5x-10x faster than a regular SVM for problems of 100,000 vectors when implemented on a single processor. Parallel implementations on a cluster of 16 processors were tested with over 1 million vectors (2-class problems), converging in a day or two, while a regular SVM never converged in over a week.

1 Introduction

Support Vector Machines [1] are powerful classification and regression tools, but their compute and storage requirements increase rapidly with the number of training vectors, putting many problems of practical interest out of their reach. The core of an SVM is a quadratic programming (QP) problem, separating support vectors from the rest of the training data. General-purpose QP solvers tend to scale with the cube of the number of training vectors, O(k^3).
Specialized algorithms, typically based on gradient descent methods, achieve impressive gains in efficiency, but still become impractically slow for problem sizes on the order of 100,000 training vectors (2-class problems). One approach for accelerating the QP is based on 'chunking' [2][3][4], where subsets of the training data are optimized iteratively until the global optimum is reached. 'Sequential Minimal Optimization' (SMO) [5], which reduces the chunk size to 2 vectors, is the most popular of these algorithms. Eliminating non-support vectors early during the optimization process is another strategy that provides substantial savings in computation. Efficient SVM implementations incorporate steps known as 'shrinking' for identifying non-support vectors early [4][6][7]. In combination with caching of the kernel data, such techniques reduce the computation requirements by orders of magnitude. Another approach, named 'digesting', optimizes subsets closer to completion before adding new data [8], saving considerable amounts of storage. Improving compute speed through parallelization is difficult due to dependencies between the computation steps. Parallelization has been proposed by splitting the problem into smaller subsets and training a network to assign samples to the different subsets [9]. Variations of the standard SVM algorithm, such as the Proximal SVM, have been developed that are better suited for parallelization [10], but how widely they are applicable, in particular to high-dimensional problems, remains to be seen. A parallelization scheme in which the kernel matrix is approximated by a block-diagonal was proposed in [11]. A technique called the variable projection method [12] looks promising for improving the parallelization of the optimization loop.
In order to break through the limits of today's SVM implementations we developed a distributed architecture, where smaller optimizations are solved independently and can be spread over multiple processors, yet the ensemble is guaranteed to converge to the globally optimal solution.

2 The Cascade SVM

As mentioned above, eliminating non-support vectors early from the optimization proved to be an effective strategy for accelerating SVMs. Using this concept we developed a filtering process that can be parallelized efficiently. After evaluating multiple techniques, such as projections onto subspaces (in feature space) or clustering techniques, we opted to use SVMs as filters. This makes it straightforward to drive partial solutions towards the global optimum, while alternative techniques may optimize criteria that are not directly relevant for finding the global solution.

Figure 1: Schematic of a binary Cascade architecture. The data are split into subsets and each one is evaluated individually for support vectors in the first layer. The results are combined two-by-two and entered as training sets for the next layer. The resulting support vectors are tested for global convergence by feeding the result of the last layer into the first layer, together with the non-support vectors. TD: training data; SVi: support vectors produced by optimization i.

We initialize the problem with a number of independent, smaller optimizations and combine the partial results in later stages in a hierarchical fashion, as shown in Figure 1. Splitting the data and combining the results can be done in many different ways.
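To make the filter-and-merge scheme concrete, here is a toy sketch (our own construction, not the authors' implementation) in which the SVM filter is replaced by an exact shortcut: for a 1-D, linearly separable 2-class problem, a hard-margin SVM's support vectors are just the largest negative point and the smallest positive point.

```python
# Toy binary Cascade: filter each subset for support vectors, then merge
# the filtered results two-by-two and filter again, until one set remains.

def find_support_vectors(points):
    """Stand-in SVM 'filter' for 1-D separable (value, label) pairs:
    the support vectors are the extreme points of each class."""
    neg = [p for p in points if p[1] < 0]
    pos = [p for p in points if p[1] > 0]
    return [max(neg, key=lambda p: p[0]), min(pos, key=lambda p: p[0])]

def cascade_pass(subsets):
    """One pass through the Cascade. Assumes a power-of-two number of
    subsets, each containing both classes."""
    layer = [find_support_vectors(s) for s in subsets]
    while len(layer) > 1:
        merged = [layer[i] + layer[i + 1] for i in range(0, len(layer), 2)]
        layer = [find_support_vectors(s) for s in merged]
    return layer[0]

# Each subset holds one negative and one positive training point (TD / 4).
data = [(-5, -1), (2, +1), (-1, -1), (6, +1),
        (-4, -1), (1.5, +1), (-0.5, -1), (7, +1)]
subsets = [data[i:i + 2] for i in range(0, len(data), 2)]
svs = cascade_pass(subsets)
print(sorted(svs))   # the global support vectors: (-0.5, -1) and (1.5, +1)
```

A real Cascade would substitute a full SVM solver for `find_support_vectors`; the merging logic is unchanged.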
Figure 1 merely represents one possible architecture, a binary Cascade, that proved to be efficient in many tests. It is guaranteed to advance the optimization function in every layer, requires only modest communication from one layer to the next, and converges to a good solution quickly. In the architecture of Figure 1, sets of support vectors from two SVMs are combined and the optimization proceeds by finding the support vectors in each of the combined subsets. This continues until only one set of vectors is left. Often a single pass through this Cascade produces satisfactory accuracy, but if the global optimum has to be reached, the result of the last layer is fed back into the first layer. Each of the SVMs in the first layer receives all the support vectors of the last layer as inputs and tests its fraction of the input vectors for whether any of them have to be incorporated into the optimization. If this is the case for none of the SVMs in the input layer, the Cascade has converged to the global optimum; otherwise it proceeds with another pass through the network. In this architecture a single SVM never has to deal with the whole training set. If the filters in the first few layers are efficient at extracting the support vectors, then the largest optimization, the one of the last layer, has to handle only a few more vectors than the number of actual support vectors. Therefore, in problems where the support vectors are a small subset of the training vectors - which is usually the case - each of the sub-problems is much smaller than the whole problem (compare Section 4).

2.1 Notation (2-class, maximum margin)

We discuss here the 2-class classification problem, solved in dual formulation. The Cascade does not depend on details of the optimization algorithm, and alternative formulations or regression algorithms map equally well onto this architecture. The 2-class problem is the most difficult one to parallelize because there is no natural split into sub-problems.
Multi-class problems can always be separated into 2-class problems. Let us consider a set of l training examples (x_i, y_i), where x_i \in R^d represents a d-dimensional pattern and y_i = \pm 1 the class label. K(x_i, x_j) is the matrix of kernel values between patterns and \alpha_i the Lagrange coefficients to be determined by the optimization. The SVM solution for this problem consists in maximizing the following quadratic optimization function (dual formulation):

\max W(\alpha) = -\frac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l} \alpha_i \alpha_j y_i y_j K(x_i, x_j) + \sum_{i=1}^{l} \alpha_i    (1)

subject to 0 \le \alpha_i \le C, \forall i, and \sum_{i=1}^{l} y_i \alpha_i = 0.

The gradient G = \nabla W(\alpha) of W with respect to \alpha is then:

G_i = \frac{\partial W}{\partial \alpha_i} = -y_i \sum_{j=1}^{l} y_j \alpha_j K(x_i, x_j) + 1    (2)

2.2 Formal proof of convergence

The main issue is whether a Cascade architecture will actually converge to the global optimum. The following theorems show that this is the case for a wide range of conditions. Let S denote a subset of the training set \Omega, let W(S) be the optimal objective function over S (equation 1), and let Sv(S) \subset S be the subset of S for which the optimal \alpha are non-zero (the support vectors of S). It is obvious that:

\forall S \subset \Omega: \quad W(S) = W(Sv(S)) \le W(\Omega)    (3)

Let us consider a family F of sets of training examples for which we can independently compute the SVM solution. The set S*_F \in F that achieves the greatest W(S) will be called the best set in family F. We will write W(F) as a shorthand for W(S*_F), that is:

W(F) = \max_{S \in F} W(S) \le W(\Omega)    (4)

We are interested in defining a sequence of families F_t such that W(F_t) converges to the optimum. Two results are relevant for proving convergence.

Theorem 1: Let us consider two families F and G of subsets of \Omega. If a set T \in G contains the support vectors of the best set S*_F \in F, then W(G) \ge W(F).

Proof: Since Sv(S*_F) \subset T, we have W(S*_F) = W(Sv(S*_F)) \le W(T). Therefore W(F) = W(S*_F) \le W(T) \le W(G).

Theorem 2: Let us consider two families F and G of subsets of \Omega. Assume that every set T \in G contains the support vectors of the best set S*_F \in F, and that the union of all sets in G is equal to \Omega. Then W(G) = W(F) implies W(T) = W(S*_F) for all T \in G.

Proof: Theorem 1 implies that W(G) \ge W(F). Consider a vector \alpha* solution of the SVM problem restricted to the support vectors Sv(S*_F). For all T \in G, we have W(T) \ge W(Sv(S*_F)), because Sv(S*_F) is a subset of T. We also have W(T) \le W(G) = W(F) = W(S*_F) = W(Sv(S*_F)). Therefore W(T) = W(Sv(S*_F)). This implies that \alpha* is also a solution of the SVM on set T, and therefore \alpha* satisfies all the KKT conditions corresponding to all sets T \in G. This implies that \alpha* also satisfies the KKT conditions for the union of all sets in G.

Definition 1: A Cascade is a sequence (F_t) of families of subsets of \Omega satisfying:
i) For all t > 1, a set T \in F_t contains the support vectors of the best set in F_{t-1}.
ii) For all t, there is a k > t such that:
- all sets T \in F_k contain the support vectors of the best set in F_{k-1};
- the union of all sets in F_k is equal to \Omega.

Theorem 3: A Cascade (F_t) converges to the SVM solution of \Omega in finite time, namely:

\exists t^*: \forall t > t^*, \quad W(F_t) = W(\Omega)

Proof: Assumption i) of Definition 1 plus Theorem 1 imply that the sequence W(F_t) is monotonically increasing. Since this sequence is bounded by W(\Omega), it converges to some value W* \le W(\Omega). The sequence W(F_t) takes its values in the finite set of the W(S) for all S \subset \Omega. Therefore there is an l > 0 such that \forall t > l, W(F_t) = W*. This observation, assertion ii) of Definition 1, plus Theorem 2 imply that there is a k > l such that W(F_k) = W(\Omega). Since W(F_t) is monotonically increasing, W(F_t) = W(\Omega) for all t > k.
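As a concrete check of the quantities defined in Section 2.1, the following sketch evaluates the dual objective W(alpha) and its gradient G for a toy kernel matrix (the numbers are illustrative, not taken from the paper's experiments):

```python
# Dual objective and gradient of the 2-class SVM (equations 1 and 2).

def dual_objective_and_gradient(K, y, alpha):
    """W(a) = -1/2 sum_ij a_i a_j y_i y_j K_ij + sum_i a_i,
    G_i = dW/da_i = -y_i sum_j y_j a_j K_ij + 1."""
    n = len(alpha)
    W = sum(alpha) - 0.5 * sum(
        alpha[i] * alpha[j] * y[i] * y[j] * K[i][j]
        for i in range(n) for j in range(n))
    G = [1.0 - y[i] * sum(y[j] * alpha[j] * K[i][j] for j in range(n))
         for i in range(n)]
    return W, G

K = [[1.0, 0.2], [0.2, 1.0]]   # toy 2x2 kernel matrix
y = [+1, -1]
alpha = [0.5, 0.5]
W, G = dual_objective_and_gradient(K, y, alpha)
print(W, G)   # approximately 0.8 and [0.6, 0.6] for these values
```

Setting the gradient components of the non-bound coefficients to zero recovers the KKT conditions tested in the Cascade's feedback loop.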
As stated in Theorem 3, a layered Cascade architecture is guaranteed to converge to the global optimum if we keep the best set of support vectors produced in one layer and use it in at least one of the subsets in the next layer. This is the case in the binary Cascade shown in Figure 1. However, not all layers meet assertion ii) of Definition 1: the union of the sets in a layer is not equal to the whole training set, except in the first layer. By introducing the feedback loop that enters the result of the last layer into the first one, combined with all non-support vectors, we fulfill all assertions of Definition 1. We can test for global convergence in layer 1 and do a fast filtering in the subsequent layers.

2.3 Interpretation of the SVM filtering process

An intuitive picture of the filtering process is provided in Figure 2. If a subset S \subset \Omega is chosen randomly from the training set, it will most likely not contain all support vectors of \Omega, and its support vectors may not be support vectors of the whole problem. However, if there is not a serious bias in a subset, the support vectors of S are likely to contain some support vectors of the whole problem. Stated differently, it is plausible that 'interior' points in a subset are going to be 'interior' points in the whole set. Therefore, a non-support vector of a subset has a good chance of being a non-support vector of the whole set, and we can eliminate it from further analysis.

Figure 2: A toy problem illustrating the filtering process. Two disjoint subsets are selected from the training data and each of them is optimized individually (left, center; the data selected for the optimizations are the solid elements). The support vectors in each of the subsets are marked with frames. They are combined for the final optimization (right), resulting in a classification boundary (solid curve) close to the one for the whole problem (dashed curve).

3 Distributed Optimization

Figure 3: A Cascade with two input sets D1, D2.
W_i, G_i and Q_i are the objective function, gradient, and kernel matrix, respectively, of SVM_i (in vector notation); e_i is a vector of all ones. The gradients of SVM1 and SVM2 are merged (Extend) as indicated in (6) and are entered into SVM3. The support vectors of SVM3 are used to test D1, D2 for violations of the KKT conditions. Violators are combined with the support vectors for the next iteration.

Section 2 shows that a distributed architecture like the Cascade indeed converges to the global solution, but no indication is provided of how efficient this approach is. For a good performance we try to advance the optimization as much as possible in each stage. This depends on how the data are split initially, how partial results are merged, and how well an optimization can start from the partial results provided by the previous stage. We focus on gradient-ascent algorithms here, and discuss how to handle merging efficiently. For each of the two subproblems (i = 1, 2):

W_i = -\frac{1}{2}\alpha_i^T Q_i \alpha_i + e_i^T \alpha_i; \qquad G_i = -Q_i \alpha_i + e_i    (5)

3.1 Merging subsets

For this discussion we look at a Cascade with two layers (Figure 3). When merging the two results of SVM1 and SVM2, we can initialize the optimization of SVM3 to different starting points. In the general case the merged set starts with the following optimization function and gradient:

W_3 = -\frac{1}{2} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix}^T \begin{pmatrix} Q_1 & Q_{12} \\ Q_{21} & Q_2 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} + \begin{pmatrix} e_1 \\ e_2 \end{pmatrix}^T \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix}    (6)

G_3 = -\begin{pmatrix} Q_1 & Q_{12} \\ Q_{21} & Q_2 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} + \begin{pmatrix} e_1 \\ e_2 \end{pmatrix}    (7)

We consider two possible initializations. Case 1: \alpha_1 from SVM1, \alpha_2 = 0. Case 2: \alpha_1 from SVM1, \alpha_2 from SVM2. Since each of the subsets fulfills the KKT conditions, each of these cases represents a feasible starting point with \sum_i \alpha_i y_i = 0. Intuitively one would probably assume that case 2 is the preferred one, since we start from a point that is optimal in the two spaces defined by the vectors of D1 and D2.
If Q12 is 0 (Q21 is then also 0, since the kernel matrix is symmetric), the two spaces are orthogonal (in feature space) and the sum of the two solutions is the solution of the whole problem. Therefore, case 2 is indeed the best choice for initialization, because it represents the final solution. If, on the other hand, the two subsets are identical, then an initialization with case 1 is optimal, since this now represents the solution of the whole problem. In general, we are somewhere between these two cases, and therefore it is not obvious which case is best. While the theorems of Section 2 guarantee convergence to the global optimum, they do not provide any indication how fast this is going to happen. Empirically we find that the Cascade converges quickly to the global solution, as indicated in the examples below. All the problems we tested converge in 2 to 5 passes.

4 Experimental results

We implemented the Cascade architecture for a single processor as well as for a cluster of processors and tested it extensively with several problems; the largest are: MNIST¹, FOREST², NORB³ (all converted to 2-class problems). One of the main advantages of the Cascade architecture is that it requires far less memory than a single SVM, because the size of the kernel matrix scales with the square of the active set. This effect is illustrated in Figure 4. It has to be emphasized that both cases, single SVM and Cascade, use shrinking, but shrinking alone does not solve the problem of exorbitant sizes of the kernel matrix. A good indication of the Cascade's inherent efficiency is obtained by counting the number of kernel evaluations required for one pass. As shown in Table 1, a 9-layer Cascade requires only about 30% as many kernel evaluations as a single SVM for 100,000 training vectors. How many kernel evaluations actually have to be computed depends on the caching strategy and the memory size.

¹ MNIST: handwritten digits, d=784 (28x28 pixels); training: 60,000; testing: 10,000; classes: odd digits vs. even digits; http://yann.lecun.com/exdb/mnist.
² FOREST: d=54; class 2 versus rest; training: 560,000; testing: 58,100; ftp://ftp.ics.uci.edu/pub/machine-learning-databases/covtype/covtype.info.
³ NORB: images, d=9,216; training: 48,600; testing: 48,600; monocular; merged classes 0 and 1 versus the rest; http://www.cs.nyu.edu/~ylclab/data/norb-v1.0.

Figure 4: The size of the active set as a function of the number of iterations for a problem with 30,000 training vectors. The upper curve represents a single SVM, while the lower one shows the active set size for a 4-layer Cascade.

As indicated in Table 1, this parameter, and with it the compute times, are reduced even more. Therefore, even a simulation on a single processor can produce a speed-up of 5x to 10x or more, depending on the available memory size. For practical purposes, often a single pass through the Cascade produces sufficient accuracy (compare Figure 5). This offers a particularly simple way of solving problems of a size that would otherwise be out of reach for SVMs.

Table 1: Number of kernel evaluations (requested and actual, with a cache size of 800 MB) for different numbers of layers in the Cascade (single pass). Problem: FOREST, 100K vectors.

Number of layers:          1    2    3    4    5    6    7    8    9
K-eval requests (x10^9):   106  89   77   68   61   55   48   42   38
K-eval (x10^9):            33   12   4.5  3.9  2.7  2.4  1.9  1.6  1.4

The number of kernel evaluations is reduced as the number of Cascade layers increases. Then larger parts of the problem fit in the cache, reducing the actual kernel computations even more.

Table 2: Training times for a large data set with 1,016,736 vectors (MNIST was expanded by warping the handwritten digits). A Cascade with 5 layers is executed on a Linux cluster with 16 machines (AMD 1800, dual processors, 2 GB RAM per machine). The solution converges in 3 iterations. Also shown are the maximum number of training vectors on one machine and the number of support vectors in the last stage. W: optimization function; Acc: accuracy on test set. Kernel: RBF, gamma=1; C=50.

Iteration | Training time | Max # training vect. per machine | # Support vectors | W       | Acc.
0         | 21.6h         | 72,658                           | 54,647            | 167,427 | 99.08%
1         | 22.2h         | 67,876                           | 61,084            | 174,560 | 99.14%
2         | 0.8h          | 61,217                           | 61,102            | 174,564 | 99.13%

Table 2 shows how a problem with over one million vectors is solved in about a day (single pass) with a generalization performance equivalent to the fully converged solution. While the full training set contains over 1M vectors, one processor never handles more than 73k vectors in the optimization and 130k for the convergence test. The Cascade provides several advantages over a single SVM because it can reduce compute as well as storage requirements. The main limitation is that the last layer consists of one single optimization, whose size has a lower limit given by the number of support vectors. This is why the acceleration saturates at a relatively small number of layers. Yet this is not a hard limit, since a single optimization can be distributed over multiple processors as well, and we are working on efficient implementations of such algorithms.

Figure 5: Speed-up for a parallel implementation of the Cascade with 1 to 5 layers (1 to 16 SVMs, each running on a separate processor), relative to a single SVM: single pass (left), fully converged (middle) (MNIST, NORB: 3 iterations; FOREST: 5 iterations). On the right is the generalization performance of a 5-layer Cascade, measured after each iteration. For MNIST and NORB, the accuracy after one pass is the same as after full convergence (3 iterations).
For FOREST, the accuracy improves from 90.6% after a single pass to 91.6% after convergence (5 iterations). Training set sizes: MNIST: 60k, NORB: 48k, FOREST: 186k.

References

[1] V. Vapnik, "Statistical Learning Theory", Wiley, New York, 1998.
[2] B. Boser, I. Guyon, V. Vapnik, "A training algorithm for optimal margin classifiers", in Proc. 5th Annual Workshop on Computational Learning Theory, Pittsburgh, ACM, 1992.
[3] E. Osuna, R. Freund, F. Girosi, "Training Support Vector Machines, an Application to Face Detection", in Computer Vision and Pattern Recognition, pp. 130-136, 1997.
[4] T. Joachims, "Making large-scale support vector machine learning practical", in Advances in Kernel Methods, B. Schölkopf, C. Burges, A. Smola (eds.), Cambridge, MIT Press, 1998.
[5] J. C. Platt, "Fast training of support vector machines using sequential minimal optimization", in Advances in Kernel Methods, B. Schölkopf, C. Burges, A. Smola (eds.), 1998.
[6] C. Chang, C. Lin, "LIBSVM", http://www.csie.ntu.edu.tw/~cjlin/libsvm/.
[7] R. Collobert, S. Bengio, J. Mariéthoz, "Torch: A modular machine learning software library", Technical Report IDIAP-RR 02-46, IDIAP, 2002.
[8] D. DeCoste, B. Schölkopf, "Training Invariant Support Vector Machines", Machine Learning, 46, 161-190, 2002.
[9] R. Collobert, Y. Bengio, S. Bengio, "A Parallel Mixture of SVMs for Very Large Scale Problems", in Neural Information Processing Systems, Vol. 17, MIT Press, 2004.
[10] A. Tveit, H. Engum, "Parallelization of the Incremental Proximal Support Vector Machine Classifier using a Heap-based Tree Topology", Technical Report, IDI, NTNU, Trondheim, 2003.
[11] J. X. Dong, A. Krzyzak, C. Y. Suen, "A Fast Parallel Optimization for Training Support Vector Machine", in Proc. 3rd International Conference on Machine Learning and Data Mining, P. Perner and A. Rosenfeld (eds.), Springer LNAI 2734, pp. 96-105, Leipzig, Germany, July 2003.
[12] G. Zanghirati, L. Zanni, "A parallel solver for large quadratic programs in training support vector machines", Parallel Computing, Vol. 29, pp. 535-551, 2003.
| 2004 | 162 | 2,577 |
Sampling Methods for Unsupervised Learning

Rob Fergus* & Andrew Zisserman
Dept. of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK.
{fergus,az}@robots.ox.ac.uk

Pietro Perona
Dept. of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA.
perona@vision.caltech.edu

Abstract

We present an algorithm to overcome the local maxima problem in estimating the parameters of mixture models. It combines existing approaches from both EM and a robust fitting algorithm, RANSAC, to give a data-driven stochastic learning scheme. Minimal subsets of data points, sufficient to constrain the parameters of the model, are drawn from proposal densities to discover new regions of high likelihood. The proposal densities are learnt using EM and bias the sampling toward promising solutions. The algorithm is computationally efficient, as well as effective at escaping from local maxima. We compare it with alternative methods, including EM and RANSAC, on both challenging synthetic data and the computer vision problem of alpha-matting.

1 Introduction

In many real-world applications we wish to learn from data which is not labeled, to find clusters or some structure within the data. For example, in Fig. 1(a) we have some clumps of data that are embedded in noise. Our goal is to automatically find and model them. Since our data has many components, so must our model. Consequently the model will have many parameters, and finding the optimal settings for these is a difficult problem. Additionally, in real-world problems, the signal we are trying to learn is usually mixed in with a lot of irrelevant noise, as demonstrated by the example in Fig. 1(b). The challenge here is to find these lines reliably despite them constituting only a small portion of the data. Images from Google, shown in Fig. 1(c), are typical of real-world data, presenting both the challenges highlighted above.
Our motivating real-world problem is to learn a visual model from the set of images returned by Google's image search on an object type (such as "camel", "tiger" or "bottles"), like those shown. Since text-based cues alone were used to compile the images, typically only 20%-50% of the images are visually consistent and the remainder may not even be images of the sought object type, resulting in a challenging learning problem.

Latent variable models provide a framework for tackling such problems. The parameters of these may be estimated using algorithms based on EM [2] in a maximum likelihood framework. While EM provides an efficient estimation scheme, it has a serious problem in that, for complex models, a local maximum of the likelihood function is often reached rather than the global maximum. Attempts to remedy this problem include: annealed versions of EM [8]; Markov-chain Monte-Carlo (MCMC) based clustering [4]; and Split and Merge EM (SMEM) [9].

* Corresponding author.

Figure 1: The objective is to learn from contaminated data such as these: (a) Synthetic Gaussian data containing many components. (b) Synthetic line data with few components but with a large portion of background noise. (c) Images obtained by typing "bottles" into Google's image search.

Alternative approaches to unsupervised learning include the RANSAC [3, 5] algorithm and its many derivatives. These rely on stochastic methods and have proven highly effective at solving certain problems in computer vision, such as structure from motion, where the signal-to-noise ratios are typically very small. In this paper we introduce an unsupervised learning algorithm that is based on both latent variable models and RANSAC-style algorithms. While stochastic in nature, it operates in data space rather than parameter space, giving a far more efficient algorithm than traditional MCMC methods.
2 Specification of the problem

We have a set of data x = {x_1 ... x_N} with unseen labels y = {y_1 ... y_N} and a parametric mixture model with parameters \theta, of the form:

p(x|\theta) = \sum_y p(x, y|\theta) = \sum_y p(x|y, \theta) P(y|\theta)    (1)

We assume the number of mixture components is known and equal to C. We also assume that the parametric form of the mixture components is given. One of these components will model the background noise, while the remainder fit the signal within the data. Thus the task is to find the value of \theta that maximizes the likelihood p(x|\theta) of the data. This is not straightforward, as the dimensionality of \theta is large and the likelihood function is highly non-linear. Algorithms such as EM often get stuck in local maxima such as those illustrated in Fig. 2 and, since they use gradient descent alone, are unable to escape. Before describing our algorithm, we first review the robust fitting algorithm RANSAC, from which we borrow several key concepts to enable us to escape from local maxima.

2.1 RANSAC

RANSAC (RANdom SAmple Consensus) attempts to find global maxima by drawing random subsets of points, fitting a model to them and then measuring their support from the data. A variant, MLESAC [7], gives a probabilistic interpretation of the original scheme, which we now explain. The basic idea is to draw at random and without replacement from x a set of P samples for each of the C components in our model, P being the smallest number required to compute the parameters \theta_c for each component. Let draw i be represented by z_i, a vector of length N containing exactly P ones, indicating the points selected, with the rest being zeros. Thus x(z_i) is the subset of points drawn from x. From x(z_i) we then compute the parameters for the component, \theta^i_c. Having done this for all components, we then estimate the component mixing portions \pi using EM (keeping the other parameters fixed), giving a set of parameters for draw i: \theta^i = {\pi, \theta^i_1 ... \theta^i_C}.
Using these parameters, we compute the likelihood over all the data: p(x|\theta^i). The entire process is repeated until either we exceed our maximum limit on the number of draws or we reach a pre-defined performance level. The final set of parameters is the one that gave the highest likelihood: \theta^* = arg max_i p(x|\theta^i). Since this process explores a finite set of points in the space of \theta, it is unlikely that the globally optimal point \theta_ML will be found, but \theta^* should be close, so that running EM from it is guaranteed to find the global optimum. However, it is clear that the approach of sampling randomly, while guaranteed to eventually find a point close to \theta_ML, is very inefficient, since the number of possible draws scales exponentially with both P and C. Hence it is only suitable for small values of both P and C. While Tordoff et al. [6] proposed drawing the samples from a non-uniform density, this approach involved incorporating auxiliary information about each sample point which may not be available for more general problems. However, Matas et al. [1] propose a general scheme to draw samples selectively from points tentatively classified as signal. This increases the efficiency of the sampling and motivates our approach.

3 Our approach – PROPOSAL

Our approach, which we name PROPOSAL (PROPOsal based SAmple Learning), combines aspects of both EM and RANSAC to produce a method with the robustness of RANSAC but far greater efficiency, enabling it to work on more complex models. The problem with RANSAC is that points are drawn randomly. Even after a large number of draws this random sampling continues, despite the fact that we may have already discovered a good, albeit local, maximum in our likelihood function. The key idea in PROPOSAL is to draw samples from a proposal density.
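The RANSAC/MLESAC draw-fit-score loop described above can be sketched as follows. This is our own toy 1-D construction, not the paper's model: a single Gaussian component is fitted from minimal subsets of P = 2 points and scored under an assumed fixed mixture with a uniform background on [0, 10].

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_likelihood(data, mu, sigma):
    # Assumed mixture: 80% Gaussian signal, 20% uniform background on [0, 10].
    return sum(math.log(0.8 * gauss_pdf(x, mu, sigma) + 0.2 * 0.1) for x in data)

def mlesac_gaussian(data, n_draws=200, seed=0):
    """Draw minimal subsets, fit, score on all data, keep the best draw."""
    rng = random.Random(seed)
    best_ll, best_params = -float("inf"), None
    for _ in range(n_draws):
        a, b = rng.sample(data, 2)            # minimal subset, P = 2
        mu = 0.5 * (a + b)
        sigma = max(abs(a - b) / 2, 1e-3)     # spread implied by the pair
        ll = log_likelihood(data, mu, sigma)
        if ll > best_ll:
            best_ll, best_params = ll, (mu, sigma)
    return best_params

data = [4.8, 5.0, 5.1, 5.2, 4.9, 0.3, 9.7]    # cluster near 5 plus outliers
mu, sigma = mlesac_gaussian(data)
print(round(mu, 2), round(sigma, 2))          # the cluster is found despite outliers
```

The background component is what makes the scoring robust: pairs containing an outlier imply models that score poorly on the clustered points, so the best draw comes from the cluster itself.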
Initially this density is uniform, as in RANSAC, but as regions of high likelihood are discovered we update it so that it gives a strong bias toward producing good draws again, increasing the efficiency of the sampling process. However, having found a local maximum, we must still be able to escape and find the global maximum. Local maxima are characterized by too many components in one part of the space and too few in another. To resolve this we borrow ideas from Split and Merge EM (SMEM) [9]. SMEM uses two types of discrete moves to discover superior maxima. In the first, a component in an underpopulated region is split into two new ones, while in the second two components in an overpopulated area are merged. These two moves are performed together to keep the number of components constant. For the local maxima encountered in Fig. 2(a), merging the green and blue components while splitting the red component will yield a superior solution.

Figure 2: (a) Examples of different types of local maxima encountered. The green and blue components on the left are overpopulating a small clump of data. The magenta component in the center models noise, while missing a clump altogether. The single red component on the right is inadequately modeling two clumps of data. (b) The global optimum solution.

PROPOSAL acts in a similar manner, by first finding components that are superfluous via two tests (described in Section 3.3): (i) the Evaporation test, which would find the magenta component in Fig. 2(a), and (ii) the Overlap test, which would identify one of the green and blue components in Fig. 2(a). Then their proposal densities are adjusted so that they focus on data that is underpopulated by the model; thus subsequent samples are likely to discover a superior solution.
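The biased draws that PROPOSAL makes from a component's proposal density amount to weighted sampling without replacement, with the responsibilities P(y = c | x_n, theta) as weights. A minimal sketch (the weights below are illustrative stand-ins for responsibilities computed by EM):

```python
import random

def draw_from_proposal(data, weights, P, rng):
    """Draw P distinct points, each round picking index i with probability
    proportional to weights[i]; assumes at least P positive weights."""
    w = list(weights)
    chosen = []
    for _ in range(P):
        cand = [i for i, wi in enumerate(w) if wi > 0]
        r = rng.random() * sum(w[i] for i in cand)
        pick = cand[-1]              # guard against floating-point rounding
        for i in cand:
            r -= w[i]
            if r < 0:
                pick = i
                break
        chosen.append(data[pick])
        w[pick] = 0.0                # without replacement
    return chosen

points = ['a', 'b', 'c', 'd', 'e']
resp = [0.5, 0.0, 0.3, 0.0, 0.2]     # zero-responsibility points are never drawn
picked = draw_from_proposal(points, resp, P=3, rng=random.Random(1))
print(sorted(picked))                # always ['a', 'c', 'e'] for these weights
```

With uniform weights this reduces to RANSAC's random draw; as the responsibilities peak, the draws concentrate on the data each component already explains.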
An overview of the algorithm is as follows:

Algorithm 1 PROPOSAL
Require: Data x; parameters C, πmin, ϵ
for i = 1 to IMax do
    repeat
        • For each component c, compute parameters θi_c from P points drawn from the proposal density qc(x|θc).
        • Estimate the mixing proportions πi using EM, keeping the θi_c fixed.
        • Compute the likelihood Li = \prod_n p(x_n | \pi^i, \theta^i_1, \ldots, \theta^i_C).
    until Li > LBest_Rough
    • Refine θi using EM to give θ∗ with likelihood L∗.
    if L∗ > LBest then
        • Update the proposal densities q(x|θ) using θ∗.
        • Apply the Evaporation and Overlap tests (using parameters πmin and ϵ).
        • Reassign the proposal densities of any components failing the above tests.
        • Let LBest_Rough = Li; let LBest = L∗ and let θBest = θ∗.
    end if
end for
Output: θBest and LBest.

We now elaborate on the various stages of the algorithm, using Fig. 3 as an example.

3.1 Sampling from data proposal densities

Each component c draws its samples from a proposal density, which is an empirical distribution of the form

q_c(x|\theta) = \frac{\sum_{n=1}^{N} \delta(x - x_n) P(y = c | x_n, \theta_c)}{\sum_{n=1}^{N} P(y = c | x_n, \theta_c)}    (2)

where P(y|x, θ) is the posterior on the labels:

P(y | x, \theta) = \frac{p(x | y, \theta) P(y | \theta)}{\sum_y p(x | y, \theta) P(y | \theta)}    (3)

Initially, q(x|θ) is uniform, so we are drawing the points completely at random, but q(x|θ) will become more peaked, biasing our draws toward the data picked out by the component, as demonstrated in Fig. 3(c), which shows the non-uniform proposal densities for each component on a simulated problem. Note that if we are sampling with replacement, then E[z] = P(y|x, θ)¹. However, since we must avoid degenerate combinations of points, certain values of z are not permissible, so E[z] → P(y|x, θ) as N → ∞.

3.2 Computing model parameters

Each component c has a subset of points picked out by z from which its parameters θi_c are estimated. Since each subset is of the minimal size required to constrain all parameters, this process is straightforward and usually closed-form.
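The inner loop of Algorithm 1 can be sketched as follows: one helper builds the empirical proposal density of Eq. (2) from the responsibilities, one draws a minimal subset from it, and one runs the partial EM of Section 3.2 that updates only the mixing proportions. All function names are our own, and drawing without replacement by sequential weighted picks is an implementation choice:

```python
import random

def proposal_weights(responsibilities):
    """Empirical proposal density q_c of Eq. (2): one weight per data point,
    proportional to the posterior responsibility P(y = c | x_n, theta_c)."""
    total = sum(responsibilities)
    return [r / total for r in responsibilities]

def draw_subset(weights, P, rng):
    """Draw P distinct point indices from the proposal density
    (without replacement, to avoid degenerate subsets)."""
    idx, w, chosen = list(range(len(weights))), list(weights), []
    for _ in range(P):
        r = rng.random() * sum(w)
        acc, pick = 0.0, len(w) - 1
        for j, wj in enumerate(w):
            acc += wj
            if acc >= r:
                pick = j
                break
        chosen.append(idx.pop(pick))
        w.pop(pick)
    return chosen

def estimate_mixing_proportions(densities, iters=50):
    """Partial EM: densities[n][c] = p(x_n | y = c, theta_c) are fixed;
    only the mixing proportions pi are updated."""
    N, C = len(densities), len(densities[0])
    pi = [1.0 / C] * C
    for _ in range(iters):
        resp = []
        for row in densities:                       # E-step: responsibilities
            s = sum(p * d for p, d in zip(pi, row))
            resp.append([p * d / s for p, d in zip(pi, row)])
        pi = [sum(r[c] for r in resp) / N for c in range(C)]   # M-step
    return pi
```

With uniform responsibilities, the first helper reproduces RANSAC's uniform sampling.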
For the Gaussian example in Fig. 3, we draw 3 points for each of the 4 Gaussian components, whose mean and covariance matrices are directly computed, using appropriate normalizations to give unbiased estimators of the population parameters. Given θi_c for each component, the only unknown parameter is their relative weighting, π = P(y|θ). This is estimated using EM. The E-step involves computing P(y|x, θ) from (3). This can be done efficiently since the component parameters are fixed, allowing the precomputation of p(x|y, θ). The M-step is then

\pi_c = \frac{1}{N} \sum_{n=1}^{N} P(y = c | x_n, \theta).

¹Recall that z is a vector representing a draw of P points from q(x|θ). It is of length N, with exactly P ones, the remaining elements being zero.

3.3 Updating proposal densities

Having obtained a rough model for draw i with parameters θi and likelihood Li, we first check whether its likelihood exceeds that of the previous best rough model, LBest_Rough. If so, we refine the rough model to ensure that we are at an actual maximum, since the sampling process limits us to a set of discrete points in θ-space, which are unlikely to be maxima themselves. Running EM again, this time updating all parameters and using θi as an initialization, the parameters converge to θ∗, having likelihood L∗. If L∗ exceeds a second threshold, the previous best refined model’s likelihood LBest, then we recompute the proposal densities, as given in (2), using P(y|x, θ∗). The two thresholds are needed to avoid wasting time refining θi’s that are not initially promising. In updating the proposal densities, two tests are applied to θ∗:

1. Evaporation test: if πc < πmin, then the component is deemed to model noise and is flagged for resetting. Fig. 3 illustrates this test.

2. Overlap test²: if for any two components a and b

\frac{\|\theta^i_a - \theta^i_b\|^2}{\|\theta^i_a\| \, \|\theta^i_b\|} < \epsilon^2,

then the two components are judged to be fitting the same data. Component a or b is picked at random and flagged for resetting.
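The two tests can be sketched directly. One simplification to note: where the text picks component a or b at random, this sketch deterministically flags b, and it assumes nonzero parameter vectors:

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def failing_components(pi, thetas, pi_min=0.01, eps=0.1):
    """Return indices of components flagged for resetting by the
    Evaporation test (pi_c < pi_min) and the Overlap test
    (||theta_a - theta_b||^2 / (||theta_a|| ||theta_b||) < eps^2)."""
    flagged = set()
    for c, p in enumerate(pi):
        if p < pi_min:          # Evaporation: the component models noise
            flagged.add(c)
    C = len(thetas)
    for a in range(C):
        for b in range(a + 1, C):
            diff = [x - y for x, y in zip(thetas[a], thetas[b])]
            na = math.sqrt(dot(thetas[a], thetas[a]))
            nb = math.sqrt(dot(thetas[b], thetas[b]))
            if dot(diff, diff) / (na * nb) < eps ** 2:
                flagged.add(b)  # Overlap: the pair fits the same data
    return sorted(flagged)
```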
3.4 Resetting a proposal density

If a component’s proposal density is to be reset, it is given a new density that maximizes the entropy of the mean proposal density q_M(x|\theta) = \frac{1}{C} \sum_{c=1}^{C} q_c(x|\theta). By maximizing the entropy of qM(x|θ), we ensure that subsequent samples are drawn as widely as possible, maximizing the chances of escaping from the local maximum. If the qd(x|θ) are the proposal densities to be reset, then we wish to maximize

H[q_M(x|\theta)] = H\left[ \frac{1}{D} \sum_{d} q_d(x|\theta) + \frac{1}{C - D} \sum_{c \neq d} q_c(x|\theta) \right]    (4)

with the constraints that \sum_n q_d(x_n|\theta) = 1 for all d and q_d(x_n|\theta) \geq 0 for all n, d. For brevity, let us define q_f(x|\theta) = \frac{1}{C - D} \sum_{c \neq d} q_c(x|\theta). Since a uniform distribution has the highest entropy, but qd(x|θ) cannot be negative, the optimal choice of qd(x|θ) will be zero everywhere except for the x corresponding to the smallest k values of qf(x|θ). At these points qd(x|θ) must add to qf(x|θ) to give a constant qM(x|θ). We solve for k using the other constraint, that probability mass of exactly D/C must be added. Thus qd(x|θ) will be large where qf(x|θ) is small, giving the appealing result that the new component will draw preferentially from underpopulated portions of the data, as demonstrated in Fig. 3(d).

²An alternative overlap test would compare the responsibilities of each pair of components a and b:
\frac{P(y = a | x, \theta^i_a)^T P(y = b | x, \theta^i_b)}{\|P(y = a | x, \theta^i_a)\| \, \|P(y = b | x, \theta^i_b)\|} < \epsilon^2.

Figure 3: The Evaporation step in action. A local maximum is found in (a). (c) shows the corresponding proposal densities for each component (black is the background model). Note how spiky the green density is, since it is only modeling a few data points.
Since πgreen < πmin, its proposal density is set to qd(x|θ), as shown in (d). Note how qd(x|θ) is higher in the areas occupied by the red component, which is a poor fit for two clumps of data. (b) The global maximum, along with its proposal density (e). Note that the data points are ordered for ease of visualization only.

4 Experiments

4.1 Synthetic experiments

We tested PROPOSAL on two types of synthetic data – mixtures of 2-D lines and Gaussians with uniform background noise. We compared six algorithms: plain EM; Deterministic Annealing EM (DAEM) [8]; Stochastic EM (SEM) [10]; Split and Merge EM (SMEM); MLESAC and PROPOSAL. Four experiments were performed: two using lines and two using Gaussians. The first pair of experiments examined how many components the different algorithms could handle reliably. The second pair tested robustness to background noise. In the Gaussian experiments, the model consisted of a mixture of 2-D Gaussian densities and a uniform background component. In the line experiments, the model consisted of a mixture of densities modeling the residual to the line with a Gaussian noise model, whose variance σ was also learnt. Each line component therefore has three parameters: its gradient, y-intercept and variance. Each experiment was repeated 250 times with a different, randomly generated dataset, examples of which can be seen in Fig. 1(a) & (b). In each experiment, the same time was allocated to each algorithm; for example, EM, which ran quickly, was repeated until it had spent the same amount of time as the slowest algorithm (usually PROPOSAL or SMEM), and the best result from the repeated runs was taken. For simplicity, the Overlap test compared only the means of the distributions. Parameter values used for PROPOSAL were: I = 200, πmin = 0.01 and ϵ = 0.1. In the first pair of experiments, the number of components was varied from 2 up to 10 for lines and up to 20 for Gaussians. The background noise was held constant at 20%. The results are shown in Fig.
4. PROPOSAL clearly outperforms the other approaches. In the second pair of experiments, C = 3 components were used, with the background noise varying from 1% up to 99%. The other parameters were the same as in the first experiment. The results can be seen in Fig. 5. Both SMEM and PROPOSAL outperformed EM convincingly. PROPOSAL performed well down to 30% signal in the line case (i.e. 10% per line) and 20% in the Gaussian case.

Figure 4: Experiments showing robustness to the number of components in the model. The x-axis is the number of components, ranging from 2 upwards. The y-axis is the portion of correct solutions found over 250 runs, each with a different randomly generated dataset. Key: EM (red solid); DAEM (cyan dot-dashed); SEM (magenta solid); SMEM (black dotted); MLESAC (green dashed) and PROPOSAL (blue solid). (a) Results for line data. (b) A typical line dataset for C = 10. (c) Results for Gaussian data. PROPOSAL is still achieving 75% correct with 10 components – twice the performance of the next best algorithm (SMEM). (d) A typical Gaussian dataset for C = 10.

Figure 5: Experiments showing robustness to background noise. The x-axis is the portion of noise, varying between 1% and 99%.
The y-axis is the portion of correct solutions found. Key: EM (red solid); DAEM (cyan dot-dashed); SEM (magenta solid); SMEM (black dotted); MLESAC (green dashed) and PROPOSAL (blue solid). (a) Results for three-component line data. (b) A typical line dataset with 80% noise. (c) Results for three-component Gaussian data. SMEM is marginally superior to PROPOSAL. (d) A typical Gaussian dataset with 80% noise.

4.2 Real data experiments

We test PROPOSAL against other clustering methods on the computer vision problem of alpha-matting (the extraction of a foreground element from a background image by estimating the opacity of each pixel of the foreground element; see Fig. 6 for examples). The simple approach we adopt is to first form a tri-mask (the composite image is divided into 3 regions: pixels that are definitely foreground; pixels that are definitely background; and uncertain pixels). Two color models are constructed by clustering the foreground and background pixels respectively with a mixture of Gaussians. The opacities (alpha values) of the uncertain pixels are then determined by comparing the color of each pixel under the foreground and background color models. Figure 7 compares the likelihood of the foreground and background color models clustered using EM, SMEM and PROPOSAL on two sets of images (11 face images and 5 dog images, examples of which are shown in Fig. 6). Each model clusters ∼2×10^4 pixels in a 4-D space (R, G, B and edge strength) with a 10-component model. In the majority of cases, PROPOSAL can be seen to outperform SMEM, which in turn outperforms plain EM.

5 Discussion

In contrast to SMEM, MCEM [10] and MCMC [4], which operate in θ-space, PROPOSAL is a data-driven approach. It predominantly examines the small portion of θ-space which has support from the data. This gives the algorithm its robustness and efficiency. We have shown PROPOSAL to work well on synthetic data, outperforming many standard algorithms.
On real data, PROPOSAL also convincingly beats SMEM and EM. One problem with PROPOSAL is that P scales with the square of the dimension of the data (due to the number of terms in the covariance matrix), meaning that for high dimensions a very large number of draws would be needed to find new portions of data. Hence PROPOSAL is suited to problems of low dimension.

Figure 6: The alpha-matte problem. (a) & (d): Composite images. (b) & (e): Background images. (c) & (f): Desired object segmentation. This figure is best viewed in color.

Figure 7: Clustering performance on (Left) 11 face images (e.g. Fig. 6(a)) and (Right) 5 dog images (e.g. Fig. 6(d)). The x-axis is the image number; the y-axis is the log-likelihood of the foreground color model on foreground pixels plus the log-likelihood of the background color model on background pixels. Three clustering methods are shown: EM (red); SMEM (green) and PROPOSAL (blue). Lines indicate the mean of 10 runs from different random initializations, while error bars show the best and worst models found over the 10 runs.

Acknowledgments: Funding was provided by EC Project CogViSys, EC NOE Pascal, Caltech CNSE, the NSF and the UK EPSRC. Thanks to F. Schaffalitzky & P. Torr for useful discussions.

References
[1] O. Chum, J. Matas, and J. Kittler. Locally optimized RANSAC. In DAGM 2003: Proceedings of the 25th DAGM Symposium, pages 236–243, 2003.
[2] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1–38, 1977.
[3] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. ACM, 24(6):381–395, 1981.
[4] S. Richardson and P. J. Green.
On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society B, 59(4):731–792, 1997.
[5] C. V. Stewart. Robust parameter estimation. SIAM Review, 41(3):513–537, Sept. 1999.
[6] B. Tordoff and D. W. Murray. Guided sampling and consensus for motion estimation. In Proc. ECCV, 2002.
[7] P. H. S. Torr and A. Zisserman. MLESAC: A new robust estimator with application to estimating image geometry. CVIU, 78:138–156, 2000.
[8] N. Ueda and R. Nakano. Deterministic annealing EM algorithm. Neural Networks, 11(2):271–282, 1998.
[9] N. Ueda, R. Nakano, Z. Ghahramani, and G. E. Hinton. SMEM algorithm for mixture models. Neural Computation, 12(9):2109–2128, 2000.
[10] G. Wei and M. Tanner. A Monte Carlo implementation of the EM algorithm. Journal of the American Statistical Association, 85:699–704, 1990.
Limits of Spectral Clustering Ulrike von Luxburg and Olivier Bousquet Max Planck Institute for Biological Cybernetics Spemannstr. 38, 72076 T¨ubingen, Germany {ulrike.luxburg,olivier.bousquet}@tuebingen.mpg.de Mikhail Belkin The University of Chicago, Department of Computer Science 1100 E 58th st., Chicago, USA misha@cs.uchicago.edu Abstract An important aspect of clustering algorithms is whether the partitions constructed on finite samples converge to a useful clustering of the whole data space as the sample size increases. This paper investigates this question for normalized and unnormalized versions of the popular spectral clustering algorithm. Surprisingly, the convergence of unnormalized spectral clustering is more difficult to handle than the normalized case. Even though recently some first results on the convergence of normalized spectral clustering have been obtained, for the unnormalized case we have to develop a completely new approach combining tools from numerical integration, spectral and perturbation theory, and probability. It turns out that while in the normalized case, spectral clustering usually converges to a nice partition of the data space, in the unnormalized case the same only holds under strong additional assumptions which are not always satisfied. We conclude that our analysis gives strong evidence for the superiority of normalized spectral clustering. It also provides a basis for future exploration of other Laplacian-based methods. 1 Introduction Clustering algorithms partition a given data set into several groups based on some notion of similarity between objects. The problem of clustering is inherently difficult and often lacks clear criteria of “goodness”. Despite the difficulties in determining the quality of a given partition, it is still possible to study desirable properties of clustering algorithms from a theoretical point of view. 
In this paper we study the consistency of spectral clustering, which is an important property in the general framework of statistical pattern recognition. A clustering algorithm is consistent if it produces a well-defined (and, hopefully, sensible) partition, given sufficiently many data points. Consistency is a basic sanity check, as an algorithm which is not consistent would change the partition indefinitely as we add points to the dataset, and consequently no reasonable small-sample performance could be expected at all. Surprisingly, relatively little research into the consistency of clustering algorithms has been done so far, the exceptions being only k-centers (Pollard, 1981) and linkage algorithms (Hartigan, 1985). While finite-sample properties of spectral clustering have been studied from a theoretical point of view (Spielman and Teng, 1996; Guattery and Miller, 1998; Kannan et al., 2000; Ng et al., 2001; Meila and Shi, 2001), we focus on the limit behavior as the sample size tends to infinity. In this paper we develop a new strategy to prove convergence results for spectral clustering algorithms. Unlike our previous attempts, this strategy allows us to obtain results for both normalized and unnormalized spectral clustering. As a first result we can recover the main theorem of von Luxburg et al. (2004), which had been proved with different and more restrictive methods and which, in brief, states that normalized spectral clustering usually converges. We also extend that result to the case of multiple eigenvectors. Our second result concerns the case of unnormalized spectral clustering, for which no convergence properties had been known so far. This case is much more difficult to treat than the normalized case, as the limit operators have a more complicated form. We show that unnormalized spectral clustering also converges, but only under strong additional assumptions.
Contrary to the normalized case, those assumptions are not always satisfied, as we can show by constructing an example, and in this case there is no hope for convergence. Even worse, on a finite sample it is impossible to verify whether the assumptions hold or not. As a third result we prove statements about the form of the limit clustering. It turns out that in case of convergence, the structure of the clustering constructed on finite samples is conserved in the limit process. From this we can conclude that if convergence takes place, then the limit clustering presents an intuitively appealing partition of the data space. It is also interesting to note that several recent methods for semi-supervised and transductive learning are based on eigenvectors of similarity graphs (cf. Belkin and Niyogi, 2003; Chapelle et al., 2003; Zhu et al., 2003). Our theoretical framework can also be applied to investigate the consistency of those algorithms with respect to the unlabeled data. There is an ongoing debate on the advantages of the normalized versus unnormalized graph Laplacians for spectral clustering. It has been found empirically that the normalized version performs as well or better than the unnormalized version (e.g., Van Driessche and Roose, 1995; Weiss, 1999; in the context of semi-supervised learning see also Zhou et al., 2004). We are now able to provide additional evidence to this effect from a theoretical point of view. Normalized spectral clustering is a well-behaved algorithm which always converges to a sensible limit clustering. Unnormalized spectral clustering on the other hand should be treated with care as consistency can only be asserted under strong assumptions which are not always satisfied and, moreover, are difficult to check in practice. 
2 Graph Laplacians and spectral clustering on finite samples

In the following we denote by σ(T) the spectrum of a linear operator, by C(X) the space of continuous functions on X with infinity norm, and by rg(d) the range of a function d ∈ C(X). For given sample points X1, ..., Xn drawn iid according to an (unknown) distribution P on some data space X we denote the empirical distribution by Pn. For a nonnegative, symmetric similarity function s : X×X → IR we define the similarity matrix as K_n := (s(X_i, X_j))_{i,j=1,...,n}, set d_i := \sum_{j=1}^{n} s(X_i, X_j), and define the degree matrix D_n as the diagonal matrix with entries d_i. The unnormalized Laplacian matrix is defined as L_n := D_n − K_n, and two common ways of normalizing it are L′_n := D_n^{−1/2} L_n D_n^{−1/2} and L′′_n := D_n^{−1} L_n. In the following we always arrange the eigenvalues of the Laplacian matrices in non-decreasing order 0 = λ1 ≤ λ2 ≤ ... ≤ λn, respecting their multiplicities. In its simplest form, unnormalized (resp. normalized) spectral clustering partitions the sample points Xi into two groups according to whether the i-th coordinate of the second eigenvector is larger or smaller than a certain threshold b ∈ IR. Often, instead of considering only the second eigenvector, one uses the first r eigenvectors (for some small number r) simultaneously to obtain a partition into several sets. For an overview of different spectral clustering algorithms see for example Weiss (1999).

3 Limit results

In this section we state and discuss our main results. The general assumptions in the following three theorems are that the data space X is a compact metric space from which the sample points (X_i)_{i∈IN} are drawn independently according to an unknown probability distribution P. Moreover we require the similarity function s : X×X → IR to be continuous, and in the normalized case to be bounded away from 0, that is, s(x, y) > l > 0 for all x, y ∈ X and some l ∈ IR.
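The finite-sample procedure just described can be sketched in a few lines of numpy. The Gaussian similarity function, the threshold b = 0, and the use of only the second eigenvector are illustrative choices, not prescriptions of the paper:

```python
import numpy as np

def spectral_clustering(X, sigma=1.0, normalized=True, b=0.0):
    """Two-way spectral clustering: build K_n and D_n, form the
    (normalized) Laplacian, and threshold the second eigenvector at b."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))       # similarity matrix K_n
    d = K.sum(axis=1)                        # degrees d_i
    L = np.diag(d) - K                       # unnormalized L_n
    if normalized:
        Dinv = np.diag(1.0 / np.sqrt(d))
        L = Dinv @ L @ Dinv                  # L'_n = D_n^{-1/2} L_n D_n^{-1/2}
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in non-decreasing order
    return (vecs[:, 1] > b).astype(int)      # threshold the second eigenvector
```

On two well-separated point clouds, the second eigenvector changes sign between the groups, so the thresholding recovers them.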
By d ∈ C(X) we will denote the “degree function”, and U′ and U will denote the “limit operators” of L′_n and L_n for n → ∞. The exact definitions of these functions and operators, as well as all further mathematical details, definitions, and proofs, are postponed to Section 4. Let us start with the first question raised in the introduction: does the spectral clustering constructed on a finite sample converge to a partition of the whole data space as the sample size increases? In the normalized case, convergence results have recently been obtained in von Luxburg et al. (2004). However, those methods were specifically designed for the normalized Laplacian and cannot be used in the unnormalized case. Here we state a convergence result for the normalized case in the form in which it can be obtained with our new methods. The theorem is formulated for the symmetric normalization L′_n, but it holds similarly for the normalization L′′_n. Theorem 1 (Convergence of normalized spectral clustering) Under the general assumptions, if the first r eigenvalues of the limit operator U′ have multiplicity 1, then the same holds for the first r eigenvalues of L′_n for sufficiently large n. In this case, the first r eigenvalues of L′_n converge to the first r eigenvalues of U′, and the corresponding eigenvectors converge almost surely. The partitions constructed by normalized spectral clustering from the first r eigenvectors on finite samples converge almost surely to a limit partition of the whole data space. Our new result about convergence in the unnormalized case is the following: Theorem 2 (Convergence of unnormalized spectral clustering) Under the general assumptions, if the first r eigenvalues of the limit operator U have multiplicity 1 and are not elements of rg(d), then the same holds for the first r eigenvalues of (1/n)L_n for sufficiently large n.
In this case, the first r eigenvalues of (1/n)L_n converge to the first r eigenvalues of U, and the corresponding eigenvectors converge almost surely. The partitions constructed by unnormalized spectral clustering from the first r eigenvectors on finite samples converge almost surely to a limit partition of the whole data space. At first glance, this theorem looks very similar to Theorem 1: if the general assumptions are satisfied and the first eigenvalues are “nice”, then unnormalized spectral clustering converges. However, the difference between Theorems 1 and 2 lies in what it means for an eigenvalue to be “nice”. In Theorem 1 we only require the eigenvalues to have multiplicity 1 (and in fact, if the multiplicity is larger than 1, we can still prove convergence of eigenspaces instead of eigenvectors). In Theorem 2, however, the condition λ ∉ rg(d) has to be satisfied. In the proof this is needed to ensure that the eigenvalue λ is isolated in the spectrum of U, which is a fundamental requirement for applying perturbation theory to the convergence of eigenvectors. If this condition is not satisfied, perturbation theory is in principle unsuitable for obtaining convergence results for eigenvectors. The reason why this condition appears in the unnormalized case but not in the normalized case lies in the structure of the respective limit operators, which, surprisingly, is more complicated in the unnormalized case than in the normalized one. In the next section we will construct an example where the second eigenvalue indeed lies within rg(d). This means that there actually exist situations in which Theorem 2 cannot be applied, and hence unnormalized spectral clustering might not converge. Now we turn to the second question raised in the introduction: in case of convergence, is the limit clustering a reasonable clustering of the whole data space?
To answer this question we analyze the structure of the limit operators (for simplicity we state this for the unnormalized case only). Assume that we are given a partition X = ∪_{i=1}^{k} X_i of the data space into k disjoint sets. If we order the sample points according to their memberships in the sets X_i, then we can write the Laplacian in the form of a block matrix L_n ≃ (L_{ij,n})_{i,j=1,...,k}, where each sub-matrix L_{ij,n} contains the rows of L_n corresponding to points in the set X_i and the columns corresponding to points in X_j. In a similar way, the limit operator U can be decomposed into a matrix of operators U_{ij} : C(X_j) → C(X_i). Now we can show that for all i, j = 1, ..., k the sub-matrices (1/n)L_{ij,n} converge to the corresponding sub-operators U_{ij} such that their spectra converge in the same way as in Theorems 1 and 2. This is a very strong result, as it means that for every given partition of X, the structure of the operators is preserved in the limit process. Theorem 3 (Structure of the limit operators) Let X = ∪_{i=1}^{k} X_i be a partition of the data space. Let L_{ij,n} be the sub-matrices of L_n introduced above, U_{ij} : C(X_j) → C(X_i) the restrictions of U corresponding to the sets X_i and X_j, and L′_{ij,n} and U′_{ij} the analogous quantities for the normalized case. Then under the general assumptions, (1/n)L_{ij,n} converges compactly to U_{ij} a.s. and L′_{ij,n} converges compactly to U′_{ij} a.s. With this result it is then possible to give a first answer to the question of what the limit partitions look like. In Meila and Shi (2001) it was established that normalized spectral clustering tries to find a partition such that a random walk on the sample points tends to stay within each of the partition sets X_i instead of jumping between them. With the help of Theorem 3, the same can now be said for the normalized limit partition, and this can also be extended to the unnormalized case. The operators U′ and U can be interpreted as diffusion operators on the data space.
The limit clusterings try to find a partition such that the diffusion tends to stay within the sets X_i instead of jumping between them. In particular, the limit partition segments the data space into sets such that the similarity within the sets is high and the similarity between the sets is low, which intuitively is what clustering is supposed to do.

4 Mathematical details

In this section we explain the general constructions and steps needed to prove Theorems 1, 2, and 3. However, as the proofs are rather technical, we only present proof sketches that convey the overall strategy. Detailed proofs can be found in von Luxburg (2004), where all proofs are spelled out in full length. Moreover, we focus on the proof of Theorem 2, as the other results can be proved similarly. To be able to define convergence of linear operators, all operators have to act on the same space. As this is not the case for the matrices L_n for different n, for each L_n we will construct a related operator U_n on the space C(X) which will be used instead of L_n. In Step 2 we show that the interesting eigenvalues and eigenvectors of (1/n)L_n and U_n are in one-to-one correspondence. Then we prove that the U_n converge in a strong sense to some limit operator U on C(X) in Step 3. As we show in Step 4, this convergence implies the convergence of eigenvalues and eigenvectors of U_n. Finally, assembling the parts will finish the proof of Theorem 2.

Step 1: Construction of the operators U_n on C(X). We first define the empirical and true degree functions in C(X) as

d_n(x) := \int s(x, y) \, dP_n(y)   and   d(x) := \int s(x, y) \, dP(y).

Corresponding to the matrices D_n and K_n we introduce the following multiplication and integral operators on C(X):

M_{d_n} f(x) := d_n(x) f(x)   and   M_d f(x) := d(x) f(x),
S_n f(x) := \int s(x, y) f(y) \, dP_n(y)   and   S f(x) := \int s(x, y) f(y) \, dP(y).

Note that d_n(X_i) = \frac{1}{n} d_i, and for f ∈ C(X) and v := (f(X_1), ..., f(X_n))′ it holds that \frac{1}{n}(D_n v)_i = M_{d_n} f(X_i) and \frac{1}{n}(K_n v)_i = S_n f(X_i).
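The correspondence between the matrices and the empirical operators can be checked numerically; the similarity function and the sample on X = [0, 1] below are illustrative choices:

```python
import math
import random

def s(x, y, sigma=1.0):
    """A continuous, symmetric similarity function (an illustrative choice)."""
    return math.exp(-(x - y) ** 2 / (2 * sigma ** 2))

rng = random.Random(0)
Xn = [rng.random() for _ in range(50)]    # sample points X_1, ..., X_n
n = len(Xn)

def d_n(x):
    """Empirical degree function d_n(x) = integral of s(x, .) dP_n."""
    return sum(s(x, y) for y in Xn) / n

def S_n(f, x):
    """Empirical integral operator S_n f(x) = integral of s(x, .) f dP_n."""
    return sum(s(x, y) * f(y) for y in Xn) / n

# Correspondence with the matrices: d_n(X_i) = d_i / n and
# (1/n)(K_n v)_i = S_n f(X_i) for v = (f(X_1), ..., f(X_n))'.
f = math.sin
d_i = [sum(s(Xi, Xj) for Xj in Xn) for Xi in Xn]           # degrees d_i
Knv = [sum(s(Xi, Xj) * f(Xj) for Xj in Xn) for Xi in Xn]   # the vector K_n v
assert all(abs(d_n(Xi) - di / n) < 1e-12 for Xi, di in zip(Xn, d_i))
assert all(abs(kv / n - S_n(f, Xi)) < 1e-12 for Xi, kv in zip(Xn, Knv))
```

The 1/n factors appear automatically here because the empirical operators average over the sample, which is exactly the hidden 1/n in P_n discussed next.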
Hence the function d_n and the operators M_{d_n} and S_n are the counterparts of the discrete degrees \frac{1}{n} d_i and the matrices \frac{1}{n} D_n and \frac{1}{n} K_n. The scaling factor 1/n comes from the hidden 1/n-factor in the empirical distribution P_n. The natural pointwise limits of d_n, M_{d_n}, and S_n for n → ∞ are given by d, M_d, and S. The operators corresponding to the unnormalized Laplacian \frac{1}{n} L_n = \frac{1}{n}(D_n − K_n) and its limit operator are

U_n f(x) := M_{d_n} f(x) − S_n f(x)   and   U f(x) := M_d f(x) − S f(x).

Step 2: Relations between σ(\frac{1}{n} L_n) and σ(U_n).

Proposition 4 (Spectral properties) 1. The spectrum of U_n consists of rg(d_n), plus some isolated eigenvalues with finite multiplicity. The same holds for U and rg(d). 2. If f ∈ C(X) is an eigenfunction of U_n with arbitrary eigenvalue λ, then the vector v ∈ IR^n with v_i = f(X_i) is an eigenvector of the matrix \frac{1}{n} L_n with eigenvalue λ. 3. If v is an eigenvector of the matrix \frac{1}{n} L_n with eigenvalue λ ∉ rg(d_n), then the function

f(x) = \frac{1}{n} \Big( \sum_j s(x, X_j) v_j \Big) \Big/ \big( d_n(x) − λ \big)

is the unique eigenfunction of U_n with eigenvalue λ satisfying f(X_i) = v_i.

Proof. It is well known that the (essential) spectrum of a multiplication operator coincides with the range of the multiplier function. Moreover, the spectrum of the sum of a bounded operator and a compact operator contains the essential spectrum of the bounded operator. Additionally, it can only contain some isolated eigenvalues with finite multiplicity (e.g., Theorem IV.5.35 in Kato, 1966). The proofs of the other parts of this proposition can be obtained by elementary shuffling of eigenvalue equations and will be skipped. □

Step 3: Convergence of U_n to U. Dealing with the randomness. Recall that the operators U_n are random operators, as they depend on the given sample points X_1, ..., X_n via the empirical distribution P_n. One important tool to cope with this randomness will be the following proposition:

Proposition 5 (Glivenko-Cantelli class) Let (X, d) be a compact metric space and s : X×X → IR continuous.
Then F := {s(x, ·); x ∈ X} is a Glivenko-Cantelli class, that is,

\sup_{x \in X} \Big| \int s(x, y) \, dP_n(y) − \int s(x, y) \, dP(y) \Big| → 0 almost surely.

Proof. This proposition follows from Theorem 2.4.1 of van der Vaart and Wellner (1996). □

Note that one direct consequence of this proposition is that ∥d_n − d∥_∞ → 0 a.s.

Types of convergence. Let E be an arbitrary Banach space and B its unit ball. A sequence (S_n)_n of linear operators on E is called collectively compact if the set ∪_n S_n B is relatively compact in E (with respect to the norm topology). A sequence of operators converges collectively compactly if it converges pointwise and if there exists some N ∈ IN such that the operators (S_n − S)_{n>N} are collectively compact. A sequence of operators converges compactly if it converges pointwise and if for every sequence (x_n)_n in B, the sequence (S − S_n)x_n is relatively compact. See Anselone (1971) and Chatelin (1983) for background reading. A sequence (x_n)_n in E converges up to a change of sign to x ∈ E if there exists a sequence (a_n)_n of signs a_n ∈ {−1, +1} such that the sequence (a_n x_n)_n converges to x.

Proposition 6 (U_n converges compactly to U a.s.) Let X be a compact metric space and s : X×X → IR continuous. Then U_n converges to U compactly a.s.

Proof. (a) S_n converges to S collectively compactly a.s. With the help of the Glivenko-Cantelli property in Proposition 5 it is easy to see that S_n converges to S pointwise, that is, ∥S_n f − S f∥_∞ → 0 a.s. for all f ∈ C(X). As the limit operator S is compact, to prove that (S_n − S)_n are collectively compact a.s. it is enough to prove that (S_n)_n are collectively compact a.s. This can be done by the Arzelà-Ascoli theorem. (b) M_{d_n} converges to M_d in operator norm a.s. This is a direct consequence of the Glivenko-Cantelli properties of Proposition 5. (c) U_n = M_{d_n} − S_n converges to U = M_d − S compactly a.s. Both operator norm convergence and collectively compact convergence imply compact convergence (cf. Proposition 3.18 of Chatelin, 1983).
Moreover, it is easy to see that the sum of two compactly converging operators converges compactly. □

Step 4: Convergence of the eigenfunctions of U_n to those of U. It is a result of perturbation theory (see the comprehensive treatment in Chatelin, 1983, especially Section 5.1) that compact convergence of operators implies the convergence of eigenvalues and spectral projections in the following way. If λ is an isolated eigenvalue in σ(U) with finite multiplicity, then there exists a sequence λ_n ∈ σ(U_n) of isolated eigenvalues with finite multiplicity such that λ_n → λ. If the first r eigenvalues of U have multiplicity 1, then the same holds for the first r eigenvalues of U_n for sufficiently large n, and the i-th eigenvalue of U_n converges to the i-th eigenvalue of U. The corresponding eigenvectors converge up to a change of sign. If the multiplicity of the eigenvalues is larger than 1 but finite, then the corresponding eigenspaces converge. Note that for eigenvalues which are not isolated in the spectrum, convergence cannot be asserted, and the same holds for the corresponding eigenvectors (e.g., Section IV.3 of Kato, 1966). In our case, by Proposition 4 we know that the spectrum of U consists of the whole interval rg(d), plus possibly some isolated eigenvalues. Hence an eigenvalue λ ∈ σ(U) is isolated in the spectrum iff λ ∉ rg(d) holds, in which case convergence holds as stated above.

Step 5: Convergence of unnormalized spectral clustering. Now we can put together the different parts. In the first two steps we transferred the problem of the convergence of the eigenvectors of (1/n)L_n to the convergence of eigenfunctions of U_n. In Step 3 we showed that U_n converges compactly to the limit operator U, which according to Step 4 implies the convergence of the eigenfunctions of U_n.
In terms of the eigenvectors of (1/n)L_n this means the following: if λ denotes the j-th eigenvalue of U with eigenfunction f ∈ C(X) and λ_n the j-th eigenvalue of (1/n)L_n with eigenvector v_n = (v_{n,1}, ..., v_{n,n})′, then there exists a sequence of signs a_n ∈ {−1, +1} such that sup_{i=1,...,n} |a_n v_{n,i} − f(X_i)| → 0 a.s. As spectral clustering is constructed from the coordinates of the eigenvectors, this leads to the convergence of spectral clustering in the unnormalized case. This completes the proof of Theorem 2. □

The proof for Theorem 1 can be obtained in a very similar way. Here the limit operator is

U′f(x) := (I − S′)f(x) := f(x) − ∫ (s(x, y)/√(d(x)d(y))) f(y) dP(y).

The main difference to the unnormalized case is that the operator M_d in U gets replaced by the identity operator I in U′. This simplifies matters as one can easily express the spectrum of (I − S′) via the spectrum of the compact operator S′. From a different point of view, consider the identity operator as the operator of multiplication by the constant one function 1. Its range is the single point rg(1) = {1}, and hence the critical interval rg(d) ⊂ σ(U) shrinks to the point 1 ∈ σ(U′), which in general is a non-isolated eigenvalue with infinite multiplicity. Finally, note that it is also possible to prove more general versions of Theorems 1 and 2 where the eigenvalues have finite multiplicity larger than 1. Instead of the convergence of the eigenvectors we then obtain the convergence of the projections on the eigenspaces. The proof of Theorem 3 works as the ones of the other two theorems. The exact definitions of the operators considered in this case are

U′_ij : C(X_j) → C(X_i),  U′_ij f(x) = δ_ij f_i(x) − ∫ (s_ij(x, y)/√(d_i(x)d_j(y))) f_j(y) dP_j(y),
U_ij : C(X_j) → C(X_i),  U_ij f(x) = δ_ij d_i(x) f_i(x) − ∫ s_ij(x, y) f_j(y) dP_j(y),

where d_i, f_i, P_i, and s_ij denote the restrictions of the functions to X_i and X_i × X_j, respectively, and δ_ij is 1 if i = j and 0 otherwise.
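The eigenvector-to-eigenfunction extension in part 3 of Proposition 4 (Step 2) can be checked numerically. The following is a sketch under assumed choices, not an experiment from the paper: a Gaussian similarity s(x, y) = exp(−(x − y)²) and a two-cluster sample, chosen so that the second eigenvalue of the Laplacian lies outside the range of the degree function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated clusters; s(x, y) = exp(-(x - y)^2) as the similarity.
X = np.concatenate([rng.normal(0.0, 0.1, 100), rng.normal(3.0, 0.1, 100)])
n = len(X)
K = np.exp(-(X[:, None] - X[None, :]) ** 2)
d = K.sum(axis=1) / n                      # empirical degrees d_n(X_i)
L = np.diag(d) - K / n                     # the matrix (1/n) L_n

eigvals, eigvecs = np.linalg.eigh(L)       # eigenvalues in ascending order
lam, v = eigvals[1], eigvecs[:, 1]         # second eigenvalue/eigenvector

# lam lies below min_i d_n(X_i), i.e. outside the essential spectrum rg(d_n),
# so v extends to the eigenfunction f(x) = (1/n) sum_j s(x, X_j) v_j / (d_n(x) - lam).
assert lam < d.min()
f_at_sample = (K @ v) / n / (d - lam)
print(np.abs(f_at_sample - v).max())       # essentially zero: f interpolates v
```

That the interpolation is exact follows directly from rearranging the eigenvalue equation, which is precisely the "elementary shuffling" the proof of Proposition 4 refers to.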
For the diffusion interpretation, note that if there exists an ideal partition of the data space (that is, s(x_i, x_j) = 0 for x_i, x_j in different sets X_i and X_j), then the off-diagonal operators U′_ij and U_ij with i ≠ j vanish, and the first k eigenvectors of U′ and U can be reconstructed by the piecewise constant eigenvectors of the diagonal operators U′_ii and U_ii. In this situation, spectral clustering recovers the ideal clustering. If there exists no ideal clustering, but there exists a partition such that the off-diagonal operators are "small" and the diagonal operators are "large", then it can be seen by perturbation theory arguments that spectral clustering will find such a partition. The off-diagonal operators can be interpreted as diffusion operators between different clusters (note that even in the unnormalized case, the multiplication operator only appears in the diagonal operators). Hence, constructing a clustering with small off-diagonal operators corresponds to a partition such that little diffusion takes place between the clusters. Finally, we want to construct an example where the second eigenvalue of U satisfies λ ∈ rg(d). Let X = [1, 2] ⊂ IR, s(x, y) := xy, and p a piecewise constant probability density on X with p(x) = c if 4/3 ≤ x < 5/3 and p(x) = (3 − c)/2 otherwise, for some fixed constant c ∈ [0, 3] (e.g., for small c this density has two clearly separated high density regions). The degree function in this case is d(x) = 1.5x (independently of c) and has range [1.5, 3] on X. We can see that an eigenfunction of U for an eigenvalue λ ∉ rg(d) has the form f(x) = βx/(1.5x − λ), where β has to satisfy the self-consistency condition β = β ∫ x²/(1.5x − λ) p(x) dx. This means that λ ∉ rg(d) is an eigenvalue of U iff the equation g(λ) := ∫₁² x²/(1.5x − λ) p(x) dx = 1 is satisfied. For our simple density function p, this integral can be solved analytically.
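The eigenvalue condition can also be checked numerically. The following sketch (reading the degree function as d(x) = 1.5x, so the denominator in g is 1.5x − λ, and picking c = 0.5 as one admissible constant) evaluates g(λ) piecewise because the density jumps at 4/3 and 5/3:

```python
import numpy as np

def g(lam, c=0.5, m=20000):
    # g(lambda) = integral over [1, 2] of x^2 / (1.5 x - lambda) p(x) dx, with
    # p(x) = c on [4/3, 5/3] and (3 - c)/2 elsewhere; midpoint rule per piece.
    total = 0.0
    for a, b, w in [(1.0, 4/3, (3 - c) / 2), (4/3, 5/3, c), (5/3, 2.0, (3 - c) / 2)]:
        x = a + (np.arange(m) + 0.5) * (b - a) / m
        total += w * np.sum(x**2 / (1.5 * x - lam)) * (b - a) / m
    return total

print(abs(g(0.0) - 1.0))   # ~0: lambda = 0 satisfies g(lambda) = 1
# g is strictly increasing on lambda < 1.5 (the bottom of rg(d)), so 0 is the
# only solution there; for lambda > 3 the integrand is negative, so g < 0 < 1.
assert all(g(l) < 1.0 for l in (-5.0, -1.0))
assert all(g(l) < 0.0 for l in (3.5, 10.0))
```

Note that g(0) = 1 holds for any c here, since g(0) = E[x]/1.5 and the density is symmetric about 1.5.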
It can then be seen that g(λ) = 1 is only satisfied for λ = 0, hence the only eigenvalue outside of rg(d) is the trivial eigenvalue 0. Note that in applications of spectral clustering, we do not know the limit operator U and hence cannot test whether its relevant eigenvalues are in its essential spectrum rg(d) or not. If, for some special reason, one really wants to use unnormalized spectral clustering, one should at least estimate the critical region rg(d) by [min_i d_i/n, max_i d_i/n] and check whether the relevant eigenvalues of (1/n)L_n are inside or close to this interval or not. This observation then gives an indication whether the results obtained can be considered reliable or not. However, this observation is not a valid statistical test.

5 Conclusions

We have shown that under standard assumptions, normalized spectral clustering always converges to a limit partition of the whole data space which depends only on the probability distribution P and the similarity function s. For unnormalized spectral clustering, this can only be guaranteed under the strong additional assumption that the first eigenvalues of the Laplacian do not fall inside the range of the degree function. As shown by our example, this condition has to be taken seriously. Consistency results are a basic sanity check for the behavior of statistical learning algorithms. Algorithms which do not converge cannot be expected to exhibit reliable results on finite samples. Therefore, in the light of our theoretical analysis we assert that the normalized version of spectral clustering should be preferred in practice. This suggestion also extends to other applications of graph Laplacians including semi-supervised learning.

References

P. Anselone. Collectively compact operator approximation theory. Prentice-Hall, 1971.
M. Belkin and P. Niyogi. Using manifold structure for partially labeled classification. In Advances in Neural Information Processing Systems 15, 2003.
O. Chapelle, J. Weston, and B. Schölkopf.
Cluster kernels for semi-supervised learning. In Advances in Neural Information Processing Systems 15, 2003.
F. Chatelin. Spectral Approximation of Linear Operators. Academic Press, 1983.
S. Guattery and G. L. Miller. On the quality of spectral separators. SIAM Journal of Matrix Anal. Appl., 19(3), 1998.
J. Hartigan. Statistical theory in clustering. Journal of Classification, 2:63–76, 1985.
R. Kannan, S. Vempala, and A. Vetta. On clusterings - good, bad and spectral. Technical report, Computer Science Department, Yale University, 2000.
T. Kato. Perturbation theory for linear operators. Springer, Berlin, 1966.
M. Meila and J. Shi. A random walks view of spectral segmentation. In 8th International Workshop on Artificial Intelligence and Statistics, 2001.
A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2001.
D. Pollard. Strong consistency of k-means clustering. Ann. of Stat., 9(1):135–140, 1981.
D. Spielman and S. Teng. Spectral partitioning works: planar graphs and finite element meshes. In 37th Annual Symposium on Foundations of Computer Science, 1996.
A. v. d. Vaart and J. Wellner. Weak Convergence and Empirical Processes. Springer, 1996.
R. Van Driessche and D. Roose. An improved spectral bisection algorithm and its application to dynamic load balancing. Parallel Comput., 21(1), 1995.
U. von Luxburg. Statistical Learning with Similarity and Dissimilarity Functions. PhD thesis, draft, available at http://www.kyb.tuebingen.mpg.de/~ule, 2004.
U. von Luxburg, O. Bousquet, and M. Belkin. On the convergence of spectral clustering on random samples: the normalized case. In COLT, 2004.
Y. Weiss. Segmentation using eigenvectors: A unifying view. In Proceedings of the International Conference on Computer Vision, pages 975–982, 1999.
D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency.
In Advances in Neural Information Processing Systems 16, 2004. X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
Using the Equivalent Kernel to Understand Gaussian Process Regression

Peter Sollich, Dept of Mathematics, King's College London, Strand, London WC2R 2LS, UK, peter.sollich@kcl.ac.uk
Christopher K. I. Williams, School of Informatics, University of Edinburgh, 5 Forrest Hill, Edinburgh EH1 2QL, UK, c.k.i.williams@ed.ac.uk

Abstract

The equivalent kernel [1] is a way of understanding how Gaussian process regression works for large sample sizes based on a continuum limit. In this paper we show (1) how to approximate the equivalent kernel of the widely-used squared exponential (or Gaussian) kernel and related kernels, and (2) how analysis using the equivalent kernel helps to understand the learning curves for Gaussian processes.

Consider the supervised regression problem for a dataset D with entries (x_i, y_i) for i = 1, ..., n. Under Gaussian Process (GP) assumptions the predictive mean at a test point x∗ is given by

f̄(x∗) = k⊤(x∗)(K + σ²I)⁻¹y, (1)

where K denotes the n × n matrix of covariances between the training points with entries k(x_i, x_j), k(x∗) is the vector of covariances k(x_i, x∗), σ² is the noise variance on the observations and y is an n × 1 vector holding the training targets. See e.g. [2] for further details. We can define a vector of functions h(x∗) = (K + σ²I)⁻¹k(x∗). Thus we have f̄(x∗) = h⊤(x∗)y, making it clear that the mean prediction at a point x∗ is a linear combination of the target values y. Gaussian process regression is thus a linear smoother, see [3, section 2.8] for further details. For a fixed test point x∗, h(x∗) gives the vector of weights applied to targets y. Silverman [1] called h⊤(x∗) the weight function. Understanding the form of the weight function is made complicated by the matrix inversion of K + σ²I and the fact that K depends on the specific locations of the n datapoints. Idealizing the situation one can consider the observations to be "smeared out" in x-space at some constant density of observations.
In this case analytic tools can be brought to bear on the problem, as shown below. By analogy to kernel smoothing Silverman [1] called the idealized weight function the equivalent kernel (EK). The structure of the remainder of the paper is as follows: In section 1 we describe how to derive the equivalent kernel in Fourier space. Section 2 derives approximations for the EK for the squared exponential and other kernels. In section 3 we show how to use the EK approach to estimate learning curves for GP regression, and compare GP regression to kernel regression using the EK.

1 Gaussian Process Regression and the Equivalent Kernel

It is well known (see e.g. [4]) that the posterior mean for GP regression can be obtained as the function which minimizes the functional

J[f] = (1/2)∥f∥²_H + (1/(2σ²)) Σ_{i=1}^{n} (y_i − f(x_i))², (2)

where ∥f∥_H is the RKHS norm corresponding to kernel k. (However, note that the GP framework gives much more than just this mean prediction, for example the predictive variance and the marginal likelihood p(y) of the data under the model.) Let η(x) = E[y|x] be the target function for our regression problem and write E[(y − f(x))²] = E[(y − η(x))²] + (η(x) − f(x))². Using the fact that the first term on the RHS is independent of f motivates considering a smoothed version of equation 2,

J_ρ[f] = (ρ/(2σ²)) ∫ (η(x) − f(x))² dx + (1/2)∥f∥²_H,

where ρ has dimensions of the number of observations per unit of x-space (length/area/volume etc. as appropriate). If we consider kernels that are stationary, k(x, x′) = k(x − x′), the natural basis in which to analyse equation 1 is the Fourier basis of complex sinusoids so that f(x) is represented as ∫ f̃(s)e^{2πis·x} ds and similarly for η(x). Thus we obtain

J_ρ[f] = (1/2) ∫ [ (ρ/σ²)|f̃(s) − η̃(s)|² + |f̃(s)|²/S(s) ] ds,

as ∥f∥²_H = ∫ |f̃(s)|²/S(s) ds, where S(s) is the power spectrum of the kernel k, S(s) = ∫ k(x)e^{−2πis·x} dx.
J_ρ[f] can be minimized using calculus of variations to obtain f̃(s) = S(s)η̃(s)/(σ²/ρ + S(s)), which is recognized as the convolution f(x∗) = ∫ h(x∗ − x)η(x)dx. Here the Fourier transform of the equivalent kernel h(x) is

h̃(s) = S(s)/(S(s) + σ²/ρ) = 1/(1 + σ²/(ρS(s))). (3)

The term σ²/ρ in the first expression for h̃(s) corresponds to the power spectrum of a white noise process, whose delta-function covariance function becomes a constant in the Fourier domain. This analysis is known as Wiener filtering; see, e.g. [5, §14-1]. Notice that as ρ → ∞, h(x) tends to the delta function. If the input density is non-uniform the analysis above should be interpreted as computing the equivalent kernel for np(x) = ρ. This approximation will be valid if the scale of variation of p(x) is larger than the width of the equivalent kernel.

2 The EK for the Squared Exponential and Related Kernels

For certain kernels/covariance functions the EK h(x) can be computed exactly by Fourier inversion. Examples include the Ornstein-Uhlenbeck process in D = 1 with covariance k(x) = e^{−α|x|} (see [5, p. 326]), splines in D = 1 corresponding to the regularizer ∥Pf∥² = ∫ (f^{(m)})² dx [1, 6], and the regularizer ∥Pf∥² = ∫ (∇²f)² dx in two dimensions, where the EK is given in terms of the Kelvin function kei [7]. We now consider the commonly used squared exponential (SE) kernel k(r) = exp(−r²/2ℓ²), where r² = ∥x − x′∥². (This is sometimes called the Gaussian or radial basis function kernel.) Its Fourier transform is given by S(s) = (2πℓ²)^{D/2} exp(−2π²ℓ²|s|²), where D denotes the dimensionality of x (and s) space. From equation 3 we obtain

h̃_SE(s) = 1/(1 + b exp(2π²ℓ²|s|²)),

where b = σ²/(ρ(2πℓ²)^{D/2}). We are unaware of an exact result in this case, but the following initial approximation is simple but effective. For large ρ, b will be small. Thus for small s = |s| we have that h̃_SE ≃ 1, but for large s it is approximately 0. The change takes place around the point s_c where b exp(2π²ℓ²s_c²) = 1, i.e.
s_c² = log(1/b)/(2π²ℓ²). As exp(2π²ℓ²s²) grows quickly with s, the transition of h̃_SE between 1 and 0 can be expected to be rapid, and thus be well-approximated by a step function.

Proposition 1 The approximate form of the equivalent kernel for the squared-exponential kernel in D dimensions is given by

h_SE(r) = (s_c/r)^{D/2} J_{D/2}(2πs_c r).

Proof: h̃_SE(s) is a function of s = |s| only, and for D > 1 the Fourier integral can be simplified by changing to spherical polar coordinates and integrating out the angular variables to give

h_SE(r) = 2πr ∫₀^∞ (s/r)^{ν+1} J_ν(2πrs) h̃_SE(s) ds (4)
≃ 2πr ∫₀^{s_c} (s/r)^{ν+1} J_ν(2πrs) ds = (s_c/r)^{D/2} J_{D/2}(2πs_c r),

where ν = D/2 − 1, J_ν(z) is a Bessel function of the first kind and we have used the identity z^{ν+1}J_ν(z) = (d/dz)[z^{ν+1}J_{ν+1}(z)]. □

Note that in D = 1, by computing the Fourier transform of the boxcar function, we obtain h_SE(x) = 2s_c sinc(2πs_c x) where sinc(z) = sin(z)/z. This is consistent with Proposition 1 and J_{1/2}(z) = (2/πz)^{1/2} sin(z). The asymptotic form of the EK in D = 2 is shown in Figure 2 (left) below. Notice that s_c scales as (log(ρ))^{1/2} so that the width of the EK (which is proportional to 1/s_c) will decay very slowly as ρ increases. In contrast, for a spline of order m (with power spectrum ∝ |s|^{−2m}) the width of the EK scales as ρ^{−1/2m} [1]. If instead of IR^D we consider the input set to be the unit circle, a stationary kernel can be periodized by the construction k_p(x, x′) = Σ_{n∈Z} k(x − x′ + 2nπ). This kernel will be represented as a Fourier series (rather than with a Fourier transform) because of the periodicity. In this case the step function in Fourier space approximation would give rise to a Dirichlet kernel as the EK (see [8, section 4.4.3] for further details on the Dirichlet kernel). We now show that the result of Proposition 1 is asymptotically exact for ρ → ∞, and calculate the leading corrections for finite ρ. The scaling of the width of the EK as 1/s_c suggests writing h_SE(r) = (2πs_c)^D g(2πs_c r).
Then from equation 4 and using the definition of s_c,

g(z) = (z/(s_c(2πs_c)^D)) ∫₀^∞ (2πs_c s/z)^{ν+1} J_ν(zs/s_c) / {1 + exp[2π²ℓ²(s² − s_c²)]} ds
= z ∫₀^∞ (u/(2πz))^{ν+1} J_ν(zu) / {1 + exp[2π²ℓ²s_c²(u² − 1)]} du, (5)

where we have rescaled s = s_c u in the second step. The value of s_c, and hence ρ, now enters only in the exponential via a = 2π²ℓ²s_c². For a → ∞, the exponential tends to zero for u < 1 and to infinity for u > 1. The factor 1/[1 + exp(. . .)] is therefore a step function Θ(1 − u) in the limit and Proposition 1 becomes exact, with g_∞(z) ≡ lim_{a→∞} g(z) = (2πz)^{−D/2} J_{D/2}(z). To calculate corrections to this, one uses that for large but finite a the difference ∆(u) = {1 + exp[a(u² − 1)]}^{−1} − Θ(1 − u) is non-negligible only in a range of order 1/a around u = 1. The other factors in the integrand of equation 5 can thus be Taylor-expanded around that point to give

g(z) = g_∞(z) + z Σ_{k=0}^∞ (I_k/k!) (d^k/du^k)[(u/(2πz))^{ν+1} J_ν(zu)]|_{u=1},  I_k = ∫₀^∞ ∆(u)(u − 1)^k du.

The problem is thus reduced to calculating the integrals I_k. Setting u = 1 + v/a one has

a^{k+1} I_k = ∫_{−a}^{0} [1/{1 + exp(v²/a + 2v)} − 1] v^k dv + ∫₀^∞ v^k/{1 + exp(v²/a + 2v)} dv
= ∫₀^{a} (−1)^{k+1} v^k/{1 + exp(−v²/a + 2v)} dv + ∫₀^∞ v^k/{1 + exp(v²/a + 2v)} dv.

In the first integral, extending the upper limit to ∞ gives an error that is exponentially small in a. Expanding the remaining 1/a-dependence of the integrand one then gets, to leading order in 1/a, I₀ = c₀/a² and I₁ = c₁/a², while all I_k with k ≥ 2 are smaller by at least 1/a². The numerical constants are −c₀ = c₁ = π²/24. This gives, using that (d/dz)[z^{ν+1}J_ν(z)] = z^ν J_ν(z) + z^{ν+1}J_{ν−1}(z) = (2ν + 1)z^ν J_ν(z) − z^{ν+1}J_{ν+1}(z):

Proposition 2 The equivalent kernel for the squared-exponential kernel is given for large ρ by h_SE(r) = (2πs_c)^D g(2πs_c r) with

g(z) = (2πz)^{−D/2} { J_{D/2}(z) + (z/a²)[(c₀ + c₁(D − 1)) J_{D/2−1}(z) − c₁ z J_{D/2}(z)] } + O(1/a⁴).

For e.g. D = 1 this becomes g(z) = π⁻¹{sin(z)/z − (π²/(24a²))[cos(z) + z sin(z)]}.
Here and in general, by comparing the second part of the 1/a² correction with the leading order term, one estimates that the correction is of relative size z²/a². It will therefore provide a useful improvement as long as z = 2πs_c r < a; for larger z the expansion in powers of 1/a becomes a poor approximation because the correction terms (of all orders in 1/a) are comparable to the leading order.

2.1 Accuracy of the approximation

To evaluate the accuracy of the approximation we can compute the EK numerically as follows: Consider a dense grid of points in IR^D with a sampling density ρ_grid. For making predictions at the grid points we obtain the smoother matrix K(K + σ²_grid I)⁻¹, where¹ σ²_grid = σ²ρ_grid/ρ, as per equation 1. Each row of this matrix is an approximation to the EK at the appropriate location, as this is the response to a y vector which is zero at all points except one. Note that in theory one should use a grid over the whole of IR^D but in practice one can obtain an excellent approximation to the EK by only considering a grid around the point of interest as the EK typically decays with distance. Also, by only considering a finite grid one can understand how the EK is affected by edge effects.

¹To understand this scaling of σ²_grid consider the case where ρ_grid > ρ, which means that the effective variance at each of the ρ_grid points per unit x-space is larger, but as there are correspondingly more points this effect cancels out. This can be understood by imagining the situation where there are ρ_grid/ρ independent Gaussian observations with variance σ²_grid at a single x-point; this would be equivalent to one Gaussian observation with variance σ². In effect the ρ observations per unit x-space have been smoothed out uniformly.
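The grid construction above can be sketched as follows, using the parameter values quoted for the experiments (ℓ² = 0.004, σ² = 0.1, ρ = 100, ρ_grid/ρ = 5/3, grid on [−3/2, 3/2]). The correlation threshold at the end is an assumption for the sanity check, not a result from the paper.

```python
import numpy as np

# Parameters as in the experiment described in the text.
ell2, sigma2, rho = 0.004, 0.1, 100.0
rho_grid = rho * 5 / 3

x = np.linspace(-1.5, 1.5, 501)          # grid density (501 - 1)/3 = rho_grid
c = len(x) // 2                          # index of the point of interest x = 0
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell2))
sigma2_grid = sigma2 * rho_grid / rho

# One row of the smoother matrix K (K + sigma2_grid I)^{-1}: by symmetry this
# row solves (K + sigma2_grid I) w = K e_c, and approximates h(x)/rho_grid.
w = np.linalg.solve(K + sigma2_grid * np.eye(len(x)), K[:, c])

# Sinc approximation of Proposition 1 in D = 1: h_SE(x) = 2 s_c sinc(2 pi s_c x).
b = sigma2 / (rho * np.sqrt(2 * np.pi * ell2))
s_c = np.sqrt(np.log(1 / b) / (2 * np.pi**2 * ell2))
h_sinc = 2 * s_c * np.sinc(2 * s_c * x) / rho_grid  # np.sinc(t) = sin(pi t)/(pi t)

print(np.corrcoef(w, h_sinc)[0, 1])      # high correlation between the two curves
```

The numerically computed row peaks at the centre point and is symmetric, with the main lobe closely matching the sinc form; the sidelobes decay somewhat faster, as discussed next.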
Figure 1: Main figure: plot of the weight function corresponding to ρ = 100 training points/unit length, plus the numerically computed equivalent kernel at x = 0.0 and the sinc approximation from Proposition 1. Insets: numerically evaluated g(z) together with sinc and Proposition 2 approximations for ρ = 100 (left) and ρ = 10⁴ (right).

Figure 1 shows plots of the weight function for ρ = 100, the EK computed on the grid as described above and the analytical sinc approximation. These are computed for parameter values of ℓ² = 0.004 and σ² = 0.1, with ρ_grid/ρ = 5/3. To reduce edge effects, the interval [−3/2, 3/2] was used for computations, although only the centre of this is shown in the figure. There is quite good agreement between the numerical computation and the analytical approximation, although the sidelobes decay more rapidly for the numerically computed EK. This is not surprising because the absence of a truly hard cutoff in Fourier space means one should expect less "ringing" than the analytical approximation predicts. The figure also shows good agreement between the weight function (based on the finite sample) and the numerically computed EK. The insets show the approximation of Proposition 2 to g(z) for ρ = 100 (a = 5.67, left) and ρ = 10⁴ (a = 9.67, right). As expected, the addition of the 1/a²-correction gives better agreement with the numerical result for z < a. Numerical experiments also show that the mean squared error between the numerically computed EK and the sinc approximation decreases like 1/log(ρ).
This is larger than the naïve estimate (1/a²)² ∼ 1/(log(ρ))⁴ based on the first correction term from Proposition 2, because the dominant part of the error comes from the region z > a where the 1/a expansion breaks down.

2.2 Other kernels

Our analysis is not in fact restricted to the SE kernel. Consider an isotropic kernel, for which the power spectrum S(s) depends on s = |s| only. Then we can again define from equation 3 an effective cutoff s_c on the range of s in the EK via σ²/ρ = S(s_c), so that h̃(s) = [1 + S(s_c)/S(s)]⁻¹. The EK will then have the limiting form given in Proposition 1 if h̃(s) approaches a step function Θ(s_c − s), i.e. if it becomes infinitely "steep" around the point s = s_c for s_c → ∞. A quantitative criterion for this is that the slope |h̃′(s_c)| should become much larger than 1/s_c, the inverse of the range of the step function. Since h̃′(s) = S′(s)S(s_c)S⁻²(s)[1 + S(s_c)/S(s)]⁻², this is equivalent to requiring that −s_c S′(s_c)/4S(s_c) ∝ −d log S(s_c)/d log s_c must diverge for s_c → ∞. The result of Proposition 1 therefore applies to any kernel whose power spectrum S(s) decays more rapidly than any positive power of 1/s. A trivial example of a kernel obeying this condition would be a superposition of finitely many SE kernels with different lengthscales ℓ²; the asymptotic behaviour of s_c is then governed by the smallest ℓ. A less obvious case is the "rational quadratic" k(r) = [1 + (r/ℓ)²]^{−(D+1)/2} which has an exponentially decaying power spectrum S(s) ∝ exp(−2πℓs). (This relationship is often used in the reverse direction, to obtain the power spectrum of the Ornstein-Uhlenbeck (OU) kernel exp(−r/ℓ).) Proposition 1 then applies, with the width of the EK now scaling as 1/s_c ∝ 1/log(ρ). The previous example is a special case of kernels which can be written as superpositions of SE kernels with a distribution p(ℓ) of lengthscales ℓ, k(r) = ∫ exp(−r²/2ℓ²) p(ℓ) dℓ.
This is in fact the most general representation for an isotropic kernel which defines a valid covariance function in any dimension D, see [9, §2.10]. Such a kernel has power spectrum

S(s) = (2π)^{D/2} ∫₀^∞ ℓ^D exp(−2π²ℓ²s²) p(ℓ) dℓ (6)

and one easily verifies that the rational quadratic kernel, which has S(s) ∝ exp(−2πℓ₀s), is obtained for p(ℓ) ∝ ℓ^{−D−2} exp(−ℓ₀²/2ℓ²). More generally, because the exponential factor in equation 6 acts like a cutoff for ℓ > 1/s, one estimates S(s) ∼ ∫₀^{1/s} ℓ^D p(ℓ) dℓ for large s. This will decay more strongly than any power of 1/s for s → ∞ if p(ℓ) itself decreases more strongly than any power of ℓ for ℓ → 0. Any such choice of p(ℓ) will therefore yield a kernel to which Proposition 1 applies.

3 Understanding GP Learning Using the Equivalent Kernel

We now turn to using EK analysis to get a handle on average case learning curves for Gaussian processes. Here the setup is that a function η is drawn from a Gaussian process, and we obtain ρ noisy observations of η per unit x-space at random x locations. We are concerned with the mean squared error (MSE) between the GP prediction f and η. Averaging over the noise process, the x-locations of the training data and the prior over η we obtain the average MSE ϵ as a function of ρ. See e.g. [10] and [11] for an overview of earlier work on GP learning curves. To understand the asymptotic behaviour of ϵ for large ρ, we now approximate the true GP predictions with the EK predictions from noisy data, given by f_EK(x) = ∫ h(x − x′)y(x′)dx′ in the continuum limit of "smoothed out" input locations. We assume as before that y = target + noise, i.e. y(x) = η(x) + ν(x) where E[ν(x)ν(x′)] = (σ∗²/ρ)δ(x − x′). Here σ∗² denotes the true noise variance, as opposed to the noise variance assumed in the EK; the scaling of σ∗² with ρ is explained in footnote 1. For a fixed target η, the MSE is ϵ = (∫dx)⁻¹ ∫ [η(x) − f_EK(x)]² dx.
Averaging over the noise process ν and target function η gives in Fourier space

ϵ = ∫ { S_η(s)[1 − h̃(s)]² + (σ∗²/ρ) h̃²(s) } ds = (σ²/ρ) ∫ [ (σ²/ρ)S_η(s)/S²(s) + σ∗²/σ² ] / [1 + σ²/(ρS(s))]² ds, (7)

where S_η(s) is the power spectrum of the prior over target functions. In the case S(s) = S_η(s) and σ² = σ∗², where the kernel is exactly matched to the structure of the target, equation 7 gives the Bayes error ϵ_B and simplifies to ϵ_B = (σ²/ρ) ∫ [1 + σ²/(ρS(s))]⁻¹ ds (see also [5, eq. 14-16]). Interestingly, this is just the analogue (for a continuous power spectrum of the kernel rather than a discrete set of eigenvalues) of the lower bound of [10] on the MSE of standard GP prediction from finite datasets.

Figure 2: Left: plot of the asymptotic form of the EK (s_c/r)J₁(2πs_c r) for D = 2 and ρ = 1225. Right: log-log plot of ϵ against log(ρ) for the OU and Matern-class processes (α = 2, 4 respectively). The dashed lines have gradients of −1/2 and −3/2 which are the predicted rates.

In experiments this bound provides a good approximation to the actual average MSE for large dataset size n [11]. This supports our approach of using the EK to understand the learning behaviour of GP regression. Treating the denominator in the expression for ϵ_B again as a hard cutoff at s = s_c, which is justified for large ρ, one obtains for an SE target and learner ϵ ≈ σ²s_c/ρ ∝ (log(ρ))^{D/2}/ρ. To get analogous predictions for the mismatched case, one can write equation 7 as

ϵ = (σ∗²/ρ) ∫ { [1 + σ²/(ρS(s))] − σ²/(ρS(s)) } / [1 + σ²/(ρS(s))]² ds + ∫ S_η(s) / [S(s)ρ/σ² + 1]² ds.

The first integral is smaller than (σ∗²/σ²)ϵ_B and can be neglected as long as ϵ ≫ ϵ_B. In the second integral we can again make the cutoff approximation, though now with s having to be above s_c, to get the scaling ϵ ∝ ∫_{s_c}^∞ s^{D−1} S_η(s) ds.
For target functions with a power-law decay S_η(s) ∝ s^{−α} of the power spectrum at large s this predicts ϵ ∝ s_c^{D−α} ∝ (log(ρ))^{(D−α)/2}. So we generically get slow logarithmic learning, consistent with the observations in [12]. For D = 1 and an OU target (α = 2) we obtain ϵ ∼ (log(ρ))^{−1/2}, and for the Matern-class covariance function k(r) = (1 + r/ℓ) exp(−r/ℓ) (which has power spectrum ∝ (3/ℓ² + 4π²s²)⁻², so α = 4) we get ϵ ∼ (log(ρ))^{−3/2}. These predictions were tested experimentally using a GP learner with SE covariance function (ℓ = 0.1 and assumed noise level σ² = 0.1) against targets from the OU and Matern-class priors (with ℓ = 0.05) and with noise level σ∗² = 0.01, averaging over 100 replications for each value of ρ. To demonstrate the predicted power-law dependence of ϵ on log(ρ), in Figure 2 (right) we make a log-log plot of ϵ against log(ρ). The dashed lines show the gradients of −1/2 and −3/2 and we observe good agreement between experimental and theoretical results for large ρ.

3.1 Using the Equivalent Kernel in Kernel Regression

Above we have used the EK to understand how standard GP regression works. One could alternatively envisage using the EK to perform kernel regression, on given finite data sets, producing a prediction ρ⁻¹ Σᵢ h(x∗ − xᵢ)yᵢ at x∗. Intuitively this seems appealing as a cheap alternative to full GP regression, particularly for kernels such as the SE where the EK can be calculated analytically, at least to a good approximation. We now analyze briefly how such an EK predictor would perform compared to standard GP prediction.
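A minimal sketch of such an EK predictor in D = 1, using the sinc approximation of Proposition 1. The data-generating choices here (grid of inputs, a sine target, the noise level) are hypothetical illustrations, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# EK-based kernel regression: predict with (1/rho) * sum_i h(x* - x_i) y_i,
# with the D = 1 sinc approximation h(x) = 2 s_c sinc(2 pi s_c x).
ell2, sigma2, rho = 0.004, 0.1, 200.0
b = sigma2 / (rho * np.sqrt(2 * np.pi * ell2))
s_c = np.sqrt(np.log(1 / b) / (2 * np.pi**2 * ell2))

def ek_predict(x_star, x_train, y_train):
    h = 2 * s_c * np.sinc(2 * s_c * (x_star[:, None] - x_train[None, :]))
    return h @ y_train / rho

x_train = np.linspace(-1.0, 2.0, int(3 * rho))   # density approx rho per unit length
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.normal(size=x_train.shape)
x_star = np.linspace(0.0, 1.0, 200)
f = ek_predict(x_star, x_train, y_train)

# The EK acts as a low-pass filter with unit DC gain, so a smooth
# low-frequency target is recovered up to the noise level.
unit = ek_predict(x_star, x_train, np.ones_like(x_train))
rmse = np.sqrt(np.mean((f - np.sin(2 * np.pi * x_star)) ** 2))
print(np.max(np.abs(unit - 1.0)), rmse)          # both small
```

This is cheap (no matrix inversion), which is exactly the appeal noted above; the analysis that follows quantifies what is lost relative to full GP prediction.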
Letting ⟨·⟩ denote averaging over noise, training input points and the test point, and setting f_η(x∗) = ∫ h(x, x∗)η(x)dx, the average MSE of the EK predictor is

ϵ_pred = ⟨[η(x) − (1/ρ) Σᵢ h(x, xᵢ)yᵢ]²⟩
= ⟨[η(x) − f_η(x)]² + (σ∗²/ρ) ∫ h²(x, x′)dx′⟩ + (1/ρ)⟨∫ h²(x, x′)η²(x′)dx′⟩ − (1/ρ)⟨f_η²(x)⟩
= (σ²/ρ) ∫ [ (σ²/ρ)S_η(s)/S²(s) + σ∗²/σ² ] / [1 + σ²/(ρS(s))]² ds + (⟨η²⟩/ρ) ∫ ds / [1 + σ²/(ρS(s))]².

Here we have set ⟨η²⟩ = (∫dx)⁻¹ ∫ η²(x) dx = ∫ S_η(s) ds for the spatial average of the squared target amplitude. Taking the matched case (S_η(s) = S(s) and σ∗² = σ²) as an example, the first term (which is the one we get for the prediction from "smoothed out" training inputs, see eq. 7) is of order σ²s_c^D/ρ, while the second one is ∼ ⟨η²⟩s_c^D/ρ. Thus both terms scale in the same way, but the ratio of the second term to the first is the signal-to-noise ratio ⟨η²⟩/σ², which in practice is often large. The EK predictor will then perform significantly worse than standard GP prediction, by a roughly constant factor, and we have confirmed this prediction numerically. This result is somewhat surprising given the good agreement between the weight function h(x∗) and the EK that we saw in figure 1, leading to the conclusion that the detailed structure of the weight function is important for optimal prediction from finite data sets. In summary, we have derived accurate approximations for the equivalent kernel (EK) of GP regression with the widely used squared exponential kernel, and have shown that the same analysis in fact extends to a whole class of kernels. We have also demonstrated that EKs provide a simple means of understanding the learning behaviour of GP regression, even in cases where the learner's covariance function is not well matched to the structure of the target function. In future work, it will be interesting to explore in more detail the use of the EK in kernel smoothing. This is suboptimal compared to standard GP regression as we saw.
However, it does remain feasible even for very large datasets, and may then be competitive with sparse methods for approximating GP regression. From the theoretical point of view, the average error of the EK predictor which we calculated may also provide the basis for useful upper bounds on GP learning curves. Acknowledgments: This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views. References
[1] B. W. Silverman. Annals of Statistics, 12:898–916, 1984.
[2] C. K. I. Williams. In M. I. Jordan, editor, Learning in Graphical Models, pages 599–621. Kluwer Academic, 1998.
[3] T. J. Hastie and R. J. Tibshirani. Generalized Additive Models. Chapman and Hall, 1990.
[4] F. Girosi, M. Jones, and T. Poggio. Neural Computation, 7(2):219–269, 1995.
[5] A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, 1991. Third Edition.
[6] C. Thomas-Agnan. Numerical Algorithms, 13:21–32, 1996.
[7] T. Poggio, H. Voorhees, and A. Yuille. Tech. Report AI Memo 833, MIT AI Laboratory, 1985.
[8] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, 2002.
[9] M. L. Stein. Interpolation of Spatial Data. Springer-Verlag, New York, 1999.
[10] M. Opper and F. Vivarelli. In NIPS 11, pages 302–308, 1999.
[11] P. Sollich and A. Halees. Neural Computation, 14:1393–1428, 2002.
[12] P. Sollich. In NIPS 14, pages 519–526, 2002.
2004
Newscast EM Wojtek Kowalczyk Department of Computer Science Vrije Universiteit Amsterdam The Netherlands wojtek@cs.vu.nl Nikos Vlassis Informatics Institute University of Amsterdam The Netherlands vlassis@science.uva.nl Abstract We propose a gossip-based distributed algorithm for Gaussian mixture learning, Newscast EM. The algorithm operates on network topologies where each node observes a local quantity and can communicate with other nodes in an arbitrary point-to-point fashion. The main difference between Newscast EM and the standard EM algorithm is that the M-step in our case is implemented in a decentralized manner: (random) pairs of nodes repeatedly exchange their local parameter estimates and combine them by (weighted) averaging. We provide theoretical evidence and demonstrate experimentally that, under this protocol, nodes converge exponentially fast to the correct estimates in each M-step of the EM algorithm. 1 Introduction Advances in network technology, like peer-to-peer networks on the Internet or sensor networks, have highlighted the need for efficient ways to deal with large amounts of data that are distributed over a set of nodes. Examples are financial data reported on the Internet, weather data observed by a set of sensors, etc. In particular, in many data mining applications we are interested in learning a global model from such data, like a probability distribution or a clustering of the data, without first transferring all the data to a central repository. Ideally, we would like to have a fully decentralized algorithm that computes and disseminates aggregates of the data, with minimal processing and communication requirements and good fault-tolerant behavior. A recent development in distributed systems technology is the use of gossip-based models of computation [1, 2, 3]. Roughly, in a gossip-based protocol each node repeatedly contacts some other node at random and the two nodes exchange information. 
Gossip-based protocols are very simple to implement, while they enjoy strong performance guarantees as a result of randomization. Their use in data mining and machine learning applications is currently finding inroads [4, 5]. In this paper we propose a gossip-based, fully decentralized implementation of the Expectation-Maximization (EM) algorithm for Gaussian mixture learning [6]. Our algorithm, which we call 'Newscast EM', assumes a set of data $\{x_i\}$ that are drawn independently from a common Gaussian mixture and are distributed over the nodes of a network (one data point per node). Newscast EM utilizes a gossip-based protocol in its M-step to learn a global Gaussian mixture model p(x) from the data. The main idea is to perform the M-step in a number of cycles. Each node starts with a local estimate of the model parameters. Then, in every cycle, each node contacts some other node that is chosen at random from a list of known nodes, and the two nodes replace their local model estimates by their (weighted) averages. As we show below, under such a protocol the (erroneous) local models of the individual nodes converge exponentially fast to the (correct) global model in each M-step of the algorithm. Our approach is fundamentally different from other distributed exact implementations of the EM algorithm that resort to global broadcasting [7] or routing trees [8]. In the latter, for instance, data sufficient statistics are propagated through a spanning tree in the network, combined with an incremental learning scheme as in [9]. A disadvantage of that approach is that only one node is carrying out computations at any time step, whereas in Newscast EM all nodes are running the same protocol in parallel. This results in a batch M-step that has average runtime at most logarithmic in the number of nodes, as we will see next.
2 Gaussian mixtures and the EM algorithm A k-component Gaussian mixture for a random vector $x \in \mathbb{R}^d$ is defined as the convex combination
\[ p(x) = \sum_{s=1}^{k} \pi_s\, p(x|s) \qquad (1) \]
of k Gaussian densities $p(x|s) = (2\pi)^{-d/2} |C_s|^{-1/2} \exp[-(x - m_s)^\top C_s^{-1} (x - m_s)/2]$, each parameterized by its mean $m_s$ and covariance matrix $C_s$. The components of the mixture are indexed by the random variable s that takes values from 1 to k, and $\pi_s = p(s)$ defines a discrete prior distribution over the components. Given a set $\{x_1, \ldots, x_n\}$ of independent and identically distributed samples from p(x), the learning task is to estimate the parameter vector $\theta = \{\pi_s, m_s, C_s\}_{s=1}^k$ of the k components that maximizes the log-likelihood $L = \sum_{i=1}^n \log p(x_i; \theta)$. Throughout we assume that the likelihood function is bounded from above (e.g., by placing appropriate bounds on the components' covariance matrices). Maximization of the data log-likelihood L can be carried out by the EM algorithm [6], which can be seen as iteratively maximizing a lower bound of L [9]. This bound F is a function of the current mixture parameters θ and a set of 'responsibility' distributions $\{q_i(s)\}$, $i = 1, \ldots, n$, where each $q_i(s)$ corresponds to a data point $x_i$ and defines an arbitrary discrete distribution over s. This lower bound is given by:
\[ F = \sum_{i=1}^{n} \sum_{s=1}^{k} q_i(s) \big[ \log p(x_i, s; \theta) - \log q_i(s) \big]. \qquad (2) \]
In the E-step of the EM algorithm, the responsibility $q_i(s)$ for each point $x_i$ is set to the Bayes posterior $p(s|x_i)$ given the parameters found in the previous step. In the M-step we solve for the unknown parameters of the mixture by maximizing F for fixed $q_i(s)$. This yields the following updates:
\[ \pi_s = \frac{\sum_{i=1}^n q_i(s)}{n}, \qquad m_s = \frac{\sum_{i=1}^n q_i(s)\, x_i}{n \pi_s}, \qquad C_s = \frac{\sum_{i=1}^n q_i(s)\, x_i x_i^\top}{n \pi_s} - m_s m_s^\top. \qquad (3) \]
Note that the main operation of the M-step is averaging: $\pi_s$ is the average of $q_i(s)$, $m_s$ is the average of the products $q_i(s) x_i$ (divided by $\pi_s$), and the covariance matrix $C_s$ is the average of the matrices $q_i(s) x_i x_i^\top$ (divided by $\pi_s$ and decreased by $m_s m_s^\top$).
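The M-step updates of eq. (3) are built entirely from per-point averages; a minimal NumPy sketch (with made-up data and random responsibilities, purely for illustration) makes this explicit:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 500, 2, 3
X = rng.standard_normal((n, d))          # data points x_i
Q = rng.random((n, k))
Q /= Q.sum(axis=1, keepdims=True)        # responsibilities q_i(s), rows sum to 1

# M-step of eq. (3): every quantity is an average over the n data points.
pi = Q.mean(axis=0)                                     # pi_s = avg_i q_i(s)
m = (Q.T @ X) / (n * pi[:, None])                       # m_s
C = np.stack([(Q[:, s, None] * X).T @ X / (n * pi[s]) - np.outer(m[s], m[s])
              for s in range(k)])                       # C_s

assert np.allclose(pi.sum(), 1.0)        # mixing weights form a distribution
```

Because each update is just an average, any distributed protocol that can compute network-wide averages can implement this M-step, which is exactly what the paper exploits.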
This observation is essential for the proposed algorithm, as we will shortly see. 3 Newscast computing and averaging The proposed distributed EM algorithm for Gaussian mixture learning relies on the use of the Newscast protocol for distributed computing [3]. Newscast is a gossip-based protocol that applies to networks where arbitrary point-to-point communication between nodes is possible, and it involves repeated data exchange between nodes using randomization: with constant frequency each node contacts some other node at random, and the two nodes exchange application-specific data as well as caches with addresses of other nodes. The protocol is very robust, scalable, and simple to implement—its Java implementation is only a few kBytes of code and can run on small network-enabled computing devices such as mobile phones, PDAs, or sensors. As with other gossip-based protocols [2], Newscast can be used for computing the mean of a set of values that are distributed over a network. Suppose that values $v_1, \ldots, v_n$ are stored in the nodes of a network, one value per node. Moreover suppose that each node knows the addresses of all other nodes. To compute $\mu = \frac{1}{n}\sum_{i=1}^n v_i$, each node i initially sets $\mu_i = v_i$ as its local estimate of µ, and then runs the following protocol for a number of cycles: Uniform Newscast (for node i) 1. Contact a node $j = f(i)$ that is chosen uniformly at random from $1, \ldots, n$. 2. Nodes i and j update their estimates as follows: $\mu'_i = \mu'_j = (\mu_i + \mu_j)/2$. For the purpose of analysis we will assume that in each cycle every node initiates a single contact (but in practice the algorithm can be fully asynchronous). Note that the mean of the local estimates $\{\mu_i\}$ is always the correct mean µ, while for their variance the following holds: Lemma 1. In each cycle of uniform Newscast the variance of the local estimates drops on the average by a factor λ, with $\lambda \leq \frac{1}{2\sqrt{e}}$. Proof.¹ Let $\Phi_t = \sum_{i=1}^n (\mu_i - \mu)^2$ be the unnormalized variance of the local estimates $\mu_i$ at cycle t.
Suppose, without loss of generality, that within cycle t nodes initiate contacts in the order $1, 2, \ldots, n$. The new variance after node 1's contact is:
\[ \Phi_1 = \Phi_t - (\mu_1 - \mu)^2 - (\mu_{f(1)} - \mu)^2 + 2\Big(\frac{\mu_1 + \mu_{f(1)}}{2} - \mu\Big)^2 \qquad (4) \]
\[ = \Phi_t - \tfrac{1}{2}(\mu_1 - \mu)^2 - \tfrac{1}{2}(\mu_{f(1)} - \mu)^2 + (\mu_1 - \mu)(\mu_{f(1)} - \mu). \qquad (5) \]
Taking expectation over f, and using the fact that $P[f(i) = j] = \frac{1}{n}$ for all i and j, gives:
\[ E[\Phi_1 \,|\, \Phi_t = \phi] = \phi - \tfrac{1}{2}(\mu_1 - \mu)^2 - \frac{1}{2n}\sum_{j=1}^n (\mu_j - \mu)^2 = \Big(1 - \frac{1}{2n}\Big)\phi - \tfrac{1}{2}(\mu_1 - \mu)^2. \qquad (6) \]
After n such exchanges, the variance $\Phi_{t+1}$ is on the average:
\[ E[\Phi_{t+1} \,|\, \Phi_t = \phi] = \Big(1 - \frac{1}{2n}\Big)^n \phi - \frac{1}{2}\sum_{i=1}^n \Big(1 - \frac{1}{2n}\Big)^{n-i} (\mu_i - \mu)^2. \qquad (7) \]
Bounding the term $(1 - \frac{1}{2n})^{n-i}$ by $(1 - \frac{1}{2n})^n$ finally gives:
\[ E[\Phi_{t+1} \,|\, \Phi_t = \phi] \leq \frac{1}{2}\Big(1 - \frac{1}{2n}\Big)^n \phi \leq \frac{\phi}{2\sqrt{e}}. \qquad (8) \]
¹See [3] for an alternative proof of the same bound. Thus after t cycles of uniform Newscast, the original variance $\phi_0$ of the local estimates is reduced on the average to $\phi_t \leq \phi_0/(2\sqrt{e})^t$. The fact that the variance drops at an exponential rate means that the nodes learn the correct average very fast. Indeed, using Chebyshev's inequality $P_t[|\mu_i - \mu| \geq \varepsilon] \leq \phi_t/(n\varepsilon^2)$ we see that for any ε > 0, the probability that some node makes an estimation error larger than ε drops exponentially fast with the number of cycles t. In particular, we can derive a bound on the number of cycles that are needed in order to guarantee with high probability that all nodes know the correct answer with some specific accuracy: Theorem 1. With probability 1 − δ, after $\lceil 0.581(\log n + 2\log\sigma + 2\log\frac{1}{\varepsilon} + \log\frac{1}{\delta}) \rceil$ cycles of uniform Newscast holds $\max_i |\mu_i - \mu| \leq \varepsilon$, for any ε > 0 and data variance σ². Proof. Using Lemma 1 and the fact that $\phi_0 = n\sigma^2$, we obtain $E[\Phi_t] \leq n\sigma^2/(2\sqrt{e})^t$. Setting $\tau = \log\big(\frac{n\sigma^2}{\varepsilon^2\delta}\big)/\log(2\sqrt{e})$ we obtain $E[\Phi_\tau] \leq \varepsilon^2\delta$. Using the Markov inequality, with probability at least 1 − δ holds $\Phi_\tau \leq \varepsilon^2$. Therefore, since $\Phi_\tau$ is a sum of local terms, for each of them must hold $|\mu_i - \mu| \leq \varepsilon$. It is straightforward to show by induction over τ that the same inequality will hold for any time τ′ > τ.
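The exponential variance decay of Lemma 1 is easy to observe in simulation. The sketch below uses a synchronous, simplified round in which every node initiates one contact (not the full Newscast protocol, and the network size and values are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
mu_local = rng.uniform(0.0, 1.0, size=n)    # local values v_i = initial estimates
mu = mu_local.mean()                         # true mean, invariant under averaging

phi = [np.sum((mu_local - mu) ** 2)]         # unnormalized variance Phi_t
for cycle in range(40):
    for i in range(n):                       # each node initiates one contact
        j = rng.integers(n)
        avg = 0.5 * (mu_local[i] + mu_local[j])
        mu_local[i] = mu_local[j] = avg
    phi.append(np.sum((mu_local - mu) ** 2))

# Each pairwise exchange preserves the sum of estimates, so the mean is exact,
# while the variance shrinks by roughly the bound 1/(2*sqrt(e)) ~ 0.303 per cycle.
assert np.allclose(mu_local.mean(), mu)
assert np.max(np.abs(mu_local - mu)) < 1e-5
```

After 40 cycles the residual variance is many orders of magnitude below its initial value, consistent with the $\phi_t \leq \phi_0/(2\sqrt{e})^t$ decay.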
For example, for unit-variance data and a network with n = 10⁴ nodes we need 49 cycles to guarantee with probability 95% that each node is within 10⁻¹⁰ of the correct answer. Note that in uniform Newscast, each node in the network is assumed to know the addresses of all other nodes, and therefore can choose in each cycle one node uniformly at random to exchange data with. In practice, however, each node can only have a limited cache of addresses of other nodes. In this case, the averaging algorithm is modified as follows: Non-uniform Newscast (for node i) 1. Contact a node $j = f(i)$ that is appropriately chosen from i's local cache. 2. Nodes i and j update their estimates as follows: $\mu'_i = \mu'_j = (\mu_i + \mu_j)/2$. 3. Nodes i and j update their caches appropriately. Step 3 implements a 'membership management' schedule which effectively defines a dynamically changing random graph topology over the network. In our experiments we adopted the protocol of [10], which roughly operates as follows. Each entry k in node i's cache contains an 'age' attribute that indicates the number of cycles that have elapsed since node k created that entry. In step 1 above, node i contacts the node j with the largest age from i's cache, and increases by one the age of all other entries in i's cache. Then node i exchanges estimates with node j as in step 2. In step 3, both nodes i and j select a random subset of their cache entries and mutually exchange them, filling empty slots and discarding self-pointers and duplicates. Finally node i creates an entry with i's address in it and age zero, which is added to j's cache. The resulting protocol is particularly effective and, as we show in the experiments below, in some cases it even outperforms uniform Newscast. We refer to [10] for more details.
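The 49-cycle figure quoted above can be reproduced from Theorem 1's bound. The sketch below assumes the logarithms are taken base 2, which is consistent with the constant 0.581 ≈ 1/log₂(2√e) (this base is an assumption on our part; the theorem statement does not spell it out):

```python
import math

def newscast_cycles(n, sigma=1.0, eps=1e-10, delta=0.05):
    """Cycle count of Theorem 1, ceil(0.581 * (log n + 2 log sigma
    + 2 log(1/eps) + log(1/delta))), with logs assumed base 2."""
    return math.ceil(0.581 * (math.log2(n) + 2 * math.log2(sigma)
                              + 2 * math.log2(1 / eps)
                              + math.log2(1 / delta)))

print(newscast_cycles(10_000))  # 49, matching the example in the text
```

The bound grows only logarithmically in the network size n, which is what makes the batch M-step cheap even for very large networks.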
We are given a set of data $\{x_i\}$ that are distributed over the nodes of a network (one data point per node). The data are assumed independent samples from a common k-component Gaussian mixture p(x) with (unknown) parameters $\theta = \{\pi_s, m_s, C_s\}_{s=1}^k$. The task is to learn the parameters of the mixture with maximum likelihood in a decentralized manner: that is, all learning steps should be performed locally at the nodes, and they should involve as little communication as possible. The NEM algorithm is a direct application of the averaging protocol of Section 3 for estimating the parameters θ of p(x) using the EM updates (3). The E-step of NEM is identical to the E-step of the standard EM algorithm, and it can be performed by all nodes in parallel. The novel characteristic of NEM is the M-step, which is implemented as a sequence of gossip-based cycles: At the beginning of each M-step, each node i starts with a local estimate $\theta_i$ of the 'correct' parameter vector θ (correct according to EM and for the current EM iteration). Then, in every cycle, each node contacts some other node at random, and the two nodes replace their local estimates $\theta_i$ by their (weighted) averages. At the end of the M-step each node has converged (within machine precision) to the correct parameter θ. To simplify notation, we will denote by $\theta_i = \{\pi_{si}, m_{si}, \tilde{C}_{si}\}$ the local estimates of node i for the parameters of component s at any point of the algorithm. The parameter $\tilde{C}_{si}$ is defined such that $C_{si} = \tilde{C}_{si} - m_{si} m_{si}^\top$. The complete algorithm, which runs identically and in parallel for each node, is as follows: Newscast EM (for node i) 1. Initialization. Set $q_i(s)$ to some random positive value and then normalize all $q_i(s)$ to sum to 1 over all s. 2. M-step. Initialize i's local estimates for each component s as follows: $\pi_{si} = q_i(s)$, $m_{si} = x_i$, $\tilde{C}_{si} = x_i x_i^\top$. Then repeat for τ cycles: a. Contact a node $j = f(i)$ from i's local cache. b.
Nodes i and j update their local estimates for each component s as follows:
\[ \pi'_{si} = \pi'_{sj} = \frac{\pi_{si} + \pi_{sj}}{2}, \qquad (9) \]
\[ m'_{si} = m'_{sj} = \frac{\pi_{si} m_{si} + \pi_{sj} m_{sj}}{\pi_{si} + \pi_{sj}}, \qquad (10) \]
\[ \tilde{C}'_{si} = \tilde{C}'_{sj} = \frac{\pi_{si} \tilde{C}_{si} + \pi_{sj} \tilde{C}_{sj}}{\pi_{si} + \pi_{sj}}. \qquad (11) \]
c. Nodes i and j update their caches appropriately. 3. E-step. Compute new responsibilities $q_i(s) = p(s|x_i)$ for each component s using the M-step estimates $\pi_{si}$, $m_{si}$, and $C_{si} = \tilde{C}_{si} - m_{si} m_{si}^\top$. 4. Loop. Go to step 2, unless a stopping criterion is satisfied that involves the parameter estimates themselves or the energy F.² A few observations about the algorithm are in order. First note that both the initialization of the algorithm (step 1) as well as the E-step are completely local to each node. Similarly, a stopping criterion involving the parameter estimates can be implemented locally if each node caches its estimates from the previous EM iteration. The M-step involves a total of k[1 + d + d(d + 1)/2] averages, for each one of the k components and for dimensionality d, which are computed with the Newscast protocol. Given that all nodes agree on the number τ of Newscast cycles in the M-step, and assuming that τ is large enough to guarantee convergence to the correct parameter estimates, the complete NEM algorithm can be performed identically and in parallel by all nodes in the network. It is easy to see that at any cycle of an M-step, and for any component s, the weighted averages over all nodes of the local estimates are always the EM-correct estimates, i.e.,
\[ \frac{\sum_{i=1}^n \pi_{si} m_{si}}{\sum_{i=1}^n \pi_{si}} = m_s \qquad (12) \]
and similarly for the $\tilde{C}_{si}$. Moreover, note that the weighted averages of the $m_{si}$ in (10) and the $\tilde{C}_{si}$ in (11), with weights given by (9), can be written as unweighted averages of the corresponding products $\pi_{si} m_{si}$ and $\pi_{si} \tilde{C}_{si}$.
²Note that F is a sum of local terms, and thus it can also be computed using the same protocol.
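The invariant in eq. (12) — pairwise weighted averaging never changes the global weighted mean — can be checked with a small simulation (a hypothetical setup with one mixture component and scalar data, not the full NEM algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
x = rng.standard_normal(n)          # one scalar data point per node
q = rng.random(n)                   # responsibilities q_i(s) for one component s

# M-step initialization (step 2 of Newscast EM): pi_si = q_i(s), m_si = x_i.
pi, m = q.copy(), x.copy()
target_m = np.sum(q * x) / np.sum(q)            # EM-correct m_s of eqs. (3)/(12)

for cycle in range(60):
    for i in range(n):
        j = rng.integers(n)
        pi_new = 0.5 * (pi[i] + pi[j])                           # eq. (9)
        m_new = (pi[i] * m[i] + pi[j] * m[j]) / (pi[i] + pi[j])  # eq. (10)
        pi[i] = pi[j] = pi_new
        m[i] = m[j] = m_new
    # invariant: the weighted mean of the local estimates stays EM-correct
    assert np.isclose(np.sum(pi * m) / np.sum(pi), target_m)

# after enough cycles every node holds (essentially) the correct estimate
assert np.allclose(m, target_m)
```

The key step is that a merge replaces $\pi_i m_i + \pi_j m_j$ by $2\pi'_{i}m'_{i}$, which is equal to it by construction, so the global sums (and hence the ratio in eq. 12) are preserved exactly.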
In other words, each local estimate can be written as the ratio of two local estimates that converge to the correct values at the same exponential rate (as shown in the previous section). The above observations establish the following: Theorem 2. In every M-step of Newscast EM, each node converges exponentially fast to the correct parameter estimates for each component of the mixture. Similarly, the number of cycles τ for each M-step can be chosen according to Theorem 1. However, note that in every M-step each node has to wait τ cycles before its local estimates have converged, and only then can it use these estimates in a new E-step. We describe here a modification of NEM that allows a node to run a local E-step before its M-step has converged. This 'partial' NEM algorithm is based on the following 'self-correction' idea: instead of waiting until the M-step converges, after a small number of cycles each node runs a local E-step, adjusts its responsibilities, and propagates appropriate corrections through the network. Such a scheme additionally requires that each node caches its responsibilities from the previous E-step, denoted by $\tilde{q}_i(s)$. The only modification is in the initialization of the M-step: instead of fully resetting the local estimates as in step 2 above, a node makes the following corrections to its current estimates $\pi_{si}$, $m_{si}$, $\tilde{C}_{si}$ for each component s:
\[ \pi'_{si} = \pi_{si} + q_i(s) - \tilde{q}_i(s), \qquad (13) \]
\[ m'_{si} = \big\{ m_{si} \pi_{si} + x_i [q_i(s) - \tilde{q}_i(s)] \big\} / \pi'_{si}, \qquad (14) \]
\[ \tilde{C}'_{si} = \big\{ \tilde{C}_{si} \pi_{si} + x_i x_i^\top [q_i(s) - \tilde{q}_i(s)] \big\} / \pi'_{si}. \qquad (15) \]
After these corrections, the Newscast averaging protocol is executed for a number of cycles (smaller than the number τ of the 'full' NEM). These corrections may increase the variance of the local estimates, but in most cases the corresponding increase in variance is relatively small.
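A quick algebraic sanity check of the corrections (13)–(14): if the previous M-step fully converged, the corrected local estimates already have the EM-correct weighted mean for the new responsibilities, before any further gossiping. The sketch below (scalar data, one component, responsibilities kept away from 0 purely for numerical clarity — all values made up) verifies this:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
x = rng.standard_normal(n)
q_old = 0.3 + 0.4 * rng.random(n)    # previous responsibilities q~_i(s)
q_new = 0.3 + 0.4 * rng.random(n)    # updated responsibilities q_i(s)

# Assume the previous M-step fully converged: every node holds the
# EM-correct estimates computed from q_old.
pi = np.full(n, q_old.mean())
m = np.full(n, np.sum(q_old * x) / np.sum(q_old))

# Self-correction of eqs. (13)-(14).
pi_new = pi + q_new - q_old
m_new = (m * pi + x * (q_new - q_old)) / pi_new

# The weighted average of the corrected estimates equals the EM-correct
# mean for the *new* responsibilities.
target = np.sum(q_new * x) / np.sum(q_new)
assert np.isclose(np.sum(pi_new * m_new) / np.sum(pi_new), target)
```

This is why only the deviations from convergence (not the corrections themselves) contribute extra variance that the subsequent gossip cycles must average away.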
This results in speed-ups (often as large as 10-fold); however, guaranteed convergence is hard to establish.³ 5 Experiments To get an insight into the behavior of the presented algorithms we ran several experiments using a Newscast simulator.⁴ In Fig. 1 we demonstrate the performance of uniform and non-uniform Newscast in typical averaging tasks involving zero-mean unit-variance data. In Fig. 1(left) we plot the variance reduction rate λ (mean and one standard deviation over 50 runs) as a function of the number of cycles, for averaging problems involving n = 10⁵ data. Note that in uniform Newscast the observed rate is always below the derived bound 1/(2√e) ≈ 0.303 from Lemma 1. Moreover note that in non-uniform Newscast the variance drops faster than in uniform Newscast. This is due to the fact that the dynamic cache exchange scheme of [10] results in in-degree network distributions that are very peaked around the cache size. In practice this means that on the average each node is equally often contacted by other nodes in each cycle of the protocol. We also observed that the variance reduction rate is on the average unaffected by the network size, while larger networks result in smaller deviations.
³This would require, for instance, that individual nodes have estimates of the total variance over the network, and it is not obvious how this can be done.
⁴Available from http://www.cs.vu.nl/˜steen/globesoul/sim.tgz
Figure 1: (Left) Variance reduction rate of uniform and non-uniform Newscast, in averaging tasks involving n = 10⁵ nodes. (Right) Number of cycles to achieve convergence within ε = 10⁻¹⁰ for unit-variance datasets of various sizes.
For n = 8×10⁵, for instance, the standard deviation is half the one plotted above. In Fig. 1(right) we plot the number of cycles that are required to achieve model accuracy at all nodes within ε = 10⁻¹⁰ as a function of the network size. Note that all observed quantities are below the derived bound of Theorem 1, while non-uniform Newscast performs slightly better than uniform Newscast. We also ran experiments involving synthetic data drawn from Gaussian mixtures, for different numbers of data points, where we observed results essentially identical to those obtained by the standard (centralized) EM. We also performed some experiments with the 'partial' NEM, where it turned out that in most cases we could obtain the same model accuracy with a much smaller number of cycles (5–10 times fewer than the 'full' NEM), but in some cases the algorithm did not converge. 6 Summary and extensions We presented Newscast EM, a distributed gossip-based implementation of the EM algorithm for learning Gaussian mixture models. Newscast EM applies to networks where each one of a (large) number of nodes observes a local quantity, and can communicate with other nodes in a point-to-point fashion. The algorithm utilizes a gossip-based protocol in its M-step to learn a global Gaussian mixture model from the data: each node starts with a local estimate of the parameters of the mixture and then, for a number of cycles until convergence, pairs of nodes repeatedly exchange their local parameter estimates and combine them by (weighted) averaging. Newscast EM implements a batch M-step that has average runtime logarithmic in the network size. We believe that gossip-based protocols like Newscast can be used in several other algorithms that learn models from distributed data. Several extensions of the algorithm are possible. Here we have assumed that each node in the network observes one data point.
We can easily generalize this to situations where each node observes (and stores) a collection of points, like in [8]. On the other hand, if the locally observed data are too many, one may consider storing only some sufficient statistics of these data locally, and appropriately bound the energy F in each iteration to get a convergent EM algorithm [11]. Another interesting extension is to replace the averaging step 2 of uniform and non-uniform Newscast with weighted averaging (for some choice of weights), and study the variance reduction rate in this case. Another interesting problem is when the E-step cannot be performed locally at a node but requires distributing some information over the network. This could be the case, for instance, when each node observes only a few elements of a vector-valued quantity while all nodes together observe the complete sample. We note that if the component models factorize, several useful quantities can be computed by averaging in the log domain. Finally, it would be interesting to investigate the applicability of the Newscast protocol in problems involving distributed inference/learning in more general graphical models [12]. Acknowledgments We want to thank Y. Sfakianakis for helping in the experiments, T. Pylak for making his Newscast simulator publicly available, and D. Barber, Z. Ghahramani, and J.J. Verbeek for their comments. N. Vlassis is supported by PROGRESS, the embedded systems research program of the Dutch organization for Scientific Research NWO, the Dutch Ministry of Economic Affairs and the Technology Foundation STW, project AES 5414. References
[1] R. Karp, C. Schindelhauer, S. Shenker, and B. Vöcking. Randomized rumour spreading. In Proc. 41st IEEE Symp. on Foundations of Computer Science, Redondo Beach, CA, November 2000.
[2] D. Kempe, A. Dobra, and J. Gehrke. Gossip-based computation of aggregate information. In Proc. 44th IEEE Symp.
on Foundations of Computer Science, Cambridge, MA, October 2003.
[3] M. Jelasity, W. Kowalczyk, and M. van Steen. Newscast computing. Technical report IR-CS-006, Dept. of Computer Science, Vrije Universiteit Amsterdam, 2003.
[4] C. C. Moallemi and B. Van Roy. Distributed optimization in adaptive networks. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[5] D. Kempe and F. McSherry. A decentralized algorithm for spectral analysis. In Proc. 36th ACM Symp. on Theory of Computing, Chicago, IL, June 2004.
[6] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Statist. Soc. B, 39:1–38, 1977.
[7] G. Forman and B. Zhang. Distributed data clustering can be efficient and exact. ACM SIGKDD Explorations, 2(2):34–38, 2000.
[8] R. D. Nowak. Distributed EM algorithms for density estimation and clustering in sensor networks. IEEE Trans. on Signal Processing, 51(8):2245–2253, August 2003.
[9] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in graphical models, pages 355–368. Kluwer Academic Publishers, 1998.
[10] S. Voulgaris, D. Gavidia, and M. van Steen. Inexpensive membership management for unstructured P2P overlays. Journal of Network and Systems Management, 2005. To appear.
[11] J. R. J. Nunnink, J. J. Verbeek, and N. Vlassis. Accelerated greedy mixture learning. In Proc. Belgian-Dutch Conference on Machine Learning, Brussels, Belgium, January 2004.
[12] M. A. Paskin and C. E. Guestrin. Robust probabilistic inference in distributed systems. In Proc. 20th Int. Conf. on Uncertainty in Artificial Intelligence, Banff, Canada, July 2004.
2004
Dynamic Bayesian Networks for Brain-Computer Interfaces Pradeep Shenoy Department of Computer Science University of Washington Seattle, WA 98195 pshenoy@cs.washington.edu Rajesh P. N. Rao Department of Computer Science University of Washington Seattle, WA 98195 rao@cs.washington.edu Abstract We describe an approach to building brain-computer interfaces (BCI) based on graphical models for probabilistic inference and learning. We show how a dynamic Bayesian network (DBN) can be used to infer probability distributions over brain- and body-states during planning and execution of actions. The DBN is learned directly from observed data and allows measured signals such as EEG and EMG to be interpreted in terms of internal states such as intent to move, preparatory activity, and movement execution. Unlike traditional classification-based approaches to BCI, the proposed approach (1) allows continuous tracking and prediction of internal states over time, and (2) generates control signals based on an entire probability distribution over states rather than binary yes/no decisions. We present preliminary results of brain- and body-state estimation using simultaneous EEG and EMG signals recorded during a self-paced left/right hand movement task. 1 Introduction The problem of building a brain-computer interface (BCI) has received considerable attention in recent years. Several researchers have demonstrated the feasibility of using EEG signals as a non-invasive medium for building human BCIs [1, 2, 3, 4, 5] (see also [6] and articles in the same issue). A central theme in much of this research is the postulation of a discrete brain state that the user maintains while performing one of a set of physical or imagined actions. The goal is to decode the hidden brain state from the observable EEG signal, and to use the decoded state to control a robot or a cursor on a computer screen. 
Most previous approaches to BCI (e.g., [1, 2, 4]) have utilized classification methods applied to time slices of EEG data to discriminate between a small set of brain states (e.g., left versus right hand movement). These methods typically involve various forms of preprocessing (such as band-pass filtering or temporal smoothing) as well as feature extraction on time slices known to contain one of the chosen set of brain states. The output of the classifier is typically a yes/no decision regarding class membership. A significant drawback of such an approach is the need to have a “point of reference” for the EEG data, i.e., a synchronization point in time where the behavior of interest was performed. Also, classifier-based approaches typically do not model the uncertainty in their class estimates. As a result, it is difficult to have a continuous estimate of the brain state and to associate an uncertainty with the current estimate. In this paper, we propose a new framework for BCI based on probabilistic graphical models [7] that overcomes some of the limitations of classification-based approaches to BCI. We model the dynamics of hidden brain- and body-states using a Dynamic Bayesian Network (DBN) that is learned directly from EEG and EMG data. We show how a DBN can be used to infer probability distributions over hidden state variables, where the state variables correspond to brain states useful for BCI (such as “Intention to move left hand”, “Left hand in motion”, etc). Using a DBN gives us several advantages in addition to providing a continuous probabilistic estimate of brain state. First, it allows us to explicitly model the hidden causal structure and dependencies between different brain states. Second, it facilitates the integration of information from multiple modalities such as EEG and EMG signals, allowing, for example, EEG-derived estimates to be bootstrapped from EMG-derived estimates. 
In addition, learning a dynamic graphical model for time-varying data such as EEG allows other useful operations such as prediction, filling in of missing data, and smoothing of state estimates using information from future data points. These capabilities are difficult to obtain while working exclusively in the frequency domain or using whole slices of the data (or its features) for training classifiers. We illustrate our approach in a simple Left versus Right hand movement task and present preliminary results showing supervised learning and Bayesian inference of hidden state for a dataset containing simultaneous EEG and EMG recordings. 2 The DBN Framework We study the problem of modeling spontaneous movement of the left/right arm using EEG and EMG signals. It is well known that EEG signals show a slow potential drift prior to spontaneous motor activity. This potential drift, known as the Bereitschaftspotential (BP, see [8] for an excellent survey), shows variation in distribution over the scalp with respect to the body part being moved. In particular, the BP related to movement of the left versus right arm shows a strong lateral asymmetry. This allows one to not only estimate the intent to move prior to actual movement, but also distinguish between left and right movements. Previous approaches [1, 2] have utilized BP signals in classification-based BCI protocols based on synchronization cues that identify points of movement onset. In our case, the challenge was to model the structure of BPs and related movement signals using the states of the DBN, and to recognize actions without explicit synchronization cues. Figure 1 shows the complete DBN (referred to as Nfull in this paper) used to model the left/right hand movement task. The hidden state Bt in Figure 1(a) tracks the higher-level brain state over time and generates the hidden EEG and EMG states Et and Mt respectively. These hidden states in turn generate the observed EEG and EMG signals.
The dashed arrows indicate that the hidden states make transitions over time. As shown in Figure 1(b), the state Bt is intended to model the high-level intention of the subject. The figure shows both the values Bt can take as well as the constraints on the transitions between values. The actual probabilities of the allowed transitions are learned from data. The hidden states Et and Mt are intended to model the temporal structure of the EEG and EMG signals, which are generated using a mixture of Gaussians conditioned on Et and Mt respectively. In the same way as the values of Bt are customized for our particular experiment, we would like the state transitions of Et and Mt to also reflect their respective constraints. This is important since it allows us to independently learn the simpler DBN Nemg consisting of only the node Mt and the observed EMG signal. Similarly, we can also independently learn the model Neeg consisting of the node Et and the observed EEG signal. We use the models shown in Figure 2 for the allowed transitions of the states Mt and Et respectively. In particular, Figure 2(a) indicates that the EMG state can transition along one of three chains of states (labeled (1), (2), and (3)), representing the rest state, a left-hand action, and a right-hand action respectively. In each chain, the state Mt in each time step either retains its old value with a given probability (self-pointing arrow) or transitions to the next state value in that particular chain. The transition graph of Figure 2(b) shows similar constraints on the EEG, except that the left and right action chains are further partitioned into intent, action, and post-action subgroups of states, since each of these components is discernible from the BP in EEG (but not from EMG) signals.

Figure 1: Dynamic graphical model for modeling brain and body processes in a self-paced movement task: (a) At each time instant t, the brain state Bt generates the EEG and EMG internal states Et and Mt respectively, which in turn generate the observed EEG and EMG. The dotted arrows represent transitions to a state at the next time step. (b) The transition graph for the brain state Bt. The probability of each allowed transition is learned from input data.
Figure 2: Constrained transition graphs for the hidden EMG and EEG states Mt and Et respectively. (a) The EMG state's transitions between its values mi are constrained to lie in one of three chains: the chains model (1) rest, (2) left arm movement, and (3) right arm movement. (b) In the EEG state transition graph, the left and right movement chains are further divided into state values encoding intent (LI/RI), movement (LM/RM), and post-movement (LPM/RPM).

3 Experiments and Results

3.1 Data Collection and Processing

The task: The subject pressed two distinct keys on a keyboard with the left hand or right hand at random, at a self-initiated pace. We recorded 8 EEG channels around the motor area of cortex (C3, Cz, C4, FC1, FC2, CP1, CP2, Pz) using averaged ear electrodes as reference, and 2 differential pairs of EMG (one on each arm). Data was recorded at 2048 Hz for a period of 20 minutes, with the movements being separated by approximately 3-4 s.

Figure 3: Movement-related potential drift recorded during the hand-movement task: The two plots show the EEG signals averaged over all trials from the motor-related channels C3 and C4 for left (left panel) and right hand movement (right panel). The averages indicate the onset and laterality of upcoming movements.

Processing: The EEG channels were band-pass filtered to 0.5 Hz-5 Hz before being downsampled and smoothed at 128 Hz. The EMG channels were converted to RMS values computed over windows, for an effective sampling rate of 128 Hz.

Data Analysis: The recorded data were first analyzed in the traditional manner by averaging across all trials. Figure 3 shows the average of EEG channels C3 and C4 for left and right hand movement actions respectively. As can be seen, the averages for both channels are different for the two classes.
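The preprocessing just described (0.5-5 Hz band-pass filtering of the EEG and windowed RMS reduction of the EMG) can be sketched as follows; the filter order and all function names are our own choices, not the paper's:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2048  # original sampling rate (Hz)

def bandpass(eeg, lo=0.5, hi=5.0, fs=FS, order=4):
    """Zero-phase band-pass filter, isolating the slow BP drift."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)

def emg_rms(emg, fs=FS, out_rate=128):
    """Collapse raw EMG to RMS values over non-overlapping windows,
    giving an effective sampling rate of out_rate Hz."""
    win = fs // out_rate                   # 16 samples per window here
    n = len(emg) // win
    return np.sqrt((emg[: n * win] ** 2).reshape(n, win).mean(axis=1))

t = np.arange(FS * 2) / FS                 # 2 s of synthetic data
eeg = np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.randn(len(t))
filtered = bandpass(eeg)
rms = emg_rms(np.abs(eeg))
```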
Furthermore, there is a slow potential drift preceding the action and a return to the baseline potential after the action is performed. Previous researchers [1] have classified EEG data over a window leading up to the instant of action with high accuracy (over 90%) into left or right movement classes. Thus, there appears to be a reliable amount of information in the EEG signal for at least discriminating between left versus right movements.

Data Evaluation using SVMs: To obtain a baseline and to evaluate the quality of our recorded data, we tested the performance of linear support vector machines (SVMs) on classifying our EEG data into left and right movement classes. The choice of linear SVMs was motivated by their successful use on similar problems by other researchers [1]. Time slices of 0.5 seconds before each movement were concatenated from all EEG channels and used for classification. We performed hyper-parameter selection using leave-one-out cross-validation on 15 minutes of data and obtained an error of 15% on the remaining 5 minutes of data. Such an error rate is comparable to those obtained in previous studies on similar tasks, suggesting that the recorded data contains sufficient movement-related information to be tested in experiments involving DBNs.

Learning the parameters of the DBN: We used the Graphical Models Toolkit (GMTK) [9] for learning the parameters of our DBN. GMTK provides support for expressing constraints on state transitions (as described in Section 2). It learns the constrained conditional probability tables and the parameters for the mixture of Gaussians using the expectation-maximization (EM) algorithm. We constructed a supervisory signal from the recorded key-presses as follows: A period of 100ms around each keystroke was labeled "motor action" for the appropriate hand. This signal was used to train the network Nemg in a supervised manner.
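The construction of this supervisory signal (a 100 ms window around each keystroke labeled as motor action for the appropriate hand) can be sketched as follows; the numeric state codes are our own, not the paper's:

```python
import numpy as np

FS = 128                       # labels at the downsampled rate (Hz)
REST, LEFT, RIGHT = 0, 1, 2    # hypothetical state codes for illustration

def make_labels(n_samples, presses, fs=FS, width_ms=100):
    """Frame-wise supervisory signal: a window of width_ms centred on each
    keystroke is labelled 'motor action' for the appropriate hand."""
    y = np.full(n_samples, REST)
    half = int(width_ms / 1000 * fs) // 2        # samples on each side
    for t, hand in presses:                      # t given in samples
        y[max(0, t - half): t + half] = LEFT if hand == "L" else RIGHT
    return y

# 10 s of labels with one left press at sample 400 and one right at 900.
labels = make_labels(1280, [(400, "L"), (900, "R")])
```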
To generate a supervisory signal for the network Neeg, or the full combined network Nfull (Figure 1), we added prefixes and postfixes of 150ms each to each action in this signal, and labeled them "preparatory" and "post-movement" activity respectively. These time periods were chosen by examining the average EEG and EMG activity over all actions. Thus, we can use partial (EEG only) or full evidence in the inference step to obtain probability distributions over brain state. The following sections describe our learning procedure and inference results in greater detail.

3.2 Learning and Inference with EMG

Our first step is to learn the simpler model Nemg that has only the hidden Mt state and the observed EMG signal. This is to test inference using the EMG signal alone. The parameters of this DBN were learned in a supervised manner. We used 15 minutes of EMG data to train our simplified model, and then tested it on the remaining 5 minutes of data. The model was tested using Viterbi decoding (a single pass of max-product inference over the network). In other words, the maximum a posteriori (MAP) sequence of values for hidden states was computed. Figure 4 shows a 100s slice of data containing 2 channels of EMG, and the predicted hidden EMG state Mt. The states 0, 1 and 2 correspond to "no action", left, and right actions respectively. In the shown figure, the state Mt successfully captures not only all the obvious arm movements but also the actions that are obscured by noise.

3.3 Learning the EEG Model

We used the supervisory signal described earlier to learn the corresponding EEG model Neeg. Note that the brain state can be inferred from the hidden EEG state Et directly, since the state space is appropriately partitioned as shown in Figure 2(b). Figure 5 shows the result of inference on the learned model Neeg using only the EEG signals as evidence.
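The Viterbi decoding used to test these models, a single max-product pass that recovers the MAP hidden-state sequence, can be sketched generically (this is a textbook implementation on a toy two-state model, not GMTK's):

```python
import numpy as np

def viterbi(log_T, log_E, obs, log_prior):
    """MAP hidden-state sequence by max-product dynamic programming.
    log_T[i, j] = log P(j | i); log_E[i, k] = log P(obs k | state i)."""
    delta = log_prior + log_E[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + log_T          # score every predecessor
        back.append(scores.argmax(axis=0))       # remember the best one
        delta = scores.max(axis=0) + log_E[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(back):                    # trace the best path back
        path.append(int(bp[path[-1]]))
    return path[::-1]

log_T = np.log([[0.9, 0.1], [0.1, 0.9]])   # toy "sticky" two-state chain
log_E = np.log([[0.8, 0.2], [0.2, 0.8]])
states = viterbi(log_T, log_E, [0, 0, 1, 1, 1], np.log([0.5, 0.5]))
print(states)  # [0, 0, 1, 1, 1]
```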
The figure shows a subset of the EEG channels (C3,Cz,C4), the supervisory signal, and the predicted brain state Bt (the MAP estimate). The figure shows that many of the instances of action (but not all) are correctly identified by the model. Our model gives us at each time instant a MAP-estimated state sequence that best describes the past, and the probability associated with that state sequence. This gives us, at each time instant, a measure of how likely each brain state Bt is, with reference to the others. For convenience, we can use the probability associated with the REST state (see Figure 1) as reference. Figure 6 shows a graphical illustration of this instantaneous time estimate. The plotted graphs are, in order, the supervisory signal (i.e., the “ground truth value”) and the instantaneous measures of likelihood of intention/movement/post-movement states for the left and right hand respectively. For convenience, we represent the likelihood ratio of each state’s MAP probability estimate to that of the rest state, and use a logarithmic scale. We see that the true hand movements are correctly inferred in a surprisingly large number of cases (log likelihood ratio crosses 0). Furthermore, the actual likelihood values convey a measure of the uncertainty in the inference, a property that would be of great value for critical BCI applications such as controlling a robotic wheelchair. In summary, our graphical models Nemg and Neeg have shown promising results in correctly identifying movement onset from EMG and EEG signals respectively. Ongoing work is focused on improving accuracy by using features extracted from EEG, and inference using both EEG and EMG in Nfull (the full model). 
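The instantaneous measure plotted in Figure 6, the log ratio of each state's MAP probability estimate to that of the REST state, is just a difference of log probabilities; a minimal sketch with made-up numbers:

```python
import numpy as np

def state_log_ratios(log_map_probs, rest_index=0):
    """Log-likelihood ratio of each state's MAP probability against the
    REST state's; positive values favour the state over rest."""
    return log_map_probs - log_map_probs[:, [rest_index]]

# Two time steps, states ordered (rest, left, right); values are made up.
lp = np.log(np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1]]))
ratios = state_log_ratios(lp)   # rest column is identically zero
```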
Figure 4: Bayesian Inference of Movement using EMG: The figure shows 100 seconds of EMG data from two channels along with the MAP state sequence predicted by our trained EMG model. The states 0, 1, 2 correspond to "no action", left, and right actions respectively. Our model correctly identifies the obscured spikes in the noisy right EMG channel.

4 Discussion and Conclusion

We have shown that dynamic Bayesian networks (DBNs) can be used to model the transitions between brain and muscle states as a subject performs a motor task. In particular, a two-level hierarchical network was proposed for simultaneously estimating higher-level brain state and lower-level EEG and EMG states in a left/right hand movement task. The results demonstrate that for a self-paced movement task, hidden brain states useful for BCI, such as the intention to move the left or right hand, can be decoded from a DBN learned directly from EEG and EMG data. Previous work on BCIs can be grouped into two broad classes: self-regulatory BCIs and BCIs based on detecting brain state. Self-regulatory BCIs rely on training the user to regulate certain features of the EEG, such as cortical positivity [10] or oscillatory activity (the µ rhythm, see [5]), in order to control, for example, a cursor on a display. The approach presented in this paper falls in the second class of BCIs, those based on detecting brain states [1, 2, 3, 4]. However, rather than employing classification methods, we use probabilistic graphical models for inferring brain state and learning the transition probabilities between brain states. Successfully learning a dynamic graphical model as suggested in this paper offers several advantages over traditional classification-based schemes for BCI.
It allows one to explicitly model the hidden causal structure and dependencies between different brain states. It provides a probabilistic framework for integrating information from multiple modalities such as EEG and EMG signals, allowing, for example, EEG-derived estimates to be bootstrapped from EMG-derived estimates. A dynamic graphical model for time-varying data such as EEG also allows prediction, filling in of missing data, and smoothing of state estimates using information from future data points, properties not easily achieved in methods that work exclusively in the frequency domain or use data slices for training classifiers. Our current efforts are focused on investigating methods for learning dynamic graphical models for motor tasks of varying complexity and using these models to build robust, probabilistic BCI systems.

Figure 5: Bayesian Inference of Brain State using EEG: The figure shows 1 minute of EEG data (at 128 Hz) for the channels C3, Cz, C4, along with the "true" brain state and the brain state inferred using our DBN model with only EEG evidence. State 0 is the rest state, states 1 through 3 represent left hand movement, and 4 through 6 represent right hand movement (see Figure 1(b)).

References

[1] B. Blankertz, G. Curio, and K.-R. Müller. Classifying single trial EEG: Towards brain computer interfacing. In Advances in Neural Information Processing Systems 12, 2001.
[2] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller. Combining features for BCI. In Advances in Neural Information Processing Systems 15, 2003.
[3] J. D. Bayliss and D. H. Ballard. Recognizing evoked potentials in a virtual environment.
In Advances in Neural Information Processing Systems 12, 2000.
[4] P. Meinicke, M. Kaper, F. Hoppe, M. Heumann, and H. Ritter. Improving transfer rates in brain computer interfacing: a case study. In Advances in Neural Information Processing Systems 15, 2003.
[5] J. R. Wolpaw, D. J. McFarland, and T. M. Vaughan. Brain-computer interfaces for communication and control. IEEE Transactions on Rehabilitation Engineering, pages 222–226, 2000.

Figure 6: Probabilistic Estimation of Brain State: The figure shows the supervisory signal, along with a probabilistic measure of the current state for left and right actions respectively. The measure shown is the log ratio of the instantaneous MAP estimate for the relevant state and the estimate for the rest state.

[6] J. R. Wolpaw et al. Brain-computer interface technology: a review of the first international meeting. IEEE Transactions on Rehabilitation Engineering, 8:164–173, 2000.
[7] R. E. Neapolitan. Learning Bayesian Networks. Prentice Hall, NJ, 2004.
[8] M. Jahanshahi and M. Hallett. The Bereitschaftspotential: movement-related cortical potentials. Kluwer Academic, New York, 2002.
[9] J. Bilmes and G. Zweig. The Graphical Models Toolkit: An open source software system for speech and time-series processing. In IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, Orlando FL, 2002.
[10] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor. A spelling device for the paralyzed. Nature, 398:297–298, 1999.
Edge of Chaos Computation in Mixed-Mode VLSI - "A Hard Liquid"

Felix Schürmann, Karlheinz Meier, Johannes Schemmel
Kirchhoff Institute for Physics, University of Heidelberg
Im Neuenheimer Feld 227, 69120 Heidelberg, Germany
felix.schuermann@kip.uni-heidelberg.de, WWW home page: http://www.kip.uni-heidelberg.de/vision

Abstract

Computation without stable states is a computing paradigm different from Turing's and has been demonstrated for various types of simulated neural networks. This publication transfers this approach to a hardware-implemented neural network. Results of a software implementation are reproduced, showing that the performance peaks when the network exhibits dynamics at the edge of chaos. The liquid computing approach seems well suited for operating analog computing devices such as the VLSI neural network used here.

1 Introduction

Using artificial neural networks for problem solving immediately raises the issue of their general trainability and the appropriate learning strategy. Topology seems to be a key element, especially since algorithms do not necessarily perform better when the size of the network is simply increased. Hardware-implemented neural networks, on the other hand, offer scalability in complexity and gain in speed, but naturally do not compete in flexibility with software solutions. Except for specific applications or highly iterative algorithms [1], the capabilities of hardware neural networks as generic problem solvers are difficult to assess in a straightforward fashion. Independently, Maass et al. [2] and Jaeger [3] proposed the idea of computing without stable states. They both used randomly connected neural networks as non-linear dynamical systems, with the inputs causing perturbations to the transient response of the network. In order to customize such a system for a problem, a readout is trained which requires only the network response of a single time step as input.
The readout may be as simple as a linear classifier: 'training' then reduces to a well-defined least-squares linear regression. Justification for this splitting into a non-linear transformation followed by a linear one originates from Cover [4]. He proved that the probability for a pattern classification problem to be linearly separable is higher when it is cast into a high-dimensional space by a non-linear mapping. In the terminology of Maass et al., the non-linear dynamical system is called a liquid, and together with the readouts it represents a liquid state machine (LSM). It has been proven that under certain conditions the LSM concept is universal on functions of time [2]. Adopting the liquid computing strategy for mixed-mode hardware-implemented networks using very large scale integration (VLSI) offers two promising prospects: First, such a system profits immediately from scaling, i.e., more neurons increase the complexity of the network dynamics while not increasing training complexity. Second, it is expected that the liquid approach can cope with an imperfect substrate as commonly present in analog hardware. Configuring highly integrated analog hardware as a liquid therefore seems a promising way toward analog computing. This conclusion is not unexpected, since the liquid computing paradigm was inspired by a complex and 'analog' system in the first place: the biological nervous system [2]. This publication presents initial results on configuring a general-purpose mixed-mode neural network ASIC (application specific integrated circuit) as a liquid. The custom-made ANN ASIC [5] used here provides 256 McCulloch-Pitts neurons with about 33k analog synapses and allows a wide variety of topologies, especially highly recurrent ones. In order to operate the ASIC as a liquid, a generation procedure proposed by Bertschinger et al. [6] is adopted to generate the network topology and weights.
These authors also showed that the performance of such input-driven networks (by which is meant the suitability of the network dynamics to act as a liquid) depends on whether the response of the liquid to the inputs is ordered or chaotic. Precisely, according to a special measure, the performance peaks when the liquid is in between order and chaos. The reconfigurability of the used ANN ASIC makes it possible to explore various generation parameters, i.e., physically different liquids are evaluated; the obtained experimental results are in accordance with the previously published software simulations [6].

2 Substrate

The substrate used in the following is a general-purpose ANN ASIC manufactured in a 0.35 µm CMOS process [5]. Its design goals were to implement small synapses while being quickly reconfigurable and capable of operating at high speed; it therefore combines analog computation with digital signaling. It comprises 33k analog synapses with capacitive weight storage (nominal 10-bit plus sign) and 256 McCulloch-Pitts neurons. For efficiency it employs mostly current-mode circuits. Experimental benchmark results using evolutionary training strategies have previously been published [1]. A full weight refresh can be performed within 200 µs, and in the current setup one network cycle, i.e., the time base of the liquid, lasts about 0.5 µs. This is due to the prototype nature of the ASIC and its input/output; the core can already be operated about 20 times faster. The analog operation of the chip is limited to the synaptic weights ω_ij and the input stage of the output neurons. Since both the input signals (I_j) and output signals (O_i) of the network are binary, the weight multiplication reduces to a summation, and the activation function g(x) of the output neurons equals the Heaviside function Θ(x):

O_i = g(Σ_j ω_ij I_j),   g(x) = Θ(x),   I, O ∈ {0, 1}.  (1)
The neural network chip is organized in four identical blocks; each represents a fully connected one-layer perceptron with McCulloch-Pitts neurons. One block basically consists of 128×64 analog synapses that connect each of the 128 inputs to each of the 64 output neurons. The network operates in a discrete-time update scheme, i.e., Eq. 1 is calculated once per network cycle. By feeding outputs back to the inputs, a block can be configured as a recurrent network (cf. Fig. 1). Additionally, outputs of the other network blocks can be fed back to the block's input. In this case the output of a neuron at time t depends not only on the actual input but also on the previous network cycle and the activity of the other blocks. Denoting the time needed for one network cycle by ∆t, the output function of one network block becomes:

O_i^a(t + ∆t) = Θ( Σ_j ω_ij I_j^a(t) + Σ_{x∈{a,b,c,d}} Σ_k ω_ik^x O_k^x(t) ).  (2)

The first term in the argument of the activation function is the external input to the network block, I_j^a. The second term models the feedback path from the output of block a, O_k^a, as well as from the other three blocks b, c, d, back to its input. For two network blocks this is illustrated in Fig. 1. In principle, this model allows an arbitrarily large network that operates synchronously at a common network frequency f_net = 1/∆t, since the external input can be the output of other identical network chips.

Figure 1: Network blocks can be configured for different input sources.

Figure 2: Intra- and inter-block routing schematic of the used ANN ASIC.

For the following experiments one complete ANN ASIC is used. Since one output neuron has 128 inputs, it cannot be connected to all 256 neurons simultaneously. Furthermore, it can only make arbitrary connections to neurons of the same block, whereas the inter-block feedback fixes certain output neurons to certain inputs. Details of the routing are illustrated in Fig.
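The per-block update of Eq. 2 (a threshold applied to the external input plus the fed-back outputs from the previous cycle) can be sketched as follows; the dimensions match one block of the ASIC, but the random weights and run length are arbitrary illustration choices:

```python
import numpy as np

def recurrent_step(W_in, W_fb, I_t, O_prev):
    """One network cycle of a block (cf. Eq. 2): new binary outputs are the
    thresholded sum of external-input and feedback contributions."""
    return (W_in @ I_t + W_fb @ O_prev > 0).astype(int)

rng = np.random.default_rng(2)
W_in = rng.uniform(-1, 1, size=(64, 128))   # 128 external inputs per block
W_fb = rng.uniform(-1, 1, size=(64, 64))    # feedback of the block's outputs
O = np.zeros(64, dtype=int)
for t in range(5):                          # run a few network cycles
    O = recurrent_step(W_in, W_fb, rng.integers(0, 2, 128), O)
```

Feedback across the four blocks would add further terms of the same form, one weight matrix per source block.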
2. The ANN ASIC is connected to a standard PC via a custom-made PCI-based interface card that uses programmable logic to control the neural network chip.

3 Liquid Computing Setup

Following the terminology introduced by Maass et al., the ANN ASIC represents the liquid. Appropriately configured, it acts as a non-linear filter on the input. The response of the neural network ASIC at a certain time step is called the liquid state x(t). This output is provided to the readout; in our case these are one or more linear classifiers implemented in software. The classifier result, and thus the response of the liquid state machine at time t, is given by:

v(t) = Θ(Σ_i w_i x_i(t)).  (3)

The weights w_i are determined by a least-squares linear regression calculated for the desired target values y(t). Using the same liquid state x(t), multiple readouts can be used to predict differing target functions simultaneously (cf. Fig. 3).

Figure 3: The liquid state machine setup.

The setup used is similar to that of Bertschinger et al. [6], with the central difference that the liquid here is implemented in hardware. The specific hardware design imposes McCulloch-Pitts type neurons that are either on or off (O ∈ {0, 1}) rather than symmetric (O ∈ {−1, 1}). Apart from this, the topology and weight configuration of the ANN ASIC follow the procedure used by Bertschinger et al. The random generation of such input-driven networks is governed by the following parameters: N, the number of neurons; k, the number of incoming connections per neuron; σ², the variance of the zero-centered Gaussian distribution from which the weights for the incoming connections are drawn; and u(t), the external input signal driving each neuron. Bertschinger et al.
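Training the linear readout of Eq. 3 by least squares can be sketched on synthetic data (the binary "liquid states" below are random stand-ins, not ASIC output):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(4000, 257)).astype(float)  # liquid states x(t)
X[:, -1] = 1.0                          # constant bias column
w_true = rng.normal(size=257)
y = (X @ w_true > 0).astype(float)      # synthetic, linearly separable target

# Least-squares fit of the readout weights, then threshold (Eq. 3).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
v = (X @ w > 0.5).astype(float)         # v(t) = Theta(x(t) . w)
accuracy = (v == y).mean()
```

Because the fit is a single closed-form regression, many readouts can be trained on the same matrix of liquid states at negligible extra cost.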
used a random binary input signal u(t) which assumes, with equal chance, the value u + 1 or u − 1. Since the used ANN ASIC has a fixed dynamic range for a single synapse, a weight can assume a normalized value in the interval [−1, 1] with 11-bit accuracy. For this reason, the input signal u(t) is split into a constant bias part u and a varying part, which in turn is split into an excitatory contribution and its inverse. Each neuron of the network then gets k inputs from other neurons, one constant bias of weight u, and two mutually exclusive input neurons with weights 0.5 and −0.5. The latter modification was introduced to account for the fact that the inner neurons assume only the values {0, 1}. Using the input and its inverse in this way recovers a differential weight change of 1 between the active and inactive state. The performance of the liquid state machine is evaluated according to the mutual information of the target values y(t) and the predicted values v(t). This measure is defined as:

MI(v, y) = Σ_{v′} Σ_{y′} p(v′, y′) log2 [ p(v′, y′) / (p(v′) p(y′)) ],  (4)

where p(v′) = probability{v(t) = v′} with v′ ∈ {0, 1}, and p(v′, y′) is the joint probability. It can be calculated from the confusion matrix of the linear classifier and can be given the dimension of bits. In order to assess the capability to account for inputs of preceding time steps, it is sensible to define another measure, the memory capacity MC (cf. [7]):

MC = Σ_{τ=0}^{∞} MI(v_τ, y_τ),  (5)

where v_τ and y_τ denote the prediction and target shifted in time by τ steps (i.e., y_τ(t) = y(t − τ)). It, too, is measured in bits.

4 Results

A linear classifier by definition cannot solve a linearly non-separable problem. It therefore is a good test of the non-trivial contribution of the liquid if a liquid state machine with a linear readout has to solve a linearly non-separable problem.
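The mutual information of Eq. 4 (and, by summing over time shifts τ, the memory capacity of Eq. 5) can be estimated directly from binary prediction/target sequences; a sketch:

```python
import numpy as np

def mutual_information(v, y):
    """Eq. 4: mutual information in bits between binary prediction v and
    target y, computed from their joint (confusion-matrix) frequencies.
    Summing MI over time shifts tau gives the memory capacity MC (Eq. 5)."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((v == a) & (y == b))
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (np.mean(v == a) * np.mean(y == b)))
    return mi

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 8000)
mi_perfect = mutual_information(y, y)                        # close to 1 bit
mi_chance = mutual_information(rng.integers(0, 2, 8000), y)  # close to 0 bits
```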
The benchmark problem used in the following is 3-bit parity in time, i.e., y_τ(t) = PARITY(u(t − τ), u(t − τ − 1), u(t − τ − 2)), which is known to be linearly non-separable. The linear classifiers are trained to predict the linearly non-separable y_τ(t) simply from the liquid state x(t). For this to work, information about the previous time steps must be present in the liquid state at time t. Bertschinger et al. showed theoretically and in simulation that, depending on the parameters k, σ², and u, an input-driven neural network shows ordered or chaotic dynamics. This causes input information either to disappear quickly (the simplest case would be an identity map from input to output) or to stay in the network forever. Although the transition of the network dynamics from order to chaos happens gradually with the variation of the generation parameters (k, σ², u), the performance as a liquid shows a distinct peak when the network exhibits dynamics in between order and chaos. These critical dynamics suggest the term "computation at the edge of chaos", which originates with Langton [8]. The following results are obtained using the ANN ASIC as the liquid on a random binary input string u(t) of length 4000, for which the linear classifier is calculated. The shown mutual information and memory capacity are the measured performance on a random binary test string of length 8000. For each time shift τ, a separate classifier is calculated. For each parameter set (k, σ², u) this procedure is repeated several times (for exact numbers compare the individual plots), i.e., several liquids are generated. Fig. 4 shows the mutual information MI versus the shift in time τ for the 3-bit delayed parity problem, with the network parameters fixed to N = 256, k = 6, σ² = 0.14, and u = 0.
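The random generation and simulation of such an input-driven liquid, with parameters N, k, σ², u and the ±0.5 input/inverse construction described above, can be sketched as follows (seed and run length are arbitrary; in the full setup the collected states would then feed the linear readout, e.g. trained on the delayed-parity targets):

```python
import numpy as np

rng = np.random.default_rng(6)
N, k, sigma2, u_bias = 256, 6, 0.14, 0.0   # generation parameters (cf. Fig. 4)

# Each neuron receives k recurrent connections with zero-mean Gaussian
# weights of variance sigma^2; the binary drive u(t) reaches every neuron
# through the +0.5 / -0.5 input-and-inverse pair, plus a constant bias u.
W = np.zeros((N, N))
for i in range(N):
    idx = rng.choice(N, size=k, replace=False)
    W[i, idx] = rng.normal(0.0, np.sqrt(sigma2), size=k)

u = rng.integers(0, 2, size=4000)          # random binary input string u(t)
x = np.zeros(N)
states = np.empty((len(u), N))
for t in range(len(u)):
    drive = 0.5 * u[t] - 0.5 * (1 - u[t]) + u_bias
    x = (W @ x + drive > 0).astype(float)  # Theta activation, {0, 1} neurons
    states[t] = x                          # liquid state x(t) for the readout

# 3-bit parity target for a given shift tau (undefined for the first steps):
def parity_target(u, tau):
    y = np.full(len(u), -1)
    y[tau + 2:] = (u[2: len(u) - tau] + u[1: len(u) - tau - 1]
                   + u[: len(u) - tau - 2]) % 2
    return y
```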
Plotted are the mean values of 50 liquids evaluated in hardware, and the given limits are the standard deviation in the mean. From the error limits it can be inferred that the parity problem is solved in all runs for τ = 0, and in some for τ = 1. For larger time shifts the performance decreases until the liquid has no information on the input anymore.

Figure 4: The mutual information between prediction and target for the 3-bit delayed parity problem versus the delay, for k = 6, σ² = 0.14. The plotted limits are the 1-sigma spreads of 50 different liquids. The integral under this curve is the mean MC and is the maximum in the left plot of Fig. 5.

Figure 5: Shown are two parameter sweeps for the 3-bit delayed parity in dependence of the generation parameters k and σ², with fixed N = 256, u = 0. Left: 50 liquids per parameter set evaluated in hardware. Right: 35 liquids per parameter set using a software simulation of the ASIC, but with symmetric neurons. Actual data points are marked with black dots; the gray shading shows an interpolation. The largest three mean MCs are marked with a white dot, asterisk, and plus sign.

In order to assess how different generation parameters influence the quality of the liquid, parameter sweeps are performed. For each parameter set several liquids are generated and readouts trained. The obtained memory capacities of the runs are averaged and used as the performance measure. Fig. 5 shows a parameter sweep of k and σ² for the memory capacity MC for N = 256 and u = 0. On the left side, results obtained with the hardware are shown.
The shading shows an interpolation of the actual measured values, which are marked with dots. The largest three mean MCs are marked in order with a white circle, white asterisk, and white plus. It can be seen that the memory capacity peaks distinctly along a hyperbola-like band. The area below the transition band goes along with ordered dynamics; above it, the network exhibits chaotic behavior. The shape of the transition indicates a constant network activity for critical dynamics. The standard deviation in the mean of 50 liquids per parameter set is below 2%, i.e., the transition is significant. The transition is not shown in a u-σ² sweep, as done originally by Bertschinger et al., because in the hardware setup only a limited parameter range of σ² and u is accessible, due to synapses of the range [−1, 1] with a limited resolution. The accessible region (σ² ∈ [0, 1] and u ∈ [0, 1]) nonetheless exhibits a similar transition as described by Bertschinger et al. (not shown). The smaller overall performance in memory capacity compared to their liquids, on the other hand, is simply due to the asymmetric neurons and not to other hardware restrictions, as can be seen from the right side of Fig. 5. There the same parameter sweep is shown, but this time the liquid is implemented in a software simulation of the ASIC with symmetric neurons. While all connectivity constraints of the hardware are incorporated in the simulation, the only other change in the setup is the adjustment of the input signal to u ± 1. 35 liquids per parameter set are evaluated. The observed performance decrease thus results from the asymmetry of the {0, 1} neurons; a similar effect is observed by Bertschinger et al. for u ≠ 0.
[Figure 6 (axes: inputs k vs. σ²; left panel: mean MI [bit] of 50 random 5-bit Boolean functions; right panel: standard deviations of the distributions; regions labeled "order" and "chaos"): Mean mutual information of 50 simultaneously trained linear classifiers on randomly drawn 5-bit Boolean functions using the hardware liquid (10 liquids per parameter set evaluated). The right plot shows the 1-sigma spreads.] Finally, the hardware-based liquid state machine was tested on 50 randomly drawn Boolean functions of the last 5 inputs (5 bits in time) (cf. Fig. 6). In this setup, 50 linear classifiers read out the same liquid simultaneously to calculate their independent predictions at each time step. The mean mutual information (τ = 0) for the 50 classifiers in 10 runs is plotted. From the right plot it can be seen that the standard deviation for the single measurement along the critical line is fairly small; this shows that critical dynamics yield a generic liquid independent of the readout. 5 Conclusions & Outlook Computing without stable states manifests a new computing paradigm different from the Turing approach. This has been investigated by different authors for various types of neural networks, both theoretically and in software simulation. In the present publication these ideas are transferred back to an analog computing device: a mixed-mode VLSI neural network. Earlier published results of Bertschinger et al. were reproduced, showing that readout with linear classifiers is especially successful when the network exhibits critical dynamics. Beyond the point of solving rather academic problems like 3-bit parity, the liquid computing approach may be well suited to make use of the massive resources found in analog computing devices, especially since the liquid is generic, i.e. independent of the readout.
The experiments with the general-purpose ANN ASIC make it possible to explore the necessary connectivity and accuracy of future hardware implementations. With even higher integration densities the inherent unreliability of the elementary parts of VLSI systems grows, making fault-tolerant training and operation methods necessary. Even though it has not been shown in this publication, initial experiments suggest that the liquids used are robust against faults introduced after the readout has been trained. As a next step it is planned to use parts of the ASIC to realize the readout. Such a liquid state machine can make use of the hardware implementation and will be able to operate in real time on continuous data streams. References [1] S. Hohmann, J. Fieres, K. Meier, J. Schemmel, T. Schmitz, and F. Schürmann. Training fast mixed-signal neural networks for data classification. In Proceedings of the International Joint Conference on Neural Networks IJCNN'04, pages 2647–2652. IEEE Press, July 2004. [2] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560, 2002. [3] H. Jaeger. The "echo state" approach to analysing and training recurrent neural networks. Technical Report GMD Report 148, German National Research Center for Information Technology, 2001. [4] T. M. Cover. Geometrical and statistical properties of systems of linear inequalities with application in pattern recognition. IEEE Transactions on Electronic Computers, EC-14:326–334, 1965. [5] J. Schemmel, S. Hohmann, K. Meier, and F. Schürmann. A mixed-mode analog neural network using current-steering synapses. Analog Integrated Circuits and Signal Processing, 38(2-3):233–244, February-March 2004. [6] N. Bertschinger and T. Natschläger. Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16(7):1413–1436, July 2004. [7] T. Natschläger and W.
Maass. Information dynamics and emergent computation in recurrent circuits of spiking neurons. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Proc. of NIPS 2003, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004. [8] C. G. Langton. Computation at the edge of chaos. Physica D, 42, 1990.
|
2004
|
168
|
2,583
|
Breaking SVM Complexity with Cross-Training Gökhan H. Bakır Max Planck Institute for Biological Cybernetics, Tübingen, Germany gb@tuebingen.mpg.de Léon Bottou NEC Labs America Princeton NJ, USA leon@bottou.org Jason Weston NEC Labs America Princeton NJ, USA jasonw@nec-labs.com Abstract We propose to selectively remove examples from the training set using probabilistic estimates related to editing algorithms (Devijver and Kittler, 1982). This heuristic procedure aims at creating a separable distribution of training examples with minimal impact on the position of the decision boundary. It breaks the linear dependency between the number of SVs and the number of training examples, and sharply reduces the complexity of SVMs during both the training and prediction stages. 1 Introduction The number of Support Vectors (SVs) has a dramatic impact on the efficiency of Support Vector Machines (Vapnik, 1995) during both the learning and prediction stages. Recent results (Steinwart, 2004) indicate that the number k of SVs increases linearly with the number n of training examples. More specifically, k/n → 2B_K (1) where n is the number of training examples and B_K is the smallest classification error achievable with the SVM kernel K. When using a universal kernel such as the Radial Basis Function kernel, B_K is the Bayes risk B, i.e. the smallest classification error achievable with any decision function. The computational requirements of modern SVM training algorithms (Joachims, 1999; Chang and Lin, 2001) are largely determined by the amount of memory required to store the active segment of the kernel matrix. When this amount exceeds the available memory, the training time increases quickly because some kernel matrix coefficients must be recomputed multiple times. During the final phase of the training process, the active segment always contains all the k(k + 1)/2 dot products between SVs.
Steinwart's result (1) then suggests that the critical amount of memory scales at least like B²n². This can be practically prohibitive for problems with either big training sets or large Bayes risk (noisy problems). Large numbers of SVs also penalize SVMs during the prediction stage, as the computation of the decision function requires a time proportional to the number of SVs. When the problem is separable, i.e. B = 0, equation (1) suggests (see also Steinwart, 2004, remark 3.8) that the number k of SVs increases less than linearly with the number n of examples. This improves the scaling laws for the SVM computational requirements. In this paper, we propose to selectively remove examples from the training set using probabilistic estimates inspired by training set editing algorithms (Devijver and Kittler, 1982). The removal procedure aims at creating a separable set of training examples without modifying the location of the decision boundary. Making the problem separable breaks the linear dependency between the number of SVs and the number of training examples. 2 Related work 2.1 Salient facts about SVMs We focus now on the C-SVM applied to the two-class pattern recognition problem. See (Burges, 1998) for a concise reference. Given n training patterns x_i and their associated classes y_i = ±1, the SVM decision function is: f(x) = Σ_{i=1}^{n} α*_i y_i K(x_i, x) + b* (2) The coefficients α*_i in (2) are obtained by solving a quadratic programming problem: α* = arg max_α Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j K(x_i, x_j) (3) subject to ∀i, 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0. This optimization yields three categories of training examples depending on α*_i. Within each category, the possible values of the margins y_i f(x_i) are prescribed by the Karush-Kuhn-Tucker optimality conditions. - Examples such that α*_i = C are called bouncing SVs or margin errors and satisfy y_i f(x_i) < 1. The set of bouncing SVs includes all training examples misclassified by the SVM, i.e.
those which have a negative margin y_i f(x_i) < 0. - Examples such that 0 < α*_i < C are called ordinary SVs and satisfy y_i f(x_i) = 1. - Examples such that α*_i = 0 satisfy the relation y_i f(x_i) > 1. These examples play no role in the SVM decision function (2). Retraining after discarding these examples would still yield the same SVM decision function (2). These facts provide some insight into Steinwart's result (1). The SVM decision function, like any other decision rule, must asymptotically misclassify at least Bn examples, where B is the Bayes risk. All these examples must therefore become bouncing SVs. To illustrate the dependence on the Bayes risk, we perform a linear classification task in two dimensions under varying amounts of class overlap. The class distributions were uniform on a unit square with centers c1 and c2. Varying the distance between c1 and c2 allows us to control the Bayes risk. The results are shown in figure 1. 2.2 A posteriori reduction of the number of SVs. Several techniques aim to reduce the prediction complexity of SVMs by expressing the SVM solution (2) with a smaller kernel expansion. Since one must compute the SVM solution before applying these post-processing techniques, they are not suitable for reducing the complexity of the training stage. Reduced Set Construction. Burges (Burges, 1996) proposes to construct new patterns z_j in order to define a compact approximation of the decision function (2). Reduced set construction usually involves solving a non-convex optimization problem and is not applicable to arbitrary inputs such as graphs or strings. [Figure 1 (x-axis: Bayes risk; y-axis: log #SV; curves: #(α = C) for SVM, #(α < C) for SVM, rank of K_sv): Effect of noise on the number of support vectors. The number of ordinary SVs stays almost constant whereas the number of bouncing SVs grows. Additional support vectors do not give extra information, as indicated by the rank of the kernel matrix. See section 2.1.]
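For concreteness, decision function (2) and the three KKT categories just listed can be sketched in a few lines (the RBF kernel and all names are our own illustrative choices, not code from the paper):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """K(x, z) = exp(-gamma * ||x - z||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def svm_decision(alpha, y, X_train, b, X_query, gamma=1.0):
    """Decision function (2): f(x) = sum_i alpha_i* y_i K(x_i, x) + b*."""
    return (alpha * y) @ rbf_kernel(X_train, X_query, gamma) + b

def categorize(alpha, C, tol=1e-8):
    """Split dual coefficients into bouncing SVs (alpha = C),
    ordinary SVs (0 < alpha < C) and non-SVs (alpha = 0)."""
    bouncing = np.flatnonzero(alpha >= C - tol)
    ordinary = np.flatnonzero((alpha > tol) & (alpha < C - tol))
    non_sv = np.flatnonzero(alpha <= tol)
    return bouncing, ordinary, non_sv
```

Dropping the non-SV rows from `X_train` leaves `svm_decision` unchanged, which is exactly the observation the a posteriori reduction methods build on.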
[Figure 2 (x-axis: SVs ordered by increasing margin y·f(x) and decreasing α): Histogram of SVs selected by the ℓ1 penalization method on the MNIST 3-8 discrimination task. The initial SVs have been ordered on the x-axis by increasing margin y f(x) and decreasing α. See the last paragraph in section 2.2.] Reduced Set Selection. The set of basis functions K(x_i, ·) associated with the SVs x_i does not necessarily constitute a linearly independent family (we keep the customary name "basis functions" despite the linear dependence). The same decision function f(·) can then be expressed by multiple linear combinations of the functions K(x_i, ·). Reduced set selection methods attempt to select a subset of the SVs that is sufficient to express the SVM decision function. For instance, (Downs, Gates and Masters, 2001) propose to compute the row echelon form of the kernel matrix and discard SVs that lead to zero rows. This approach maintains the original SVM decision function. In contrast, the ℓ1 penalization method suggested in (Schölkopf and Smola, 2002, sect. 18.4.2) simply attempts to construct a sufficiently good approximation of the original SVM decision function by solving arg min_β ‖ Σ_i α*_i y_i K(x_i, ·) − Σ_i β_i y_i K(x_i, ·) ‖²_K + λ Σ_i |β_i| (4) where parameter λ trades accuracy versus sparsity, and ‖·‖_K denotes the Reproducing Kernel Hilbert Space norm (Schölkopf and Smola, 2002, definition 2.9). Simplifying expression (4) yields a numerically tractable quadratic programming problem. Which examples are selected? We have investigated the ℓ1 penalization method (4) as follows. We train a first SVM to discriminate digits 3 and 8 on the MNIST dataset (see section 4.2) after randomly swapping 10% of the class labels in the training set. We then select a subset of the resulting support vectors using the ℓ1 penalization method. Choosing λ is quite difficult in practice. To evaluate the accuracy of the procedure, we train a second SVM on the selected vectors and compare its recognition accuracy with that of the first SVM. This was best achieved by enforcing the constraint β_i ≥ 0 in (4), because the second SVM cannot return an expansion with negative coefficients. Figure 2 shows the histogram of selected SVs. The initial support vectors have been ordered on the x-axis by increasing values of y_i f(x_i), and, in the case of margin SVs, by decreasing values of α_i. The selected SVs include virtually no misclassified SVs, but instead concentrate on SVs with large α_i. This result suggests that simple pre-processing methods might indicate which training examples are really critical for SVM classification. 2.3 Training set editing techniques We now consider techniques for reducing the set of training examples before running a training algorithm. Reducing the amount of training data is indeed an obvious way to reduce the complexity of training. Quantization and clustering methods might be used to achieve this goal. These methods however reduce the training data without considering the loss function of interest, and therefore sacrifice classification accuracy. We focus instead on editing techniques, i.e.
techniques for discarding selected training examples with the aim of achieving similar or better classification accuracy. Two prototypical editing techniques, MULTIEDIT and CONDENSE, have been thoroughly studied (Devijver and Kittler, 1982, chapter 3) in the context of the nearest neighbor (1-NN) classification rule. Removing interior examples. The CONDENSE algorithm was first described by (Hart, 1968). This algorithm selects a subset of the training examples whose 1-NN decision boundary still classifies correctly all of the initial training examples: Algorithm 1 (CONDENSE). 1 Select a random training example and put it in set R. 2 For each training example i = 1, ..., n: classify example i using the 1-NN rule with set R as the training set, and insert it into R if it is misclassified. 3 Return to step 2 if R has been modified during the last pass. 4 The final contents of R constitute the condensed training set. This is best understood when both classes form homogeneous clusters in the feature space. Algorithm 1 discards training examples located in the interior of each cluster. This strategy works poorly when there is a large overlap between the pattern distributions of both classes, that is to say when the Bayes risk B is large. Consider for instance a feature space region where P(y = +1 | x) > P(y = −1 | x) > 0. A small number of training examples of class y = −1 can still appear in such a region. We say that they are located on the wrong side of the Bayes decision boundary. Asymptotically, all such training examples belong to the condensed training set in order to ensure that they are properly recognized as members of class y = −1. Removing noise examples. The Edited Nearest Neighbor rule (Wilson, 1972) suggests first discarding all training examples that are misclassified when applying the 1-NN rule using all n − 1 remaining examples as the training set.
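Algorithm 1 above can be sketched directly with a plain-numpy 1-NN rule (variable names are ours; ties and efficiency are ignored):

```python
import numpy as np

def nn_predict(R_X, R_y, x):
    """1-NN prediction of x from the reference set R."""
    d = ((R_X - x) ** 2).sum(axis=1)
    return R_y[np.argmin(d)]

def condense(X, y, seed=0):
    """Hart's CONDENSE: keep a subset R whose 1-NN rule still classifies
    every original training example correctly."""
    rng = np.random.default_rng(seed)
    keep = [rng.integers(len(X))]          # step 1: one random seed example
    changed = True
    while changed:                          # step 3: repeat passes until stable
        changed = False
        for i in range(len(X)):             # step 2: insert misclassified points
            if i in keep:
                continue
            if nn_predict(X[keep], y[keep], X[i]) != y[i]:
                keep.append(i)
                changed = True
    return np.sort(np.array(keep))
```

By construction, the loop only terminates once a full pass inserts nothing, i.e. once the condensed 1-NN rule classifies all original examples correctly.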
It was shown that removing these examples improves the asymptotic performance of the nearest neighbor rule. Whereas the 1-NN risk is asymptotically bounded by 2B, the Edited 1-NN risk is asymptotically bounded by 1.2B, where B is the Bayes risk. The MULTIEDIT algorithm (Devijver and Kittler, 1982, section 3.11) asymptotically discards all the training examples located on the wrong side of the Bayes decision boundary. The asymptotic risk of the multi-edited nearest neighbor rule is the Bayes risk B. Algorithm 2 (MULTIEDIT). 1 Randomly divide the training data into s splits S_1, ..., S_s. Let us call f_i the 1-NN classifier that uses S_i as the training set. 2 Classify all examples in S_i using the classifier f_{(i+1) mod s} and discard all misclassified examples. 3 Gather all the remaining examples and return to step 1 if any example has been discarded during the last T iterations. 4 The remaining examples constitute the multiedited training set. By discarding examples located on the wrong side of the Bayes decision boundary, algorithm MULTIEDIT constructs a new training set whose apparent distribution has the same Bayes decision boundary as the original problem, but with Bayes risk equal to 0. Devijver and Kittler claim that MULTIEDIT produces an ideal training set for CONDENSE. Algorithm MULTIEDIT also discards some proportion of training examples located on the correct side of the Bayes decision boundary. Asymptotically this does not matter. However this is often a problem in practice... 2.4 Editing algorithms and SVMs Training examples recognized with high confidence usually do not appear in the SVM solution (2) because they do not become support vectors. On the other hand, outliers always become support vectors. Intuitively, SVMs display the properties of CONDENSE but lack the properties of the MULTIEDIT algorithm. The mathematical proofs for the asymptotic properties of MULTIEDIT depend on the specific nature of the 1-NN classification rule.
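For reference, Algorithm 2 above admits an equally short sketch (again a plain 1-NN rule; helper names and the stopping bookkeeping are our own):

```python
import numpy as np

def multiedit(X, y, s=3, T=3, seed=0):
    """MULTIEDIT sketch: repeatedly split the data, classify each split with
    the 1-NN rule trained on the next split, and discard misclassified points."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(X))
    quiet = 0                                   # passes without any discard
    while quiet < T and len(idx) > s:
        perm = rng.permutation(idx)
        splits = np.array_split(perm, s)
        survivors = []
        for i, Si in enumerate(splits):
            ref = splits[(i + 1) % s]           # classifier f_{(i+1) mod s}
            for j in Si:
                d = ((X[ref] - X[j]) ** 2).sum(axis=1)
                if y[ref][np.argmin(d)] == y[j]:
                    survivors.append(j)
        quiet = quiet + 1 if len(survivors) == len(idx) else 0
        idx = np.array(survivors)
    return np.sort(idx)
```

On clean, well-separated data nothing is discarded; on overlapping classes the points on the wrong side of the Bayes boundary are the ones that tend to disappear.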
The MULTIEDIT algorithm itself could be identically defined for any classifier. This suggests (but does not prove) that these properties might remain valid for SVM classifiers.3 This contribution is an empirical attempt to endow Support Vector Machines with the properties of the MULTIEDIT algorithm. Editing SVM training sets implicitly modifies the SVM loss function in a way that relates to robust statistics. Editing alters the apparent distribution of training examples such that the class distributions P(x | y = 1) and P(x | y = −1) no longer overlap. If the class distributions were known, this could be done by trimming the tails of the class distributions. A similar effect could be obtained by altering the SVM loss function (the hinge loss) into a non-convex loss function that gives less weight to outliers. 3 Cross-Training Cross-Training is a representative algorithm of such combinations of SVMs and editing algorithms. It begins by creating s subsets of the training set with r examples each. Independent SVMs are then trained on each subset. The decision functions of these SVMs are then used to discard two types of training examples: those which are confidently recognized, as in CONDENSE, and those which are misclassified, as in MULTIEDIT. A final SVM is then trained using the remaining examples. Algorithm 3 (CROSSTRAINING). 1 Create s subsets of size r by randomly drawing r/2 examples of each class. 2 Train s independent SVMs f_1, ..., f_s using each of the subsets as the training set. 3 For each training example (x_i, y_i) estimate the margin average m_i and variance v_i: m_i = (1/s) Σ_{r=1}^{s} y_i f_r(x_i), v_i = (1/s) Σ_{r=1}^{s} (m_i − y_i f_r(x_i))². 4 Discard all training examples for which m_i + v_i < 0. 5 Discard all training examples for which m_i − v_i > 1. 6 Train a final SVM on the remaining training examples. The apparent simplicity of this algorithm hides a lot of hyperparameters.
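Steps 3–5 of Algorithm 3 reduce to a small filtering function once the s margins y_i f_r(x_i) are available (a sketch; the array layout is our choice):

```python
import numpy as np

def cross_training_filter(margins):
    """Steps 3-5 of Cross-Training: `margins` is an (n, s) array whose entry
    (i, r) is y_i * f_r(x_i). Keep the indices of examples that are neither
    noise (m_i + v_i < 0) nor confidently recognized (m_i - v_i > 1)."""
    m = margins.mean(axis=1)
    v = margins.var(axis=1)      # (1/s) * sum_r (m_i - y_i f_r(x_i))^2
    keep = ~((m + v < 0) | (m - v > 1))
    return np.flatnonzero(keep)
```

Note that `numpy`'s default (population) variance matches the v_i definition in step 3 exactly.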
The values of the C parameters for the SVMs at steps [2] and [6] have a considerable effect on the overall performance of the algorithm. For the first-stage SVMs, we choose the C parameter which yields the best performance on training sets of size r. For the second-stage SVMs, we choose the C parameter which yields the best overall performance measured on a separate validation set. Furthermore, we discovered that the discarding steps tend to produce a final set of training examples with very different numbers of examples for each class. Specific measures to alleviate this problem are discussed in section 4.3. 3 Further comfort comes from the knowledge that an SVM with the RBF kernel and without threshold term b implements the 1-NN rule when the RBF radius tends to zero. [Figure 3 (x-axis: training set size; panels: number of SVs, test error, training time in seconds; curves: LIBSVM vs. X-Train LIBSVM): Comparing LIBSVM and Cross-Training on a toy problem of two Gaussian clouds for an increasing number of training points. Cross-Training gives an almost constant number of support vectors (left figure) for increasing training set size, whereas in LIBSVM the number of support vectors increases linearly. The error rates behave similarly (middle figure), and Cross-Training gives an improved training time (right figure). See section 4.1.] 4 Experiments 4.1 Artificial Data We first constructed artificial data by generating two classes from two Gaussian clouds in 10 dimensions with means (1, 1, 1, 1, 1, 0, 0, 0, 0, 0) and (−1, −1, −1, −1, −1, 0, 0, 0, 0, 0) and standard deviation 4. We trained a linear SVM for differing amounts of training points, selecting C via cross-validation.
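The toy setup just described can be generated as follows (a sketch with our own function and parameter names):

```python
import numpy as np

def make_gaussian_clouds(n, std=4.0, dim=10, informative=5, seed=0):
    """Two classes from two Gaussian clouds whose means are +/-1 in the
    first `informative` coordinates and 0 elsewhere (std deviation `std`)."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    mu[:informative] = 1.0
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * mu + rng.normal(scale=std, size=(n, dim))
    return X, y
```

With standard deviation 4 the two clouds overlap heavily, so the Bayes risk is far from zero, which is exactly the regime where, per equation (1), the SV count of a plain SVM grows linearly with n.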
We compare the performance of LIBSVM4 with Cross-Training using LIBSVM with s = 5, averaging over 10 splits. The results given in figure 3 show a reduction in SVs and computation time using Cross-Training, with no loss in accuracy. 4.2 Artificial Noise Our second experiment involves the discrimination of digits 3 and 8 in the MNIST5 database. Artificial noise was introduced by swapping the labels of 0%, 5%, 10% and 15% of the examples. There are 11982 training examples and 1984 testing examples. All experiments were carried out using LIBSVM's ν-SVM (Chang and Lin, 2001) with the RBF kernel (γ = 0.005). Cross-Training was carried out by splitting the 11982 training examples into 5 subsets. Figure 4 reports our results for various amounts of label noise. The number of SVs (left figure) increases linearly for the standard SVM and stays constant for the Cross-Training SVM. The test errors (middle figure) seem similar. Since our label noise is artificial, we can also measure the misclassification rate on the unmodified testing set (right figure). This measurement shows a slight loss of accuracy without statistical significance. [Figure 4 (x-axis: amount of label noise, 0%–15%): Number of SVs (left figure) and test error (middle figure) for varying amounts of label noise on the MNIST 3-8 discrimination task. The x-axis in all graphs shows the amount of label noise; white squares correspond to LIBSVM, black circles to Cross-Training, and dashed lines to bagging the first-stage Cross-Training SVMs. The last graph (right figure) shows the test error measured without label noise. See section 4.2.] 4.3 Benchmark Data Finally, the cross-training algorithm was applied to real data sets from both the ANU repository6 and the UCI repository7. Experimental results were quite disappointing until we realized that the discarding steps tend to produce training sets with very different numbers of examples for each class. To alleviate this problem, after training each SVM, we choose the value of the threshold b* in (2) which achieves the best validation performance. We also attempt to balance the final training set by re-inserting examples discarded during step [5] of the cross-training algorithm. Experiments were carried out using RBF kernels with the kernel width reported in the literature. In the SVM experiments, the value of parameter C was determined by cross-validation and then used for training a SVM on the full dataset. In the cross-training experiments, we make a validation set by taking r/3 examples from the training set. These examples are only used for choosing the values of C and for adjusting the SVM thresholds. Details and source code are available8. Footnotes: 4 http://www.csie.ntu.edu.tw/~cjlin/libsvm/ 5 http://yann.lecun.com/exdb/mnist 6 http://mlg.anu.edu.au/~raetsch/data/index.html 7 ftp://ftp.ics.uci.edu/pub/machine-learning-databases

Dataset     Train Size   Test Size   SVM Perf.[%]   SVM #SV   XTrain Subsets   XTrain Perf.[%]   XTrain #SV
Banana             400        4900          89.0        111           5×200             88.2            51
Waveform           400        4600          90.2        172           5×200             88.7            87
Splice            1000        2175          90.0        601           5×300             89.9           522
Adult             3185       16280          84.2       1207           5×700             84.2           606
Adult            32560       16280          85.1      11325          5×6000             84.8          1194
Forest           50000       58100          90.3      12476         5×10000             89.2          7967
Forest           90000       58100          91.6      18983         5×18000             90.7         13023
Forest          200000       58100           —            —          8×30000             92.1         19526

Table 1: Comparison of SVM and Cross-Training results on standard benchmark data sets. The columns in table 1 contain the dataset name, the size of the training set used for the experiment, the size of the test set, the SVM accuracy and number of SVs, and the Cross-Training subset configuration, accuracy, and final number of SVs. Bold typeface indicates which differences were statistically significant according to a paired test. These numbers should be considered carefully because they are impacted by the discrete nature of the grid search for parameter C.
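The validation-based threshold adjustment described above can be sketched as a one-dimensional scan over candidate offsets (a minimal version of our own; candidates are midpoints between consecutive sorted scores):

```python
import numpy as np

def best_threshold(scores, labels):
    """Pick the additive offset b that maximizes validation accuracy of
    sign(f(x) + b), given raw validation scores f(x) and labels in {-1, +1}."""
    s = np.sort(scores)
    # candidate offsets: one below all scores, midpoints, one above all scores
    cands = np.concatenate(([s[0] - 1.0], (s[:-1] + s[1:]) / 2, [s[-1] + 1.0]))
    accs = [(np.sign(scores + b) == labels).mean() for b in cands]
    return cands[int(np.argmax(accs))]
```

Because accuracy only changes when the offset crosses a score, scanning these O(n) candidates is sufficient to find a global optimum on the validation set.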
The general trend still indicates that Cross-Training causes a slight loss of accuracy but requires far fewer SVs. Our largest training set contains 200000 examples. Training a standard SVM on such a set takes about one week of computation. We do not report this result because it was not practical to determine a good value of C for this experiment. Cross-Training with specified hyperparameters runs overnight. Cross-Training with hyperparameter grid searches runs in two days. We do not report detailed timing results because much of the actual time can be attributed to the search for the proper hyperparameters. Timing results would then depend on loosely controlled details of the hyperparameter grid search algorithms. 8 http://www.kyb.tuebingen.mpg.de/bs/people/gb/xtraining 5 Discussion We have suggested combining SVMs and training set editing techniques to break the linear relationship between the number of support vectors and the number of examples. Such combinations raise interesting theoretical questions regarding the relative value of each of the training examples. Experiments with a representative algorithm, namely Cross-Training, confirm that both the training and the recognition time are sharply reduced. On the other hand, Cross-Training causes a minor loss of accuracy, comparable to that of reduced set methods (Burges, 1996), and seems to be more sensitive than SVMs in terms of parameter tuning. Despite these drawbacks, Cross-Training provides a practical means to construct kernel classifiers with significantly larger training sets. 6 Acknowledgement We thank Hans Peter Graf, Eric Cosatto and Vladimir Vapnik for their advice and support. Part of this work was funded by NSF grant CCR-0325463. References Burges, C. J. C. (1996). Simplified Support Vector Decision Rules. In Saitta, L., editor, Proceedings of the 13th International Conference on Machine Learning, pages 71–77, San Mateo, CA. Morgan Kaufmann. Burges, C. J. C. (1998).
A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2):121–167. Chang, C.-C. and Lin, C.-J. (2001). Training ν-Support Vector Classifiers: Theory and Algorithms. Neural Computation, 13(9):2119–2147. Devijver, P. and Kittler, J. (1982). Pattern Recognition: A Statistical Approach. Prentice Hall, Englewood Cliffs. Downs, T., Gates, K. E., and Masters, A. (2001). Exact Simplification of Support Vector Solutions. Journal of Machine Learning Research, 2:293–297. Hart, P. (1968). The condensed nearest neighbor rule. IEEE Transactions on Information Theory, 14:515–516. Joachims, T. (1999). Making Large-Scale SVM Learning Practical. In Schölkopf, B., Burges, C. J. C., and Smola, A. J., editors, Advances in Kernel Methods — Support Vector Learning, pages 169–184, Cambridge, MA. MIT Press. Schölkopf, B. and Smola, A. J. (2002). Learning with Kernels. MIT Press, Cambridge, MA. Steinwart, I. (2004). Sparseness of Support Vector Machines—Some Asymptotically Sharp Bounds. In Thrun, S., Saul, L., and Schölkopf, B., editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA. Vapnik, V. N. (1995). The Nature of Statistical Learning Theory. Springer Verlag, New York. Wilson, D. L. (1972). Asymptotic properties of the nearest neighbor rules using edited data. IEEE Transactions on Systems, Man, and Cybernetics, 2:408–420.
|
2004
|
169
|
2,584
|
An Investigation of Practical Approximate Nearest Neighbor Algorithms Ting Liu, Andrew W. Moore, Alexander Gray and Ke Yang School of Computer Science Carnegie-Mellon University Pittsburgh, PA 15213 USA {tingliu, awm, agray, yangke}@cs.cmu.edu Abstract This paper concerns approximate nearest neighbor searching algorithms, which have become increasingly important, especially in high-dimensional perception areas such as computer vision, with dozens of publications in recent years. Much of this enthusiasm is due to a successful new approximate nearest neighbor approach called Locality Sensitive Hashing (LSH). In this paper we ask the question: can earlier spatial data structure approaches to exact nearest neighbor, such as metric trees, be altered to provide approximate answers to proximity queries, and if so, how? We introduce a new kind of metric tree that allows overlap: certain datapoints may appear in both children of a parent. We also introduce new approximate k-NN search algorithms on this structure. We show why these structures should be able to exploit the same random-projection-based approximations that LSH enjoys, but with a simpler algorithm and perhaps with greater efficiency. We then provide a detailed empirical evaluation on five large, high-dimensional datasets which shows up to 31-fold accelerations over LSH. This result holds true throughout the spectrum of approximation levels. 1 Introduction The k-nearest-neighbor searching problem is to find the k nearest points in a dataset X ⊂ R^D containing n points to a query point q ∈ R^D, usually under the Euclidean distance. It has applications in a wide range of real-world settings, in particular pattern recognition, machine learning [7] and database querying [11]. Several effective methods exist for this problem when the dimension D is small (e.g. 1 or 2), such as Voronoi diagrams [26], or when the dimension is moderate (e.g. up to the 10's), such as kd-trees [8] and metric trees.
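The problem statement itself corresponds to the brute-force baseline that all of the structures discussed below try to beat (a sketch with our own names; O(nD) per query):

```python
import numpy as np

def knn_search(X, q, k):
    """Brute-force k-NN under the Euclidean distance: return the indices of
    the k points of X closest to q, and their distances."""
    d = np.sqrt(((X - q) ** 2).sum(axis=1))
    idx = np.argsort(d)[:k]
    return idx, d[idx]
```

Both the exact tree methods and the approximate methods of this paper are judged against the answers this linear scan would produce.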
Metric trees [29], or ball-trees [24], so far represent the practical state of the art for achieving efficiency in the largest dimensionalities possible [22, 6]. However, many real-world problems are posed with very large dimensionalities which are beyond the capability of such search structures to achieve sub-linear efficiency, for example in computer vision, in which each pixel of an image represents a dimension. Thus, the high-dimensional case is the long-standing frontier of the nearest-neighbor problem. Approximate searching. One approach to dealing with this apparent intractability has been to define a different problem, the (1 + ε) approximate k-nearest-neighbor searching problem, which returns points whose distance from the query is no more than (1 + ε) times the distance of the true kth nearest neighbor. Further, the problem is often relaxed to only do this with high probability, without a certificate property telling the user when it has failed to do so, and without any guarantee on the actual rank of the distance of the points returned, which may be arbitrarily far from k [4]. Another commonly used modification to the problem is to perform the search under the L1 norm rather than L2. Locality Sensitive Hashing. Several methods of this general nature have been proposed [17, 18, 12], and locality-sensitive hashing (LSH) [12] has received considerable recent attention because it was shown that its runtime is independent of the dimension D, and it has been put forth as a practical tool [9]. Roughly speaking, a locality-sensitive hashing function has the property that if two points are "close," then they hash to the same bucket with "high" probability; if they are "far apart," then they hash to the same bucket with "low" probability. Formally, a function family H = {h : S → U} is (r1, r2, p1, p2)-sensitive, where r1 < r2 and p1 > p2, for distance function D if for any two points p, q ∈ S, the following properties hold: 1. if p ∈ B(q, r1), then Pr_{h∈H}[h(q) = h(p)] ≥ p1, and 2.
if p ∉ B(q, r2), then Pr_{h∈H}[h(q) = h(p)] ≤ p2, where B(q, r) denotes a hypersphere of radius r centered at q. By defining an LSH scheme, namely a (r, r(1 + ε), p1, p2)-sensitive hash family, the (1 + ε)-NN problem can be solved by performing a series of hashing and searching within the buckets. See [12, 13] for details. Applications such as computer vision, e.g. [23, 28], have found (1 + ε) approximation to be useful, for example when the k-nearest-neighbor search is just one component in a large system with many parts, each of which can be highly inaccurate. In this paper we explore the extent to which the most successful exact search structures can be adapted to perform (1 + ε) approximate high-dimensional searches. A notable previous approach along this line is a simple modification of kd-trees [3]; ours takes the more powerful metric trees as a starting point. We next review metric trees, then introduce a variant, known as spill trees.

2 Metric Trees and Spill Trees

2.1 Metric Trees

The metric tree [29, 25, 5] is a data structure that supports efficient nearest neighbor search; we briefly review it here. A metric tree organizes a set of points in a spatial hierarchical manner. It is a binary tree whose nodes represent a set of points. The root node represents all points, and the points represented by an internal node v are partitioned into two subsets, represented by its two children. Formally, if we use N(v) to denote the set of points represented by node v, and use v.lc and v.rc to denote the left child and the right child of node v, then we have

N(v) = N(v.lc) ∪ N(v.rc) (1)
∅ = N(v.lc) ∩ N(v.rc) (2)

for all non-leaf nodes. At the lowest level, each leaf node contains very few points.

Partitioning. The key to building a metric tree is how to partition a node v. A typical way is as follows. We first choose two pivot points from N(v), denoted v.lpv and v.rpv. Ideally, v.lpv and v.rpv are chosen so that the distance between them is the largest of all pairwise distances within N(v).
More specifically, ∥v.lpv − v.rpv∥ = max_{p1,p2 ∈ N(v)} ∥p1 − p2∥. However, it takes O(n²) time to find the optimal v.lpv and v.rpv. In practice, we resort to a linear-time heuristic that is still able to find reasonable pivot points.¹ After v.lpv and v.rpv are found, we can go ahead and partition node v. Here is one possible strategy for partitioning. We first project all the points onto the vector ⃗u = ⃗v.rpv − ⃗v.lpv, and then find the median point A along ⃗u. Next, we assign all the points projected to the left of A to v.lc, and all the points projected to the right of A to v.rc. We use L to denote the (d−1)-dimensional plane that is orthogonal to ⃗u and goes through A. It is known as the decision boundary, since all points to the left of L belong to v.lc and all points to the right of L belong to v.rc (see Figure 1). By using a median point to split the datapoints, we can ensure that the depth of a metric tree is O(log n). However, in our implementation, we use a mid point (i.e. the point at (1/2)(⃗v.lpv + ⃗v.rpv)) instead, since it is more efficient to compute, and in practice we can still obtain a metric tree of depth O(log n). Each node v also has a hypersphere B, such that all points represented by v fall in the ball centered at v.center with radius v.r, i.e. we have N(v) ⊆ B(v.center, v.r). Notice that the balls of the two children nodes are not necessarily disjoint.

¹Basically, we first randomly pick a point p from v. Then we search for the point that is farthest from p and set it to be v.lpv. Next we find a third point that is farthest from v.lpv and set it as v.rpv.

Figure 1: partitioning in a metric tree. Figure 2: partitioning in a spill tree.

Searching. A search on a metric tree is simply a guided DFS (for simplicity, we assume that k = 1). The decision boundary L is used to decide which child node to search first. If the query q is on the left of L, then v.lc is searched first; otherwise, v.rc is searched first.
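The partitioning heuristic and the guided DFS can be sketched as follows, including the ball-based pruning test ∥v.center − q∥ − v.r ≥ r that the search description uses; this is a simplified illustration under assumed data layouts (plain dicts and tuples), not the authors' implementation:

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build(points, rng, leaf_size=8):
    """Metric-tree construction: linear-time pivot heuristic, mid-point split,
    and a bounding ball (center, radius) stored at each internal node."""
    if len(points) <= leaf_size:
        return {"pts": points}
    p = rng.choice(points)
    lpv = max(points, key=lambda x: dist(x, p))     # farthest from a random point
    rpv = max(points, key=lambda x: dist(x, lpv))   # farthest from lpv
    u = [b - a for a, b in zip(lpv, rpv)]
    proj = lambda x: sum(xi * ui for xi, ui in zip(x, u))
    thr = (proj(lpv) + proj(rpv)) / 2.0             # mid-point split along u
    lc = [x for x in points if proj(x) <= thr]
    rc = [x for x in points if proj(x) > thr]
    if not lc or not rc:                            # degenerate split: make a leaf
        return {"pts": points}
    center = [sum(c) / len(points) for c in zip(*points)]
    return {"center": center, "r": max(dist(center, x) for x in points),
            "proj": proj, "thr": thr, "lc": build(lc, rng), "rc": build(rc, rng)}

def mt_dfs(node, q, best=None):
    """Guided DFS; prunes a node when ||v.center - q|| - v.r >= r_best."""
    if "pts" in node:
        for x in node["pts"]:
            d = dist(q, x)
            if best is None or d < best[0]:
                best = (d, x)
        return best
    if best is not None and dist(node["center"], q) - node["r"] >= best[0]:
        return best                                 # no point in this ball can win
    first, second = ("lc", "rc") if node["proj"](q) <= node["thr"] else ("rc", "lc")
    best = mt_dfs(node[first], q, best)
    return mt_dfs(node[second], q, best)
```

The pruning is conservative (triangle inequality), so the search is still exact; the backtracking cost of proving exactness is what the spill-tree variant below attacks.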
At all times, the algorithm maintains a "candidate NN", which is the nearest neighbor it has found so far while traversing the tree. We call this point x, and denote the distance between q and x by r. If the DFS is about to explore a node v, but discovers that no member of v can be within distance r of q, then it prunes this node (i.e., skips searching this node, along with all its descendants). This happens whenever ∥v.center − q∥ − v.r ≥ r. We call this DFS search algorithm MT-DFS hereafter. In practice, the MT-DFS algorithm is very efficient for NN search, particularly when the dimension of a dataset is low (say, less than 30). Typically for MT-DFS, we observe an order-of-magnitude speed-up over naive linear scan and other popular data structures such as SR-trees. However, MT-DFS starts to slow down as the dimension of the dataset increases. We have found that in practice, metric tree search typically finds a very good NN candidate quickly, and then spends up to 95% of the time verifying that it is in fact the true NN. This motivated our new proposed structure, the spill-tree, which is designed to avoid the cost of exact NN verification.

2.2 Spill-Trees

A spill-tree (sp-tree) is a variant of metric trees in which the children of a node can "spill over" onto each other, and contain shared datapoints. The partition procedure of a metric tree implies that the point-sets of v.lc and v.rc are disjoint: these two sets are separated by the decision boundary L. In a sp-tree, we change the splitting criteria to allow overlaps between the two children. In other words, some datapoints may belong to both v.lc and v.rc. We first explain how to split an internal node v. See Figure 2 for an example. As in a metric tree, we first choose two pivots v.lpv and v.rpv, and find the decision boundary L that goes through the mid point A. Next, we define two new separating planes, LL and LR, both of which are parallel to L and at distance τ from L.
Then, all the points to the right of plane LL belong to the child v.rc, and all the points to the left of plane LR belong to the child v.lc. Mathematically, we have

N(v.lc) = {x | x ∈ N(v), d(x, LR) + 2τ > d(x, LL)} (3)
N(v.rc) = {x | x ∈ N(v), d(x, LL) + 2τ > d(x, LR)} (4)

Notice that points that fall in the region between LL and LR are shared by v.lc and v.rc. We call this region the overlapping buffer, and we call τ the overlapping size. For v.lc and v.rc, we can repeat the splitting procedure until the number of points within a node is less than a specific threshold, at which point we stop.

3 Approximate Spill-tree-based Nearest Neighbor Search

It may seem strange that we allow overlapping in sp-trees. The overlapping obviously makes both the construction and MT-DFS less efficient than in regular metric trees, since the points in the overlapping buffer may be searched twice. Nonetheless, the advantage of sp-trees over metric trees becomes clear when we perform the defeatist search, a (1 + ε)-NN search algorithm based on sp-trees.

3.1 Defeatist Search

As we have stated, the MT-DFS algorithm typically spends a large fraction of its time backtracking to prove that a candidate point is the true NN. Based on this observation, a quick revision is to descend the metric tree using the decision boundaries at each level without backtracking, and then output the point x in the first leaf node it visits as the NN of query q. We call this the defeatist search on a metric tree. Since the depth of a metric tree is O(log n), the complexity of defeatist search is O(log n) per query. The problem with this approach is very low accuracy. Consider the case where q is very close to a decision boundary L; then it is almost equally likely that the NN of q is on the same side of L as on the opposite side of L, and the defeatist search can make a mistake with probability close to 1/2.
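A minimal sketch of the overlapping split of Eqs. (3)-(4), written with signed distances to the boundary L, together with a defeatist descent through such a node; the one-level "tree", the point set, and the value τ = 0.6 are toy choices for illustration only:

```python
def spill_split(points, lpv, rpv, tau):
    """Overlapping split: the buffer |s(x)| <= tau around the decision
    boundary L (through the midpoint of [lpv, rpv]) is shared by both children."""
    u = [b - a for a, b in zip(lpv, rpv)]
    n = sum(c * c for c in u) ** 0.5
    u = [c / n for c in u]                          # unit normal of L
    mid = [(a + b) / 2 for a, b in zip(lpv, rpv)]
    s = lambda x: sum((xi - mi) * ui for xi, mi, ui in zip(x, mid, u))
    lc = [x for x in points if s(x) <= tau]         # everything left of LR
    rc = [x for x in points if s(x) >= -tau]        # everything right of LL
    return lc, rc, s

def defeatist(node, q):
    """Descend by the decision boundary at each level, never backtrack."""
    while "pts" not in node:
        node = node["lc"] if node["s"](q) <= 0 else node["rc"]
    return min(node["pts"], key=lambda x: sum((a - b) ** 2 for a, b in zip(x, q)))

# hand-built one-level spill node over 1-D points, tau = 0.6
points = [(-3.0,), (-1.0,), (-0.5,), (0.5,), (1.0,), (3.0,)]
lc, rc, s = spill_split(points, (-3.0,), (3.0,), 0.6)
tree = {"s": s, "lc": {"pts": lc}, "rc": {"pts": rc}}
```

For the query q = (0.4,), which lies close to the boundary on its right side, the descent goes right, yet still finds the true NN (0.5,) because that point sits inside the shared buffer; with τ = 0 it would have been missed.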
In practice, we observe that a non-negligible fraction of the query points are close to one of the decision boundaries. Thus the average accuracy of the defeatist search algorithm is typically unacceptably low, even for approximate NN search. This is precisely where sp-trees can help: the defeatist search on sp-trees has much higher accuracy and remains very fast. We first describe the algorithm. For simplicity, we continue to use the example shown in Figure 2. As before, the decision boundary at node v is the plane L. If a query q is to the left of L, we decide that its nearest neighbor is in v.lc. In this case, we only search points within N(v.lc), i.e., the points to the left of LR. Conversely, if q is to the right of L, we only search node v.rc, i.e. the points to the right of LL. Notice that in either case, points in the overlapping buffer are always searched. By introducing this buffer of size τ, we can greatly reduce the probability of making a wrong decision. To see this, suppose that q is to the left of L; then the only points eliminated are the ones to the right of plane LR, all of which are at least distance τ away from q.

3.2 Hybrid Sp-Tree Search

One problem with spill-trees is that their depth varies considerably depending on the overlapping size τ. If τ = 0, a sp-tree reduces to a metric tree with depth O(log n). On the other hand, if τ ≥ ∥v.rpv − v.lpv∥/2, then N(v.lc) = N(v.rc) = N(v). In other words, both children of node v contain all points of v. In this case, the construction of a sp-tree does not even terminate and the depth of the sp-tree is ∞. To solve this problem, we introduce hybrid sp-trees, which we actually use in practice. First we define a balance threshold ρ < 1, which is usually set to 70%. The construction of a hybrid sp-tree is similar to that of a sp-tree, except for the following. For each node v, we first split the points using the overlapping buffer.
However, if either of its children contains more than a fraction ρ of the total points in v, we undo the overlapping split. Instead, a conventional metric-tree partition (without overlapping) is used, and we mark v as a non-overlapping node. All other nodes are marked as overlapping nodes. In this way, we can ensure that each split reduces the number of points of a node by at least a constant factor, and thus we can maintain the logarithmic depth of the tree. The NN search on a hybrid sp-tree also becomes a hybrid of the MT-DFS search and the defeatist search: we only do defeatist search on overlapping nodes; for non-overlapping nodes, we still backtrack as in MT-DFS. Notice that we can control the hybrid by varying τ. If τ = 0, we have a pure sp-tree with defeatist search, very efficient but not accurate enough; if τ ≥ ∥v.rpv − v.lpv∥/2, then every node is a non-overlapping node (due to the balance threshold mechanism), and we get back the traditional metric tree with MT-DFS, which is perfectly accurate but inefficient. By setting τ somewhere in between, we can achieve a balance of efficiency and accuracy. As a general rule, the greater τ is, the more accurate and the slower the search algorithm becomes.

3.3 Further Efficiency Improvement Using Random Projection

The hybrid sp-tree search algorithm is much more efficient than the traditional MT-DFS algorithm. However, this speed-up becomes less pronounced when the dimension of a dataset becomes high (say, over 30). In some sense, the hybrid sp-tree search algorithm also suffers from the curse of dimensionality, only much less severely than MT-DFS. However, a well-known technique, namely random projection, is readily available to deal with high-dimensional datasets. In particular, the Johnson-Lindenstrauss Lemma [15] states that one can embed a dataset of n points in a subspace of dimension O(log n) with little distortion of the pairwise distances.
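A dense Gaussian projection is one simple way to realize such an embedding; the sketch below is illustrative (the scaling constant and the use of a full Gaussian matrix are standard choices, not details taken from the paper):

```python
import math
import random

def random_projection(points, d_low, rng):
    """Dense Gaussian random projection, a simple Johnson-Lindenstrauss-style
    embedding; the 1/sqrt(d_low) scaling preserves squared distances
    in expectation."""
    D = len(points[0])
    R = [[rng.gauss(0.0, 1.0) / math.sqrt(d_low) for _ in range(D)]
         for _ in range(d_low)]
    return [[sum(r[j] * p[j] for j in range(D)) for r in R] for p in points]

# Running L independent projection rounds drives a per-round failure
# probability delta down to delta**L, as the multi-round search uses.
```

Each projected distance is a random quantity concentrated around the true distance, which is why averaging or repeating over several independent projections controls the failure probability.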
Furthermore, the embedding is extremely simple: one simply picks a random subspace S and projects all points onto S. In our (1 + ε)-NN search algorithm, we use random projection as a pre-processing step: project the datapoints to a subspace of lower dimension, and then do the hybrid sp-tree search. Both the construction of the sp-tree and the search are conducted in the low-dimensional subspace. Naturally, by doing random projection, we will lose some accuracy. But we can easily fix this problem by doing multiple rounds of random projections and doing one hybrid sp-tree search for each round. Assuming the failure probability of each round is δ, by doing L rounds we drive this probability down to δ^L. The core idea of the hash function used in [9] can be viewed as a variant of random projection.² Random projection can also be used as a pre-processing step in conjunction with other techniques such as conventional MT-DFS. We conducted a series of experiments which show that a modest speed-up is obtained by using random projection with MT-DFS (about 4-fold), but a greater (up to 700-fold) speed-up when used with sp-tree search. Due to limited space these results will appear in the full version of this paper [19].

4 Experimental Results

We report our experimental results based on hybrid sp-tree search on a variety of real-world datasets, with the number of datapoints ranging from 20,000 to 275,465, and dimensions from 60 to 3,838. The first two datasets are the same as the ones used in [9], where it is demonstrated that LSH can have a significant speedup over SR-trees.

Aerial: Texture feature data containing 275,465 feature vectors of 60 dimensions, representing texture information of large aerial photographs [21, 20].

Corel_hist: 20,000 histograms (64-dimensional) of color thumbnail-sized images taken from the COREL STOCK PHOTO library. However, of the 64 dimensions, only 44 contain non-zero entries. See [27] for more discussion.
We were unable to obtain the original dataset used in [9] from the authors, and so we reproduced our own version following their description. We expect the two datasets to be almost identical.

Corel_uci: 68,040 histograms (64-dimensional) of color images from the COREL library. This dataset differs significantly from Corel_hist and is available from the UCI repository [1].

Disk_trace: 40,000 content traces of disk-write operations, each being a 1-kilobyte block (therefore having dimension 1,024). The traces were generated from a desktop computer running SuSE Linux during daily operation.

Galaxy: Spectra of 40,000 galaxies from the Sloan Digital Sky Survey, with 4,000 dimensions.

Besides the sp-tree search algorithm, we also ran a number of other algorithms:

LSH: The original LSH implementation used in [9] is not public and we were unable to obtain it from the authors, so we used our own efficient implementation. Experiments (described later) show that ours is comparable to the one in [9].

Naive: The naive linear-scan algorithm.

SR-tree: We use the implementation of SR-trees by Katayama and Satoh [16].

Metric-tree: This is a highly optimized k-NN search based on metric trees [29, 22]; code is publicly available [2].

²The Johnson-Lindenstrauss Lemma only works for the L2 norm. The "random sampling" done in the LSH of [9] roughly corresponds to the L1 version of the Johnson-Lindenstrauss Lemma.

The experiments were run on a 1.60 GHz dual-processor AMD Opteron machine with 8 GB of RAM. We perform 10-fold cross-validation on all the datasets. We measure the CPU time and accuracy of each algorithm. Since all the experiments are memory-based (all the data fit into memory completely), there is no disk access during our experiments. To measure accuracy, we use the effective distance error [3, 9], which is defined as E = (1/|Q|) ∑_{q∈Q} (d_alg/d* − 1), where d_alg is the distance from a query q to the NN found by the algorithm, and d* is the distance from q to the true NN.
The sum is taken over all queries. For the k-NN case (k > 1), we measure separately the distance ratios between the closest point found and the nearest neighbor, the 2nd closest one and the 2nd nearest neighbor, and so on, and then take the average. Obviously, for all exact k-NN algorithms E = 0, and for all approximate algorithms E ≥ 0.

4.1 The Experiments

First, as a benchmark, we ran the Naive, SR-tree, and Metric-tree algorithms. All of them find the exact NN. The results are summarized in Table 1.

Table 1: the CPU time (s) of exact SR-tree, Metric-tree, and Naive search

Algorithm     Aerial   Corel_hist         Corel_uci  Disk_trace  Galaxy
                       (k=1)     (k=10)
Naive          43620     462       465       5460      27050      46760
SR-tree        23450     184       330       3230        n/a        n/a
Metric-tree     3650    58.4      91.2        791      19860       6600

All the datasets are rather large, and the metric tree is consistently the fastest. On the other hand, the SR-tree implementation has only a limited speedup over the Naive algorithm, and it fails to run on Disk_trace and Galaxy, both of which have very high dimensions. Then, for approximate NN search, we compare the sp-tree with three other algorithms: LSH, traditional Metric-tree, and SR-tree. For each algorithm, we measure the CPU time needed for the error E to reach 1%, 2%, 5%, 10% and 20%, respectively. Since Metric-tree and SR-tree are both designed for exact NN search, we also run them on randomly chosen subsets of the whole dataset to produce approximate answers. We show the comparison results of all algorithms for the Aerial and Corel_hist datasets, both for k = 1, in Figure 3. We also examine the speed-up of the sp-tree over the other algorithms. In particular, the CPU time and the speedup of sp-tree search over LSH are summarized in Table 2.
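The effective distance error defined above is straightforward to compute from per-query distances; a sketch with hypothetical inputs (the function name is illustrative):

```python
def effective_distance_error(d_alg, d_true):
    """E = (1/|Q|) * sum_q (d_alg(q)/d*(q) - 1); E = 0 for an exact search."""
    return sum(a / t - 1.0 for a, t in zip(d_alg, d_true)) / len(d_true)

# exact answers give E = 0; a 10% overshoot on one of two queries gives E = 0.05
exact = effective_distance_error([1.0, 2.0], [1.0, 2.0])
approx = effective_distance_error([1.1, 2.0], [1.0, 2.0])
```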
Figure 3: CPU time (s) vs. Error (%) for selected datasets: Aerial (D=60, n=275,476, k=1) and Corel_hist (D=64, n=20,000, k=1), comparing Sp-tree, LSH, Metric-tree, and SR-tree.

Table 2: the CPU time (s) of Sp-tree and its speedup (in parentheses) over LSH

Error (%)   Aerial      Corel_hist              Corel_uci    Disk_trace   Galaxy
                        (k=1)        (k=10)
20          33.5 (31)   1.67 (8.3)   3.27 (6.3)   8.7 (8.0)    13.2 (5.3)   24.9 (5.5)
10          73.2 (27)   2.79 (9.8)   5.83 (7.0)   19.1 (4.9)   43.1 (2.9)   44.2 (7.8)
5           138 (31)    4.5 (11)     9.58 (6.8)   33.1 (4.8)   123 (3.7)    76.9 (11)
2           286 (26)    8 (9.5)      20.6 (4.2)   61.9 (4.4)   502 (2.5)    110 (14)
1           426 (23)    13.5 (6.4)   27.9 (4.1)   105 (4.1)    1590 (3.2)   170 (12)

Since we used our own implementation of LSH, we first need to verify that it has performance comparable to the one used in [9]. We do so by examining the speedup of both implementations over SR-tree on the Aerial and Corel_hist datasets, with both k = 1 and k = 10 for the latter.³ For the Aerial dataset, in the case where E varies from 10% to 20%, the speedup of the LSH in [9] over SR-tree varies from 4 to 6, and for our implementation the speedup varies from 4.5 to 5.4. For Corel_hist, when E ranges from 2% to 20%, in the case k = 1, the speedup of the LSH in [9] ranges from 2 to 7, ours from 2 to 13. In the case k = 10, the speedup in [9] is from 3 to 12, and ours from 4 to 16. So overall, our implementation is comparable to, and often outperforms, the one in [9]. Perhaps a little surprisingly, the Metric-tree search algorithm (MT-DFS) performs very well on the Aerial and Corel_hist datasets. In both cases, when E is small (1%), MT-DFS outperforms LSH by a factor of up to 2.7, even though it aims at finding the exact NN, while LSH only finds an approximate NN.
Furthermore, the approximate MT-DFS algorithm (conventional metric-tree-based search using a random subset of the training data) consistently outperforms LSH across the entire error spectrum on Aerial. We believe this is because in both datasets the intrinsic dimensions are quite low, and thus the metric tree does not suffer from the curse of dimensionality. For the rest of the datasets, namely Corel_uci, Disk_trace, and Galaxy, the metric tree becomes rather inefficient because of the curse of dimensionality, and LSH becomes competitive. But in all cases, sp-tree search remains the fastest among all algorithms, frequently achieving 2 or 3 orders of magnitude of speed-up. Space does not permit a lengthy conclusion, but the summary of this paper is that there is empirical evidence that, with appropriate redesign of the data structures and search algorithms, spatial data structures remain a useful tool in the realm of approximate k-NN search.

5 Related Work

The idea of defeatist search, i.e., non-backtracking search, has been explored by various researchers in different contexts. See, for example, Goldstein and Ramakrishnan [10], Yianilos [30], and Indyk [14]. The latter also proposed a data structure similar to the spill-tree, where the decision boundary needs to be aligned with a coordinate and there is no hybrid version. Indyk proved how this data structure can be used to solve approximate NN in the L∞ norm.

References

[1] http://kdd.ics.uci.edu/databases/CorelFeatures/CorelFeatures.data.html.
[2] http://www.autonlab.org/autonweb/showsoftware/154/.
[3] S. Arya, D. Mount, N. Netanyahu, R. Silverman, and A. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. Journal of the ACM, 45(6):891–923, 1998.
[4] K. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. When is "nearest neighbor" meaningful? Lecture Notes in Computer Science, 1540:217–235, 1999.
[5] P. Ciaccia, M. Patella, and P. Zezula.
M-tree: an efficient access method for similarity search in metric spaces. In Proceedings of the 23rd VLDB International Conference, September 1997.

³The comparison in [9] is on disk access while we compare CPU time. So, strictly speaking, these results are not comparable. Nonetheless we expect them to be more or less consistent.

[6] K. Clarkson. Nearest neighbor searching in metric spaces: experimental results for sb(S). 2002.
[7] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. John Wiley & Sons, 1973.
[8] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209–226, September 1977.
[9] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In Proc. 25th VLDB Conference, 1999.
[10] J. Goldstein and R. Ramakrishnan. Contrast plots and P-Sphere trees: space vs. time in nearest neighbor searches. In Proc. 26th VLDB Conference, 2000.
[11] A. Guttman. R-trees: a dynamic index structure for spatial searching. In Proceedings of the Third ACM SIGACT-SIGMOD Symposium on Principles of Database Systems. Association for Computing Machinery, April 1984.
[12] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In STOC, pages 604–613, 1998.
[13] P. Indyk. High Dimensional Computational Geometry. Ph.D. thesis, 2000.
[14] P. Indyk. On approximate nearest neighbors under the l∞ norm. J. Comput. Syst. Sci., 63(4), 2001.
[15] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math., 26:189–206, 1984.
[16] N. Katayama and S. Satoh. The SR-tree: an index structure for high-dimensional nearest neighbor queries. Pages 369–380, 1997.
[17] J. Kleinberg. Two algorithms for nearest-neighbor search in high dimensions. In Proceedings of the Twenty-ninth Annual ACM Symposium on the Theory of Computing, pages 599–608, 1997.
[18] E.
Kushilevitz, R. Ostrovsky, and Y. Rabani. Efficient search for approximate nearest neighbors in high dimensional spaces. In Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, 1998.
[19] T. Liu, A. W. Moore, A. Gray, and K. Yang. An investigation of practical approximate nearest neighbor algorithms (full version). Manuscript in preparation.
[20] B. S. Manjunath. Airphoto dataset, http://vivaldi.ece.ucsb.edu/Manjunath/research.htm.
[21] B. S. Manjunath and W. Y. Ma. Texture features for browsing and retrieval of large image data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8):837–842, 1996.
[22] A. W. Moore. The anchors hierarchy: using the triangle inequality to survive high-dimensional data. In Twelfth Conference on Uncertainty in Artificial Intelligence. AAAI Press, 2000.
[23] G. Mori, S. Belongie, and J. Malik. Shape contexts enable efficient retrieval of similar shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2001.
[24] S. M. Omohundro. Efficient algorithms with neural network behaviour. Journal of Complex Systems, 1(2):273–347, 1987.
[25] S. M. Omohundro. Bumptrees for efficient function, constraint, and classification learning. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3. Morgan Kaufmann, 1991.
[26] F. P. Preparata and M. Shamos. Computational Geometry. Springer-Verlag, 1985.
[27] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99–121, 2000.
[28] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter sensitive hashing. In Proceedings of the International Conference on Computer Vision, 2003.
[29] J. K. Uhlmann. Satisfying general proximity/similarity queries with metric trees. Information Processing Letters, 40:175–179, 1991.
[30] P. Yianilos.
Excluded middle vantage point forests for nearest neighbor search. In DIMACS Implementation Challenge, 1999.
Linear Multilayer Independent Component Analysis for Large Natural Scenes

Yoshitatsu Matsuda∗ and Kazunori Yamaguchi, Kazunori Yamaguchi Laboratory, Department of General Systems Studies, Graduate School of Arts and Sciences, The University of Tokyo, Japan 153-8902. matsuda@graco.c.u-tokyo.ac.jp, yamaguch@graco.c.u-tokyo.ac.jp

Abstract

In this paper, linear multilayer ICA (LMICA) is proposed for extracting independent components from quite high-dimensional observed signals such as large-size natural scenes. There are two phases in each layer of LMICA. One is the mapping phase, where a one-dimensional mapping is formed by a stochastic gradient algorithm which incrementally brings more highly correlated (non-independent) signals nearer to each other. The other is the local-ICA phase, where each neighboring (namely, highly correlated) pair of signals in the mapping is separated by the MaxKurt algorithm. Because LMICA separates only the highly correlated pairs instead of all pairs, it can extract independent components quite efficiently from appropriate observed signals. In addition, it is proved that LMICA always converges. Some numerical experiments verify that LMICA is quite efficient and effective in large-size natural image processing.

1 Introduction

Independent component analysis (ICA) is a recently developed method in the fields of signal processing and artificial neural networks, and has been shown to be quite useful for the blind separation problem [1][2][3][4]. Linear ICA is formalized as follows. Let s be the N-dimensional source signals and A the N × N mixing matrix. Then, the observed signals x are defined as

x = As. (1)

The purpose is to find A (or its inverse W) when only the observed (mixed) signals are given. In other words, ICA blindly extracts the source signals from M samples of the observed signals as follows:

Ŝ = WX, (2)

∗http://www.graco.c.u-tokyo.ac.jp/˜matsuda

where X is an N × M matrix of the observed signals and Ŝ is the estimate of the source signals.
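To make Eqs. (1) and (2) concrete, here is a toy 2 × 2 instance with a known mixing matrix (ICA's actual task, of course, is to estimate W blindly; the numbers here are purely illustrative):

```python
import random

# Toy instance of the model x = A s and the estimate S_hat = W X with W = A^{-1}.
rng = random.Random(0)
M = 5
S = [[rng.uniform(-1, 1) for _ in range(M)] for _ in range(2)]   # source signals s
A = [[2.0, 1.0], [1.0, 1.0]]                                     # mixing matrix
X = [[A[i][0] * S[0][k] + A[i][1] * S[1][k] for k in range(M)] for i in range(2)]

# With A known, W is just its inverse; ICA has to recover W without seeing A or S.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
W = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
S_hat = [[W[i][0] * X[0][k] + W[i][1] * X[1][k] for k in range(M)] for i in range(2)]
```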
This is a typical ill-conditioned problem, but ICA can solve it by assuming that the source signals are generated according to independent and non-gaussian probability distributions. In general, the ICA algorithms find W by maximizing a criterion (called the contrast function) such as the higher-order statistics (e.g. the kurtosis) of every component of Ŝ. That is, the ICA algorithms can be regarded as an optimization method for such criteria. Some efficient algorithms for this optimization problem have been proposed, for example, the fast ICA algorithm [5][6], the relative gradient algorithm [4], and JADE [7][8]. Now, suppose that quite high-dimensional observed signals (namely, N is quite large) are given, such as large-size natural scenes. In this case, even the efficient algorithms are not very useful because they have to find all the N² components of W. Recently, we proposed a new algorithm for this problem, which can find global independent components by integrating local ICA modules. Developing this approach further, in this paper we propose a new efficient ICA algorithm named "the linear multilayer ICA algorithm (LMICA)." It will be shown that LMICA is much more efficient than other standard ICA algorithms in the processing of natural scenes. This paper is an extension of our previous works [9][10]. This paper is organized as follows. In Section 2, the algorithm is described. In Section 3, numerical experiments verify that LMICA is quite efficient in image processing and can extract some interesting edge detectors from large natural scenes. Lastly, the paper is concluded in Section 4.

2 Algorithm

2.1 basic idea

LMICA can extract all the independent components approximately by repetition of the following two phases. One is the mapping phase, which brings more highly correlated signals nearer. The other is the local-ICA phase, where each neighboring pair of signals in the mapping is separated by the MaxKurt algorithm [8].
The mechanism of LMICA is illustrated in Fig. 1. Note that this illustration holds just in the ideal case where the mixing matrix A is given according to such a hierarchical model; it does not hold for an arbitrary A. It will be shown in Section 3 that this hierarchical model is quite effective at least in natural scenes.

Figure 1: The illustration of LMICA (the ideal case): Each number from 1 to 8 means a source signal. In the first local-ICA phase, each neighboring pair of the completely mixed signals (denoted "1-8") is partially separated into "1-4" and "5-8." Next, the mapping phase rearranges the partially separated signals so that more highly correlated signals are nearer. In consequence, the four "1-4" signals (similarly, the "5-8" ones) are brought nearer. Then, the local-ICA phase partially separates the pairs of neighboring signals into "1-2," "3-4," "5-6," and "7-8." By repetition of the two phases, LMICA can extract all the sources quite efficiently.

2.2 mapping phase

In the mapping phase, the given signals X are arranged in a one-dimensional array so that pairs (i, j) with higher ∑_k x²_ik x²_jk are placed nearer. Letting Y = (y_i) be the coordinate of the i-th signal x_ik, the following objective function µ is defined:

µ(Y) = ∑_{i,j} ∑_k x²_ik x²_jk (y_i − y_j)². (3)

The optimal mapping is found by minimizing µ with respect to Y under the constraints ∑_i y_i = 0 and ∑_i y²_i = 1. It is well known that such optimization problems can be solved efficiently by a stochastic gradient algorithm [11][12]. In this case, the stochastic gradient algorithm is given as follows (see [10] for the details of the derivation of this algorithm):

y_i(T + 1) := y_i(T) − λ_T (z_i y_i ζ − z_i η), (4)

where λ_T is the step size at the T-th time step, z_i = x²_ik (k is randomly selected from {1, ..., M} at each time step),

ζ = ∑_i z_i, (5)

and

η = ∑_i z_i y_i. (6)

By calculating ζ and η before the update for each i, each update requires just O(N) computation. Eq.
(4) is guaranteed to converge to a local minimum of the objective function µ(Y) if λ_T decreases sufficiently slowly (lim_{T→∞} λ_T = 0 and ∑_T λ_T = ∞). Because Y in the above method is continuous, each continuous y_i is replaced by its ranking in Y at the end of the mapping phase. That is, y_i := 1 for the largest y_i, y_j := N for the smallest one, and so on. The corresponding permutation σ is given by σ(i) = y_i. The total procedure of the mapping phase for given X is as follows:

mapping phase
1. x_ik := x_ik − x̄_i for each i, k, where x̄_i is the mean (∑_k x_ik)/M.
2. y_i = i and σ(i) = i for each i.
3. Until convergence, repeat the following steps:
(a) Select k randomly from {1, ..., M}, and let z_i = x²_ik for each i.
(b) Update each y_i by Eq. (4).
(c) Normalize Y to satisfy ∑_i y_i = 0 and ∑_i y²_i = 1.
4. Discretize y_i.
5. Update X by x_{σ(i)k} := x_ik for each i and k.

2.3 local-ICA phase

In the local-ICA phase, the following contrast function φ(X) (the sum of kurtoses) is used (the MaxKurt algorithm in [8]):

φ(X) = −∑_{i,k} x⁴_ik, (7)

and φ(X) is minimized by "rotating" the neighboring pairs of signals (namely, under an orthogonal transformation). For each neighboring pair (i, i+1), a rotation matrix R_i(θ) is given as the N × N block-diagonal matrix

R_i(θ) = diag(I_{i−1}, [[cos θ, sin θ], [−sin θ, cos θ]], I_{N−i−2}), (8)

where I_n is the n × n identity matrix. Then the optimal angle θ̂ is given as

θ̂ = argmin_θ φ(X′(θ)), (9)

where X′(θ) = R_i(θ)X. After some tedious transformation of the equations (see [8]), it is shown that θ̂ is determined analytically by the following equations:

sin 4θ̂ = α_ij / √(α²_ij + β²_ij),  cos 4θ̂ = β_ij / √(α²_ij + β²_ij), (10)

where

α_ij = ∑_k (x³_ik x_jk − x_ik x³_jk),  β_ij = (∑_k (x⁴_ik + x⁴_jk − 6 x²_ik x²_jk)) / 4, (11)

and j = i + 1. Now, the procedure of the local-ICA phase for given X is as follows:

local-ICA phase
1. Let W_local = I_N and A_local = I_N.
2. For each i ∈ {1, ..., N − 1},
(a) Find the optimal angle θ̂ by Eq. (10).
(b) $X := R_i(\hat{\theta}) X$, $W_{local} := R_i W_{local}$, and $A_{local} := A_{local} R_i^t$.

2.4 complete algorithm

The complete LMICA algorithm for any given observed signals X is obtained by repeating the mapping phase and the local-ICA phase alternately. Here, $P_\sigma$ is the permutation matrix corresponding to $\sigma$.

linear multilayer ICA algorithm
1. Initial settings: Let X be the given observed signal matrix, and let W and A be $I_N$.
2. Repetition: Carry out the following two phases alternately L times.
(a) Mapping phase: Find the optimal permutation matrix $P_\sigma$ and the optimally arranged signals X by the mapping phase. Then $W := P_\sigma W$ and $A := A P_\sigma^t$.
(b) Local-ICA phase: Find the optimal matrices $W_{local}$, $A_{local}$, and X by the local-ICA phase. Then $W := W_{local} W$ and $A := A A_{local}$.

2.5 some remarks

Relation to the MaxKurt algorithm. Eq. (10) is exactly the same as in the MaxKurt algorithm [8]. The crucial difference between LMICA and MaxKurt is that LMICA optimizes only the neighboring pairs instead of all $\frac{N(N-1)}{2}$ pairs as in MaxKurt. In LMICA, the pairs with higher "costs" (higher $\sum_k x_{ik}^2 x_{jk}^2$) are brought nearer in the mapping phase, so the independent components can be extracted effectively by optimizing only the neighboring pairs.

Contrast function. For consistency between this paper and our previous work [10], the following contrast function $\phi$ is used in Section 3 instead of Eq. (7):

$$\phi(X) = \sum_{i \neq j} \sum_k x_{ik}^2 x_{jk}^2. \quad (12)$$

The minimization of Eq. (12) is equivalent to that of Eq. (7) under an orthogonal transformation.

Pre-whitening. Although LMICA (which is based on MaxKurt) presupposes that X is pre-whitened, the algorithm in Section 2.4 is applicable to any raw X without pre-whitening. Because no pre-whitening method suitable for LMICA has been found yet, raw images of natural scenes are given as X in the numerical experiments of Section 3. In this non-whitened case, the mixing matrix A is restricted to be orthogonal and the influence of the second-order statistics is not removed.
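Putting the two phases together, the following is a compact sketch of one way to realise the LMICA loop. It is a hedged illustration, not the paper's implementation: the function and parameter names (`lmica`, `n_layers`) are ours, and the stochastic gradient mapping of Eq. (4) is replaced here by an exact spectral solution of Eq. (3) (the constrained minimiser of $\sum_{i,j} C_{ij}(y_i-y_j)^2$ is the second-smallest eigenvector of the graph Laplacian of the cost matrix C), which costs O(N^2 M) per layer instead of O(NM) per sweep.

```python
import numpy as np

def lmica(X, n_layers=10):
    """Sketch of LMICA: alternate a mapping phase and a local-ICA phase.

    X : (N, M) array of observed signals. Returns (W, X_sep) with X_sep = W @ X_centred.
    The mapping phase here is an exact spectral stand-in for the paper's
    stochastic gradient update (an assumption of this sketch, not the paper's method).
    """
    X = X - X.mean(axis=1, keepdims=True)        # step 1 of the mapping phase
    N, M = X.shape
    W = np.eye(N)
    for _ in range(n_layers):
        # -- mapping phase: order signals so high-cost pairs become neighbours --
        E = X ** 2
        C = E @ E.T                              # C[i, j] = sum_k x_ik^2 x_jk^2
        L = np.diag(C.sum(axis=1)) - C           # graph Laplacian of the costs
        y = np.linalg.eigh(L)[1][:, 1]           # minimiser of Eq. (3) under the constraints
        perm = np.argsort(y)                     # discretise y to a permutation
        X, W = X[perm], W[perm]
        # -- local-ICA phase: MaxKurt rotation of each neighbouring pair, Eqs. (10)-(11) --
        for i in range(N - 1):
            a = np.sum(X[i] ** 3 * X[i + 1] - X[i] * X[i + 1] ** 3)
            b = np.sum(X[i] ** 4 + X[i + 1] ** 4 - 6.0 * X[i] ** 2 * X[i + 1] ** 2) / 4.0
            t = np.arctan2(a, b) / 4.0           # angle with sin 4t ∝ alpha, cos 4t ∝ beta
            R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
            X[i:i + 2] = R @ X[i:i + 2]
            W[i:i + 2] = R @ W[i:i + 2]
    return W, X
```

Because every layer applies only permutations and plane rotations, the accumulated W stays orthogonal, matching the restriction to orthogonal A noted in the pre-whitening remark.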
Nevertheless, it will be shown in Section 3 that the higher-order statistics of X give rise to some interesting results.

3 Results

It is well known that various local edge detectors can be extracted from natural scenes by standard ICA algorithms [13][14]. Here, LMICA was applied to the same problem. 30000 samples of natural scenes of 12 × 12 pixels were given as the observed signals X; that is, N and M were 144 and 30000. The original natural scenes were downloaded from http://www.cis.hut.fi/projects/ica/data/images/. The number of layers L was set to 720, where one layer means one pair of the mapping and local-ICA phases. For comparison, experiments without the mapping phase were carried out, where the mapping Y was randomly generated. In addition, the standard MaxKurt algorithm [8] was run with 10 iterations. The contrast function φ (Eq. (12)) was calculated at each layer and averaged over 10 independently generated Xs. Fig. 2-(a) shows the decreasing curves of φ for normal LMICA and for the variant without the mapping phase. The cross points show the result at each iteration of MaxKurt. Because one iteration of MaxKurt is equivalent to 72 layers of LMICA with respect to the number of pairwise optimizations, a scaling (×72) is applied. Surprisingly, LMICA nearly converged to the optimal point within just 10 layers. The number of parameters within 10 layers is 143 × 10, which is much smaller than the degrees of freedom of A ($\frac{144 \times 143}{2}$). This suggests that LMICA provides quite a suitable model for natural scenes. The calculation times, together with the values of φ, are shown in Table 1. They show that the time costs of the mapping phase are not much higher than those of the local-ICA phase. The fact that 10 layers of LMICA required much less time (22 sec.) than one iteration of MaxKurt (94 sec.) while optimizing φ approximately (4.91) verifies the efficiency of LMICA. Note that an iteration of MaxKurt cannot be stopped halfway. Fig.
3 shows 5 × 5 representative edge detectors at several layers of LMICA. At the 20th layer (Fig. 3-(a)), rough and local edge detectors can be recognized, though they are a little unclear. As the layers proceed, the edge detectors become clearer and more global (see Figs. 3-(b) and 3-(c)). It is interesting that the ICA-like local edges at the early stages (where the higher-order statistics are dominant) are transformed into PCA-like global edges (where the second-order statistics are dominant) at the later stages (see [13]). For comparison, Fig. 3-(d) shows the result after 10 iterations of MaxKurt; it is similar to Fig. 3-(c), as expected. In addition, we used large-size natural scenes: 100000 samples of natural scenes of 64 × 64 pixels were given as X. MaxKurt and other well-known ICA algorithms are not applicable to such a large-scale problem because they require huge computation. Fig. 2-(b) shows the decreasing curve of φ for the large-size natural scenes. LMICA was run for 1000 layers, which consumed about 69 hours on an Intel 2.8GHz CPU. The curve shows that φ decreased rapidly in the first 20 layers and converged around the 500th layer. This verifies that LMICA is quite efficient in the analysis of large-size natural scenes. Fig. 4 shows some edge detectors generated at the 1000th layer. It is interesting that some "compound" detectors such as a "cross" were generated in addition to simple "long-edge" detectors. In a famous previous work [13] which applied ICA and PCA to small-size natural scenes, symmetric global edge detectors similar to our "compound" ones could be generated by PCA, which captures only the second-order statistics. On the other hand, asymmetric local edge detectors similar to our simple "long-edge" ones could not be generated by PCA and could only be extracted by ICA utilizing the higher-order statistics. In comparison, our LMICA could extract various local and global detectors simultaneously from large-size natural scenes.
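The contrast φ of Eq. (12) tracked in these experiments can be computed in O(N²M) from the squared signals; a minimal sketch, assuming the pair sum runs over i ≠ j (with the i = j terms included, the quantity is invariant under orthogonal transformations and could not serve as a contrast):

```python
import numpy as np

def contrast(X):
    """phi(X) = sum over pairs i != j of sum_k x_ik^2 x_jk^2 (Eq. (12) of the paper,
    under the stated i != j assumption)."""
    E = X ** 2
    G = E @ E.T                   # G[i, j] = sum_k x_ik^2 x_jk^2
    return G.sum() - np.trace(G)  # drop the i = j terms
```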
Besides, it is expected from the results for small-size images (see Fig. 3) that various other detectors are generated at each layer. In summary, these results show that LMICA can efficiently extract many useful and diverse detectors from large-size natural scenes. It is also interesting that there was a plateau in the neighborhood of the 10th layer. This suggests that large-size natural scenes may be generated by two different generative models, but a closer inspection is beyond the scope of this paper.

4 Conclusion

In this paper, we proposed the linear multilayer ICA algorithm (LMICA). We carried out numerical experiments on natural scenes, which verified that LMICA can find approximations of the independent components quite efficiently and is applicable to large problems. We are now analyzing the results of LMICA on large-size natural scenes of 64 × 64 pixels, and we are planning to apply this algorithm to quite large-scale images such as those of 256 × 256 pixels. We are also planning to utilize LMICA in the data mining of quite high-dimensional data spaces, such as text mining. In addition, we are trying to find a pre-whitening method suitable for LMICA; some normalization techniques in the local-ICA phase may be promising.

Table 1: Calculation times together with the values of the contrast function φ (Eq. (12)): averages over 10 runs at the 10th layer (approximation) and the 720th layer (convergence) for LMICA (the normal one and the one without the mapping phase), together with those of 10 iterations of MaxKurt (approximately corresponding to L = 10 × 72 = 720). All times were measured on an Intel 2.8GHz CPU.

                   LMICA              LMICA without mapping   MaxKurt (10 iterations)
    10th layer     22 sec. (4.91)     9.3 sec. (17.6)         -
    720th layer    1600 sec. (4.57)   670 sec. (4.57)         940 sec. (4.57)

References

[1] C. Jutten and J. Herault. Blind separation of sources (part I): An adaptive algorithm based on neuromimetic architecture.
Signal Processing, 24(1):1–10, July 1991.
[2] P. Comon. Independent component analysis - a new concept? Signal Processing, 36:287–314, 1994.
[3] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7:1129–1159, 1995.
[4] J.-F. Cardoso and B. Laheld. Equivariant adaptive source separation. IEEE Transactions on Signal Processing, 44(12):3017–3030, December 1996.
[5] A. Hyvärinen and E. Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483–1492, 1997.
[6] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[7] J.-F. Cardoso and A. Souloumiac. Blind beamforming for non-Gaussian signals. IEE Proceedings-F, 140(6):362–370, December 1993.
[8] J.-F. Cardoso. High-order contrasts for independent component analysis. Neural Computation, 11(1):157–192, January 1999.
[9] Y. Matsuda and K. Yamaguchi. Linear multilayer ICA algorithm integrating small local modules. In Proceedings of ICA2003, pages 403–408, Nara, Japan, 2003.
[10] Y. Matsuda and K. Yamaguchi. Linear multilayer independent component analysis using stochastic gradient algorithm. In Independent Component Analysis and Blind Source Separation - ICA2004, volume 3195 of LNCS, pages 303–310, Granada, Spain, September 2004. Springer-Verlag.
[11] Y. Matsuda and K. Yamaguchi. Global mapping analysis: stochastic approximation for multidimensional scaling. International Journal of Neural Systems, 11(5):419–426, 2001.
[12] Y. Matsuda and K. Yamaguchi. An efficient MDS-based topographic mapping algorithm. Neurocomputing, 2005. In press.
[13] A. J. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327–3338, December 1997.
[14] J. H. van Hateren and A. van der Schaaf.
Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London B, 265:359–366, 1998.

Figure 2: Decreasing curves of the contrast function φ along the number of layers (in log scale). (a) For small-size natural scenes of 12 × 12 pixels: the normal and dotted curves show the decrease of φ by LMICA and by the variant without the mapping phase (random mapping), respectively; the cross points show the results of MaxKurt, where each iteration of MaxKurt approximately corresponds to 72 layers with respect to the number of pairwise optimizations. (b) For large-size natural scenes of 64 × 64 pixels: the curve displays the decrease of φ by LMICA over 1000 layers.

Figure 3: Representative edge detectors from natural scenes of 12 × 12 pixels: (a) basis vectors generated by LMICA at the 20th layer; (b) at the 100th layer; (c) at the 720th layer; (d) after 10 iterations of the MaxKurt algorithm.

Figure 4: Representative edge detectors from natural scenes of 64 × 64 pixels.
Identifying protein-protein interaction sites on a genome-wide scale Haidong Wang∗ Eran Segal≀ Asa Ben-Hur† Daphne Koller∗ Douglas L. Brutlag‡ ∗Computer Science Department, Stanford University, CA 94305 {haidong, koller}@cs.stanford.edu ≀Center for Studies in Physics and Biology, Rockefeller University, NY 10021 eran@cs.stanford.edu †Department of Genome Sciences, University of Washington, WA 98195 asa@gs.washington.edu ‡Department of Biochemistry, Stanford University, CA 94305 brutlag@stanford.edu

Abstract

Protein interactions typically arise from a physical interaction of one or more small sites on the surface of the two proteins. Identifying these sites is very important for drug and protein design. In this paper, we propose a computational method based on a probabilistic relational model that attempts to address this task using high-throughput protein interaction data and a set of short sequence motifs. We learn the model using the EM algorithm, with a branch-and-bound algorithm as approximate inference for the E-step. Our method searches for motifs whose presence in a pair of interacting proteins can explain their observed interaction. It also tries to determine which motif pairs have high affinity, and can therefore lead to an interaction. We show that our method is more accurate than others at predicting new protein-protein interactions. More importantly, by examining solved structures of protein complexes, we find that 2/3 of the predicted active motifs correspond to actual interaction sites.

1 Introduction

Many cellular functions are carried out through physical interactions between proteins. Discovering the protein interaction map can therefore help us better understand the workings of the cell. Indeed, there has been much work recently on developing high-throughput methods to produce a more complete map of protein-protein interactions [1, 2, 3].
Interactions between two proteins arise from physical interactions between small regions on the surface of the proteins [4] (see Fig. 2(b)). Finding interaction sites is an important task, which is of particular relevance to drug design. There is currently no high-throughput experimental method to achieve this goal, so computational methods are required. Existing methods either require solving a protein's 3D structure (e.g., [5]), and are therefore computationally very costly and not applicable on a genome-wide scale, or use known interaction sites as training data (e.g., [6]), which are relatively scarce and hence give poor coverage. Other work focuses on refining the highly noisy high-throughput interaction maps [7, 8, 9], or on assessing the confidence levels of the observed interactions [10]. In this paper, we propose a computational method for predicting protein interactions and the sites at which the interactions take place, which uses as input only high-throughput protein-protein interaction data and the protein sequences. In particular, our method assumes no knowledge of the 3D protein structure, or of the sites at which binding occurs.

Figure 1: (a) Simple illustration of our assumptions for protein-protein interactions. The small elements denote motif occurrences on proteins, with red denoting active and gray denoting inactive motifs. (b) A fragment of our probabilistic model, for the proteins P1, P2, P5. We use yellow to denote an assignment of the value true, and black to denote the value false; full circles denote an assignment observed in the data, and patterned circles an assignment hypothesized by our algorithm. The dependencies involving inactive motif pairs were removed from the graph because they do not affect the rest of the model.
Our approach is based on the assumption that interaction sites can be described using a limited repertoire of conserved sequence motifs [11]. This is a reasonable assumption since interaction sites are significantly more conserved than the rest of the protein surface [12]. Given a protein interaction map, our method tries to explain the observed interactions by identifying a set of sites of motif occurrence on every pair of interacting proteins through which the interaction is mediated. To understand the intuition behind our approach, consider the example of Fig. 1(a). Here, the interaction pattern of the protein P1 can best be explained using the motif pair a, b, where a appears in P1 and b in the proteins P2, P3, P4 but not in P5. By contrast, the motif pair d, b is not as good an explanation, because d also appears in P5, which has a different interaction pattern. In general, our method aims to identify motif pairs that have high affinity, potentially leading to interaction between protein pairs that contain them. However, a sequence motif might be used for a different purpose, and not give rise to an active binding site; it might also be buried inside the protein, and thus be inaccessible for interaction. Thus, the appearance of an appropriate motif does not always imply interaction. A key feature of our approach is that we allow each motif occurrence in a protein to be either active or inactive. Interactions are then induced only by the interactions of high-affinity active motifs in the two proteins. Thus, in our example, the motif d in P2 is inactive, and hence does not lead to an interaction between P2 and P4, despite the affinity between the motif pair c, d. We note that Deng et al. [8] proposed a somewhat related method for genome-wide analysis of protein interaction data, based on protein domains.
However, their method is focused on predicting protein-protein interactions and not on revealing the site of interaction, and they do not allow for the possibility that some domains are inactive. Our goal is thus to identify two components: the affinities between pairs of motifs, and the activity of the occurrences of motifs in different proteins. Our algorithm addresses this problem by using the framework of Bayesian networks [13] and probabilistic relational models [14], which allows us to handle the inherent noise in the protein interaction data and the uncertain relationship between interactions and motif pairs. We construct a model encoding our assumption that protein interactions are induced by the interactions of active motif pairs. We then use the EM algorithm [15], to fill in the details of the model, learning both the motif affinities and activities from the observed data of protein-protein interactions and protein motif occurrences. We address the computational complexity of the E-step in these large, densely connected models by using an approximate inference procedure based on branch-and-bound. We evaluated our model on protein-protein interactions in yeast and Prosite motifs [11]. As a basic performance measure, we evaluated the ability of our method to predict new protein-protein interactions, showing that it achieves better performance than several other models. In particular, our results validate our assumption that we can explain interactions via the interactions of active sequence motifs. More importantly, we analyze the ability of our method to discover the mechanism by which the interaction occurs. Finally, we examined co-crystallized protein pairs where the 3D structure of the interaction is known, so that we can determine the sites at which the interaction took place. We show that our active motifs are more likely to participate in interactions. 
2 The Probabilistic Model

The basic entities in our probabilistic model are the proteins and the set of sequence motifs that can mediate protein interactions. Our model therefore contains a set of protein entities P = {P1, . . . , Pn}, with the motifs that occur in them. Each protein P is associated with the set of motifs that occur in it, denoted by P.M. As we discussed, a key premise of our approach is that a specific occurrence of a sequence motif may or may not be active. Thus, each motif occurrence a ∈ P.M is associated with a binary-valued variable P.Aa, which takes the value true if a is active in protein P and false otherwise. We structure the prior probability as

$$P(P.A_a = true) = \min\left\{ 0.8,\; \frac{3 + 0.1 \cdot |P.M|}{|P.M|} \right\},$$

to capture our intuition that the number of active motifs in a protein is roughly a constant fraction of the total number of motifs in the protein, but that even proteins with few motifs tend to have at least some number of active motifs. A pair of active motifs in two proteins can potentially bind and induce an interaction between the corresponding proteins. Thus, in our model, a pair of proteins interact if each contains an active motif, and this pair of motifs binds. We encode this assumption by including in our model entities Tij corresponding to a pair of proteins Pi, Pj. For each pair of motifs a ∈ Pi.M and b ∈ Pj.M, we introduce a variable Tij.Aab, which is a deterministic AND of the activities of the two motifs. Intuitively, this variable represents whether the pair of motifs can potentially interact. The probability with which two active motif occurrences bind is called their affinity. We model the binding event between two motif occurrences using a variable Tij.Bab, and define: P(Tij.Bab = true | Tij.Aab = true) = θab and P(Tij.Bab = true | Tij.Aab = false) = 0, where θab is the affinity between motifs a and b.
This model reflects our assumption that two motif occurrences can bind only if they are both active, but their actual binding probability depends on their affinity. Note that this affinity is a feature of the motif pair and does not depend on the proteins in which they appear. We must also account for interactions that are not explained by our set of motifs, whether because of false positives in the data, or because of inadequacies of our model or of our motif set. Thus, we add a spurious binding variable Tij.S, for cases where an interaction between Pi and Pj exists, but cannot be explained well by our set of active motifs. The probability that a spurious binding occurs is given by P(Tij.S = true) = θS. Finally, we observe an interaction between two proteins if and only if some form of binding occurs, whether by a motif pair or a spurious binding. Thus, we define a variable Tij.O, which represents whether protein i was observed to interact with protein j, to be a deterministic OR of all the binding variables Tij.S and Tij.Bab. Overall, Tij.O is a noisy-OR [13] of all motif pair variables Tij.Aab. Note that our model accounts for both types of errors in the protein interaction data. False negatives (missing interactions) in the data are addressed through the fact that the presence of an active motif pair only implies that binding takes place with some probability. False positives (wrong interactions) in the data are addressed through the introduction of the spurious binding variables. The full model defines a joint probability distribution over the entire set of attributes:

$$P(\mathbf{P.A}, \mathbf{T.A}, \mathbf{T.B}, \mathbf{T.S}, \mathbf{T.O}) = \prod_i \prod_{a \in P_i.M} P(P_i.A_a) \prod_{ij} \Bigg[ \prod_{a \in P_i.M,\, b \in P_j.M} P(T_{ij}.A_{ab} \mid P_i.A_a, P_j.A_b)\, P(T_{ij}.B_{ab} \mid T_{ij}.A_{ab}) \; P(T_{ij}.S)\, P(T_{ij}.O \mid \mathbf{T_{ij}.B}, T_{ij}.S) \Bigg],$$

where each of these conditional probability distributions is as specified above. We use Θ to denote the entire set of model parameters {θa,b}a,b ∪ {θS}. An instantiation of our probabilistic model is illustrated in Fig. 1(b).
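The noisy-OR observation model above can be made concrete with a short sketch. This is an illustration under our own encoding choices (dicts for the activity indicators and affinities; the name `p_interact` is hypothetical), not the paper's code:

```python
def p_interact(act_i, act_j, theta, theta_s):
    """P(T_ij.O = true): an interaction is observed unless no active motif
    pair binds AND no spurious binding occurs (the noisy-OR of the model).

    act_i, act_j : dict motif -> bool, the activity indicators P.A
    theta        : dict (a, b) -> affinity theta_ab (symmetric lookup assumed)
    theta_s      : spurious-binding probability theta_S
    """
    p_no_bind = 1.0 - theta_s                 # no spurious binding
    for a, on_a in act_i.items():
        for b, on_b in act_j.items():
            if on_a and on_b:                 # T_ij.A_ab = true
                aff = theta.get((a, b), theta.get((b, a), 0.0))
                p_no_bind *= 1.0 - aff        # this active pair fails to bind
    return 1.0 - p_no_bind
```

For example, with one active motif pair of affinity 0.5 and θS = 0.1, the interaction probability is 1 − 0.9 · 0.5 = 0.55; inactive motifs contribute nothing, exactly as the deterministic AND on the activities dictates.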
3 Learning the Model We now turn to the task of learning the model from the data. In a typical setting, we are given as input a protein interaction data set, specifying a set of proteins P and a set of observed interacting pairs T.O. We are also given a set of potentially relevant motifs, and the occurrences of these motifs in the different proteins in P. Thus, all the variables except for the O variables are hidden. Our learning task is thus twofold: we need to infer the values of the hidden variables, both the activity variables P.A, T.A, and the binding variables T.B, T.S; we also need to find a setting of the model parameters Θ, which specify the motif affinities. We use a variant of the EM algorithm [15] to find both an assignment to the parameters Θ, and an assignment to the motif variables P.A, which is a local maximum of the likelihood function P(T.O, P.A | Θ). Note that, to maximize this objective, we search for a MAP assignment to the motif activity variables, but sum out over the other hidden variables. This design decision is reasonable in our setting, where determining motif activities is an important goal; it is a key assumption for our computational procedure. As in most applications of EM, our main difficulty arises in the E-step, where we need to compute the distribution over the hidden variables given the settings of the observed variables and the current parameter settings. In our model, any two motif variables (both within the same protein and across different proteins) are correlated, as there exists a path of influence between them in the underlying Bayesian network (see Fig. 1(c)). These correlations make the task of computing the posterior distribution over the hidden variables intractable, and we must resort to an approximate computation. 
While we could apply a general purpose approximate inference algorithm such as loopy belief propagation [16], such methods may not converge in densely connected models such as this one, and there are few guarantees on the quality of the results even if they do converge. Fortunately, our model turns out to have additional structure that we can exploit. We now describe an approximate inference algorithm that is tailored to our model, and is guaranteed to converge to a (strong) local maximum. Our first observation is that the only variables that correlate the different protein pairs Tij are the motif variables P.A. Given an assignment to these activity variables, the network decomposes into a set of independent subnetworks, one for each protein pair. Based on this observation, we divide our computation of the E-step into two parts. In the first, we find an assignment to the motif variables in each protein, P.A; in the second, we compute the posterior probability over the binding motif pair variables T.B, T.S, given the assignment to the motif variables. We begin by describing the second phase. We observe that, as all the motif pair variables T.A are fully determined by the motif variables, the only variables left to reason about are the binding variables T.B and T.S. The variables for any pair Tij are independent of the rest of the model given the instantiation to T.A and the interaction evidence. That fact, combined with the noisy-OR form of the interaction, allows us to compute the posterior probability required in the E-step exactly and efficiently. Specifically, the computation for the variables associated with a particular protein pair Tij is as follows, where we omit the common prefix Tij to simplify notation. If Aab = false, then P(Bab = true | Aab = false, O, Θ) = 0. Otherwise, if Aab = true, then

$$P(B_{ab} = true \mid A, O, \Theta) = \frac{P(B_{ab} \mid A, \Theta)\, P(O \mid B_{ab} = true, A, \Theta)}{P(O \mid A, \Theta)}.$$
The first term in the numerator is simply the motif affinity θab; the second term is 1 if O = true and 0 otherwise. The denominator can easily be computed as

$$P(O = true \mid A, \Theta) = 1 - (1 - \theta_S) \prod_{a,b:\, A_{ab} = true} (1 - \theta_{ab}).$$

The computation for P(S) is very similar. We now turn to the first phase, of finding a setting for all of the motif variables. Unfortunately, as we discussed, the model is highly interconnected, and finding an optimal joint setting for all of the variables P.A is intractable. We therefore approximate this joint assignment using a method that exploits our specific structure. Our method iterates over the proteins, finding in each iteration the optimal assignment to the motif variables of one protein given the current assignment to the motif activities in the remaining proteins. The process repeats, iterating over the proteins until convergence. As we discussed, the likelihood of each assignment to Pi.A can easily be computed using the method described above. However, the computation for each protein is still exponential in the number of motifs it contains, which can be large (e.g., 15). Fortunately, in our specific model, we can apply the following branch-and-bound algorithm (similar to an approach proposed by Henrion [17] for BN2O networks) to find the globally optimal assignment to the motif variables of each protein. The idea is that we search over the space of possible assignments Pi.A for one that maximizes our objective. We can show that if making a motif active relative to one assignment does not improve the objective, it will also not improve the objective relative to a large set of other assignments. More precisely, let $f(P_i.A) = P(P_i.A, P_{-i}.A \mid O, \Theta)$ denote the objective we wish to maximize, where $P_{-i}.A$ is the fixed assignment to the motif variables in all proteins except $P_i$. Let $P_i.A_{-a}$ denote the assignment to all the motif variables in $P_i$ except $A_a$. We compute the ratio of f when switching $P_i.A_a$ from false to true.
Let $h_a(P_j) = \prod_{b:\, P_j.A_b = true} (1 - \theta_{ab})$ denote the probability that motif a does not bind to any active motif in $P_j$. We can now compute:

$$\Delta_a(P_i.A_{-a}) = \frac{f(P_i.A_a = true,\, P_i.A_{-a})}{f(P_i.A_a = false,\, P_i.A_{-a})} = \frac{g}{1-g} \cdot \prod_{\substack{1 \le j \le n \\ T_{ij}.O = false}} h_a(P_j) \cdot \prod_{\substack{1 \le j \le n \\ T_{ij}.O = true}} \frac{1 - (1-\theta_S)\, h_a(P_j) \prod_{b \ne a,\, P_i.A_b = true} h_b(P_j)}{1 - (1-\theta_S) \prod_{b \ne a,\, P_i.A_b = true} h_b(P_j)} \quad (1)$$

where g is the prior probability for a motif in protein $P_i$ to be active. Now, consider a different point in the search, where the current motif activity assignment is $P_i.A'_{-a}$, which has all the active motifs of $P_i.A_{-a}$ and some additional ones. The first two terms in the product of Eq. (1) are the same for $\Delta_a(P_i.A_{-a})$ and $\Delta_a(P_i.A'_{-a})$. For the final term (the large fraction), one can show with some algebraic manipulation that this term is lower for $\Delta_a(P_i.A'_{-a})$ than for $\Delta_a(P_i.A_{-a})$. We conclude that $\Delta_a(P_i.A_{-a}) \ge \Delta_a(P_i.A'_{-a})$, and hence that:

$$\frac{f(P_i.A_a = true, P_i.A_{-a})}{f(P_i.A_a = false, P_i.A_{-a})} \le 1 \;\Rightarrow\; \frac{f(P_i.A_a = true, P_i.A'_{-a})}{f(P_i.A_a = false, P_i.A'_{-a})} \le 1.$$

It follows that, if switching motif a from inactive to active relative to $P_i.A$ decreases f, it will also decrease f when some additional motifs are active. We can exploit this property in a branch-and-bound algorithm to find the globally optimal assignment $P_i.A$. Our algorithm keeps a set V of viable candidate motif assignments. For presentation, we encode assignments via the set of active motifs they contain. Initially, V contains only the empty assignment {}. We start by considering motif assignments with a single active motif: we put such an assignment {a} in V if its f-score is higher than f({}). Next, we consider assignments {a, b} with two active motifs. We consider {a, b} only if both {a} and {b} are in V. If so, we evaluate its f-score and add it to V if this score is greater than those of {a} and {b}; otherwise, we discard it.
We continue this process for all assignments of size k: for each assignment with active motif set S, we test whether S − {a} ∈ V for all a ∈ S; if so, we compare f(S) to each f(S − {a}), and add S to V if it dominates all of them. The algorithm terminates when, for some k, no assignment of size k is saved. To understand the intuition behind this pruning procedure, consider a candidate assignment {a, b, c, d}, and assume that {a, b, c} ∈ V but {b, c, d} ∉ V. In this case, we must have {b, c} ∈ V, but adding d to that assignment reduces the f-score. As shown by our analysis, adding d to the superset {a, b, c} would then also reduce the f-score. This algorithm is still exponential in the worst case. However, in our setting, a protein with many motifs has a low prior probability that each motif is active. Hence, adding new motifs is less likely to increase the f-score, and the algorithm tends to terminate quickly. As we show in Section 4, this algorithm significantly reduces the cost of our procedure. Our E-step finds an assignment to P.A which is a strong local optimum of the objective function $\max P(P.A \mid T.O, \Theta)$: the assignment has higher probability than any assignment that changes the motif variables of any single protein. For that assignment, our algorithm also computes the distribution over all of the binding variables, as described above. Using this completion, we can easily compute the (expected) sufficient statistics for the different parameters of the model. As each of these parameters is a simple binomial distribution, the maximum likelihood estimation in the M-step is entirely standard; we omit the details.

4 Results

We evaluated our model on reliable S. cerevisiae protein interaction data from the MIPS [2] and DIP [3] databases. As non-interaction data, we randomly picked pairs of proteins that have no common function or cellular location.
This results in a dataset of 2275 proteins, 4838 interactions (T_ij.O = true), and 9037 non-interactions (T_ij.O = false). We used sequence motifs from the Prosite database [11], resulting in a dataset of 516 different motifs with an average of 7.1 motif occurrences per protein. If a motif pair does not appear between any pair of interacting proteins, we initialize its affinity to 0 to maximize the joint likelihood. Its affinity will stay at 0 during the EM iterations, which simplifies our model structure. We set the initial affinity for the remaining 8475 motif pairs to 0.03. We train our model with motifs initialized to be either all active (P.A = true) or all inactive (P.A = false). We obtain similar results with these two initializations, indicating the robustness of our algorithm. Below we report only the results with all motifs initialized to be active. Our branch-and-bound algorithm significantly reduces the number of motif activity assignments that need to be evaluated. For a protein with 15 motifs, the number of assignments evaluated is reduced from 2^15 = 32768 in exhaustive search to 802 using our algorithm. Since the majority of the computation is spent finding the activity assignments, this yields a 40-fold reduction in running time. Predicting protein-protein interactions. We test our model by evaluating its performance in predicting interactions, using 5-fold cross-validation on the set of interacting and non-interacting protein pairs. In each fold, we train a model and predict P(T_ij.O = true) for pairs P_i, P_j in the held-out interactions. Many motif pairs are over-represented in interacting proteins. We thus compare our method to a baseline that ranks pairs of proteins by the maximum enrichment of over-represented motif pairs (see [18] for details). We also compare it to a model where all motifs are set to be active; this is analogous to the method of Deng et al. [8].
For completeness, we compare the two variants of the model using data on the domain (Pfam and ProDom [19]) content of the proteins as well as the Prosite motif content. [Figure 2(a) plots ROC curves for four methods: Association (Sprinzak & Margalit); Prosite motifs with inactive motifs allowed (P.A ∈ {0, 1}); Prosite motifs all active (P.A = 1); and Pfam & ProDom domains all active (P.A = 1, Deng et al.).] Figure 2: (a) ROC curves for the different methods. The x-axis is the proportion of all non-interacting protein pairs in the training data predicted to interact; the y-axis is the proportion of all interacting protein pairs in the training data predicted to interact. Points are generated using different cutoff probabilities. A larger area under the curve indicates better prediction. Our method (square marker) outperforms all other methods. (b) Two protein chains that form part of the 1ryp complex in PDB, interacting at the site of two short sequence motifs. The ROC curves in Fig. 2(a) show that our method outperforms the other methods, and that the additional degree of freedom of allowing motifs to be inactive is essential. These results validate our modeling assumptions; they also show that our method can be used to suggest new interactions and to assign confidence levels to observed interactions, which is much needed in view of the inaccuracies and large fraction of missing interactions in current interaction databases. Evaluating predicted active motifs. A key feature of our approach is its ability to detect pairs of interacting motifs. We evaluate these predictions against data from the Protein Data Bank (PDB) [20], which contains some solved structures of interacting proteins (Fig. 2(b)). While the PDB data is scarce, it provides the ultimate evaluation of our predicted active motifs.
We extracted all structures from PDB that have at least two co-crystallized chains and whose chains are nearly identical to yeast proteins. From the residues that are in contact between two chains (distance < 5 Ångströms), we infer which protein motifs participate in interactions. Among our training data, 105 proteins have co-crystallized structures in PDB. On these proteins, our data contained a total of 620 motif occurrences, of which 386 are predicted to be active. Among the motifs predicted to be active, 257 (66.6%) are interacting in PDB. Among the 234 motifs predicted to be inactive, only 120 (51.3%) are interacting. The chi-square p-value is 10^{-4}. At the residue level, our predicted active motifs comprise 3736 amino acids, of which 1388 (37.2%) are interacting. In comparison, our predicted inactive motifs comprise 3506 amino acids, of which only 588 (16.0%) are interacting. This significant enrichment supports the ability of our method to detect motifs that participate in interactions. In fact, the set of interactions in PDB is only a subset of the interactions these proteins participate in; therefore, the actual rate of false-positive active motifs is likely lower than reported here. 5 Discussion and Conclusions In this paper, we presented a probabilistic model which explicitly encodes elements in the protein sequence that mediate protein-protein interactions. By using a variant of the EM algorithm and a branch-and-bound algorithm for the E-step, we make the learning procedure tractable. Our results show that our method successfully uncovers motif activities and binding affinities, and uses them to predict both protein interactions and specific binding sites. The ability of our model to predict structural elements, without a full structural analysis, supports the viability of our approach.
Our use of a probabilistic model provides us with a general framework to incorporate different types of data, allowing the model to be extended in various ways. First, we can incorporate additional signals for protein interactions, such as gene expression data (as in [9]), cellular location, or even annotations from the literature (as in [7]). We can also integrate protein interaction data across multiple species; for example, we might use the yeast interaction data to provide more accurate predictions for protein-protein interactions in fly [10]. References [1] P. Uetz, et al. A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae. Nature, 403(6770):623–7, 2000. [2] H. W. Mewes, et al. MIPS: a database for genomes and protein sequences. Nucleic Acids Res, 2002. [3] I. Xenarios, et al. DIP, the Database of Interacting Proteins: a research tool for studying cellular networks of protein interactions. Nucleic Acids Research, 30(1):303–305, 2002. [4] P. Chakrabarti and J. Janin. Dissecting protein-protein recognition sites. PROTEINS: Structure, Function, and Genetics, 47:334–343, 2002. [5] J. J. Gray, et al. Protein-protein docking with simultaneous optimization of rigid-body displacement and side-chain conformations. Journal of Molecular Biology, 331:281–299, 2003. [6] Y. Ofran and B. Rost. Predicted protein-protein interaction sites from local sequence information. FEBS Lett., 544(1-3):236–239, 2003. [7] R. Jansen, et al. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science, 302:449–53, 2003. [8] M. Deng, S. Mehta, F. Sun, and T. Chen. Inferring domain-domain interactions from protein-protein interactions. Genome Res, 12(10):1540–8, 2002. [9] E. Segal, H. Wang, and D. Koller. Discovering molecular pathways from protein interaction and gene expression data.
Bioinformatics, 19 Suppl 1:I264–I272, 2003. [10] L. Giot, et al. A protein interaction map of Drosophila melanogaster. Science, 302(5651):1727–36, 2003. [11] L. Falquet, et al. The PROSITE database, its status in 2002. Nucleic Acids Research, 30:235–238, 2002. [12] D. R. Caffrey, et al. Are protein-protein interfaces more conserved in sequence than the rest of the protein surface? Protein Science, 13:190–202, 2003. [13] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988. [14] D. Koller and A. Pfeffer. Probabilistic frame-based systems. In Proc. AAAI, pages 580–587, 1998. [15] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Stat. Soc., B(39):1–39, 1977. [16] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS, pages 689–695, 2000. [17] M. Henrion. Search-based methods to bound diagnostic probabilities in very large belief nets. In Uncertainty in Artificial Intelligence, pages 142–150, 1991. [18] E. Sprinzak and H. Margalit. Correlated sequence-signatures as markers of protein-protein interaction. Journal of Molecular Biology, 311:681–692, 2001. [19] R. Apweiler, et al. The InterPro database, an integrated documentation resource for protein families, domains and functional sites. Nucleic Acids Res, 29(1):37–40, 2001. [20] H. M. Berman, et al. The Protein Data Bank. Nucleic Acids Research, 28:235–242, 2000.
Proximity graphs for clustering and manifold learning Miguel Á. Carreira-Perpiñán and Richard S. Zemel Dept. of Computer Science, University of Toronto 6 King's College Road, Toronto, ON M5S 3H5, Canada Email: {miguel,zemel}@cs.toronto.edu Abstract Many machine learning algorithms for clustering or dimensionality reduction take as input a cloud of points in Euclidean space, and construct a graph with the input data points as vertices. This graph is then partitioned (clustering) or used to redefine metric information (dimensionality reduction). There has been much recent work on new methods for graph-based clustering and dimensionality reduction, but not much on constructing the graph itself. Graphs typically used include the fully-connected graph, a local fixed-grid graph (for image segmentation) or a nearest-neighbor graph. We suggest that the graph should adapt locally to the structure of the data. This can be achieved by a graph ensemble that combines multiple minimum spanning trees, each fit to a perturbed version of the data set. We show that such a graph ensemble usually produces a better representation of the data manifold than standard methods; and that it provides robustness to a subsequent clustering or dimensionality reduction algorithm based on the graph. 1 Introduction Graph-based algorithms have long been popular, and have received even more attention recently, for two of the fundamental problems in machine learning: clustering [1–4] and manifold learning [5–8]. Relatively little attention has been paid to the properties and construction methods for the graphs that these algorithms depend on. A starting point for this study is the question of what constitutes a good graph. In the applications considered here, the graphs are an intermediate form of representation, and therefore their utility to some extent depends on the algorithms that they will ultimately be used for.
However, in the case of both clustering and manifold learning, the data points are assumed to lie on some small number of manifolds. Intuitively, the graph should represent these underlying manifolds well: it should avoid shortcuts that travel outside a manifold, avoid gaps that erroneously disconnect regions of a manifold, and be dense within the manifold and clusters. Also, while the algorithms differ with respect to connectedness, in that clustering wants the graph to be disconnected while for manifold learning the graph should be connected, both want at least the inside of the clusters, or dense areas of the manifold, to be enhanced relative to the between-cluster, or sparse-manifold, connections. [Figure 1 panels, left to right: dataset, MST, k-NNG, ϵ-ball graph, Delaunay triangulation; top row on the original dataset, bottom row on a perturbed dataset.] Figure 1: Sensitivity to noise of proximity graphs. Top row: several proximity graphs constructed on a noisy sample of points lying on a circle. Bottom row: the same graphs constructed on a different sample; specifically, we added to each point Gaussian noise of standard deviation equal to the length of the small segment shown in the centre of the dataset (top row), built the graph, and drew it on the original dataset. This small perturbation can result in large changes in the graphs, such as disconnections, shortcuts or changes in connection density. Many methods employ simple graph constructions. A fully-connected graph is used for example in spectral clustering and multidimensional scaling, while a fixed grid, with each point connecting to some small fixed number of neighbors in a pre-defined grid of locations, is generally used in image segmentation. The ϵ-ball graph, in which each point connects to points within some distance ϵ, and the k-nearest-neighbor graph (k-NNG) are generalizations of these approaches, as they take into account distance in features associated with each point instead of simply the grid locations.
The ϵ-ball or k-NNG provide an improvement over the fully-connected graph or fixed grid (clustering: [3, 9]; manifold learning: [5, 7]). These traditional methods contain parameters (ϵ, k) that strongly depend on the data; they generally require careful, costly tuning, as typically graphs must be constructed for a range of parameter values, the clustering or dimensionality-reduction algorithm run on each, and then performance curves compared to determine the best settings. Figure 1 shows that these methods are quite sensitive to sparsity and noise in the data points, and that the parameters should ideally vary within the data set. It also shows that other traditional graphs (e.g. the Delaunay triangulation) are not good for manifolds, since they connect points nonlocally. In this paper we propose a different method of graph construction, one based on minimum spanning trees (MSTs). Our method involves an ensemble of trees, each built on a perturbed version of the data. We first discuss the motivation for this new type of graph, and then examine its robustness properties, and its utility to both subsequent clustering or dimensionality reduction methods. 2 Two new types of proximity graphs A minimum spanning tree is a tree subgraph that contains all the vertices and has a minimum sum of edge weights. As a skeleton of a data set, the MST has some good properties: it tends to avoid shortcuts between branches (typically caused by long edges, which are contrary to the shortest-length criterion) and it gives a connected graph (usually a problem for other methods with often-occurring random small groupings of points). In fact, the MST was an early approach to clustering [10]. However, the MST is too sparse (having only N −1 edges for an N-point data set, and no cycles) and is sensitive to noise. One way to flesh it out and attain robustness to noise is to form an MST ensemble that combines multiple MSTs; we give two different algorithms for this. 
2.1 Perturbed MSTs (PMSTs) Perturbed MSTs combine a number of MSTs, each fit to a perturbed version of the data set. The perturbation is done through a local noise model that we estimate separately for each data point based on its environment: point x_i is perturbed by adding to it zero-mean uniform noise of standard deviation s_i = r d_i, where d_i is the average distance to the k nearest neighbors of x_i, and r ∈ [0, 1]. In this paper we use k = 5 throughout and study the effect of r. The locality of the noise model allows points to move more or less depending on the local data structure around them and to connect to different numbers of neighbors at different distances; in effect we achieve a variable k and ϵ. To build the PMST ensemble, we generate T > 1 perturbed copies of the entire data set according to the local noise model and fit an MST to each. The PMST ensemble assigns a weight e_ij ∈ [0, 1] to the edge between points x_i and x_j equal to the average number of times that edge appears in the trees. For T = 1 this gives the MST of the unperturbed data set; for T → ∞ it gives a stochastic graph where e_ij is the probability (in the Laplace sense) of that edge under the noise model. The PMST ensemble contains at most T(N − 1) edges (usually far fewer). Although the algorithm is randomized, the PMST ensemble for large T is essentially deterministic, and insensitive to noise by construction. In practice a small T is enough; we use T = 20 in the experiments. 2.2 Disjoint MSTs (DMSTs) Here we build a graph that is a deterministic collection of t MSTs satisfying the property that the nth tree (for n = 1, . . . , t) is the MST of the data subject to not using any edge already in the previous 1, . . . , n − 1 trees. One possible construction algorithm is an extension of Kruskal's algorithm for the MST, where we pick edges without replacement and restart for every new tree.
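The PMST ensemble of Section 2.1 can be sketched as follows, assuming Euclidean points; note that plain uniform jitter on [−s_i, s_i] is used here for simplicity, whereas the paper matches the *standard deviation* of the noise to r·d_i (a sketch, not the authors' implementation):

```python
import random
from collections import defaultdict
from math import dist

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph; returns an edge set."""
    n, in_tree, edges = len(points), {0}, set()
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        in_tree.add(j)
        edges.add((min(i, j), max(i, j)))
    return edges

def pmst_ensemble(points, r=0.3, k=5, T=20, seed=0):
    """e_ij = fraction of the T perturbed MSTs containing edge (i, j).

    Point i is jittered by uniform noise of scale s_i = r * d_i, where d_i is
    the mean distance to its k nearest neighbours.
    """
    rng, n = random.Random(seed), len(points)
    s = []
    for i in range(n):
        nn = sorted(dist(points[i], points[j]) for j in range(n) if j != i)[:k]
        s.append(r * sum(nn) / len(nn))
    counts = defaultdict(int)
    for _ in range(T):
        pert = [tuple(x + rng.uniform(-s[i], s[i]) for x in p)
                for i, p in enumerate(points)]
        for e in mst_edges(pert):
            counts[e] += 1
    return {e: c / T for e, c in counts.items()}
```

Edges that survive every perturbation receive weight 1; edges that appear only under favourable jitter receive fractional weight, which is the variable-k, variable-ϵ effect described above.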
Specifically, we sort the list of N(N−1)/2 edges e_ij by increasing distance d_ij and visit each available edge in turn, removing it if it merges two clusters (or, equivalently, does not create a cycle); whenever we have removed N − 1 edges, we go back to the start of the list. We repeat the procedure t times in total. The DMST ensemble consists of the union of all removed edges and contains t(N − 1) edges, each of weight 1. The t parameter controls the overall density of the graph, which is always connected; unlike ϵ or k (for the ϵ-ball or k-NNG), t is not a parameter that depends locally on the data, and again points may connect to different numbers of neighbors at different distances. We obtain the original MST for t = 1; values t = 2–4 (and often considerably larger) work very well in practice. t need not be an integer, i.e., we can fix the total number of edges instead. In any case we should use t ≪ N/2. 2.3 Computational complexity For a data set with N points, the computational complexity is approximately O(TN² log N) (PMSTs) or O(N²(log N + t)) (DMSTs). In both cases the resulting graphs are sparse (the number of edges is linear in the number of points N). If imposing an a priori sparse structure (e.g. an 8-connected grid in image segmentation), the edge list is much shorter, so the graph construction is faster. For the perturbed MST ensemble, the perturbation of the data set results in a partially disordered edge list, which one should be able to sort efficiently. The bottleneck in the graph construction itself is the computation of pairwise distances, or equivalently of nearest neighbors, of a set of N points (which affects the ϵ-ball and k-NNG graphs too): in 2D this is O(N log N) thanks to properties of planar geometry, but in higher dimensions the complexity quickly approaches O(N²). Overall, the real computational bottleneck is the graph postprocessing, typically O(N³) in spectral methods (for clustering or manifold learning).
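The Kruskal-without-replacement construction of Section 2.2 can be sketched as follows, using a simple union-find over Euclidean points (a sketch under those assumptions, not the authors' implementation):

```python
from itertools import combinations
from math import dist

def dmst_ensemble(points, t=2):
    """t rounds of Kruskal over the distance-sorted edge list, never reusing
    an edge taken by an earlier tree; returns the union of all t trees."""
    n = len(points)
    order = sorted(combinations(range(n), 2),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
    used = set()
    for _ in range(t):
        parent = list(range(n))       # fresh union-find for each tree
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i
        for i, j in order:
            if (i, j) in used:
                continue              # edge already claimed by an earlier tree
            ri, rj = find(i), find(j)
            if ri != rj:              # merges two components: no cycle
                parent[ri] = rj
                used.add((i, j))
    return used
```

Each round removes exactly N − 1 edges, so the result has t(N − 1) edges of weight 1, as stated above.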
This can be sped up to O(cN²) by using sparsity (limiting a priori the edges allowed, thus approximating the true solution), but then the graph construction is likewise sped up. Thus, even if our graphs are slightly more costly to construct than the ϵ-ball or k-NNG, the computational savings are very large if we avoid having to run the spectral technique multiple times in search of a good ϵ or k. 3 Experiments We present two sets of experiments on the application of the graphs to clustering and manifold learning, respectively. 3.1 Clustering In affinity-based clustering, our data is an N × N affinity matrix W that defines a graph (where nonzeros define edge weights) and we seek a partition of the graph that optimizes a cost function, such as mincut [1] or normalized cut [2]. Typically the affinities are w_ij = exp(−½ (d_ij/σ)²) (where d_ij is the problem-dependent distance between points x_i and x_j) and depend on a scale parameter σ ∈ (0, ∞). This graph partitioning problem is generally NP-complete, so approximations are necessary, such as spectral clustering algorithms [2]. In spectral clustering we seek to cluster in the leading eigenvectors of the normalized affinity matrix N = D^{−1/2} W D^{−1/2} (where D = diag(∑_j w_ij), and discarding a constant eigenvector associated with eigenvalue 1). Spectral clustering succeeds only for a range of values of σ where N displays the natural cluster structure of the data; if σ is too small, W is approximately diagonal, and if σ is too large, W is approximately a matrix of ones. It is thus crucial to determine a good σ, which requires computing clusterings over a range of σ values, an expensive computation since each eigenvector computation is O(N³) (or O(cN²) under sparsity conditions). Fig. 2 shows segmentation results for a grayscale image from [11], where the objective is to segment the occluder from the underlying background, a hard task given the intensity gradients.
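The normalized affinity matrix N = D^{−1/2} W D^{−1/2} described above can be sketched as follows; keeping the diagonal at w_ii = 1 is our simplifying assumption, a detail the text leaves open:

```python
from math import exp, sqrt

def normalized_affinity(points, sigma):
    """Build W with w_ij = exp(-(d_ij/sigma)^2 / 2), then N = D^{-1/2} W D^{-1/2}.

    Spectral clustering would proceed with the leading eigenvectors of N
    (discarding the constant one); that step is omitted here.
    """
    d = lambda a, b: sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    n = len(points)
    W = [[exp(-0.5 * (d(points[i], points[j]) / sigma) ** 2)
          for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in W]          # D_ii = sum_j w_ij
    return [[W[i][j] / sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]
```

The symmetric normalization preserves the block structure of W while bounding the spectrum, which is what makes the leading eigenvectors informative about clusters.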
We use a standard weighted Euclidean distance on the data points (pixels) x = (pixel location, intensity). One method uses the 8-connected grid (where each pixel is connected to its 8 neighboring pixels). The other method uses the PMST or DMST ensemble (constrained to contain only edges in the 8-connected grid) under different values of the r, t parameters; this graph has between 44% and 98% of the number of edges in the 8-grid, depending on the parameter value. We define the affinity matrix as w_ij = e_ij exp(−½ (d_ij/σ)²) (where e_ij ∈ [0, 1] are the edge values). In both methods we apply the spectral clustering algorithm of [2]. The plot shows the clustering error (mismatched occluder area) for a range of scales. The 8-connected grid succeeds in segmenting the occluder for σ ∈ [0.2, 1] approximately, while the MST ensembles (for all parameter values tested) succeed for a wider range, up to σ = ∞ in many cases. The reason for this success even at such high σ is that the graph lacks many edges around the occluder, so those affinities are zero no matter how high the scale is. In other words, for clustering, our graphs enhance the inside of clusters with respect to the bridges between clusters, and so ease the graph partitioning. Figure 2: Using a proximity graph increases the scale range over which good segmentations are possible. We consider segmenting the greyscale image at the bottom (an occluder over a background) with spectral clustering, asking for K = 5 clusters. The color diagrams represent the segmentation (column 1) and the first 5 eigenvectors of the affinity matrix (except the constant eigenvector; columns 2–4) obtained with spectral clustering, using a PMST ensemble with r = 0.4 (upper row) or an 8-connectivity graph (lower row), for 3 different scales: σ1 = 0.5, σ2 = 1.6 and σ = ∞. The PMST ensemble succeeds at all scales (note how several eigenvectors are constant over the occluder), while the 8-connectivity graph progressively deteriorates as σ increases, giving a partition of equal-sized clusters at large scale. In the bottom part of the figure we show the PMST ensemble graph in 3D space (x, y, intensity), and the clustering error vs. σ (where the right end is σ = ∞) for the 8-connectivity graph (thick blue line) and for various PMST and DMST ensembles under various parameters (thin lines). The PMST and DMST ensembles robustly (for many settings of their parameters) give an almost perfect segmentation over a large range of scales. 3.2 Manifold learning For dimensionality reduction, we concentrate on applying Isomap [5], a popular and powerful algorithm. We first estimate the geodesic distances (i.e., along the manifold) ĝ_ij between pairs of points in the data set as the shortest-path lengths in a graph learned from the data. Then we apply multidimensional scaling to these distances to obtain a collection of low-dimensional points {y_i}_{i=1}^N that optimally preserves the estimated geodesic distances. In Fig. 3 we show the results of applying Isomap, using different graphs, on two data sets (ellipse and Swiss roll) for which we know the true geodesic distances g_ij. In a real application, since the true geodesic distances are unknown, error and variance cannot be computed; an estimated residual variance has been proposed [5] to determine the optimal graph parameter. For the perturbed MST ensemble, we binarize the edge values by setting any e_ij > 0 to 1.
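The geodesic-estimation step of Isomap described above (shortest paths in the learned proximity graph) can be sketched as follows; the subsequent classical-MDS step is omitted here:

```python
def geodesic_distances(n, edges, lengths):
    """Estimate geodesics as shortest-path lengths in the proximity graph
    (Floyd-Warshall over an undirected weighted graph)."""
    INF = float("inf")
    g = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), w in zip(edges, lengths):
        g[i][j] = g[j][i] = min(g[i][j], w)   # undirected edge
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if g[i][k] + g[k][j] < g[i][j]:
                    g[i][j] = g[i][k] + g[k][j]
    return g
```

On a disconnected graph some entries remain infinite, which is exactly why the missing curves in Fig. 3 correspond to disconnected ϵ-ball and k-NNG graphs.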
(It is often possible to refine the graph by zeroing edges with small e_ij, since this removes shortcuts that may have arisen by chance, particularly if T is large; but it is difficult to estimate the right threshold reliably.) The plots show 3 curves as a function of the graph parameter: the average error E in the geodesic distances; Isomap's estimated residual variance V̂; and the true residual variance V. From the plots we can see that V̂ correlates well with V (though it underestimates it) and also with E for the Swiss roll, but not for the ellipse; this can make the optimal graph parameter difficult to determine in a real application. Given this, the fact that our graphs work well over a larger region of their parameter space than the ϵ-ball or k-NNG graphs makes them particularly attractive. The plots for the Swiss roll show that, while in the low-noise case the ϵ-ball and k-NNG graphs work well over a reasonable region of their parameter space, in the high-noise case this region shrinks considerably, almost vanishing for the ϵ-ball. This is because for low values of the parameter the graph is disconnected, while for high values it has multiple shortcuts; the difficulty of the task is compounded by the small number of points used, N = 500 (an unavoidable fact in high dimensions). However, for the PMSTs the region remains quite wide, and for the DMSTs the approximate region t ∈ [2, 8] gives good results. For very low r or t = 1 the graph is the single MST, hence the large errors. It is also important to realize that the range of the r parameter of the PMST ensemble does not depend on the data, while the ranges for ϵ and k do. The range of the t parameter of the DMST ensemble does depend on the data, but we have found empirically that t = 2–4 gives very good results with all data sets we have tried.
4 Discussion One main contribution of this paper is to highlight the relatively understudied problem of converting a data set into a graph, which forms an intermediate representation for many clustering and manifold learning algorithms. A second contribution is novel construction algorithms, which are: easy to implement, not expensive to compute, robust across many noise levels and parameter settings, and useful for clustering and manifold learning. In general, a careful selection of the graph construction algorithm makes the results of these machine learning methods robust, and avoids or limits the required parameter search. Finally, the combination of many graphs, formed from perturbed versions of the data, into an ensemble of graphs, is a novel approach to the construction problem. Our idea of MST ensembles is an extension to graphs of the well-known technique of combining predictors by averaging (regression) or voting (classification), as is the regularizing effect of training with noise [12]. An ensemble of predictors improves the generalization to unseen data if the individual predictors are independent from each other and disagree with each other, and can be explained by the bias-variance tradeoff. Unlike regression or classification, unsupervised graph learning lacks at present an error function, so it seems difficult to apply the bias-variance framework here. However, we have conducted a wide range of empirical tests to understand the properties of the ensemble MSTs, and to compare them to the other graph construction methods, in terms of the error in the geodesic distances (if known a priori). 
In summary, we have found that the variance of the error for the geodesic distances decreases for the ensemble when the individual graphs are sparse (e.g. MSTs as used here, or ϵ-ball and k-NNG with low ϵ or k), but not necessarily when the graphs are not sparse. Figure 3: Performance of Isomap with different graphs (rows: ϵ-ball, k-NNG, PMST ensemble, DMST ensemble) on 3 data sets: ellipse with N = 100 points, high noise; Swiss roll with N = 500 points, low and high noise (where high noise means Gaussian with standard deviation equal to 9% of the separation between branches). All plots show on the x-axis the graph parameter (ϵ, k, r or t); on the left y-axis the average error in the geodesic distances (red curve, E = (1/N²) ∑_{i,j=1}^N |ĝ_ij − g_ij|); and on the right y-axis Isomap's estimated residual variance (solid blue curve, V̂ = 1 − R²(Ĝ, D_Y)) and the true residual variance (dashed blue curve, V = 1 − R²(G, D_Y)), where Ĝ and G are the matrices of estimated and true geodesic distances, respectively, D_Y is the matrix of Euclidean distances in the low-dimensional embedding, and R(A, B) is the standard linear correlation coefficient, taken over all entries of matrices A and B. Where the curves for ϵ-ball and k-NNG are missing, the graph was disconnected. The typical cut [9, 13] is a clustering criterion that is based on the probability p_ij that points x_i and x_j are in the same cluster over all possible partitions (under the Boltzmann distribution for the mincut cost function). The p_ij need to be estimated: [9] use Swendsen-Wang sampling, while [13] use randomized-tree sampling. However, these trees are not used to define a proximity graph, unlike in our work. An important direction for future work concerns the noise model for PMSTs. The model we propose is isotropic, in that every direction of perturbation is equally likely.
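The residual variance used in the Figure 3 caption is one minus the squared linear correlation over all matrix entries; a minimal sketch:

```python
from math import sqrt

def residual_variance(G_hat, D_Y):
    """V = 1 - R^2(G_hat, D_Y), with R the linear correlation coefficient
    taken over all entries of the two distance matrices."""
    a = [x for row in G_hat for x in row]
    b = [x for row in D_Y for x in row]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    denom = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return 1 - (cov / denom) ** 2
```

When the embedding distances reproduce the geodesics up to an affine rescaling, the correlation is 1 and the residual variance is 0, which is why Isomap uses it as a goodness-of-fit proxy when true geodesics are unknown.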
A better way is to perturb points more strongly in directions likely to lie within the manifold and less strongly in directions away from the manifold, using a method such as k nearest neighbors to estimate appropriate directions. Preliminary experiments with such a manifold-aligned model are very promising, particularly when the data is very noisy or its distribution on the manifold is not uniform. The noise model can also be extended to deal with non-Euclidean data by directly perturbing the similarities. Acknowledgements Funding provided by a CIHR New Emerging Teams grant. References [1] Zhenyu Wu and Richard Leahy. An optimal graph theoretic approach to data clustering: Theory and its application to image segmentation. IEEE Trans. on Pattern Anal. and Machine Intel., 15(11):1101–1113, November 1993. [2] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Trans. on Pattern Anal. and Machine Intel., 22(8):888–905, August 2000. [3] Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Efficient graph-based image segmentation. Int. J. Computer Vision, 59(2):167–181, September 2004. [4] Romer Rosales, Kannan Achan, and Brendan Frey. Learning to cluster using local neighborhood structure. In ICML, 2004. [5] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, December 22 2000. [6] Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 22 2000. [7] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, June 2003. [8] Kilian Q. Weinberger and Lawrence K. Saul. Unsupervised learning of image manifolds by semidefinite programming. In CVPR, 2004. [9] Marcelo Blatt, Shai Wiseman, and Eytan Domany. Data clustering using a model granular magnet. 
Neural Computation, 9(8):1805–1842, November 1997. [10] C. T. Zahn. Graph-theoretical methods for detecting and describing gestalt clusters. IEEE Trans. Computers, C–20(1):68–86, April 1971. [11] Chakra Chennubhotla and Allan Jepson. EigenCuts: Half-lives of EigenFlows for spectral clustering. In NIPS, 2003. [12] Christopher M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, New York, Oxford, 1995. [13] Yoram Gdalyahu, Daphna Weinshall, and Michael Werman. Self organization in vision: Stochastic clustering for image segmentation, perceptual grouping, and image database organization. IEEE Trans. on Pattern Anal. and Machine Intel., 23(10):1053–1074, October 2001.
|
2004
|
172
|
2,588
|
Fast Rates to Bayes for Kernel Methods Ingo Steinwart∗and Clint Scovel Modeling, Algorithms and Informatics Group, CCS-3 Los Alamos National Laboratory {ingo,jcs}@lanl.gov Abstract We establish learning rates to the Bayes risk for support vector machines (SVMs) with hinge loss. In particular, for SVMs with Gaussian RBF kernels we propose a geometric condition for distributions which can be used to determine approximation properties of these kernels. Finally, we compare our methods with a recent paper of G. Blanchard et al.. 1 Introduction In recent years support vector machines (SVM’s) have been the subject of many theoretical considerations. In particular, it was recently shown ([1], [2], and [3]) that SVM’s can learn for all data-generating distributions. However, these results are purely asymptotic, i.e. no performance guarantees can be given in terms of the number n of samples. In this paper we will establish such guarantees. Since by the no-free-lunch theorem of Devroye (see [4]) performance guarantees are impossible without assumptions on the data-generating distribution we will restrict our considerations to specific classes of distributions. In particular, we will present a geometric condition which describes how distributions behave close to the decision boundary. This condition is then used to establish learning rates for SVM’s. To obtain learning rates faster than n−1/2 we also employ a noise condition of Tsybakov (see [5]). Combining both concepts we are in particular able to describe distributions such that SVM’s with Gaussian kernel learn almost linearly, i.e. with rate n−1+ε for all ε > 0, even though the Bayes classifier cannot be represented by the SVM. Let us now formally introduce the statistical classification problem. To this end assume that X is a set. We write Y := {−1, 1}. Given a training set T = (x1, y1), . . . , (xn, yn) ∈ (X × Y )n the classification task is to predict the label y of a new sample (x, y). 
In the standard batch model it is assumed that T is i.i.d. according to an unknown probability measure P on X × Y . Furthermore, the new sample (x, y) is drawn from P independently of T. Given a classifier C that assigns to every training set T a measurable function fT : X →R the prediction of C for y is sign fT (x), where we choose a fixed definition of sign(0) ∈{−1, 1}. In order to “learn” from the samples of T the decision function fT should guarantee a small probability for the misclassification of the example (x, y). To make this precise the risk of a measurable function f : X →R is defined by RP (f) := P {(x, y) : sign f(x) ̸= y} . The smallest achievable risk RP := inf RP (f) | f : X →R measurable is called the Bayes risk of P. A function fP : X →Y attaining this risk is called a Bayes decision function. Obviously, a good classifier should produce decision functions whose risks are close to the Bayes risk. This leads to the definition: a classifier is called universally consistent if ET ∼P nRP (fT ) −RP →0 (1) holds for all probability measures P on X × Y . The next naturally arising question is whether there are classifiers which guarantee a specific rate of convergence in (1) for all distributions. Unfortunately, this is impossible by the so-called no-free-lunch theorem of Devroye (see [4, Thm. 7.2]). However, if one restricts considerations to certain smaller classes of distributions such rates exist for various classifiers, e.g.: • Assuming that the conditional probability η(x) := P(1|x) satisfies certain smoothness assumptions Yang showed in [6] that some plug-in rules (cf. [4]) achieve rates for (1) which are of the form n−α for some 0 < α < 1/2 depending on the assumed smoothness. He also showed that these rates are optimal in the sense that no classifier can obtain faster rates under the proposed smoothness assumptions. • It is well known (see [4, Sec. 
18.1]) that using structural risk minimization over a sequence of hypothesis classes with finite VC-dimension every distribution which has a Bayes decision function in one of the hypothesis classes can be learned with rate n−1/2. • Let P be a noise-free distribution, i.e. RP = 0 and F be a class with finite VCdimension. If F contains a Bayes decision function then up to a logarithmic factor the convergence rate of the ERM classifier over F is n−1 (see [4, Sec. 12.7]). Restricting the class of distributions for classification always raises the question of whether it is likely that these restrictions are met in real world problems. Of course, in general this question cannot be answered. However, experience shows that the assumption that the distribution is noise-free is almost never satisfied. Furthermore, it is rather unrealistic to assume that a Bayes decision function can be represented by the algorithm. Finally, assuming that the conditional probability is smooth, say k-times continuously differentiable, seems to be unjustifiable for many real world classification problems. We conclude that the above listed rates are established for situations which are rarely met in practice. Considering the ERM classifier and hypothesis classes F containing a Bayes decision function there is a large gap in the rates for noise-free and noisy distributions. In [5] Tsybakov proposed a condition on the noise which describes intermediate situations. In order to present this condition we write η(x) := P(y = 1|x), x ∈X, for the conditional probability and PX for the marginal distribution of P on X. Now, the noise in the labels can be described by the function |2η −1|. Indeed, in regions where this function is close to 1 there is only a small amount of noise, whereas function values close to 0 only occur in regions with a high noise. 
We will use the following modified version of Tsybakov's noise condition which describes the size of the latter regions:

Definition 1.1 Let 0 ≤ q ≤ ∞ and P be a distribution on X × Y. We say that P has Tsybakov noise exponent q if there exists a constant C > 0 such that for all sufficiently small t > 0 we have

P_X(|2η − 1| ≤ t) ≤ C · t^q. (2)

All distributions have at least noise exponent 0. In the other extreme case q = ∞ the conditional probability η is bounded away from 1/2. In particular this means that noise-free distributions have exponent q = ∞. Finally note that Tsybakov's original noise condition assumed P_X(f ≠ f_P) ≤ c(R_P(f) − R_P)^(q/(1+q)) for all f : X → Y, which is satisfied if e.g. (2) holds (see [5, Prop. 1]). In [5] Tsybakov showed that if P has a noise exponent q then ERM-type classifiers can obtain rates in (1) which are of the form n^(−(q+1)/(q+pq+2)), where 0 < p < 1 measures the complexity of the hypothesis class. In particular, rates faster than n^(−1/2) are possible whenever q > 0 and p < 1. Unfortunately, the ERM classifier he considered is usually hard to implement and in general there exists no efficient algorithm. Furthermore, his classifier requires substantial knowledge of how to approximate the Bayes decision rules of the considered distributions. Of course, such knowledge is rarely present in practice.

2 Results

In this paper we will use the Tsybakov noise exponent to establish rates for SVM's which are very similar to the above rates of Tsybakov. We begin by recalling the definition of SVM's. To this end let H be a reproducing kernel Hilbert space (RKHS) of a kernel k : X × X → R, i.e. H is a Hilbert space consisting of functions from X to R such that the evaluation functionals are continuous, and k is symmetric and positive definite (see e.g. [7]). Throughout this paper we assume that X is a compact metric space and that k is continuous, i.e. H contains only continuous functions. In order to avoid cumbersome notation we additionally assume ∥k∥_∞ ≤ 1.
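The noise condition P_X(|2η − 1| ≤ t) ≤ C·t^q can be illustrated numerically. The sketch below builds a synthetic distribution with a known exponent (the setup is invented for illustration, not taken from the paper): X uniform on [−1, 1] with η(x) = (1 + x)/2, so |2η(x) − 1| = |x| and P_X(|2η − 1| ≤ t) = t, i.e. noise exponent q = 1. The exponent is then recovered from how the empirical boundary mass scales with t.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: X uniform on [-1, 1], eta(x) = (1 + x) / 2,
# so |2*eta - 1| = |x| and P_X(|2*eta - 1| <= t) = t: noise exponent q = 1.
x = rng.uniform(-1.0, 1.0, size=200_000)
eta = (1.0 + x) / 2.0
margin = np.abs(2.0 * eta - 1.0)

def boundary_mass(t):
    """Empirical estimate of P_X(|2*eta - 1| <= t)."""
    return np.mean(margin <= t)

# If the mass scales like C * t^q, q can be estimated from two scales.
t1, t2 = 0.1, 0.2
q_hat = np.log(boundary_mass(t2) / boundary_mass(t1)) / np.log(t2 / t1)
# q_hat should be close to 1 for this distribution
```

A distribution with more mass piled near the decision boundary would yield a smaller q_hat, matching the intuition that small exponents correspond to hard, noisy problems.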
Now, given a regularization parameter λ > 0, the decision function of an SVM is

(f_{T,λ}, b_{T,λ}) := argmin_{f ∈ H, b ∈ R} λ∥f∥²_H + (1/n) Σ_{i=1}^n l(y_i(f(x_i) + b)), (3)

where l(t) := max{0, 1 − t} is the so-called hinge loss. Unfortunately, only a few results on learning rates for SVM's are known: In [8] it was shown that SVM's can learn with linear rate if the distribution is noise-free and the two classes can be strictly separated by the RKHS. For RKHSs which are dense in the space C(X) of continuous functions the latter condition is satisfied if the two classes have strictly positive distance in the input space. Of course, these assumptions are far too strong for almost all real-world problems. Furthermore, Wu and Zhou (see [9]) recently established rates under the assumption that η is contained in a Sobolev space. In particular, they proved rates of the form (log n)^(−p) for some p > 0 if the SVM uses a Gaussian kernel. Obviously, these rates are much too slow to be of practical interest, and the difficulties with smoothness assumptions have already been discussed above. For our first result, which is much stronger than the above mentioned results, we need to introduce two concepts, both of which deal with the involved RKHS. The first concept describes how well a given RKHS H can approximate a distribution P. In order to introduce it we define the l-risk of a function f : X → R by R_{l,P}(f) := E_{(x,y)∼P} l(yf(x)). The smallest possible l-risk is denoted by R_{l,P} := inf{R_{l,P}(f) | f : X → R}. Furthermore, we define the approximation error function by

a(λ) := inf_{f ∈ H} ( λ∥f∥²_H + R_{l,P}(f) − R_{l,P} ), λ ≥ 0. (4)

The function a(·) quantifies how well an infinite-sample SVM with RKHS H approximates the minimal l-risk (note that we omit the offset b in the above definition for simplicity). If H is dense in the space of continuous functions C(X) then for all P we have a(λ) → 0 as λ → 0 (see [3]).
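For concreteness, problem (3) can be solved approximately by representing f = Σ_j α_j k(·, x_j), so that ∥f∥²_H = αᵀKα for the kernel matrix K, and running subgradient descent on the regularized empirical hinge risk. The numpy sketch below is a toy solver for illustration only; the paper does not prescribe an algorithm, and the Gaussian kernel width, step size, and data are invented.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian RBF kernel k(x, x') = exp(-sigma^2 * ||x - x'||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sigma**2 * sq)

def train_svm(X, y, lam=0.01, sigma=2.0, steps=500, lr=0.5):
    """Subgradient descent on lam*||f||_H^2 + (1/n)*sum_i hinge(y_i*(f(x_i)+b)),
    with f = sum_j alpha_j k(., x_j), so that ||f||_H^2 = alpha^T K alpha."""
    n = len(y)
    K = rbf_kernel(X, sigma)
    alpha, b = np.zeros(n), 0.0
    for _ in range(steps):
        f = K @ alpha + b
        viol = (y * f < 1.0).astype(float)            # hinge-loss subgradient mask
        g_alpha = 2.0 * lam * (K @ alpha) - (K @ (viol * y)) / n
        g_b = -(viol * y).mean()
        alpha -= lr * g_alpha
        b -= lr * g_b
    return alpha, b, K

# Toy data: two well-separated clusters in R^2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
alpha, b, K = train_svm(X, y)
train_acc = np.mean(np.sign(K @ alpha + b) == y)
```

On such benign data the toy solver separates the training set; the theory in this paper concerns how fast the risk of the exact minimizer of (3) approaches the Bayes risk, which no finite-sample demo can show.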
However, in non-trivial situations no rate of convergence which holds uniformly for all distributions P is possible. The following definition characterizes distributions which guarantee certain polynomial rates:

Definition 2.1 Let H be a RKHS over X and P be a distribution on X × Y. Then H approximates P with exponent β ∈ (0, 1] if there is a C > 0 such that for all λ > 0: a(λ) ≤ Cλ^β.

It can be shown (see [10]) that the extremal case β = 1 is equivalent to the fact that the minimal l-risk can be achieved by an element of H. Because of the specific structure of the approximation error function, values β > 1 are only possible for distributions with η ≡ 1/2. Finally, we need a complexity measure for RKHSs. To this end let A ⊂ E be a subset of a Banach space E. Then the covering numbers of A are defined by

N(A, ε, E) := min{ n ≥ 1 : ∃ x_1, ..., x_n ∈ E with A ⊂ ∪_{i=1}^n (x_i + εB_E) }, ε > 0,

where B_E denotes the closed unit ball of E. Now our complexity measure is:

Definition 2.2 Let H be a RKHS over X and B_H its closed unit ball. Then H has complexity exponent 0 < p ≤ 2 if there is an a_p > 0 such that for all ε > 0 we have log N(B_H, ε, C(X)) ≤ a_p ε^(−p).

Note that in [10] the complexity exponent was defined in terms of N(B_H, ε, L_2(T_X)), where L_2(T_X) is the L_2-space with respect to the empirical measure of (x_1, ..., x_n). Since we always have N(B_H, ε, L_2(T_X)) ≤ N(B_H, ε, C(X)), Definition 2.2 is stronger than the one in [10]. Here we only used Definition 2.2 since it enables us to compare our results with [11]. However, all results remain true if one uses the original definition of [10]. For many RKHSs bounds on the complexity exponents are known (see e.g. [3] and [10]). Furthermore, many SVMs use a parameterized family of RKHSs. For such SVMs the constant a_p may play a crucial role. We will see below that this is in particular true for SVMs using a family of Gaussian RBF kernels.
Let us now formulate our first result on rates:

Theorem 2.3 Let H be a RKHS of a continuous kernel on X with complexity exponent 0 < p < 2, and let P be a probability measure on X × Y with Tsybakov noise exponent 0 < q ≤ ∞. Furthermore, assume that H approximates P with exponent 0 < β ≤ 1. We define λ_n := n^(−4(q+1)/((2q+pq+4)(1+β))). Then for all ε > 0 there is a constant C > 0 such that for all x ≥ 1 and all n ≥ 1 we have

Pr*( T ∈ (X × Y)^n : R_P(f_{T,λ_n} + b_{T,λ_n}) ≤ R_P + C x² n^(−4β(q+1)/((2q+pq+4)(1+β)) + ε) ) ≥ 1 − e^(−x).

Here Pr* denotes the outer probability of P^n in order to avoid measurability considerations.

Remark 2.4 With a tail bound of the form of Theorem 2.3 one can easily get rates for (1). In the case of Theorem 2.3 these rates have the form n^(−4β(q+1)/((2q+pq+4)(1+β)) + ε) for all ε > 0.

Remark 2.5 For brevity's sake our major aim was to show the best possible rates using our techniques. Therefore, Theorem 2.3 states rates for the SVM under the assumption that (λ_n) is optimally chosen. However, we emphasize that the techniques of [10] also give rates if (λ_n) is chosen in a different (and thus sub-optimal) way. This is also true for our results on SVM's using Gaussian kernels which we will establish below.

Remark 2.6 In [5] it is assumed that a Bayes classifier is contained in the function class the algorithm minimizes over. This assumption corresponds to a perfect approximation of P by H, i.e. β = 1. In this case our rate is (essentially) of the form n^(−2(q+1)/(2q+pq+4)). If we rescale the complexity exponent p from (0, 2) to (0, 1) and write p′ for the new complexity exponent, this rate becomes n^(−(q+1)/(q+p′q+2)). This is exactly the form of Tsybakov's result in [5]. However, as far as we know our complexity measure cannot be compared to Tsybakov's.

Remark 2.7 By the nature of Theorem 2.3 it suffices that P satisfies Tsybakov's noise assumption for every q′ < q.
It also suffices to suppose that H approximates P with exponent β′ for all β′ < β, and that H has complexity exponent p′ for all p′ > p. Now, it is shown in [10] that the RKHS H has an approximation exponent β = 1 if and only if H contains a minimizer of the l-risk. In particular, if H has approximation exponent β for all β < 1 but not for β = 1 then H does not contain such a minimizer but Theorem 2.3 gives the same result as for β = 1. If in addition the RKHS consists of C∞functions we can choose p arbitrarily close to 0, and hence we can obtain rates up to n−1 even though H does not contain a minimizer of the l-risk, that means e.g. a Bayes decision function. In view of Theorem 2.3 and the remarks concerning covering numbers it is often only necessary to estimate the approximation exponent. In particular this seems to be true for the most popular kernel, that is the Gaussian RBF kernel kσ(x, x′) = exp(−σ2∥x −x′∥2 2), x, x′ ∈X on (compact) subsets X of Rd with width 1/σ. However, to our best knowledge no non-trivial condition on η or fP = sign ◦(2η −1) which ensures an approximation exponent β > 0 for fixed width has been established and [12] shows that Gaussian kernels poorly approximate smooth functions. Hence plug-in rules based on Gaussian kernels may perform poorly under smoothness assumptions on η. In particular, many types of SVM’s using other loss functions are plug-in rules and therefore, their approximation properties under smoothness assumptions on η may be poor if a Gaussian kernel is used. However, our SVM’s are not plug-in rules since their decision functions approximate the Bayes decision function (see [13]). Intuitively, we therefore only need a condition that measures the cost of approximating the “bump” of the Bayes decision function at the “decision boundary”. We will now establish such a condition for Gaussian RBF kernels with varying widths 1/σn. To this end let X−1 := {x ∈X : η < 1 2} and X1 := {x ∈X : η > 1 2}. 
Recall that these two sets are the classes which have to be learned. Since we are only interested in distributions P having a Tsybakov exponent q > 0, we always assume that X = X_{−1} ∪ X_1 holds P_X-almost surely. Now we define

τ_x := d(x, X_1) if x ∈ X_{−1}; d(x, X_{−1}) if x ∈ X_1; and 0 otherwise. (5)

Here, d(x, A) denotes the distance of x to a set A with respect to the Euclidean norm. Note that, roughly speaking, τ_x measures the distance of x to the "decision boundary". With the help of this function we can define the following geometric condition for distributions:

Definition 2.8 Let X ⊂ R^d be compact and P be a probability measure on X × Y. We say that P has geometric noise exponent α ∈ (0, ∞] if we have

∫_X τ_x^(−αd) |2η(x) − 1| P_X(dx) < ∞. (6)

Furthermore, P has geometric noise exponent ∞ if (6) holds for all α > 0. In the above definition we make neither any kind of smoothness assumption, nor do we assume a condition on P_X in terms of absolute continuity with respect to the Lebesgue measure. Instead, the integral condition (6) describes the concentration of the measure |2η − 1| dP_X near the decision boundary. The less the measure is concentrated in this region, the larger the geometric noise exponent can be chosen. In particular, we have x ↦ τ_x^(−1) ∈ L_∞(|2η − 1| dP_X) if and only if the two classes X_{−1} and X_1 have strictly positive distance! If (6) holds for some 0 < α < ∞ then the two classes may "touch", i.e. the decision boundary ∂X_{−1} ∩ ∂X_1 is nonempty. Using this interpretation we can easily construct distributions which have geometric noise exponent ∞ and touching classes. In general, for these distributions there is no Bayes classifier in the RKHS H_σ of k_σ for any σ > 0.

Example 2.9 We say that η is Hölder about 1/2 with exponent γ > 0 on X ⊂ R^d if there is a constant c_γ > 0 such that for all x ∈ X we have |2η(x) − 1| ≤ c_γ τ_x^γ.
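For finite samples of the two classes, the distance τ_x of (5) reduces to a nearest-neighbor distance to the opposite class. The sketch below is a hypothetical numpy helper, not code from the paper:

```python
import numpy as np

def tau(x, X_neg, X_pos, in_class):
    """tau_x of (5): the Euclidean distance from x to the class it does
    NOT belong to (X_neg ~ X_{-1}, X_pos ~ X_1), and 0 if x is in neither."""
    if in_class == -1:
        other = X_pos
    elif in_class == +1:
        other = X_neg
    else:
        return 0.0
    return np.min(np.linalg.norm(other - x, axis=1))
```

Points deep inside a class get a large τ_x, points near the empirical decision boundary get a small one, mirroring the role of τ_x in the integral condition (6).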
(7)

If η is Hölder about 1/2 with exponent γ > 0, the graph of 2η(x) − 1 lies in a multiple of the envelope defined by τ_x^γ at the top and by −τ_x^γ at the bottom. To be Hölder about 1/2 it is sufficient that η is Hölder continuous, but it is not necessary. A function which is Hölder about 1/2 can be very irregular away from the decision boundary, but it cannot jump across the decision boundary discontinuously. In addition, a Hölder continuous function's exponent must satisfy 0 < γ ≤ 1, whereas being Hölder about 1/2 only requires γ > 0. For distributions with a Tsybakov exponent such that η is Hölder about 1/2 we can bound the geometric noise exponent. Indeed, let P be a distribution which has Tsybakov noise exponent q ≥ 0 and a conditional probability η which is Hölder about 1/2 with exponent γ > 0. Then (see [10]) P has geometric noise exponent α for all α < γ(q+1)/d. For distributions having a non-trivial geometric noise exponent we can now bound the approximation error function for Gaussian RBF kernels:

Theorem 2.10 Let X be the closed unit ball of the Euclidean space R^d, and H_σ be the RKHS of the Gaussian RBF kernel k_σ on X with width 1/σ > 0. Furthermore, let P be a distribution with geometric noise exponent 0 < α < ∞. We write a_σ(·) for the approximation error function with respect to H_σ. Then there is a C > 0 such that for all λ > 0, σ > 0 we have

a_σ(λ) ≤ C(σ^d λ + σ^(−αd)). (8)

In order to let the right hand side of (8) converge to zero it is necessary to assume both λ → 0 and σ → ∞. An easy consideration shows that the fastest rate of convergence can be achieved if σ(λ) := λ^(−1/((α+1)d)). In this case we have a_{σ(λ)}(λ) ≤ 2Cλ^(α/(α+1)). Roughly speaking, this states that the family of spaces H_{σ(λ)} approximates P with exponent α/(α+1). Note that we can obtain approximation rates up to linear order in λ for sufficiently benign distributions. The price for this good approximation property is, however, an increasing complexity of the hypothesis class H_{σ(λ)} for σ → ∞, i.e. λ → 0.
The following theorem estimates this in terms of the complexity exponent:

Theorem 2.11 Let H_σ be the RKHS of the Gaussian RBF kernel k_σ on X. Then for all 0 < p ≤ 2 and δ > 0, there is a c_{p,d,δ} > 0 such that for all ε > 0 and all σ ≥ 1 we have

sup_{T ∈ Z^n} log N(B_{H_σ}, ε, L_2(T_X)) ≤ c_{p,d,δ} σ^((1 − p/2)(1+δ)d) ε^(−p).

Having established both results for the approximation and complexity exponent we can now formulate our main result for SVM's using Gaussian RBF kernels:

Theorem 2.12 Let X be the closed unit ball of the Euclidean space R^d, and P be a distribution on X × Y with Tsybakov noise exponent 0 < q ≤ ∞ and geometric noise exponent 0 < α < ∞. We define

λ_n := n^(−(α+1)/(2α+1)) if α ≤ (q+2)/(2q), and λ_n := n^(−2(α+1)(q+1)/(2α(q+2)+3q+4)) otherwise,

and σ_n := λ_n^(−1/((α+1)d)) in both cases. Then for all ε > 0 there is a C > 0 such that for all x ≥ 1 and all n ≥ 1 the SVM using λ_n and a Gaussian RBF kernel with width 1/σ_n satisfies

Pr*( T ∈ (X × Y)^n : R_P(f_{T,λ_n} + b_{T,λ_n}) ≤ R_P + C x² n^(−α/(2α+1) + ε) ) ≥ 1 − e^(−x)

if α ≤ (q+2)/(2q), and

Pr*( T ∈ (X × Y)^n : R_P(f_{T,λ_n} + b_{T,λ_n}) ≤ R_P + C x² n^(−2α(q+1)/(2α(q+2)+3q+4) + ε) ) ≥ 1 − e^(−x)

otherwise. If α = ∞ the latter holds if σ_n = σ is a constant with σ > 2√d. Most of the remarks made after Theorem 2.3 also apply to the above theorem up to obvious modifications. In particular this is true for Remark 2.4, Remark 2.5, and Remark 2.7.

3 Discussion of a modified support vector machine

Let us now discuss a recent result (see [11]) on rates for the following modification of the original SVM:

f*_{T,λ} := argmin_{f ∈ H} λ∥f∥_H + (1/n) Σ_{i=1}^n l(y_i f(x_i)). (9)

Note that unlike in (3) the norm in the regularization term is not squared in (9). To describe the result of [11] we need the following modification of the approximation error function:

a*(λ) := inf_{f ∈ H} ( λ∥f∥_H + R_{l,P}(f) − R_{l,P} ), λ ≥ 0. (10)

Obviously, a*(·) plays the same role for (9) as a(·) does for (3). Moreover, it is easy to see that for all λ > 0 with ∥f_{P,λ}∥ ≥ 1 we have a*(λ) ≤ a(λ).
Now, a slightly simplified version of the result in [11] reads as follows:

Theorem 3.1 Let H be a RKHS of a continuous kernel on X with complexity exponent 0 < p < 2, and let P be a distribution on X × Y with Tsybakov noise exponent ∞. We define λ_n := n^(−2/(2+p)). Then for all x ≥ 1 there is a C_x > 0 such that for all n ≥ 1 we have

Pr*( T ∈ (X × Y)^n : R_P(f*_{T,λ_n}) ≤ R_P + C_x (a*(λ_n) + n^(−2/(2+p))) ) ≥ 1 − e^(−x).

Besides universal constants, the exact value of C_x is given in [11]. Also note that the original result of [11] used the eigenvalue distribution of the integral operator T_k : L_2(P_X) → L_2(P_X) as a complexity measure. If H has complexity exponent p it can be shown that these eigenvalues decay at least as fast as n^(−2/p). Since we only want to compare Theorem 3.1 with our results we do not state the eigenvalue version of Theorem 3.1. It was also mentioned in [11] that using the techniques therein it is possible to derive rates for the original SVM. In this case a*(λ_n) has to be replaced by a(λ_n) and the stochastic term n^(−2/(2+p)) has to be replaced by "some more involved term" (see [11, p.10]). Since typically a*(·) decreases faster than a(·), the authors conclude that using a regularization term ∥·∥ instead of the original ∥·∥² will "necessarily yield an improved convergence rate" (see [11, p.11]). Let us now show that this conclusion is not justified. To this end let us suppose that H approximates P with exponent 0 < β ≤ 1, i.e. a(λ) ≤ Cλ^β for some C > 0 and all λ > 0. It was shown in [10] that this is equivalent to

inf_{∥f∥ ≤ λ^(−1/2)} R_{l,P}(f) − R_{l,P} ≤ c_1 λ^(β/(1−β)) (11)

for some constant c_1 > 0 and all λ > 0. Furthermore, using the techniques in [10] it is straightforward to show that (11) is equivalent to a*(λ) ≤ c_2 λ^(2β/(1−β)). Therefore, if H approximates P with exponent β, then the rate in Theorem 3.1 becomes n^(−4β/((2+p)(1+β))), which is the rate we established in Theorem 2.3 for the original SVM.
Although the original SVM (3) and the modification (9) learn with the same rate there is a substantial difference in the way the regularization parameter has to be chosen in order to achieve this rate. Indeed, for the original SVM we have to use λn = n− 4 (2+p)(1+β) while for (9) we have to choose λn = n− 2 2+p . In other words, since p is known for typical RKHS’s but β is not, we know the asymptotically optimal choice of λn for (9) while we do not know the corresponding optimal choice for the standard SVM. It is naturally to ask whether a similar observation can be made if we have a Tsybakov noise exponent which is smaller than ∞. The answer to this question is “yes” and “no”. More precisely, using our techniques in [10] one can show that for 0 < q ≤∞the optimal choice of the regularization parameter in (9) is λn = n− 2(q+1) 2q+pq+4 leading to the rate n− 4β(q+1) (2q+pq+4)(1+β) . As for q = ∞this rate coincides with the rate we obtained for the standard SVM. Furthermore, the asymptotically optimal choice of λn is again independent of the approximation exponent β. However, it depends on the (typically unknown) noise exponent q. This leads to the following important questions: Question 1: Is it easier to find an almost optimal choice of λ for (9) than for the standard SVM? And if so, what are the computational requirements of solving (9)? Question 2: Can a similar observation be made for the parametric family of Gaussian RBF kernels used in Theorem 2.12 if P has a non-trivial geometric noise exponent α? References [1] I. Steinwart. Support vector machines are universally consistent. J. Complexity, 18:768–791, 2002. [2] T. Zhang. Statistical behaviour and consistency of classification methods based on convex risk minimization. Ann. Statist., 32:56–134, 2004. [3] I. Steinwart. Consistency of support vector machines and other regularized kernel machines. IEEE Trans. Inform. Theory, to appear, 2005. [4] L. Devroye, L. Gy¨orfi, and G. Lugosi. 
A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996. [5] A.B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Ann. Statist., 32:135–166, 2004. [6] Y. Yang. Minimax nonparametric classification—part I and II. IEEE Trans. Inform. Theory, 45:2271–2292, 1999. [7] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000. [8] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. J. Mach. Learn. Res., 2:67–93, 2001. [9] Q. Wu and D.-X. Zhou. Analysis of support vector machine classification. Tech. Report, City University of Hong Kong, 2003. [10] C. Scovel and I. Steinwart. Fast rates for support vector machines. Ann. Statist., submitted, 2003. http://www.c3.lanl.gov/˜ingo/publications/ ann-03.ps. [11] G. Blanchard, O. Bousquet, and P. Massart. Statistical performance of support vector machines. Ann. Statist., submitted, 2004. [12] S. Smale and D.-X. Zhou. Estimating the approximation error in learning theory. Anal. Appl., 1:17–41, 2003. [13] I. Steinwart. Sparseness of support vector machines. J. Mach. Learn. Res., 4:1071– 1105, 2003.
|
2004
|
173
|
2,589
|
Discriminant Saliency for Visual Recognition from Cluttered Scenes Dashan Gao Nuno Vasconcelos Department of Electrical and Computer Engineering, University of California, San Diego Abstract Saliency mechanisms play an important role when visual recognition must be performed in cluttered scenes. We propose a computational definition of saliency that deviates from existing models by equating saliency to discrimination. In particular, the salient attributes of a given visual class are defined as the features that enable best discrimination between that class and all other classes of recognition interest. It is shown that this definition leads to saliency algorithms of low complexity, that are scalable to large recognition problems, and is compatible with existing models of early biological vision. Experimental results demonstrating success in the context of challenging recognition problems are also presented. 1 Introduction The formulation of recognition as a problem of statistical classification has enabled significant progress in the area, over the last decades. In fact, for certain types of problems (face detection/recognition, vehicle detection, pedestrian detection, etc.) it now appears to be possible to design classifiers that “work reasonably well most of the time”, i.e. classifiers that achieve high recognition rates in the absence of a few factors that stress their robustness (e.g. large geometric transformations, severe variations of lighting, etc.). Recent advances have also shown that real-time recognition is possible on low-end hardware [1]. Given all this progress, it appears that one of the fundamental barriers remaining in the path to a vision of scalable recognition systems, capable of dealing with large numbers of visual classes, is an issue that has not traditionally been considered as problematic: training complexity. 
In this context, an aspect of particular concern is the dependence, of most modern classifiers, on carefully assembled and pre-processed training sets. Typically these training sets are large (in the order of thousands of examples per class) and require a combination of 1) painstaking manual labor of image inspection and segmentation of good examples (e.g. faces) and 2) an iterative process where an initial classifier is applied to a large dataset of unlabeled data, the classification results are manually inspected to detect more good examples (usually examples close to the classification boundary, or where the classifier fails), and these good examples are then manually segmented and added to the training set. Overall, the process is extremely laborious, and good training sets usually take years to establish through the collaborative efforts of various research groups. This is completely opposite to what happens in truly scalable learning systems (namely biological ones) that are able to quickly bootstrap the learning process from a small number of virtually unprocessed examples. For example while humans can bootstrap learning with weak clues and highly cluttered scenes (such as “Mr. X is the person sitting at the end of the room, the one with gray hair”), current faces detectors require training faces to be cropped into (a) (b) (c) (d) Figure 1: (a)(b)(c) Various challenging examples for current saliency detectors. (a) Apple hanging from a tree; (b) a bird in a tree; (c) an egg in a nest. (d) some DCT basis functions. From left to right, top to bottom, detectors of: edges, bars, corners, t-junctions, spots, flow patches, and checkerbords. 20 × 20 pixel arrays, with all the hair precisely cropped out, lighting gradients explicitly removed, and so on. One property of biological vision that plays an important role in this ability to learn from highly cluttered examples, is the existence of saliency mechanisms. 
For example, humans rarely have to exhaustively scan a scene to detect an object of interest. Instead, salient locations simply pop out as a result of the operation of pre-recognition saliency mechanisms. While saliency has been the subject of significant study in computer vision, most formulations do not pose saliency itself as a major goal of recognition. Instead, saliency is usually an auxiliary pre-filtering step whose goal is to reduce computation by eliminating image locations that can be universally classified as non-salient, according to some definition of saliency which is completely divorced from the particular recognition problem at hand. In this work, we propose an alternative definition of saliency, which we denote by discriminant saliency, and which is intrinsically grounded in the recognition problem. This new formulation is based on the intuition that, for recognition, the salient features of a visual class are those that best distinguish it from all other visual classes of recognition interest. We show that 1) this intuition translates naturally into a computational principle for the design of saliency detectors, 2) this principle can be implemented with great computational simplicity, 3) it is possible to derive implementations which scale to recognition problems with large numbers of classes, and 4) the resulting saliency mechanisms are compatible with classical models of biological perception. We present experimental results demonstrating success on image databases containing complex scenes and substantial amounts of clutter.

2 Saliency detection

The extraction of salient points from images has been a subject of research for at least a few decades. Broadly speaking, saliency detectors can be divided into three major classes. The first, and most popular, treats the problem as one of the detection of specific visual attributes.
These are usually edges or corners (also called “interest points”) [2], whose detection has roots in the structure-from-motion literature, but there have also been proposals for other low-level visual attributes such as contours [3]. A major limitation of these saliency detectors is that they do not generalize well. For example, a corner detector will always produce a stronger response in a strongly textured region than in a smooth one, even though textured surfaces are not necessarily more salient than smooth ones. This is illustrated by the image of Figure 1(a). While a corner detector would respond strongly to the highly textured regions of leaves and tree branches, it is not clear that these are more salient than the smooth apple. We would argue for the contrary. Some of these limitations are addressed by more recent, and generic, formulations of saliency. One idea that has recently gained some popularity is to define saliency as image complexity. Various complexity measures have been proposed in this context: Lowe [4] measures complexity by computing the intensity variation in an image using a difference-of-Gaussian function; Sebe [5] measures the absolute value of the coefficients of a wavelet decomposition of the image; and Kadir [6] relies on the entropy of the distribution of local intensities. The main advantage of these data-driven definitions of saliency is a significantly greater flexibility, as they can detect any of the low-level attributes above (corners, contours, smooth edges, etc.) depending on the image under consideration. It is not clear, however, that saliency can always be equated with complexity. For example, Figures 1(b) and (c) show images containing complex regions, consisting of clustered leaves and straw, that are not terribly salient. On the contrary, the much less complex image regions containing the bird or the egg appear to be significantly more salient.
Finally, a third formulation is to start from models of biological vision and derive saliency detection algorithms from these models [7]. This formulation has the appeal of being rooted in the only known fully functioning vision systems, and it has been shown to lead to interesting saliency behavior [7]. However, these solutions have one significant limitation: the lack of a clearly stated optimality criterion for saliency. In the absence of such a criterion it is difficult to evaluate, in an objective sense, the goodness of the proposed algorithms, or to develop a theory (and algorithms) for optimal saliency.

3 Discriminant saliency

The basic intuition for discriminant saliency is somewhat of a “statement of the obvious”: the salient attributes of a given visual concept are the attributes that most distinguish it from all other visual concepts that may be of possible interest. While close to obvious, this definition is a major departure from all existing definitions in the vision literature. First, it makes reference to a “set of visual concepts of possible interest”. While such a set may not be well defined for all vision problems (e.g. tracking or structure-from-motion, where many of the current saliency detectors have roots [2]), it is an intrinsic component of the recognition problem: the set of visual classes to be recognized. It therefore makes saliency contingent upon the existence of a collection of classes and, therefore, impossible to compute from an isolated image. It also means that, for a given object, different visual attributes will be salient in different recognition contexts. For example, while contours and shape will be most salient to distinguish a red apple from a red car, color and texture will be most salient when the same apple is compared to an orange. All these properties appear desirable for recognition. Second, it sets as a goal for saliency that of distinguishing between classes.
This implies that the optimality criterion for the design of salient features is discrimination, and is therefore very different from traditional criteria such as repeatability under image transformations [8]. Robustness in terms of these criteria (which, once again, are well justified for tracking but do not address the essence of the recognition problem) can be learned if needed to achieve discriminant goals [9]. Due to this equivalence between saliency and discrimination, the principle of discriminant saliency can be easily translated into an optimality criterion for the design of saliency algorithms. In particular, it is naturally formulated as an optimal feature selection problem: optimal features for saliency are the most discriminant features for the one-vs-all classification problem that opposes the class of interest to all remaining classes. Or, in other words, the most salient features are the ones that best separate the class of interest from all others. Given the well known equivalence between features and image filters, this can also be seen as a problem of designing optimal filters for discrimination.

3.1 Scalable feature selection

In the context of scalable recognition systems, the implementation of discriminant saliency requires 1) the design of a large number of classifiers (as many as the total number of classes to recognize) at set-up time, and 2) classifier tuning whenever new classes are added to, or deleted from, the problem. It is therefore important to adopt feature selection techniques that are computationally efficient, preferably reusing computation from the design of one classifier to the next. The design of such feature selection methods is a non-trivial problem, which we have been actively pursuing in the context of research in feature selection itself [11].
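As a preview of the criterion this line of research arrived at, the marginal diversity of Eq. 1 below can be sketched in a few lines of NumPy using histogram density estimates. This is a minimal illustrative sketch, not the authors' implementation; the function name, interface, and bin count are assumptions:

```python
import numpy as np

def marginal_diversity(samples_per_class, priors, bins=32):
    """Marginal diversity of one feature (cf. Eq. 1):
    md(X_k) = sum_i P_Y(i) * KL[ P_{X_k|Y}(x|i) || P_{X_k}(x) ],
    estimated from per-class histograms of the feature values."""
    lo = min(s.min() for s in samples_per_class)
    hi = max(s.max() for s in samples_per_class)
    edges = np.linspace(lo, hi, bins + 1)
    eps = 1e-12
    # class-conditional histograms P_{X_k|Y}(x|i), one row per class
    cond = np.array([np.histogram(s, bins=edges)[0]
                     for s in samples_per_class], dtype=float)
    cond = cond / cond.sum(axis=1, keepdims=True) + eps
    # the marginal P_{X_k}(x) is the prior-weighted mixture of the conditionals
    marg = priors @ cond
    kl_per_class = np.sum(cond * np.log(cond / marg), axis=1)
    return float(priors @ kl_per_class)
```

A feature whose class-conditional histograms differ strongly receives a large marginal diversity; since only marginal histograms are needed, the density estimation can be shared across all one-vs-all problems.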
This research has shown that information-theoretic methods, based on maximization of the mutual information between features and class labels, have the appealing property of enabling precise control (through factorizations based on known statistical properties of images) over the trade-off between optimality, in a minimum Bayes error sense, and computational efficiency [11]. Our experience of applying algorithms in this family to the saliency detection problem is that even those strongly biased towards efficiency can consistently select good saliency detection filters. This is illustrated by all the results presented in this paper, where we have adopted the maximization of marginal diversity (MMD) [10] as the guiding principle for feature selection. Given a classification problem with class labels Y, prior class probabilities P_Y(i), and a set of n features X = (X_1, ..., X_n) such that the probability density of X_k given class i is P_{X_k|Y}(x|i), the marginal diversity (MD) of feature X_k is

md(X_k) = <KL[P_{X_k|Y}(x|i) || P_{X_k}(x)]>_Y   (1)

where <f(i)>_Y = Σ_{i=1}^M P_Y(i) f(i), and KL[p||q] = ∫ p(x) log(p(x)/q(x)) dx is the Kullback-Leibler divergence between p and q. Since it only requires marginal density estimates, the MD can be computed with histogram-based density estimates, leading to an extremely efficient algorithm for feature selection [10]. Furthermore, in the one-vs-all classification scenario, the histogram of the “all” class can be obtained by a weighted average of the class-conditional histograms of the image classes that it contains, i.e.

P_{X_k|Y}(x|A) = Σ_{i∈A} P_{X_k|Y}(x|i) P_Y(i)   (2)

where A is the set of image classes that compose the “all” class. This implies that the bulk of the computation, the density estimation step, only has to be performed once for the design of all saliency detectors.

3.2 Biologically plausible models

Ever since Hubel and Wiesel showed that different groups of cells in V1 are tuned for detecting different types of stimuli (e.g. bars, edges, etc.)
it has been known that the earliest stages of biological vision can be modeled as a multiresolution image decomposition followed by some type of non-linearity. Indeed, various “biologically plausible” models of early vision are based on this principle [12]. The equivalence between saliency detection and the design of optimally discriminant filters makes discriminant saliency compatible with most of these models. In fact, as detailed in the experimental section, our experience is that remarkably simple mechanisms, inspired by biological vision, are sufficient to achieve good saliency results. In particular, all the results reported in this paper were achieved with the following two-step procedure, based on the Malik-Perona model of texture perception [13]. First, a saliency map (i.e. a function describing the saliency at each pixel location) is obtained by pooling the responses of the different saliency filters after half-wave rectification:

S(x, y) = Σ_{i=1}^{2n} ω_i R_i²(x, y),   (3)

where S(x, y) is the saliency at location (x, y), and R_i(x, y), i = 1, ..., 2n, are the channels resulting from half-wave rectification of the outputs of the saliency filters F_i(x, y), i = 1, ..., n:

R_{2k−1}(x, y) = max[−(I ∗ F_k)(x, y), 0]    R_{2k}(x, y) = max[(I ∗ F_k)(x, y), 0]   (4)

with I(x, y) the input image and ω_i a weight equal to the corresponding feature's marginal diversity. Second, the saliency map is fed to a peak detection module that consists of a winner-take-all network. The location of largest saliency is first found. Its spatial scale is set to the size of the region of support of the saliency filter with strongest response at that location. All neighbors within a circle whose radius is this scale are then suppressed (set to zero) and the process is iterated. The procedure is illustrated by Figure 2, and produces a list of salient locations, their saliency strengths, and scales.

Figure 2: Schematic of the saliency detection model.
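The pooling of half-wave rectified filter outputs in Eqs. 3–4 can be sketched as follows. This is a minimal illustration, not the authors' code; the filter bank and weights are assumed given (e.g. DCT filters weighted by their marginal diversities):

```python
import numpy as np
from scipy.signal import convolve2d

def saliency_map(image, filters, weights):
    """Pool half-wave rectified filter responses into a saliency map:
    S = sum_i w_i R_i^2 (Eq. 3), where R_{2k-1} and R_{2k} are the
    negative and positive rectified outputs of filter F_k (Eq. 4)."""
    S = np.zeros_like(image, dtype=float)
    for F, w in zip(filters, weights):
        resp = convolve2d(image, F, mode='same')
        r_neg = np.maximum(-resp, 0.0)   # R_{2k-1}
        r_pos = np.maximum(resp, 0.0)    # R_{2k}
        S += w * (r_neg**2 + r_pos**2)
    return S
```

Feeding the resulting map to a winner-take-all loop (find the maximum, suppress a disc around it, repeat) yields the salient locations and scales described above.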
It is important to limit the number of channels that contribute to the saliency map since, for any given class, there are usually many features which are not discriminant but have strong responses at various image locations (e.g. areas of clutter). This is done through a cross-validation step that we discuss in section 4.3. All the experiments presented in the following section were obtained using the coefficients of the discrete cosine transform (DCT) as features. While the precise set of features is likely not crucial for the quality of the saliency results (e.g. other invertible multiresolution decompositions, such as Gabor or other wavelets, would likely work well), the DCT feature set has two appealing properties. First, previous experience has shown that it performs quite well in large-scale recognition problems [14]. Second, as illustrated by Figure 1(d), the DCT basis functions contain detectors for various perceptually relevant low-level image attributes, including edges, bars, corners, t-junctions, spots, etc. This can obviously only make the saliency detection process easier.

4 Results and discussion

We start the experimental evaluation of discriminant saliency with some results from the Brodatz texture database that illustrate various interesting properties of the approach.

4.1 Saliency maps

Brodatz is an interesting database because it stresses aspects of saliency that are quite problematic for most existing saliency detection algorithms: 1) the need to perform saliency judgments in highly textured regions, 2) classes that contain salient regions of diverse shapes, and 3) a great variety of salient attributes, e.g. corners, closed and open contours, regular geometric figures (circles, squares, etc.), texture gradients, crisp and soft edges, etc. The entire collection of textures in the database was divided into a training and a test set, using the set-up of our previous retrieval work [14].
The training database was used to determine the salient features of each class, and saliency maps were then computed on the test images. The process was repeated for all texture classes, in a one-vs-all setting (class of interest against all others) with each class sequentially considered as the “one” class. As illustrated by the examples shown in Figure 3, none of the challenges posed by Brodatz seem very problematic for discriminant saliency. Note, in particular, that the latter does not appear to have any difficulty in 1) ignoring highly textured background areas in favor of a more salient foreground object (two leftmost images), which could itself be another texture, 2) detecting as salient a wide variety of shapes, contours of different crispness and scale, or 3) even assigning strong saliency to texture gradients (rightmost image). This robustness is a consequence of the fact that the saliency features are tuned to discriminate the class of interest from the rest. We next show that it can lead to significantly better saliency detection performance than that achievable with the algorithms currently available in the literature.

Figure 3: Saliency maps (bottom row) obtained on various textures (shown in top row) from Brodatz. Bright pixels flag salient locations. Note: the saliency maps of the second row are best viewed on paper. A gamma-corrected version would be best for viewing on CRT displays and is available at www.svcl.ucsd.edu/publications/nips04-crt.ps

Dataset     | DSD   | SSD  | HSD   | pixel-based | constellation [15]
Faces       | 97.24 | 77.3 | 61.87 | 93.05       | 96.4
Motorbikes  | 96.25 | 81.3 | 74.83 | 87.83       | 92.5
Airplanes   | 93.00 | 78.7 | 80.17 | 90.33       | 90.2

Table 1: SVM classification accuracy (%) based on different detectors.

4.2 Comparison to existing methods

While the results of the previous section provide interesting anecdotal evidence in support of discriminant saliency, objective conclusions can only be drawn by comparison to existing techniques.
Unfortunately, it is not always straightforward to compare saliency detectors objectively by simple inspection of saliency maps, since different people frequently attribute different degrees of saliency to a given image region. In fact, in the absence of a larger objective for saliency, e.g. recognition, it is not even clear that the problem is well defined. To avoid the obvious biases inherent to a subjective evaluation of saliency maps, we tried to design an experiment that could lead to an objective comparison. The goal was to quantify whether the saliency maps produced by the different techniques contain enough information for recognition. The rationale is the following. If, when applied to an image, a saliency detector has an output which is highly correlated with the presence/absence of the class of interest in that image, then it should be possible to classify the image (as belonging/not belonging to the class) by classifying the saliency map itself. We then built the simplest possible saliency map classifier that we could conceive of: the intensity values of the saliency map were histogrammed and fed to a support vector machine (SVM) classifier. We compared the performance of the discriminant saliency detector (DSD) described above with one representative from each of the areas of the literature discussed in section 2: the Harris saliency detector (HSD) [2] and the scale saliency detector (SSD) of [6]. To evaluate performance in a generic recognition scenario, we adopted the Caltech database, using the experimental set-up proposed in [15]. To obtain an idea of what would be acceptable classification results on this database, we used two benchmarks: the performance, on the same classification task, of 1) a classifier of equivalent simplicity but applied to the images themselves, and 2) the constellation-based classifier proposed in [15] (which we believe to be representative of the state-of-the-art for this database).
For the simple classifier, we reduced the luminance component of each image to a vector (by stacking all pixels into a column) and used an SVM to classify the resulting set of points. All parameters were set to assure a fair comparison between the saliency detectors (e.g. a multiscale version of Harris was employed, all detectors combined information from three scales, etc.). Table 1 presents the two benchmarks and the results of classifying the saliency histograms generated by the three detectors. The table supports various interesting conclusions. First, both the HSD and the SSD have very poor performance, indicating that they produce saliency maps that have weak correlation with the presence/absence of the class of interest in the image to classify. Second, the simple pixel-based classifier works surprisingly well on this database, given that there is indeed a substantial amount of clutter in its images (see Figure 4). Its performance is, nevertheless, inferior to that of the constellation classifier. The third, and likely most surprising, observation is that the classification of the DSD histograms clearly outperforms this classifier, achieving the overall best performance. It should be noted that this is somewhat of an unfair comparison for the constellation classifier, since it tries to solve a problem that is more difficult than the one considered in this experiment.

Figure 4: Original images (top row), saliency maps generated by DSD (second row), and a comparison of salient locations detected by: DSD in the third row, SSD in the fourth, and HSD at the bottom. Salient locations are the centers of the white circles, the circle radii representing scale. Note: the saliency maps of the second row are best viewed on paper. A gamma-corrected version would be best for viewing on CRT displays and is available at www.svcl.ucsd.edu/svclwww/publications/nips04-crt.ps
While the question of interest here is “is class x present in the image or not?” this classifier can actually determine the location of the element from the class (e.g. a face) in the image. In any case, these results seem to support the claim that DSD produces saliency maps which contain most of the saliency information required for classification. The issue of translating these saliency maps into a combined segmentation/recognition solution will be addressed in future research. Finally, the superiority of the DSD over the other two saliency detectors considered in this experiment is also clearly supported by the inspection of the resulting salient locations. Some examples are presented in Figure 4. 4.3 Determining the number of salient features In addition to experimental validation of the performance of discriminant saliency, the experiment of the previous section suggests a classification-optimal strategy to determine the number of features that contribute to the saliency maps of a given class of interest. Note that, while the training examples from each class are not carefully segmented (and can contain large areas of clutter), the working assumption is that each image is labeled with respect to the presence or absence in it of the class of interest. Hence, the classification problem of the previous section is perfectly well defined before segmentation (e.g. separation of the pixels containing objects in the class and pixels of background) takes place. It follows that a natural way to determine the optimal number of features is to search for the number that maximizes the classification rate on this problem. 
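The histogram-plus-SVM protocol used in this comparison can be sketched as follows. This is an illustrative sketch only: it uses scikit-learn's SVC as the classifier, and the "saliency maps" here are synthetic stand-ins (bright salient pixels for class-present images, uniform low values for class-absent ones), not real detector outputs:

```python
import numpy as np
from sklearn.svm import SVC

def histogram_features(saliency_maps, bins=16):
    """Reduce each saliency map to a normalized intensity histogram."""
    return np.array([np.histogram(m, bins=bins, range=(0.0, 1.0),
                                  density=True)[0]
                     for m in saliency_maps])

# Synthetic stand-in data (illustrative, not from the paper's experiments)
rng = np.random.default_rng(1)
present = [np.clip(0.2 * rng.random((32, 32)) +
                   (rng.random((32, 32)) > 0.95), 0.0, 1.0)
           for _ in range(40)]
absent = [0.2 * rng.random((32, 32)) for _ in range(40)]
X = histogram_features(present + absent)
y = np.array([1] * 40 + [0] * 40)
clf = SVC(kernel='rbf').fit(X[::2], y[::2])  # train on every other image
accuracy = clf.score(X[1::2], y[1::2])       # test on the rest
```

If the detector's output correlates with the presence of the class, the histogram alone suffices for classification, which is exactly what the DSD column of Table 1 demonstrates.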
This search can be performed by a traditional cross-validation strategy, the strategy that we have adopted for all the results presented in this paper. One interesting question is whether the performance of the DSD is very sensitive to the number of features chosen. Our experience is that, while it is important to limit the number of features, there is usually a range that leads to results very close to optimal. This is shown in Figure 5, where we present the variation of the classification rate on the problem of the previous section for various classes of Caltech. Visual inspection of the saliency detection results obtained with feature sets within this range showed no substantial differences with respect to those obtained with the optimal feature set.

Figure 5: Classification accuracy vs. number of features considered by the saliency detector for (a) faces, (b) motorbikes and (c) airplanes.

References

[1] P. Viola and M. Jones. Robust real-time object detection. 2nd Int. Workshop on Statistical and Computational Theories of Vision: Modeling, Learning, Computing and Sampling, July 2001.
[2] C. Harris and M. Stephens. A combined corner and edge detector. Alvey Vision Conference, 1988.
[3] A. Sha'ashua and S. Ullman. Structural saliency: the detection of globally salient structures using a locally connected network. Proc. International Conference on Computer Vision, 1988.
[4] D. G. Lowe. Object recognition from local scale-invariant features. In Proc. International Conference on Computer Vision, pp. 1150-1157, 1999.
[5] N. Sebe and M. S. Lew. Comparing salient point detectors. Pattern Recognition Letters, vol. 24, no. 1-3, pp. 89-96, Jan. 2003.
[6] T. Kadir and M. Brady. Scale, Saliency and Image Description.
International Journal of Computer Vision, Vol. 45, No. 2, pp. 83-105, November 2001.
[7] L. Itti, C. Koch and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Analysis and Machine Intelligence, 20(11), Nov. 1998.
[8] C. Schmid, R. Mohr and C. Bauckhage. Comparing and Evaluating Interest Points. Proc. International Conference on Computer Vision, pp. 230-235, 1998.
[9] D. Claus and A. Fitzgibbon. Reliable Fiducial Detection in Natural Scenes. Proc. 8th European Conference on Computer Vision, Prague, Czech Republic, 2004.
[10] N. Vasconcelos. Feature Selection by Maximum Marginal Diversity. In Neural Information Processing Systems, Vancouver, Canada, 2002.
[11] N. Vasconcelos. Scalable Discriminant Feature Selection for Image Retrieval and Recognition. To appear in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2004.
[12] D. Sagi. “The Psychophysics of Texture Segmentation,” in Early Vision and Beyond, T. Papathomas, Ed., chapter 7. MIT Press, 1996.
[13] J. Malik and P. Perona. Preattentive texture discrimination with early vision mechanisms. J. Opt. Soc. Am. A, 7(5), pp. 923-932, May 1990.
[14] N. Vasconcelos and G. Carneiro. What is the Role of Independence for Visual Recognition? In Proc. European Conference on Computer Vision, Copenhagen, Denmark, 2002.
[15] R. Fergus, P. Perona and A. Zisserman. Object Class Recognition by Unsupervised Scale-Invariant Learning. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2003.
Bayesian Regularization and Nonnegative Deconvolution for Time Delay Estimation Yuanqing Lin, Daniel D. Lee GRASP Laboratory, Department of Electrical and System Engineering, University of Pennsylvania, Philadelphia, PA 19104 linyuanq, ddlee@seas.upenn.edu Abstract Bayesian Regularization and Nonnegative Deconvolution (BRAND) is proposed for estimating time delays of acoustic signals in reverberant environments. Sparsity of the nonnegative filter coefficients is enforced using an L1-norm regularization. A probabilistic generative model is used to simultaneously estimate the regularization parameters and filter coefficients from the signal data. Iterative update rules are derived under a Bayesian framework using the Expectation-Maximization procedure. The resulting time delay estimation algorithm is demonstrated on noisy acoustic data. 1 Introduction Estimating the time difference of arrival is crucial for binaural acoustic sound source localization [1]. A typical scenario is depicted in Fig. 1, where the azimuthal angle φ to the sound source is determined by the difference in direct propagation times of the sound to the two microphones. The standard signal processing algorithm for determining the time delay between two signals s(t) and x(t) relies upon computing the cross-correlation function [2]:

C(∆t) = ∫ dt x(t) s(t − ∆t)

and determining the time delay ∆t that maximizes the cross-correlation. In the presence of uncorrelated white noise, this procedure is equivalent to the optimal matched filter for detection of the time-delayed signal. However, a typical room environment is reverberant and the measured signal is contaminated with echoes from multiple paths, as shown in Fig. 1. In this case, the cross-correlation and related algorithms may not be optimal for estimating the time delays.
An alternative approach would be to estimate the multiple time delays as a linear deconvolution problem:

min_α ∥x(t) − Σ_i α_i s(t − ∆t_i)∥²   (1)

Unfortunately, this deconvolution can be ill-conditioned, resulting in very noisy solutions for the coefficients α. Recently, we proposed incorporating nonnegativity constraints α ≥ 0 in the deconvolution to overcome the ill-conditioned linear solutions [3]. The use of these constraints is justified by acoustic models that describe the theoretical room impulse response with nonnegative filter coefficients [4]. The resulting optimization problem can be written as the nonnegative quadratic programming problem:

min_{α≥0} ∥x − Sα∥²   (2)

where x = [x(t_1) x(t_2) ... x(t_N)]ᵀ is an N × 1 data vector, S = [s(t − ∆t_1) s(t − ∆t_2) ... s(t − ∆t_M)] is an N × M matrix, and α is an M × 1 vector of nonnegative coefficients.

Figure 1: The typical scenario of a reverberant signal. x2(t) comes from the direct path (∆t2) and echo paths (∆tE).

Figure 2: Time delay estimation of a speech signal with a) cross-correlation, b) phase alignment transform, c) linear deconvolution, d) nonnegative deconvolution. The observed signal x(t) = s(t − Ts) + 0.5 s(t − 8.75 Ts) contains an additional time-delayed echo. Ts is the sampling interval.

Figure 2 compares the performance of cross-correlation, the phase alignment transform (a generalized cross-correlation algorithm), linear deconvolution, and nonnegative deconvolution for estimating the time delays in a clean speech signal containing an echo.
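The nonnegative deconvolution of Eq. 2 can be sketched with SciPy's `nnls` solver. The delay matrix built from circularly shifted copies of a random source, and the synthetic direct-path-plus-echo observation, are illustrative assumptions (cf. the example of Figure 2):

```python
import numpy as np
from scipy.optimize import nnls

def delay_matrix(s, n_delays):
    """Columns are integer-delayed (circularly shifted) copies of s."""
    return np.column_stack([np.roll(s, d) for d in range(n_delays)])

rng = np.random.default_rng(0)
s = rng.standard_normal(256)        # stand-in source signal
S = delay_matrix(s, 16)
# synthetic observation: direct path at delay 3 plus a half-strength echo at 9
x = S[:, 3] + 0.5 * S[:, 9]
# Eq. (2): min ||x - S alpha||^2 subject to alpha >= 0
alpha, residual = nnls(S, x)
```

The nonnegative solution concentrates its weight at the true delays (1.0 at delay 3, 0.5 at delay 9), whereas unconstrained least squares on noisy data tends to spread energy across all delay coefficients.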
From the structure of the estimated coefficients, it is clear that nonnegative deconvolution can successfully discover the structure of the time delays present in the signal. However, in the presence of large background noise, it may be necessary to regularize the nonnegative quadratic optimization to prevent overfitting. In this case, we propose using an L1-norm regularization to favor sparse solutions [5]:

min_{α≥0} ∥x − Sα∥² + λ̂ Σ_i α_i   (3)

In this formula, the parameter λ̂ (λ̂ ≥ 0) describes the trade-off between fitting the observed data and enforcing sparse solutions. The proper choice of this parameter may be crucial in obtaining the optimal time delay estimates. In the rest of this manuscript, we introduce a proper generative model for these regularization parameters and filter coefficients within a probabilistic Bayesian framework. We show how these parameters can be efficiently determined using appropriate iterative estimates. We conclude by demonstrating and discussing the performance of our algorithm on noisy acoustic signals in reverberant environments.

2 Bayesian regularization

Instead of arbitrarily setting values for the regularization parameters, we show how a Bayesian framework can be used to automatically estimate the correct values from the data. Bayesian regularization has previously been successfully applied to neural network learning [6], model selection, and the relevance vector machine (RVM) [7]. In these works, the fitting coefficients are assumed to have Gaussian priors, which leads to an L2-norm regularization. In our model, we use an L1-norm sparsity regularization, and the Bayesian framework is used to optimally determine the appropriate regularization parameters. Our probabilistic model assumes the observed data signal is generated by convolving the source signal with a nonnegative filter describing the room impulse response.
This signal is then contaminated by additive Gaussian white noise with zero mean and variance σ²:

P(x|S, α, σ²) = (2πσ²)^(−N/2) exp( −(1/2σ²) ∥x − Sα∥² ).   (4)

To enforce sparseness in the filter coefficients α, an exponential prior distribution is used. This prior has support only in the nonnegative orthant, and the sharpness of the distribution is given by the regularization parameter λ:

P(α|λ) = λ^M exp( −λ Σ_{i=1}^M α_i ),  α ≥ 0.   (5)

In order to infer the optimal settings of the regularization parameters σ² and λ, Bayes' rule is used to maximize the posterior distribution:

P(λ, σ²|x, S) = P(x|λ, σ², S) P(λ, σ²) / P(x|S).   (6)

Assuming that P(λ, σ²) is relatively flat [8], estimating σ² and λ is then equivalent to maximizing the likelihood:

P(x|λ, σ², S) = λ^M (2πσ²)^(−N/2) ∫_{α≥0} dα exp[−F(α)]   (7)

where

F(α) = (1/2σ²) (x − Sα)ᵀ(x − Sα) + λ eᵀα   (8)

and e = [1 1 ... 1]ᵀ. Unfortunately, the integral in Eq. 7 cannot be directly maximized. Previous approaches to Bayesian regularization have used iterative updates heuristically derived from self-consistent fixed-point equations. In our model, the following iterative update rules for λ and σ² can be derived using Expectation-Maximization:

1/λ ← (1/M) ∫_{α≥0} dα eᵀα Q(α)   (9)

σ² ← (1/N) ∫_{α≥0} dα (x − Sα)ᵀ(x − Sα) Q(α)   (10)

where the expectations are taken over the distribution

Q(α) = exp[−F(α)] / Z_α,   (11)

with normalization Z_α = ∫_{α≥0} dα exp[−F(α)]. These updates have guaranteed convergence properties and can be intuitively understood as iteratively re-estimating λ and σ² based upon appropriate expectations over the current estimate of Q(α).

2.1 Estimation of α^ML

The integrals in Eqs. 9–10 are dominated by α ≈ α^ML, where the most likely α^ML is given by:

α^ML = arg min_{α≥0} (1/2σ²) (x − Sα)ᵀ(x − Sα) + λ eᵀα.   (12)

This optimization is equivalent to the nonnegative quadratic programming problem in Eq. 3 with λ̂ = λσ². To efficiently compute α^ML, we have recently developed two distinct methods for optimizing Eq. 12.
The first method is based upon a multiplicative update rule for nonnegative quadratic programming [9]. We first write the problem in the following form:

min_{α≥0} ½ αᵀAα + bᵀα,   (13)

where A = (1/σ²) SᵀS and b = λe − (1/σ²) Sᵀx. First, we decompose the matrix A = A⁺ − A⁻ into its positive and negative components such that:

A⁺_ij = A_ij if A_ij > 0, else 0;    A⁻_ij = −A_ij if A_ij < 0, else 0.   (14)

Then the following is an auxiliary function that upper bounds Eq. 13 [9]:

G(α, α̃) = bᵀα + ½ Σ_i [(A⁺α̃)_i / α̃_i] α_i² − ½ Σ_{i,j} A⁻_ij α̃_i α̃_j (1 + ln[α_i α_j / (α̃_i α̃_j)]).   (15)

Minimizing Eq. 15 yields the following iterative multiplicative rule with guaranteed convergence to α^ML:

α_i ← α_i [−b_i + √(b_i² + 4 (A⁺α)_i (A⁻α)_i)] / [2 (A⁺α)_i].   (16)

The iterative formula in Eq. 16 is used to efficiently compute a reasonable estimate of α^ML from an arbitrary initialization. However, its convergence is similar to that of other interior point methods in that small components of α^ML will continually decrease but never equal zero. In order to truly sparsify the solution, we employ an alternative method based upon the simplex algorithm for linear programming. This second method is based upon finding a solution α^ML that satisfies the Karush-Kuhn-Tucker (KKT) conditions for Eq. 13:

Aα + b = β,  α ≥ 0,  β ≥ 0,  α_i β_i = 0,  i = 1, 2, ..., M.   (17)

By introducing additional artificial variables a, the KKT conditions can be transformed into the linear optimization min Σ_i a_i subject to the constraints:

a ≥ 0   (18)
α ≥ 0   (19)
β ≥ 0   (20)
Aα − β + sign(−b) a = −b   (21)
α_i β_i = 0,  i = 1, 2, ..., M   (22)

The only nonlinear constraint is the product α_i β_i = 0. However, this can be effectively implemented in the simplex procedure by modifying the selection of the pivot element to ensure that α_i and β_i are never both in the set of basic variables. With this simple modification of the simplex algorithm, the optimal α^ML can be efficiently computed.
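The multiplicative update of Eq. 16 is straightforward to implement. A minimal sketch (illustrative function name; a fixed iteration count stands in for a proper convergence test):

```python
import numpy as np

def nqp_multiplicative(A, b, n_iter=200, eps=1e-12):
    """Multiplicative update (Eq. 16) for min_{alpha>=0} 0.5 a'Aa + b'a,
    with A split into positive and negative parts A = A+ - A- (Eq. 14)."""
    Ap = np.where(A > 0, A, 0.0)   # A+
    Am = np.where(A < 0, -A, 0.0)  # A-
    alpha = np.ones(len(b))        # arbitrary positive initialization
    for _ in range(n_iter):
        num = -b + np.sqrt(b**2 + 4.0 * (Ap @ alpha) * (Am @ alpha))
        alpha = alpha * num / (2.0 * (Ap @ alpha) + eps)
    return alpha
```

As noted above, the update drives inactive components toward zero without ever reaching it exactly, which is why the simplex-based method is used when truly sparse solutions are required.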
2.2 Approximation of Q(α)

Once the most likely α^ML has been determined, the simplest approach for estimating the new λ and σ² in Eqs. 9–10 is to substitute Q(α) ≈ δ(α − α^ML) in the integrals. Unfortunately, this simple approximation causes λ and σ² to diverge from bad initial estimates. To overcome this difficulty, we use a slightly more sophisticated estimate of the expectations that properly accounts for variability in the distribution Q(α). We first note that the solution α^ML of the nonnegative quadratic optimization in Eq. 12 naturally partitions the elements of the vector α into two distinct subsets α_I and α_J, consisting of components i ∈ I such that (α^ML)_i = 0, and components j ∈ J such that (α^ML)_j > 0, respectively. It is then useful to approximate the distribution Q(α) in the factored form:

Q(α) ≈ Q_I(α_I) Q_J(α_J).  (23)

Consider the components α_J near the maximum likelihood solution α^ML. Among these components, none of the nonnegativity constraints are active, so it is reasonable to approximate the distribution Q_J(α_J) by the unconstrained Gaussian:

Q_J(α_J) ∝ exp[ −F(α_J | α_I = 0) ].  (24)

This Gaussian distribution has mean α^ML_J and inverse covariance given by the submatrix A_JJ of A = (1/σ²) SᵀS. For the other components α_I, it is important to respect the nonnegativity constraints, since α^ML_I = 0 lies on the boundary of the distribution. We represent Q_I(α_I) by a second-order Taylor expansion:

Q_I(α_I) ∝ exp{ −[ (∂F/∂α)|_{α^ML} ]ᵀ_I α_I − (1/2) α_Iᵀ A_II α_I }
         = exp[ −(Aα^ML + b)ᵀ_I α_I − (1/2) α_Iᵀ A_II α_I ],  α_I ≥ 0.  (25)

Q_I(α_I) is then approximated by a factorial exponential distribution Q̂_I(α_I), so that the integrals in Eqs. 9–10 can be easily evaluated:

Q̂_I(α_I) = Π_{i∈I} (1/μ_i) exp(−α_i/μ_i),  α_I ≥ 0,  (26)

which has support only for nonnegative α_I ≥ 0. The mean-field parameters μ are optimally obtained by minimizing the KL-divergence:

min_{μ≥0} ∫_{α_I≥0} dα_I Q̂_I(α_I) ln[ Q̂_I(α_I) / Q_I(α_I) ].  (27)

This integral can easily be computed in terms of the parameters μ and yields the minimization:

min_{μ≥0} −Σ_{i∈I} ln μ_i + b̂_Iᵀ μ + (1/2) μᵀ Â μ,  (28)

where b̂_I = (Aα^ML + b)_I and Â = A_II + diag(A_II). To solve this minimization problem, we use an auxiliary function for Eq. 28 similar to the auxiliary function for nonnegative quadratic programming:

G(μ, μ̃) = −Σ_{i∈I} ln μ_i + b̂_Iᵀ μ + (1/2) Σ_{i∈I} (Â⁺μ̃)_i μ_i² / μ̃_i − (1/2) Σ_{i,j∈I} Â⁻_ij μ̃_i μ̃_j (1 + ln(μ_i μ_j / μ̃_i μ̃_j)),  (29)

where Â = Â⁺ − Â⁻ is the decomposition of Â into its positive and negative components. Minimization of this auxiliary function yields the following multiplicative update rule for μ_i:

μ_i ← μ_i [ −b̂_i + √( b̂_i² + 4 (Â⁺μ)_i [ (Â⁻μ)_i + 1/μ_i ] ) ] / ( 2 (Â⁺μ)_i ).  (30)

These iterations are guaranteed to converge to the optimal mean-field parameters for the distribution Q_I(α_I). Given the factorized approximation Q̂_I(α_I) Q_J(α_J), the expectations in Eqs. 9–10 can be calculated analytically. The mean value of α under this distribution is given by:

ᾱ_i = α^ML_i if i ∈ J;  μ_i if i ∈ I,  (31)

and its covariance C is:

C_ij = (A_JJ⁻¹)_ij if i, j ∈ J;  μ_i² δ_ij otherwise.  (32)

The update rules for λ and σ² are then given by:

λ ← M / Σ_i ᾱ_i  (33)

σ² ← (1/N) [ (x − Sᾱ)ᵀ(x − Sᾱ) + Tr(SᵀS C) ]  (34)

To summarize, the complete algorithm consists of the following steps:

1. Initialize λ and σ².
2. Determine α^ML by solving the nonnegative quadratic program in Eq. 12.
3. Approximate the distribution Q(α) ≈ Q̂_I(α_I) Q_J(α_J) by solving the mean-field equations for μ in Q̂_I.
4. Calculate the mean ᾱ and covariance C of this distribution.
5. Re-estimate the regularization parameters λ and σ² using Eqs. 33–34.
6. Return to Step 2 until convergence.
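Step 5 above is a pair of closed-form updates. The following sketch implements Eqs. 33–34, assuming ᾱ and C have already been computed as in Eqs. 31–32 (the function name is ours):

```python
import numpy as np

def update_hyperparams(x, S, alpha_bar, C):
    """Re-estimate lam and sigma2 from the current posterior
    mean alpha_bar and covariance C (Eqs. 33-34)."""
    N, M = S.shape
    lam = M / np.sum(alpha_bar)                      # Eq. 33
    resid = x - S @ alpha_bar
    sigma2 = (resid @ resid + np.trace(S.T @ S @ C)) / N  # Eq. 34
    return lam, sigma2
```

Note that the Tr(SᵀS C) term is what distinguishes this from the naive delta-function approximation: it keeps σ² from collapsing when the residual happens to be small.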
Figure 3: Iterative estimation of λ (plotted as M/λ, indicating the reverberation level) and σ² when x(t) is contaminated by background white noise at −5 dB, −20 dB, and −40 dB levels. The horizontal dotted lines indicate the true levels.

3 Results

We illustrate the performance of our algorithm in estimating the regularization parameters as well as the nonnegative filter coefficients of a speech source signal s(t). The observed signal x(t) is simulated as a time-delayed version of the source signal mixed with an echo, along with additive Gaussian white noise η(t):

x(t) = s(t − T_s) + 0.5 s(t − 16.5 T_s) + η(t).  (35)

We compare the results of the algorithm as the noise level is changed. Fig. 3 shows the convergence of the estimates for λ and σ² as the noise level is varied between −5 dB and −40 dB. Both parameters converge rapidly even from bad initial estimates. The resulting value of the σ² parameter is very close to the true noise level. Additionally, the estimated λ parameter is inversely related to the reverberation level of the environment, given by the sum of the true filter coefficients. Fig. 4 demonstrates the importance of correctly determining the regularization parameters when estimating the time delay structure in the presence of noise. Using the Bayesian regularization procedure, the resulting estimate of α^ML correctly models the direct-path time delay as well as the secondary echo. However, if the regularization parameters are manually set incorrectly to over-sparsify the solution, the resulting estimates of the time delays may be quite inaccurate.
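A discretized stand-in for the simulated observation in Eq. 35 can be generated as follows. This is a sketch with our own naming: delays are given in integer samples here, whereas Eq. 35 uses a fractional 16.5 T_s delay that would require interpolation:

```python
import numpy as np

rng = np.random.default_rng(0)

def delayed_mixture(s, delays, gains, noise_std):
    """x[n] = sum_k gains[k] * s[n - delays[k]] + white noise;
    integer-sample delays are assumed for simplicity."""
    x = np.zeros_like(s, dtype=float)
    for d, g in zip(delays, gains):
        x[d:] += g * s[:len(s) - d]
    return x + noise_std * rng.normal(size=len(s))
```

Stacking shifted copies of s as the columns of S then recovers exactly the generative model x = Sα + noise of Eq. 4, with the true α having two nonzero entries.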
4 Discussion

In summary, we propose a Bayesian framework to automatically regularize nonnegative deconvolutions for estimating time delays in acoustic signals. We present two methods for efficiently solving the resulting nonnegative quadratic programming problem, and derive an iterative algorithm from Expectation-Maximization to estimate the regularization parameters. We show how these iterative updates can simultaneously estimate the time-delay structure in the signal, the background noise level, and the reverberation level of the room. Our results indicate that the algorithm quickly converges to an optimal solution, even from bad initial estimates. Preliminary tests with an acoustic robotic platform indicate that these algorithms can successfully be implemented in a real-time system.

Figure 4: Estimated time delay structure from α^ML with different regularizations: a) Bayesian regularization (λ = 50, σ² = 0.12), b) manually set regularization (λσ² = 200). Dotted lines indicate the true positions of the time delays.

We are currently working to extend the algorithm to the situation where the source signal must also be estimated. In this case, priors for the source signal, similar to those used for blind source separation, are used to regularize the source estimates. We are investigating algorithms that can simultaneously estimate the hyperparameters of these priors, in addition to the other parameters, within a consistent Bayesian framework.

References

[1] E. Ben-Reuven and Y. Singer, "Discriminative binaural sound localization," in Advances in Neural Information Processing Systems, S. Becker, S. Thrun, and K. Obermayer, Eds., vol. 15. The MIT Press, 2002.
[2] C. H. Knapp and G. C. Carter, "The generalized correlation method for estimation of time delay," IEEE Transactions on ASSP, vol. 24, no. 4, pp. 320–327, 1976.
[3] Y. Lin, D. D. Lee, and L. K. Saul, "Nonnegative deconvolution for time of arrival estimation," in ICASSP, 2004.
[4] J. B. Allen and D. A. Berkley, "Image method for efficiently simulating small-room acoustics," J. Acoust. Soc. Am., vol. 65, pp. 943–950, 1979.
[5] B. Olshausen and D. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, pp. 607–609, 1996.
[6] D. Foresee and M. Hagan, "Gauss-Newton approximation to Bayesian regularization," in Proceedings of the 1997 International Joint Conference on Neural Networks, 1997, pp. 1930–1935.
[7] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, pp. 211–244, 2001.
[8] D. MacKay, "Bayesian interpolation," Neural Computation, vol. 4, pp. 415–447, 1992.
[9] F. Sha, L. K. Saul, and D. D. Lee, "Multiplicative updates for nonnegative quadratic programming in support vector machines," in Advances in Neural Information Processing Systems, S. Becker, S. Thrun, and K. Obermayer, Eds., vol. 15. The MIT Press, 2002.
Algebraic Set Kernels with Application to Inference Over Local Image Representations Amnon Shashua and Tamir Hazan ∗ Abstract This paper presents a general family of algebraic positive definite similarity functions over spaces of matrices with varying column rank. The columns can represent local regions in an image (whereby images have varying number of local parts), images of an image sequence, motion trajectories in a multibody motion, and so forth. The family of set kernels we derive is based on a group invariant tensor product lifting with parameters that can be naturally tuned to provide a cook-book of sorts covering the possible ”wish lists” from similarity measures over sets of varying cardinality. We highlight the strengths of our approach by demonstrating the set kernels for visual recognition of pedestrians using local parts representations. 1 Introduction In the area of learning from observations there are two main paths that are often mutually exclusive: (i) the design of learning algorithms, and (ii) the design of data representations. The algorithm designers take pride in the fact that their algorithm can generalize well given straightforward data representations (most notable example is SVM [11]), whereas those who work on data representations demonstrate often remarkable results with sophisticated data representations using only straightforward learning algorithms (e.g. [5, 10, 6]). This dichotomy is probably most emphasized in the area of computer vision, where image understanding from observations involve data instances of images or image sequences containing huge amounts of data. A straightforward representation treating all the measurements as a single vector, such as the raw pixel data, or a transformed raw-pixel data, places unreasonable demands on the learning algorithm. 
The "holistic" representations also suffer from sensitivity to occlusions, invariance to local and global transformations, non-rigidity of local parts of the object, and so forth. Practitioners in the area of data representations have long noticed that a collection of local representations (part-based representations) can be most effective in ameliorating changes of appearance [5, 10, 6]. The local data representations vary in their sophistication, but share the same principle: an image corresponds to a collection of points, each in a relatively small-dimensional space, instead of a single point in the high-dimensional space induced by holistic representations. In general, the number of points (local parts) per image may vary, and the dimension of each point may vary as well. The local representations tend to be robust against occlusions and local and global transformations, and preserve the original resolution of the image (the higher the resolution, the more parts are generated per image). The key to unifying local and holistic representations for inference engines is to design positive definite similarity functions (a.k.a. kernels) over sets (of vectors) of varying cardinalities. A Support Vector Machine (SVM) [11] can then handle sets of vectors as a single instance via application of those "set kernels". A set kernel would also be useful to other types of inference engines, such as kernel versions of PCA, LDA, CCA, ridge regression, and any algorithm which can be mapped onto inner products between pairs of data instances (see [8] for details on kernel methods). Formally, we consider an instance to be represented by a collection of vectors which, for the sake of convenience, form the columns of a matrix.

∗School of Engineering and Computer Science, Hebrew University of Jerusalem, Jerusalem 91904, Israel
We would like to find an algebraic family of similarity functions sim(A, B) over matrices A, B which satisfy the following requirements: (i) sim(A, B) is an inner product, i.e., sim(A, B) = φ(A)ᵀφ(B) for some mapping φ(·) from matrices to vectors; (ii) sim(A, B) is built over local kernel functions k(a_i, b_j) over columns a_i and b_j of A, B respectively; (iii) the column cardinality (rank of the column space) of A and B need not be the same (the number of local parts may differ from image to image); and (iv) the parameters of sim(A, B) should induce the properties of invariance to order (alignment) of parts, part occlusions, and degree of interaction between local parts. In a nutshell, our work provides a cook-book of sorts which fundamentally covers the possible algebraic kernels over collections of local representations built on top of local kernels, by combining (linearly and non-linearly) local kernels to form a family of global kernels over local representations. The design of a kernel over sets of vectors has recently been attracting much attention in the computer vision and machine learning literature. A possible approach is to fit a distribution to the set of vectors and define the kernel as a distribution matching measure [9, 12, 4]. This has the advantage that the number of local parts can vary, but at the expense of fitting a distribution to the variation over parts. The variation can be quite complex at times, unlikely to fit into a known family of distributions in many situations of interest, and in practice the sample size (the number of columns of A) is not sufficiently large to reliably fit a distribution. The alternative, which is the approach taken in this paper, is to create a kernel over sets of vectors in a direct manner.
When the column cardinality is equal, it is possible to model the similarity measure as a function over the principal angles between the two column spaces ([14] and references therein), while for varying column cardinality only heuristic similarity measures (which are not positive definite) have so far been introduced [13]. It is important to note that although we chose SVM over local representations as the application to demonstrate the use of set kernels, the need to adequately work with instances made out of sets of various cardinalities spans many other application domains. For example, an image sequence may be represented by a set (ordered or unordered) of vectors, where each vector stands for an image; the pixels in an image can be represented as tuples consisting of position, intensity and other attributes; motion trajectories of multiple moving bodies can be represented as a collection of vectors; and so on. Therefore, the problem addressed in this paper is fundamental both theoretically and from a practical perspective.

2 The General Family of Inner-Products over Matrices

We wish to derive the general family of positive definite similarity measures sim(A, B) over matrices A, B which have the same number of rows but possibly different column rank (in particular, different numbers of columns). Let A be of dimension n × k and B of dimension n × q, where n is fixed and k, q can vary at will over the application of sim(·, ·) to pairs of matrices. Let m = max{n, k, q} be the upper bound over all values of k, q encountered in the data. Let a_i, b_j be the column vectors of matrices A, B and let k(a_i, b_j) be the local kernel function. For example, in the context where the column vectors represent local parts of an image, the matching function k(·, ·) between pairs of local parts provides the building blocks of the overall similarity function.
The local kernel is some positive definite function k(x, y) = φ(x)ᵀφ(y), the inner product between the "feature"-mapped vectors x, y for some feature map φ(·). For example, if φ(·) is the polynomial map of degree up to d, then k(x, y) = (1 + xᵀy)^d. The local kernels can be combined in a linear or non-linear manner. When the combination is linear, the similarity becomes the analogue of the inner product between vectors extended to matrices. We will refer to the linear family as sim(A, B) = <A, B>, and that will be the focus of this section. In the next section we derive the general (algebraic) non-linear family, which is based on "lifting" the input matrices A, B onto higher dimensional spaces and feeding the result into the <·,·> machinery developed in this section, i.e., sim(A, B) = <ψ(A), ψ(B)>. We start by embedding A, B into m × m matrices by zero padding, as follows. Let e_i denote the i'th standard basis vector (0, …, 0, 1, 0, …, 0) of R^m. The embedding is represented by linear combinations of tensor products:

A → Σ_{i=1}^n Σ_{j=1}^k a_ij e_i ⊗ e_j,   B → Σ_{l=1}^n Σ_{t=1}^q b_lt e_l ⊗ e_t.

Note that A, B are the upper-left blocks of the zero-padded matrices. Let S be a positive semidefinite m² × m² matrix represented by S = Σ_{r=1}^p G_r ⊗ F_r, where G_r, F_r are m × m matrices¹. Let F̂_r be the q × k upper-left sub-matrix of F_rᵀ, and let Ĝ_r be the n × n upper-left sub-matrix of G_r. We will use the following three identities:

Gx₁ ⊗ Fx₂ = (G ⊗ F)(x₁ ⊗ x₂),   (G ⊗ F)(G′ ⊗ F′) = GG′ ⊗ FF′,   <x₁ ⊗ x₂, y₁ ⊗ y₂> = (x₁ᵀy₁)(x₂ᵀy₂).

The inner product <A, B> over all p.s.d. matrices S has the form:

<A, B> = < Σ_{i,j} a_ij e_i ⊗ e_j, (Σ_r G_r ⊗ F_r) Σ_{l,t} b_lt e_l ⊗ e_t >
       = Σ_r Σ_{i,j,l,t} a_ij b_lt < e_i ⊗ e_j, G_r e_l ⊗ F_r e_t >
       = Σ_r Σ_{i,j,l,t} a_ij b_lt (e_iᵀ G_r e_l)(e_jᵀ F_r e_t)
       = Σ_r Σ_{i,j,l,t} a_ij b_lt (G_r)_il (F_r)_jt
       = Σ_r Σ_{j,t} (Aᵀ Ĝ_r B)_jt (F_r)_jt
       = Σ_r trace( (Aᵀ Ĝ_r B) F̂_r ).
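With Ĝ_r = I and a single weight matrix F, the final trace expression reduces to a weighted sum of local kernel evaluations, Σ_ij k(a_i, b_j) F_ij. A minimal sketch of that reduced form (function name and argument layout are ours):

```python
import numpy as np

def linear_set_kernel(A, B, kernel, F):
    """<A,B> = sum_{ij} k(a_i, b_j) * F[i,j] with G = I.
    A: n x k, B: n x q (columns = local parts); F: weight matrix
    whose upper-left k x q block is used."""
    k_, q_ = A.shape[1], B.shape[1]
    K = np.array([[kernel(A[:, i], B[:, j]) for j in range(q_)]
                  for i in range(k_)])
    return float(np.sum(K * F[:k_, :q_]))
```

Because only the upper-left k × q block of F is read, the same fixed F serves matrices with different numbers of columns, which is exactly the varying-cardinality property the construction is after.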
We have represented the inner product <A, B> using a choice of m × m matrices G_r, F_r instead of a single m² × m² p.s.d. matrix S. The matrices G_r, F_r must be selected such that Σ_{r=1}^p G_r ⊗ F_r is positive semidefinite. The problem of deciding on the necessary conditions on F_r and G_r such that the sum over tensor products is p.s.d. is difficult: even deciding whether a given S has a separable decomposition is known to be NP-hard [3]. The sufficient conditions are easy: choosing G_r, F_r to be positive semidefinite makes Σ_{r=1}^p G_r ⊗ F_r positive semidefinite as well. In this context (of separable S) we need one more constraint in order to work with non-linear local kernels k(x, y) = φ(x)ᵀφ(y): the matrices Ĝ_r = M̃_rᵀ M̃_r must "distribute with the kernel", namely there exist M_r such that k(M_r x, M_r y) = φ(M_r x)ᵀφ(M_r y) = φ(x)ᵀ M̃_rᵀ M̃_r φ(y) = φ(x)ᵀ Ĝ_r φ(y). To summarize the results so far, the most general, but separable, analogue of the inner product over vectors to the inner product over matrices of varying column cardinality has the form:

<A, B> = Σ_r trace(H_r F̂_r)  (1)

where the entries of H_r consist of k(M_r a_i, M_r b_j) over the columns of A, B after possibly undergoing global coordinate changes by M_r (the role of Ĝ_r), and F̂_r are the q × k upper-left sub-matrices of positive definite m × m matrices F_rᵀ. The role of the matrices Ĝ_r is to perform global coordinate changes of R^n before application of the kernel k(·) to the columns of A, B. These global transformations include projections (say, onto prototypical "parts") that may be given or "learned" from a training set.

¹Any S can be represented as a sum over tensor products: given column-wise ordering, the matrix G ⊗ F is composed of n × n blocks of the form f_ij G. Therefore, take G_r to be the n × n blocks of S and F_r to be the elemental matrices which have "1" in coordinate r = (i, j) and zero everywhere else.
The matrices F̂_r determine the range of interaction between columns of A and columns of B. For example, when Ĝ_r = I then <A, B> = trace(AᵀB F̂), where F̂ is the upper-left submatrix, of the appropriate dimension, of some fixed m × m p.s.d. matrix F = Σ_r F_r. Note that the entries of AᵀB are k(a_i, b_j). In other words, when G_r = I, <A, B> boils down to a simple linear superposition of the local kernels, Σ_ij k(a_i, b_j) f_ij, where the entries f_ij are part of the upper-left block of a fixed positive definite matrix F, with block dimensions commensurate with the number of columns of A and of B. The various choices of F determine the type of invariances one can obtain from the similarity measure. For example, when F = I the similarity is simply the sum (average) of the local kernels k(a_i, b_i), thereby assuming a strict alignment between the local parts represented by A and the local parts represented by B. At the other end of the invariance spectrum, when F = 11ᵀ (all entries equal to "1") the similarity measure averages over all interactions of local parts k(a_i, b_j), thereby achieving invariance to the order of the parts. A decaying weighted interaction such as f_ij = σ^(−|i−j|) provides a middle ground between the assumption of strict alignment and the assumption of complete lack of alignment. In the section below we derive the non-linear version of sim(A, B) based on the basic machinery of <A, B> of Eqn. (1) and lifting operations on A, B.

3 Lifting Matrices onto Higher Dimensions

The family sim(A, B) = <A, B> forms a weighted linear superposition of the local kernels k(a_i, b_j). Non-linear combinations of local kernels emerge using mappings ψ(A) from the input matrices onto other higher-dimensional matrices, thus forming sim(A, B) = <ψ(A), ψ(B)>.
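Before moving to the non-linear case, the three weighting schemes discussed above (strict alignment, full order invariance, and the decaying middle ground) can be generated explicitly. A sketch; the function name, argument names, and the default σ are our own:

```python
import numpy as np

def make_F(m, kind, sigma=2.0):
    """Weight matrices for the linear set kernel <A,B>:
    'identity' assumes strictly aligned parts, 'ones' averages
    over all part interactions, 'decay' sets f_ij = sigma^{-|i-j|}."""
    if kind == "identity":
        return np.eye(m)
    if kind == "ones":
        return np.ones((m, m))
    if kind == "decay":
        idx = np.arange(m)
        return sigma ** (-np.abs(idx[:, None] - idx[None, :]))
    raise ValueError(f"unknown kind: {kind}")
```

All three are positive semidefinite for the stated parameter ranges, as required for the overall similarity to remain a valid kernel.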
Additional invariance properties, and parameters controlling the performance of sim(A, B), emerge with the introduction of non-linear combinations of local kernels; these will be discussed later in this section. Consider the general d-fold lifting ψ(A) = A^⊗d, which can be viewed as an n^d × k^d matrix. Let F_r be a p.s.d. matrix of dimension m^d × m^d and F̂_r the upper-left q^d × k^d block of F_r. Let G_r = (Ĝ_r)^⊗d be a p.s.d. matrix of dimension n^d × n^d, where Ĝ_r is a p.s.d. n × n matrix. Using the identity (A^⊗d)ᵀ B^⊗d = (AᵀB)^⊗d we obtain the inner product in the lifted space:

<A^⊗d, B^⊗d> = Σ_r trace( (Aᵀ Ĝ_r B)^⊗d F̂_r ).

By taking linear combinations of <A^⊗l, B^⊗l>, l = 1, …, d, we obtain the general non-homogeneous d-fold inner product sim_d(A, B). At this point the formulation is general but somewhat unwieldy computation-wise. The key to computational simplification lies in the fact that the choice of F_r determines not only local interactions (as in the linear case) but also group invariances. The group invariances result from applying symmetric operators on the tensor product space; we consider two such operators here, known as the d-fold alternating tensor A^∧d = A ∧ … ∧ A and the d-fold symmetric tensor A^·d = A · … · A. These lifting operations introduce the determinant and permanent operations on submatrices of Aᵀ Ĝ_r B, as described below. The alternating tensor is a multilinear map,

(A ∧ … ∧ A)(x₁ ∧ … ∧ x_d) = Ax₁ ∧ … ∧ Ax_d,  where  x₁ ∧ … ∧ x_d = (1/d!) Σ_{σ ∈ S_d} sign(σ) x_{σ(1)} ⊗ … ⊗ x_{σ(d)},

where S_d is the symmetric group over d letters and σ ∈ S_d are the permutations of the group. If x₁, …, x_n form a basis of R^n, then the C(n, d) elements x_{i₁} ∧ … ∧ x_{i_d}, with 1 ≤ i₁ < … < i_d ≤ n, form a basis of the alternating d-fold tensor product of R^n, denoted Λ^d R^n. If A ∈ R^{n×k} is a linear map sending points of R^k to R^n, then A^∧d is the linear map sending x₁ ∧ … ∧ x_d to Ax₁ ∧ … ∧ Ax_d, i.e., sending points of Λ^d R^k to points of Λ^d R^n.
The matrix representation of A^∧d is called the "d'th compound matrix" C_d(A), whose (i₁,…,i_d | j₁,…,j_d) entry has the value det(A[i₁,…,i_d : j₁,…,j_d]), the determinant of the d × d block constructed by choosing rows i₁,…,i_d and columns j₁,…,j_d of A. In other words, C_d(A) has C(n, d) rows and C(k, d) columns (instead of the n^d × k^d necessary for A^⊗d), whose entries are the d × d minors of A. When k = d, C_k(A) is a vector known as the Grassmannian of A, and when n = k = d, C_d(A) = det(A). Finally, the identity (A^⊗d)ᵀ B^⊗d = (AᵀB)^⊗d specializes to (A^∧d)ᵀ B^∧d = (AᵀB)^∧d, which translates to the identity C_d(A)ᵀ C_d(B) = C_d(AᵀB), known as the Binet-Cauchy theorem [1]. Taken together, the "d-fold alternating kernel" Λ_d(A, B) is defined by:

Λ_d(A, B) = <A^∧d, B^∧d> = <C_d(A), C_d(B)> = Σ_r trace( C_d(Aᵀ Ĝ_r B) F̂_r ),  (2)

where F̂_r is the C(q, d) × C(k, d) upper-left submatrix of the p.s.d. C(m, d) × C(m, d) matrix F_r. Note that the local kernel enters as the entries (Aᵀ Ĝ_r B)_ij = k(M_r a_i, M_r b_j), where Ĝ_r = M_rᵀ M_r. Another symmetric operator on the tensor product space is the d-fold symmetric tensor space Sym^d R^n, whose points are:

x₁ · … · x_d = (1/d!) Σ_{σ ∈ S_d} x_{σ(1)} ⊗ … ⊗ x_{σ(d)}.

The analogue of C_d(A) is the "d'th power matrix" R_d(A), whose (i₁,…,i_d | j₁,…,j_d) entry has the value perm(A[i₁,…,i_d : j₁,…,j_d]) and which stands for the map A^·d:

(A · … · A)(x₁ · … · x_d) = Ax₁ · … · Ax_d.

In other words, R_d(A) has C(n+d−1, d) rows and C(k+d−1, d) columns, whose entries are the d × d permanents of A. The analogue of the Binet-Cauchy theorem is R_d(A)ᵀ R_d(B) = R_d(AᵀB). The ensuing kernel similarity function, referred to as the "d-fold symmetric kernel", is:

Sym_d(A, B) = <A^·d, B^·d> = <R_d(A), R_d(B)> = Σ_r trace( R_d(Aᵀ Ĝ_r B) F̂_r ),  (3)

where F̂_r is the C(q+d−1, d) × C(k+d−1, d) upper-left submatrix of the positive definite C(m+d−1, d) × C(m+d−1, d) matrix F_r.
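For Ĝ_r = I, Eq. 2 can be unpacked directly: form K_ij = k(a_i, b_j) and sum determinants of d × d minors of K. The sketch below weights all subset pairs equally (i.e., an all-ones F̂; names are ours). It enumerates subsets explicitly, which is exponential in d, so it is meant only to illustrate the definition, not as an efficient implementation:

```python
import numpy as np
from itertools import combinations

def alternating_kernel(A, B, d):
    """Lambda_d(A, B) with G = I and an all-ones weight matrix:
    the sum of det(K[rows, cols]) over all d x d minors of the
    local-kernel matrix K_ij = <a_i, b_j> (linear local kernel)."""
    K = A.T @ B                     # entries k(a_i, b_j)
    k_, q_ = K.shape
    total = 0.0
    for rows in combinations(range(k_), d):
        for cols in combinations(range(q_), d):
            total += np.linalg.det(K[np.ix_(rows, cols)])
    return total
```

For d = 1 this recovers the fully order-invariant linear set kernel (the sum of all entries of K); for k = q = d it collapses to the single determinant det(AᵀB) of the Binet-Cauchy identity.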
Due to lack of space we stop here and spend the remainder of this section describing, in layman's terms, the properties of these similarity measures and how they can be constructed in practice in a computationally efficient manner (despite the combinatorial element in their definition).

3.1 Practical Considerations

To recap, the family of similarity functions sim(A, B) comprises the linear version <A, B> (Eqn. 1) and the non-linear versions Λ_l(A, B), Sym_l(A, B) (Eqns. 2, 3), which are group projections of the general kernel <A^⊗d, B^⊗d>. These different similarity functions are controlled by the choice of three items: G_r, F_r, and the parameter d representing the degree of the tensor product operator. Specifically, we will focus on the case G_r = I and on Λ_d(A, B) as a representative of the non-linear family. The role of Ĝ_r is fairly interesting, as it can be viewed as a projection operator from "parts" to prototypical parts that can be learned from a training set, but we leave this to the full-length article that will appear later. Practically, to compute Λ_d(A, B) one runs over all d × d blocks of the k × q matrix AᵀB (whose entries are k(a_i, b_j)) and computes the determinant of each block. The similarity function is a weighted sum of all those determinants, weighted by f_ij. By appropriate selection of F one can control both the complexity of the computation (avoiding a run over all possible d × d blocks) and the degree of interaction between the determinants. These determinants have an interesting geometric interpretation when computed over unitary matrices, as described next. Let A = Q_A R_A and B = Q_B R_B be the QR factorizations of the matrices, i.e., Q_A has orthonormal columns which span the column space of A. It has recently been shown [14] that R_A⁻¹ can be computed from A using only operations over k(a_i, a_j). Therefore the product Q_Aᵀ Q_B, which equals R_A⁻ᵀ AᵀB R_B⁻¹, can be computed using only local kernel applications.
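The quantity Q_Aᵀ Q_B can be computed directly when the column vectors are given explicitly. A sketch using numpy's QR factorization in place of the kernel-only computation of R_A⁻¹ from [14] (so it does not exercise the kernel trick itself); the absolute value absorbs the sign ambiguity of QR, and the function name is ours:

```python
import numpy as np

def principal_angle_similarity(A, B):
    """|det(Q_A^T Q_B)|: the product of the cosines of the
    principal angles between the column spaces of A and B
    (the k = q = d case of the alternating kernel)."""
    QA, _ = np.linalg.qr(A)   # orthonormal basis for col(A)
    QB, _ = np.linalg.qr(B)   # orthonormal basis for col(B)
    return abs(np.linalg.det(QA.T @ QB))
```

The value is 1 when the two column spaces coincide, 0 when they contain orthogonal directions, and is invariant to the order (and to invertible recombinations) of the columns.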
In other words, for each A compute R_A⁻¹ (which can be done using only inner products over the columns of A); then, instead of AᵀB, compute R_A⁻ᵀ AᵀB R_B⁻¹, which is equivalent to computing Q_Aᵀ Q_B. Thus we have effectively replaced every A with the unitary matrix Q_A. Now, Λ_d(Q_A, Q_B) for unitary matrices is the sum over products of the cosines of the principal angles between d-dimensional subspaces spanned by columns of A and B. The value of each determinant of a d × d block of Q_Aᵀ Q_B is equal to the product of the cosines of the principal angles between the respective d-dimensional subspaces determined by the corresponding selection of d columns from A and d columns from B. For example, the case k = q = d produces Λ_d(Q_A, Q_B) = det(Q_Aᵀ Q_B), the product of the eigenvalues of the matrix Q_Aᵀ Q_B. Those eigenvalues are the cosines of the principal angles between the column space of A and the column space of B [2]. Therefore, det(Q_Aᵀ Q_B) measures the "angle" between the two subspaces spanned by the respective columns of the input matrices; in particular, it is invariant to the order of the columns. For smaller values of d we obtain the sum over such products between subspaces spanned by subsets of d columns of A and of B. The advantage of smaller values of d is twofold: first, it enables computing the similarity when k ≠ q; second, it breaks the similarity between subspaces down into smaller pieces.

Figure 1: (a) The configuration of the nine sub-regions is displayed over the gradient image. (b) Some of the positive examples; note the large variation in appearance, pose and articulation.

The entries of the matrix F determine which subspaces are being considered and the interaction between subspaces in A and B. A diagonal F compares corresponding subspaces
For example, we may want to consider choices of d columns arranged in a ”sliding” fashion, i.e., column sets {1, .., d}, {2, ..., d + 1}, ... and so forth, instead of the combinatorial number of all possible choices. This selection is associated with a sparse diagonal F where the non-vanishing entries along the diagonal have the value of ”1” and correspond to the sliding window selections. To conclude, in the linear version < A, B > the role of F is to determine the range of interaction between columns of A and columns of B, whereas with the non-linear version it is the interaction between d-dim subspaces rather than individual columns. We could select all possible interactions (exponential number) or any reduced interaction set such as the sliding window rule (linear number of choices) as described above. 4 Experiments We examined the performance of sim(A, B) on part-based representations for pedestrian detection using SVM for the inference engine. The dataset we used (courtesy of Mobileye Ltd.) covers a challenging variability of appearance, viewing position and body articulation (see Fig. 1). We ran a suit of comparative experiments using sim(A, B) =< A, B > with three versions of F = {I, 11⊤, decay} with local kernels covering linear, d’th degree polynomial (d = 2, 6) and RBF kernel, and likewise with sim(A, B) = Λd(A, B) with d = 2, sparse diagonal F (covering a sliding window configuration) and with linear, polynomial and RBF local kernels. We compared our results to the conventional down-sampled holistic representation where the raw images were down-sampled to size 20 × 20 and 32 × 32. Our tests also included simulation of occlusions (in the test images) in order to examine the sensitivity of our sim(A, B) family to occlusions. 
For the local part representation, the input image was divided into 9 fixed regions; for each region, local orientation statistics were generated following [5, 7], with a total of 22 numbers per region (see Fig. 1a), thereby making a 22 × 9 matrix representation to be fed into sim(A, B). The training set consisted of 4000 examples split evenly between positive and negative, and a test set of 4000 examples was used to evaluate the performance of each trial. The table below summarizes the accuracy results for the raw-pixel (holistic) representation over three trials: (i) images down-sampled to 20 × 20, (ii) images down-sampled to 32 × 32, and (iii) partially occluded test images (32 × 32 version). The accuracy figures are the ratio between the sum of the true positives and true negatives and the total number of test examples.

  raw          linear   poly d=2   poly d=6   RBF
  20 × 20      78%      83%        84%        86%
  32 × 32      78%      84%        85%        82%
  occlusion    73.5%    72%        77%        76.5%

The table below displays sim(A, B) with linear and RBF local kernels.

  local kernel   <A,B>, F=I   <A,B>, F=11ᵀ   <A,B>, f_ij = 2^(−|i−j|)   Λ2(A,B)
  linear         90.8%        85%            90.6%                      88%
  RBF            91.2%        85%            90.4%                      90%

One can see that the local part representation provides a sharp increase in accuracy compared to the raw-pixel holistic representation. The added invariance to the order of parts induced by <A, B>, F = 11ᵀ is not required, since the parts are aligned; accordingly, accuracy is highest for the linear combination of local RBF kernels, <A, B>, F = I. The same applies to the non-linear version Λ_d(A, B): the additional invariances that come with a non-linear combination of local parts are apparently not required here.
The power of the non-linearity associated with the combination of local parts comes to bear when the test images have occluded parts, i.e., when at random one of the columns of the input matrix is removed (or replaced with a random vector), as shown in the table below:

local kernel   < A, B >, F = I   Λ2(A, B)
linear         62%               87%
RBF            83%               88%

One can notice that a linear combination of local parts suffers from reduced accuracy, whereas the non-linear combination maintains a stable accuracy (compare the right-most columns of the two tables above). Although the experiments above are still preliminary, they show the power and potential of the sim(A, B) family of kernels defined over local kernels. With the principles laid down in Section 3 one can construct a large number (we touched on only a few) of algebraic kernels which combine the local kernels in non-linear ways, thus creating invariance to the order of parts and increased robustness to occlusion. Further research is required for sifting through the various possibilities with this new family of kernels and extracting their properties, their invariances and their behavior under changing parameters (F_r, G_r, d). References [1] A.C. Aitken. Determinants and Matrices. Interscience Publishers Inc., 4th edition, 1946. [2] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, 1989. [3] L. Gurvits. Classical deterministic complexity of Edmonds' problem and quantum entanglement. In ACM Symp. on Theory of Computing, 2003. [4] R. Kondor and T. Jebara. A kernel between sets of vectors. In International Conference on Machine Learning (ICML), 2003. [5] D.G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004. [6] C. Schmid and R. Mohr. Local grey-value invariants for image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(5):530–535, 1997. [7] A. Shashua, Y. Gdalyahu, G. Hayun and L. Mann. "Pedestrian Detection for Driving Assistance Systems".
IEEE Intelligent Vehicles Symposium (IV2004), June 2004, Parma, Italy. [8] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002. [9] G. Shakhnarovich, J.W. Fisher, and T. Darrell. Face recognition from long-term observations. In Proceedings of the European Conference on Computer Vision, 2002. [10] S. Ullman, M. Vidal-Naquet, and E. Sali. Visual features of intermediate complexity and their use in classification. Nature Neuroscience, 5(7):1–6, 2002. [11] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer, 2nd edition, 1998. [12] N. Vasconcelos, P. Ho, and P. Moreno. The Kullback-Leibler kernel as a framework for discriminant and localized representations for visual recognition. In Proceedings of the European Conference on Computer Vision, pages 430–441, Prague, Czech Republic, May 2004. [13] C. Wallraven, B. Caputo, and A. Graf. Recognition with local features: the kernel recipe. In Proceedings of the International Conference on Computer Vision, 2003. [14] L. Wolf and A. Shashua. Learning over sets using kernel principal angles. Journal of Machine Learning Research (JMLR), 4(10):913–931, 2003.
The Correlated Correspondence Algorithm for Unsupervised Registration of Nonrigid Surfaces Dragomir Anguelov1, Praveen Srinivasan1, Hoi-Cheung Pang1, Daphne Koller1, Sebastian Thrun1, James Davis2 ∗ 1 Stanford University, Stanford, CA 94305 2 University of California, Santa Cruz, CA 95064 e-mail:{drago,praveens,hcpang,koller,thrun,jedavis}@cs.stanford.edu Abstract We present an unsupervised algorithm for registering 3D surface scans of an object undergoing significant deformations. Our algorithm does not need markers, nor does it assume prior knowledge about object shape, the dynamics of its deformation, or scan alignment. The algorithm registers two meshes by optimizing a joint probabilistic model over all point-to-point correspondences between them. This model enforces preservation of local mesh geometry, as well as more global constraints that capture the preservation of geodesic distance between corresponding point pairs. The algorithm applies even when one of the meshes is an incomplete range scan; thus, it can be used to automatically fill in the remaining surfaces for this partial scan, even if those surfaces were previously only seen in a different configuration. We evaluate the algorithm on several real-world datasets, where we demonstrate good results in the presence of significant movement of articulated parts and non-rigid surface deformation. Finally, we show that the output of the algorithm can be used for compelling computer graphics tasks such as interpolation between two scans of a non-rigid object and automatic recovery of articulated object models. 1 Introduction The construction of 3D object models is a key task for many graphics applications. It is becoming increasingly common to acquire these models from a range scan of a physical object. This paper deals with an important subproblem of this acquisition task — the problem of registering two deforming surfaces corresponding to different configurations of the same non-rigid object.
The main difficulty in the 3D registration problem is determining the correspondences of points on one surface to points on the other. Local regions on the surface are rarely distinctive enough to determine the correct correspondence, whether because of noise in the scans or because of symmetries in the object shape. Thus, the set of candidate correspondences for a given point is usually large. Determining the correspondence for all object points results in a combinatorially large search problem. (∗A results video is available at http://robotics.stanford.edu/∼drago/cc/video.mp4.)

Figure 1: A) Registration results for two meshes. Non-rigid ICP and its variant augmented with spin images get stuck in local maxima. Our CC algorithm produces a largely correct registration, although with an artifact in the right shoulder (inset). B) Illustration of the link deformation process. C) The CC algorithm using only deformation potentials can violate mesh geometry: near regions can map to far ones (segment AB) and far regions can map to near ones (points C, D).

The existing algorithms for deformable surface registration make the problem tractable by assuming significant prior knowledge about the objects being registered. Some rely on the presence of markers on the object [1, 20], while others assume prior knowledge about the object dynamics [16], or about the space of non-rigid deformations [15, 5]. Algorithms that make neither restriction [18, 12] simplify the problem by decorrelating the choice of correspondences for the different points in the scan. However, this approximation is only good when the object deformation is small; otherwise, it results in poor local maxima, as nearby points in one scan are allowed to map to far-away points in the other. Our algorithm defines a joint probabilistic model over all correspondences which explicitly models the correlations between them — specifically, that nearby points in one mesh should map to nearby points in the other.
Importantly, the notion of "nearby" used in our model is defined in terms of geodesic distance over the mesh. We define a probabilistic model over the set of correspondences that encodes these geodesic-distance constraints, as well as penalties for link twisting and stretching, and high-level local surface features [14]. We then apply loopy belief propagation [21] to this model, in order to solve for the entire set of correspondences simultaneously. The result is a registration that respects the surface geometry. To the best of our knowledge, the algorithm we present in this paper is the first that allows the registration of 3D surfaces of an object where the object configurations can vary significantly, there is no prior knowledge about object shape or dynamics of deformation, and nothing whatsoever is known about the object alignment. Moreover, unlike many methods, our algorithm can be used to register a partial scan to a complete model, greatly increasing its applicability. We apply our approach to three datasets containing 3D scans of a wooden puppet, a human arm and entire human bodies in different configurations. We demonstrate good registration results for scan pairs exhibiting articulated motion, non-rigid deformations, or both. We also describe three applications of our method. In our first application, we show how a partial scan of an object can be registered onto a fully specified model in a different configuration. The resulting registration allows us to use the model to "complete" the partial scan in a way that preserves the local surface geometry. In the second, we use the correspondences found by our algorithm to smoothly interpolate between two different poses of an object. In our final application, we use a set of registered scans of the same object in different positions to recover a decomposition of the object into approximately rigid parts, and recover an articulated skeleton linking the parts.
All of these applications are done in an unsupervised way, using only the output of our Correlated Correspondence algorithm applied to pairs of poses with widely varying deformations and unknown initial alignments. These results demonstrate the value of a high-quality solution to the registration problem for a range of graphics tasks. 2 Previous Work Surface registration is a fundamental building block in computer graphics. The classical solution for registering rigid surfaces is the Iterative Closest Point algorithm (ICP) [4, 6, 17]. Recently, there has been work extending ICP to non-rigid surfaces [18, 8, 12, 1]. These algorithms treat one of the scans (usually a complete model of the surface) as a deformable template. The links between adjacent points on the surface can be thought of as springs, which are allowed to deform at a cost. Similarly to ICP, these algorithms iterate between two subproblems — estimating the non-rigid transformation Θ and estimating the set of correspondences C between the scans. The step estimating the correspondences assumes that a good estimate of the non-rigid transformation Θ is available. Under this assumption, the assignments to the correspondence variables become decorrelated: each point in the second scan is associated with the nearest point (in the Euclidean-distance sense) in the deformed template scan. However, the decomposition also induces the algorithm's main limitation. By assigning points in the second scan to points on the deformed model independently, nearby points in the scan can get associated with remote points in the model if the estimate of Θ is poor (Fig. 1A). While several approaches have been proposed to address this problem of incorrect correspondences, their applicability is largely limited to problems where the deformation is local and the initial alignment is approximately correct. Another line of related work is the work on deformable template matching in the computer vision community.
In the 3D case, this framework is used for detection of articulated object models in images [13, 22, 19]. The algorithms assume that the decomposition of the object into a relatively small number of parts is known, and that a detector for each object part is available. Template-matching approaches have also been applied to deformable 2D objects, where very efficient solutions exist [9, 11]. However, these methods do not extend easily to the case of 3D surfaces. 3 The Correlated Correspondence Algorithm The input to the algorithm is a set of two meshes (surfaces tessellated into polygons). The model mesh X = (V^X, E^X) is a complete model of the object, in a particular pose. V^X = (x1, ..., xN) denotes the mesh points, while E^X is the set of links between adjacent points on the mesh surface. The data mesh Z = (V^Z, E^Z) is either a complete model or a partial view of the object in a different configuration. Each data mesh point zk is associated with a correspondence variable ck, specifying the corresponding model mesh point. The task of registration is one of estimating the set of all correspondences C and a non-rigid transformation Θ which aligns the corresponding points. 3.1 Probabilistic Model We formulate the registration problem as one of finding an embedding of the data mesh Z into the model mesh X, which is encoded as an assignment to all correspondence variables C = (c1, ..., cK). The main idea behind our approach is to preserve the consistency of the embedding by explicitly correlating the assignments to the correspondence variables. We define a joint distribution over the correspondence variables c1, ..., cK, represented as a Markov network. For each pair of adjacent data mesh points zk, zl, we want to define a probabilistic potential ψ(ck, cl) that constrains this pair of correspondences to be reasonable and consistent.
This gives rise to a joint probability distribution of the form

p(C) = (1/Z) ∏_k ψ(ck) ∏_{k,l} ψ(ck, cl)

which contains only single and pairwise potentials. Performing probabilistic inference to find the most likely joint assignment to the entire set of correspondence variables C should yield a good and consistent registration. Deformation Potentials. We want our model to encode a preference for embeddings of mesh Z into mesh X which minimize the amount of deformation Θ induced by the embedding. In order to quantify the amount of deformation Θ applied to the model, we follow the ideas of Hähnel et al. [12] and treat the links in the set E^X as springs, which resist stretching and twisting at their endpoints. Stretching is easily quantified by looking at changes in the link length induced by the transformation Θ. Link twisting, however, is ill-specified if one looks only at the Cartesian coordinates of the points. Following [12], we attach an imaginary local coordinate system to each point on the model. This local coordinate system allows us to quantify the "twist" of a point xj relative to a neighbor xi. A non-rigid transformation Θ defines, for each point xi, a translation of its coordinates and a rotation of its local coordinate system. To evaluate the deformation penalty, we parameterize each link in the model in terms of its length and its direction relative to its endpoints (see Fig. 1B). Specifically, we define li,j to be the distance between xi and xj; di→j is a unit vector denoting the direction of the point xj in the coordinate system of xi (and vice versa). We use ei,j to denote the set of edge parameters (li,j, di→j, dj→i). It is now straightforward to specify the penalty for model deformations. Let Θ be a transformation, and let ẽi,j denote the triple of parameters associated with the link between xi and xj after applying Θ.
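As an illustration of this link parameterization, the triple ei,j = (li,j, di→j, dj→i) can be computed from the point coordinates and the local frames; the convention that each local frame is a 3×3 rotation matrix is our own assumption, not spelled out in the paper:

```python
import numpy as np

def link_params(x_i, x_j, R_i, R_j):
    """Compute e_ij = (l_ij, d_i->j, d_j->i): the link length, and the unit
    direction of each endpoint expressed in the other endpoint's local
    coordinate frame (R_i, R_j are 3x3 rotation matrices; assumed convention)."""
    v = np.asarray(x_j, float) - np.asarray(x_i, float)
    l = np.linalg.norm(v)
    d_ij = R_i.T @ (v / l)       # direction of x_j seen from x_i's frame
    d_ji = R_j.T @ (-v / l)      # direction of x_i seen from x_j's frame
    return l, d_ij, d_ji
```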
Our model penalizes twisting and stretching, using a separate zero-mean Gaussian noise model for each:

P(ẽi,j | ei,j) = P(l̃i,j | li,j) P(d̃i→j | di→j) P(d̃j→i | dj→i)   (1)

In the absence of prior information, we assume that all links are equally likely to deform. In order to quantify the deformation induced by an embedding C, we need to include a potential ψd(ck, cl) for each link e^Z_k,l ∈ E^Z. Every probability ψd(ck = i, cl = j) corresponds to the deformation penalty incurred by deforming the model link ei,j to generate the link e^Z_k,l, and is defined in (1). We do not restrict ourselves to the set of links in E^X, since the original mesh tessellation is sparse and local; any two points in X are allowed to implicitly define a link. Unfortunately, we cannot directly estimate the quantity P(e^Z_k,l | ei,j), since the link parameters e^Z_k,l depend on knowing the non-rigid transformation, which is not given as part of the input. The key issue is estimating the (unknown) relative rotation of the link endpoints. In effect, this rotation is an additional latent variable, which must also be part of the probabilistic model. To remain within the realm of discrete Markov networks, allowing the application of standard probabilistic inference algorithms, we discretize the space of possible rotations and fold it into the domains of the correspondence variables. For each possible value of the correspondence variable ck = i we select a small set of candidate rotations consistent with local geometry. We do this by aligning local patches around the points xi and zk using rigid ICP. We extend the domain of each correspondence variable ck so that each value encodes a matching point and a particular rotation from the precomputed set for that point. Now the edge parameters e^Z_k,l are fully determined, and so is the probabilistic potential. Geodesic Distances. Our proposed approach raises the question as to what constitutes the best constraint between neighboring correspondence variables.
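A log-space sketch of the deformation penalty of Eq. (1), with independent zero-mean Gaussian terms for stretching and twisting (the σ values are illustrative; the paper does not report its noise-model parameters):

```python
import numpy as np

def log_deformation_penalty(e, e_tilde, sigma_len=0.1, sigma_dir=0.3):
    """Log of Eq. (1): Gaussian penalties on the change in link length and in
    the two link directions. e and e_tilde are (l, d_ij, d_ji) triples."""
    l, d_ij, d_ji = e
    lt, dt_ij, dt_ji = e_tilde
    pen = -0.5 * ((lt - l) / sigma_len) ** 2
    pen += -0.5 * np.sum((np.asarray(dt_ij) - np.asarray(d_ij)) ** 2) / sigma_dir ** 2
    pen += -0.5 * np.sum((np.asarray(dt_ji) - np.asarray(d_ji)) ** 2) / sigma_dir ** 2
    return pen  # 0 for an undeformed link; increasingly negative with deformation
```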
The literature on scan registration — for rigid and non-rigid models alike — relies on preserving Euclidean distance. While Euclidean distance is meaningful for rigid objects, it is very sensitive to deformations, especially those induced by moving parts. For example, in Fig. 1C, we see that the two legs in one configuration of our puppet are fairly close together, allowing the algorithm to map two adjacent points in the data mesh to the two separate legs with minimal deformation penalty. In the complementary situation, especially when object symmetries are present, two distant yet similar points in one scan might get mapped to the same region in the other. For example, in the same figure, we see that points in both an arm and a leg in the data mesh get mapped to a single leg in the model mesh. We therefore want to enforce constraints preserving distance along the mesh surface (geodesic distance). Our probabilistic framework easily incorporates such constraints as correlations between pairs of correspondence variables.

Figure 2: A) Automatic interpolation between two scans of an arm and a wooden puppet. B) Registration results on two scans of the same man sitting and standing up (selected points are displayed). C) Registration results on scans of a larger man and a smaller woman. The algorithm is robust to small changes in object scale.

We encode a nearness-preservation constraint which prevents adjacent points in mesh Z from being mapped to distant points in X in the geodesic-distance sense. For adjacent points zk, zl in the data mesh, we define the following potential:

ψn(ck = i, cl = j) = { 0 if dist_geodesic(xi, xj) > αρ; 1 otherwise }   (2)

where ρ is the data-mesh resolution and α is a constant, chosen to be 3.5. The farness-preservation potentials encode the complementary constraint.
For every pair of points zk, zl whose geodesic distance is more than 5ρ on the data mesh, we have a potential:

ψf(ck = i, cl = j) = { 0 if dist_geodesic(xi, xj) < βρ; 1 otherwise }   (3)

where β is also a constant, chosen to be 2 in our implementation. The intuition behind this constraint is fairly clear: if zk and zl are far apart on the data mesh, then their corresponding points must be far apart on the model mesh. Local Surface Signatures. Finally, we encode a set of potentials that correspond to the preservation of local surface properties between the model mesh and the data mesh. The use of local surface signatures is important because it helps to guide the optimization in the exponential space of assignments. We use spin images [14], compressed with principal component analysis, to produce a low-dimensional signature sx of the local surface geometry around a point x. When data and model points correspond, we expect their local signatures to be similar. We introduce a potential whose values ψs(ck = i) enforce a zero-mean Gaussian penalty for discrepancies between sxi and szk. 3.2 Optimization In the previous section, we defined a Markov network which encodes a joint probability distribution over the correspondence variables as a product of single and pairwise potentials. Our goal is to find a joint assignment to these variables that maximizes this probability. This is a standard probabilistic inference problem over the Markov network. However, the Markov network is quite large and contains a large number of loops, so exact inference is computationally infeasible. We therefore apply an approximate inference method known as loopy belief propagation (LBP) [21], which has been shown to work in a wide variety of applications. Running LBP until convergence results in a set of probabilistic assignments to the different correspondence variables, which are locally consistent. We then simply extract the most likely assignment for each variable to obtain a correspondence.
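The potentials of Eqs. (2) and (3), together with the local-signature penalty, admit a direct sketch; geodesic distance is approximated here by Dijkstra shortest paths over the mesh edge graph (α = 3.5 and β = 2 are taken from the text; σ and the adjacency-list format are our assumptions):

```python
import heapq
import math

def geodesic_distances(adj, source):
    """Approximate geodesic distance via Dijkstra over the mesh edge graph;
    adj maps a vertex to (neighbor, edge_length) pairs (assumed data layout)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def psi_near(dist_ij, rho, alpha=3.5):
    """Eq. (2): 0 if the model points are farther than alpha*rho apart
    along the surface, 1 otherwise."""
    return 0.0 if dist_ij > alpha * rho else 1.0

def psi_far(dist_ij, rho, beta=2.0):
    """Eq. (3), applied to data-mesh pairs more than 5*rho apart:
    0 if the model points are closer than beta*rho, 1 otherwise."""
    return 0.0 if dist_ij < beta * rho else 1.0

def psi_signature(s_model, s_data, sigma=1.0):
    """Zero-mean Gaussian penalty on the discrepancy between compressed
    spin-image signatures (sigma is illustrative)."""
    d2 = sum((a - b) ** 2 for a, b in zip(s_model, s_data))
    return math.exp(-0.5 * d2 / sigma ** 2)
```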
One remaining complication arises from the form of our farness-preservation constraints. In general, most pairs of points in the mesh are not close, so the total number of such potentials grows as O(M²), where M is the number of points in the data mesh. However, rather than introducing all these potentials into the Markov network from the start, we introduce them as needed. First, we run LBP without any farness-preservation potentials. If the solution violates some farness-preservation constraints, we add them and rerun LBP. In practice, this approach adds a very small number of such constraints. 4 Experimental Results Basic Registration. We applied our registration algorithm to three different datasets, containing meshes of a human arm, a wooden puppet, and the CAESAR dataset of whole human bodies [1], all acquired by a 3D range scanner. The meshes were not complete surfaces, but several techniques exist for filling the holes (e.g., [10]). We ran the Correlated Correspondence algorithm using the same probabilistic model and the same parameters on all data sets. We use a coarse-to-fine strategy, using the result of a coarse sub-sampling of the mesh surface to constrain the correspondences at a finer-grained level. The resulting set of correspondences was used as markers to initialize the non-rigid ICP algorithm of Hähnel et al. [12]. The Correlated Correspondence algorithm successfully aligned all mesh pairs in our human arm data set, which contains 7 arms. In the puppet data set we registered one of the meshes to the remaining 6 puppets. The algorithm correctly registered 4 out of 6 data meshes to the model mesh. In the two remaining cases, the algorithm produced a registration in which the torso was flipped, so that the front was mapped to the back. This problem arises from ambiguities induced by the puppet's symmetry: its front and back are almost identical.
Importantly, our probabilistic model assigns a higher likelihood score to the correct solution, so the incorrect registration is a consequence of local maxima in the LBP algorithm. This fact allows us to address the issue in an unsupervised way simply by running loopy BP several times with different initializations. For details on the unsupervised initialization scheme we used, please refer to our technical report [2]. We ran the modified algorithm to register one puppet mesh to the remaining 6 meshes in the dataset, obtaining the correct registration in all cases. In particular, as shown in Fig. 1A, we successfully deal with the case in which the straightforward non-rigid ICP algorithm failed. The modified algorithm was applied to the CAESAR dataset and produced very good registrations for challenging cases exhibiting both articulated motion and deformation (Fig. 2B), or exhibiting deformation and a (small) change in object scale (Fig. 2C). Overall, the algorithm performed robustly, producing close-to-optimal registrations even for pairs of meshes that involve large deformations, articulated motion, or both. The registration is accomplished in an unsupervised way, without any prior knowledge about object shape, dynamics, or alignment. Partial view completion. The Correlated Correspondence algorithm allows us to register a data mesh containing only a partial scan of an object to a known complete surface model of the object, which serves as a template. We can then transform the template mesh to the partial scan, a process which leaves undisturbed the links that are not involved in the partial mesh. The result is a mesh that matches the data on the observed points, while completing the unknown portion of the surface using the template. We take a partial mesh, which is missing the entire back part of the puppet in a particular pose. The resulting partial model is displayed in Fig.
3B-1; for comparison, the correct complete model in this configuration (which was not available to the algorithm) is shown in Fig. 3B-2. We register the partial mesh to models of the object in a different pose (Fig. 3B-3), and compare the completions we obtain (Fig. 3B-4) to the ground truth shown in Fig. 3B-2. The result demonstrates a largely correct reconstruction of the complete surface geometry from the partial scan and the deformed template. We report additional shape-completion results in [2]. Interpolation. Current research [20] shows that if a non-rigid transformation Θ between the poses is available, believable animation can be produced by linear interpolation between the model mesh and the transformed model mesh. The interpolation is performed in the space of local link parameters (li,j, di→j, dj→i). We demonstrate that transformation estimates produced by our algorithm can be used to automatically generate believable animation sequences between fairly different poses, as shown in Fig. 2A.

Figure 3: A) The results produced by the CC algorithm were used for unsupervised recovery of articulated models. 15 puppet parts and 4 arm parts, as well as the articulated object skeletons, were recovered. B) Partial-view completion results. The missing parts of the surface were estimated by registering the partial view to a complete model of the object in a different configuration.

Recovering Articulated Models. Articulated object models have a number of applications in animation and motion capture, and there has been work on recovering them automatically from 3D data [7, 3]. We show that our unsupervised registration capability can greatly assist articulated model recovery. In particular, the algorithm in [3] requires an estimate of the correspondences between a template mesh and the remaining meshes in the dataset. We supplied it with registrations computed with the Correlated Correspondence algorithm.
As a result, we managed to recover in a completely unsupervised way all 15 rigid parts of the puppet, as well as the joints between them (Fig. 3A). We demonstrate successful articulation recovery even for objects which are not purely rigid, as is the case with the human arm (see Fig. 3A). 5 Conclusion The contribution of this paper is an algorithm for unsupervised registration of non-rigid 3D surfaces in significantly different configurations. Our results show that the algorithm can deal with articulated objects subject to large joint movements, as well as with non-rigid surface deformations. The algorithm was not provided with markers or other cues regarding correspondence, and makes no assumptions about object shape, dynamics, or alignment. We show the quality and the utility of the registration results we obtain by using them as a starting point for compelling computer graphics applications: partial view completion, interpolation between scans, and recovery of articulated object models. Importantly, all these results were generated in a completely unsupervised manner from a set of input meshes. The main limitation of our approach is that it assumes (approximate) preservation of geodesic distance. Although this assumption is desirable in many cases, it is not always warranted. In some cases, the mesh topology may change drastically, for example, when an arm touches the body. We can try to extend our approach to handle these cases by detecting when they arise and eliminating the associated constraints. However, even this solution is likely to fail in some cases. A second limitation of our approach is that it assumes the data mesh is a subset of the model mesh. If the data mesh contains clutter, our algorithm will attempt to embed the clutter into the model. We feel that the general non-rigid registration problem becomes underspecified when significant clutter and occlusion are present simultaneously.
In this case, additional assumptions about the surfaces will be needed. Despite the fact that our algorithm performs quite well, there are limitations to what can be accurately inferred about the object from just two scans. Given more scans of the same object, we can try to learn the deformation penalty associated with different links, and bootstrap the algorithm. Such an extension would be a step toward the goal of learning models of object shape and dynamics from raw data. Acknowledgments. This work has been supported by the ONR Young Investigator (PECASE) grant N00014-99-1-0464, and ONR Grant N00014-00-1-0637 under the DoD MURI program. References [1] B. Allen, B. Curless, and Z. Popovic. The space of human body shapes: reconstruction and parameterization from range scans. In Proc. SIGGRAPH, 2003. [2] D. Anguelov, D. Koller, P. Srinivasan, S. Thrun, H. Pang, and J. Davis. The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. In TR-SAIL-2004-100, at http://robotics.stanford.edu/∼drago/cc/tr100.pdf, 2004. [3] D. Anguelov, D. Koller, H. Pang, P. Srinivasan, and S. Thrun. Recovering articulated object models from 3D range data. In Proc. UAI, 2004. [4] P. Besl and N. McKay. A method for registration of 3-D shapes. Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256, 1992. [5] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In SIGGRAPH, 1999. [6] Y. Chen and G. Medioni. Object modeling by registration of multiple range images. In Proc. IEEE Conf. on Robotics and Automation, 1991. [7] K. Cheung, S. Baker, and T. Kanade. Shape-from-silhouette of articulated objects and its use for human body kinematics estimation and motion capture. In Proc. IEEE CVPR, 2003. [8] H. Chui and A. Rangarajan. A new point matching algorithm for non-rigid registration. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2000. [9] J. Coughlan and S. Ferreira.
Finding deformable shapes using loopy belief propagation. In Proc. ECCV, volume 3, pages 453–468, 2002. [10] J. Davis, S. Marschner, M. Garr, and M. Levoy. Filling holes in complex surfaces using volumetric diffusion. In Symposium on 3D Data Processing, Visualization, and Transmission, 2002. [11] P. Felzenszwalb. Representation and detection of shapes in images. PhD thesis, Massachusetts Institute of Technology, 2003. [12] D. Hähnel, S. Thrun, and W. Burgard. An extension of the ICP algorithm for modeling nonrigid objects with mobile robots. In Proc. IJCAI, Acapulco, Mexico, 2003. [13] D. Huttenlocher and P. Felzenszwalb. Efficient matching of pictorial structures. In CVPR, 2003. [14] A. Johnson. Spin-Images: A Representation for 3-D Surface Matching. PhD thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, August 1997. [15] M. Leventon. Statistical models in medical image analysis. PhD thesis, Massachusetts Institute of Technology, 2000. [16] M. H. Lin. Tracking articulated objects in real-time range image sequences. In ICCV (1), pages 648–653, 1999. [17] S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. In Proc. 3DIM, Quebec City, Canada, 2001. IEEE Computer Society. [18] C. Shelton. Morphable surface models. International Journal of Computer Vision, 2000. [19] L. Sigal, M. Isard, B. H. Sigelman, and M. J. Black. Attractive people: Assembling loose-limbed models using non-parametric belief propagation. In NIPS, 2003. [20] R. Sumner and J. Popović. Deformation transfer for triangle meshes. In SIGGRAPH, 2004. [21] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. In Exploring Artificial Intelligence in the New Millennium. Science & Technology Books, 2003. [22] S. Yu, R. Gross, and J. Shi. Concurrent object recognition and segmentation with graph partitioning. In Proc. NIPS, 2002.
Maximum-Margin Matrix Factorization

Nathan Srebro, Dept. of Computer Science, University of Toronto, Toronto, ON, Canada. nati@cs.toronto.edu
Jason D. M. Rennie and Tommi S. Jaakkola, Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA, USA. jrennie,tommi@csail.mit.edu

Abstract

We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.

1 Introduction

Fitting a target matrix Y with a low-rank matrix X by minimizing the sum-squared error is a common approach to modeling tabulated data, and can be done explicitly in terms of the singular value decomposition of Y. It is often desirable, though, to minimize a different loss function: loss corresponding to a specific probabilistic model (where X are the mean parameters, as in pLSA [1], or the natural parameters [2]); or loss functions such as hinge loss appropriate for binary or discrete ordinal data. Loss functions other than squared error yield non-convex optimization problems with multiple local minima. Even with a squared-error loss, when only some of the entries in Y are observed, as is the case for collaborative filtering, local minima arise and SVD techniques are no longer applicable [3]. Low-rank approximations constrain the dimensionality of the factorization X = UV′. Other constraints, such as sparsity and non-negativity [4], have also been suggested for better capturing the structure in Y, and these also lead to non-convex optimization problems. In this paper we suggest regularizing the factorization by constraining the norm of U and V—constraints that arise naturally when matrix factorizations are viewed as feature learning for large-margin linear prediction (Section 2).
Unlike low-rank factorizations, such constraints lead to convex optimization problems that can be formulated as semi-definite programs (Section 4). Throughout the paper, we focus on using low-norm factorizations for "collaborative prediction": predicting unobserved entries of a target matrix Y, based on a subset S of observed entries Y_S. In Section 5, we present generalization error bounds for collaborative prediction using low-norm factorizations.

2 Matrix Factorization as Feature Learning

Using a low-rank model for collaborative prediction [5, 6, 3] is straightforward: a low-rank matrix X is sought that minimizes a loss versus the observed entries Y_S. Unobserved entries in Y are predicted according to X. Matrices of rank at most k are those that can be factored into X = UV′, U ∈ R^{n×k}, V ∈ R^{m×k}, and so seeking a low-rank matrix is equivalent to seeking a low-dimensional factorization. If one of the matrices, say U, is fixed, and only the other matrix V′ needs to be learned, then fitting each column of the target matrix Y is a separate linear prediction problem. Each row of U functions as a "feature vector", and each column of V′ is a linear predictor, predicting the entries in the corresponding column of Y based on the "features" in U. In collaborative prediction, both U and V are unknown and need to be estimated. This can be thought of as learning feature vectors (rows in U) for each of the rows of Y, enabling good linear prediction across all of the prediction problems (columns of Y) concurrently, each with a different linear predictor (columns of V′). The features are learned without any external information or constraints, which is impossible for a single prediction task (we would use the labels as features). The underlying assumption that enables us to do this in a collaborative filtering situation is that the prediction tasks (columns of Y) are related, in that the same features can be used for all of them, though possibly in different ways.
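To make the conditional view concrete, here is a small sketch (our own toy example, assuming NumPy; none of the names below come from the paper): with U held fixed, each column of Y becomes an independent linear prediction problem whose solution is a row of V.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 30, 8, 3
U = rng.standard_normal((n, k))       # fixed "feature vectors" (one row per row of Y)
V_true = rng.standard_normal((m, k))
Y = U @ V_true.T                      # fully observed target for this toy example

# With U fixed, column a of Y is an independent linear prediction problem:
# find v_a (row a of V) minimizing ||Y[:, a] - U v_a||^2.
V = np.vstack([np.linalg.lstsq(U, Y[:, a], rcond=None)[0] for a in range(m)])
assert np.allclose(U @ V.T, Y)
```

With both U and V unknown, this per-column decomposition is what each half of an alternating optimization would solve.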
Low-rank collaborative prediction corresponds to regularizing by limiting the dimensionality of the feature space—each column is a linear prediction problem in a low-dimensional space. Instead, we suggest allowing an unbounded dimensionality for the feature space, and regularizing by requiring a low-norm factorization, while predicting with large margin. Consider adding to the loss a penalty term which is the sum of squares of entries in U and V, i.e. ‖U‖²_Fro + ‖V‖²_Fro (where ‖·‖_Fro denotes the Frobenius norm). Each "conditional" problem (fitting U given V and vice versa) again decomposes into a collection of standard, this time regularized, linear prediction problems. With an appropriate loss function, or constraints on the observed entries, these correspond to large-margin linear discrimination problems. For example, if we learn a binary observation matrix by minimizing a hinge loss plus such a regularization term, each conditional problem decomposes into a collection of SVMs.

3 Maximum-Margin Matrix Factorizations

Matrices with a factorization X = UV′, where U and V have low Frobenius norm (recall that the dimensionality of U and V is no longer bounded!), can be characterized in several equivalent ways, and are known as low trace norm matrices:

Definition 1. The trace norm ‖X‖_Σ is the sum of the singular values of X.

Lemma 1.

    \|X\|_\Sigma = \min_{X=UV'} \|U\|_{\mathrm{Fro}}\,\|V\|_{\mathrm{Fro}} = \min_{X=UV'} \tfrac{1}{2}\left(\|U\|_{\mathrm{Fro}}^2 + \|V\|_{\mathrm{Fro}}^2\right)

The characterization in terms of the singular value decomposition allows us to characterize low trace norm matrices as the convex hull of bounded-norm rank-one matrices:

Lemma 2.

    \{X \mid \|X\|_\Sigma \le B\} = \operatorname{conv}\left\{\, uv' \;\middle|\; u \in \mathbb{R}^n,\ v \in \mathbb{R}^m,\ |u|^2 = |v|^2 = B \,\right\}

In particular, the trace norm is a convex function, and the set of bounded trace norm matrices is a convex set. For convex loss functions, seeking a bounded trace norm matrix minimizing the loss versus some target matrix is a convex optimization problem.
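Definition 1 and Lemma 1 are easy to check numerically. The sketch below (our own illustration, assuming NumPy) builds a factorization from the SVD, which attains the minimum in Lemma 1:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))

# Definition 1: trace norm = sum of singular values.
trace_norm = np.linalg.svd(X, compute_uv=False).sum()

# Lemma 1: the SVD-based factorization X = (U sqrt(S)) (sqrt(S) V')
# attains the minimum of (1/2)(||U||_Fro^2 + ||V||_Fro^2).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
A = U * np.sqrt(s)        # plays the role of U in X = UV'
B = Vt.T * np.sqrt(s)     # plays the role of V
assert np.allclose(A @ B.T, X)
half_sum = 0.5 * (np.linalg.norm(A, 'fro')**2 + np.linalg.norm(B, 'fro')**2)
assert np.isclose(half_sum, trace_norm)
```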
This contrasts sharply with minimizing loss over low-rank matrices—a non-convex problem. Although the sum-squared error versus a fully observed target matrix can be minimized efficiently using the SVD (despite the optimization problem being non-convex!), minimizing other loss functions, or even minimizing a squared loss versus a partially observed matrix, is a difficult optimization problem with multiple local minima [3]. (The trace norm is also known as the nuclear norm and the Ky-Fan n-norm.) In fact, the trace norm has been suggested as a convex surrogate to the rank for various rank-minimization problems [7]. Here, we justify the trace norm directly, both as a natural extension of large-margin methods and by providing generalization error bounds. To simplify presentation, we focus on binary labels, Y ∈ {±1}^{n×m}. We consider hard-margin matrix factorization, where we seek a minimum trace norm matrix X that matches the observed labels with a margin of one: Y_ia X_ia ≥ 1 for all ia ∈ S. We also consider soft-margin learning, where we minimize a trade-off between the trace norm of X and its hinge loss relative to Y_S:

    \text{minimize} \quad \|X\|_\Sigma + c \sum_{ia \in S} \max(0,\, 1 - Y_{ia} X_{ia}).    (1)

As in maximum-margin linear discrimination, there is an inverse dependence between the norm and the margin. Fixing the margin and minimizing the trace norm is equivalent to fixing the trace norm and maximizing the margin. As in large-margin discrimination with certain infinite-dimensional (e.g. radial) kernels, the data is always separable with sufficiently high trace norm (a trace norm of √(n|S|) is sufficient to attain a margin of one).

The max-norm variant. Instead of constraining the norms of rows in U and V on average, we can constrain all rows of U and V to have small L2 norm, replacing the trace norm with

    \|X\|_{\max} = \min_{X=UV'} \bigl(\max_i |U_i|\bigr)\bigl(\max_a |V_a|\bigr)

where U_i, V_a are rows of U, V. Low max-norm discrimination has a clean geometric interpretation.
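For concreteness, the soft-margin objective (1) can be evaluated directly for a candidate X; the helper below is our own illustration (the function name is ours, assuming NumPy), not an optimizer:

```python
import numpy as np

def soft_margin_objective(X, Y, S, c):
    """Objective (1): trace norm of X plus c times the hinge loss
    over the observed entries S (a list of (i, a) index pairs)."""
    trace_norm = np.linalg.svd(X, compute_uv=False).sum()
    hinge = sum(max(0.0, 1.0 - Y[i, a] * X[i, a]) for i, a in S)
    return trace_norm + c * hinge

# A matrix matching the observed signs with margin >= 1 incurs only
# its trace norm as cost.
Y = np.array([[1.0, -1.0], [-1.0, 1.0]])
S = [(0, 0), (1, 1)]
X = 2.0 * np.eye(2)
assert np.isclose(soft_margin_objective(X, Y, S, c=1.0), 4.0)
```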
First, note that predicting the target matrix with the signs of a rank-k matrix corresponds to mapping the "items" (columns) to points in R^k, and the "users" (rows) to homogeneous hyperplanes, such that each user's hyperplane separates his positive items from his negative items. Hard-margin low-max-norm prediction corresponds to mapping the users and items to points and hyperplanes in a high-dimensional unit sphere such that each user's hyperplane separates his positive and negative items with a large margin (the margin being the inverse of the max-norm).

4 Learning Maximum-Margin Matrix Factorizations

In this section we investigate the optimization problem of learning a MMMF, i.e. a low-norm factorization UV′, given a binary target matrix. Bounding the trace norm of UV′ by ½(‖U‖²_Fro + ‖V‖²_Fro), we can characterize the trace norm in terms of the trace of a positive semi-definite matrix:

Lemma 3 ([7, Lemma 1]). For any X ∈ R^{n×m} and t ∈ R: ‖X‖_Σ ≤ t iff there exist A ∈ R^{n×n} and B ∈ R^{m×m} such that

    \begin{bmatrix} A & X \\ X' & B \end{bmatrix} \succeq 0 \quad \text{and} \quad \operatorname{tr} A + \operatorname{tr} B \le 2t.

(Here A ⪰ 0 denotes that A is positive semi-definite.)

Proof. Note that for any matrix W, ‖W‖²_Fro = tr WW′. If [A X; X′ B] ⪰ 0, we can write it as a product [U; V][U′ V′]. We have X = UV′ and ½(‖U‖²_Fro + ‖V‖²_Fro) = ½(tr A + tr B) ≤ t, establishing ‖X‖_Σ ≤ t. Conversely, if ‖X‖_Σ ≤ t we can write X = UV′ with tr UU′ + tr VV′ ≤ 2t and consider the p.s.d. matrix [UU′ X; X′ VV′].

Lemma 3 can be used in order to formulate minimizing the trace norm as a semi-definite optimization problem (SDP). Soft-margin matrix factorization (1) can be written as:

    \min\ \tfrac{1}{2}(\operatorname{tr} A + \operatorname{tr} B) + c \sum_{ia \in S} \xi_{ia} \quad \text{s.t.} \quad \begin{bmatrix} A & X \\ X' & B \end{bmatrix} \succeq 0, \quad y_{ia} X_{ia} \ge 1 - \xi_{ia}, \quad \xi_{ia} \ge 0 \quad \forall ia \in S    (2)

Associating a dual variable Q_ia with each constraint on X_ia, the dual of (2) is [8, Section 5.4.2]:

    \max\ \sum_{ia \in S} Q_{ia} \quad \text{s.t.} \quad \begin{bmatrix} I & -Q \otimes Y \\ (-Q \otimes Y)' & I \end{bmatrix} \succeq 0, \quad 0 \le Q_{ia} \le c    (3)

where Q ⊗ Y denotes the sparse matrix (Q ⊗ Y)_ia = Q_ia Y_ia for ia ∈ S and zeros elsewhere. The problem is strictly feasible, and there is no duality gap.
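The construction used in the proof of Lemma 3 can also be verified numerically. In this sketch (our own, assuming NumPy), A = U S U′ and B = V S V′ built from the SVD give a p.s.d. block matrix with tr A + tr B = 2‖X‖_Σ:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Construction from the proof of Lemma 3: A = U S U', B = V S V'.
A = (U * s) @ U.T
B = (Vt.T * s) @ Vt
M = np.block([[A, X], [X.T, B]])

assert np.linalg.eigvalsh(M).min() > -1e-9                 # the block matrix is p.s.d.
assert np.isclose(np.trace(A) + np.trace(B), 2 * s.sum())  # tr A + tr B = 2 ||X||_Sigma
```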
The p.s.d. constraint in the dual (3) is equivalent to bounding the spectral norm of Q ⊗ Y, and the dual can also be written as an optimization problem subject to a bound on the spectral norm, i.e. a bound on the singular values of Q ⊗ Y:

    \max\ \sum_{ia \in S} Q_{ia} \quad \text{s.t.} \quad \|Q \otimes Y\|_2 \le 1, \quad 0 \le Q_{ia} \le c \quad \forall ia \in S    (4)

In typical collaborative prediction problems, we observe only a small fraction of the entries in a large target matrix. Such a situation translates to a sparse dual semi-definite program, with the number of variables equal to the number of observed entries. Large-scale SDP solvers can take advantage of such sparsity. The prediction matrix X* minimizing (1) is part of the primal optimal solution of (2), and can be extracted from it directly. Nevertheless, it is interesting to study how the optimal prediction matrix X* can be directly recovered from a dual optimal solution Q* alone. Although unnecessary when relying on the interior point methods used by most SDP solvers (as these return a primal/dual optimal pair), this can enable us to use specialized optimization methods, taking advantage of the simple structure of the dual.

Recovering X* from Q*. As for linear programming, recovering a primal optimal solution directly from a dual optimal solution is not always possible for SDPs. However, at least for the hard-margin problem (no slack) this is possible, and we describe below how an optimal prediction matrix X* can be recovered from a dual optimal solution Q* by calculating a singular value decomposition and solving linear equations. Given a dual optimal Q*, consider its singular value decomposition Q* ⊗ Y = UΛV′. Recall that all singular values of Q* ⊗ Y are bounded by one, and consider only the columns Ũ ∈ R^{n×p} of U and Ṽ ∈ R^{m×p} of V with singular value one. It is possible to show [8, Section 5.4.3], using complementary slackness, that for some matrix R ∈ R^{p×p}, X* = ŨRR′Ṽ′ is an optimal solution to the maximum-margin matrix factorization problem (1).
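As a small sanity check on the constraints of the dual in its spectral-norm form (4), one can test feasibility of a candidate Q directly. The helper below is our own illustration (dense for simplicity, assuming NumPy; the name is ours):

```python
import numpy as np

def dual_feasible(Q, Y, S, c, tol=1e-9):
    """Check the constraints of the dual (4): the spectral norm of the
    sparse matrix Q (x) Y is at most one, and 0 <= Q_ia <= c on S."""
    M = np.zeros(Y.shape)
    for i, a in S:
        M[i, a] = Q[i, a] * Y[i, a]      # (Q (x) Y)_ia, zeros elsewhere
    spectral_ok = np.linalg.norm(M, 2) <= 1.0 + tol   # largest singular value
    box_ok = all(-tol <= Q[i, a] <= c + tol for i, a in S)
    return spectral_ok and box_ok

Y = np.array([[1.0, -1.0], [-1.0, 1.0]])
S = [(0, 0), (1, 1)]
Q = np.array([[0.5, 0.0], [0.0, 0.5]])
assert dual_feasible(Q, Y, S, c=1.0)
assert not dual_feasible(10 * Q, Y, S, c=20.0)   # spectral norm exceeds one
```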
Furthermore, p(p+1)/2 is bounded above by the number of non-zero Q*_ia. When Q*_ia > 0, and assuming hard-margin constraints, i.e. no box constraints in the dual, complementary slackness dictates that X*_ia = Ũ_i RR′ Ṽ′_a = Y_ia, providing us with a linear equation on the p(p+1)/2 entries in the symmetric RR′. For hard-margin matrix factorization, we can therefore recover the entries of RR′ by solving a system of linear equations, with a number of variables bounded by the number of observed entries.

Recovering specific entries. The approach described above requires solving a large system of linear equations (with as many variables as observations). Furthermore, especially when the observations are very sparse (only a small fraction of the entries in the target matrix are observed), the dual solution is much more compact than the prediction matrix: the dual involves a single number for each observed entry. It might be desirable to avoid storing the prediction matrix X* explicitly, and to calculate a desired entry X*_{i0,a0}, or at least its sign, directly from the dual optimal solution Q*. Consider adding the constraint X_{i0,a0} > 0 to the primal SDP (2). If there exists an optimal solution X* to the original SDP with X*_{i0,a0} > 0, then this is also an optimal solution to the modified SDP, with the same objective value. Otherwise, the optimal solution of the modified SDP is not optimal for the original SDP, and the optimal value of the modified SDP is higher (worse) than the optimal value of the original SDP. Introducing the constraint X_{i0,a0} > 0 to the primal SDP (2) corresponds to introducing a new variable Q_{i0,a0} into the dual SDP (3), appearing in Q ⊗ Y (with Y_{i0,a0} = 1) but not in the objective. In this modified dual, the optimal solution Q* of the original dual would always be feasible. But if X*_{i0,a0} < 0 in all primal optimal solutions, then the modified primal SDP has a higher value, and so does the dual, and Q* is no longer optimal for the new dual.
By checking the optimality of Q* for the modified dual, e.g. by attempting to re-optimize it, we can recover the sign of X*_{i0,a0}. We can repeat this test once with Y_{i0,a0} = 1 and once with Y_{i0,a0} = −1, corresponding to X_{i0,a0} < 0. If Y_{i0,a0} X*_{i0,a0} < 0 (in all optimal solutions), then the dual solution can be improved by introducing Q_{i0,a0} with the sign of Y_{i0,a0}.

Predictions for new users. So far, we assumed that learning is done on the known entries in all rows. It is commonly desirable to predict entries in a new partially observed row of Y (a new user), not included in the original training set. This essentially requires solving a "conditional" problem, where V is already known and a new row of U is learned (the predictor for the new user) based on a new partially observed row of X. Using maximum-margin matrix factorization, this is a standard SVM problem.

Max-norm MMMF as a SDP. The max-norm variant can also be written as a SDP, with the primal and dual taking the forms:

    \min\ t + c \sum_{ia \in S} \xi_{ia} \quad \text{s.t.} \quad \begin{bmatrix} A & X \\ X' & B \end{bmatrix} \succeq 0, \quad A_{ii}, B_{aa} \le t \ \forall i, a, \quad y_{ia} X_{ia} \ge 1 - \xi_{ia}, \ \xi_{ia} \ge 0 \ \forall ia \in S    (5)

    \max\ \sum_{ia \in S} Q_{ia} \quad \text{s.t.} \quad \begin{bmatrix} \Gamma & -Q \otimes Y \\ (-Q \otimes Y)' & \Delta \end{bmatrix} \succeq 0, \quad \Gamma, \Delta \text{ diagonal}, \quad \operatorname{tr} \Gamma + \operatorname{tr} \Delta = 1, \quad 0 \le Q_{ia} \le c \ \forall ia \in S    (6)

5 Generalization Error Bounds for Low Norm Matrix Factorizations

Similarly to standard feature-based prediction approaches, collaborative prediction methods can also be analyzed in terms of their generalization ability: how confidently can we predict entries of Y based on our error on the observed entries Y_S? We present here generalization error bounds that hold for any target matrix Y, and for a random subset of observations S, and bound the average error across all entries in terms of the observed margin error. The central assumption, paralleling the i.i.d. source assumption for standard feature-based prediction, is that the observed subset S is picked uniformly at random.

Theorem 4.
For all target matrices Y ∈ {±1}^{n×m} and sample sizes |S| > n log n, and for a uniformly selected sample S of |S| entries in Y, with probability at least 1 − δ over the sample selection, the following holds for all matrices X ∈ R^{n×m} and all γ > 0:

    \frac{1}{nm}\bigl|\{ia \mid X_{ia}Y_{ia} \le 0\}\bigr| \;<\; \frac{1}{|S|}\bigl|\{ia \in S \mid X_{ia}Y_{ia} \le \gamma\}\bigr| \;+\; K\,\frac{\|X\|_\Sigma}{\gamma\sqrt{nm}}\,\sqrt[4]{\ln m}\,\sqrt{\frac{(n+m)\ln n}{|S|}} \;+\; \sqrt{\frac{\ln\bigl(1+|\log(\|X\|_\Sigma/\gamma)|\bigr)}{|S|}} \;+\; \sqrt{\frac{\ln(4/\delta)}{2|S|}}    (7)

and

    \frac{1}{nm}\bigl|\{ia \mid X_{ia}Y_{ia} \le 0\}\bigr| \;<\; \frac{1}{|S|}\bigl|\{ia \in S \mid X_{ia}Y_{ia} \le \gamma\}\bigr| \;+\; 12\,\frac{\|X\|_{\max}}{\gamma}\,\sqrt{\frac{n+m}{|S|}} \;+\; \sqrt{\frac{\ln\bigl(1+|\log(\|X\|_\Sigma/\gamma)|\bigr)}{|S|}} \;+\; \sqrt{\frac{\ln(4/\delta)}{2|S|}}    (8)

where K is a universal constant that does not depend on Y, n, m, γ or any other quantity.

(Footnote 3: The bounds presented here are special cases of bounds for general loss functions that we present and prove elsewhere [8, Section 6.2]. To prove the bounds we bound the Rademacher complexity of bounded trace norm and bounded max-norm matrices (i.e. balls w.r.t. these norms). The unit trace norm ball is the convex hull of outer products of unit norm vectors. It is therefore enough to bound the Rademacher complexity of such outer products, which boils down to analyzing the spectral norm of random matrices. As a consequence of Grothendieck's inequality, the unit max-norm ball is within a factor of two of the convex hull of outer products of sign vectors. The Rademacher complexity of such outer products can be bounded by considering their cardinality.)

To understand the scaling of these bounds, consider n × m matrices X = UV′ where the norms of rows of U and V are bounded by r, i.e. matrices with ‖X‖_max ≤ r². The trace norm of such matrices is bounded by r²√(nm), and so the two bounds agree up to log-factors—the cost of allowing the norm to be low on average but not uniformly. Recall that the conditional problem, where V is fixed and only U is learned, is a collection of low-norm (large-margin) linear prediction problems.
When the norms of rows in U and V are bounded by r, a similar generalization error bound on the conditional problem would include the term (r²/γ)·√(n/|S|), matching the bounds of Theorem 4 up to log-factors—learning both U and V does not introduce significantly more error than learning just one of them. Also of interest is the comparison with bounds for low-rank matrices, for which ‖X‖_Σ ≤ √(rank X)·‖X‖_Fro. In particular, for an n × m rank-k matrix X with entries bounded by B, ‖X‖_Σ ≤ √(knm)·B, and the second term on the right-hand side of (7) becomes:

    K\,\frac{B}{\gamma}\,\sqrt[4]{\ln m}\,\sqrt{\frac{k(n+m)\ln n}{|S|}}    (9)

Although this is the best (up to log factors) that can be expected from scale-sensitive bounds, taking a combinatorial approach, the dependence on the magnitude of the entries in X (and the margin) can be avoided [9]. (Footnote 4: For general loss functions, bounds as in Theorem 4 depend only on the Lipschitz constant of the loss, and (9) is the best, up to log factors, that can be achieved without explicitly bounding the magnitude of the loss function.)

6 Implementation and Experiments

Ratings. In many collaborative prediction tasks, the labels are not binary, but rather are discrete "ratings" in several ordered levels (e.g. one star through five stars). Separating R levels by thresholds −∞ = θ_0 < θ_1 < ··· < θ_R = ∞, and generalizing the hard-margin constraints for binary labels, one can require θ_{Y_ia} + 1 ≤ X_ia ≤ θ_{Y_ia + 1} − 1. A soft-margin version of these constraints, with slack variables for the two constraints on each observed rating, corresponds to a generalization of the hinge loss which is a convex bound on the zero/one level-agreement error (ZOE) [10]. To obtain a loss which is a convex bound on the mean absolute error (MAE—the difference, in levels, between the predicted level and the true level), we introduce R − 1 slack variables for each observed rating—one for each of the R − 1 constraints X_ia ≥ θ_r for r < Y_ia and X_ia ≤ θ_r for r ≥ Y_ia.
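The all-threshold construction can be written down directly for a single entry. The sketch below is our own illustration (the name and the indexing convention r ∈ {1, …, R−1} with Y_ia ∈ {1, …, R} are our reading of the text), accumulating one hinge term per threshold:

```python
def all_threshold_loss(x, y, theta):
    """All-threshold hinge loss for one predicted value x and an observed
    rating y in {1, ..., R}; theta lists thresholds theta_1 .. theta_{R-1}.
    One hinge term per threshold: x should exceed theta_r (by a margin of
    one) for r < y, and fall below theta_r (by a margin of one) for r >= y."""
    loss = 0.0
    for r, t in enumerate(theta, start=1):
        if r < y:
            loss += max(0.0, 1.0 - (x - t))   # penalizes violating x >= t + 1
        else:
            loss += max(0.0, 1.0 + (x - t))   # penalizes violating x <= t - 1
    return loss

# With thresholds (-2, -1, 1, 2) and true rating 3, a prediction of 0
# satisfies every constraint with margin one, so the loss vanishes.
assert all_threshold_loss(0.0, 3, [-2.0, -1.0, 1.0, 2.0]) == 0.0
# A prediction far below the correct band pays more for each crossed threshold,
# which is what makes the loss a bound on the mean absolute error.
assert all_threshold_loss(-2.0, 3, [-2.0, -1.0, 1.0, 2.0]) == 3.0
```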
Both of these soft-margin problems ("immediate-threshold" and "all-threshold") can be formulated as SDPs similar to (2)-(3). Furthermore, it is straightforward to also learn the thresholds (they appear as variables in the primal, and correspond to constraints in the dual)—either a single set of thresholds for the entire matrix, or a separate threshold vector for each row of the matrix (each "user"). Doing the latter allows users to "use ratings differently" and alleviates the need to normalize the data.

Experiments. We conducted preliminary experiments on a subset of the 100K MovieLens Dataset, consisting of the 100 users and 100 movies with the most ratings. We used CSDP [11] to solve the resulting SDPs. The ratings are on a discrete scale of one through five, and we experimented with both generalizations of the hinge loss above, allowing per-user thresholds. We compared against WLRA and K-Medians (described in [12]) as "Baseline" learners. We randomly split the data into four sets. For each of the four possible test sets, we used the remaining sets to calculate a 3-fold cross-validation (CV) error for each method (WLRA, K-Medians, trace norm and max-norm MMMF with immediate-threshold and all-threshold hinge loss) using a range of parameters (rank for WLRA, number of centers for K-Medians, slack cost for MMMF). For each of the four splits, we selected the two MMMF learners with lowest CV ZOE and MAE and the two Baseline learners with lowest CV ZOE and MAE, and measured their error on the held-out test data. Table 1 lists these CV and test errors, and the average test error across all four test sets. On average and on three of the four test sets, MMMF achieves lower MAE than the Baseline learners; on all four of the test sets, MMMF achieves lower ZOE than the Baseline learners.
Table 1: Baseline (top) and MMMF (bottom) methods and parameters that achieved the lowest cross-validation error (on the training data) for each train/test split, and the error of this predictor on the test data. All listed MMMF learners use the "all-threshold" objective.

Test |             ZOE                 |             MAE
Set  | Method              CV    Test  | Method              CV    Test
1    | WLRA rank 2        0.547  0.575 | K-Medians K=2      0.678  0.691
2    | WLRA rank 2        0.550  0.562 | K-Medians K=2      0.686  0.681
3    | WLRA rank 1        0.562  0.543 | K-Medians K=2      0.700  0.681
4    | WLRA rank 2        0.557  0.553 | K-Medians K=2      0.685  0.696
Avg. |                           0.558 |                           0.687
1    | max-norm C=0.0012  0.543  0.562 | max-norm C=0.0012  0.669  0.677
2    | trace norm C=0.24  0.550  0.552 | max-norm C=0.0011  0.675  0.683
3    | max-norm C=0.0012  0.551  0.527 | max-norm C=0.0012  0.668  0.646
4    | max-norm C=0.0012  0.544  0.550 | max-norm C=0.0012  0.667  0.686
Avg. |                           0.548 |                           0.673

(MovieLens data: http://www.cs.umn.edu/Research/GroupLens/. Solving with the immediate-threshold loss took about 30 minutes on a 3.06GHz Intel Xeon; solving with the all-threshold loss took eight to nine hours. The MATLAB code is available at www.ai.mit.edu/~nati/mmmf.)

7 Discussion

Learning maximum-margin matrix factorizations requires solving a sparse semi-definite program. We experimented with generic SDP solvers, and were able to learn with up to tens of thousands of labels. We propose that, just as generic QP solvers do not perform well on SVM problems, special-purpose techniques taking advantage of the very simple structure of the dual (3) are necessary in order to solve large-scale MMMF problems. SDPs were recently suggested for a related, but different, problem: learning the features (or equivalently, the kernel) that are best for a single prediction task [13]. This task is hopeless if the features are completely unconstrained, as they are in our formulation. Lanckriet et al. suggest constraining the allowed features, e.g. to a linear combination of a few "base feature spaces" (or base kernels), which represent the external information necessary to solve a single prediction problem.
It is possible to combine the two approaches, seeking constrained features for multiple related prediction problems, as a way of combining external information (e.g. details of users and of items) and collaborative information. An alternate method for introducing external information into our formulation is adding to U and/or V additional fixed (non-learned) columns representing the external features. This method degenerates to standard SVM learning when Y is a vector rather than a matrix. An important limitation of the approach we have described is that observed entries are assumed to be uniformly sampled. This is made explicit in the generalization error bounds. Such an assumption is typically unrealistic, as, e.g., users tend to rate items they like. At an extreme, it is often desirable to make predictions based only on positive samples. Even in such situations, it is still possible to learn a low-norm factorization, by using appropriate loss functions, e.g. derived from probabilistic models incorporating the observation process. However, obtaining generalization error bounds in this case is much harder. Simply allowing an arbitrary sampling distribution and calculating the expected loss based on this distribution (which is not possible with the trace norm, but is possible with the max-norm [8]) is not satisfying, as this would guarantee low error on items the user is likely to want anyway, but not on items we predict he would like.

Acknowledgments. We would like to thank Sam Roweis for pointing out [7].

References
[1] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning Journal, 42(1):177–196, 2001.
[2] M. Collins, S. Dasgupta, and R. Schapire. A generalization of principal component analysis to the exponential family. In Advances in Neural Information Processing Systems 14, 2002.
[3] N. Srebro and T. Jaakkola. Weighted low rank approximation. In 20th International Conference on Machine Learning, 2003.
[4] D.D.
Lee and H.S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791, 1999.
[5] T. Hofmann. Latent semantic models for collaborative filtering. ACM Trans. Inf. Syst., 22(1):89–115, 2004.
[6] B. Marlin. Modeling user rating profiles for collaborative filtering. In Advances in Neural Information Processing Systems, volume 16, 2004.
[7] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings of the American Control Conference, volume 6, 2001.
[8] N. Srebro. Learning with Matrix Factorization. PhD thesis, Massachusetts Institute of Technology, 2004.
[9] N. Srebro, N. Alon, and T. Jaakkola. Generalization error bounds for collaborative prediction with low-rank matrices. In Advances in Neural Information Processing Systems 17, 2005.
[10] A. Shashua and A. Levin. Ranking with large margin principle: Two approaches. In Advances in Neural Information Processing Systems, volume 15, 2003.
[11] B. Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11(1):613–623, 1999.
[12] B. Marlin. Collaborative filtering: A machine learning perspective. Master's thesis, University of Toronto, 2004.
[13] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.
Real-Time Pitch Determination of One or More Voices by Nonnegative Matrix Factorization Fei Sha and Lawrence K. Saul Dept. of Computer and Information Science University of Pennsylvania, Philadelphia, PA 19104 {feisha,lsaul}@cis.upenn.edu Abstract An auditory “scene”, composed of overlapping acoustic sources, can be viewed as a complex object whose constituent parts are the individual sources. Pitch is known to be an important cue for auditory scene analysis. In this paper, with the goal of building agents that operate in human environments, we describe a real-time system to identify the presence of one or more voices and compute their pitch. The signal processing in the front end is based on instantaneous frequency estimation, a method for tracking the partials of voiced speech, while the pattern-matching in the back end is based on nonnegative matrix factorization, an unsupervised algorithm for learning the parts of complex objects. While supporting a framework to analyze complicated auditory scenes, our system maintains real-time operability and state-of-the-art performance in clean speech. 1 Introduction Nonnegative matrix factorization (NMF) is an unsupervised algorithm for learning the parts of complex objects [11]. The algorithm represents high dimensional inputs (“objects”) by a linear superposition of basis functions (“parts”) in which both the linear coefficients and basis functions are constrained to be nonnegative. Applied to images of faces, NMF learns basis functions that correspond to eyes, noses, and mouths; applied to handwritten digits, it learns basis functions that correspond to cursive strokes. The algorithm has also been implemented in real-time embedded systems as part of a visual front end [10]. Recently, it has been suggested that NMF can play a similarly useful role in speech and audio processing [16, 17]. 
An auditory “scene”, composed of overlapping acoustic sources, can be viewed as a complex object whose constituent parts are the individual sources. Pitch is known to be an extremely important cue for source separation and auditory scene analysis [4]. It is also an acoustic cue that seems amenable to modeling by NMF. In particular, we can imagine the basis functions in NMF as harmonic stacks of individual periodic sources (e.g., voices, instruments), which are superposed to give the magnitude spectrum of a mixed signal. The pattern-matching computations of NMF are reminiscent of longstanding template-based models of pitch perception [6]. Our interest in NMF lies mainly in its use for speech processing. In this paper, we describe a real-time system to detect the presence of one or more voices and determine their pitch. Learning plays a crucial role in our system: the basis functions of NMF are trained offline from data to model the particular timbres of voiced speech, which vary across different phonetic contexts and speakers. In related work, Smaragdis and Brown used NMF to model polyphonic piano music [17]. Our work differs in its focus on speech, real-time processing, and statistical learning of basis functions. A long-term goal is to develop interactive voice-driven agents that respond to the pitch contours of human speech [15]. To be truly interactive, these agents must be able to process input from distant sources and to operate in noisy environments with overlapping speakers. In this paper, we have taken an important step toward this goal by maintaining real-time operability and state-of-the-art performance in clean speech while developing a framework that can analyze more complicated auditory scenes. These are inherently competing goals in engineering. Our focus on actual system-building also distinguishes our work from many other studies of overlapping periodic sources [5, 9, 19, 20, 21]. The organization of this paper is as follows. 
In section 2, we describe the signal processing in our front end that converts speech signals into a form that can be analyzed by NMF. In section 3, we describe the use of NMF for pitch tracking—namely, the learning of basis functions for voiced speech, and the nonnegative deconvolution for real-time analysis. In section 4, we present experimental results on signals with one or more voices. Finally, in section 5, we conclude with plans for future work. 2 Signal processing A periodic signal is characterized by its fundamental frequency, f0. It can be decomposed by Fourier analysis as the sum of sinusoids—or partials—whose frequencies occur at integer multiples of f0. For periodic signals with unknown f0, the frequencies of the partials can be inferred from peaks in the magnitude spectrum, as computed by an FFT. Voiced speech is perceived as having a pitch at the fundamental frequency of vocal cord vibration. Perfect periodicity is an idealization, however; the waveforms of voiced speech are non-stationary, quasiperiodic signals. In practice, one cannot reliably extract the partials of voiced speech by simply computing windowed FFTs and locating peaks in the magnitude spectrum. In this section, we review a more robust method, known as instantaneous frequency (IF) estimation [1], for extracting the stable sinusoidal components of voiced speech. This method is the basis for the signal processing in our front-end. The starting point of IF estimation is to model the voiced speech signal, s(t), by a sum of amplitude and frequency-modulated sinusoids: s(t) = X i αi(t) cos Z t 0 dt ωi(t) + θi . (1) The arguments of the cosines in eq. (1) are called the instantaneous phases; their derivatives with respect to time yield the so-called instantaneous frequencies ωi(t). If the amplitudes αi(t) and frequencies ωi(t) are stationary, then eq. (1) reduces to a weighted sum of pure sinusoids. 
For nonstationary signals, \omega_i(t) intuitively represents the instantaneous frequency of the ith partial at time t. The short-time Fourier transform (STFT) provides an efficient tool for IF estimation [2]. The STFT of s(t) with windowing function w(t) is given by:

    F(\omega, t) = \int d\tau\, s(\tau)\, w(\tau - t)\, e^{-j\omega\tau}.    (2)

Let z(\omega, t) = e^{j\omega t} F(\omega, t) denote the analytic signal of the Fourier component of s(t) with frequency \omega, and let a = Re[z] and b = Im[z] denote its real and imaginary parts. We can define a mapping from the time-frequency plane of the STFT to another frequency axis \lambda(\omega, t) by:

    \lambda(\omega, t) = \frac{\partial}{\partial t} \arg[z(\omega, t)] = \frac{a\, \partial b/\partial t - b\, \partial a/\partial t}{a^2 + b^2}.    (3)

The derivatives on the right hand side can be computed efficiently via STFTs [2]. Note that the right hand side of eq. (3) differentiates the instantaneous phase associated with a particular Fourier component of s(t). IF estimation identifies the stable fixed points [7, 8] of this mapping, given by

    \lambda(\omega^*, t) = \omega^* \quad \text{and} \quad \left. \partial\lambda/\partial\omega \right|_{\omega=\omega^*} < 1,    (4)

as the instantaneous frequencies of the partials that appear in eq. (1). Intuitively, these fixed points occur where the notions of energy at frequency \omega in eqs. (1) and (2) coincide—that is, where the IF and STFT representations appear most consistent.

[Figure 1: Top: instantaneous frequencies of estimated partials for the utterance “The north wind and the sun were disputing.” Bottom: f0 contour derived from a laryngograph recording.]

The top panel of Fig. 1 shows the IFs of partials extracted by this method for a speech signal with sliding and overlapping analysis windows. The bottom panel shows the pitch contour. Note that in regions of voiced speech, indicated by nonzero f0 values, the IFs exhibit a clear harmonic structure, while in regions of unvoiced speech, they do not.
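As an illustration, the phase-derivative mapping of eqs. (3)–(4) can be approximated by finite differences of the STFT phase across consecutive frames (a phase-vocoder computation). The sketch below is not the authors' implementation; the window, hop, and energy-floor values (`n_win`, `hop`, `floor`) are illustrative assumptions.

```python
import numpy as np

def princarg(p):
    """Wrap phase values into [-pi, pi)."""
    return np.mod(p + np.pi, 2 * np.pi) - np.pi

def if_map(s, fs, n_win=256, hop=49):
    """Finite-difference approximation of the IF mapping lambda(omega, t)
    of eq. (3): differentiate the STFT phase between consecutive frames.
    Returns bin center frequencies, lambda per frame, and frame magnitudes."""
    w = np.hanning(n_win)
    starts = range(0, len(s) - n_win, hop)
    Z = np.array([np.fft.rfft(s[i:i + n_win] * w) for i in starts])
    freqs = np.fft.rfftfreq(n_win, 1.0 / fs)        # bin centers (Hz)
    expected = 2 * np.pi * freqs * hop / fs         # expected phase advance per hop
    dphi = np.angle(Z[1:]) - np.angle(Z[:-1])
    dev = princarg(dphi - expected)                 # deviation from bin center
    lam = freqs + dev * fs / (2 * np.pi * hop)      # lambda(omega, t), in Hz
    return freqs, lam, np.abs(Z[1:])

def partials(freqs, lam_t, mag_t, floor=1e-3):
    """Stable fixed points of eq. (4): bins where lambda crosses the
    identity from above, keeping only bins with non-negligible energy."""
    g = lam_t - freqs
    found = []
    for k in range(len(freqs) - 1):
        if g[k] >= 0 > g[k + 1] and mag_t[k] > floor * mag_t.max():
            found.append(lam_t[k])
    return found
```

For a pure sinusoid, every bin near the component reports the same instantaneous frequency, so the map `lam` crosses the identity exactly once there, at the true frequency.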
In summary, the signal processing in our front end extracts partials with frequencies \omega^*_i(t) and nonnegative amplitudes |F(\omega^*_i(t), t)|, where t indexes the time of the analysis window and i indexes the number of extracted partials. Further analysis of the signal is performed by the NMF algorithm described in the next section, which is used to detect the presence of one or more voices and to estimate their f0 values. Similar front ends have been used in other studies of pitch tracking and source separation [1, 2, 7, 13].

3 Nonnegative matrix factorization

For mixed signals of overlapping speakers, our front end outputs the mixture of partials extracted from several voices. How can we analyze this output by NMF? In this section, we show: (i) how to learn nonnegative basis functions that model the characteristic timbres of voiced speech, and (ii) how to decompose mixed signals in terms of these basis functions. We briefly review NMF [11]. Given observations y_t, the goal of NMF is to compute basis functions W and linear coefficients x_t such that the reconstructed vectors \tilde{y}_t = W x_t best match the original observations. The observations, basis functions, and coefficients are constrained to be nonnegative. Reconstruction errors are measured by the generalized Kullback-Leibler divergence:

    G(y, \tilde{y}) = \sum_\alpha \left[ y_\alpha \log(y_\alpha / \tilde{y}_\alpha) - y_\alpha + \tilde{y}_\alpha \right],    (5)

which is lower bounded by zero and vanishes if and only if y = \tilde{y}. NMF works by optimizing the total reconstruction error \sum_t G(y_t, \tilde{y}_t) in terms of the basis functions W and coefficients x_t. We form three matrices by concatenating the column vectors y_t, \tilde{y}_t and x_t separately and denote them by Y, \tilde{Y} and X respectively. Multiplicative updates for the optimization problem are given in terms of the elements of these matrices:

    W_{\alpha\beta} \leftarrow W_{\alpha\beta} \left[ \sum_t X_{\beta t} \frac{Y_{\alpha t}}{\tilde{Y}_{\alpha t}} \right], \qquad X_{\beta t} \leftarrow X_{\beta t} \frac{\sum_\alpha W_{\alpha\beta}\, Y_{\alpha t} / \tilde{Y}_{\alpha t}}{\sum_\gamma W_{\gamma\beta}}.    (6)

These alternating updates are guaranteed to converge to a local minimum of the total reconstruction error; see [11] for further details.
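A minimal NumPy sketch of these alternating updates, written in the standard Lee–Seung form for the generalized KL divergence (this form carries an explicit normalization in the W update as well; the epsilon guards against division by zero are an implementation detail, not part of the derivation):

```python
import numpy as np

def kl_div(Y, Yh):
    """Generalized KL divergence of eq. (5), summed over all matrix entries."""
    return float(np.sum(Y * np.log((Y + 1e-12) / (Yh + 1e-12)) - Y + Yh))

def nmf(Y, r, n_iter=100, seed=0):
    """Multiplicative updates for Y ~ W X under the generalized KL divergence."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = rng.random((m, r)) + 0.1           # nonnegative random initialization
    X = rng.random((r, n)) + 0.1
    for _ in range(n_iter):
        # update coefficients, then basis functions (each step is non-increasing)
        X *= (W.T @ (Y / (W @ X + 1e-12))) / W.sum(axis=0)[:, None]
        W *= ((Y / (W @ X + 1e-12)) @ X.T) / X.sum(axis=1)
    return W, X
```

Because each multiplicative update is guaranteed not to increase the divergence, running more iterations from the same initialization can only lower (or hold) the reconstruction error.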
In our application of NMF to pitch estimation, the vectors yt store vertical “time slices” of the IF representation in Fig. 1. Specifically, the elements of yt store the magnitude spectra |F(ω∗ i (t), t)| of extracted partials at time t; the instantaneous frequency axis is discretized on a log scale so that each element of yt covers 1/36 octave of the frequency spectrum. The columns of W store basis functions, or harmonic templates, for the magnitude spectra of voiced speech with different fundamental frequencies. (An additional column in W stores a non-harmonic template for unvoiced speech.) In this study, only one harmonic template was used per fundamental frequency. The fundamental frequencies range from 50Hz to 400Hz, spaced and discretized on a log scale. We constrained the harmonic templates for different fundamental frequencies to be related by a simple translation on the log-frequency axis. Tying the columns of W in this way greatly reduces the number of parameters that must be estimated by a learning algorithm. Finally, the elements of xt store the coefficients that best reconstruct yt by linearly superposing harmonic templates of W. Note that only partials from the same source form harmonic relations. Thus, the number of nonzero elements in xt indicates the number of periodic sources at time t, while the indices of nonzero elements indicate their fundamental frequencies. It is in this sense that the reconstruction yt ≈Wxt provides an analysis of the auditory scene. 3.1 Learning the basis functions of voiced speech The harmonic templates in W were estimated from the voiced speech of (non-overlapping) speakers in the Keele database [14]. The Keele database provides aligned pitch contours derived from laryngograph recordings. The first halves of all utterances were used for training, while the second halves were reserved for testing. 
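The translation tying of the harmonic templates can be sketched as follows. On a log-frequency axis with 36 bins per octave, partial k of any f0 sits round(36·log2 k) bins above the fundamental, so templates for different f0 values are shifted copies of one prototype. The prototype's 1/k amplitude decay and partial count below are illustrative assumptions; in the actual system the template shape is learned by NMF.

```python
import numpy as np

def harmonic_proto(n_bins, n_partials=8, bins_per_octave=36):
    """Idealized harmonic stack on a log-frequency axis: partial k lands
    round(bins_per_octave * log2(k)) bins above the fundamental."""
    p = np.zeros(n_bins)
    for k in range(1, n_partials + 1):
        b = int(round(bins_per_octave * np.log2(k)))
        if b < n_bins:
            p[b] += 1.0 / k            # assumed decaying partial amplitudes
    return p

def tied_templates(proto, n_f0, n_bins):
    """Basis matrix W whose columns are translated copies of one prototype,
    implementing the translation tying on the log-frequency axis."""
    W = np.zeros((n_bins, n_f0))
    for j in range(n_f0):
        seg = proto[:n_bins - j]       # shift up by j bins, truncate at the top
        W[j:j + len(seg), j] = seg
    return W
```

With this tying, only the prototype's shape needs to be estimated, which is what makes the learning problem tractable.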
Given the vectors y_t computed by IF estimation in the front end, the problem of NMF is to estimate the columns of W and the reconstruction coefficients x_t. Each x_t has only two nonzero elements (one indicating the reference value for f0, the other corresponding to the non-harmonic template of the basis matrix W); their magnitudes must still be estimated by NMF. The estimation was performed by iterating the updates in eq. (6). Fig. 2 (left) compares the harmonic template at 100 Hz before and after learning. While the template is initialized with broad spectral peaks, it is considerably sharpened by the NMF learning algorithm. Fig. 2 (right) shows four examples from the Keele database (from snippets of voiced speech with f0 = 100 Hz) that were used to train this template. Note that even among these four partial profiles there is considerable variance. The learned template is derived to minimize the total reconstruction error over all segments of voiced speech in the training data.

[Figure 2: Left: harmonic template before and after learning for voiced speech at f0 = 100 Hz. The learned template (bottom) has a much sharper spectral profile. Right: observed partials from four speakers with f0 = 100 Hz (panels “female: cloak”, “male: stronger”, “male: travel”, “male: the”).]

3.2 Nonnegative deconvolution for estimating f0 of one or more voices

Once the basis functions in W have been estimated, computing x such that y ≈ Wx under the measure of eq. (5) simplifies to the problem of nonnegative deconvolution. Nonnegative deconvolution has been applied to problems in fundamental frequency estimation [16], music analysis [17] and sound localization [12]. In our model, nonnegative deconvolution of y ≈ Wx yields an estimate of the number of periodic sources in y as well as their f0 values.
Ideally, the number of nonzero reconstruction weights in x reveals the number of sources, and the corresponding columns in the basis matrix W reveal their f0 values. In practice, the index of the largest component of x is found, and its corresponding f0 value is deemed to be the dominant fundamental frequency. The second largest component of x is then used to extract a secondary fundamental frequency, and so on. A thresholding heuristic can be used to terminate the search for additional sources. Unvoiced speech is detected by a simple frame-based classifier trained to make voiced/unvoiced distinctions from the observation y and its nonnegative deconvolution x. The pattern-matching computations in NMF are reminiscent of well-known models of harmonic template matching [6]. Two main differences are worth noting. First, the templates in NMF are learned from labeled speech data. We have found this to be essential in their generalization to unseen cases. It is not obvious how to craft a harmonic template “by hand” that manages the variability of partial profiles in Fig. 2 (right). Second, the template matching in NMF is framed by nonnegativity constraints. Specifically, the algorithm models observed partials by a nonnegative superposition of harmonic stacks. The cost function in eq. (5) also diverges if \tilde{y}_\alpha = 0 when y_\alpha is nonzero; this useful property ensures that minima of eq. (5) must explain each observed partial by its attribution to one or more sources. This property does not hold for traditional least-squares linear reconstructions.

4 Implementation and results

We have implemented both the IF estimation in section 2 and the nonnegative deconvolution in section 3.2 in a real-time system for pitch tracking. The software runs on a laptop computer with a visual display that shows the contour of estimated f0 values scrolling in real time. After the signal is downsampled to 4900 Hz, IF estimation is performed in 10 ms shifts with an analysis window of 50 ms.
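The nonnegative deconvolution of section 3.2 amounts to running the multiplicative update for x alone with W held fixed. A sketch, with a hypothetical relative threshold standing in for the termination heuristic described above:

```python
import numpy as np

def nn_deconv(W, y, n_iter=500):
    """Nonnegative deconvolution: with W fixed, minimize eq. (5) over x
    by iterating the multiplicative X update alone."""
    x = np.full(W.shape[1], 1.0 / W.shape[1])   # uniform nonnegative start
    wsum = W.sum(axis=0)
    for _ in range(n_iter):
        x *= (W.T @ (y / (W @ x + 1e-12))) / wsum
    return x

def pick_sources(x, thresh=0.2):
    """Largest-first search for active templates, terminated by a
    (hypothetical) relative threshold on the coefficient magnitude."""
    order = np.argsort(-x)
    top = x[order[0]]
    return [int(i) for i in order if x[i] > thresh * top]
```

For templates with disjoint support the updates recover the mixing weights exactly, so the active columns (and hence the f0 values) are read off directly from x.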
Partials extracted from the fixed points of eq. (4) are discretized on a log-frequency axis. The columns of the basis matrix W provide harmonic templates for f0 = 50 Hz to f0 = 400 Hz with a step size of 1/36 octave. To achieve real-time performance and reduce system latency, the system does not postprocess the f0 values obtained in each frame from nonnegative deconvolution: in particular, there is no dynamic programming to smooth the pitch contour, as commonly done in many pitch tracking algorithms [18]. We have found that our algorithm performs well and yields smooth pitch contours (for non-overlapping voices) even without this postprocessing.

4.1 Pitch determination of clean speech signals

Table 1 compares the performance of our algorithm on clean speech to RAPT [18], a state-of-the-art pitch tracker based on autocorrelation and dynamic programming. Four error types are reported: the percentage of voiced frames misclassified as unvoiced (VE), the percentage of unvoiced frames misclassified as voiced (UE), the percentage of voiced frames with gross pitch errors (GPE) where predicted and reference f0 values differ by more than 20%, and the root-mean-squared (RMS) difference between predicted and reference f0 values when there are no gross pitch errors. The results were obtained on the second halves of utterances reserved for testing in the Keele database, as well as the full set of utterances in the Edinburgh database [3].

Table 1: Comparison between our algorithm and RAPT [18] on the test portion of the Keele database (see text) and the full Edinburgh database, in terms of the percentages of voiced errors (VE), unvoiced errors (UE), and gross pitch errors (GPE), as well as the root mean square (RMS) deviation in Hz.

  Keele database        VE (%)   UE (%)   GPE (%)   RMS (Hz)
  NMF                   7.7      4.6      0.9       4.3
  RAPT                  3.2      6.8      2.2       4.4

  Edinburgh database    VE (%)   UE (%)   GPE (%)   RMS (Hz)
  NMF                   7.8      4.4      0.7       5.8
  RAPT                  4.5      8.4      1.9       5.3
As shown in the table, the performance of our algorithm is comparable to that of RAPT.

4.2 Pitch determination of overlapping voices and noisy speech

We have also examined the robustness of our system to noise and overlapping speakers. Fig. 3 shows the f0 values estimated by our algorithm from a mixture of two voices—one with ascending pitch, the other with descending pitch. Each voice spans one octave. The dominant and secondary f0 values extracted in each frame by nonnegative deconvolution are shown. The algorithm recovers the f0 values of the individual voices almost perfectly, though it does not currently make any effort to track the voices through time. (This is a subject for future work.) Fig. 4 shows in more detail how IF estimation and nonnegative deconvolution are affected by interfering speakers and noise. A clean signal from a single speaker is shown in the top row of the plot, along with its log power spectra, partials extracted by IF estimation, estimated f0, and reconstructed harmonic stack. The second and third rows show the effects of adding white noise and an overlapping speaker, respectively. Both types of interference degrade the harmonic structure in the log power spectra and extracted partials. However, nonnegative deconvolution is still able to recover the pitch of the original speaker, as well as the pitch of the second speaker. On larger evaluations of the algorithm’s robustness, we have obtained results comparable to RAPT over a wide range of SNRs (as low as 0 dB).

[Figure 3: Left: spectrogram of a mixture of two voices with ascending and descending f0 contours. Right: f0 values estimated by NMF (dominant and secondary pitch).]
[Figure 4: Effect of white noise (middle row) and overlapping speaker (bottom row) on clean speech (top row). Both types of interference degrade the harmonic structure in the log power spectra (second column) and the partials extracted by IF estimation (third column). The results of nonnegative deconvolution (fourth column), however, are fairly robust. Both the pitch of the original speaker at f0 = 200 Hz and the overlapping speaker at f0 = 300 Hz are recovered. The fifth column displays the reconstructed profile of extracted partials from activated harmonic templates.]

5 Discussion

There exists a large body of related work on fundamental frequency estimation of overlapping sources [5, 7, 9, 19, 20, 21]. Our contributions in this paper are to develop a new framework based on recent advances in unsupervised learning and to study the problem with the constraints imposed by real-time system building. Nonnegative deconvolution is similar to EM algorithms [7] for harmonic template matching, but it does not impose normalization constraints on spectral peaks as if they represented a probability distribution. Important directions for future work are to train a richer set of harmonic templates by NMF, to incorporate the frame-based computations of nonnegative deconvolution into a dynamical model, and to embed our real-time system in interactive agents that respond to the pitch contours of human speech. All these directions are being actively pursued.

References

[1] T. Abe, T. Kobayashi, and S. Imai. Harmonics tracking and pitch extraction based on instantaneous frequency. In Proc. of ICASSP, pages 756–759, 1995.
[2] T. Abe, T. Kobayashi, and S. Imai.
Robust pitch estimation with harmonics enhancement in noisy environments based on instantaneous frequency. In Proc. of ICSLP, pages 1277–1280, 1996.
[3] P. Bagshaw, S. M. Hiller, and M. A. Jack. Enhanced pitch tracking and the processing of f0 contours for computer aided intonation teaching. In Proc. of 3rd European Conference on Speech Communication and Technology, pages 1003–1006, 1993.
[4] A. S. Bregman. Auditory Scene Analysis: The Perceptual Organization of Sound. MIT Press, 2nd edition, 1999.
[5] A. de Cheveigne and H. Kawahara. Multiple period estimation and pitch perception model. Speech Communication, 27:175–185, 1999.
[6] J. Goldstein. An optimum processor theory for the central formation of the pitch of complex tones. J. Acoust. Soc. Am., 54:1496–1516, 1973.
[7] M. Goto. A robust predominant-F0 estimation method for real-time detection of melody and bass lines in CD recordings. In Proc. of ICASSP, pages 757–760, June 2000.
[8] H. Kawahara, H. Katayose, A. de Cheveigné, and R. D. Patterson. Fixed point analysis of frequency to instantaneous frequency mapping for accurate estimation of f0 and periodicity. In Proc. of EuroSpeech, pages 2781–2784, 1999.
[9] A. Klapuri, T. Virtanen, and J.-M. Holm. Robust multipitch estimation for the analysis and manipulation of polyphonic musical signals. In Proc. of COST-G6 Conference on Digital Audio Effects, Verona, Italy, 2000.
[10] D. D. Lee and H. S. Seung. Learning in intelligent embedded systems. In Proc. of USENIX Workshop on Embedded Systems, 1999.
[11] D. D. Lee and H. S. Seung. Learning the parts of objects with nonnegative matrix factorization. Nature, 401:788–791, 1999.
[12] Y. Lin, D. D. Lee, and L. K. Saul. Nonnegative deconvolution for time of arrival estimation. In Proc. of ICASSP, 2004.
[13] T. Nakatani and T. Irino. Robust fundamental frequency estimation against background noise and spectral distortion. In Proc. of ICSLP, pages 1733–1736, 2002.
[14] F. Plante, G. F. Meyer, and W. A. Ainsworth.
A pitch extraction reference database. In Proc. of EuroSpeech, pages 837–840, 1995.
[15] L. K. Saul, D. D. Lee, C. L. Isbell, and Y. LeCun. Real time voice processing with audiovisual feedback: toward autonomous agents with perfect pitch. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[16] L. K. Saul, F. Sha, and D. D. Lee. Statistical signal processing with nonnegativity constraints. In Proc. of EuroSpeech, pages 1001–1004, 2003.
[17] P. Smaragdis and J. C. Brown. Non-negative matrix factorization for polyphonic music transcription. In Proc. of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 177–180, 2003.
[18] D. Talkin. A robust algorithm for pitch tracking (RAPT). In W. B. Kleijn and K. K. Paliwal, editors, Speech Coding and Synthesis, chapter 14. Elsevier Science B.V., 1995.
[19] T. Tolonen and M. Karjalainen. A computationally efficient multipitch analysis model. IEEE Trans. on Speech and Audio Processing, 8(6):708–716, 2000.
[20] T. Virtanen and A. Klapuri. Separation of harmonic sounds using multipitch analysis and iterative parameter estimation. In Proc. of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 83–86, New Paltz, NY, USA, Oct 2001.
[21] M. Wu, D. Wang, and G. J. Brown. A multipitch tracking algorithm for noisy speech. IEEE Trans. on Speech and Audio Processing, 11:229–241, 2003.
Hierarchical Distributed Representations for Statistical Language Modeling John Blitzer, Kilian Q. Weinberger, Lawrence K. Saul, and Fernando C. N. Pereira Department of Computer and Information Science, University of Pennsylvania Levine Hall, 3330 Walnut Street, Philadelphia, PA 19104 {blitzer,kilianw,lsaul,pereira}@cis.upenn.edu Abstract Statistical language models estimate the probability of a word occurring in a given context. The most common language models rely on a discrete enumeration of predictive contexts (e.g., n-grams) and consequently fail to capture and exploit statistical regularities across these contexts. In this paper, we show how to learn hierarchical, distributed representations of word contexts that maximize the predictive value of a statistical language model. The representations are initialized by unsupervised algorithms for linear and nonlinear dimensionality reduction [14], then fed as input into a hierarchical mixture of experts, where each expert is a multinomial distribution over predicted words [12]. While the distributed representations in our model are inspired by the neural probabilistic language model of Bengio et al. [2, 3], our particular architecture enables us to work with significantly larger vocabularies and training corpora. For example, on a large-scale bigram modeling task involving a sixty thousand word vocabulary and a training corpus of three million sentences, we demonstrate consistent improvement over class-based bigram models [10, 13]. We also discuss extensions of our approach to longer multiword contexts. 1 Introduction Statistical language models are essential components of natural language systems for human-computer interaction. They play a central role in automatic speech recognition [11], machine translation [5], statistical parsing [8], and information retrieval [15]. 
These models estimate the probability that a word will occur in a given context, where in general a context specifies a relationship to one or more words that have already been observed. The simplest, most studied case is that of n-gram language modeling, where each word is predicted from the preceding n−1 words. The main problem in building these models is that the vast majority of word combinations occur very infrequently, making it difficult to estimate accurate probabilities of words in most contexts. Researchers in statistical language modeling have developed a variety of smoothing techniques to alleviate this problem of data sparseness. Most smoothing methods are based on simple back-off formulas or interpolation schemes that discount the probability of observed events and assign the “leftover” probability mass to events unseen in training [7]. Unfortunately, these methods do not typically represent or take advantage of statistical regularities among contexts. One expects the probabilities of rare or unseen events in one context to be related to their probabilities in statistically similar contexts. Thus, it should be possible to estimate more accurate probabilities by exploiting these regularities. Several approaches have been suggested for sharing statistical information across contexts. The aggregate Markov model (AMM) of Saul and Pereira [13] (also discussed by Hofmann and Puzicha [10] as a special case of the aspect model) factors the conditional probability table of a word given its context by a latent variable representing context “classes”. However, this latent variable approach is difficult to generalize to multiword contexts, as the size of the conditional probability table for class given context grows exponentially with the context length. The neural probabilistic language model (NPLM) of Bengio et al. [2, 3] achieved significant improvements over state-of-the-art smoothed n-gram models [6]. 
The NPLM encodes contexts as low-dimensional continuous vectors. These are fed to a multilayer neural network that outputs a probability distribution over words. The low-dimensional vectors and the parameters of the network are trained simultaneously to minimize the perplexity of the language model. This model has no difficulty encoding multiword contexts, but its training and application are very costly because of the need to compute a separate normalization for the conditional probabilities associated to each context. In this paper, we introduce and evaluate a statistical language model that combines the advantages of the AMM and NPLM. Like the NPLM, it can be used for multiword contexts, and like the AMM it avoids per-context normalization. In our model, contexts are represented as low-dimensional real vectors initialized by unsupervised algorithms for dimensionality reduction [14]. The probabilities of words given contexts are represented by a hierarchical mixture of experts (HME) [12], where each expert is a multinomial distribution over predicted words. This tree-structured mixture model allows a rich dependency on context without expensive per-context normalization. Proper initialization of the distributed representations is crucial; in particular, we find that initializations from the results of linear and nonlinear dimensionality reduction algorithms lead to better models (with significantly lower test perplexities) than random initialization. In practice our model is several orders of magnitude faster to train and apply than the NPLM, enabling us to work with larger vocabularies and training corpora. We present results on a large-scale bigram modeling task, showing that our model also leads to significant improvements over comparable AMMs. 2 Distributed representations of words Natural language has complex, multidimensional semantics. As a trivial example, consider the following four sentences: The vase broke. The vase contains water. The window broke. 
The window contains water.

Of these four sentences, the last is syntactically valid but semantically meaningless. A two-bit distributed representation of the words “vase” and “window” suffices to express that a vase is both a container and breakable, while a window is breakable but cannot be a container. More generally, we expect low dimensional continuous representations of words to be even more effective at capturing semantic regularities.

Distributed representations of words can be derived in several ways. In a given corpus of text, for example, consider the matrix of bigram counts whose element C_ij records the number of times that word w_j follows word w_i. Further, let p_ij = C_ij / \sum_k C_ik denote the conditional frequencies derived from these counts, and let \vec{p}_i denote the V-dimensional frequency vector with elements p_ij, where V is the vocabulary size. Note that the vectors \vec{p}_i themselves provide a distributed representation of the words w_i in the corpus. For large vocabularies and training corpora, however, this is an extremely unwieldy representation, tantamount to storing the full matrix of bigram counts. Thus, it is natural to seek a lower dimensional representation that captures the same information. To this end, we need to map each vector \vec{p}_i to some d-dimensional vector \vec{x}_i, with d ≪ V. We consider two methods in dimensionality reduction for this problem. The results from these methods are then used to initialize the HME architecture in the next section.

2.1 Linear dimensionality reduction

The simplest form of dimensionality reduction is principal component analysis (PCA). PCA computes a linear projection of the frequency vectors \vec{p}_i into the low dimensional subspace that maximizes their variance. The variance-maximizing subspace of dimensionality d is spanned by the top d eigenvectors of the frequency vector covariance matrix. The eigenvalues of the covariance matrix measure the variance captured by each axis of the subspace.
The effect of PCA can also be understood as a translation and rotation of the frequency vectors ⃗pi, followed by a truncation that preserves only their first d elements. 2.2 Nonlinear dimensionality reduction Intuitively, we would like to map the vectors ⃗pi into a low dimensional space where semantically similar words remain close together and semantically dissimilar words are far apart. Can we find a nonlinear mapping that does this better than PCA? Weinberger et al. recently proposed a new solution to this problem based on semidefinite programming [14]. Let ⃗xi denote the image of ⃗pi under this mapping. The mapping is discovered by first learning the V ×V matrix of Euclidean squared distances [1] given by Dij = |⃗xi −⃗xj|2. This is done by balancing two competing goals: (i) to co-locate semantically similar words, and (ii) to separate semantically dissimilar words. The first goal is achieved by fixing the distances between words with similar frequency vectors to their original values. In particular, if ⃗pj and ⃗pk lie within some small neighborhood of each other, then the corresponding element Djk in the distance matrix is fixed to the value |⃗pj −⃗pk|2. The second goal is achieved by maximizing the sum of pairwise squared distances ΣijDij. Thus, we push the words in the vocabulary as far apart as possible subject to the constraint that the distances between semantically similar words do not change. The only freedom in this optimization is the criterion for judging that two words are semantically similar. In practice, we adopt a simple criterion such as k-nearest neighbors in the space of frequency vectors ⃗pi and choose k as small as possible so that the resulting neighborhood graph is connected [14]. The optimization is performed over the space of Euclidean squared distance matrices [1]. 
Necessary and sufficient conditions for the matrix D to be interpretable as a Euclidean squared distance matrix are that D is symmetric and that the Gram matrix¹ derived from G = -\frac{1}{2} H D H^T is positive semidefinite, where H = I - \frac{1}{V} \mathbf{1}\mathbf{1}^T. The optimization can thus be formulated as the semidefinite programming problem:

    Maximize \sum_{ij} D_{ij} subject to: (i) D^T = D, (ii) -\frac{1}{2} H D H^T \succeq 0, and (iii) D_{ij} = |\vec{p}_i - \vec{p}_j|^2 for all neighboring vectors \vec{p}_i and \vec{p}_j.

¹ Assuming without loss of generality that the vectors \vec{x}_i are centered on the origin, the dot products G_{ij} = \vec{x}_i \cdot \vec{x}_j are related to the pairwise squared distances D_{ij} = |\vec{x}_i - \vec{x}_j|^2 as stated above.

[Figure 1: Eigenvalues from principal component analysis (PCA) and semidefinite embedding (SDE), applied to bigram distributions of the 2000 most frequently occurring words in the corpus. The eigenvalues, shown normalized by their sum, measure the relative variance captured by individual dimensions.]

The optimization is convex, and its global maximum can be computed in polynomial time [4]. The optimization here differs slightly from the one used by Weinberger et al. [14] in that here we only preserve local distances, as opposed to local distances and angles. After computing the matrix D_{ij} by semidefinite programming, a low dimensional embedding \vec{x}_i is obtained by metric multidimensional scaling [1, 9, 14]. The top eigenvalues of the Gram matrix measure the variance captured by the leading dimensions of this embedding. Thus, one can compare the eigenvalue spectra from this method and PCA to ascertain if the variance of the nonlinear embedding is concentrated in fewer dimensions. We refer to this method of nonlinear dimensionality reduction as semidefinite embedding (SDE). Fig. 1 compares the eigenvalue spectra of PCA and SDE applied to the 2000 most frequent words² in the corpus described in section 4.
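The metric-MDS step that recovers coordinates from the learned distance matrix follows directly from the Gram-matrix relation above; a sketch (the SDP itself is omitted, since it requires a semidefinite solver):

```python
import numpy as np

def mds_embed(D, d):
    """Metric MDS: recover a d-dimensional embedding from a Euclidean
    squared-distance matrix D via the Gram matrix G = -(1/2) H D H^T,
    with centering matrix H = I - (1/V) 11^T."""
    V = D.shape[0]
    H = np.eye(V) - np.ones((V, V)) / V
    G = -0.5 * H @ D @ H
    vals, vecs = np.linalg.eigh(G)
    top = np.argsort(vals)[::-1][:d]          # leading eigenvalues carry the variance
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

When D is exactly Euclidean of rank at most d, the recovered points reproduce the pairwise distances exactly (up to rotation and translation).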
The figure shows that the nonlinear embedding by SDE concentrates its variance in many fewer dimensions than the linear embedding by PCA. Indeed, Fig. 2 shows that even the first two dimensions of the nonlinear embedding preserve the neighboring relationships of many words that are semantically similar. By contrast, the analogous plot generated by PCA (not shown) reveals no such structure.

[Figure 2: Projection of the normalized bigram counts of the 2000 most frequent words onto the first two dimensions of the nonlinear embedding obtained by semidefinite programming. Note that semantically meaningful neighborhoods are preserved, despite the massive dimensionality reduction from V = 60000 to d = 2. Visible clusters include modal verbs (MAY, WOULD, COULD, ...), numbers (ONE, TWO, THREE, ...), months (JANUARY, FEBRUARY, ...), and days of the week (MONDAY, TUESDAY, ...).]

² Though convex, the optimization over distance matrices for SDE is prohibitively expensive for large matrices. For the results in this paper—on the corpus described in section 4—we solved the semidefinite program in this section to embed the 2000 most frequent words in the corpus, then used a greedy incremental solver to embed the remaining 58000 words in the vocabulary. Details of this incremental solver will be given elsewhere. Though not the main point of this paper, the nonlinear embedding of V = 60000 words is to our knowledge one of the largest applications of recently developed spectral methods for nonlinear dimensionality reduction [9, 14].

3 Hierarchical mixture of experts

The model we use to compute the probability that word w′ follows word w is known as a hierarchical mixture of experts (HME) [12].
HMEs are fully probabilistic models, making them ideally suited to the task of statistical language modeling. Furthermore, like multilayer neural networks they can parameterize complex, nonlinear functions of their input. Figure 3 depicts a simple, two-layer HME. HMEs are tree-structured mixture models in which the mixture components are “experts” that lie at the leaves of the tree. The interior nodes of the tree perform binary logistic regressions on the input vector to the HME, and the mixing weight for a leaf is computed by multiplying the probabilities of each branch (left or right) along the path to that leaf. In our model, the input vector x⃗ is a function of the context word w, and the expert at each leaf specifies a multinomial distribution over the predicted word w′. Letting π denote a path through the tree from root to leaf, the HME computes the probability of a word w′ conditioned on a context word w as

Pr(w′|w) = Σ_π Pr(π|x⃗(w)) · Pr(w′|π). (1)

We can compute the maximum likelihood parameters for the HME using an Expectation-Maximization (EM) algorithm [12]. The E-step involves computing the posterior probability over paths Pr(π|w, w′) for each observed bigram in the training corpus. This can be done by a recursive pass through the tree. In the M-step, we must maximize the EM auxiliary function with respect to the parameters of the logistic regressions and multinomial leaves as well as the input vectors x⃗(w). The logistic regressions in the tree decouple and can be optimized separately by Newton’s method, while the multinomial leaves have a simple closed-form update. Though the input vectors are shared across all logistic regressions in the tree, we can compute their gradients and Hessians in one recursive pass and update them by Newton’s method as well.
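To make Eq. (1) concrete, here is a minimal sketch of the HME forward pass for a complete binary tree of depth 2. All shapes and parameters below are hypothetical placeholders, not trained values from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d, V, depth = 8, 20, 2
n_leaves = 2 ** depth
gates = rng.standard_normal((n_leaves - 1, d))     # one logistic regression per interior node
leaves = rng.dirichlet(np.ones(V), size=n_leaves)  # one multinomial expert per leaf

def hme_predict(x):
    """Pr(w'|w) = sum over paths pi of Pr(pi | x(w)) * Pr(w' | pi), Eq. (1)."""
    probs = np.zeros(V)
    for leaf in range(n_leaves):
        # Mixing weight: product of branch probabilities along the root-to-leaf path.
        weight, node = 1.0, 0
        for level in reversed(range(depth)):
            go_right = (leaf >> level) & 1
            p_right = sigmoid(gates[node] @ x)
            weight *= p_right if go_right else 1.0 - p_right
            node = 2 * node + 1 + go_right         # heap-style child index
        probs += weight * leaves[leaf]
    return probs

p = hme_predict(rng.standard_normal(d))
```

Because the leaf mixing weights sum to one for any input, the output is a proper distribution over the vocabulary.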
The EM algorithm for HMEs converges to a local maximum of the log-likelihood, or equivalently, a local minimum of the training perplexity

P_train = [ ∏_ij Pr(w_j|w_i)^{C_ij} ]^{−1/C}, (2)

where C = Σ_ij C_ij is the total number of observed bigrams in the training corpus. The algorithm is sensitive to the choice of initialization; in particular, as we show in the next section, initialization of the input vectors by PCA or SDE leads to significantly better models than random initialization.

Figure 3: Two-layer HME for bigram modeling. Words are mapped to input vectors; probabilities of next words are computed by summing over paths through the tree. The mapping from words to input vectors is initialized by dimensionality reduction of bigram counts.

Table 1: Test perplexities of HMEs with different input dimensionalities and initializations.

  init      d=4   d=8   d=12  d=16
  random    468   407   378   373
  PCA       406   364   362   351
  SDE       385   361   360   355

Table 2: Test perplexities of HMEs with different input dimensionalities and numbers of leaves.

  m         d=4   d=8   d=12  d=16
  8         435   429   426   428
  16        385   361   360   355
  32        350   328   320   317
  64        336   308   298   294

We initialized the logistic regressions in the HME to split the input vectors recursively along their dimensions of greatest variance. The multinomial distributions at leaf nodes were initialized by uniform distributions. For an HME with m multinomial leaves and d-dimensional input vectors, the number of parameters scales as O(V d + V m + dm). The resulting model can therefore be much more compact than a full bigram model over V words.

4 Results

We evaluated our models on the ARPA North American Business News (NAB) corpus. Our training set contained 78 million words from a 60,000 word vocabulary.
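The training perplexity in Eq. (2) above is best computed in log space to avoid underflow; a minimal sketch with made-up counts and probabilities:

```python
import math

# P_train = [prod_ij Pr(w_j|w_i)^C_ij]^(-1/C), evaluated via log-likelihood.
counts = {("the", "cat"): 3, ("the", "dog"): 1}       # C_ij: observed bigram counts (toy)
prob   = {("the", "cat"): 0.6, ("the", "dog"): 0.2}   # model Pr(w_j | w_i) (toy)

C = sum(counts.values())
log_lik = sum(c * math.log(prob[bg]) for bg, c in counts.items())
perplexity = math.exp(-log_lik / C)
```

Minimizing this quantity is equivalent to maximizing the per-bigram log-likelihood, which is what EM does.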
In the interest of speed, we truncated the lowest-count bigrams from our training set. This left us with a training set consisting of 1.7 million unique bigrams. The test set, untruncated, had 13 million words resulting in 2.1 million unique bigrams.

4.1 Empirical evaluation

Table 1 reports the test perplexities of several HMEs whose input vectors were initialized in different ways. The number of mixture components (i.e., leaves of the HME) was fixed at m = 16. In all cases, the inputs initialized by PCA and SDE significantly outperformed random initialization. PCA and SDE initialization performed equally well for all but the lowest-dimensional inputs. Here SDE outperformed PCA, most likely because the first few eigenvectors of SDE capture more variance in the bigram counts than those of PCA (see Figure 1). Table 2 reports the test perplexities of several HMEs initialized by SDE, but with varying input dimensionality (d) and numbers of leaves (m). Perplexity decreases with increasing tree depth and input dimensionality, but increasing the dimensionality beyond d = 8 does not appear to give much gain.

4.2 Comparison to a class-based bigram model

Figure 4: Belief network for AMM.

We obtained baseline results from an AMM [13] trained on the same corpus. The model (Figure 4) has the form

Pr(w′|w) = Σ_z Pr(z|w) · Pr(w′|z). (3)

The number of estimated parameters in AMMs scales as 2·|Z|·V, where |Z| is the size of the latent variable (i.e., number of classes) and V is the number of words in the vocabulary.

Table 3: Test perplexities of HMEs and AMMs with roughly equal parameter counts.

  parameters (×1000)   P_test (AMM)   P_test (HME)   improvement
  960                  456            429            6%
  1440                 414            361            13%
  2400                 353            328            7%
  4320                 310            308            1%

Table 3 compares the test perplexities of several HMEs and AMMs with similar numbers of parameters. All these HMEs had d = 8 inputs initialized by SDE. In all cases, the HMEs match or outperform the AMMs.
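The class-conditional factorization in Eq. (3) is just a product of two row-stochastic matrices, which also makes the 2·|Z|·V parameter count visible. A sketch with toy sizes (hypothetical V = 10 words, |Z| = 3 classes; parameters are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
V, Z = 10, 3
P_z_given_w = rng.dirichlet(np.ones(Z), size=V)    # V x Z, rows sum to 1: Pr(z|w)
P_w_given_z = rng.dirichlet(np.ones(V), size=Z)    # Z x V, rows sum to 1: Pr(w'|z)

# Pr(w'|w) = sum_z Pr(z|w) Pr(w'|z): a V x V conditional bigram model.
P_bigram = P_z_given_w @ P_w_given_z
```

Each row of the product is again a distribution over the vocabulary, since the product of row-stochastic matrices is row-stochastic.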
The performance is nearly equal for the larger models, which may be explained by the fact that most of the parameters of the larger HMEs come from the multinomial leaves, not from the distributed inputs.

4.3 Comparison to NPLM

The most successful large-scale application of distributed representations to language modeling is the NPLM of Bengio et al. [2, 3], which in part inspired our work. We now compare the main aspects of the two models.

Table 4: Training times τ in hours for HMEs with m leaves.

  m         d=4   d=8   d=12  d=16
  8         1     1     1     1
  16        2     2     2     2
  32        4     4     4     4
  64        9     10    10    10

The NPLM uses softmax to compute the probability of a word w′ given its context, thus requiring a separate normalization for each context. Estimating the parameters of this softmax requires O(V) computation per observed context and accounts for almost all of the computational resources required by the model. Because of this, the NPLM vocabulary size was restricted to 18000 words, and even then it required more than 3 weeks using 40 CPUs to finish 5 epochs of training [2]. By contrast, our HMEs require O(md) computation per observed bigram. As Table 4 shows, actual training times are rather insensitive to input dimensionality. This allowed us to use a 3.5× larger vocabulary and a larger training corpus than were used for the NPLM, and still complete training our largest models in a matter of hours. Note that the numbers in Table 4 do not include the time to compute the initial distributed representations by PCA (30 minutes) or SDE (3 days), but these computations do not need to be repeated for each trained model. The second difference between our model and the NPLM is the choice of initialization. Bengio et al. [3] report negligible improvement from initializing the NPLM input vectors by singular value decomposition. By contrast, we found that initialization by PCA or SDE was essential for optimal performance of our models (Table 1). Finally, the NPLM was applied to multiword contexts.
We have not done these experiments yet, but our model extends naturally to multiword contexts, as we explain in the next section. 5 Discussion In this paper, we have presented a statistical language model that exploits hierarchical distributed representations of word contexts. The model shares the advantages of the NPLM [2], but differs in its use of dimensionality reduction for effective parameter initialization and in the significant speedup provided by the HME architecture. We can consequently scale our models to larger training corpora and vocabularies. We have also demonstrated that our models consistently match or outperform a baseline class-based bigram model. The class-based bigram model is nearly as effective as the HME, but it has the major drawback that there is no straightforward way to extend it to multiword contexts without exploding its parameter count. Like the NPLM, however, the HME can be easily extended. We can form an input vector for a multiword history (w1, w2) simply by concatenating the vectors ⃗x(w1) and ⃗x(w2). The parameters of the corresponding HME can be learned by an EM algorithm similar to the one in this paper. Initialization from dimensionality reduction is also straightforward: we can compute the low dimensional representation for each word separately. We are actively pursuing these ideas to train models with hierarchical distributed representations of multiword contexts. References [1] A. Y. Alfakih, A. Khandani, and H. Wolkowicz. Solving Euclidean distance matrix completion problems via semidefinite programming. Computational Optimization Applications, 12(13):13–30, 1999. [2] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003. [3] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. In T. K. Leen, T. G. Dietterich, and V. 
Tresp, editors, Advances in Neural Information Processing Systems, volume 13, Cambridge, MA, 2001. MIT Press.
[4] D. B. Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11(1):613–623, 1999.
[5] P. Brown, S. D. Pietra, V. D. Pietra, and R. Mercer. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263–311, 1991.
[6] P. F. Brown, V. J. D. Pietra, P. V. deSouza, J. C. Lai, and R. L. Mercer. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479, 1992.
[7] S. Chen and J. Goodman. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the ACL, pages 310–318, 1996.
[8] M. Collins. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, 1997.
[9] J. Ham, D. D. Lee, S. Mika, and B. Schölkopf. A kernel view of the dimensionality reduction of manifolds. In Proceedings of the Twenty First International Conference on Machine Learning (ICML-04), Banff, Canada, 2004.
[10] T. Hofmann and J. Puzicha. Statistical models for co-occurrence and histogram data. In Proceedings of the International Conference on Pattern Recognition, pages 192–194, 1998.
[11] F. Jelinek. Statistical Methods for Speech Recognition. MIT Press, 1997.
[12] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6:181–214, 1994.
[13] L. K. Saul and F. C. N. Pereira. Aggregate and mixed-order Markov models for statistical language processing. In C. Cardie and R. Weischedel, editors, Proceedings of the Second Conference on Empirical Methods in Natural Language Processing (EMNLP-97), pages 81–89, New Providence, RI, 1997.
[14] K. Q. Weinberger, F. Sha, and L. K. Saul. Learning a kernel matrix for nonlinear dimensionality reduction.
In Proceedings of the Twenty First International Conference on Machine Learning (ICML-04), Banff, Canada, 2004.
[15] C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to information retrieval. ACM Transactions on Information Systems, 22(2):179–214, 2004.
Efficient Out-of-Sample Extension of Dominant-Set Clusters Massimiliano Pavan and Marcello Pelillo Dipartimento di Informatica, Università Ca’ Foscari di Venezia Via Torino 155, 30172 Venezia Mestre, Italy {pavan,pelillo}@dsi.unive.it Abstract Dominant sets are a new graph-theoretic concept that has proven to be relevant in pairwise data clustering problems, such as image segmentation. They generalize the notion of a maximal clique to edge-weighted graphs and have intriguing, non-trivial connections to continuous quadratic optimization and spectral-based grouping. We address the problem of grouping out-of-sample examples after the clustering process has taken place. This may serve either to drastically reduce the computational burden associated with the processing of very large data sets, or to efficiently deal with dynamic situations whereby data sets need to be updated continually. We show that the very notion of a dominant set offers a simple and efficient way of doing this. Numerical experiments on various grouping problems show the effectiveness of the approach. 1 Introduction Proximity-based, or pairwise, data clustering techniques are gaining increasing popularity over traditional central grouping techniques, which are centered around the notion of “feature” (see, e.g., [3, 12, 13, 11]). In many application domains, in fact, the objects to be clustered are not naturally representable in terms of a vector of features. On the other hand, quite often it is possible to obtain a measure of the similarity/dissimilarity between objects. Hence, it is natural to map (possibly implicitly) the data to be clustered to the nodes of a weighted graph, with edge weights representing similarity or dissimilarity relations. Although such a representation lacks geometric notions such as scatter and centroid, it is attractive as no feature selection is required and it keeps the algorithm generic and independent from the actual data representation.
Further, it allows one to use non-metric similarities and it is applicable to problems that do not have a natural embedding to a uniform feature space, such as the grouping of structural or graph-based representations. We have recently developed a new framework for pairwise data clustering based on a novel graph-theoretic concept, that of a dominant set, which generalizes the notion of a maximal clique to edge-weighted graphs [7, 9]. An intriguing connection between dominant sets and the solutions of a (continuous) quadratic optimization problem makes them related in a non-trivial way to spectral-based cluster notions, and allows one to use straightforward dynamics from evolutionary game theory to determine them [14]. A nice feature of this framework is that it naturally provides a principled measure of a cluster’s cohesiveness as well as a measure of a vertex participation to its assigned group. It also allows one to obtain “soft” partitions of the input data, by allowing a point to belong to more than one cluster. The approach has proven to be a powerful one when applied to problems such as intensity, color, and texture segmentation, or visual database organization, and is competitive with spectral approaches such as normalized cut [7, 8, 9]. However, a typical problem associated with pairwise grouping algorithms in general, and hence with the dominant set framework in particular, is the scaling behavior with the number of data. On a dataset containing N examples, the number of potential comparisons scales with O(N²), thereby hindering their applicability to problems involving very large data sets, such as high-resolution imagery and spatio-temporal data. Moreover, in applications such as document classification or visual database organization, one is confronted with a dynamic environment which continually supplies the algorithm with newly produced data that have to be grouped.
In such situations, the trivial approach of recomputing the complete cluster structure upon the arrival of any new item is clearly unfeasible. Motivated by the previous arguments, in this paper we address the problem of efficiently assigning out-of-sample, unseen data to one or more previously determined clusters. This may serve either to substantially reduce the computational burden associated with the processing of very large (though static) data sets, by extrapolating the complete grouping solution from a small number of samples, or to deal with dynamic situations whereby data sets need to be updated continually. There is no straightforward way of accomplishing this within the pairwise grouping paradigm, short of recomputing the complete cluster structure. Recent sophisticated attempts to deal with this problem use optimal embeddings [11] and the Nyström method [1, 2]. By contrast, we shall see that the very notion of a dominant set, thanks to its clear combinatorial properties, offers a simple and efficient solution to this problem. The basic idea consists of computing, for any new example, a quantity which measures the degree of cluster membership, and we provide simple approximations which allow us to do this in linear time and space, with respect to the cluster size. Our classification schema inherits the main features of the dominant set formulation, i.e., the ability of yielding a soft classification of the input data and of providing principled measures for cluster membership and cohesiveness. Numerical experiments show that the strategy of first grouping a small number of data items and then classifying the out-of-sample instances using our prediction rule is clearly successful as we are able to obtain essentially the same results as the dense problem in much less time. We also present results on high-resolution image segmentation problems, a task where the dominant set framework would otherwise be computationally impractical.
2 Dominant Sets and Their Continuous Characterization

We represent the data to be clustered as an undirected edge-weighted (similarity) graph with no self-loops G = (V, E, w), where V = {1, . . . , n} is the vertex set, E ⊆ V × V is the edge set, and w : E → ℝ⁺ is the (positive) weight function. Vertices in G correspond to data points, edges represent neighborhood relationships, and edge-weights reflect similarity between pairs of linked vertices. As customary, we represent the graph G with the corresponding weighted adjacency (or similarity) matrix, which is the n × n nonnegative, symmetric matrix A = (a_ij) defined as: a_ij = w(i, j) if (i, j) ∈ E, and a_ij = 0 otherwise.

Let S ⊆ V be a non-empty subset of vertices and i ∈ V. The (average) weighted degree of i w.r.t. S is defined as:

awdeg_S(i) = (1/|S|) Σ_{j∈S} a_ij (1)

where |S| denotes the cardinality of S. Moreover, if j ∉ S we define

φ_S(i, j) = a_ij − awdeg_S(i)

which is a measure of the similarity between nodes j and i, with respect to the average similarity between node i and its neighbors in S.

Figure 1: An example edge-weighted graph. Note that w_{1,2,3,4}(1) < 0 and this reflects the fact that vertex 1 is loosely coupled to vertices 2, 3 and 4. Conversely, w_{5,6,7,8}(5) > 0 and this reflects the fact that vertex 5 is tightly coupled with vertices 6, 7, and 8.

Let S ⊆ V be a non-empty subset of vertices and i ∈ S. The weight of i w.r.t. S is

w_S(i) = 1 if |S| = 1, and w_S(i) = Σ_{j∈S\{i}} φ_{S\{i}}(j, i) w_{S\{i}}(j) otherwise, (2)

while the total weight of S is defined as:

W(S) = Σ_{i∈S} w_S(i). (3)

Intuitively, w_S(i) gives us a measure of the overall similarity between vertex i and the vertices of S \ {i} with respect to the overall similarity among the vertices in S \ {i}, with positive values indicating high internal coherency (see Fig. 1). A non-empty subset of vertices S ⊆ V such that W(T) > 0 for any non-empty T ⊆ S, is said to be dominant if:

1. w_S(i) > 0, for all i ∈ S
2. w_{S∪{i}}(i) < 0, for all i ∉ S.
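A direct (exponential-time) transcription of Eqs. (1)–(3) is useful for checking small examples by hand. In the sketch below the toy graph is a 3-clique with unit weights, for which every w_S(i) comes out positive, as expected of a dominant set:

```python
def awdeg(A, S, i):
    """Average weighted degree of i w.r.t. S, Eq. (1)."""
    return sum(A[i][j] for j in S) / len(S)

def phi(A, S, i, j):
    """phi_S(i, j) = a_ij - awdeg_S(i), defined for j not in S."""
    return A[i][j] - awdeg(A, S, i)

def w(A, S, i):
    """Recursive weight w_S(i) of Eq. (2); requires i in S."""
    if len(S) == 1:
        return 1.0
    R = S - {i}
    return sum(phi(A, R, j, i) * w(A, R, j) for j in R)

def W(A, S):
    """Total weight of S, Eq. (3)."""
    return sum(w(A, S, i) for i in S)

# Toy graph: a 3-clique with all edge weights 1 (zero diagonal, no self-loops).
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
S = {0, 1, 2}
weights = [w(A, S, i) for i in S]
```

This brute-force recursion is only for intuition; the point of Proposition 1 later in the paper is precisely to avoid it for out-of-sample points.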
The two previous conditions correspond to the two main properties of a cluster: the first regards internal homogeneity, whereas the second regards external inhomogeneity. The above definition represents our formalization of the concept of a cluster in an edge-weighted graph. Now, consider the following quadratic program, which is a generalization of the so-called Motzkin–Straus program [5] (here and in the sequel a dot denotes the standard scalar product between vectors):

maximize f(x) = x · Ax subject to x ∈ ∆_n (4)

where ∆_n = {x ∈ ℝⁿ : x_i ≥ 0 for all i ∈ V and e · x = 1} is the standard simplex of ℝⁿ, and e is a vector of appropriate length consisting of unit entries (hence e · x = Σ_i x_i). The support of a vector x ∈ ∆_n is defined as the set of indices corresponding to its positive components, that is σ(x) = {i ∈ V : x_i > 0}. The following theorem, proved in [7], establishes an intriguing connection between dominant sets and local solutions of program (4).

Theorem 1 If S is a dominant subset of vertices, then its (weighted) characteristic vector x^S, which is the vector of ∆_n defined as

x^S_i = w_S(i)/W(S) if i ∈ S, and x^S_i = 0 otherwise, (5)

is a strict local solution of program (4). Conversely, if x is a strict local solution of program (4) then its support S = σ(x) is a dominant set, provided that w_{S∪{i}}(i) ≠ 0 for all i ∉ S.

The condition that w_{S∪{i}}(i) ≠ 0 for all i ∉ S = σ(x) is a technicality due to the presence of “spurious” solutions in (4) which is, at any rate, a non-generic situation. By virtue of this result, we can find a dominant set by computing a local solution of program (4) with an appropriate continuous optimization technique, such as replicator dynamics from evolutionary game theory [14], and then picking up its support.
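The replicator dynamics just mentioned admit a very short discrete-time implementation, x_i ← x_i (Ax)_i / (x · Ax). The similarity matrix below is a made-up example with one tight group {0, 1, 2} and one looser pair {3, 4}; the support of the limit point approximates a dominant set:

```python
import numpy as np

def replicator(A, iters=1000):
    n = len(A)
    x = np.ones(n) / n                    # start at the barycenter of the simplex
    for _ in range(iters):
        Ax = A @ x
        x = x * Ax / (x @ Ax)             # stays on the simplex; f(x) is nondecreasing
    return x

A = np.array([[0.0, 0.9, 0.8, 0.1, 0.1],
              [0.9, 0.0, 0.85, 0.1, 0.1],
              [0.8, 0.85, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.1, 0.0, 0.7],
              [0.1, 0.1, 0.1, 0.7, 0.0]])
x = replicator(A)
support = {i for i in range(5) if x[i] > 1e-4}   # sigma(x), the dominant set
```

From the barycenter the dynamics converge to the more cohesive group; the positive components of x then give the participation measure of Eq. (5).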
Note that the components of the weighted characteristic vectors give us a natural measure of the participation of the corresponding vertices in the cluster, whereas the value of the objective function measures the cohesiveness of the class. In order to get a partition of the input data into coherent groups, a simple approach is to iteratively find a dominant set and then remove it from the graph, until all vertices have been grouped (see [9] for a hierarchical extension of this framework). On the other hand, by finding all dominant sets, i.e., local solutions of (4), of the original graph, one can obtain a “soft” partition of the dataset, whereby clusters are allowed to overlap. Finally, note that spectral clustering approaches such as, e.g., [10, 12, 13] lead to similar, though intrinsically different, optimization problems.

3 Predicting Cluster Membership for Out-of-Sample Data

Suppose we are given a set V of n unlabeled items and let G = (V, E, w) denote the corresponding similarity graph. After determining the dominant sets (i.e., the clusters) for these original data, we are next supplied with a set V′ of k new data items, together with all kn pairwise affinities between the old and the new data, and are asked to assign each of them to one or possibly more previously determined clusters. We shall denote by Ĝ = (V̂, Ê, ŵ), with V̂ = V ∪ V′, the similarity graph built upon all the n + k data. Note that in our approach we do not need the k² affinities between the new points, which is a nice feature as in most applications k is typically very large. Technically, Ĝ is a supergraph of G, namely a graph having V ⊆ V̂, E ⊆ Ê and w(i, j) = ŵ(i, j) for all (i, j) ∈ E. Let S ⊆ V be a subset of vertices which is dominant in the original graph G and let i ∈ V̂ \ V be a new data point.
As pointed out in the previous section, the sign of w_{S∪{i}}(i) provides an indication as to whether i is tightly or loosely coupled with the vertices in S (the condition w_{S∪{i}}(i) = 0 corresponds to a non-generic boundary situation that does not arise in practice and will therefore be ignored).¹ Accordingly, it is natural to propose the following rule for predicting cluster membership of unseen data:

if w_{S∪{i}}(i) > 0, then assign vertex i to cluster S. (6)

Note that, according to this rule, the same point can be assigned to more than one class, thereby yielding a soft partition of the input data. To get a hard partition one can use the cluster membership approximation measures we shall discuss below. Note that it may also happen for some instance i that no cluster S satisfies rule (6), in which case the point gets unclassified (or assigned to an “outlier” group). This should be interpreted as an indication that either the point is too noisy or that the cluster formation process was inaccurate. In our experience, however, this situation arises rarely. A potential problem with the previous rule is its computational complexity. In fact, a direct application of formula (2) to compute w_{S∪{i}}(i) is clearly infeasible due to its recursive nature. On the other hand, using a characterization given in [7, Lemma 1] would also be expensive since it would involve the computation of a determinant. The next result allows us to compute the sign of w_{S∪{i}}(i) in linear time and space, with respect to the size of S.

Proposition 1 Let G = (V, E, w) be an edge-weighted (similarity) graph, A = (a_ij) its weighted adjacency matrix, and S ⊆ V a dominant set of G with characteristic vector x^S. Let Ĝ = (V̂, Ê, ŵ) be a supergraph of G with weighted adjacency matrix Â = (â_ij).

¹Observe that w_S(i) depends only on the weights on the edges of the subgraph induced by S. Hence, no ambiguity arises as to whether w_S(i) is computed on G or on Ĝ.
Then, for all i ∈ V̂ \ V, we have:

w_{S∪{i}}(i) > 0 ⇔ Σ_{h∈S} â_hi x^S_h > f(x^S) (7)

Proof. From Theorem 1, x^S is a strict local solution of program (4) and hence it satisfies the Karush–Kuhn–Tucker (KKT) equality conditions, i.e., the first-order necessary equality conditions for local optimality [4]. Now, let n̂ = |V̂| be the cardinality of V̂ and let x̂^S be the (n̂-dimensional) characteristic vector of S in Ĝ, which is obtained by padding x^S with zeros. It is immediate to see that x̂^S satisfies the KKT equality conditions for the problem of maximizing f̂(x̂) = x̂ · Âx̂, subject to x̂ ∈ ∆_n̂. Hence, from Lemma 2 of [7] we have for all i ∈ V̂ \ V:

w_{S∪{i}}(i) / W(S) = Σ_{h∈S} (â_hi − a_hj) x^S_h (8)

for any j ∈ S. Now, recall that the KKT equality conditions for program (4) imply Σ_{h∈S} a_hj x^S_h = x^S · Ax^S = f(x^S) for any j ∈ S [7]. Hence, the proposition follows from the fact that, being S dominant, W(S) is positive.

Given an out-of-sample vertex i and a class S such that rule (6) holds, we now provide an approximation of the degree of participation of i in S ∪ {i} which, as pointed out in the previous section, is given by the ratio between w_{S∪{i}}(i) and W(S ∪ {i}). This can be used, for example, to get a hard partition of the input data when an instance happens to be assigned to more than one class. By equation (8), we have:

w_{S∪{i}}(i) / W(S ∪ {i}) = [ Σ_{h∈S} (â_hi − a_hj) x^S_h ] · W(S) / W(S ∪ {i})

for any j ∈ S. Since computing the exact value of the ratio W(S)/W(S ∪ {i}) would be computationally expensive, we now provide simple approximation formulas. Since S is dominant, it is reasonable to assume that all weights within it are close to each other. Hence, we approximate S with a clique having constant weight a, and impose that it has the same cohesiveness value f(x^S) = x^S · Ax^S as the original dominant set. After some algebra, we get a = (|S|/(|S| − 1)) f(x^S), which yields W(S) ≈ |S| a^{|S|−1}.
Approximating W(S ∪ {i}) with (|S| + 1) a^{|S|} in a similar way, we get:

W(S) / W(S ∪ {i}) ≈ |S| a^{|S|−1} / ((|S| + 1) a^{|S|}) = (1/f(x^S)) · (|S| − 1)/(|S| + 1)

which finally yields:

w_{S∪{i}}(i) / W(S ∪ {i}) ≈ ((|S| − 1)/(|S| + 1)) · ( Σ_{h∈S} â_hi x^S_h / f(x^S) − 1 ). (9)

Using the above formula one can easily get, by normalization, an approximation of the characteristic vector x^Ŝ ∈ ∆_{n+k} of Ŝ, the extension of cluster S obtained by applying rule (6):

Ŝ = S ∪ {i ∈ V̂ \ V : w_{S∪{i}}(i) > 0}.

With an approximation of x^Ŝ at hand, it is also easy to compute an approximation of the cohesiveness of the new cluster Ŝ, i.e., x^Ŝ · Âx^Ŝ. Indeed, assuming that Ŝ is dominant in Ĝ, and recalling the KKT equality conditions for program (4) [7], we get (Âx^Ŝ)_i = x^Ŝ · Âx^Ŝ for all i ∈ Ŝ. It is therefore natural to approximate the cohesiveness of Ŝ as a weighted average of the (Âx^Ŝ)_i’s.

Figure 2: Evaluating the quality of our approximations on a 150-point cluster. Average distance between approximated and actual cluster membership (left) and cohesiveness (middle) as a function of sampling rate. Right: average CPU time as a function of sampling rate.

4 Experimental Results

In an attempt to evaluate how the approximations given at the end of the previous section actually compare to the solutions obtained on the dense problem, we conducted the following preliminary experiment. We generated 150 points on the plane so as to form a dominant set (we used a standard Gaussian kernel to obtain similarities), and extracted random samples with increasing sampling rate, ranging from 1/15 to 1.
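Condition (7) and approximation (9) reduce to a handful of dot products. A sketch under toy assumptions (a 3-point cluster with constant weight 0.9, whose characteristic vector is uniform and for which f(x^S) = 0.6; a_new holds the affinities between a new point and the original ones):

```python
import numpy as np

def joins_cluster(A, x_S, a_new):
    """Rule (6) via condition (7): the new point joins S iff a_new . x^S > f(x^S)."""
    return a_new @ x_S > x_S @ A @ x_S

def membership(A, x_S, a_new):
    """Approximate degree of membership of the new point, Eq. (9)."""
    f = x_S @ A @ x_S
    S = np.count_nonzero(x_S)               # |S| = support size of x^S
    return (S - 1) / (S + 1) * (a_new @ x_S / f - 1.0)

A = np.array([[0.0, 0.9, 0.9],
              [0.9, 0.0, 0.9],
              [0.9, 0.9, 0.0]])
x_S = np.ones(3) / 3
close = np.array([0.9, 0.9, 0.9])   # strongly coupled new point: joins
far = np.array([0.1, 0.1, 0.1])     # weakly coupled new point: rejected
```

Both computations are O(|S|) per new point, which is the linear-time claim of Proposition 1.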
For each sampling rate 100 trials were made, for each of which we computed the Euclidean distance between the approximated and the actual characteristic vector (i.e., cluster membership), as well as the distance between the approximated and the actual cluster cohesiveness (that is, the value of the objective function f). Fig. 2 shows the average results obtained. As can be seen, our approximations work remarkably well: with a sampling rate less than 10% the distance between the characteristic vectors is around 0.02, and this distance decreases linearly towards zero. As for the objective function, the results are even more impressive, as the distance from the exact value (i.e., 0.989) starts at about 0.00025 at less than a 10% rate and rapidly goes to zero. Also, note how the CPU time increases linearly as the sampling rate approaches 100%.

Next, we tested our algorithm over the Johns Hopkins University ionosphere database² which contains 351 labeled instances from two different classes. As in the previous experiment, similarities were computed using a Gaussian kernel. Our goal was to test how the solutions obtained on the sampled graph compare with those of the original, dense problem and to study how the performance of the algorithm scales w.r.t. the sampling rate. As before, we used sampling rates from 1/15 to 1, and for each such value 100 random samples were extracted. After the grouping process, the out-of-sample instances were assigned to one of the two classes found using rule (6). Then, for each example in the dataset a “success” was recorded whenever the actual class label of the instance coincided with the majority label of its assigned class. Fig. 3 shows the average results obtained. At around 40% rate the algorithm was already able to obtain a classification accuracy of about 73.4%, which is even slightly higher than the one obtained on the dense (100% rate) problem, which is 72.7%.
Note that, as in the previous experiment, the algorithm appears to be robust with respect to the choice of the sample data. For the sake of comparison we also ran normalized cut on the whole dataset, and it yielded a classification rate of 72.4%. Finally, we applied our algorithm to the segmentation of brightness images. The image to be segmented is represented as a graph where vertices correspond to pixels and edge-weights reflect the “similarity” between vertex pairs. As customary, we defined a similarity measure between pixels based on brightness proximity. Specifically, following [7], similarity between pixels i and j was measured by w(i, j) = exp(−(I(i) − I(j))²/σ²), where σ is a positive real number which affects the decreasing rate of w, and I(i) is defined as the (normalized) intensity value at node i. After drawing a set of pixels at random with sampling rate p = 0.005, we iteratively found a dominant set in the sampled graph using replicator dynamics [7, 14], we removed it from the graph, and we then employed rule (6) to extend it with out-of-sample pixels.

²http://www.ics.uci.edu/∼mlearn/MLSummary.html

Figure 3: Results on the ionosphere database. Average classification rate (left) and CPU time (right) as a function of sampling rate.

Figure 4: Segmentation results on a 115 × 97 weather radar image. From left to right: original image, the two regions found on the sampled image (sampling rate = 0.5%), and the two regions obtained on the whole image (sampling rate = 100%).

Figure 4 shows the results obtained on a 115 × 97 weather radar image, used in [13, 7] as an instance whereby edge-detection-based segmentation would perform poorly. Here, and in the following experiment, the major components of the segmentations are drawn on a blue background.
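The brightness similarity w(i, j) = exp(−(I(i) − I(j))²/σ²) yields the weighted adjacency matrix of the pixel graph directly. A sketch on a toy vector of normalized intensities (the helper name is ours, not from the paper):

```python
import numpy as np

def similarity_matrix(intensities, sigma):
    """w(i, j) = exp(-(I(i) - I(j))^2 / sigma^2), with zero diagonal (no self-loops)."""
    diff = intensities[:, None] - intensities[None, :]
    W_sim = np.exp(-diff ** 2 / sigma ** 2)
    np.fill_diagonal(W_sim, 0.0)
    return W_sim

# Three pixels: two with similar brightness, one much brighter.
W_sim = similarity_matrix(np.array([0.10, 0.12, 0.80]), sigma=0.2)
```

Smaller σ sharpens the decay, so only pixels with very close brightness remain strongly connected in the graph.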
The leftmost cluster is the one obtained after the first iteration of the algorithm, and successive clusters are shown left to right. Note how the segmentation obtained over the sparse image, sampled at a 0.5% rate, is almost identical to that obtained over the whole image. In both cases, the algorithm correctly discovered a background and a foreground region. The approximation algorithm took a couple of seconds to return the segmentation, i.e., 15 times faster than the run over the entire image. Note that our results are better than those obtained with normalized cut, as the latter produces an over-segmented solution (see [13]). Fig. 5 shows results on two 481 × 321 images taken from the Berkeley database.3 On these images the sampling process produced a sample with no more than 1000 pixels, and our current MATLAB implementation took only a few seconds to return a solution. Running the grouping algorithm on the whole images (which contain more than 150,000 pixels) would simply be infeasible. In both cases, our approximation algorithm partitioned the images into meaningful and clean components. We also ran normalized cut on these images (using the same sample rate of 0.5%) and the results, obtained after a long tuning process, confirm its well-known inherent tendency to over-segment the data (see Fig. 5).

5 Conclusions

We have provided a simple and efficient extension of the dominant-set clustering framework to deal with the grouping of out-of-sample data. This makes the approach applicable to very large grouping problems, such as high-resolution image segmentation, where it would otherwise be impractical. Experiments show that the solutions extrapolated from the sparse data are comparable with those of the dense problem, which in turn compare favorably with spectral solutions such as normalized cut's, and are obtained in much less time.

3http://www.cs.berkeley.edu/projects/vision/grouping/segbench

Figure 5: Segmentation results on two 481 × 321 images.
Left columns: original images. For each image, the first line shows the major regions obtained with our approximation algorithm, while the second line shows the results obtained with normalized cut.

References

[1] Y. Bengio, J.-F. Paiement, P. Vincent, O. Delalleau, N. Le Roux, and M. Ouimet. Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering. In: S. Thrun, L. Saul, and B. Schölkopf (Eds.), Advances in Neural Information Processing Systems 16, MIT Press, Cambridge, MA, 2004.
[2] C. Fowlkes, S. Belongie, F. Chung, and J. Malik. Spectral grouping using the Nyström method. IEEE Trans. Pattern Anal. Machine Intell. 26:214–225, 2004.
[3] T. Hofmann and J. M. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Trans. Pattern Anal. Machine Intell. 19:1–14, 1997.
[4] D. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, Reading, MA, 1984.
[5] T. S. Motzkin and E. G. Straus. Maxima for graphs and a new proof of a theorem of Turán. Canad. J. Math. 17:533–540, 1965.
[6] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In: T. G. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems 14, MIT Press, Cambridge, MA, pp. 849–856, 2002.
[7] M. Pavan and M. Pelillo. A new graph-theoretic approach to clustering and segmentation. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 145–152, 2003.
[8] M. Pavan and M. Pelillo. Unsupervised texture segmentation by dominant sets and game dynamics. In Proc. 12th Int. Conf. on Image Analysis and Processing, pp. 302–307, 2003.
[9] M. Pavan and M. Pelillo. Dominant sets and hierarchical clustering. In Proc. 9th Int. Conf. on Computer Vision, pp. 362–369, 2003.
[10] P. Perona and W. Freeman. A factorization approach to grouping. In: H. Burkhardt and B. Neumann (Eds.), Computer Vision—ECCV'98, pp. 655–670. Springer, Berlin, 1998.
[11] V. Roth, J. Laub, M. Kawanabe, and J. M. Buhmann.
Optimal cluster preserving embedding of nonmetric proximity data. IEEE Trans. Pattern Anal. Machine Intell. 25:1540–1551, 2003.
[12] S. Sarkar and K. Boyer. Quantitative measures of change based on feature organization: Eigenvalues and eigenvectors. Computer Vision and Image Understanding 71:110–136, 1998.
[13] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Machine Intell. 22:888–905, 2000.
[14] J. W. Weibull. Evolutionary Game Theory. MIT Press, Cambridge, MA, 1995.
[15] Y. Weiss. Segmentation using eigenvectors: A unifying view. In Proc. 7th Int. Conf. on Computer Vision, pp. 975–982, 1999.
|
2004
|
180
|
2,597
|
Instance-Specific Bayesian Model Averaging for Classification

Shyam Visweswaran and Gregory F. Cooper
Center for Biomedical Informatics, Intelligent Systems Program
Pittsburgh, PA 15213
shyam@cbmi.pitt.edu, gfc@cbmi.pitt.edu

Abstract

Classification algorithms typically induce population-wide models that are trained to perform well on average on expected future instances. We introduce a Bayesian framework for learning instance-specific models from data that are optimized to predict well for a particular instance. Based on this framework, we present a lazy instance-specific algorithm called ISA that performs selective model averaging over a restricted class of Bayesian networks. On experimental evaluation, this algorithm shows superior performance over model selection. We intend to apply such instance-specific algorithms to improve the performance of patient-specific predictive models induced from medical data.

1 Introduction

Commonly used classification algorithms, such as neural networks, decision trees, Bayesian networks and support vector machines, typically induce a single model from a training set of instances, with the intent of applying it to all future instances. We call such a model a population-wide model because it is intended to be applied to an entire population of future instances. A population-wide model is optimized to predict well on average when applied to expected future instances. In contrast, an instance-specific model is one that is constructed specifically for a particular instance. The structure and parameters of an instance-specific model are specialized to the particular features of an instance, so that it is optimized to predict especially well for that instance. Usually, methods that induce population-wide models employ eager learning, in which the model is induced from the training data before the test instance is encountered.
In contrast, lazy learning defers most or all processing until a response to a test instance is required. Learners that induce instance-specific models are necessarily lazy in nature since they take advantage of the information in the test instance. An example of a lazy instance-specific method is the lazy Bayesian rule (LBR) learner, implemented by Zheng and Webb [1], which induces rules in a lazy fashion from examples in the neighborhood of the test instance. A rule generated by LBR consists of a conjunction of the attribute-value pairs present in the test instance as the antecedent and a local simple (naïve) Bayes classifier as the consequent. The structure of the local simple Bayes classifier consists of the attribute of interest as the parent of all other attributes that do not appear in the antecedent, and the parameters of the classifier are estimated from the subset of training instances that satisfy the antecedent. A greedy step-forward search selects the optimal LBR rule for a test instance to be classified. When evaluated on 29 UCI datasets, LBR had the lowest average error rate when compared to several eager learning methods [1]. Typically, both eager and lazy algorithms select a single model from some model space, ignoring the uncertainty in model selection. Bayesian model averaging is a coherent approach to dealing with the uncertainty in model selection, and it has been shown to improve the predictive performance of classifiers [2]. However, since the number of models in practically useful model spaces is enormous, exact model averaging over the entire model space is usually not feasible. In this paper, we describe a lazy instance-specific averaging (ISA) algorithm for classification that approximates Bayesian model averaging in an instance-sensitive manner. ISA extends LBR by adding Bayesian model averaging to an instance-specific model selection algorithm. 
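To make the shape of an LBR rule concrete, here is a minimal, self-contained sketch of how such a rule classifies: the training set is filtered by the antecedent attribute-value pairs, and a Laplace-smoothed naive Bayes classifier over the remaining attributes is applied to the test instance. The toy data, function name, and smoothing constant are ours; the real learner also performs the greedy antecedent search described above.

```python
def lbr_predict(train, test, antecedent, target, attrs, alpha=1.0):
    """Classify `test` with an LBR-style rule: condition on the antecedent
    attribute-value pairs, then apply naive Bayes (Laplace smoothing `alpha`)
    over the remaining attributes.  `train` is a list of dicts."""
    subset = [r for r in train if all(r[a] == test[a] for a in antecedent)]
    consequent = [a for a in attrs if a not in antecedent]
    classes = sorted({r[target] for r in subset})
    scores = {}
    for z in classes:
        rows = [r for r in subset if r[target] == z]
        p = (len(rows) + alpha) / (len(subset) + alpha * len(classes))
        for a in consequent:
            vals = sorted({r[a] for r in subset})
            n_match = sum(1 for r in rows if r[a] == test[a])
            p *= (n_match + alpha) / (len(rows) + alpha * len(vals))
        scores[z] = p
    return max(scores, key=scores.get)

train = [
    {"x1": 0, "x2": 0, "z": 0}, {"x1": 0, "x2": 1, "z": 0},
    {"x1": 1, "x2": 1, "z": 1}, {"x1": 1, "x2": 0, "z": 1},
    {"x1": 1, "x2": 1, "z": 1},
]
pred = lbr_predict(train, {"x1": 1, "x2": 1}, antecedent=["x1"],
                   target="z", attrs=["x1", "x2"])
```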
While the ISA algorithm is currently able to directly handle only discrete variables and is computationally more intensive than comparable eager algorithms, the results in this paper show that it performs well. In medicine, such lazy instance-specific algorithms can be applied to patient-specific modeling for improving the accuracy of diagnosis, prognosis and risk assessment. The rest of this paper is structured as follows. Section 2 introduces a Bayesian framework for instance-specific learning. Section 3 describes the implementation of ISA. In Section 4, we evaluate ISA and compare its performance to that of LBR. Finally, in Section 5 we discuss the results of the comparison.

2 Decision Theoretic Framework

We use the following notation. Capital letters like X, Z denote random variables and corresponding lower case letters, x, z, denote specific values assigned to them. Thus, X = x denotes that variable X is assigned the value x. Bold upper case letters, such as X, Z, represent sets of variables or random vectors, and their realization is denoted by the corresponding bold lower case letters, x, z. Hence, X = x denotes that the variables in X have the states given by x. In addition, Z denotes the target variable being predicted, X denotes the set of attribute variables, M denotes a model, D denotes the training dataset, and <Xt, Zt> denotes a generic test instance that is not in D. We now characterize population-wide and instance-specific model selection in decision theoretic terms. Given training data D and a separate generic test instance <Xt, Zt>, the Bayes optimal prediction for Zt is obtained by combining the predictions of all models weighted by their posterior probabilities, as follows:

P(Zt | Xt, D) = ∫ P(Zt | Xt, M) P(M | D) dM .
(1)

The optimal population-wide model for predicting Zt is as follows:

MaxM [ ΣXt U( P(Zt | Xt, D), P(Zt | Xt, M) ) P(Xt | D) ] , (2)

where the function U gives the utility of approximating the Bayes optimal estimate P(Zt | Xt, D) with the estimate P(Zt | Xt, M) obtained from model M. The term P(Xt | D) is given by:

P(Xt | D) = ∫ P(Xt | M) P(M | D) dM . (3)

The optimal instance-specific model for predicting Zt is as follows:

MaxM { U( P(Zt | Xt = xt, D), P(Zt | Xt = xt, M) ) } , (4)

where xt are the values of the attributes of the test instance Xt for which we want to predict Zt. The Bayes optimal estimate P(Zt | Xt = xt, D) in Equation 4 is derived using Equation 1, for the special case in which Xt = xt. The difference between the population-wide and the instance-specific models can be noted by comparing Equations 2 and 4. Equation 2 for the population-wide model selects the model that on average will have the greatest utility. Equation 4 for the instance-specific model, however, selects the model that will have the greatest expected utility for the specific instance Xt = xt. For predicting Zt in a given instance Xt = xt, the model selected using Equation 2 can never have an expected utility greater than the model selected using Equation 4. This observation provides support for developing instance-specific models. Equations 2 and 4 represent theoretical ideals for population-wide and instance-specific model selection, respectively; we are not suggesting they are practical to compute. The current paper focuses on model averaging, rather than model selection. Ideal Bayesian model averaging is given by Equation 1. Model averaging has previously been applied using population-wide models. Studies have shown that approximate Bayesian model averaging using population-wide models can improve predictive performance over population-wide model selection [2].
The current paper concentrates on investigating the predictive performance of approximate Bayesian model averaging using instance-specific models. 3 Instance-Specific Algorithm We present the implementation of the lazy instance-specific algorithm based on the above framework. ISA searches the space of a restricted class of Bayesian networks to select a subset of the models over which to derive a weighted (averaged) posterior of the target variable Zt. A key characteristic of the search is the use of a heuristic to select models that will have a significant influence on the weighted posterior. We introduce Bayesian networks briefly and then describe ISA in detail. 3.1 Bayesian Networks A Bayesian network is a probabilistic model that combines a graphical representation (the Bayesian network structure) with quantitative information (the parameters of the Bayesian network) to represent the joint probability distribution over a set of random variables [3]. Specifically, a Bayesian network M representing the set of variables X consists of a pair (G, ΘG). G is a directed acyclic graph that contains a node for every variable in X and an arc between every pair of nodes if the corresponding variables are directly probabilistically dependent. Conversely, the absence of an arc between a pair of nodes denotes probabilistic independence between the corresponding variables. ΘG represents the parameterization of the model. In a Bayesian network M, the immediate predecessors of a node Xi in X are called the parents of Xi and the successors, both immediate and remote, of Xi in X are called the descendants of Xi. The immediate successors of Xi are called the children of Xi. For each node Xi there is a local probability distribution (that may be discrete or continuous) on that node given the state of its parents. 
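To make the role of the local distributions concrete, here is a toy two-variable Bayesian network whose joint is obtained by multiplying each node's local distribution given its parents (the factorization formalized in Equation 5 below). The network, the numbers, and the dictionary representation are illustrative, not from the paper.

```python
def joint_prob(assignment, parents, cpt):
    """P(X1, ..., Xn) = product over i of P(Xi | parents(Xi)), with each
    local distribution stored as cpt[v][(parent_values, value)]."""
    p = 1.0
    for v, val in assignment.items():
        pa = tuple(assignment[u] for u in parents[v])
        p *= cpt[v][(pa, val)]
    return p

# Toy network Z -> X over binary variables.
parents = {"Z": [], "X": ["Z"]}
cpt = {
    "Z": {((), 0): 0.6, ((), 1): 0.4},
    "X": {((0,), 0): 0.9, ((0,), 1): 0.1,
          ((1,), 0): 0.2, ((1,), 1): 0.8},
}
p = joint_prob({"Z": 1, "X": 1}, parents, cpt)   # 0.4 * 0.8
```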
The complete joint probability distribution over X, represented by the parameterization ΘG, can be factored into a product of local probability distributions defined on each node in the network. This factorization is determined by the independences captured by the structure of the Bayesian network and is formalized in the Bayesian network Markov condition: a node (representing a variable) is independent of its nondescendants given just its parents. According to this Markov condition, the joint probability distribution on model variables X = (X1, X2, …, Xn) can be factored as follows:

P(X1, X2, …, Xn) = Πi=1..n P(Xi | parents(Xi)) , (5)

where parents(Xi) denotes the set of nodes that are the parents of Xi. If Xi has no parents, then the set parents(Xi) is empty and P(Xi | parents(Xi)) is just P(Xi).

3.2 ISA Models

The LBR models of Zheng and Webb [1] can be represented as members of a restricted class of Bayesian networks (see Figure 1). We use the same class of Bayesian networks for the ISA models, to facilitate comparison between the two algorithms. In Figure 1, all nodes represent attributes that are discrete. Each node in X either has an outgoing arc into the target node Z or receives an arc from Z. That is, each node is either a parent or a child of Z. Thus, X is partitioned into two sets: the first containing nodes (X1, …, Xj in Figure 1) each of which is a parent of Z and of every node in the second set, and the second containing nodes (Xj+1, …, Xk in Figure 1) that have as parents the node Z and every node in the first set. The nodes in the first set are instantiated to the corresponding values in the test instance for which Zt is to be predicted. Thus, the first set of nodes represents the antecedent of the LBR rule and the second set of nodes represents the consequent.

Figure 1: An example of a Bayesian network LBR model with target node Z and k attribute nodes, of which X1, …, Xj are instantiated to values x1, …, xj in xt.
X1, …, Xj are present in the antecedent of the LBR rule and Z, Xj+1, …, Xk (that form the local simple Bayes classifier) are present in the consequent. The indices need not be ordered as shown, but are presented in this example for convenience of exposition.

3.3 Model Averaging

For Bayesian networks, Equation 1 can be evaluated as follows:

P(Zt | xt, D) = ΣM P(Zt | xt, M) P(M | D) , (6)

with M being a Bayesian network comprised of structure G and parameters ΘG. The probability distribution of interest is a weighted average of the posterior distribution over all possible Bayesian networks, where the weight is the probability of the Bayesian network given the data. Since exhaustive enumeration of all possible models is not feasible, even for this class of simple Bayesian networks, we approximate exact model averaging with selective model averaging. Let R be the set of models selected by the search procedure from all possible models in the model space, as described in the next section. Then, with selective model averaging, P(Zt | xt, D) is estimated as:

P(Zt | xt, D) ≅ [ ΣM∈R P(Zt | xt, M) P(M | D) ] / [ ΣM∈R P(M | D) ] . (7)

Assuming uniform prior belief over all possible models, the model posterior P(M | D) in Equation 7 can be replaced by the marginal likelihood P(D | M), to obtain the following equation:

P(Zt | xt, D) ≅ [ ΣM∈R P(Zt | xt, M) P(D | M) ] / [ ΣM∈R P(D | M) ] . (8)

The (unconditional) marginal likelihood P(D | M) in Equation 8 is a measure of the goodness of fit of the model to the data and is also known as the model score. While this score is suitable for assessing the model's fit to the joint probability distribution, it is not necessarily appropriate for assessing the goodness of fit to a conditional probability distribution, which is the focus in prediction and classification tasks, as is the case here.
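The selective averaging of Equation 8 is just a likelihood-weighted average of the models' class posteriors, renormalized over the selected set R. A minimal sketch, with illustrative numbers of our own:

```python
def averaged_posterior(preds, weights):
    """Equation 8: weight each model's posterior P(Zt | xt, M) by its
    marginal likelihood P(D | M) and renormalize over the selected set R."""
    total = sum(weights)
    n_classes = len(preds[0])
    return [sum(p[c] * w for p, w in zip(preds, weights)) / total
            for c in range(n_classes)]

preds = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]   # P(Zt | xt, M) per model
weights = [0.5, 0.3, 0.2]                      # P(D | M) per model (toy values)
post = averaged_posterior(preds, weights)
```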
A more suitable score in this situation is a conditional model score that is computed from training data D of d instances as:

score(D, M) = Πp=1..d P(zp | x1, z1, …, xp−1, zp−1, xp, M) . (9)

This score is computed in a predictive and sequential fashion: for the pth training instance the probability of predicting the observed value zp for the target variable is computed based on the values of all the variables in the preceding p−1 training instances and the values xp of the attributes in the pth instance. One limitation of this score is that its value depends on the ordering of the data. Despite this limitation, it has been shown to be an effective scoring criterion for classification models [4]. The parameters of the Bayesian network M, used in the above computations, are defined as follows:

θijk ≡ P(Xi = k | parents(Xi) = j) = (Nijk + αijk) / (Nij + αij) , (10)

where (i) Nijk is the number of instances in the training dataset D where variable Xi has value k and the parents of Xi are in state j, (ii) Nij = Σk Nijk, (iii) αijk is a parameter prior that can be interpreted as the belief equivalent of having previously observed αijk instances in which variable Xi has value k and the parents of Xi are in state j, and (iv) αij = Σk αijk.

3.4 Model Search

We use a two-phase best-first heuristic search to sample the model space. The first phase ignores the evidence xt in the test instance while searching for models that have high scores as given by Equation 9. This is followed by the second phase, which searches for models having the greatest impact on the prediction of Zt for the test instance, which we formalize below. The first phase searches for models that predict Z in the training data very well; these are the models that have high conditional model scores. The initial model is the simple Bayes network that includes all the attributes in X as children of Z.
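The two scoring ingredients above, the sequential conditional score of Equation 9 and the smoothed parameter estimates of Equation 10, reduce to a few lines. This is a toy sketch with our own numbers; the per-instance predictive probabilities are assumed to have been produced by models fit to the preceding instances.

```python
def theta(N_ijk, N_ij, alpha_ijk, alpha_ij):
    """Equation 10: smoothed estimate of P(Xi = k | parents(Xi) = j)."""
    return (N_ijk + alpha_ijk) / (N_ij + alpha_ij)

def conditional_score(predictive_probs):
    """Equation 9: product over the d training instances of the probability
    assigned to the observed target zp by a model fit to the preceding
    p-1 instances (the per-instance probabilities are precomputed)."""
    score = 1.0
    for p in predictive_probs:
        score *= p
    return score

s = conditional_score([0.8, 0.9, 0.75])           # 0.8 * 0.9 * 0.75
t = theta(N_ijk=3, N_ij=5, alpha_ijk=1, alpha_ij=2)   # (3 + 1) / (5 + 2)
```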
A succeeding model is derived from a current model by reversing the arc of a child node in the current model, adding new outgoing arcs from it to Z and the remaining children, and instantiating this node to the value in the test instance. This process is performed for each child in the current model. An incoming arc of a child node is considered for reversal only if the node's value is not missing in the test instance. The newly derived models are added to a priority queue, Q. During each iteration of the search, the model with the highest score (given by Equation 9) is removed from Q and placed in a set R, following which new models are generated as described just above, scored and added to Q. The first phase terminates after a user-specified number of models have accumulated in R. The second phase searches for models that change the current model-averaged estimate of P(Zt | xt, D) the most. The idea here is to find viable competing models for making this posterior probability prediction. When no competitive models can be found, the prediction becomes stable. During each iteration of the search, the highest ranked model M* is removed from Q and added to R. The ranking is based on how much the model changes the current estimate of P(Zt | xt, D). More change is better. In particular, M* is the model in Q that maximizes the following function:

f(R, M*) = | g(R) − g(R ∪ {M*}) | , (11)

where for a set of models S, the function g(S) computes the approximate model averaged prediction for Zt, as follows:

g(S) = [ ΣM∈S P(Zt | xt, M) score(D, M) ] / [ ΣM∈S score(D, M) ] . (12)

The second phase terminates when no new model can be found that has a value (as given by Equation 11) greater than a user-specified minimum threshold T. The final distribution of Zt is then computed from the models in R using Equation 8.

4 Evaluation

We evaluated ISA on the 29 UCI datasets that Zheng and Webb used for the evaluation of LBR.
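The phase-2 ranking of Equations 11 and 12 can be sketched directly: each model is reduced here to a (prediction, score) pair for a single target value, and the candidate that moves the averaged estimate the most wins. The pair representation and the numbers are ours, for illustration only.

```python
def g(models):
    """Equation 12: score-weighted average of P(Zt = z | xt, M) over a set
    of models, each given as a (prediction, score) pair for one value z."""
    total = sum(score for _, score in models)
    return sum(pred * score for pred, score in models) / total

def phase2_pick(R, Q):
    """Phase-2 ranking (Equation 11): pick the candidate M* in Q that
    changes the current averaged estimate g(R) the most."""
    return max(Q, key=lambda M: abs(g(R) - g(R + [M])))

R = [(0.9, 0.5), (0.8, 0.4)]     # models already selected
Q = [(0.85, 0.1), (0.1, 0.4)]    # candidate queue
best = phase2_pick(R, Q)         # the dissenting model wins
```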
On the same datasets, we also evaluated a simple Bayes classifier (SB) and LBR. For SB and LBR, we used the Weka implementations (Weka v3.3.6, http://www.cs.waikato.ac.nz/ml/weka/) with default settings [5]. We implemented the ISA algorithm as a standalone application in Java. The following settings were used for ISA: a maximum of 100 phase-1 models, a threshold T of 0.001 in phase 2, and an upper limit of 500 models in R. For the parameter priors in Equation 10, all αijk were set to 1. All error rates were obtained by averaging the results from two stratified 10-fold cross-validations (20 trials total), similar to the procedure used by Zheng and Webb. Since both LBR and ISA can handle only discrete attributes, all numeric attributes were discretized in a pre-processing step using the entropy-based discretization method described in [6]. For each pair of training and test folds, the discretization intervals were first estimated from the training fold and then applied to both folds. The error rates of two algorithms on a dataset were compared with a paired t-test carried out at the 5% significance level on the error rate statistics obtained from the 20 trials. The results are shown in Table 1. Compared to SB, ISA has significantly fewer errors on 9 datasets and significantly more errors on one dataset. Compared to LBR, ISA has significantly fewer errors on 7 datasets and significantly more errors on two datasets. On two datasets, chess and tic-tac-toe, ISA shows considerable improvement in performance over both SB and LBR.

Table 1: Percent error rates of simple Bayes (SB), Lazy Bayesian Rule (LBR) and Instance-Specific Averaging (ISA). A "-" indicates that the ISA error rate is statistically significantly lower than the marked SB or LBR error rate. A "+" indicates that the ISA error rate is statistically significantly higher.

Dataset           Size  No. of classes  Num. Attrib.  Nom. Attrib.    SB      LBR     ISA
Annealing          898        6              6            32          3.5 -   2.7 -   1.9
Audiology          226       24              0            69         29.6    29.4    30.9
Breast (W)         699        2              9             0          2.9 +   2.8 +   3.7
Chess (KR-KP)     3169        2              0            36         12.1 -   3.0 -   1.1
Credit (A)         690        2              6             9         13.8    14.0    13.9
Echocardiogram     131        2              6             1         33.2    34.0    35.9
Glass              214        6              9             0         26.9    27.8    29.0
Heart (C)          303        2             13             0         16.2    16.2    17.5
Hepatitis          155        2              6            13         14.2 -  14.2 -  11.3
Horse colic        368        2              7            15         20.2    16.0    17.8
House votes 84     435        2              0            16         10.1 -   7.0 -   5.1
Hypothyroid       3163        2              7            18          1.4 -   0.9     0.9
Iris               150        3              4             0          6.0     6.0     5.3
Labor               57        2              8             8          8.8     6.1     7.0
LED 24             200       10              0            24         40.5    40.5    40.3
Liver disorders    345        2              6             0         36.8    36.8    36.8
Lung cancer         32        3              0            56         56.3    56.3    56.3
Lymphography       148        4              0            18         15.5 -  15.5 -  13.2
Pima               768        2              8             0         21.8    22.0    22.3
Postoperative       90        3              1             7         33.3    33.3    33.3
Primary tumor      339       22              0            17         54.4    53.5    54.2
Promoters          106        2              0            57          7.5     7.5     7.5
Solar flare       1389        2              0            10         20.2    18.3 +  19.4
Sonar              208        2             60             0         15.4    15.6    15.9
Soybean            683       19              0            35          7.9 -   7.1     7.2
Splice junction   3177        3              0            60          4.7     4.3     4.4
Tic-Tac-Toe        958        2              0             9         30.3 -  13.7 -  10.3
Wine               178        3             13             0          1.1     1.1     1.1
Zoo                101        7              0            16          8.4 -   8.4 -   6.4

With respect to computation times, ISA took 6 times longer to run than LBR on average for a single test instance on a desktop computer with a 2 GHz Pentium 4 processor and 3 GB of RAM.

5 Conclusions and Future Research

We have introduced a Bayesian framework for instance-specific model averaging and presented ISA as one example of a classification algorithm based on this framework. An instance-specific algorithm like LBR that does model selection has been shown by Zheng and Webb to perform classification better than several eager algorithms [1].
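The significance comparison above is an ordinary paired t-test over the 20 per-trial error rates. A stdlib-only sketch (the error-rate values below are invented for illustration; the 2.093 constant is the standard two-sided 5% critical value for 19 degrees of freedom):

```python
import math

def paired_t_statistic(errors_a, errors_b):
    """Paired t statistic over per-trial error rates of two algorithms."""
    d = [x - y for x, y in zip(errors_a, errors_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Illustrative per-trial error rates for the 20 trials (2 x 10-fold CV).
a = [0.10, 0.12, 0.11, 0.09, 0.10, 0.13, 0.12, 0.11, 0.10, 0.12,
     0.11, 0.10, 0.12, 0.13, 0.11, 0.09, 0.10, 0.12, 0.11, 0.10]
b = [x * 1.2 for x in a]          # second algorithm: 20% worse everywhere
t = paired_t_statistic(a, b)
# Two-sided 5% critical value for 19 degrees of freedom is about 2.093.
significant = abs(t) > 2.093
```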
Our results show that ISA, which extends LBR by adding Bayesian model averaging, improves overall on LBR. This supports the claim that additional prediction improvement can be obtained by performing instance-specific model averaging rather than just instance-specific model selection. In future work, we plan to explore further the behavior of ISA with respect to the number of models being averaged and the effect of the number of models selected in each of the two phases of the search. We will also investigate methods to improve the computational efficiency of ISA. In addition, we plan to examine other heuristics for model search as well as more general model spaces such as unrestricted Bayesian networks. The instance-specific framework is not restricted to the Bayesian network models that we have used in this investigation. In the future, we plan to explore other models using this framework. Our ultimate interest is to apply these instance-specific algorithms to improve patient-specific predictions (for diagnosis, therapy selection, and prognosis) and thereby to improve patient care.

Acknowledgments

This work was supported by the grant T15-LM/DE07059 from the National Library of Medicine (NLM) to the University of Pittsburgh's Biomedical Informatics Training Program. We would like to thank the three anonymous reviewers for their helpful comments.

References

[1] Zheng, Z. and Webb, G.I. (2000). Lazy Learning of Bayesian Rules. Machine Learning, 41(1):53-84.
[2] Hoeting, J.A., Madigan, D., Raftery, A.E. and Volinsky, C.T. (1999). Bayesian Model Averaging: A Tutorial. Statistical Science, 14:382-417.
[3] Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA.
[4] Kontkanen, P., Myllymaki, P., Silander, T., and Tirri, H. (1999). On Supervised Selection of Bayesian Networks. In Proceedings of the 15th International Conference on Uncertainty in Artificial Intelligence, pages 334-342, Stockholm, Sweden. Morgan Kaufmann.
[5] Witten, I.H. and Frank, E. (2000). Data Mining: Practical Machine Learning Tools with Java Implementations. Morgan Kaufmann, San Francisco, CA.
[6] Fayyad, U.M., and Irani, K.B. (1993). Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, pages 1022-1027, San Mateo, CA. Morgan Kaufmann.
|
2004
|
181
|
2,598
|
Hierarchical Eigensolver for Transition Matrices in Spectral Methods

Chakra Chennubhotla∗ and Allan D. Jepson†
∗Department of Computational Biology, University of Pittsburgh
†Department of Computer Science, University of Toronto

Abstract

We show how to build a hierarchical, reduced-rank representation for large stochastic matrices and use this representation to design an efficient algorithm for computing the largest eigenvalues, and the corresponding eigenvectors. In particular, the eigen problem is first solved at the coarsest level of the representation. The approximate eigen solution is then interpolated over successive levels of the hierarchy. A small number of power iterations are employed at each stage to correct the eigen solution. The typical speedups obtained by a Matlab implementation of our fast eigensolver over a standard sparse matrix eigensolver [13] are at least a factor of ten for large image sizes. The hierarchical representation has proven to be effective in a min-cut based segmentation algorithm that we proposed recently [8].

1 Spectral Methods

Graph-theoretic spectral methods have gained popularity in a variety of application domains: segmenting images [22]; embedding in low-dimensional spaces [4, 5, 8]; and clustering parallel scientific computation tasks [19]. Spectral methods enable the study of properties global to a dataset, using only local (pairwise) similarity or affinity measurements between the data points. The global properties that emerge are best understood in terms of a random walk formulation on the graph. For example, the graph can be partitioned into clusters by analyzing the perturbations to the stationary distribution of a Markovian relaxation process defined in terms of the affinity weights [17, 18, 24, 7]. The Markovian relaxation process need never be explicitly carried out; instead, it can be analytically expressed using the leading order eigenvectors, and eigenvalues, of the Markov transition matrix.
In this paper we consider the practical application of spectral methods to large datasets. In particular, the eigen decomposition can be very expensive, on the order of O(n^3), where n is the number of nodes in the graph. While it is possible to compute analytically the first eigenvector (see §3 below), the remaining subspace of vectors (necessary for, say, clustering) has to be explicitly computed. A typical approach to dealing with this difficulty is to first sparsify the links in the graph [22] and then apply an efficient eigensolver [13, 23, 3]. In comparison, we propose in this paper a specialized eigensolver suitable for large stochastic matrices with known stationary distributions. In particular, we exploit the spectral properties of the Markov transition matrix to generate hierarchical, successively lower-ranked approximations to the full transition matrix. The eigen problem is solved directly at the coarsest level of representation. The approximate eigen solution is then interpolated over successive levels of the hierarchy, using a small number of power iterations to correct the solution at each stage.

2 Previous Work

One approach to speeding up the eigen decomposition is to use the fact that the columns of the affinity matrix are typically correlated. The idea then is to pick a small number of representative columns to perform eigen decomposition via SVD. For example, in the Nyström approximation procedure, originally proposed for integral eigenvalue problems, the idea is to randomly pick a small set of m columns; generate the corresponding affinity matrix; solve the eigenproblem and finally extend the solution to the complete graph [9, 10]. The Nyström method has also been recently applied in the kernel learning methods for fast Gaussian process classification and regression [25]. Other sampling-based approaches include the work reported in [1, 2, 11].
Our starting point is the transition matrix generated from affinity weights, and we show how building a representational hierarchy follows naturally from considering the stochastic matrix. A closely related work is the paper by Lin on reduced rank approximations of transition matrices [14]. We differ in how we approximate the transition matrices; in particular, our objective function is computationally less expensive to solve. In particular, one of our goals in reducing transition matrices is to develop a fast, specialized eigen solver for spectral clustering. Fast eigensolving is also the goal in ACE [12], where successive levels in the hierarchy can potentially have negative affinities. A graph coarsening process for clustering was also pursued in [21, 3].

3 Markov Chain Terminology

We first provide a brief overview of the Markov chain terminology here (for more details see [17, 15, 6]). We consider an undirected graph G = (V, E) with vertices vi, for i = {1, …, n}, and edges ei,j with non-negative weights ai,j. Here the weight ai,j represents the affinity between vertices vi and vj. The affinities are represented by a non-negative, symmetric n × n matrix A having weights ai,j as elements. The degree of a node j is defined to be dj = Σi=1..n ai,j = Σi=1..n aj,i, where we define D = diag(d1, …, dn). A Markov chain is defined using these affinities by setting a transition probability matrix M = AD−1, where the columns of M each sum to 1. The transition probability matrix defines the random walk of a particle on the graph G. The random walk need never be explicitly carried out; instead, it can be analytically expressed using the leading order eigenvectors, and eigenvalues, of the Markov transition matrix. Because the stochastic matrices need not be symmetric in general, a direct eigen decomposition step is not preferred for reasons of instability.
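The construction of the column-stochastic matrix M = A D^{-1} from an affinity matrix is a one-liner with broadcasting; the tiny affinity matrix below is ours, for illustration.

```python
import numpy as np

def transition_matrix(A):
    """Column-stochastic Markov matrix M = A D^{-1} from a symmetric,
    non-negative affinity matrix A with degrees d_j = sum_i a_{i,j}."""
    d = A.sum(axis=0)
    return A / d  # divides each column j by d_j (broadcast over rows)

# Tiny 3-node example affinity (symmetric, non-negative).
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 1.0]])
M = transition_matrix(A)
```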
This problem is easily circumvented by considering the normalized affinity matrix L = D^{-1/2} A D^{-1/2}, which is related to the stochastic matrix by a similarity transformation: L = D^{-1/2} M D^{1/2}. Because L is symmetric, it can be diagonalized: L = U Λ U^T, where U = [u_1, u_2, · · · , u_n] is an orthogonal set of eigenvectors and Λ is a diagonal matrix of eigenvalues [λ_1, λ_2, · · · , λ_n] sorted in decreasing order. The eigenvectors have unit length, ‖u_k‖ = 1, and from the form of A and D it can be shown that the eigenvalues satisfy λ_i ∈ (−1, 1], with at least one eigenvalue equal to one. Without loss of generality, we take λ_1 = 1. Because L and M are similar, we can perform an eigendecomposition of the Markov transition matrix as M = D^{1/2} L D^{-1/2} = D^{1/2} U Λ U^T D^{-1/2}. Thus an eigenvector u of L corresponds to an eigenvector D^{1/2} u of M with the same eigenvalue λ. The Markovian relaxation process after β iterations, namely M^β, can be represented as M^β = D^{1/2} U Λ^β U^T D^{-1/2}. Therefore, a particle undertaking a random walk with an initial distribution p^0 acquires after β steps a distribution p^β given by p^β = M^β p^0. Assuming the graph is connected, as β → ∞ the Markov chain approaches a unique stationary distribution π = diag(D) / Σ_{i=1..n} d_i, and thus M^∞ = π 1^T, where 1 is an n-dimensional column vector of all ones. Observe that π is an eigenvector of M, as it is easy to show that Mπ = π, with corresponding eigenvalue 1. Next, we show how to generate hierarchical, successively lower-ranked approximations of the transition matrix M.

4 Building a Hierarchy of Transition Matrices

The goal is to generate a very fast approximation, while simultaneously achieving sufficient accuracy. For notational ease, we think of M as a fine-scale representation and M̃ as some coarse-scale approximation to be derived here. By coarsening M̃ further, we can generate successive levels of the representation hierarchy.
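The constructions above can be sketched in a few lines of numpy; the 4-node affinity matrix here is a toy example of our own choosing:

```python
import numpy as np

# Toy symmetric affinity matrix for a connected 4-node graph (assumed values).
A = np.array([[0., 2., 1., 0.],
              [2., 0., 1., 0.],
              [1., 1., 0., 3.],
              [0., 0., 3., 0.]])
d = A.sum(axis=0)                        # node degrees d_j
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))

M = A / d                                # M = A D^{-1}: each column sums to 1
L = D_inv_sqrt @ A @ D_inv_sqrt          # L = D^{-1/2} A D^{-1/2}, symmetric

lam, U = np.linalg.eigh(L)               # stable symmetric eigendecomposition
order = np.argsort(lam)[::-1]            # sort eigenvalues in decreasing order
lam, U = lam[order], U[:, order]

# An eigenvector u of L maps to the eigenvector D^{1/2} u of M, same eigenvalue
u1 = np.sqrt(d) * U[:, 0]
assert np.allclose(M @ u1, lam[0] * u1)

# Stationary distribution pi = diag(D) / sum_i d_i, eigenvalue-1 eigenvector of M
pi = d / d.sum()
assert np.allclose(M @ pi, pi)
```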
We use the stationary distribution π to construct a corresponding coarse-scale stationary distribution δ. As just discussed, a critical property of the fine-scale Markov matrix M is that it is similar to the symmetric matrix L, and we wish to preserve this property at every level of the representation hierarchy.

4.1 Deriving the Coarse-Scale Stationary Distribution

We begin by expressing the stationary distribution π as a probabilistic mixture of latent distributions. In matrix notation, we have

    π = K δ,    (1)

where δ is an unknown mixture-coefficient vector of length m, and K is an n × m non-negative kernel matrix whose columns are latent distributions that each sum to 1: Σ_{i=1..n} K_{i,j} = 1, with m ≪ n. It is easy to derive a maximum-likelihood approximation of δ using an EM-type algorithm [16]. The main step is to find a stationary point (δ, λ) of the Lagrangian

    E ≡ −Σ_{i=1..n} π_i ln( Σ_{j=1..m} K_{i,j} δ_j ) + λ ( Σ_{j=1..m} δ_j − 1 ).    (2)

An implicit step in this EM procedure is to compute the ownership probability r_{i,j} of the j-th kernel (or node) at the coarse scale for the i-th node on the fine scale, given by

    r_{i,j} = δ_j K_{i,j} / Σ_{k=1..m} δ_k K_{i,k}.    (3)

The EM procedure allows for an update of both δ and the latent distributions in the kernel matrix K (see §8.3.1 in [6]). For initialization, δ is taken to be uniform over the coarse-scale states. But in choosing the kernels K, we provide a good initialization for the EM procedure. Specifically, the Markov matrix M is diffused using a small number of iterations to get M^β. The diffusion causes random walks from neighboring nodes to become less distinguishable. This in turn helps us select a small number of columns of M^β in a fast and greedy way to serve as the kernel matrix K. We defer the exact details of kernel selection to §4.3.

4.2 Deriving the Coarse-Scale Transition Matrix

In order to define M̃, the coarse-scale transition matrix, we break the derivation down into three steps.
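The EM iteration of §4.1 (Eqs. 1–3) can be sketched as follows; the function name, kernel matrix, and iteration count in the test are illustrative assumptions:

```python
import numpy as np

def fit_delta(pi, K, iters=200):
    """EM updates for the coarse stationary distribution delta in pi = K delta.
    pi: length-n stationary distribution; K: n x m kernel matrix whose columns
    sum to 1. Returns delta (length m) and ownership probabilities r (n x m)."""
    n, m = K.shape
    delta = np.full(m, 1.0 / m)                  # uniform initialization
    for _ in range(iters):
        # E-step: ownership r_{i,j} = delta_j K_{i,j} / sum_k delta_k K_{i,k}
        r = (K * delta) / (K @ delta)[:, None]
        # M-step: re-estimate mixture weights from pi-weighted ownerships
        delta = r.T @ pi
        delta /= delta.sum()                     # stay on the simplex
    return delta, r
```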
First, the Markov chain propagation at the coarse scale can be defined as

    q^{k+1} = M̃ q^k,    (4)

where q^k is the coarse-scale probability distribution after k steps of the random walk. Second, we expand q^k into the fine scale using the kernels K, resulting in a fine-scale probability distribution p^k:

    p^k = K q^k.    (5)

Finally, we lift p^k back into the coarse scale using the ownership probability of the j-th kernel for the i-th node on the fine grid:

    q^{k+1}_j = Σ_{i=1..n} r_{i,j} p^k_i.    (6)

Substituting Eqs. (3) and (5) into Eq. (6) gives

    q^{k+1}_j = Σ_{i=1..n} r_{i,j} Σ_{t=1..m} K_{i,t} q^k_t = Σ_{i=1..n} [ δ_j K_{i,j} / Σ_{k=1..m} δ_k K_{i,k} ] Σ_{t=1..m} K_{i,t} q^k_t.    (7)

We can write the preceding equation in matrix form:

    q^{k+1} = diag(δ) K^T diag(Kδ)^{-1} K q^k.    (8)

Comparing this with Eq. (4), we can identify the transition matrix M̃ as

    M̃ = diag(δ) K^T diag(Kδ)^{-1} K.    (9)

It is easy to see that M̃δ = δ, so δ is the stationary distribution of M̃. Following the definition of M̃ and its stationary distribution δ, we can generate a symmetric coarse-scale affinity matrix Ã given by

    Ã = M̃ diag(δ) = diag(δ) K^T diag(Kδ)^{-1} K diag(δ),    (10)

where we substitute the expression for M̃ from Eq. (9). The coarse-scale affinity matrix Ã is then normalized to get

    L̃ = D̃^{-1/2} Ã D̃^{-1/2};  D̃ = diag(d̃_1, d̃_2, · · · , d̃_m),    (11)

where d̃_j is the degree of node j in the coarse-scale graph represented by the matrix Ã (see §3 for the definition of degree). Thus, the coarse-scale Markov matrix M̃ is precisely similar to a symmetric matrix L̃.

4.3 Selecting Kernels

For demonstration purposes, we present the kernel-selection details on the image of an eye shown below. To begin, a random walk is defined where each pixel in the test image is associated with a vertex of the graph G. The edges in G are defined by the standard 8-neighbourhood of each pixel.
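Eq. (9) and its two advertised properties (column-stochasticity, and δ as stationary distribution) are easy to check numerically; the kernel matrix and δ below are made-up values:

```python
import numpy as np

def coarse_transition(K, delta):
    """Coarse-scale transition matrix of Eq. (9):
    M~ = diag(delta) K^T diag(K delta)^{-1} K."""
    return np.diag(delta) @ K.T @ np.diag(1.0 / (K @ delta)) @ K
```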
For the demonstrations in this paper, the edge weight a_{i,j} between neighbouring pixels x_i and x_j is given by a function of the difference in the corresponding intensities I(x_i) and I(x_j): a_{i,j} = exp(−(I(x_i) − I(x_j))^2 / 2σ_a^2), where σ_a is set according to the median absolute difference |I(x_i) − I(x_j)| between neighbours, measured over the entire image. The affinity matrix A with these edge weights is then used to generate a Markov transition matrix M. The kernel-selection process we use is fast and greedy. First, the fine-scale Markov matrix M is diffused to M^β using β = 4. The Markov matrix M is sparse, as we make the affinity matrix A sparse. Every column in the diffused matrix M^β is a potential kernel. To facilitate the selection process, the second step is to rank-order the columns of M^β based on their probability value in the stationary distribution π. Third, the kernels (i.e., columns of M^β) are picked in such a way that for a kernel K_i, all of the neighbours of pixel i which are within the half-height of the maximum value in the kernel K_i are suppressed from the selection process. Finally, the kernel selection continues until every pixel in the image is within a half-height of the peak value of at least one kernel. If M is a full matrix, to avoid the expense of computing M^β explicitly, random kernel centers can be selected, and only the corresponding columns of M^β need be computed. We show results from a three-scale hierarchy on the eye image (below). The image has 25 × 20 pixels but is shown here enlarged for clarity. At the first coarse scale, 83 kernels are picked. The kernels each correspond to a different column in the fine-scale transition matrix, and the pixels giving rise to these kernels are shown numbered on the image.
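A simplified sketch of the greedy selection loop; the half-height suppression rule here is our reading of the criterion above, and the termination condition is a simplification of the one in the text:

```python
import numpy as np

def select_kernels(M, pi, beta=4):
    """Greedy kernel selection (simplified from Sec. 4.3): diffuse M to
    M^beta, visit nodes in order of decreasing stationary probability, and
    suppress nodes lying above half the peak value of any picked kernel."""
    Mb = np.linalg.matrix_power(M, beta)
    available = np.ones(len(pi), dtype=bool)
    picked = []
    for i in np.argsort(pi)[::-1]:           # most probable nodes first
        if available[i]:
            picked.append(i)
            k = Mb[:, i]
            available &= k < 0.5 * k.max()   # half-height suppression
    return picked, Mb[:, picked]             # center indices and kernel matrix K
```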
[Figure: the eye image with kernel centers numbered 1–83 at coarse scale 1, and 1–32 at coarse scale 2.]

Using these kernels as an initialization, the EM procedure derives a coarse-scale stationary distribution δ (Eq. 2), while simultaneously updating the kernel matrix. Using the newly updated kernel matrix K and the derived stationary distribution δ, a transition matrix M̃ is generated (Eq. 9). The coarse-scale Markov matrix is then diffused to M̃^β, again using β = 4. The kernel-selection algorithm is reapplied, this time picking 32 kernels for the second coarse scale. Larger values of β cause the coarser level to have fewer elements, but the exact number of elements depends on the form of the kernels themselves. For the random experiments described later in §6, we found that using β = 2 in the first iteration and β = 4 thereafter causes the number of kernels to be reduced by a factor of roughly 1/3 to 1/4 at each level. At coarser levels of the hierarchy, we expect the kernels to become less sparse, and so will the affinity and transition matrices. In order to promote sparsity at successive levels of the hierarchy, we sparsify Ã by zeroing out elements associated with "small" transition probabilities in M̃. However, in the experiments described later in §6, we observe that this sparsification step is not critical. To summarize, we use the stationary distribution π at the fine scale to derive a transition matrix M̃, and its stationary distribution δ, at the coarse scale. The coarse-scale transition matrix in turn helps to derive an affinity matrix Ã and its normalized version L̃. It is obvious that this procedure can be repeated recursively.
We describe next how to use this representation hierarchy for building a fast eigensolver.

5 Fast Eigensolver

Our goal in generating a hierarchical representation of a transition matrix is to develop a fast, specialized eigensolver for spectral clustering. To this end, we perform a full eigendecomposition of the normalized affinity matrix only at the coarsest level. As discussed in the previous section, the affinity matrix at the coarsest level is not likely to be sparse, hence it will need a full (as opposed to a sparse) eigensolver. However, it is typically the case that e ≤ m ≪ n (even in the case of the three-scale hierarchy just considered), and hence we expect this step to be the least expensive computationally. The resulting eigenvectors are interpolated to the next lower level of the hierarchy by a process described next. Because the eigen-interpolation process between every adjacent pair of scales in the hierarchy is the same, we will assume we have access to the leading eigenvectors Ũ (size m × e) of the normalized affinity matrix L̃ (size m × m), and describe how to generate the leading eigenvectors U (size n × e) and the leading eigenvalues S (size e × 1) of the fine-scale normalized affinity matrix L (size n × n). There are several steps to the eigen-interpolation process, and in the discussion that follows we refer to the lines of the pseudo-code presented below. First, the coarse-scale eigenvectors Ũ can be interpolated using the kernel matrix K to give U = KŨ, an approximation of the fine-scale eigenvectors (line 9). Second, interpolation alone is unlikely to set the directions of U exactly aligned with U_L, the vectors one would obtain by a direct eigendecomposition of the fine-scale normalized affinity matrix L. We therefore update the directions in U by applying a small number of power iterations with L, as given in lines 13–15.
function (U, S) = CoarseToFine(L, K, Ũ, S̃)
 1: INPUT
 2:   L, K  {L is n × n and K is n × m, where m ≪ n}
 3:   Ũ, S̃  {leading coarse-scale eigenvectors/eigenvalues of L̃; Ũ is m × e, e ≤ m}
 4: OUTPUT
 5:   U, S  {leading fine-scale eigenvectors/eigenvalues of L; U is n × e, S is e × 1}
 6: CONSTANTS: TOL = 1e-4; POWER_ITERS = 50
 7:
 8: TPI = min( POWER_ITERS, log(e × eps / TOL) / log(min(S̃)) )  {eps: machine accuracy}
 9: U = K Ũ  {interpolation from coarse to fine}
10: while not converged do
11:   U_old = U  {n × e matrix, e ≪ n}
12:   for i = 1 to TPI do
13:     U ⇐ L U
14:   end for
15:   U ⇐ Gram-Schmidt(U)  {orthogonalize U}
16:   L_e = U^T L U  {L may be sparse, but L_e need not be}
17:   U_e S_e U_e^T = svd(L_e)  {eigenanalysis of L_e, which is of size e × e}
18:   U ⇐ U U_e  {update the leading eigenvectors of L}
19:   S = diag(S_e)  {grab the leading eigenvalues of L}
20:   innerProd = 1 − diag(U_old^T U)  {1 is an e × 1 vector of all ones}
21:   converged = max[abs(innerProd)] < TOL
22: end while

Figure 1: Hierarchical eigensolver results. (a) Ground-truth eigenvalues S_L (red circles) compared with the multi-scale eigensolver spectrum S (blue line). (b) Relative absolute error between the eigenvalues: |S − S_L| / S_L. (c) Eigenvector mismatch, 1 − diag(|U^T U_L|), between the eigenvectors U derived by the multi-scale eigensolver and the ground truth U_L. Observe the slight mismatch in the last few eigenvectors, but excellent agreement in the leading eigenvectors (see text).

The number of power iterations TPI can be bounded as discussed next. Suppose v = Uc, where U is a matrix of true eigenvectors and c is a coefficient vector for an arbitrary vector v. After TPI power iterations, v becomes v = U diag(S^TPI) c, where S holds the exact eigenvalues.
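A Python sketch of the CoarseToFine pseudo-code above (our own translation, not the authors' Matlab code; we add an explicit sign-alignment step and an outer-loop cap, details the pseudo-code leaves implicit):

```python
import numpy as np

def coarse_to_fine(L, K, U_c, S_c, tol=1e-4, max_power_iters=50):
    """Interpolate coarse eigenvectors U_c (m x e) through K (n x m), refine
    them with power iterations on L, and extract Ritz values/vectors."""
    e = U_c.shape[1]
    eps = np.finfo(float).eps
    # Bound on the number of power iterations (line 8 of the pseudo-code)
    decay = np.log(e * eps / tol) / np.log(min(np.min(S_c), 1.0 - 1e-12))
    tpi = int(max(1, min(max_power_iters, decay)))
    U = K @ U_c                                  # coarse-to-fine interpolation
    for _ in range(100):                         # safety cap on the outer loop
        U_old = U.copy()
        for _ in range(tpi):                     # power iterations with L
            U = L @ U
        U, _ = np.linalg.qr(U)                   # Gram-Schmidt orthogonalization
        Le = U.T @ L @ U                         # project onto the e-dim subspace
        Ue, Se, _ = np.linalg.svd(Le)            # Rayleigh-Ritz on the e x e block
        U, S = U @ Ue, Se
        sign = np.sign((U_old * U).sum(axis=0))  # resolve SVD sign ambiguity
        sign[sign == 0] = 1.0
        U *= sign
        if np.max(np.abs(1.0 - (U_old * U).sum(axis=0))) < tol:
            break
    return U, S
```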
In order for the component of a vector v in the direction U_e (the e-th column of U) not to be swamped by the other components, we can limit its decay after TPI iterations as follows: (S(e)/S(1))^TPI ≥ e × eps / TOL, where S(e) is the exact e-th eigenvalue, S(1) = 1, eps is the machine precision, and TOL is the requested accuracy. Because we do not have access to the exact value S(e) at the beginning of the interpolation procedure, we estimate it from the coarse eigenvalues S̃. This leads to the bound on the power iterations TPI derived on line 8 above. Third, the interpolation process and the power iterations need not preserve orthogonality of the eigenvectors in U. We fix this with a Gram-Schmidt orthogonalization procedure (line 15). Finally, there is still a problem with power iterations that needs to be resolved: it is very hard to separate nearby eigenvalues. In particular, for the convergence of the power iterations, the ratio that matters is the one between the (e + 1)-st and the e-th eigenvalues. So the idea we pursue is to use the power iterations only to separate the reduced space of eigenvectors (of dimension e) from the orthogonal subspace (of dimension n − e). We then use a full SVD on the reduced space to update the leading eigenvectors U and eigenvalues S for the fine scale (lines 16–19). This idea is similar to computing the Ritz values and Ritz vectors in a Rayleigh-Ritz method.

6 Interpolation Results

Our multi-scale decomposition code is in Matlab. For the direct eigendecomposition, we have used the Matlab routine svds.m, which invokes the compiled ARPACK routine [13], with a default convergence tolerance of 1e-10. In Fig. 1a we compare the spectrum S obtained from a three-scale decomposition on the eye image (blue line) with the ground truth, i.e., the spectrum S_L resulting from a direct eigendecomposition of the fine-scale normalized affinity matrix L (red circles). There is excellent agreement in the leading eigenvalues.
To illustrate this, we show the absolute relative error between the spectra, |S − S_L| / S_L, in Fig. 1b. The spectra agree almost everywhere, except for the last few eigenvalues. For a quantitative comparison between the eigenvectors, we plot in Fig. 1c the measure 1 − diag(|U^T U_L|), where U is the matrix of eigenvectors obtained by the multi-scale approximation, U_L is the ground truth resulting from a direct eigendecomposition of the fine-scale affinity matrix L, and 1 is a vector of all ones. The plot demonstrates a close match, within the tolerance threshold of 1e-4 chosen for the multi-scale method, between the leading eigenvector directions of the two methods. The relative error is high for the last few eigenvectors, which suggests that the power iterations have not clearly separated them from the other directions. The strategy we therefore suggest is to pad the required number of leading eigenvectors by about 20% before invoking the multi-scale procedure. Obviously, the number of hierarchical stages for the multi-scale procedure must be chosen such that the transition matrix at the coarsest scale can accommodate this slight increase in subspace dimension. For lack of space we omit further results (see Ch. 8 in [6]). Next we measure the time the hierarchical eigensolver takes to compute the leading eigenbasis for various input sizes, in comparison with the svds.m procedure [13]. We form images of different sizes by Gaussian smoothing of i.i.d. noise. The Gaussian function has a standard deviation of 3 pixels. The edges in the graph G are defined by the standard 8-neighbourhood of each pixel. The edge weights between neighbouring pixels are given by a function of the difference in the corresponding intensities (see §4.3). The affinity matrix A with these edge weights is then used to generate a Markov transition matrix M.
The fast eigensolver is run on ten different instances of the input image of a given size, and the average of these times is reported here. For a fair comparison between the two procedures, we set the convergence tolerance of the svds.m procedure to 1e-4, the same value used for the fast eigensolver. We found the hierarchical representation derived at this tolerance threshold to be sufficiently accurate for the novel min-cut based segmentation results we reported in [8]. Also, the subspace dimensionality is fixed at 51, where we expect (and indeed observe) the leading 40 eigenpairs derived from the multi-scale procedure to be accurate. Hence, when invoking svds.m we compute only the leading 41 eigenpairs. In the table below, the first column gives the number of nodes in the graph, while the second and third columns report the time taken in seconds by the svds.m procedure and by the Matlab implementation of the multi-scale eigensolver, respectively. The fourth column reports the speedup of the multi-scale eigensolver over the svds.m procedure on a standard desktop (Intel P4, 2.5 GHz, 1 GB RAM). Lowering the tolerance threshold for svds.m made it faster by about 20-30%. Despite this, the multi-scale algorithm clearly outperforms the svds.m procedure. The most expensive step in the multi-scale algorithm is the power iteration required in the last stage, that is, interpolating eigenvectors from the first coarse scale to the required fine scale. The complexity is of order n × e, where e is the subspace dimensionality and n is the size of the graph. Indeed, from the table we can see that the multi-scale procedure takes time roughly proportional to n.
Deviations from the linear trend are observed at specific values of n, which we believe are due to variations in the difficulty of the specific eigenvalue problem (e.g., nearly multiple eigenvalues).

n     | svds.m  | Multi-Scale | Speedup
32²   | 1.6     | 1.5         | 1.1
63²   | 10.8    | 4.9         | 2.2
64²   | 20.5    | 5.5         | 3.7
65²   | 12.6    | 5.1         | 2.5
100²  | 44.2    | 13.1        | 3.4
127²  | 91.1    | 20.4        | 4.5
128²  | 230.9   | 35.2        | 6.6
129²  | 96.9    | 20.9        | 4.6
160²  | 179.3   | 34.4        | 5.2
255²  | 819.2   | 90.3        | 9.1
256²  | 2170.8  | 188.7       | 11.5
257²  | 871.7   | 93.3        | 9.3
511²  | 7977.2  | 458.8       | 17.4
512²  | 20269   | 739.3       | 27.4
513²  | 7887.2  | 461.9       | 17.1
600²  | 10841.4 | 644.2       | 16.8
700²  | 15048.8 | 1162.4      | 12.9
800²  | —       | 1936.6      | —

The hierarchical representation has proven to be effective in a min-cut based segmentation algorithm that we proposed recently [8]. There we explored the use of random walks and associated spectral embedding techniques for the automatic generation of suitable proposal (source and sink) regions for a min-cut based algorithm. The multi-scale algorithm was used to generate the 40 leading eigenvectors of large transition matrices (e.g., of size 20K × 20K). In terms of future work, it will be useful to compare our approach with other approximate methods for SVD, such as [23].

Acknowledgements: We thank S. Roweis, F. Estrada and M. Sakr for valuable comments.

References
[1] D. Achlioptas and F. McSherry. Fast Computation of Low-Rank Approximations. STOC, 2001.
[2] D. Achlioptas et al. Sampling Techniques for Kernel Methods. NIPS, 2001.
[3] S. Barnard and H. Simon. Fast Multilevel Implementation of Recursive Spectral Bisection for Partitioning Unstructured Problems. PPSC, 627-632.
[4] M. Belkin et al. Laplacian Eigenmaps and Spectral Techniques for Embedding. NIPS, 2001.
[5] M. Brand et al. A unifying theorem for spectral embedding and clustering. AI & STATS, 2002.
[6] C. Chennubhotla. Spectral Methods for Multi-scale Feature Extraction and Spectral Clustering. Ph.D. Thesis, Department of Computer Science, University of Toronto, Canada, 2004. http://www.cs.toronto.edu/~chakra/thesis.pdf
[7] C. Chennubhotla and A. Jepson.
Half-Lives of EigenFlows for Spectral Clustering. NIPS, 2002.
[8] F. Estrada, A. Jepson and C. Chennubhotla. Spectral Embedding and Min-Cut for Image Segmentation. Manuscript under review, 2004.
[9] C. Fowlkes et al. Efficient spatiotemporal grouping using the Nystrom method. CVPR, 2001.
[10] S. Belongie et al. Spectral Partitioning with Indefinite Kernels using the Nystrom approximation. ECCV, 2002.
[11] A. Frieze et al. Fast Monte-Carlo Algorithms for finding low-rank approximations. FOCS, 1998.
[12] Y. Koren et al. ACE: A Fast Multiscale Eigenvectors Computation for Drawing Huge Graphs. IEEE Symp. on InfoVis, 2002, pp. 137-144.
[13] R. B. Lehoucq, D. C. Sorensen and C. Yang. ARPACK User Guide: Solution of Large Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods. SIAM, 1998.
[14] J. J. Lin. Reduced Rank Approximations of Transition Matrices. AI & STATS, 2002.
[15] L. Lovász. Random Walks on Graphs: A Survey. Combinatorics, 1996, 353-398.
[16] G. J. McLachlan et al. Mixture Models: Inference and Applications to Clustering. 1988.
[17] M. Meila and J. Shi. A random walks view of spectral segmentation. AI & STATS, 2001.
[18] A. Ng, M. Jordan and Y. Weiss. On Spectral Clustering: analysis and an algorithm. NIPS, 2001.
[19] A. Pothen. Graph partitioning algorithms with applications to scientific computing. Parallel Numerical Algorithms, D. E. Keyes et al. (eds.), Kluwer Academic Press, 1996.
[20] G. L. Scott et al. Feature grouping by relocalization of eigenvectors of the proximity matrix. BMVC, pp. 103-108, 1990.
[21] E. Sharon et al. Fast Multiscale Image Segmentation. CVPR, I:70-77, 2000.
[22] J. Shi and J. Malik. Normalized cuts and image segmentation. PAMI, August 2000.
[23] H. Simon et al. Low-Rank Matrix Approximation Using the Lanczos Bidiagonalization Process with Applications. SIAM J. of Sci. Comp., 21(6):2257-2274, 2000.
[24] N. Tishby et al. Data clustering by Markovian Relaxation. NIPS, 2001.
[25] C. Williams et al. Using the Nystrom method to speed up kernel machines. NIPS, 2001.
Exponentiated Gradient Algorithms for Large-margin Structured Classification Peter L. Bartlett U.C.Berkeley bartlett@stat.berkeley.edu Michael Collins MIT CSAIL mcollins@csail.mit.edu Ben Taskar Stanford University btaskar@cs.stanford.edu David McAllester TTI at Chicago mcallester@tti-c.org Abstract We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs. 1 Introduction Structured classification is the problem of predicting y from x in the case where y has meaningful internal structure. For example x might be a word string and y a sequence of part of speech labels, or x might be a Markov random field and y a labeling of x, or x might be a word string and y a parse of x. In these examples the number of possible labels y is exponential in the size of x. This paper presents a training algorithm for a general definition of structured classification covering both Markov random fields and parsing. We restrict our attention to linear discriminative classification. We assume that pairs ⟨x, y⟩ can be embedded in a linear feature space Φ(x, y), and that a predictive rule is determined by a direction (weight vector) w in that feature space. 
In linear discriminative prediction we select the y that has the greatest value for the inner product ⟨Φ(x, y), w⟩. Linear discrimination has been widely studied in the binary and multiclass setting [6, 4]. However, the case of structured labels has only recently been considered [2, 12, 3, 13]. The structured-label case takes into account the internal structure of y in the assignment of feature vectors, the computation of loss, and the definition and use of margins. We focus on a formulation where each label y is represented as a set of “parts”, or equivalently, as a bit-vector. Moreover, we assume that the feature vector for y and the loss for y are both linear in the individual bits of y. This formulation has the advantage that it naturally covers both simple labeling problems, such as part-of-speech tagging, as well as more complex problems such as parsing. We consider the large-margin optimization problem defined in [12] for selecting the classification direction w given a training sample. The starting-point for these methods is a primal problem that has one constraint for each possible labeling y; or equivalently a dual problem where each y has an associated dual variable. We give a new training algorithm that relies on an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The computation of these expectations appears to be a natural computational problem for structured problems, and has specific polynomial-time dynamic programming algorithms for some important examples: for example, the clique-tree belief propagation algorithm can be used in Markov random fields, and the inside-outside algorithm can be used in the case of weighted context-free grammars. 
The optimization method for structured labels relies on a more general result, specifically the application of exponentiated gradient (EG) updates [7, 8] to quadratic programs (QPs). We describe a method for solving QPs based on EG updates, and give bounds on its rate of convergence. The algorithm uses multiplicative updates on the dual parameters of the problem. In addition to their application to the structured-labels task, the EG updates lead to simple algorithms for optimizing "conventional" binary or multiclass SVM problems.

Related work: [2, 12, 3, 13] consider large-margin methods for Markov random fields and (weighted) context-free grammars. We consider the optimization problem defined in [12]; [12] use a row-generation approach based on Viterbi decoding combined with an SMO optimization method. [5] describe exponentiated gradient algorithms for SVMs, but for binary classification in the "hard-margin" case, without slack variables. We show that the EG-QP algorithm converges significantly faster than the rates shown in [5]. Multiplicative updates for SVMs are also described in [11], but unlike our method, the updates in [11] do not appear to factor in a way that allows algorithms for MRFs and WCFGs based on Gibbs-distribution representations. Our algorithms are related to those for conditional random fields (CRFs) [9]. CRFs define a linear model for structured problems, in a similar way to the models in our work, and also rely on the efficient computation of marginals in the training phase. Finally, see [1] for a longer version of the current paper, which includes more complete derivations and proofs.

2 The General Setting

We consider the problem of learning a function f : X → Y, where X is a set and Y is a countable set. We assume a loss function L : X × Y × Y → R+. The function L(x, y, ŷ) measures the loss when y is the true label for x and ŷ is a predicted label; typically, ŷ is the label proposed by some function f(x).
In general we will assume that L(x, y, ŷ) = 0 for y = ŷ. Given some distribution over examples (X, Y) in X × Y, our aim is to find a function with low expected loss, or risk, E L(X, Y, f(X)). We consider functions f which take a linear form. First, we assume a fixed function G which maps an input x to a set of candidates G(x). For all x, we assume that G(x) ⊆ Y, and that G(x) is finite. A second component of the model is a feature-vector representation Φ : X × Y → R^d. Given a parameter vector w ∈ R^d, we consider functions of the form

    f_w(x) = argmax_{y ∈ G(x)} ⟨Φ(x, y), w⟩.

Given n independent training examples (x_i, y_i) with the same distribution as (X, Y), we will formalize a large-margin optimization problem that is a generalization of support vector methods for binary classifiers, and is essentially the same as the formulation in [12]. The optimal parameters are taken to minimize the following regularized empirical risk function:

    (1/2)‖w‖² + C Σ_i max_y ( L(x_i, y_i, y) − m_{i,y}(w) )_+ ,

where m_{i,y}(w) = ⟨w, Φ(x_i, y_i)⟩ − ⟨w, Φ(x_i, y)⟩ is the "margin" on (i, y) and (z)_+ = max{z, 0}. This optimization can be expressed as the primal problem in Figure 1. Following [12], the dual of this problem is also shown in Figure 1.

Primal problem:
    min_{w, ε̄}  (1/2)‖w‖² + C Σ_i ε_i
    subject to the constraints:
    ∀i, ∀y ∈ G(x_i): ⟨w, Φ_{i,y}⟩ ≥ L_{i,y} − ε_i;   ∀i: ε_i ≥ 0.

Dual problem:
    max_ᾱ F(ᾱ), where
    F(ᾱ) = C Σ_{i,y} α_{i,y} L_{i,y} − (1/2) C² Σ_{i,y} Σ_{j,z} α_{i,y} α_{j,z} ⟨Φ_{i,y}, Φ_{j,z}⟩
    subject to the constraints:
    ∀i: Σ_y α_{i,y} = 1;   ∀i, y: α_{i,y} ≥ 0.

Relationship between optimal values: w* = C Σ_{i,y} α*_{i,y} Φ_{i,y}, where w* is the arg min of the primal problem and ᾱ* is the arg max of the dual problem.

Figure 1: The primal and dual problems. We use the definitions L_{i,y} = L(x_i, y_i, y) and Φ_{i,y} = Φ(x_i, y_i) − Φ(x_i, y). We assume that for all i, L_{i,y} = 0 for y = y_i. The constant C dictates the relative penalty for values of the slack variables ε_i which are greater than 0.

The dual is a quadratic program F(ᾱ) in the dual variables α_{i,y} for all i = 1 . . .
n, y ∈ G(x_i). The dual variables for each example are constrained to form a probability distribution over Y.

2.1 Models for structured classification

The problems we are interested in concern structured labels, which have a natural decomposition into "parts". Formally, we assume some countable set of parts, R. We also assume a function R which maps each object (x, y) ∈ X × Y to a finite subset of R; thus R(x, y) is the set of parts belonging to a particular object. In addition we assume a feature-vector representation φ of parts: this is a function φ : X × R → R^d. The feature vector for an object (x, y) is then a sum of the feature vectors for its parts, and we also assume that the loss function L(x, y, ŷ) decomposes into a sum over parts:

    Φ(x, y) = Σ_{r ∈ R(x,y)} φ(x, r),    L(x, y, ŷ) = Σ_{r ∈ R(x,ŷ)} l(x, y, r).

Here φ(x, r) is a "local" feature vector for part r paired with input x, and l(x, y, r) is a "local" loss for part r when proposed for the pair (x, y). For convenience we define indicator variables I(x, y, r), which are 1 if r ∈ R(x, y) and 0 otherwise. We also define the sets R(x_i) = ∪_{y ∈ G(x_i)} R(x_i, y) for all i = 1 . . . n.

Example 1: Markov Random Fields (MRFs). In an MRF the space of labels G(x), and their underlying structure, can be represented by a graph. The graph G = (V, E) is a collection of vertices V = {v_1, v_2, . . . , v_l} and edges E. Each vertex v_i ∈ V has a set of possible labels, Y_i. The set G(x) is then defined as Y_1 × Y_2 × · · · × Y_l. Each clique in the graph has a set of possible configurations: for example, if a particular clique contains vertices {v_3, v_5, v_6}, the set of possible configurations of this clique is Y_3 × Y_5 × Y_6. We define C to be the set of cliques in the graph, and for any c ∈ C we define Y(c) to be the set of possible configurations of that clique. We decompose each y ∈ G(x) into a set of parts by defining R(x, y) = {(c, a) ∈ R : c ∈ C, a ∈ Y(c), (c, a) is consistent with y}.
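To make the dual problem of Figure 1 concrete, here is a tiny flat (non-structured) instance with made-up numbers. The quadratic term of F(ᾱ) collapses to (1/2)‖w(ᾱ)‖² with w(ᾱ) = C Σ_{i,y} α_{i,y} Φ_{i,y}:

```python
import numpy as np

# Toy instance: 2 examples, 3 candidate labels each, 2 features (assumed data).
# Phi[i, y] stores Phi_{i,y} = Phi(x_i, y_i) - Phi(x_i, y); zero for y = y_i.
Phi = np.array([[[0., 0.], [1., -1.], [2., 0.]],
                [[0., 0.], [-1., 2.], [0., 1.]]])
Lmat = np.array([[0., 1., 2.],
                 [0., 2., 1.]])      # losses L_{i,y}, zero for the true label
C = 1.0

def F(alpha):
    """Dual objective of Figure 1, using the identity
    (C^2/2) sum <Phi, Phi> terms = (1/2) ||w(alpha)||^2."""
    w = C * np.einsum('iy,iyd->d', alpha, Phi)   # w = C sum_{i,y} a_{i,y} Phi_{i,y}
    return C * np.sum(alpha * Lmat) - 0.5 * (w @ w)

alpha = np.array([[0.5, 0.3, 0.2],   # each row is a distribution over labels
                  [0.6, 0.2, 0.2]])
# Here F(alpha) = 1.3 - 0.5 * 0.34 = 1.13
```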
The feature vector representation φ(x, c, a) for each part can essentially track any characteristics of the assignment a for clique c, together with any features of the input x. A number of choices for the loss function l(x, y, (c, a)) are possible. For example, consider the Hamming loss used in [12], defined as L(x, y, ŷ) = Σ_i I[y_i ≠ ŷ_i]. To achieve this, first assign each vertex v_i to a single one of the cliques in which it appears. Second, define l(x, y, (c, a)) to be the number of labels in the assignment (c, a) which are both incorrect and correspond to vertices which have been assigned to the clique c (note that assigning each vertex to a single clique avoids "double counting" of label errors).

Example 2: Weighted Context-Free Grammars (WCFGs). In this example x is an input string, and y is a "parse tree" for that string, i.e., a left-most derivation for x under some context-free grammar. The set G(x) is the set of all left-most derivations for x under the grammar.

Figure 2: The EG algorithm for structured problems. We use φ_{i,r} = φ(x_i, r) and l_{i,r} = l(x_i, y_i, r).
Inputs: A learning rate η.
Data structures: A vector θ̄ of variables θ_{i,r}, ∀i, ∀r ∈ R(x_i).
Definitions: α_{i,y}(θ̄) = exp( Σ_{r∈R(x_i,y)} θ_{i,r} ) / Z_i, where Z_i is a normalization term.
Algorithm:
• Choose initial values θ̄¹ for the θ_{i,r} variables (these values can be arbitrary).
• For t = 1 … T + 1:
– For i = 1 … n, r ∈ R(x_i), calculate μ^t_{i,r} = Σ_y α_{i,y}(θ̄^t) I(x_i, y, r).
– Set w^t = C ( Σ_{i, r∈R(x_i,y_i)} φ_{i,r} − Σ_{i, r∈R(x_i)} μ^t_{i,r} φ_{i,r} ).
– For i = 1 … n, r ∈ R(x_i), calculate updates θ^{t+1}_{i,r} = θ^t_{i,r} + ηC ( l_{i,r} + ⟨w^t, φ_{i,r}⟩ ).
Output: Parameter values w^{T+1}.

For convenience, we restrict the grammar to be in Chomsky normal form, where all rules in the grammar are of the form ⟨A → B C⟩ or ⟨A → a⟩, where A, B, C are non-terminal symbols and a is some terminal symbol. We take a part r to be a CF-rule-tuple ⟨A → B C, s, m, e⟩. Under this representation A spans words s … e inclusive in x; B spans words s …
m; and C spans words (m + 1) … e. The function R(x, y) maps a derivation y to the set of parts which it includes. In WCFGs φ(x, r) can be any function mapping a rule production and its position in the sentence x to a feature vector. One example of a loss function would be to define l(x, y, r) to be 1 only if r's non-terminal A is not seen spanning words s … e in the derivation y. This would lead to L(x, y, ŷ) tracking the number of "constituent errors" in ŷ, where a constituent is a (non-terminal, start-point, end-point) tuple such as (A, s, e).

3 EG updates for structured objects

We now consider an algorithm for computing ᾱ* = argmax_{ᾱ∈Δ} F(ᾱ), where F(ᾱ) is the dual form of the maximum-margin problem, as in Figure 1. In particular, we are interested in the optimal values of the primal-form parameters, which are related to ᾱ* by w* = C Σ_{i,y} α*_{i,y} Φ_{i,y}. A key problem is that in many of our examples the number of dual variables α_{i,y} precludes dealing with these variables directly. For example, in the MRF and WCFG cases the set G(x) is exponential in size, and the number of dual variables α_{i,y} is therefore also exponential. We describe an algorithm that is efficient for certain examples of structured objects such as MRFs or WCFGs. Instead of representing the α_{i,y} variables explicitly, we will instead manipulate a vector θ̄ of variables θ_{i,r} for i = 1 … n, r ∈ R(x_i). Thus we have one of these "mini-dual" variables for each part seen in the training data. Each of the variables θ_{i,r} can take any value in the reals. We now define the dual variables α_{i,y} as a function of the vector θ̄, which takes the form of a Gibbs distribution:

α_{i,y}(θ̄) = exp( Σ_{r∈R(x_i,y)} θ_{i,r} ) / Σ_{y′} exp( Σ_{r∈R(x_i,y′)} θ_{i,r} ).

Figure 2 shows an algorithm for maximizing F(ᾱ). The algorithm defines a sequence of values θ̄¹, θ̄², …. In the next section we prove that the sequence F(ᾱ(θ̄¹)), F(ᾱ(θ̄²)), … converges to max_ᾱ F(ᾱ).
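The Gibbs reparameterization can be illustrated on a toy problem whose labels decompose into parts. This is a hedged sketch of our own: the tiny part structure, the θ values, and all names are invented for the example, and we enumerate the (here, only four) labels explicitly rather than using dynamic programming:

```python
# alpha_{i,y}(theta) = exp(sum_{r in R(x_i,y)} theta_{i,r}) / Z_i,
# for one example whose labels are pairs and whose parts are
# (position, label) tuples. Everything concrete here is illustrative.
import math

def gibbs_alpha(labels, parts_of, theta):
    """Distribution over labels induced by per-part parameters theta.
    parts_of(y) plays the role of R(x_i, y)."""
    scores = {y: math.exp(sum(theta[r] for r in parts_of(y))) for y in labels}
    Z = sum(scores.values())                    # normalization term Z_i
    return {y: s / Z for y, s in scores.items()}

# Tiny "chain" with two binary positions.
labels = [(a, b) for a in (0, 1) for b in (0, 1)]
parts_of = lambda y: [(0, y[0]), (1, y[1])]
theta = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 0.0}
alpha = gibbs_alpha(labels, parts_of, theta)
print(round(sum(alpha.values()), 6))            # prints 1.0: a proper distribution
```

Raising θ for part (0, 1) shifts probability mass onto every label containing that part, which is exactly how the EG updates of Figure 2 steer the implicit α values.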
The algorithm can be implemented efficiently, independently of the dimensionality of ᾱ, provided that there is an efficient algorithm for computing the marginal terms μ_{i,r} = Σ_y α_{i,y}(θ̄) I(x_i, y, r) for all i = 1 … n, r ∈ R(x_i), and all θ̄. A key property is that the primal parameters w = C Σ_{i,y} α_{i,y}(θ̄) Φ_{i,y} = C Σ_i Φ(x_i, y_i) − C Σ_{i,y} α_{i,y}(θ̄) Φ(x_i, y) can be expressed in terms of the marginal terms, because

Σ_{i,y} α_{i,y}(θ̄) Φ(x_i, y) = Σ_{i,y} α_{i,y}(θ̄) Σ_{r∈R(x_i,y)} φ(x_i, r) = Σ_{i, r∈R(x_i)} μ_{i,r} φ(x_i, r),

and hence w = C Σ_i Φ(x_i, y_i) − C Σ_{i, r∈R(x_i)} μ_{i,r} φ(x_i, r). The μ_{i,r} values can be calculated for MRFs and WCFGs in many cases, using standard algorithms. For example, in the WCFG case the inside-outside algorithm can be used, provided that each part r is a context-free rule production, as described in Example 2 above. In the MRF case, the μ_{i,r} values can be calculated efficiently if the tree-width of the underlying graph is small.

Note that the main storage requirement of the algorithm in Figure 2 concerns the vector θ̄. This is a vector which has as many components as there are parts in the training set. In practice, the number of parts in the training data can become extremely large. Fortunately, an alternative, "primal form" algorithm is possible. Rather than explicitly storing the θ_{i,r} variables, we can store a vector z^t of the same dimensionality as w^t; the θ_{i,r} values can be computed from z^t. More explicitly, the main body of the algorithm in Figure 2 can be replaced with the following:

• Set z¹ to some initial value.
• For t = 1 … T + 1:
– Set w^t = 0.
– For i = 1 … n: compute μ^t_{i,r} for r ∈ R(x_i), using θ^t_{i,r} = ηC( (t − 1) l_{i,r} + ⟨z^t, φ_{i,r}⟩ ); set w^t = w^t + C ( Σ_{r∈R(x_i,y_i)} φ_{i,r} − Σ_{r∈R(x_i)} μ^t_{i,r} φ_{i,r} ).
– Set z^{t+1} = z^t + w^t.

It can be verified that if ∀i, r, θ¹_{i,r} = ηC ⟨φ_{i,r}, z¹⟩, then this alternative algorithm defines the same sequence of (implicit) θ^t_{i,r} values, and (explicit) w^t values, as the original algorithm.
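The identity Σ_y α_{i,y} Φ(x_i, y) = Σ_{r∈R(x_i)} μ_{i,r} φ(x_i, r) behind this expression for w can be checked numerically on a toy single-example problem; the part structure, features and θ values below are illustrative inventions of ours, with the sum over labels done by brute-force enumeration:

```python
# Numeric check of: sum_y alpha_y Phi(x, y) = sum_r mu_r phi(x, r),
# for one example with 1-d part features. All concrete values are made up.
import math

labels = [(0, 0), (0, 1), (1, 0), (1, 1)]
parts_of = lambda y: [(0, y[0]), (1, y[1])]          # R(x, y)
phi = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0}   # phi(x, r)
theta = {(0, 0): 0.3, (0, 1): -0.2, (1, 0): 0.0, (1, 1): 0.5}

# Gibbs distribution alpha_y over the labels.
raw = {y: math.exp(sum(theta[r] for r in parts_of(y))) for y in labels}
Z = sum(raw.values())
alpha = {y: v / Z for y, v in raw.items()}

# Left side: sum_y alpha_y Phi(x, y), with Phi(x, y) = sum of part features.
lhs = sum(alpha[y] * sum(phi[r] for r in parts_of(y)) for y in labels)

# Right side: marginals mu_r = sum_y alpha_y I(x, y, r), then sum_r mu_r phi_r.
mu = {r: sum(alpha[y] for y in labels if r in parts_of(y)) for r in phi}
rhs = sum(mu[r] * phi[r] for r in phi)

print(abs(lhs - rhs) < 1e-9)   # prints True: both sides agree
```

In practice the point of the identity is that μ can come from dynamic programming (forward-backward, inside-outside) instead of this exponential enumeration.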
In the next section we show that the original algorithm converges for any choice of initial values θ̄¹, so this restriction on θ¹_{i,r} should not be significant.

4 Exponentiated gradient (EG) updates for quadratic programs

We now prove convergence properties of the algorithm in Figure 2. We show that it is an instantiation of a general algorithm for optimizing quadratic programs (QPs), which relies on Exponentiated Gradient (EG) updates [7, 8]. In the general problem we assume a positive semi-definite matrix A ∈ R^{m×m} and a vector b ∈ R^m, specifying a loss function Q(ᾱ) = b′ᾱ + (1/2) ᾱ′Aᾱ. Here ᾱ is an m-dimensional vector of reals. We assume that ᾱ is formed by the concatenation of n vectors ᾱ_i ∈ R^{m_i} for i = 1 … n, where Σ_i m_i = m. We assume that each ᾱ_i lies in a simplex of dimension m_i, so that the feasible set is

Δ = { ᾱ : ᾱ ∈ R^m; for i = 1 … n, Σ_{j=1}^{m_i} α_{i,j} = 1; for all i, j, α_{i,j} ≥ 0 }.   (1)

Our aim is to find argmin_{ᾱ∈Δ} Q(ᾱ). Figure 3 gives an algorithm, the "EG-QP" algorithm, for finding the minimum. In the next section we give a proof of its convergence properties. The EG-QP algorithm can be used to find the minimum of −F(ᾱ), and hence the maximum of the dual objective F(ᾱ). We justify the algorithm in Figure 2 by showing that it is equivalent to minimization of −F(ᾱ) using the EG-QP algorithm. We give the following theorem:

Theorem 1 Define F(ᾱ) = C Σ_{i,y} α_{i,y} L_{i,y} − (1/2) C² Σ_{i,y} Σ_{j,z} α_{i,y} α_{j,z} ⟨Φ_{i,y}, Φ_{j,z}⟩, and assume as in Section 2 that L_{i,y} = Σ_{r∈R(x_i,y)} l(x_i, y_i, r) and Φ(x_i, y) = Σ_{r∈R(x_i,y)} φ(x_i, r). Consider the sequence ᾱ(θ̄¹), …, ᾱ(θ̄^{T+1}) defined by the algorithm in Figure 2, and the sequence ᾱ¹, …, ᾱ^{T+1} defined by the EG-QP algorithm when applied to Q(ᾱ) = −F(ᾱ). Then under the assumption that ᾱ(θ̄¹) = ᾱ¹, it follows that ᾱ(θ̄^t) = ᾱ^t for t = 1 … (T + 1).

Figure 3: The EG-QP algorithm for quadratic programs.
Inputs: A positive semi-definite matrix A and a vector b, specifying a loss function Q(ᾱ) = b·ᾱ + (1/2) ᾱ′Aᾱ. Each vector ᾱ is in Δ, where Δ is defined in Eq. 1.
Algorithm:
• Initialize ᾱ¹ to a point in the interior of Δ. Choose a learning rate η > 0.
• For t = 1 … T:
– Calculate s̄^t = ∇Q(ᾱ^t) = b + Aᾱ^t.
– Calculate ᾱ^{t+1} as: ∀i, j, α^{t+1}_{i,j} = α^t_{i,j} exp{−η s^t_{i,j}} / Σ_k α^t_{i,k} exp{−η s^t_{i,k}}.
Output: Return ᾱ^{T+1}.

Proof. We can write F(ᾱ) = C Σ_{i,y} α_{i,y} L_{i,y} − (1/2) C² ‖ Σ_i Φ(x_i, y_i) − Σ_{i,y} α_{i,y} Φ(x_i, y) ‖². It follows that

∂F(ᾱ^t)/∂α_{i,y} = C L_{i,y} + C ⟨Φ(x_i, y), w^t⟩ = C Σ_{r∈R(x_i,y)} ( l_{i,r} + ⟨φ_{i,r}, w^t⟩ ),

where as before w^t = C ( Σ_i Φ(x_i, y_i) − Σ_{i,y} α^t_{i,y} Φ(x_i, y) ). The rest of the proof proceeds by induction; due to space constraints we give a sketch of the proof here. The idea is to show that ᾱ(θ̄^{t+1}) = ᾱ^{t+1} under the inductive hypothesis that ᾱ(θ̄^t) = ᾱ^t. This follows immediately from the definitions of the mappings ᾱ(θ̄^t) → ᾱ(θ̄^{t+1}) and ᾱ^t → ᾱ^{t+1} in the two algorithms, together with the identities

s^t_{i,y} = −∂F(ᾱ^t)/∂α_{i,y} = −C Σ_{r∈R(x_i,y)} ( l_{i,r} + ⟨φ_{i,r}, w^t⟩ )  and  θ^{t+1}_{i,r} − θ^t_{i,r} = ηC ( l_{i,r} + ⟨φ_{i,r}, w^t⟩ ).

4.1 Convergence of the exponentiated gradient QP algorithm

The following theorem shows how the optimization algorithm converges to an optimal solution. The theorem compares the value of the objective function for the algorithm's vector ᾱ^t to the value for a comparison vector u ∈ Δ. (For example, consider u as the solution of the QP.) The convergence result is in terms of several properties of the algorithm and the comparison vector u. The distance between u and ᾱ¹ is measured using the Kullback-Leibler (KL) divergence. Recall that the KL divergence between two probability vectors ū, v̄ is defined as D(ū, v̄) = Σ_i u_i log(u_i / v_i). For sequences of probability vectors ū ∈ Δ with ū = (ū_1, …, ū_n) and ū_i = (u_{i,1}, …, u_{i,m_i}), we can define a divergence as the sum of KL divergences: for ū, v̄ ∈ Δ, D̄(ū, v̄) = Σ_{i=1}^n D(ū_i, v̄_i).
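A minimal sketch of the EG-QP updates of Figure 3, run on a made-up toy QP with a single simplex block (n = 1) of two variables; A, b, η and the iteration count are illustrative choices of ours, not values from the paper:

```python
# EG-QP on Q(alpha) = b . alpha + (1/2) alpha' A alpha over the 2-simplex.
# A is positive semi-definite; all concrete numbers are invented.
import math

A = [[2.0, 0.0], [0.0, 1.0]]
b = [0.5, -0.5]

def Q(alpha):
    quad = sum(alpha[i] * A[i][j] * alpha[j]
               for i in range(2) for j in range(2))
    return sum(b[i] * alpha[i] for i in range(2)) + 0.5 * quad

def eg_qp_step(alpha, eta):
    # s^t = grad Q(alpha^t) = b + A alpha^t
    s = [b[i] + sum(A[i][j] * alpha[j] for j in range(2)) for i in range(2)]
    # multiplicative update, then renormalize within the block
    w = [alpha[i] * math.exp(-eta * s[i]) for i in range(2)]
    Z = sum(w)
    return [wi / Z for wi in w]

alpha = [0.5, 0.5]                 # interior starting point, as required
for _ in range(200):
    alpha = eg_qp_step(alpha, eta=0.1)

print(Q([0.5, 0.5]) > Q(alpha))    # prints True: the objective decreased
```

Note the update keeps every α_{i,j} strictly positive and each block on its simplex automatically, which is why no projection step is needed.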
Two other key parameters are λ, the largest eigenvalue of the positive semi-definite symmetric matrix A, and

B = max_{ᾱ∈Δ} ( max_i (∇Q(ᾱ))_i − min_i (∇Q(ᾱ))_i ) ≤ 2 ( n max_{ij} |A_{ij}| + max_i |b_i| ).

Theorem 2 For all ū ∈ Δ,

(1/T) Σ_{t=1}^T Q(ᾱ^t) ≤ Q(ū) + D̄(ū, ᾱ¹)/(ηT) + [ (e^{ηB} − 1 − ηB) / ( η²B² (1 − η(B+λ)e^{ηB}) ) ] · ( Q(ᾱ¹) − Q(ᾱ^{T+1}) ) / T.

Choosing η = 0.4/(B + λ) ensures that

Q(ᾱ^{T+1}) ≤ (1/T) Σ_{t=1}^T Q(ᾱ^t) ≤ Q(ū) + 2.5 (B + λ) D̄(ū, ᾱ¹)/T + 1.5 ( Q(ᾱ¹) − Q(ᾱ^{T+1}) ) / T.

The first lemma we require is due to Kivinen and Warmuth [8].

Lemma 1 For any ū ∈ Δ,  η Q(ᾱ^t) − η Q(ū) ≤ D̄(ū, ᾱ^t) − D̄(ū, ᾱ^{t+1}) + D̄(ᾱ^t, ᾱ^{t+1}).

We focus on the third term. Define ∇_{(i)}Q(ᾱ) as the segment of the gradient vector corresponding to the component ᾱ_i of ᾱ, and define the random variable X_{i,t}, satisfying Pr[ X_{i,t} = −(∇_{(i)}Q(ᾱ^t))_j ] = α_{i,j}.

Lemma 2  D̄(ᾱ^t, ᾱ^{t+1}) = Σ_{i=1}^n log E[ e^{η(X_{i,t} − E X_{i,t})} ] ≤ [ (e^{ηB} − 1 − ηB) / B² ] Σ_{i=1}^n var(X_{i,t}).

Proof.

D̄(ᾱ^t, ᾱ^{t+1}) = Σ_{i=1}^n Σ_j α^t_{i,j} log( α^t_{i,j} / α^{t+1}_{i,j} )
= Σ_{i=1}^n Σ_j α^t_{i,j} ( log( Σ_k α^t_{i,k} exp(−η∇_{i,k}) ) + η∇_{i,j} )
= Σ_{i=1}^n ( log( Σ_k α^t_{i,k} exp(−η∇_{i,k}) ) + η ᾱ^t_i · ∇_i )
= Σ_{i=1}^n log E[ e^{η(X_{i,t} − E X_{i,t})} ]
≤ [ (e^{ηB} − 1 − ηB) / B² ] Σ_{i=1}^n var(X_{i,t}).

This last inequality is at the heart of the proof of Bernstein's inequality; e.g., see [10]. The second part of the proof of the theorem involves bounding this variance in terms of the loss. The following lemma relies on the fact that this variance is, to first order, the decrease in the quadratic loss, and that the second-order term in the Taylor series expansion of the loss is small compared to the variance, provided the steps are not too large. The lemma and its proof require several definitions. For any d, let σ : R^d → (0, 1)^d be the softmax function, σ(θ̄)_i = exp(θ_i) / Σ_{j=1}^d exp(θ_j), for θ̄ ∈ R^d. We shall work in the exponential parameter space: let θ̄^t be the exponential parameters at step t, so that the updates are θ̄^{t+1} = θ̄^t − η∇Q(ᾱ^t), and the QP variables satisfy ᾱ^t_i = σ(θ̄^t_i).
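For concreteness, the safe learning rate η = 0.4/(B + λ) of Theorem 2 can be computed from the stated upper bound on B rather than the exact maximum over Δ; the toy A and b below are our own illustrative values:

```python
# eta = 0.4 / (B + lambda), using B <= 2(n * max_ij |A_ij| + max_i |b_i|).
# Toy inputs: one simplex block, diagonal A (so its largest eigenvalue
# is just the largest diagonal entry). All numbers are illustrative.
A = [[2.0, 0.0], [0.0, 1.0]]
b = [0.5, -0.5]
n = 1                                   # number of simplex blocks

max_A = max(abs(v) for row in A for v in row)
max_b = max(abs(v) for v in b)
B_bound = 2 * (n * max_A + max_b)       # = 5.0
lam = 2.0                               # largest eigenvalue of this diagonal A
eta = 0.4 / (B_bound + lam)
print(eta)                              # prints 0.4 / 7 ~ 0.0571
```

Using the bound instead of the exact B only makes η smaller, so the guarantee of Theorem 2 (and the monotone decrease condition η < 0.567/(B + λ) below) still holds.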
Define the random variables X_{i,t,θ̄}, satisfying Pr[ X_{i,t,θ̄} = −(∇_{(i)}Q(ᾱ^t))_j ] = σ(θ̄_i)_j. This takes the same values as X_{i,t}, but its distribution is given by a different exponential parameter (θ̄_i instead of θ̄^t_i). Define [θ̄^t, θ̄^{t+1}] = { a θ̄^t + (1 − a) θ̄^{t+1} : a ∈ [0, 1] }.

Lemma 3 For some θ̄ ∈ [θ̄^t, θ̄^{t+1}],

η Σ_{i=1}^n var(X_{i,t}) − η²(B + λ) Σ_{i=1}^n var(X_{i,t,θ̄}) ≤ Q(ᾱ^t) − Q(ᾱ^{t+1}),

but for all θ̄ ∈ [θ̄^t, θ̄^{t+1}], var(X_{i,t,θ̄}) ≤ e^{ηB} var(X_{i,t}). Hence

Σ_{i=1}^n var(X_{i,t}) ≤ [ 1 / ( η (1 − η(B + λ)e^{ηB}) ) ] ( Q(ᾱ^t) − Q(ᾱ^{t+1}) ).

Thus, for η < 0.567/(B + λ), Q(ᾱ^t) is non-increasing in t. The proof is in [1]. Theorem 2 follows from an easy calculation.

5 Experiments

We compared an online¹ version of the Exponentiated Gradient algorithm with the factored Sequential Minimal Optimization (SMO) algorithm in [12] on a sequence segmentation task. We selected the first 1000 sentences (12K words) from the CoNLL-2003 named entity recognition challenge data set for our experiment. The goal is to extract (multiword) entity names of people, organizations, locations and miscellaneous entities. Each word is labelled by 9 possible tags (beginning of one of the four entity types, continuation of one of the types, or not-an-entity). We trained a first-order Markov chain over the tags, where our cliques are just the nodes for the tag of each word and edges between tags of consecutive words.

¹ In the online algorithm we calculate marginal terms, and updates to the w^t parameters, one training example at a time. As yet we do not have convergence bounds for this method, but we have found that it works well in practice.

Figure 4: Number of iterations over training set vs. dual objective for the SMO and EG algorithms. (a) Comparison with different η values; (b) Comparison with η = 1 and different initial θ values. [Plots omitted; the legends show SMO, EG (η = .5), EG (η = 1) in (a), and SMO, EG (θ = −2.7), EG (θ = −3), EG (θ = −4.5) in (b).]
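The clique structure used in this experiment (a node for each word's tag, plus an edge between consecutive tags) corresponds directly to the parts (c, a) of Example 1. A small illustrative sketch, with our own tag names and helper function:

```python
# Hypothetical sketch of R(x, y) for a first-order tagging chain:
# one part per node clique (position, tag) and one per edge clique
# ((i, i+1), tag pair). Tag names follow the CoNLL-style scheme.

def chain_parts(tags):
    node_parts = [(("node", i), tags[i]) for i in range(len(tags))]
    edge_parts = [(("edge", i, i + 1), (tags[i], tags[i + 1]))
                  for i in range(len(tags) - 1)]
    return node_parts + edge_parts

y = ["B-PER", "I-PER", "O"]
print(len(chain_parts(y)))   # prints 5: 3 node parts + 2 edge parts
```

Each part gets its own θ_{i,r} in the EG algorithm, and the marginals μ_{i,r} for this chain structure are computable with forward-backward.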
The feature vector for each node assignment consists of the word itself, its capitalization and morphological features, etc., as well as the previous and consecutive words and their features. Likewise, the feature vector for each edge assignment consists of the two words and their features, as well as surrounding words. Figure 4 shows the growth of the dual objective function after each pass through the data for SMO and EG, for several settings of the learning rate η and the initial setting of the θ parameters. Note that SMO starts up very quickly but slows down in a suboptimal region, while EG lags at the start but overtakes SMO and achieves a more than 10% larger value of the objective. These preliminary results suggest that a hybrid algorithm could get the benefits of both, by starting out with several SMO updates and then switching to EG. The key issue is to switch from the marginal μ representation SMO maintains to the Gibbs θ representation that EG uses. We can find θ that produces μ by first computing conditional "probabilities" that correspond to our marginals (e.g. dividing edge marginals by node marginals in this case) and then letting the θ's be the logs of the conditional probabilities.

References
[1] Long version of this paper. Available at http://www.ai.mit.edu/people/mcollins.
[2] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In ICML, 2003.
[3] M. Collins. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In H. Bunt, J. Carroll, and G. Satta, editors, New Developments in Parsing Technology. Kluwer, 2004.
[4] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265–292, 2001.
[5] N. Cristianini, C. Campbell, and J. Shawe-Taylor. Multiplicative updatings for support-vector learning. Technical report, NeuroCOLT2, 1998.
[6] N. Cristianini and J.
Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
[7] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
[8] J. Kivinen and M. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45(3):301–329, 2001.
[9] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML-01, 2001.
[10] D. Pollard. Convergence of Stochastic Processes. Springer-Verlag, 1984.
[11] F. Sha, L. Saul, and D. Lee. Multiplicative updates for large margin classifiers. In COLT, 2003.
[12] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[13] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004 (to appear).